
Breaking Through the GAE File Count Limit


These days I have a project that involves some Android development. I wanted to read the documentation on the official site, but for well-known reasons the site is unreachable. So I downloaded a local copy, but since it is a full-site mirror the search function doesn't work, which is a real nuisance for someone like me who relies on search constantly. That gave me the idea of uploading the files to GAE and letting Google host them so that search would work again. It seemed simple enough: write app.yaml and upload. But the upload failed several times. The help manual is quite clear about why:

Limits
Request size: 10 megabytes
Response size: 10 megabytes
Request duration: 30 seconds
Simultaneous dynamic requests: 30 *
Maximum number of application files: 1,000
Maximum number of static files: 1,000
Maximum size of each application file: 10 megabytes
Maximum size of each static file: 10 megabytes
Maximum total size of all application and static files: 150 megabytes

A look at the folder to be uploaded: more than 5,000 files. Stingy Google simply couldn't accommodate me. Then I remembered Python's flexibility and dug through the manual until I found the zipfile module. There's a way in! Upload the content as zip archives and unpack them on the fly as they are browsed. If it comes down to trading time for space, let Google's CPUs do the sweating. So I wrote a GAE application, zipload.py:

#!/bin/env python
##
##  This is a project for Google App Engine
##      that supports creating a website from ZIP packages!
##
##  By Litrin J. 2010/11
##  Website: www.litrin.net
##  Example: android-sdk.appspot.com
##

import wsgiref.handlers
from google.appengine.ext import webapp
from google.appengine.api import memcache
from zipfile import ZipFile
import os
import logging
import mimetypes
import re

class MainHandler(webapp.RequestHandler):
    '''
    This is a project for Google App Engine that supports creating a website from ZIP packages.
    '''
    URL = ''
    CACHEDTIME = 60*60*24*30
    #Memcache lifetime in seconds (30 days)

    def get(self):

        self.URL=self.request.path

        if ( self.URL[-1:] == '/'):
        #Use the default file for directory paths
            self.URL+='index.html'

        sRealFileName = os.getcwd() + self.URL

        if (os.path.exists(sRealFileName)):
        #If the file was not zipped, read it directly from disk
            fNoZipedFile = open(sRealFileName, 'rb')
            Entry = fNoZipedFile.read()
            fNoZipedFile.close()

        else:
            Entry = self.loadFile()
            if (Entry is None):
            #Nothing found; error(404) has already been set by loadZipFile
                return

        self.buildMimeType()
        #Set the MIME type in the response header

        if (self.response.headers['Content-Type'] == 'text/html'):

            Entry = self.regex(Entry)

        self.response.out.write(Entry)
        #Response building finished!

    def loadZipFile(self):
    #Load the file from zip files. This is the core function!
        lFilename = self.URL.split('/')

        iPathLevel = 1
        #Loop counter: how many path components belong to the zip archive
        bLoaded = False
        #Success flag

        while(iPathLevel <= len(lFilename)):
        #Derive the zip file name and the member name from the URL; supports multiple directory levels
            sFilePath = os.getcwd()
            sFileName = ''
            iElementCount = 1

            for sElement in lFilename:
            #Components up to iPathLevel form the zip path; components from iPathLevel on form the member name (the boundary component appears in both)
                if ( iElementCount <= iPathLevel):
                    sFilePath += sElement + '/'

                if ( iElementCount >= iPathLevel ):
                    sFileName += sElement + '/'

                iElementCount += 1

            sFileName = sFileName[0:-1]
            sZipFilename = sFilePath[0:-1] + '.zip'

            if (os.path.exists(sZipFilename)):
            #Found a matching zip file
                ZipFileHandle = ZipFile(sZipFilename)
                try:
                    Entry = ZipFileHandle.read(sFileName)
                    bLoaded = True
                except KeyError:
                #The archive exists but does not contain this member; try the next path level
                    Entry = None
                ZipFileHandle.close()

                if (Entry is not None):
                    logging.info(sFileName + " in " + sZipFilename + " Loaded!")
                    return Entry

            iPathLevel +=1

        if (bLoaded == False):
        #The file could not be found in any zip package
            logging.error('Not found: ' + self.request.path + ' (last candidate: ' + sZipFilename + ')')
            self.error(404)

            return None

    def loadFile(self):
    #Load from memcache if cached; otherwise read from the zip and cache it
        Entry = self.loadFromMemcache()

        if Entry is None:
        #If not cached, cache it
            Entry = self.loadZipFile()
            if(Entry is not None):
                self.writeToMemcache(Entry)

        return Entry

    def loadFromMemcache(self):
        memcacheKey = self.URL
        #The cache key is the URL

        return memcache.get(memcacheKey)

    def writeToMemcache(self, data):
        memcacheKey = self.URL
        #Use the URL as the cache key
        memcache.add(memcacheKey, data, self.CACHEDTIME)

        logging.info(memcacheKey + ' cached!')
        return True

    def buildMimeType(self):
    #Build the MIME type for the HTTP Content-Type header
        sFilename = os.path.basename(self.URL)
        lFileName = sFilename.split(".")
        sExFilename = lFileName.pop()
        #Get the file extension

        mimetypes.init()
        sMimeType = mimetypes.types_map.get('.' + sExFilename, 'text/html')
        #Extensions that can't be identified fall back to HTML

        self.response.headers['Content-Type'] = sMimeType
        #Send the Content-Type header

    def regex(self, Entry):
    #Compress the HTML a little before sending it out

        lRegGroup = (
            ('\n+', '\n'),
            #Collapse runs of newlines
            ('\t+', '\t'),
            #Collapse runs of tabs
        )

        for sRegCell in lRegGroup:
            (sSource, sTarget) = sRegCell
            rInfo = re.compile(sSource)
            Entry = rInfo.sub(sTarget, Entry)

        return Entry

def main():
    application = webapp.WSGIApplication([('.*', MainHandler)], debug=True)
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == '__main__':
    main()

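To make loadZipFile's lookup order easier to follow, here is a small standalone sketch (just an illustration, not part of the handler above; the '/app' root and the example path are made-up values) that reproduces which zip archives and member names get probed for a request:

def candidates(url, root):
    #Reproduce loadZipFile's search order: at each depth, the path up to
    #that depth (plus ".zip") names the archive, and the remainder,
    #including the boundary component, names the member inside it.
    parts = url.split('/')
    for level in range(1, len(parts) + 1):
        zip_name = root + '/'.join(parts[:level]) + '.zip'
        member = '/'.join(parts[level - 1:])
        yield zip_name, member

#'/app' stands in for os.getcwd() on the server
for zip_name, member in candidates('/docs/reference/View.html', '/app'):
    print(zip_name + ' -> ' + member)

For /docs/reference/View.html it probes /app.zip, then /app/docs.zip (member docs/reference/View.html), then /app/docs/reference.zip (member reference/View.html), and finally /app/docs/reference/View.html.zip; this is why each archive has to keep the subdirectory name as a prefix in its member paths.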
Fairly simple, so I didn't write many comments. Considering that Google gets hammered enough as it is, and it wouldn't be good if the app fell over either, I also enabled memcache. Then rewrite app.yaml:

application: android-sdk
version: 1
runtime: python
api_version: 1

handlers:

- url: .*
  script: zipload.py

#- url: /
# static_files: index.html
# upload: index.html

Everything is ready. Compress each subdirectory of the manual into its own zip archive (it doesn't matter if an archive gets close to the size limit), then put all the files from the manual's root directory, the zip archives, zipload.py and app.yaml into the same directory and upload to GAE; a minimal packaging sketch follows below.
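The sketch is only an illustration (not the exact script I used) of the layout loadZipFile expects: run from the manual's root directory, it turns every top-level subdirectory into its own zip, keeping the directory name as a prefix in the member paths (for example, docs.zip contains docs/index.html).

import os
import zipfile

#Turn every top-level subdirectory into <name>.zip; archive.write() keeps
#the relative path, so members come out as e.g. "docs/reference/View.html".
for name in os.listdir('.'):
    if not os.path.isdir(name):
        continue
    archive = zipfile.ZipFile(name + '.zip', 'w', zipfile.ZIP_DEFLATED)
    for dirpath, dirnames, filenames in os.walk(name):
        for filename in filenames:
            archive.write(os.path.join(dirpath, filename))
    archive.close()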

Incidentally, because GAE also limits the size of individual files, this method alone still couldn't handle the whole Android manual; I later sorted that out with a multi-level directory structure, which works on exactly the same principle, so I won't repeat it here. My Android manual site is shared here: http://android-sdk.appspot.com/index.html

PS:

  1. The Idle tool on Linux is terrible, even worse than the already-bad Windows version. Fortunately SPE works very well.
  2. The GAE SDK for Linux has no desktop tool, only the command line; on behalf of Linux users, I protest!

Later versions of this project have been open-sourced on Google Code; please visit:

http://code.google.com/p/zipsite/

to get the latest version.

