# HG changeset patch
# User Paul Boddie
# Date 1390995775 -3600
# Node ID ed584d9fa250b02b831dfb7640ecc9064d7bb6d8
# Parent 24482c1426b77af1502e32ad3df79c5a49024c99
Moved URL access in the acquisition of resources to a separate function.

diff -r 24482c1426b7 -r ed584d9fa250 MoinRemoteSupport.py
--- a/MoinRemoteSupport.py	Mon Jan 27 21:59:37 2014 +0100
+++ b/MoinRemoteSupport.py	Wed Jan 29 12:42:55 2014 +0100
@@ -2,7 +2,7 @@
 """
     MoinMoin - MoinRemoteSupport library
 
-    @copyright: 2011, 2012, 2013 by Paul Boddie
+    @copyright: 2011, 2012, 2013, 2014 by Paul Boddie
     @license: GNU GPL (v2 or later), see COPYING.txt for details.
 """
 
@@ -11,7 +11,7 @@
 from MoinMoin import caching
 import urllib2, time
 
-def getCachedResource(request, url, arena, scope, max_cache_age):
+def getCachedResource(request, url, arena, scope, max_cache_age, reader=None):
 
     """
     Using the given 'request', return the resource data for the given 'url',
@@ -19,6 +19,10 @@
     has already been downloaded. The 'max_cache_age' indicates the length in
     seconds that a cache entry remains valid.
 
+    If the optional 'reader' object is given, it will be used to access the
+    'url' and write the downloaded data to a cache entry. Otherwise, a standard
+    URL reader will be used.
+
     If the resource cannot be downloaded and cached, None is returned.
     Otherwise, the form of the data is as follows:
 
@@ -29,6 +33,8 @@
     content-body
     """
 
+    reader = reader or urlreader
+
     # See if the URL is cached.
 
     cache_key = cache.key(request, content=url)
@@ -50,18 +56,9 @@
     cache_entry.open(mode="w")
 
     try:
-        f = urllib2.urlopen(url)
-        try:
-            cache_entry.write(url + "\n")
-            cache_entry.write((f.headers.get("content-type") or "") + "\n")
-            for key, value in f.headers.items():
-                if key.lower() != "content-type":
-                    cache_entry.write("%s: %s\n" % (key, value))
-            cache_entry.write("\n")
-            cache_entry.write(f.read())
-        finally:
-            cache_entry.close()
-            f.close()
+        # Read from the source and write to the cache.
+
+        reader(url, cache_entry)
 
         # In case of an exception, return None.
 
@@ -78,6 +75,23 @@
     finally:
         cache_entry.close()
 
+def urlreader(url, cache_entry):
+
+    "Retrieve data from the given 'url', writing it to the 'cache_entry'."
+
+    f = urllib2.urlopen(url)
+    try:
+        cache_entry.write(url + "\n")
+        cache_entry.write((f.headers.get("content-type") or "") + "\n")
+        for key, value in f.headers.items():
+            if key.lower() != "content-type":
+                cache_entry.write("%s: %s\n" % (key, value))
+        cache_entry.write("\n")
+        cache_entry.write(f.read())
+    finally:
+        cache_entry.close()
+        f.close()
+
 def getCachedResourceMetadata(f):
 
     "Return a metadata dictionary for the given resource file-like object 'f'."
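The patch turns the inline urllib2 download into a pluggable reader callback: any callable accepting (url, cache_entry) that writes the URL line, the header block, a blank separator, and the body. The sketch below illustrates that callback protocol in isolation, without MoinMoin or network access; the names get_cached_resource, default_reader, and mirror_reader are hypothetical simplifications of the patched functions, and StringIO stands in for the real cache entry object.

```python
from io import StringIO

def get_cached_resource(url, cache_entry, reader=None):
    # Fall back to a default reader when none is supplied, mirroring
    # the "reader = reader or urlreader" line added by the patch.
    reader = reader or default_reader
    reader(url, cache_entry)
    return cache_entry.getvalue()

def default_reader(url, cache_entry):
    # Stand-in for urlreader: write the URL, a content-type line,
    # a blank separator and the body, without touching the network.
    cache_entry.write(url + "\n")
    cache_entry.write("text/plain\n")
    cache_entry.write("\n")
    cache_entry.write("default body")

def mirror_reader(url, cache_entry):
    # A caller-supplied reader of the kind the new 'reader' hook
    # permits, e.g. one fetching from a local mirror instead.
    cache_entry.write(url + "\n")
    cache_entry.write("text/plain\n")
    cache_entry.write("\n")
    cache_entry.write("mirrored body")

data = get_cached_resource("http://example.com/feed", StringIO(),
                           reader=mirror_reader)
```

Because the cache entry format is produced entirely by the reader, a custom reader only has to honour the URL/headers/blank-line/body layout for getCachedResourceMetadata to keep working unchanged.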