
GS
==


boto.gs.acl
===========

class boto.gs.acl.ACL(parent=None)

   acl

   add_email_grant(permission, email_address)

   add_group_email_grant(permission, email_address)

   add_group_grant(permission, group_id)

   add_user_grant(permission, user_id)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

boto.gs.acl.CannedACLStrings = ['private', 'public-read', 'project-private', 'public-read-write', 'authenticated-read', 'bucket-owner-read', 'bucket-owner-full-control']

   A list of Google Cloud Storage predefined (canned) ACL strings.

class boto.gs.acl.Entries(parent=None)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.gs.acl.Entry(scope=None, type=None, id=None, name=None, email_address=None, domain=None, permission=None)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.gs.acl.Scope(parent, type=None, id=None, name=None, email_address=None, domain=None)

   ALLOWED_SCOPE_TYPE_SUB_ELEMS = {'allusers': [], 'userbyemail': ['displayname', 'emailaddress', 'name'], 'userbyid': ['displayname', 'id', 'name'], 'groupbydomain': ['domain'], 'groupbyemail': ['displayname', 'emailaddress', 'name'], 'allauthenticatedusers': [], 'groupbyid': ['displayname', 'id', 'name']}

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

boto.gs.acl.SupportedPermissions = ['READ', 'WRITE', 'FULL_CONTROL']

   A list of supported ACL permissions.


boto.gs.bucket
==============

class boto.gs.bucket.Bucket(connection=None, name=None, key_class=<class 'boto.gs.key.Key'>)

   Represents a Google Cloud Storage bucket.

   add_email_grant(permission, email_address, recursive=False, headers=None)

      Convenience method that provides a quick way to add an email
      grant to a bucket. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUTs the new ACL back to GCS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: (READ, WRITE, FULL_CONTROL).

         * **email_address** (*string*) -- The email address
           associated with the GS account you are granting the
           permission to.

         * **recursive** (*bool*) -- A boolean value that controls
           whether the call applies the grant to all keys within the
           bucket.  The default value is False.  By passing a True
           value, the call will iterate through all keys in the
           bucket and apply the same grant to each key. CAUTION: If
           you have a lot of keys, this could take a long time!
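
A minimal usage sketch (the helper name is hypothetical; `bucket` is assumed to be an already-fetched `boto.gs.bucket.Bucket`, e.g. from `boto.connect_gs().get_bucket(name)`):

```python
def share_bucket_readonly(bucket, email_address, include_keys=False):
    """Grant READ on a bucket, and optionally on every key it holds.

    Hypothetical helper; `bucket` is a boto.gs.bucket.Bucket.
    include_keys=True maps to recursive=True, which iterates every key
    in the bucket -- slow on large buckets, per the caution above.
    """
    bucket.add_email_grant('READ', email_address, recursive=include_keys)
```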

   add_group_email_grant(permission, email_address, recursive=False, headers=None)

      Convenience method that provides a quick way to add an email
      group grant to a bucket. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUTs the new ACL back to GCS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: READ|WRITE|FULL_CONTROL. See
           http://code.google.com/apis/storage/docs/developer-guide.html#authorization
           for more details on permissions.

         * **email_address** (*string*) -- The email address
           associated with the Google Group to which you are granting
           the permission.

         * **recursive** (*bool*) -- A boolean value that controls
           whether the call applies the grant to all keys within the
           bucket.  The default value is False.  By passing a True
           value, the call will iterate through all keys in the
           bucket and apply the same grant to each key. CAUTION: If
           you have a lot of keys, this could take a long time!

   add_user_grant(permission, user_id, recursive=False, headers=None)

      Convenience method that provides a quick way to add a canonical
      user grant to a bucket. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUTs the new ACL back to GCS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: (READ|WRITE|FULL_CONTROL)

         * **user_id** (*string*) -- The canonical user id
           associated with the GS account you are granting the
           permission to.

         * **recursive** (*bool*) -- A boolean value that controls
           whether the call applies the grant to all keys within the
           bucket.  The default value is False.  By passing a True
           value, the call will iterate through all keys in the
           bucket and apply the same grant to each key. CAUTION: If
           you have a lot of keys, this could take a long time!

   cancel_multipart_upload(key_name, upload_id, headers=None)

      Cancel a multipart upload operation.

      To verify that all parts have been removed, so you don't get
      charged for the part storage, you should call the List Parts
      operation and ensure the parts list is empty.

   complete_multipart_upload(key_name, upload_id, xml_body, headers=None)

      Complete a multipart upload operation.

   configure_lifecycle(lifecycle_config, headers=None)

      Configure lifecycle for this bucket.

      Parameters:
         **lifecycle_config** ("boto.gs.lifecycle.LifecycleConfig") --
         The lifecycle configuration you want to configure for this
         bucket.

   configure_versioning(enabled, headers=None)

      Configure versioning for this bucket.

      Parameters:
         * **enabled** (*bool*) -- If set to True, enables
           versioning on this bucket. If set to False, disables
           versioning.

         * **headers** (*dict*) -- Additional headers to send with
           the request.
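
A minimal sketch pairing this call with `get_versioning_status()` (the helper name is hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def enable_versioning(bucket):
    """Turn on object versioning, then read back the resulting status.

    Hypothetical helper; `bucket` is a boto.gs.bucket.Bucket.
    """
    bucket.configure_versioning(True)
    return bucket.get_versioning_status()
```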

   configure_website(main_page_suffix=None, error_key=None, headers=None)

      Configure this bucket to act as a website

      Parameters:
         * **main_page_suffix** (*str*) -- Suffix that is appended
           to a request that is for a "directory" on the website
           endpoint (e.g. if the suffix is index.html and you make a
           request to samplebucket/images/ the data that is returned
           will be for the object with the key name
           images/index.html). The suffix must not be empty and must
           not include a slash character. This parameter is optional
           and the property is disabled if excluded.

         * **error_key** (*str*) -- The object key name to use when
           a 404 error occurs. This parameter is optional and the
           property is disabled if excluded.

         * **headers** (*dict*) -- Additional headers to send with
           the request.
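
A minimal sketch of configuring a bucket as a static site and reading the parsed configuration back (helper name and file names are hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def serve_static_site(bucket, main_page='index.html', error_page='404.html'):
    """Configure a bucket as a website and return the parsed config.

    Hypothetical helper; the file names are placeholders for whatever
    objects you actually upload to the bucket.
    """
    bucket.configure_website(main_page_suffix=main_page, error_key=error_page)
    return bucket.get_website_configuration()
```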

   copy_key(new_key_name, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False, encrypt_key=False, headers=None, query_args=None, src_generation=None)

      Create a new key in the bucket by copying an existing key.

      Parameters:
         * **new_key_name** (*string*) -- The name of the new key

         * **src_bucket_name** (*string*) -- The name of the source
           bucket

         * **src_key_name** (*string*) -- The name of the source key

         * **src_generation** (*int*) -- The generation number of
           the source key to copy. If not specified, the latest
           generation is copied.

         * **metadata** (*dict*) -- Metadata to be associated with
           new key.  If metadata is supplied, it will replace the
           metadata of the source key being copied.  If no metadata is
           supplied, the source key's metadata will be copied to the
           new key.

         * **src_version_id** (*string*) -- Unused in this subclass.

         * **storage_class** (*string*) -- The storage class of the
           new key.  By default, the new key will use the standard
           storage class. Possible values are: STANDARD |
           DURABLE_REDUCED_AVAILABILITY

         * **preserve_acl** (*bool*) -- If True, the ACL from the
           source key will be copied to the destination key.  If
           False, the destination key will have the default ACL.  Note
           that preserving the ACL in the new key object will require
           two additional API calls to GCS, one to retrieve the
           current ACL and one to set that ACL on the new object.  If
           you don't care about the ACL (or if you have a default ACL
           set on the bucket), a value of False will be significantly
           more efficient.

         * **encrypt_key** (*bool*) -- Included for compatibility
           with S3. This argument is ignored.

         * **headers** (*dict*) -- A dictionary of header name/value
           pairs.

         * **query_args** (*string*) -- A string of additional
           querystring arguments to append to the request

      Return type:
         "boto.gs.key.Key"

      Returns:
         An instance of the newly created key object
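
A minimal sketch of a metadata-replacing copy within one bucket (helper name hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def copy_replacing_metadata(bucket, src_key_name, dest_key_name, metadata):
    """Copy a key within one bucket, replacing its metadata on the copy.

    Because `metadata` is supplied, it replaces the source key's
    metadata rather than carrying it over. Hypothetical helper.
    """
    return bucket.copy_key(dest_key_name, bucket.name, src_key_name,
                           metadata=metadata, preserve_acl=False)
```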

   delete(headers=None)

   delete_cors(headers=None)

      Removes all CORS configuration from the bucket.

   delete_key(key_name, headers=None, version_id=None, mfa_token=None, generation=None)

      Deletes a key from the bucket.

      Parameters:
         * **key_name** (*string*) -- The key name to delete

         * **headers** (*dict*) -- A dictionary of header name/value
           pairs.

         * **version_id** (*string*) -- Unused in this subclass.

         * **mfa_token** (*tuple or list of strings*) -- Unused in
           this subclass.

         * **generation** (*int*) -- The generation number of the
           key to delete. If not specified, the latest generation
           number will be deleted.

      Return type:
         "boto.gs.key.Key"

      Returns:
         A key object holding information on what was deleted.

   delete_keys(keys, quiet=False, mfa_token=None, headers=None)

      Deletes a set of keys using S3's Multi-object delete API. If a
      VersionID is specified for that key then that version is
      removed. Returns a MultiDeleteResult Object, which contains
      Deleted and Error elements for each key you ask to delete.

      Parameters:
         * **keys** (*list*) -- A list of either key_names or
           (key_name, versionid) pairs or a list of Key instances.

         * **quiet** (*boolean*) -- In quiet mode the response
           includes only keys where the delete operation encountered
           an error. For a successful deletion, the operation does not
           return any information about the delete in the response
           body.

         * **mfa_token** (*tuple or list of strings*) -- A tuple or
           list consisting of the serial number from the MFA device
           and the current value of the six-digit token associated
           with the device.  This value is required anytime you are
           deleting versioned objects from a bucket that has the
           MFADelete option on the bucket.

      Returns:
         An instance of MultiDeleteResult
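
A minimal sketch of a bulk delete that surfaces the per-key failures from the MultiDeleteResult (helper name hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def delete_many(bucket, key_names):
    """Bulk-delete keys and return the names that failed to delete.

    Hypothetical helper over delete_keys(); it inspects the Error
    elements on the returned MultiDeleteResult.
    """
    result = bucket.delete_keys(key_names)
    return [err.key for err in result.errors]
```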

   delete_lifecycle_configuration(headers=None)

      Removes all lifecycle configuration from the bucket.

   delete_policy(headers=None)

   delete_tags(headers=None)

   delete_website_configuration(headers=None)

      Remove the website configuration from this bucket.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

   disable_logging(headers=None)

      Disable logging on this bucket.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

   enable_logging(target_bucket, target_prefix=None, headers=None)

      Enable logging on a bucket.

      Parameters:
         * **target_bucket** (*bucket or string*) -- The bucket to
           log to.

         * **target_prefix** (*string*) -- The prefix which should
           be prepended to the generated log files written to the
           target_bucket.

         * **headers** (*dict*) -- Additional headers to send with
           the request.
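
A minimal sketch pairing this call with `get_logging_config()` (helper name hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def start_access_logging(bucket, log_bucket, prefix='access-log/'):
    """Enable logging into `log_bucket`, then read the config back.

    Hypothetical helper; `log_bucket` may be a Bucket or a bucket
    name, since enable_logging() accepts either.
    """
    bucket.enable_logging(log_bucket, target_prefix=prefix)
    return bucket.get_logging_config()
```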

   generate_url(expires_in, method='GET', headers=None, force_http=False, response_headers=None, expires_in_absolute=False)

   get_acl(key_name='', headers=None, version_id=None, generation=None)

      Returns the ACL of the bucket or an object in the bucket.

      Parameters:
         * **key_name** (*str*) -- The name of the object to get the
           ACL for. If not specified, the ACL for the bucket will be
           returned.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **version_id** (*string*) -- Unused in this subclass.

         * **generation** (*int*) -- If specified, gets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is returned. This parameter
           is only valid when retrieving the ACL of an object, not a
           bucket.

      Return type:
         "gs.acl.ACL"

   get_all_keys(headers=None, **params)

      A lower-level method for listing contents of a bucket.  This
      closely models the actual S3 API and requires you to manually
      handle the paging of results.  For a higher-level method that
      handles the details of paging for you, you can use the list
      method.

      Parameters:
         * **max_keys** (*int*) -- The maximum number of keys to
           retrieve

         * **prefix** (*string*) -- The prefix of the keys you want
           to retrieve

         * **marker** (*string*) -- The "marker" of where you are in
           the result set

         * **delimiter** (*string*) -- If this optional, Unicode
           string parameter is included with your request, then keys
           that contain the same string between the prefix and the
           first occurrence of the delimiter will be rolled up into a
           single result element in the CommonPrefixes collection.
           These rolled-up keys are not returned elsewhere in the
           response.

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         ResultSet

      Returns:
         The result from S3 listing the keys requested

   get_all_multipart_uploads(headers=None, **params)

      A lower-level, version-aware method for listing active MultiPart
      uploads for a bucket.  This closely models the actual S3 API and
      requires you to manually handle the paging of results.  For a
      higher-level method that handles the details of paging for you,
      you can use the list method.

      Parameters:
         * **max_uploads** (*int*) -- The maximum number of uploads
           to retrieve. Default value is 1000.

         * **key_marker** (*string*) --

           Together with upload_id_marker, this parameter specifies
           the multipart upload after which listing should begin.  If
           upload_id_marker is not specified, only the keys
           lexicographically greater than the specified key_marker
           will be included in the list.

           If upload_id_marker is specified, any multipart uploads for
           a key equal to the key_marker might also be included,
           provided those multipart uploads have upload IDs
           lexicographically greater than the specified
           upload_id_marker.

         * **upload_id_marker** (*string*) -- Together with key-
           marker, specifies the multipart upload after which listing
           should begin. If key_marker is not specified, the
           upload_id_marker parameter is ignored.  Otherwise, any
           multipart uploads for a key equal to the key_marker might
           be included in the list only if they have an upload ID
           lexicographically greater than the specified
           upload_id_marker.

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

         * **delimiter** (*string*) -- Character you use to group
           keys. All keys that contain the same string between the
           prefix, if specified, and the first occurrence of the
           delimiter after the prefix are grouped under a single
           result element, CommonPrefixes. If you don't specify the
           prefix parameter, then the substring starts at the
           beginning of the key. The keys that are grouped under
           CommonPrefixes result element are not returned elsewhere in
           the response.

         * **prefix** (*string*) -- Lists in-progress uploads only
           for those keys that begin with the specified prefix. You
           can use prefixes to separate a bucket into different
           grouping of keys. (You can think of using prefix to make
           groups in the same way you'd use a folder in a file
           system.)

      Return type:
         ResultSet

      Returns:
         The result from S3 listing the uploads requested

   get_all_versions(headers=None, **params)

      A lower-level, version-aware method for listing contents of a
      bucket.  This closely models the actual S3 API and requires you
      to manually handle the paging of results.  For a higher-level
      method that handles the details of paging for you, you can use
      the list method.

      Parameters:
         * **max_keys** (*int*) -- The maximum number of keys to
           retrieve

         * **prefix** (*string*) -- The prefix of the keys you want
           to retrieve

         * **key_marker** (*string*) -- The "marker" of where you
           are in the result set with respect to keys.

         * **version_id_marker** (*string*) -- The "marker" of where
           you are in the result set with respect to version-id's.

         * **delimiter** (*string*) -- If this optional, Unicode
           string parameter is included with your request, then keys
           that contain the same string between the prefix and the
           first occurrence of the delimiter will be rolled up into a
           single result element in the CommonPrefixes collection.
           These rolled-up keys are not returned elsewhere in the
           response.

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         ResultSet

      Returns:
         The result from S3 listing the keys requested

   get_cors(headers=None)

      Returns a bucket's CORS XML document.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

      Return type:
         "Cors"

   get_cors_xml(headers=None)

      Returns the current CORS configuration on the bucket as an XML
      document.

   get_def_acl(headers=None)

      Returns the bucket's default ACL.

      Parameters:
         **headers** (*dict*) -- Additional headers to set during the
         request.

      Return type:
         "gs.acl.ACL"

   get_key(key_name, headers=None, version_id=None, response_headers=None, generation=None)

      Returns a Key instance for an object in this bucket.

         Note that this method uses a HEAD request to check for the
         existence of the key.

      Parameters:
         * **key_name** (*string*) -- The name of the key to
           retrieve

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/06N3b for details.

         * **version_id** (*string*) -- Unused in this subclass.

         * **generation** (*int*) -- A specific generation number to
           fetch the key at. If not specified, the latest generation
           is fetched.

      Return type:
         "boto.gs.key.Key"

      Returns:
         A Key object from this bucket.
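
Because `get_key()` returns None when no such object exists, an existence check needs no data download (helper name hypothetical):

```python
def key_exists(bucket, key_name):
    """True if the object exists, using the HEAD request get_key() issues.

    Hypothetical helper; relies on get_key() returning None for a
    missing object.
    """
    return bucket.get_key(key_name) is not None
```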

   get_lifecycle_config(headers=None)

      Returns the current lifecycle configuration on the bucket.

      Return type:
         "boto.gs.lifecycle.LifecycleConfig"

      Returns:
         A LifecycleConfig object that describes all current lifecycle
         rules in effect for the bucket.

   get_location()

      Returns the LocationConstraint for the bucket.

      Return type:
         str

      Returns:
         The LocationConstraint for the bucket or the empty string if
         no constraint was specified when bucket was created.

   get_logging_config(headers=None)

      Returns the current status of logging configuration on the
      bucket.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

      Return type:
         dict

      Returns:
         A dictionary containing the parsed XML response from GCS. The
         overall structure is:

         * Logging

           * LogObjectPrefix: Prefix that is prepended to log
             objects.

           * LogBucket: Target bucket for log objects.

   get_logging_config_with_xml(headers=None)

      Returns the current status of logging configuration on the
      bucket as unparsed XML.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

      Return type:
         2-Tuple

      Returns:
         2-tuple containing:

         1. A dictionary containing the parsed XML response from
            GCS. The overall structure is:

            * Logging

              * LogObjectPrefix: Prefix that is prepended to log
                objects.

              * LogBucket: Target bucket for log objects.

         2. Unparsed XML describing the bucket's logging
            configuration.

   get_logging_status(headers=None)

      Get the logging status for this bucket.

      Return type:
         "boto.s3.bucketlogging.BucketLogging"

      Returns:
         A BucketLogging object for this bucket.

   get_policy(headers=None)

      Returns the JSON policy associated with the bucket.  The policy
      is returned as an uninterpreted JSON string.

   get_request_payment(headers=None)

   get_storage_class()

      Returns the StorageClass for the bucket.

      Return type:
         str

      Returns:
         The StorageClass for the bucket.

   get_subresource(subresource, key_name='', headers=None, version_id=None)

      Get a subresource for a bucket or key.

      Parameters:
         * **subresource** (*string*) -- The subresource to get.

         * **key_name** (*string*) -- The key to operate on, or the
           empty string (the default) to operate on the bucket.

         * **headers** (*dict*) -- Additional HTTP headers to
           include in the request.

         * **version_id** (*string*) -- Optional. The version id
           of the key to operate on. If not specified, operate on the
           newest version.

      Return type:
         string

      Returns:
         The value of the subresource.

   get_tags()

   get_versioning_status(headers=None)

      Returns the current status of versioning configuration on the
      bucket.

      Return type:
         bool

   get_website_configuration(headers=None)

      Returns the current status of website configuration on the
      bucket.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

      Return type:
         dict

      Returns:
         A dictionary containing the parsed XML response from GCS. The
         overall structure is:

         * WebsiteConfiguration

           * MainPageSuffix: suffix that is appended to request that
             is for a "directory" on the website endpoint.

           * NotFoundPage: name of an object to serve when site
             visitors encounter a 404.

   get_website_configuration_obj(headers=None)

      Get the website configuration as a
      "boto.s3.website.WebsiteConfiguration" object.

   get_website_configuration_with_xml(headers=None)

      Returns the current status of website configuration on the
      bucket as unparsed XML.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

      Return type:
         2-Tuple

      Returns:
         2-tuple containing:

         1. A dictionary containing the parsed XML response from
            GCS. The overall structure is:

            * WebsiteConfiguration

              * MainPageSuffix: suffix that is appended to request
                that is for a "directory" on the website endpoint.

              * NotFoundPage: name of an object to serve when site
                visitors encounter a 404

         2. Unparsed XML describing the bucket's website
            configuration.

   get_website_configuration_xml(headers=None)

      Get raw website configuration xml

   get_website_endpoint()

      Returns the fully qualified hostname to use if you want to
      access this bucket as a website.  This doesn't validate whether
      the bucket has been correctly configured as a website or not.

   get_xml_acl(key_name='', headers=None, version_id=None, generation=None)

      Returns the ACL string of the bucket or an object in the bucket.

      Parameters:
         * **key_name** (*str*) -- The name of the object to get the
           ACL for. If not specified, the ACL for the bucket will be
           returned.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **version_id** (*string*) -- Unused in this subclass.

         * **generation** (*int*) -- If specified, gets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is returned. This parameter
           is only valid when retrieving the ACL of an object, not a
           bucket.

      Return type:
         str

   get_xml_tags()

   initiate_multipart_upload(key_name, headers=None, reduced_redundancy=False, metadata=None, encrypt_key=False, policy=None)

      Start a multipart upload operation.

      Note: After you initiate a multipart upload and upload one or
        more parts, you must either complete or abort the multipart
        upload in order to stop getting charged for storage of the
        uploaded parts. Only after you either complete or abort the
        multipart upload does Amazon S3 free up the parts storage and
        stop charging you for it.

      Parameters:
         * **key_name** (*string*) -- The name of the key that will
           ultimately result from this multipart upload operation.
           This will be exactly as the key appears in the bucket after
           the upload process has been completed.

         * **headers** (*dict*) -- Additional HTTP headers to send
           and store with the resulting key in S3.

         * **reduced_redundancy** (*boolean*) -- In multipart
           uploads, the storage class is specified when initiating the
           upload, not when uploading individual parts.  So if you
           want the resulting key to use the reduced redundancy
           storage class set this flag when you initiate the upload.

         * **metadata** (*dict*) -- Any metadata that you would like
           to set on the key that results from the multipart upload.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key (once
           completed) in S3.
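
A minimal sketch of the initiate → upload parts → complete lifecycle, cancelling on failure so partial parts stop accruing storage charges (helper name hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def upload_in_parts(bucket, key_name, part_files):
    """Upload `part_files` (a sequence of file-like objects) as one key.

    Hypothetical helper; on any error the upload is cancelled, per the
    note above about charges for uncompleted uploads.
    """
    mp = bucket.initiate_multipart_upload(key_name)
    try:
        for part_num, fp in enumerate(part_files, start=1):
            mp.upload_part_from_file(fp, part_num)
        return mp.complete_upload()
    except Exception:
        mp.cancel_upload()
        raise
```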

   list(prefix='', delimiter='', marker='', headers=None, encoding_type=None)

      List key objects within a bucket.  This returns an instance of
      a BucketListResultSet that automatically handles all of the
      result paging, etc. from S3.  You just need to keep iterating
      until there are no more results.

      Called with no arguments, this will return an iterator object
      across all keys within the bucket.

      The Key objects returned by the iterator are obtained by parsing
      the results of a GET on the bucket, also known as the List
      Objects request.  The XML returned by this request contains only
      a subset of the information about each key.  Certain metadata
      fields such as Content-Type and user metadata are not available
      in the XML. Therefore, if you want these additional metadata
      fields you will have to do a HEAD request on the Key in the
      bucket.

      Parameters:
         * **prefix** (*string*) -- allows you to limit the listing
           to a particular prefix.  For example, if you call the
           method with prefix='/foo/' then the iterator will only
           cycle through the keys that begin with the string '/foo/'.

         * **delimiter** (*string*) -- can be used in conjunction
           with the prefix to allow you to organize and browse your
           keys hierarchically. See http://goo.gl/Xx63h for more
           details.

         * **marker** (*string*) -- The "marker" of where you are in
           the result set

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         "boto.s3.bucketlistresultset.BucketListResultSet"

      Returns:
         an instance of a BucketListResultSet that handles paging, etc
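
A minimal sketch of folder-style listing with a prefix and delimiter (helper name hypothetical; `bucket` is assumed to be a `boto.gs.bucket.Bucket`):

```python
def names_under(bucket, prefix):
    """Yield the names of entries directly under a folder-like prefix.

    Hypothetical helper; with delimiter='/', deeper keys roll up into
    CommonPrefixes entries, so iteration covers only the immediate
    "children" of `prefix`.
    """
    for item in bucket.list(prefix=prefix, delimiter='/'):
        yield item.name
```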

   list_grants(headers=None)

      Returns the ACL entries applied to this bucket.

      Parameters:
         **headers** (*dict*) -- Additional headers to send with the
         request.

      Return type:
         list containing "Entry" objects.

   list_multipart_uploads(key_marker='', upload_id_marker='', headers=None, encoding_type=None)

      List multipart upload objects within a bucket.  This returns an
      instance of a MultiPartUploadListResultSet that automatically
      handles all of the result paging, etc. from S3.  You just need
      to keep iterating until there are no more results.

      Parameters:
         * **key_marker** (*string*) -- The "marker" of where you
           are in the result set

         * **upload_id_marker** (*string*) -- The upload identifier

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

            An object key can contain any Unicode character; however,
            an XML 1.0 parser cannot parse some characters, such as
            characters with ASCII values from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         "boto.s3.bucketlistresultset.MultiPartUploadListResultSet"

      Returns:
         an instance of a MultiPartUploadListResultSet that handles
         paging, etc.

   list_versions(prefix='', delimiter='', marker='', generation_marker='', headers=None)

      List versioned objects within a bucket.  This returns an
      instance of a VersionedBucketListResultSet that automatically
      handles all of the result paging, etc. from GCS.  You just need
      to keep iterating until there are no more results.  Called with
      no arguments, this will return an iterator object across all
      keys within the bucket.

      Parameters:
         * **prefix** (*string*) -- allows you to limit the listing
           to a particular prefix.  For example, if you call the
           method with prefix='/foo/' then the iterator will only
           cycle through the keys that begin with the string '/foo/'.

         * **delimiter** (*string*) -- can be used in conjunction
           with the prefix to allow you to organize and browse your
            keys hierarchically. See
            https://developers.google.com/storage/docs/reference-headers#delimiter
            for more details.

         * **marker** (*string*) -- The "marker" of where you are in
           the result set

         * **generation_marker** (*string*) -- The "generation
           marker" of where you are in the result set.

         * **headers** (*dict*) -- A dictionary of header name/value
           pairs.

      Return type:
         "boto.gs.bucketlistresultset.VersionedBucketListResultSet"

      Returns:
         an instance of a BucketListResultSet that handles paging,
         etc.

   lookup(key_name, headers=None)

      Deprecated: Please use get_key method.

      Parameters:
         **key_name** (*string*) -- The name of the key to retrieve

      Return type:
         "boto.s3.key.Key"

      Returns:
         A Key object from this bucket.

   make_public(recursive=False, headers=None)

   new_key(key_name=None)

      Creates a new key

      Parameters:
         **key_name** (*string*) -- The name of the key to create

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         An instance of the newly created key object

   set_acl(acl_or_str, key_name='', headers=None, version_id=None, generation=None, if_generation=None, if_metageneration=None)

      Sets or changes a bucket's or key's ACL.

      Parameters:
         * **acl_or_str** (string or "boto.gs.acl.ACL") -- A canned
           ACL string (see "CannedACLStrings") or an ACL object.

         * **key_name** (*string*) -- A key name within the bucket
           to set the ACL for. If not specified, the ACL for the
           bucket will be set.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **version_id** (*string*) -- Unused in this subclass.

         * **generation** (*int*) -- If specified, sets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is modified.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the ACL will only be updated if its
           current generation number is this value.

         * **if_metageneration** (*int*) -- (optional) If set to a
           metageneration number, the ACL will only be updated if its
           current metageneration number is this value.
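
      The ACL object passed to set_acl can be assembled locally. A
      minimal sketch (the grantee address is hypothetical; the final
      PUT is commented out because it requires a live connection):

```python
from boto.gs.acl import ACL, SupportedPermissions

# Build an ACL locally; the grantee address is a placeholder.
acl = ACL()
acl.add_email_grant('READ', 'user@example.com')

# 'READ' is one of the permissions this module supports.
assert 'READ' in SupportedPermissions

# Render the ACL document that set_acl would PUT to GCS.
xml = acl.to_xml()
print(xml)

# With a live connection this ACL could then be applied:
#   bucket.set_acl(acl)               # bucket-level ACL
#   bucket.set_acl(acl, 'photo.jpg')  # object-level ACL
```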

   set_as_logging_target(headers=None)

      Setup the current bucket as a logging target by granting the
      necessary permissions to the LogDelivery group to write log
      files to this bucket.

   set_canned_acl(acl_str, key_name='', headers=None, version_id=None, generation=None, if_generation=None, if_metageneration=None)

      Sets a bucket's or object's ACL using a predefined (canned)
      value.

      Parameters:
         * **acl_str** (*string*) -- A canned ACL string. See
           "CannedACLStrings".

         * **key_name** (*string*) -- A key name within the bucket
           to set the ACL for. If not specified, the ACL for the
           bucket will be set.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **version_id** (*string*) -- Unused in this subclass.

         * **generation** (*int*) -- If specified, sets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is modified.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the ACL will only be updated if its
           current generation number is this value.

         * **if_metageneration** (*int*) -- (optional) If set to a
           metageneration number, the ACL will only be updated if its
           current metageneration number is this value.
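
      Canned ACL strings can be validated locally against
      CannedACLStrings before making any request; a small sketch (the
      set_canned_acl call is commented out since it needs a live
      connection):

```python
from boto.gs.acl import CannedACLStrings

canned = 'project-private'
# Check the string against the module's list of predefined ACLs.
if canned not in CannedACLStrings:
    raise ValueError('not a canned ACL: %s' % canned)
print(sorted(CannedACLStrings))

# With a live connection:
#   bucket.set_canned_acl(canned)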

   set_cors(cors, headers=None)

      Sets a bucket's CORS XML document.

      Parameters:
         * **cors** (*str*) -- A string containing the CORS XML.

         * **headers** (*dict*) -- Additional headers to send with
           the request.

   set_cors_xml(cors_xml, headers=None)

      Set the CORS (Cross-Origin Resource Sharing) for a bucket.

      Parameters:
         **cors_xml** (*str*) -- The XML document describing your
         desired CORS configuration.  See the S3 documentation for
         details of the exact syntax required.

   set_def_acl(acl_or_str, headers=None)

      Sets or changes a bucket's default ACL.

      Parameters:
         * **acl_or_str** (string or "boto.gs.acl.ACL") -- A canned
           ACL string (see "CannedACLStrings") or an ACL object.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

   set_def_canned_acl(acl_str, headers=None)

      Sets a bucket's default ACL using a predefined (canned) value.

      Parameters:
         * **acl_str** (*string*) -- A canned ACL string. See
           "CannedACLStrings".

         * **headers** (*dict*) -- Additional headers to set during
           the request.

   set_def_xml_acl(acl_str, headers=None)

      Sets a bucket's default ACL to an XML string.

      Parameters:
         * **acl_str** (*string*) -- A string containing the ACL
           XML.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

   set_key_class(key_class)

      Set the Key class associated with this bucket.  By default this
      is the boto.s3.key.Key class, but if you want to subclass that
      for some reason, this allows you to associate your new class
      with a bucket so that when you call bucket.new_key() or get a
      listing of keys in the bucket you will get instances of your key
      class rather than the default.

      Parameters:
         **key_class** (*class*) -- A subclass of Key that can be more
         specific
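
      A minimal offline sketch of this mechanism (the AuditedKey
      subclass and bucket name are hypothetical):

```python
from boto.gs.bucket import Bucket
from boto.gs.key import Key

class AuditedKey(Key):
    """Hypothetical Key subclass that could add extra behavior."""

# No connection is needed just to demonstrate the class association.
bucket = Bucket(name='example-bucket')
bucket.set_key_class(AuditedKey)

# new_key() now produces instances of the custom class.
key = bucket.new_key('report.txt')
print(type(key).__name__)  # AuditedKey
```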

   set_policy(policy, headers=None)

      Add or replace the JSON policy associated with the bucket.

      Parameters:
         **policy** (*str*) -- The JSON policy as a string.

   set_request_payment(payer='BucketOwner', headers=None)

   set_subresource(subresource, value, key_name='', headers=None, version_id=None)

      Set a subresource for a bucket or key.

      Parameters:
         * **subresource** (*string*) -- The subresource to set.

         * **value** (*string*) -- The value of the subresource.

         * **key_name** (*string*) -- The key to operate on, or None
           to operate on the bucket.

         * **headers** (*dict*) -- Additional HTTP headers to
           include in the request.

         * **version_id** (*string*) -- Optional. The version id of
           the key to operate on. If not specified, operate on the
           newest version.

   set_tags(tags, headers=None)

   set_website_configuration(config, headers=None)

      Parameters:
         **config** (*boto.s3.website.WebsiteConfiguration*) --
         Configuration data

   set_website_configuration_xml(xml, headers=None)

      Upload xml website configuration

   set_xml_acl(acl_str, key_name='', headers=None, version_id=None, query_args='acl', generation=None, if_generation=None, if_metageneration=None)

      Sets a bucket's or object's ACL to an XML string.

      Parameters:
         * **acl_str** (*string*) -- A string containing the ACL
           XML.

         * **key_name** (*string*) -- A key name within the bucket
           to set the ACL for. If not specified, the ACL for the
           bucket will be set.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **version_id** (*string*) -- Unused in this subclass.

         * **query_args** (*str*) -- The query parameters to pass
           with the request.

         * **generation** (*int*) -- If specified, sets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is modified.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the ACL will only be updated if its
           current generation number is this value.

         * **if_metageneration** (*int*) -- (optional) If set to a
           metageneration number, the ACL will only be updated if its
           current metageneration number is this value.

   set_xml_logging(logging_str, headers=None)

      Set logging on a bucket directly to the given xml string.

      Parameters:
         **logging_str** (*unicode string*) -- The XML for the
         bucketloggingstatus which will be set.  The string will be
         converted to utf-8 before it is sent.  Usually, you will
         obtain this XML from the BucketLogging object.

      Return type:
         bool

      Returns:
         True if ok or raises an exception.

   set_xml_tags(tag_str, headers=None, query_args='tagging')

   validate_get_all_versions_params(params)

      See documentation in boto/s3/bucket.py.

   validate_kwarg_names(kwargs, names)

      Checks that all named arguments are in the specified list of
      names.

      Parameters:
         * **kwargs** (*dict*) -- Dictionary of kwargs to validate.

         * **names** (*list*) -- List of possible named arguments.
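
      This check runs locally, so it can be sketched without a
      connection (the bucket name is a placeholder):

```python
from boto.gs.bucket import Bucket

bucket = Bucket(name='example-bucket')  # no connection required

# Accepted: every kwarg appears in the list of allowed names.
bucket.validate_kwarg_names({'prefix': 'logs/'}, ['prefix', 'marker'])

# Rejected: an unknown name raises TypeError.
try:
    bucket.validate_kwarg_names({'bogus': 1}, ['prefix', 'marker'])
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```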


boto.gs.bucketlistresultset
===========================

class class boto.gs.bucketlistresultset.VersionedBucketListResultSet(bucket=None, prefix='', delimiter='', marker='', generation_marker='', headers=None)

   A resultset for listing versions within a bucket.  Uses the
   bucket_lister generator function and implements the iterator
   interface.  This transparently handles the results paging from GCS
   so even if you have many thousands of keys within the bucket you
   can iterate over all keys in a reasonably efficient manner.

boto.gs.bucketlistresultset.versioned_bucket_lister(bucket, prefix='', delimiter='', marker='', generation_marker='', headers=None)

   A generator function for listing versioned objects.


boto.gs.connection
==================

class class boto.gs.connection.GSConnection(gs_access_key_id=None, gs_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='storage.googleapis.com', debug=0, https_connection_factory=None, calling_format=<boto.s3.connection.SubdomainCallingFormat object>, path='/', suppress_consec_slashes=True)

   DefaultCallingFormat = 'boto.s3.connection.SubdomainCallingFormat'

   DefaultHost = 'storage.googleapis.com'

   QueryString = 'Signature=%s&Expires=%d&GoogleAccessId=%s'

   access_key

   auth_region_name

   auth_service_name

   aws_access_key_id

   aws_secret_access_key

   build_base_http_request(method, path, auth_path, params=None, headers=None, data='', host=None)

   build_post_form_args(bucket_name, key, expires_in=6000, acl=None, success_action_redirect=None, max_content_length=None, http_method='http', fields=None, conditions=None, storage_class='STANDARD', server_side_encryption=None)

      Taken from the AWS book Python examples and modified for use
      with boto.  This only returns the arguments required for the
      POST form, not the actual form.  It does not return the file
      input field, which also needs to be added.

      Parameters:
         * **bucket_name** (*string*) -- Bucket to submit to

         * **key** (*string*) -- Key name, optionally add
           ${filename} to the end to attach the submitted filename

         * **expires_in** (*integer*) -- Time (in seconds) before
           this expires, defaults to 6000

         * **acl** (*string*) -- A canned ACL.  One of: private,
           public-read, public-read-write, authenticated-read,
           bucket-owner-read, bucket-owner-full-control

         * **success_action_redirect** (*string*) -- URL to redirect
           to on success

         * **max_content_length** (*integer*) -- Maximum size for
           this file

         * **http_method** (*string*) -- HTTP Method to use, "http"
           or "https"

         * **storage_class** (*string*) -- Storage class to use for
           storing the object. Valid values: STANDARD |
           REDUCED_REDUNDANCY

         * **server_side_encryption** (*string*) -- Specifies
           server- side encryption algorithm to use when Amazon S3
           creates an object. Valid values: None | AES256

      Return type:
         dict

      Returns:
         A dictionary containing field names/values as well as a url
         to POST to

   build_post_policy(expiration_time, conditions)

      Taken from the AWS book Python examples and modified for use
      with boto

   close()

      (Optional) Close any open HTTP connections.  This is non-
      destructive, and making a new request will open a connection
      again.

   connection

   create_bucket(bucket_name, headers=None, location='US', policy=None, storage_class='STANDARD')

      Creates a new bucket. By default it's located in the USA. You
      can pass Location.EU to create a bucket in the EU. You can also
      pass a LocationConstraint for where the bucket should be
      located, and a StorageClass describing how the data should be
      stored.

      Parameters:
         * **bucket_name** (*string*) -- The name of the new bucket.

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to GCS.

         * **location** ("boto.gs.connection.Location") -- The
           location of the new bucket.

         * **policy** ("boto.gs.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new bucket in GCS.

         * **storage_class** (*string*) -- Either 'STANDARD' or
           'DURABLE_REDUCED_AVAILABILITY'.

   delete_bucket(bucket, headers=None)

      Removes an S3 bucket.

      In order to remove the bucket, it must first be empty. If the
      bucket is not empty, an "S3ResponseError" will be raised.

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

   generate_url(expires_in, method, bucket='', key='', headers=None, query_auth=True, force_http=False, response_headers=None, expires_in_absolute=False, version_id=None)

   get_all_buckets(headers=None)

   get_bucket(bucket_name, validate=True, headers=None)

      Retrieves a bucket by name.

      If the bucket does not exist, an "S3ResponseError" will be
      raised. If you are unsure if the bucket exists or not, you can
      use the "S3Connection.lookup" method, which will either return a
      valid bucket or "None".

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

         * **validate** (*boolean*) -- If "True", it will try to
           fetch all keys within the given bucket. (Default: "True")

   get_canonical_user_id(headers=None)

      Convenience method that returns the "CanonicalUserID" of the
      user whose credentials are associated with the connection. The
      only way to get this value is to do a GET request on the service
      which returns all buckets associated with the account. As part
      of that response, the canonical userid is returned. This method
      simply does all of that and then returns just the user id.

      Return type:
         string

      Returns:
         A string containing the canonical user id.

   get_http_connection(host, port, is_secure)

   get_path(path='/')

   get_proxy_auth_header()

   gs_access_key_id

   gs_secret_access_key

   handle_proxy(proxy, proxy_port, proxy_user, proxy_pass)

   head_bucket(bucket_name, headers=None)

      Determines if a bucket exists by name.

      If the bucket does not exist, an "S3ResponseError" will be
      raised.

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

      Returns:
         A <Bucket> object

   lookup(bucket_name, validate=True, headers=None)

      Attempts to get a bucket from S3.

      Works identically to "S3Connection.get_bucket", except that it
      will return "None" if the bucket does not exist instead of
      throwing an exception.

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

         * **validate** (*boolean*) -- If "True", it will try to
           fetch all keys within the given bucket. (Default: "True")

   make_request(method, bucket='', key='', headers=None, data='', query_args=None, sender=None, override_num_retries=None, retry_handler=None)

   new_http_connection(host, port, is_secure)

   prefix_proxy_to_path(path, host=None)

   profile_name

   proxy_ssl(host=None, port=None)

   put_http_connection(host, port, is_secure, connection)

   secret_key

   server_name(port=None)

   set_bucket_class(bucket_class)

      Set the Bucket class associated with this connection.  By
      default this is the boto.s3.bucket.Bucket class, but if you want
      to subclass that for some reason, this allows you to associate
      your new class with the connection.

      Parameters:
         **bucket_class** (*class*) -- A subclass of Bucket that can
         be more specific

   set_host_header(request)

   set_request_hook(hook)

   skip_proxy(host)

class class boto.gs.connection.Location

   DEFAULT = 'US'

   EU = 'EU'


boto.gs.cors
============

class class boto.gs.cors.Cors

   Encapsulates the CORS configuration XML document

   endElement(name, value, connection)

      SAX XML logic for parsing new element found.

   startElement(name, attrs, connection)

      SAX XML logic for parsing new element found.

   to_xml()

      Convert CORS object into XML string representation.

   validateParseLevel(tag, level)

      Verify parse level for a given tag.
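
   A Cors object is normally populated by SAX-parsing a bucket's CORS
   document, as bucket.get_cors does internally. A sketch with a
   hand-written document (the origin and method values are examples):

```python
import xml.sax

from boto import handler
from boto.gs.cors import Cors

# A minimal GCS CORS configuration document.
CORS_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<CorsConfig>
  <Cors>
    <Origins><Origin>https://example.com</Origin></Origins>
    <Methods><Method>GET</Method></Methods>
  </Cors>
</CorsConfig>"""

cors = Cors()
# boto wraps the object in a generic SAX handler that drives
# startElement/endElement calls during parsing.
xml.sax.parseString(CORS_XML, handler.XmlHandler(cors, None))
print(cors.to_xml())
```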


boto.gs.key
===========

class class boto.gs.key.Key(bucket=None, name=None, generation=None)

   Represents a key (object) in a GS bucket.

   Variables:
      * **bucket** -- The parent "boto.gs.bucket.Bucket".

      * **name** -- The name of this Key object.

      * **metadata** -- A dictionary containing user metadata that
        you wish to store with the object or that has been retrieved
        from an existing object.

      * **cache_control** -- The value of the *Cache-Control* HTTP
        header.

      * **content_type** -- The value of the *Content-Type* HTTP
        header.

      * **content_encoding** -- The value of the *Content-Encoding*
        HTTP header.

      * **content_disposition** -- The value of the *Content-
        Disposition* HTTP header.

      * **content_language** -- The value of the *Content-Language*
        HTTP header.

      * **etag** -- The *etag* associated with this object.

      * **last_modified** -- The string timestamp representing the
        last time this object was modified in GS.

      * **owner** -- The ID of the owner of this object.

      * **storage_class** -- The storage class of the object.
        Currently, one of: STANDARD | DURABLE_REDUCED_AVAILABILITY.

      * **md5** -- The MD5 hash of the contents of the object.

      * **size** -- The size, in bytes, of the object.

      * **generation** -- The generation number of the object.

      * **metageneration** -- The generation number of the object
        metadata.

      * **encrypted** -- Whether the object is encrypted while at
        rest on the server.

      * **cloud_hashes** -- Dictionary of checksums as supplied by
        the storage provider.

   BufferSize = 8192

   DefaultContentType = 'application/octet-stream'

   RestoreBody = '<?xml version="1.0" encoding="UTF-8"?>\n      <RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01">\n        <Days>%s</Days>\n      </RestoreRequest>'

   add_email_grant(permission, email_address)

      Convenience method that provides a quick way to add an email
      grant to a key. This method retrieves the current ACL, creates a
      new grant based on the parameters passed in, adds that grant to
      the ACL and then PUT's the new ACL back to GS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: READ|FULL_CONTROL See
           http://code.google.com/apis/storage/docs/developer-
           guide.html#authorization for more details on permissions.

         * **email_address** (*string*) -- The email address
           associated with the Google account to which you are
           granting the permission.

   add_group_email_grant(permission, email_address, headers=None)

      Convenience method that provides a quick way to add an email
      group grant to a key. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUT's the new ACL back to GS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: READ|FULL_CONTROL See
           http://code.google.com/apis/storage/docs/developer-
           guide.html#authorization for more details on permissions.

         * **email_address** (*string*) -- The email address
           associated with the Google Group to which you are granting
           the permission.

   add_group_grant(permission, group_id)

      Convenience method that provides a quick way to add a canonical
      group grant to a key. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUT's the new ACL back to GS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: READ|FULL_CONTROL See
           http://code.google.com/apis/storage/docs/developer-
           guide.html#authorization for more details on permissions.

         * **group_id** (*string*) -- The canonical group id
           associated with the Google Groups account you are granting
           the permission to.

   add_user_grant(permission, user_id)

      Convenience method that provides a quick way to add a canonical
      user grant to a key. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUT's the new ACL back to GS.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: READ|FULL_CONTROL See
           http://code.google.com/apis/storage/docs/developer-
           guide.html#authorization for more details on permissions.

         * **user_id** (*string*) -- The canonical user id
           associated with the GS account to which you are granting
           the permission.

   base64md5

   base_user_settable_fields = set(['content-disposition', 'content-language', 'content-encoding', 'content-md5', 'cache-control', 'content-type'])

   change_storage_class(new_storage_class, dst_bucket=None, validate_dst_bucket=True)

      Change the storage class of an existing key. Depending on
      whether a different destination bucket is supplied or not, this
      will either move the item within the bucket, preserving all
      metadata and ACL info while changing the storage class, or it
      will copy the item to the provided destination bucket, also
      preserving metadata and ACL info.

      Parameters:
         * **new_storage_class** (*string*) -- The new storage class
           for the Key. Possible values are: * STANDARD *
           REDUCED_REDUNDANCY

         * **dst_bucket** (*string*) -- The name of a destination
           bucket.  If not provided the current bucket of the key will
           be used.

         * **validate_dst_bucket** (*bool*) -- If True, will
           validate the dst_bucket by using an extra list request.

   close(fast=False)

      Close this key.

      Parameters:
         **fast** (*bool*) -- True if you want the connection to be
         closed without first reading the content. This should only be
         used in cases where subsequent calls don't need to return the
         content from the open HTTP connection. Note: As explained at
         http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection.getresponse,
         callers must read the whole response before sending a new
         request to the server. Calling Key.close(fast=True) and making
         a subsequent request to the server will work because boto will
         get an httplib exception and close/reopen the connection.

   closed = False

   compose(components, content_type=None, headers=None)

      Create a new object from a sequence of existing objects.

      The content of the object representing this Key will be the
      concatenation of the given object sequence. For more detail,
      visit

         https://developers.google.com/storage/docs/composite-objects

      Parameters:
         * **components** (*list*) -- List of gs.Keys representing the
           component objects.

         * **content_type** (*string*) -- (optional) Content type for
           the new composite object.

   compute_hash(fp, algorithm, size=None)

      Parameters:
         * **fp** (*file*) -- File pointer to the file to hash. The
           file pointer will be reset to the same position before the
           method returns.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where the file is
           being split in place into different parts. Fewer bytes may
           be available.

   compute_md5(fp, size=None)

      Parameters:
         * **fp** (*file*) -- File pointer to the file to MD5 hash.
           The file pointer will be reset to the same position before
           the method returns.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where the file is
           being split in place into different parts. Fewer bytes may
           be available.
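
      Hashing is purely local, so compute_md5 can be exercised without
      any connection (the content here is an arbitrary example):

```python
from io import BytesIO

from boto.gs.key import Key

key = Key()  # no bucket or connection needed just to hash
fp = BytesIO(b'hello world')

hex_md5, b64_md5 = key.compute_md5(fp)
print(hex_md5)    # 5eb63bbbe01eeed093cb22bb8f5acdc3
print(fp.tell())  # 0 -- the file pointer is reset for reuse
```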

   copy(dst_bucket, dst_key, metadata=None, reduced_redundancy=False, preserve_acl=False, encrypt_key=False, validate_dst_bucket=True)

      Copy this Key to another bucket.

      Parameters:
         * **dst_bucket** (*string*) -- The name of the destination
           bucket

         * **dst_key** (*string*) -- The name of the destination key

         * **metadata** (*dict*) -- Metadata to be associated with
           new key.  If metadata is supplied, it will replace the
           metadata of the source key being copied.  If no metadata is
           supplied, the source key's metadata will be copied to the
           new key.

         * **reduced_redundancy** (*bool*) -- If True, this will
           force the storage class of the new Key to be
           REDUCED_REDUNDANCY regardless of the storage class of the
           key being copied. The Reduced Redundancy Storage (RRS)
           feature of S3, provides lower redundancy at lower storage
           cost.

         * **preserve_acl** (*bool*) -- If True, the ACL from the
           source key will be copied to the destination key.  If
           False, the destination key will have the default ACL.  Note
           that preserving the ACL in the new key object will require
           two additional API calls to S3, one to retrieve the current
           ACL and one to set that ACL on the new object.  If you
           don't care about the ACL, a value of False will be
           significantly more efficient.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

         * **validate_dst_bucket** (*bool*) -- If True, will
           validate the dst_bucket by using an extra list request.

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         An instance of the newly created key object

   delete(headers=None)

   endElement(name, value, connection)

   exists(headers=None)

      Returns True if the key exists

      Return type:
         bool

      Returns:
         Whether the key exists on S3

   f = 'content-type'

   generate_url(expires_in, method='GET', headers=None, query_auth=True, force_http=False, response_headers=None, expires_in_absolute=False, version_id=None, policy=None, reduced_redundancy=False, encrypt_key=False)

      Generate a URL to access this key.

      Parameters:
         * **expires_in** (*int*) -- How long the url is valid for,
           in seconds

         * **method** (*string*) -- The method to use for retrieving
           the file (default is GET)

         * **headers** (*dict*) -- Any headers to pass along in the
           request

         * **query_auth** (*bool*) --

         * **force_http** (*bool*) -- If True, http will be used
           instead of https.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **expires_in_absolute** (*bool*) --

         * **version_id** (*string*) -- The version_id of the object
           to GET. If specified this overrides any value in the key.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

         * **reduced_redundancy** (*bool*) -- If True, this will set
           the storage class of the new Key to be REDUCED_REDUNDANCY.
           The Reduced Redundancy Storage (RRS) feature of S3,
           provides lower redundancy at lower storage cost.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

      Return type:
         string

      Returns:
         The URL to access the key
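
      generate_url computes the signature locally, without a network
      round trip. The following is a rough, self-contained sketch of
      the legacy HMAC-SHA1 query-string scheme it is based on; the
      credential values and bucket/key names are hypothetical, and the
      parameter names (GoogleAccessId, Expires, Signature) follow the
      legacy GS signed-URL format rather than boto's internals:

```python
import base64
import hmac
import time
from hashlib import sha1
from urllib.parse import quote

def sketch_signed_url(access_key, secret_key, bucket, key, expires_in):
    # Expiration is expressed as an absolute epoch timestamp.
    expires = int(time.time()) + expires_in
    # Legacy string-to-sign: method, content-MD5, content-type,
    # expiry, canonicalized resource (MD5 and type left blank here).
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 sha1).digest()).decode()
    return ("https://%s.storage.googleapis.com/%s"
            "?GoogleAccessId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key, expires, quote(sig, safe="")))

url = sketch_signed_url("GOOGABC123", "topsecret",
                        "my-bucket", "photo.jpg", 3600)
```

      The real method additionally handles headers, version IDs, and
      non-GET verbs; this only illustrates the signing shape.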

   get_acl(headers=None, generation=None)

      Returns the ACL of this object.

      Parameters:
         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **generation** (*int*) -- If specified, gets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is returned.

      Return type:
         "gs.acl.ACL"

   get_contents_as_string(headers=None, cb=None, num_cb=10, torrent=False, version_id=None, response_headers=None)

      Retrieve an object from S3 using the name of the Key object as
      the key in S3.  Return the contents of the object as a string.
      See get_contents_to_file method for details about the
      parameters.

      Parameters:
         * **headers** (*dict*) -- Any additional headers to send in
           the request

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transferred and the second representing the
           total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- If True, returns the contents of
           a torrent file as a string.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **version_id** (*str*) -- The ID of a particular version
           of the object. If this parameter is not supplied but the
           Key object has a "version_id" attribute, that value will be
           used when retrieving the object.  You can set the Key
           object's "version_id" attribute to None to always grab the
           latest version from a version-enabled bucket.

      Return type:
         string

      Returns:
         The contents of the file as a string
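
      The cb/num_cb pair is shared by all of the transfer methods. A
      minimal progress callback might look like the following; the
      print format and the return value are just illustrative, boto
      only requires that the callable accept the two integers:

```python
def progress(transmitted, total):
    # Called up to num_cb times during a transfer. `total` is the
    # object size; `transmitted` is the bytes moved so far.
    pct = 100.0 * transmitted / total if total else 0.0
    print("%d/%d bytes (%.1f%%)" % (transmitted, total, pct))
    return pct

# Usage sketch (the key object is assumed to exist):
#   key.get_contents_as_string(cb=progress, num_cb=20)
progress(512, 2048)   # prints "512/2048 bytes (25.0%)"
```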

   get_contents_to_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None, hash_algs=None)

      Retrieve an object from GCS using the name of the Key object as
      the key in GCS. Write the contents of the object to the file
      pointed to by 'fp'.

      Parameters:
         * **fp** (*file-like object*) -- The file object to which
           the object's contents will be written.

         * **headers** (*dict*) -- additional HTTP headers that will
           be sent with the GET request.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transferred and the second representing the
           total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- If True, returns the contents of
           a torrent file as a string.

         * **res_download_handler** -- If provided, this handler
           will perform the download.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response. See
           http://goo.gl/sMkcC for details.

   get_contents_to_filename(filename, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

      Retrieve an object from S3 using the name of the Key object as
      the key in S3.  Store contents of the object to a file named by
      'filename'. See get_contents_to_file method for details about
      the parameters.

      Parameters:
         * **filename** (*string*) -- The filename of where to put
           the file contents

         * **headers** (*dict*) -- Any additional headers to send in
           the request

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transferred and the second representing the
           total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- If True, returns the contents of
           a torrent file as a string.

         * **res_download_handler** -- If provided, this handler
           will perform the download.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **version_id** (*str*) -- The ID of a particular version
           of the object. If this parameter is not supplied but the
           Key object has a "version_id" attribute, that value will be
           used when retrieving the object.  You can set the Key
           object's "version_id" attribute to None to always grab the
           latest version from a version-enabled bucket.

   get_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None, hash_algs=None)

   get_md5_from_hexdigest(md5_hexdigest)

      A utility function to create the 2-tuple (md5hexdigest,
      base64md5) from just having a precalculated md5_hexdigest.
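
      The (hexdigest, base64md5) 2-tuple has the same shape as the
      value returned by compute_md5. A self-contained sketch of
      building it from data already in memory:

```python
import base64
from hashlib import md5

def md5_tuple(data):
    # Returns (hex digest, Base64-encoded raw digest), the 2-tuple
    # format that boto's md5 parameters expect.
    h = md5(data)
    return h.hexdigest(), base64.b64encode(h.digest()).decode()

hex_md5, b64_md5 = md5_tuple(b"hello")
# hex_md5 == '5d41402abc4b2a76b9719d911017c592'
```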

   get_metadata(name)

   get_redirect()

      Return the redirect location configured for this key.

      If no redirect is configured (via set_redirect), then None will
      be returned.

   get_torrent_file(fp, headers=None, cb=None, num_cb=10)

      Get a torrent file (similar to get_file).

      Parameters:
         * **fp** (*file*) -- The file pointer of where to put the
           torrent

         * **headers** (*dict*) -- Headers to be passed

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transferred and the second representing the
           total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

   get_xml_acl(headers=None, generation=None)

      Returns the ACL string of this object.

      Parameters:
         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **generation** (*int*) -- If specified, gets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is returned.

      Return type:
         str

   handle_addl_headers(headers)

   handle_encryption_headers(resp)

   handle_restore_headers(response)

   handle_version_headers(resp, force=False)

   key

   make_public(headers=None)

   md5

   next()

      By providing a next method, the key object supports use as an
      iterator. For example, you can now say:

      for bytes in key:
         write bytes to a file or whatever

      All of the HTTP connection stuff is handled for you.
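
      Because the key is iterable, a download loop needs no explicit
      read calls. A stand-in using an in-memory buffer shows the shape
      of the loop; a real Key yields successive network buffers the
      same way:

```python
import io

def iter_chunks(fp, chunk_size=8192):
    # Mimics the key's iterator: yield successive buffers until EOF.
    while True:
        chunk = fp.read(chunk_size)
        if not chunk:
            break
        yield chunk

source = io.BytesIO(b"x" * 20000)
out = io.BytesIO()
for chunk in iter_chunks(source):
    out.write(chunk)        # "write bytes to a file or whatever"
assert out.getvalue() == b"x" * 20000
```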

   open(mode='r', headers=None, query_args=None, override_num_retries=None)

   open_read(headers=None, query_args='', override_num_retries=None, response_headers=None)

      Open this key for reading

      Parameters:
         * **headers** (*dict*) -- Headers to pass in the web
           request

         * **query_args** (*string*) -- Arguments to pass in the
           query string (ie, 'torrent')

         * **override_num_retries** (*int*) -- If not None will
           override configured num_retries parameter for underlying
           GET.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

   open_write(headers=None, override_num_retries=None)

      Open this key for writing. Not yet implemented.

      Parameters:
         * **headers** (*dict*) -- Headers to pass in the write
           request

         * **override_num_retries** (*int*) -- If not None will
           override configured num_retries parameter for underlying
           PUT.

   provider

   read(size=0)

   restore(days, headers=None)

      Restore an object from an archive.

      Parameters:
         **days** (*int*) -- The lifetime of the restored object (must
         be at least 1 day).  If the object is already restored then
         this parameter can be used to readjust the lifetime of the
         restored object.  In this case, the days param is with
         respect to the initial time of the request. If the object has
         not been restored, this param is with respect to the
         completion time of the request.

   send_file(fp, headers=None, cb=None, num_cb=10, query_args=None, chunked_transfer=False, size=None, hash_algs=None)

      Upload a file to GCS.

      Parameters:
         * **fp** (*file*) -- The file pointer to upload. The file
           pointer must point at the offset from which you wish to
           upload, i.e. if uploading the full file, it should point
           at the start of the file. Normally when a file is opened
           for reading, the fp will point at the first byte. See the
           size parameter below for more info.

         * **headers** (*dict*) -- The headers to pass along with
           the PUT request

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer. Providing a negative integer will cause your
           callback to be called with each buffer read.

         * **query_args** (*string*) -- Arguments to pass in the
           query string.

         * **chunked_transfer** (*boolean*) -- (optional) If true,
           we use chunked Transfer-Encoding.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where you are
           splitting the file up into different ranges to be
           uploaded. If not specified, the default behaviour is to
           read all bytes from the file pointer. Fewer bytes may be
           available.

         * **hash_algs** (*dictionary*) -- (optional) Dictionary of
           hash algorithms and corresponding hashing class that
           implements update() and digest(). Defaults to {'md5':
           hashlib.md5}.

   set_acl(acl_or_str, headers=None, generation=None, if_generation=None, if_metageneration=None)

      Sets the ACL for this object.

      Parameters:
         * **acl_or_str** (string or "boto.gs.acl.ACL") -- A canned
           ACL string (see "CannedACLStrings") or an ACL object.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **generation** (*int*) -- If specified, sets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is modified.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the acl will only be updated if its
           current generation number is this value.

         * **if_metageneration** (*int*) -- (optional) If set to a
           metageneration number, the acl will only be updated if its
           current metageneration number is this value.

   set_canned_acl(acl_str, headers=None, generation=None, if_generation=None, if_metageneration=None)

      Sets this object's ACL using a predefined (canned) value.

      Parameters:
         * **acl_str** (*string*) -- A canned ACL string. See
           "CannedACLStrings".

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **generation** (*int*) -- If specified, sets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is modified.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the acl will only be updated if its
           current generation number is this value.

         * **if_metageneration** (*int*) -- (optional) If set to a
           metageneration number, the acl will only be updated if its
           current metageneration number is this value.
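
      set_canned_acl rejects strings outside CannedACLStrings. A small
      guard mirroring the list documented for boto.gs.acl above makes
      the failure mode explicit (the list is copied here only for
      illustration; in real code use boto.gs.acl.CannedACLStrings):

```python
# The predefined ACL names documented for boto.gs.acl.CannedACLStrings.
CANNED_ACLS = ['private', 'public-read', 'project-private',
               'public-read-write', 'authenticated-read',
               'bucket-owner-read', 'bucket-owner-full-control']

def check_canned_acl(acl_str):
    # Raise early, before any request is made, if the string is not
    # one of the predefined ACL names.
    if acl_str not in CANNED_ACLS:
        raise ValueError('Invalid canned ACL: %r' % acl_str)
    return acl_str

# Usage sketch: key.set_canned_acl(check_canned_acl('public-read'))
```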

   set_contents_from_file(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, res_upload_handler=None, size=None, rewind=False, if_generation=None)

      Store an object in GS using the name of the Key object as the
      key in GS and the contents of the file pointed to by 'fp' as the
      contents.

      Parameters:
         * **fp** (*file*) -- The file whose contents are to be
           uploaded.

         * **headers** (*dict*) -- (optional) Additional HTTP
           headers to be sent with the PUT request.

         * **replace** (*bool*) -- (optional) If this parameter is
           False, the method will first check to see if an object
           exists in the bucket with the same key. If it does, it
           won't overwrite it. The default value is True which will
           overwrite the object.

         * **cb** (*function*) -- (optional) Callback function that
           will be called to report progress on the upload. The
           callback should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to GS and the second representing
           the total number of bytes that need to be transmitted.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter, this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer.

         * **policy** ("boto.gs.acl.CannedACLStrings") -- (optional)
           A canned ACL policy that will be applied to the new key in
           GS.

         * **md5** (*tuple*) --

           (optional) A tuple containing the hexdigest version of the
           MD5 checksum of the file as the first element and the
           Base64-encoded version of the plain checksum as the second
           element. This is the same format returned by the
           compute_md5 method.

           If you need to compute the MD5 for any reason prior to
           upload, it's silly to have to do it twice so this param, if
           present, will be used as the MD5 values of the file.
           Otherwise, the checksum will be computed.

         * **res_upload_handler**
           ("boto.gs.resumable_upload_handler.ResumableUploadHandler")
           -- (optional) If provided, this handler will perform the
           upload.

         * **size** (*int*) --

           (optional) The maximum number of bytes to read from the
           file pointer (fp). This is useful when uploading a file in
           multiple parts where you are splitting the file up into
           different ranges to be uploaded. If not specified, the
           default behaviour is to read all bytes from the file
           pointer. Fewer bytes may be available.

           Notes:

              1. The "size" parameter currently cannot be used when
                 a resumable upload handler is given but is still
                 useful for uploading part of a file as implemented by
                 the parent class.

              2. At present Google Cloud Storage does not support
                 multipart uploads.

         * **rewind** (*bool*) -- (optional) If True, the file
           pointer (fp) will be rewound to the start before any bytes
           are read from it. The default behaviour is False which
           reads from the current position of the file pointer (fp).

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the object will only be written to if
           its current generation number is this value. If set to the
           value 0, the object will only be written if it doesn't
           already exist.

      Return type:
         int

      Returns:
         The number of bytes written to the key.

      TODO: At some point we should refactor the Bucket and Key
      classes, to move functionality common to all providers into a
      parent class, and provider-specific functionality into
      subclasses (rather than just overriding/sharing code the way it
      currently works).
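
      The if_generation precondition behaves like a compare-and-swap
      on the object's generation number. The toy in-memory model below
      is not boto code; it only restates the two documented cases
      (match-current-generation, and 0 meaning create-only):

```python
class FakeBucket:
    """Toy model of the if_generation write precondition."""

    def __init__(self):
        self.objects = {}   # name -> (generation, data)

    def write(self, name, data, if_generation=None):
        current = self.objects.get(name)
        if if_generation == 0 and current is not None:
            # if_generation=0: write only if it doesn't already exist.
            raise ValueError('object already exists')
        if if_generation not in (None, 0):
            # Otherwise: write only if the generation matches.
            if current is None or current[0] != if_generation:
                raise ValueError('generation mismatch')
        new_gen = (current[0] + 1) if current else 1
        self.objects[name] = (new_gen, data)
        return new_gen

b = FakeBucket()
b.write('k', b'v1', if_generation=0)   # create-only: succeeds, gen 1
b.write('k', b'v2', if_generation=1)   # matches current gen: succeeds
```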

   set_contents_from_filename(filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=None, res_upload_handler=None, if_generation=None)

      Store an object in GS using the name of the Key object as the
      key in GS and the contents of the file named by 'filename'. See
      set_contents_from_file method for details about the parameters.

      Parameters:
         * **filename** (*string*) -- The name of the file that you
           want to put onto GS.

         * **headers** (*dict*) -- (optional) Additional headers to
           pass along with the request to GS.

         * **replace** (*bool*) -- (optional) If True, replaces the
           contents of the file if it already exists.

         * **cb** (*function*) -- (optional) Callback function that
           will be called to report progress on the upload. The
           callback should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to GS and the second representing
           the total number of bytes that need to be transmitted.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer.

         * **policy** ("boto.gs.acl.CannedACLStrings") --
           (optional) A canned ACL policy that will be applied to the
           new key in GS.

         * **md5** (*tuple*) --

           (optional) A tuple containing the hexdigest version of the
           MD5 checksum of the file as the first element and the
           Base64-encoded version of the plain checksum as the second
           element. This is the same format returned by the
           compute_md5 method.

           If you need to compute the MD5 for any reason prior to
           upload, it's silly to have to do it twice so this param, if
           present, will be used as the MD5 values of the file.
           Otherwise, the checksum will be computed.

         * **res_upload_handler**
           ("boto.gs.resumable_upload_handler.ResumableUploadHandler")
           -- (optional) If provided, this handler will perform the
           upload.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the object will only be written to if
           its current generation number is this value. If set to the
           value 0, the object will only be written if it doesn't
           already exist.

   set_contents_from_stream(*args, **kwargs)

      Store an object using the name of the Key object as the key in
      cloud and the contents of the data stream pointed to by 'fp' as
      the contents.

      The stream object is not seekable and the total size is not
      known in advance. This means we can't specify the
      Content-Length and Content-MD5 headers up front. So for huge
      uploads the delay of calculating MD5 is avoided, but with the
      penalty of being unable to verify the integrity of the uploaded
      data.

      Parameters:
         * **fp** (*file*) -- the file whose contents are to be
           uploaded

         * **headers** (*dict*) -- additional HTTP headers to be
           sent with the PUT request.

         * **replace** (*bool*) -- If this parameter is False, the
           method will first check to see if an object exists in the
           bucket with the same key. If it does, it won't overwrite
           it. The default value is True which will overwrite the
           object.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to GS and the second representing
           the total number of bytes that need to be transmitted.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter, this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer.

         * **policy** ("boto.gs.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in GS.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where you are
           splitting the file up into different ranges to be
           uploaded. If not specified, the default behaviour is to
           read all bytes from the file pointer. Fewer bytes may be
           available.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the object will only be written to if
           its current generation number is this value. If set to the
           value 0, the object will only be written if it doesn't
           already exist.

   set_contents_from_string(s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, if_generation=None)

      Store an object in GCS using the name of the Key object as the
      key in GCS and the string 's' as the contents. See
      set_contents_from_file method for details about the parameters.

      Parameters:
         * **headers** (*dict*) -- Additional headers to pass along
           with the request to GCS.

         * **replace** (*bool*) -- If True, replaces the contents of
           the file if it already exists.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to GCS and the second
           representing the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **policy** ("boto.gs.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in GCS.

         * **md5** (*tuple*) --

           (optional) A tuple containing the hexdigest version of the
           MD5 checksum of the file as the first element and the
           Base64-encoded version of the plain checksum as the second
           element. This is the same format returned by the
           compute_md5 method.

           If you need to compute the MD5 for any reason prior to
           upload, it's silly to have to do it twice so this param, if
           present, will be used as the MD5 values of the file.
           Otherwise, the checksum will be computed.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the object will only be written to if
           its current generation number is this value. If set to the
           value 0, the object will only be written if it doesn't
           already exist.

   set_metadata(name, value)

   set_redirect(redirect_location, headers=None)

      Configure this key to redirect to another location.

      When the bucket associated with this key is accessed from the
      website endpoint, a 301 redirect will be issued to the specified
      *redirect_location*.

      Parameters:
         **redirect_location** (*string*) -- The location to
         redirect to.

   set_remote_metadata(metadata_plus, metadata_minus, preserve_acl, headers=None)

   set_xml_acl(acl_str, headers=None, generation=None, if_generation=None, if_metageneration=None)

      Sets this object's ACL to an XML string.

      Parameters:
         * **acl_str** (*string*) -- A string containing the ACL
           XML.

         * **headers** (*dict*) -- Additional headers to set during
           the request.

         * **generation** (*int*) -- If specified, sets the ACL for
           a specific generation of a versioned object. If not
           specified, the current version is modified.

         * **if_generation** (*int*) -- (optional) If set to a
           generation number, the acl will only be updated if its
           current generation number is this value.

         * **if_metageneration** (*int*) -- (optional) If set to a
           metageneration number, the acl will only be updated if its
           current metageneration number is this value.
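
      The XML string passed to set_xml_acl follows the
      Entries/Entry/Scope structure described for boto.gs.acl above. A
      sketch that builds a one-entry document; the email address is
      hypothetical, and the element names are assumed from the legacy
      GS ACL XML format:

```python
import xml.etree.ElementTree as ET

def one_entry_acl_xml(email, permission):
    # Builds:
    # <AccessControlList><Entries><Entry>
    #   <Scope type="UserByEmail"><EmailAddress>...</EmailAddress></Scope>
    #   <Permission>READ</Permission>
    # </Entry></Entries></AccessControlList>
    acl = ET.Element('AccessControlList')
    entries = ET.SubElement(acl, 'Entries')
    entry = ET.SubElement(entries, 'Entry')
    scope = ET.SubElement(entry, 'Scope', type='UserByEmail')
    ET.SubElement(scope, 'EmailAddress').text = email
    ET.SubElement(entry, 'Permission').text = permission
    return ET.tostring(acl, encoding='unicode')

xml_str = one_entry_acl_xml('someone@example.com', 'READ')
# Usage sketch: key.set_xml_acl(xml_str)
```

      In practice boto.gs.acl.ACL.to_xml() produces this document for
      you; building it by hand is rarely needed.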

   should_retry(response, chunked_transfer=False)

   startElement(name, attrs, connection)

   update_metadata(d)


boto.gs.user
============

class class boto.gs.user.User(parent=None, id='', name='')

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml(element_name='Owner')


boto.gs.resumable_upload_handler
================================

class class boto.gs.resumable_upload_handler.ResumableUploadHandler(tracker_file_name=None, num_retries=None)

   Constructor. Instantiate once for each uploaded file.

   Parameters:
      * **tracker_file_name** (*string*) -- optional file name to
        save tracker URI. If supplied and the current process fails
        the upload, it can be retried in a new process. If called with
        an existing file containing a valid tracker URI, we'll resume
        the upload from this URI; else we'll start a new resumable
        upload (and write the URI to this tracker file).

      * **num_retries** (*int*) -- the number of times we'll re-try
        a resumable upload making no progress. (Count resets every
        time we get progress, so upload can span many more than this
        number of retries.)

   BUFFER_SIZE = 8192

   RETRYABLE_EXCEPTIONS = (<class 'httplib.HTTPException'>, <type 'exceptions.IOError'>, <class 'socket.error'>, <class 'socket.gaierror'>)

   SERVER_HAS_NOTHING = (0, -1)

   get_tracker_uri()

      Returns upload tracker URI, or None if the upload has not yet
      started.

   get_upload_id()

      Returns the upload ID for the resumable upload, or None if the
      upload has not yet started.

   handle_resumable_upload_exception(e, debug)

   send_file(key, fp, headers, cb=None, num_cb=10, hash_algs=None)

      Upload a file to a key in a bucket on GS, using the GS
      resumable upload protocol.

      Parameters:
         * **key** ("boto.s3.key.Key" or subclass) -- The Key object
           to which data is to be uploaded

         * **fp** (*file-like object*) -- The file pointer to upload

         * **headers** (*dict*) -- The headers to pass along with
           the PUT request

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to GS, and the second representing
           the total number of bytes that need to be transmitted.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter, this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer. Providing a negative integer will cause your
           callback to be called with each buffer read.

         * **hash_algs** (*dictionary*) -- (optional) Dictionary
           mapping hash algorithm descriptions to corresponding
           stateful hashing objects that implement update(),
           digest(), and copy() (e.g. hashlib.md5()). Defaults to
           {'md5': md5()}.

      Raises ResumableUploadException if a problem occurs during the
      transfer.
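
      The num_retries accounting described in the constructor ("count
      resets every time we get progress") can be sketched as a loop
      that only charges a retry when no bytes advanced since the
      previous attempt. upload_attempt below is a hypothetical
      stand-in for one resumable PUT, not a boto API:

```python
def resumable_loop(upload_attempt, total_bytes, num_retries=6):
    # upload_attempt(start) returns the new high-water mark of bytes
    # the server has, or raises IOError on a transient failure.
    sent, failures = 0, 0
    while sent < total_bytes:
        try:
            new_sent = upload_attempt(sent)
        except IOError:
            new_sent = sent
        if new_sent > sent:
            failures = 0          # progress: reset the retry budget
            sent = new_sent
        else:
            failures += 1
            if failures > num_retries:
                raise RuntimeError('too many retries without progress')
    return sent

# A flaky stand-in that fails every other call but always advances
# 100 bytes when it works -- so the upload still completes:
calls = {'n': 0}
def flaky(start):
    calls['n'] += 1
    if calls['n'] % 2:
        raise IOError('transient')
    return start + 100

assert resumable_loop(flaky, 300) == 300
```

      Because each failure is followed by progress, the failure count
      never accumulates, which is why a long upload can survive far
      more than num_retries transient errors in total.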

   track_progress_less_iterations(server_had_bytes_before_attempt, roll_back_md5=True, debug=0)
