
S3
**


boto.s3.acl
===========

class boto.s3.acl.ACL(policy=None)

   add_email_grant(permission, email_address)

   add_grant(grant)

   add_user_grant(permission, user_id, display_name=None)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.s3.acl.Grant(permission=None, type=None, id=None, display_name=None, uri=None, email_address=None)

   NameSpace = 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.s3.acl.Policy(parent=None)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()


boto.s3.bucket
==============

class boto.s3.bucket.Bucket(connection=None, name=None, key_class=<class 'boto.s3.key.Key'>)

   BucketPaymentBody = '<?xml version="1.0" encoding="UTF-8"?>\n       <RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n         <Payer>%s</Payer>\n       </RequestPaymentConfiguration>'

   LoggingGroup = 'http://acs.amazonaws.com/groups/s3/LogDelivery'

   MFADeleteRE = '<MfaDelete>([A-Za-z]+)</MfaDelete>'

   VersionRE = '<Status>([A-Za-z]+)</Status>'

   VersioningBody = '<?xml version="1.0" encoding="UTF-8"?>\n       <VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n         <Status>%s</Status>\n         <MfaDelete>%s</MfaDelete>\n       </VersioningConfiguration>'

   add_email_grant(permission, email_address, recursive=False, headers=None)

      Convenience method that provides a quick way to add an email
      grant to a bucket. This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUT's the new ACL back to S3.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: (READ, WRITE, READ_ACP,
           WRITE_ACP, FULL_CONTROL).

         * **email_address** (*string*) -- The email address
           associated with the AWS account you are granting the
           permission to.

         * **recursive** (*boolean*) -- A boolean value that controls
           whether the command will apply the grant to all keys within
           the bucket or not.  The default value is False.  By passing
           a True value, the call will iterate through all keys in the
           bucket and apply the same grant to each key.  CAUTION: If
           you have a lot of keys, this could take a long time!
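      A minimal usage sketch (the bucket name and email address are
      placeholders; assumes AWS credentials are configured for boto):

```python
import boto

# Grant READ on the bucket to the AWS account tied to an email
# address.  Placeholder names; adjust for your environment.
conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')
bucket.add_email_grant('READ', 'user@example.com')

# recursive=True applies the same grant to every key in the bucket,
# which can be slow on large buckets.
# bucket.add_email_grant('READ', 'user@example.com', recursive=True)
```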

   add_user_grant(permission, user_id, recursive=False, headers=None, display_name=None)

      Convenience method that provides a quick way to add a canonical
      user grant to a bucket.  This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUT's the new ACL back to S3.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: (READ, WRITE, READ_ACP,
           WRITE_ACP, FULL_CONTROL).

         * **user_id** (*string*) -- The canonical user id
           associated with the AWS account you are granting the
           permission to.

         * **recursive** (*boolean*) -- A boolean value that controls
           whether the command will apply the grant to all keys within
           the bucket or not.  The default value is False.  By passing
           a True value, the call will iterate through all keys in the
           bucket and apply the same grant to each key.  CAUTION: If
           you have a lot of keys, this could take a long time!

         * **display_name** (*string*) -- An optional string
           containing the user's Display Name.  Only required on
           Walrus.

   cancel_multipart_upload(key_name, upload_id, headers=None)

      Cancel a multipart upload operation.

      To verify that all parts have been removed, so you don't get
      charged for the part storage, you should call the List Parts
      operation and ensure the parts list is empty.

   complete_multipart_upload(key_name, upload_id, xml_body, headers=None)

      Complete a multipart upload operation.

   configure_lifecycle(lifecycle_config, headers=None)

      Configure lifecycle for this bucket.

      Parameters:
         **lifecycle_config** ("boto.s3.lifecycle.Lifecycle") -- The
         lifecycle configuration you want to configure for this
         bucket.

   configure_versioning(versioning, mfa_delete=False, mfa_token=None, headers=None)

      Configure versioning for this bucket.

      Note: This feature is currently in beta.

      Parameters:
         * **versioning** (*bool*) -- A boolean indicating whether
           versioning is enabled (True) or disabled (False).

         * **mfa_delete** (*bool*) -- A boolean indicating whether
           the Multi-Factor Authentication Delete feature is enabled
           (True) or disabled (False).  If mfa_delete is enabled then
           all Delete operations will require the token from your MFA
           device to be passed in the request.

         * **mfa_token** (*tuple or list of strings*) -- A tuple or
           list consisting of the serial number from the MFA device
           and the current value of the six-digit token associated
           with the device.  This value is required when you are
           changing the status of the MfaDelete property of the
           bucket.
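      As a sketch, enabling versioning and checking the result
      (placeholder bucket name; requires configured credentials):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder name

bucket.configure_versioning(True)
print(bucket.get_versioning_status())  # e.g. {'Versioning': 'Enabled'}

# Changing MfaDelete additionally requires the device serial number
# and a current six-digit token:
# bucket.configure_versioning(True, mfa_delete=True,
#                             mfa_token=(serial_number, token))
```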

   configure_website(suffix=None, error_key=None, redirect_all_requests_to=None, routing_rules=None, headers=None)

      Configure this bucket to act as a website

      Parameters:
         * **suffix** (*str*) -- Suffix that is appended to a
           request that is for a "directory" on the website endpoint
           (e.g. if the suffix is index.html and you make a request to
           samplebucket/images/ the data that is returned will be for
           the object with the key name images/index.html).  The
           suffix must not be empty and must not include a slash
           character.

         * **error_key** (*str*) -- The object key name to use when
           a 4XX class error occurs.  This is optional.

         * **redirect_all_requests_to**
           ("boto.s3.website.RedirectLocation") -- Describes the
           redirect behavior for every request to this bucket's
           website endpoint. If this value is not None, no other
           values are considered when configuring the website
           configuration for the bucket. This is an instance of
           "RedirectLocation".

         * **routing_rules** ("boto.s3.website.RoutingRules") --
           Object which specifies conditions and redirects that apply
           when the conditions are met.
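      For example, a bucket can be turned into a simple website like
      so (placeholder names; assumes the objects are publicly
      readable):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder name

# Serve key 'index.html' for directory-style requests and
# 'error.html' for 4XX errors.
bucket.configure_website(suffix='index.html', error_key='error.html')
print(bucket.get_website_endpoint())
```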

   copy_key(new_key_name, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False, encrypt_key=False, headers=None, query_args=None)

      Create a new key in the bucket by copying another existing key.

      Parameters:
         * **new_key_name** (*string*) -- The name of the new key

         * **src_bucket_name** (*string*) -- The name of the source
           bucket

         * **src_key_name** (*string*) -- The name of the source key

         * **src_version_id** (*string*) -- The version id for the
           key.  This param is optional.  If not specified, the newest
           version of the key will be copied.

         * **metadata** (*dict*) -- Metadata to be associated with
           new key.  If metadata is supplied, it will replace the
           metadata of the source key being copied.  If no metadata is
           supplied, the source key's metadata will be copied to the
           new key.

         * **storage_class** (*string*) -- The storage class of the
           new key.  By default, the new key will use the standard
           storage class. Possible values are: STANDARD |
           REDUCED_REDUNDANCY

         * **preserve_acl** (*bool*) -- If True, the ACL from the
           source key will be copied to the destination key.  If
           False, the destination key will have the default ACL.  Note
           that preserving the ACL in the new key object will require
           two additional API calls to S3, one to retrieve the current
           ACL and one to set that ACL on the new object.  If you
           don't care about the ACL, a value of False will be
           significantly more efficient.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

         * **headers** (*dict*) -- A dictionary of header name/value
           pairs.

         * **query_args** (*string*) -- A string of additional
           querystring arguments to append to the request

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         An instance of the newly created key object
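      A usage sketch (bucket and key names are placeholders; requires
      configured credentials):

```python
import boto

conn = boto.connect_s3()
dst = conn.get_bucket('destination-bucket')  # placeholder names

# Copy a key from another bucket, keeping its ACL (which costs two
# extra API calls) and moving it to reduced redundancy storage.
key = dst.copy_key('backups/report.csv', 'source-bucket', 'report.csv',
                   preserve_acl=True,
                   storage_class='REDUCED_REDUNDANCY')
```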

   delete(headers=None)

   delete_cors(headers=None)

      Removes all CORS configuration from the bucket.

   delete_key(key_name, headers=None, version_id=None, mfa_token=None)

      Deletes a key from the bucket.  If a version_id is provided,
      only that version of the key will be deleted.

      Parameters:
         * **key_name** (*string*) -- The key name to delete

         * **version_id** (*string*) -- The version ID (optional)

         * **mfa_token** (*tuple or list of strings*) -- A tuple or
           list consisting of the serial number from the MFA device
           and the current value of the six-digit token associated
           with the device.  This value is required anytime you are
           deleting versioned objects from a bucket that has the
           MFADelete option on the bucket.

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         A key object holding information on what was deleted.  The
         Caller can see if a delete_marker was created or removed and
         what version_id the delete created or removed.

   delete_keys(keys, quiet=False, mfa_token=None, headers=None)

      Deletes a set of keys using S3's Multi-object delete API. If a
      VersionID is specified for that key then that version is
      removed. Returns a MultiDeleteResult Object, which contains
      Deleted and Error elements for each key you ask to delete.

      Parameters:
         * **keys** (*list*) -- A list of either key_names or
           (key_name, versionid) pairs or a list of Key instances.

         * **quiet** (*boolean*) -- In quiet mode the response
           includes only keys where the delete operation encountered
           an error. For a successful deletion, the operation does not
           return any information about the delete in the response
           body.

         * **mfa_token** (*tuple or list of strings*) -- A tuple or
           list consisting of the serial number from the MFA device
           and the current value of the six-digit token associated
           with the device.  This value is required anytime you are
           deleting versioned objects from a bucket that has the
           MFADelete option on the bucket.

      Returns:
         An instance of MultiDeleteResult
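      A sketch of mixing plain key names and versioned deletes, then
      inspecting the result (placeholder names; requires configured
      credentials):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder name

# Key names and (key_name, version_id) pairs can be mixed.
result = bucket.delete_keys(['logs/a.log', ('logs/b.log', 'version-id')])
for deleted in result.deleted:
    print('deleted:', deleted.key)
for error in result.errors:
    print('failed:', error.key, error.code)
```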

   delete_lifecycle_configuration(headers=None)

      Removes all lifecycle configuration from the bucket.

   delete_policy(headers=None)

   delete_tags(headers=None)

   delete_website_configuration(headers=None)

      Removes all website configuration from the bucket.

   disable_logging(headers=None)

      Disable logging on a bucket.

      Return type:
         bool

      Returns:
         True if ok or raises an exception.

   enable_logging(target_bucket, target_prefix='', grants=None, headers=None)

      Enable logging on a bucket.

      Parameters:
         * **target_bucket** (*bucket or string*) -- The bucket to
           log to.

         * **target_prefix** (*string*) -- The prefix which should
           be prepended to the generated log files written to the
           target_bucket.

         * **grants** (*list of Grant objects*) -- A list of extra
           permissions which will be granted on the log files which
           are created.

      Return type:
         bool

      Returns:
         True if ok or raises an exception.

   endElement(name, value, connection)

   generate_url(expires_in, method='GET', headers=None, force_http=False, response_headers=None, expires_in_absolute=False)

   get_acl(key_name='', headers=None, version_id=None)

   get_all_keys(headers=None, **params)

      A lower-level method for listing contents of a bucket.  This
      closely models the actual S3 API and requires you to manually
      handle the paging of results.  For a higher-level method that
      handles the details of paging for you, you can use the list
      method.

      Parameters:
         * **max_keys** (*int*) -- The maximum number of keys to
           retrieve

         * **prefix** (*string*) -- The prefix of the keys you want
           to retrieve

         * **marker** (*string*) -- The "marker" of where you are in
           the result set

         * **delimiter** (*string*) -- If this optional, Unicode
           string parameter is included with your request, then keys
           that contain the same string between the prefix and the
           first occurrence of the delimiter will be rolled up into a
           single result element in the CommonPrefixes collection.
           These rolled-up keys are not returned elsewhere in the
           response.

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           the XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         ResultSet

      Returns:
         The result from S3 listing the keys requested
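      The manual paging this method requires looks roughly like the
      following (placeholder bucket name; the last key name of each
      page is used as the next marker):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder name

# Page through the bucket by hand; bucket.list() does this for you.
marker = ''
while True:
    rs = bucket.get_all_keys(max_keys=1000, marker=marker)
    for key in rs:
        print(key.name)
    if not rs.is_truncated:
        break
    marker = rs[-1].name
```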

   get_all_multipart_uploads(headers=None, **params)

      A lower-level, version-aware method for listing active MultiPart
      uploads for a bucket.  This closely models the actual S3 API and
      requires you to manually handle the paging of results.  For a
      higher-level method that handles the details of paging for you,
      you can use the list method.

      Parameters:
         * **max_uploads** (*int*) -- The maximum number of uploads
           to retrieve. Default value is 1000.

         * **key_marker** (*string*) --

           Together with upload_id_marker, this parameter specifies
           the multipart upload after which listing should begin.  If
           upload_id_marker is not specified, only the keys
           lexicographically greater than the specified key_marker
           will be included in the list.

           If upload_id_marker is specified, any multipart uploads for
           a key equal to the key_marker might also be included,
           provided those multipart uploads have upload IDs
           lexicographically greater than the specified
           upload_id_marker.

         * **upload_id_marker** (*string*) -- Together with
           key_marker, specifies the multipart upload after which
           listing should begin. If key_marker is not specified, the
           upload_id_marker parameter is ignored.  Otherwise, any
           multipart uploads for a key equal to the key_marker might
           be included in the list only if they have an upload ID
           lexicographically greater than the specified
           upload_id_marker.

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           the XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

         * **delimiter** (*string*) -- Character you use to group
           keys. All keys that contain the same string between the
           prefix, if specified, and the first occurrence of the
           delimiter after the prefix are grouped under a single
           result element, CommonPrefixes. If you don't specify the
           prefix parameter, then the substring starts at the
           beginning of the key. The keys that are grouped under
           CommonPrefixes result element are not returned elsewhere in
           the response.

         * **prefix** (*string*) -- Lists in-progress uploads only
           for those keys that begin with the specified prefix. You
           can use prefixes to separate a bucket into different
           grouping of keys. (You can think of using prefix to make
           groups in the same way you'd use a folder in a file
           system.)

      Return type:
         ResultSet

      Returns:
         The result from S3 listing the uploads requested

   get_all_versions(headers=None, **params)

      A lower-level, version-aware method for listing contents of a
      bucket.  This closely models the actual S3 API and requires you
      to manually handle the paging of results.  For a higher-level
      method that handles the details of paging for you, you can use
      the list method.

      Parameters:
         * **max_keys** (*int*) -- The maximum number of keys to
           retrieve

         * **prefix** (*string*) -- The prefix of the keys you want
           to retrieve

         * **key_marker** (*string*) -- The "marker" of where you
           are in the result set with respect to keys.

         * **version_id_marker** (*string*) -- The "marker" of where
           you are in the result set with respect to version-id's.

         * **delimiter** (*string*) -- If this optional, Unicode
           string parameter is included with your request, then keys
           that contain the same string between the prefix and the
           first occurrence of the delimiter will be rolled up into a
           single result element in the CommonPrefixes collection.
           These rolled-up keys are not returned elsewhere in the
           response.

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           the XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         ResultSet

      Returns:
         The result from S3 listing the keys requested

   get_cors(headers=None)

      Returns the current CORS configuration on the bucket.

      Return type:
         "boto.s3.cors.CORSConfiguration"

      Returns:
         A CORSConfiguration object that describes all current CORS
         rules in effect for the bucket.

   get_cors_xml(headers=None)

      Returns the current CORS configuration on the bucket as an XML
      document.

   get_key(key_name, headers=None, version_id=None, response_headers=None, validate=True)

      Check to see if a particular key exists within the bucket.  This
      method uses a HEAD request to check for the existence of the
      key. Returns: An instance of a Key object or None.

      Parameters:
         * **key_name** (*string*) -- The name of the key to
           retrieve

         * **headers** (*dict*) -- The headers to send when
           retrieving the key

         * **version_id** (*string*) --

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **validate** (*bool*) -- Verifies whether the key exists.
           If "False", this will not hit the service, constructing an
           in-memory object. Default is "True".

      Return type:
         "boto.s3.key.Key"

      Returns:
         A Key object from this bucket.
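      A usage sketch (placeholder names; requires configured
      credentials):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder name

# A HEAD request under the hood; None means the key doesn't exist.
key = bucket.get_key('photos/cat.jpg')
if key is not None:
    print(key.size, key.content_type)

# validate=False skips the round trip and just builds a Key object.
local_only = bucket.get_key('photos/cat.jpg', validate=False)
```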

   get_lifecycle_config(headers=None)

      Returns the current lifecycle configuration on the bucket.

      Return type:
         "boto.s3.lifecycle.Lifecycle"

      Returns:
         A LifecycleConfig object that describes all current lifecycle
         rules in effect for the bucket.

   get_location()

      Returns the LocationConstraint for the bucket.

      Return type:
         str

      Returns:
         The LocationConstraint for the bucket or the empty string if
         no constraint was specified when bucket was created.

   get_logging_status(headers=None)

      Get the logging status for this bucket.

      Return type:
         "boto.s3.bucketlogging.BucketLogging"

      Returns:
         A BucketLogging object for this bucket.

   get_policy(headers=None)

      Returns the JSON policy associated with the bucket.  The policy
      is returned as an uninterpreted JSON string.

   get_request_payment(headers=None)

   get_subresource(subresource, key_name='', headers=None, version_id=None)

      Get a subresource for a bucket or key.

      Parameters:
         * **subresource** (*string*) -- The subresource to get.

         * **key_name** (*string*) -- The key to operate on, or None
           to operate on the bucket.

         * **headers** (*dict*) -- Additional HTTP headers to
           include in the request.

         * **version_id** (*string*) -- Optional. The version id
           of the key to operate on. If not specified, operate on the
           newest version.

      Return type:
         string

      Returns:
         The value of the subresource.

   get_tags()

   get_versioning_status(headers=None)

      Returns the current status of versioning on the bucket.

      Return type:
         dict

      Returns:
         A dictionary containing a key named 'Versioning' that can
         have a value of either Enabled, Disabled, or Suspended. Also,
         if MFADelete has ever been enabled on the bucket, the
         dictionary will contain a key named 'MFADelete' which will
         have a value of either Enabled or Suspended.

   get_website_configuration(headers=None)

      Returns the current status of website configuration on the
      bucket.

      Return type:
         dict

      Returns:
         A dictionary containing a Python representation of the XML
         response from S3. The overall structure is:

      * WebsiteConfiguration

        * IndexDocument

          * Suffix : suffix that is appended to a request that is
            for a "directory" on the website endpoint

          * ErrorDocument

            * Key : name of object to serve when an error occurs

   get_website_configuration_obj(headers=None)

      Get the website configuration as a
      "boto.s3.website.WebsiteConfiguration" object.

   get_website_configuration_with_xml(headers=None)

      Returns the current status of website configuration on the
      bucket as unparsed XML.

      Return type:
         2-Tuple

      Returns:
         2-tuple containing:

         1. A dictionary containing a Python representation of the
            XML response. The overall structure is:

            * WebsiteConfiguration

              * IndexDocument

                * Suffix : suffix that is appended to a request
                  that is for a "directory" on the website endpoint

                * ErrorDocument

                  * Key : name of object to serve when an error
                    occurs

         2. unparsed XML describing the bucket's website
            configuration

   get_website_configuration_xml(headers=None)

      Get raw website configuration xml

   get_website_endpoint()

      Returns the fully qualified hostname to use if you want to
      access this bucket as a website.  This doesn't validate whether
      the bucket has been correctly configured as a website or not.

   get_xml_acl(key_name='', headers=None, version_id=None)

   get_xml_tags()

   initiate_multipart_upload(key_name, headers=None, reduced_redundancy=False, metadata=None, encrypt_key=False, policy=None)

      Start a multipart upload operation.

      Note: After you initiate a multipart upload and upload one or
        more parts, you must either complete or abort the multipart
        upload in order to stop getting charged for storage of the
        uploaded parts. Only after you either complete or abort the
        multipart upload does Amazon S3 free up the parts storage and
        stop charging you for it.

      Parameters:
         * **key_name** (*string*) -- The name of the key that will
           ultimately result from this multipart upload operation.
           This will be exactly as the key appears in the bucket after
           the upload process has been completed.

         * **headers** (*dict*) -- Additional HTTP headers to send
           and store with the resulting key in S3.

         * **reduced_redundancy** (*boolean*) -- In multipart
           uploads, the storage class is specified when initiating the
           upload, not when uploading individual parts.  So if you
           want the resulting key to use the reduced redundancy
           storage class set this flag when you initiate the upload.

         * **metadata** (*dict*) -- Any metadata that you would like
           to set on the key that results from the multipart upload.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key (once
           completed) in S3.
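      A sketch of the full multipart workflow (file and key names are
      placeholders; requires configured credentials):

```python
import math
import os

import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder names throughout

# Upload a large local file in fixed-size parts, then complete the
# upload -- or cancel it on failure so the parts stop costing money.
source, part_size = 'big.tar.gz', 50 * 1024 * 1024
mp = bucket.initiate_multipart_upload('backups/big.tar.gz')
try:
    parts = int(math.ceil(os.path.getsize(source) / float(part_size)))
    with open(source, 'rb') as fp:
        for i in range(parts):
            mp.upload_part_from_file(fp, part_num=i + 1, size=part_size)
    mp.complete_upload()
except Exception:
    mp.cancel_upload()
    raise
```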

   list(prefix='', delimiter='', marker='', headers=None, encoding_type=None)

      List key objects within a bucket.  This returns an instance of
      an BucketListResultSet that automatically handles all of the
      result paging, etc. from S3.  You just need to keep iterating
      until there are no more results.

      Called with no arguments, this will return an iterator object
      across all keys within the bucket.

      The Key objects returned by the iterator are obtained by parsing
      the results of a GET on the bucket, also known as the List
      Objects request.  The XML returned by this request contains only
      a subset of the information about each key.  Certain metadata
      fields such as Content-Type and user metadata are not available
      in the XML. Therefore, if you want these additional metadata
      fields you will have to do a HEAD request on the Key in the
      bucket.

      Parameters:
         * **prefix** (*string*) -- allows you to limit the listing
           to a particular prefix.  For example, if you call the
           method with prefix='/foo/' then the iterator will only
           cycle through the keys that begin with the string '/foo/'.

         * **delimiter** (*string*) -- can be used in conjunction
           with the prefix to allow you to organize and browse your
           keys hierarchically. See http://goo.gl/Xx63h for more
           details.

         * **marker** (*string*) -- The "marker" of where you are in
           the result set

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           the XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         "boto.s3.bucketlistresultset.BucketListResultSet"

      Returns:
         an instance of a BucketListResultSet that handles paging, etc
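      A usage sketch (placeholder names; requires configured
      credentials):

```python
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')  # placeholder name

# Lazily iterate every key under a prefix; paging is automatic.
for key in bucket.list(prefix='photos/2013/'):
    print(key.name, key.size)

# With a delimiter, common prefixes ("subdirectories") come back as
# Prefix entries alongside Key entries.
for item in bucket.list(prefix='photos/', delimiter='/'):
    print(item.name)
```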

   list_grants(headers=None)

   list_multipart_uploads(key_marker='', upload_id_marker='', headers=None, encoding_type=None)

      List multipart upload objects within a bucket.  This returns an
      instance of an MultiPartUploadListResultSet that automatically
      handles all of the result paging, etc. from S3.  You just need
      to keep iterating until there are no more results.

      Parameters:
         * **key_marker** (*string*) -- The "marker" of where you
           are in the result set

         * **upload_id_marker** (*string*) -- The upload identifier

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           the XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         "boto.s3.bucketlistresultset.BucketListResultSet"

      Returns:
         an instance of a BucketListResultSet that handles paging, etc

   list_versions(prefix='', delimiter='', key_marker='', version_id_marker='', headers=None, encoding_type=None)

      List version objects within a bucket.  This returns an instance
      of an VersionedBucketListResultSet that automatically handles
      all of the result paging, etc. from S3.  You just need to keep
      iterating until there are no more results.  Called with no
      arguments, this will return an iterator object across all keys
      within the bucket.

      Parameters:
         * **prefix** (*string*) -- allows you to limit the listing
           to a particular prefix.  For example, if you call the
           method with prefix='/foo/' then the iterator will only
           cycle through the keys that begin with the string '/foo/'.

         * **delimiter** (*string*) --

           can be used in conjunction with the prefix to allow you to
           organize and browse your keys hierarchically. See:

           http://aws.amazon.com/releasenotes/Amazon-S3/213

           for more details.

         * **key_marker** (*string*) -- The "marker" of where you
           are in the result set

         * **encoding_type** (*string*) --

           Requests Amazon S3 to encode the response and specifies the
           encoding method to use.

           An object key can contain any Unicode character; however,
           the XML 1.0 parser cannot parse some characters, such as
           characters with an ASCII value from 0 to 10. For characters
           that are not supported in XML 1.0, you can add this
           parameter to request that Amazon S3 encode the keys in the
           response.

           Valid options: "url"

      Return type:
         "boto.s3.bucketlistresultset.BucketListResultSet"

      Returns:
         an instance of a BucketListResultSet that handles paging, etc

   lookup(key_name, headers=None)

      Deprecated: Please use the get_key method.

      Parameters:
         **key_name** (*string*) -- The name of the key to retrieve

      Return type:
         "boto.s3.key.Key"

      Returns:
         A Key object from this bucket.

   make_public(recursive=False, headers=None)

   new_key(key_name=None)

      Creates a new key

      Parameters:
         **key_name** (*string*) -- The name of the key to create

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         An instance of the newly created key object

   set_acl(acl_or_str, key_name='', headers=None, version_id=None)

   set_as_logging_target(headers=None)

      Set up the current bucket as a logging target by granting the
      necessary permissions to the LogDelivery group to write log
      files to this bucket.

   set_canned_acl(acl_str, key_name='', headers=None, version_id=None)

   set_cors(cors_config, headers=None)

      Set the CORS for this bucket given a boto CORSConfiguration
      object.

      Parameters:
         **cors_config** ("boto.s3.cors.CORSConfiguration") -- The
         CORS configuration you want to configure for this bucket.

   set_cors_xml(cors_xml, headers=None)

      Set the CORS (Cross-Origin Resource Sharing) for a bucket.

      Parameters:
         **cors_xml** (*str*) -- The XML document describing your
         desired CORS configuration.  See the S3 documentation for
         details of the exact syntax required.

   set_key_class(key_class)

      Set the Key class associated with this bucket.  By default this
      is the boto.s3.key.Key class, but if you want to subclass that
      for some reason, this allows you to associate your new class
      with the bucket so that bucket.new_key() and listings of keys in
      the bucket return instances of your key class rather than the
      default.

      Parameters:
         **key_class** (*class*) -- A subclass of Key that can be more
         specific
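
      A minimal sketch of the key_class pattern (class names here are
      illustrative stand-ins, not boto's actual internals): the bucket
      stores a class object and instantiates it for each new key.

      ```python
      class Key(object):
          """Stand-in for boto.s3.key.Key."""
          def __init__(self, bucket=None, name=None):
              self.bucket, self.name = bucket, name

      class TrackedKey(Key):
          """Hypothetical subclass that counts how many keys were made."""
          created = 0
          def __init__(self, bucket=None, name=None):
              Key.__init__(self, bucket, name)
              TrackedKey.created += 1

      class Bucket(object):
          """Stand-in showing how set_key_class/new_key interact."""
          def __init__(self, key_class=Key):
              self.key_class = key_class
          def set_key_class(self, key_class):
              self.key_class = key_class
          def new_key(self, key_name=None):
              # Instantiate whatever class is currently associated.
              return self.key_class(self, key_name)

      b = Bucket()
      b.set_key_class(TrackedKey)
      k = b.new_key('example.txt')
      print(type(k).__name__)  # TrackedKey
      ```
      
      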

   set_policy(policy, headers=None)

      Add or replace the JSON policy associated with the bucket.

      Parameters:
         **policy** (*str*) -- The JSON policy as a string.

   set_request_payment(payer='BucketOwner', headers=None)

   set_subresource(subresource, value, key_name='', headers=None, version_id=None)

      Set a subresource for a bucket or key.

      Parameters:
         * **subresource** (*string*) -- The subresource to set.

         * **value** (*string*) -- The value of the subresource.

         * **key_name** (*string*) -- The key to operate on, or None
           to operate on the bucket.

         * **headers** (*dict*) -- Additional HTTP headers to
           include in the request.

          * **version_id** (*string*) -- Optional. The version id
            of the key to operate on. If not specified, operate on the
            newest version.

   set_tags(tags, headers=None)

   set_website_configuration(config, headers=None)

      Parameters:
         **config** (*boto.s3.website.WebsiteConfiguration*) --
         Configuration data

   set_website_configuration_xml(xml, headers=None)

      Upload an XML website configuration.

   set_xml_acl(acl_str, key_name='', headers=None, version_id=None, query_args='acl')

   set_xml_logging(logging_str, headers=None)

      Set logging on a bucket directly to the given XML string.

      Parameters:
         **logging_str** (*unicode string*) -- The XML for the
         bucketloggingstatus which will be set.  The string will be
         converted to utf-8 before it is sent.  Usually, you will
         obtain this XML from the BucketLogging object.

      Return type:
         bool

      Returns:
         True if ok or raises an exception.

   set_xml_tags(tag_str, headers=None, query_args='tagging')

   startElement(name, attrs, connection)

   validate_get_all_versions_params(params)

      Validate that the parameters passed to get_all_versions are
      valid. Overridden by subclasses that allow a different set of
      parameters.

      Parameters:
         **params** (*dict*) -- Parameters to validate.

   validate_kwarg_names(kwargs, names)

      Checks that all named arguments are in the specified list of
      names.

      Parameters:
         * **kwargs** (*dict*) -- Dictionary of kwargs to validate.

         * **names** (*list*) -- List of possible named arguments.
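
      A minimal sketch of what this validation does (the real method
      raises a boto-specific exception; plain TypeError is used here):

      ```python
      def validate_kwarg_names(kwargs, names):
          # Reject any keyword argument whose name is not whitelisted.
          for kwarg in kwargs:
              if kwarg not in names:
                  raise TypeError('Invalid argument "%s"!' % kwarg)

      # Valid names pass silently:
      validate_kwarg_names({'prefix': 'a/', 'delimiter': '/'},
                           ['prefix', 'delimiter', 'marker'])
      # A typo is rejected:
      try:
          validate_kwarg_names({'perfix': 'a/'}, ['prefix'])
      except TypeError as e:
          error = str(e)
      print(error)
      ```
      
      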

class class boto.s3.bucket.S3WebsiteEndpointTranslate

   trans_region = defaultdict(<function <lambda> at 0x79466c18>, {'cn-north-1': 's3-website.cn-north-1', 'ap-northeast-1': 's3-website-ap-northeast-1', 'sa-east-1': 's3-website-sa-east-1', 'ap-southeast-1': 's3-website-ap-southeast-1', 'ap-southeast-2': 's3-website-ap-southeast-2', 'us-west-2': 's3-website-us-west-2', 'us-west-1': 's3-website-us-west-1', 'eu-west-1': 's3-website-eu-west-1'})

   classmethod translate_region(reg)


boto.s3.bucketlistresultset
===========================

class class boto.s3.bucketlistresultset.BucketListResultSet(bucket=None, prefix='', delimiter='', marker='', headers=None, encoding_type=None)

   A resultset for listing keys within a bucket.  Uses the
   bucket_lister generator function and implements the iterator
   interface.  This transparently handles the results paging from S3
   so even if you have many thousands of keys within the bucket you
   can iterate over all keys in a reasonably efficient manner.

class class boto.s3.bucketlistresultset.MultiPartUploadListResultSet(bucket=None, key_marker='', upload_id_marker='', headers=None, encoding_type=None)

   A resultset for listing multipart uploads within a bucket. Uses the
   multipart_upload_lister generator function and implements the
   iterator interface.  This transparently handles the results paging
   from S3 so even if you have many thousands of uploads within the
   bucket you can iterate over all of them in a reasonably efficient
   manner.

class class boto.s3.bucketlistresultset.VersionedBucketListResultSet(bucket=None, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None, encoding_type=None)

   A resultset for listing versions within a bucket.  Uses the
   versioned_bucket_lister generator function and implements the
   iterator interface.  This transparently handles the results paging
   from S3 so even if you have many thousands of versions within the
   bucket you can iterate over all of them in a reasonably efficient
   manner.

boto.s3.bucketlistresultset.bucket_lister(bucket, prefix='', delimiter='', marker='', headers=None, encoding_type=None)

   A generator function for listing keys in a bucket.

boto.s3.bucketlistresultset.multipart_upload_lister(bucket, key_marker='', upload_id_marker='', headers=None, encoding_type=None)

   A generator function for listing multipart uploads in a bucket.

boto.s3.bucketlistresultset.versioned_bucket_lister(bucket, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None, encoding_type=None)

   A generator function for listing versions in a bucket.


boto.s3.connection
==================

exception exception boto.s3.connection.HostRequiredError(reason, *args)

class class boto.s3.connection.Location

   APNortheast = 'ap-northeast-1'

   APSoutheast = 'ap-southeast-1'

   APSoutheast2 = 'ap-southeast-2'

   CNNorth1 = 'cn-north-1'

   DEFAULT = ''

   EU = 'EU'

   SAEast = 'sa-east-1'

   USWest = 'us-west-1'

   USWest2 = 'us-west-2'

class class boto.s3.connection.NoHostProvided

class class boto.s3.connection.OrdinaryCallingFormat

   build_path_base(bucket, key='')

   get_bucket_server(server, bucket)

class class boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat

   build_url_base(connection, protocol, server, bucket, key='')

class class boto.s3.connection.S3Connection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=<class 'boto.s3.connection.NoHostProvided'>, debug=0, https_connection_factory=None, calling_format='boto.s3.connection.SubdomainCallingFormat', path='/', provider='aws', bucket_class=<class 'boto.s3.bucket.Bucket'>, security_token=None, suppress_consec_slashes=True, anon=False, validate_certs=None, profile_name=None)

   DefaultCallingFormat = 'boto.s3.connection.SubdomainCallingFormat'

   DefaultHost = 's3.amazonaws.com'

   QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s'

   build_post_form_args(bucket_name, key, expires_in=6000, acl=None, success_action_redirect=None, max_content_length=None, http_method='http', fields=None, conditions=None, storage_class='STANDARD', server_side_encryption=None)

      Taken from the AWS book Python examples and modified for use
      with boto.  This only returns the arguments required for the
      POST form, not the actual form.  It does not return the file
      input field, which also needs to be added.

      Parameters:
         * **bucket_name** (*string*) -- Bucket to submit to

          * **key** (*string*) -- Key name, optionally add
            ${filename} to the end to attach the submitted filename

         * **expires_in** (*integer*) -- Time (in seconds) before
           this expires, defaults to 6000

          * **acl** (*string*) -- A canned ACL.  One of: private |
            public-read | public-read-write | authenticated-read |
            bucket-owner-read | bucket-owner-full-control

         * **success_action_redirect** (*string*) -- URL to redirect
           to on success

         * **max_content_length** (*integer*) -- Maximum size for
           this file

         * **http_method** (*string*) -- HTTP Method to use, "http"
           or "https"

         * **storage_class** (*string*) -- Storage class to use for
           storing the object. Valid values: STANDARD |
           REDUCED_REDUNDANCY

          * **server_side_encryption** (*string*) -- Specifies the
            server-side encryption algorithm to use when Amazon S3
            creates an object. Valid values: None | AES256

      Return type:
         dict

      Returns:
         A dictionary containing field names/values as well as a url
         to POST to

   build_post_policy(expiration_time, conditions)

      Taken from the AWS book Python examples and modified for use
      with boto.
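
      A sketch of the kind of policy document this produces and how
      build_post_form_args signs it: HMAC-SHA1 over the base64-encoded
      policy, per S3's browser-based (HTML form) POST upload scheme.
      The bucket name and secret key below are made up.

      ```python
      import base64
      import hmac
      import json
      from hashlib import sha1

      # A hypothetical policy document: expiration plus conditions.
      policy = {
          'expiration': '2014-01-01T00:00:00Z',
          'conditions': [
              {'bucket': 'examplebucket'},
              ['starts-with', '$key', 'uploads/'],
              {'acl': 'public-read'},
          ],
      }
      # The policy is base64-encoded, then signed with the secret key.
      policy_b64 = base64.b64encode(json.dumps(policy).encode('utf-8'))
      signature = base64.b64encode(
          hmac.new(b'FAKE_SECRET_KEY', policy_b64, sha1).digest())
      # The POST form then carries 'policy', 'signature' and
      # 'AWSAccessKeyId' fields alongside the file input.
      ```
      
      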

   create_bucket(bucket_name, headers=None, location='', policy=None)

      Creates a new bucket in the given location. By default it is
      created in the USA. You can pass Location.EU to create a
      European bucket (S3) or European Union bucket (GCS).

      Parameters:
         * **bucket_name** (*string*) -- The name of the new bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

         * **location** (*str*) -- The location of the new bucket.
           You can use one of the constants in
           "boto.s3.connection.Location" (e.g. Location.EU,
           Location.USWest, etc.).

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

   delete_bucket(bucket, headers=None)

      Removes an S3 bucket.

      In order to remove the bucket, it must first be empty. If the
      bucket is not empty, an "S3ResponseError" will be raised.

      Parameters:
          * **bucket** (*string*) -- The name of the bucket to delete

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

   generate_url(expires_in, method, bucket='', key='', headers=None, query_auth=True, force_http=False, response_headers=None, expires_in_absolute=False, version_id=None)

   get_all_buckets(headers=None)

   get_bucket(bucket_name, validate=True, headers=None)

      Retrieves a bucket by name.

      If the bucket does not exist, an "S3ResponseError" will be
      raised. If you are unsure if the bucket exists or not, you can
      use the "S3Connection.lookup" method, which will either return a
      valid bucket or "None".

      If "validate=False" is passed, no request is made to the service
      (no charge/communication delay). This is only safe to do if you
      are **sure** the bucket exists.

      If the default "validate=True" is passed, a request is made to
      the service to ensure the bucket exists. Prior to Boto v2.25.0,
      this fetched a list of keys (but with a max limit set to "0",
      always returning an empty list) in the bucket (& included better
      error messages), at an increased expense. As of Boto v2.25.0,
      this now performs a HEAD request (less expensive but worse error
      messages).

      If you were relying on parsing the error message before, you
      should call something like:

         bucket = conn.get_bucket('<bucket_name>', validate=False)
         bucket.get_all_keys(maxkeys=0)

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

         * **validate** (*boolean*) -- If "True", it will try to
           verify the bucket exists on the service-side. (Default:
           "True")

   get_canonical_user_id(headers=None)

      Convenience method that returns the "CanonicalUserID" of the
      user whose credentials are associated with the connection. The
      only way to get this value is to do a GET request on the service
      which returns all buckets associated with the account. As part
      of that response, the canonical user ID is returned. This method
      simply does all of that and then returns just the user ID.

      Return type:
         string

      Returns:
         A string containing the canonical user ID.

   head_bucket(bucket_name, headers=None)

      Determines if a bucket exists by name.

      If the bucket does not exist, an "S3ResponseError" will be
      raised.

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

      Returns:
         A <Bucket> object

   lookup(bucket_name, validate=True, headers=None)

      Attempts to get a bucket from S3.

      Works identically to "S3Connection.get_bucket", except that it
      returns "None" if the bucket does not exist instead of raising
      an exception.

      Parameters:
         * **bucket_name** (*string*) -- The name of the bucket

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

          * **validate** (*boolean*) -- If "True", it will try to
            verify the bucket exists on the service-side. (Default:
            "True")

   make_request(method, bucket='', key='', headers=None, data='', query_args=None, sender=None, override_num_retries=None, retry_handler=None)

   set_bucket_class(bucket_class)

      Set the Bucket class associated with this connection.  By
      default this is the boto.s3.bucket.Bucket class, but if you want
      to subclass that for some reason, this allows you to associate
      your new class with the connection so that bucket operations
      return instances of it.

      Parameters:
         **bucket_class** (*class*) -- A subclass of Bucket that can
         be more specific

class class boto.s3.connection.SubdomainCallingFormat

   get_bucket_server(*args, **kwargs)

class class boto.s3.connection.VHostCallingFormat

   get_bucket_server(*args, **kwargs)

boto.s3.connection.assert_case_insensitive(f)

boto.s3.connection.check_lowercase_bucketname(n)

   Bucket names must not contain uppercase characters. We check for
   this by appending a lowercase character and testing with islower().
   Note this also covers cases like numeric bucket names with dashes.

   >>> check_lowercase_bucketname("Aaaa")
   Traceback (most recent call last):
   ...
   BotoClientError: S3Error: Bucket names cannot contain upper-case
   characters when using either the sub-domain or virtual hosting calling
   format.

   >>> check_lowercase_bucketname("1234-5678-9123")
   True
   >>> check_lowercase_bucketname("abcdefg1234")
   True
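
   The islower() trick described above, as a standalone sketch (the
   real function raises BotoClientError; plain ValueError is used
   here):

   ```python
   def check_lowercase_bucketname(n):
       # Appending a lowercase letter makes islower() behave correctly
       # for names with no cased characters at all, e.g. '1234-5678-9123'.
       if not (n + 'a').islower():
           raise ValueError('Bucket names cannot contain upper-case '
                            'characters')
       return True

   print(check_lowercase_bucketname('abcdefg1234'))  # True
   ```
   
   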


boto.s3.cors
============

class class boto.s3.cors.CORSConfiguration

   A container for the rules associated with a CORS configuration.

   add_rule(allowed_method, allowed_origin, id=None, allowed_header=None, max_age_seconds=None, expose_header=None)

      Add a rule to this CORS configuration.  This only adds the rule
      to the local copy.  To install the new rule(s) on the bucket,
      you need to pass this CORS config object to the set_cors method
      of the Bucket object.

      Parameters:
          * **allowed_method** (*list of str*) -- An HTTP method
            that you want to allow the origin to execute.  Each
            CORSRule must identify at least one origin and one method.
            Valid values are: GET|PUT|HEAD|POST|DELETE

          * **allowed_origin** (*list of str*) -- An origin that you
            want to allow cross-domain requests from. Each CORSRule
            must identify at least one origin and one method. The
            origin value can include at most one '*' wildcard
            character, for example "http://*.example.com". You can
            also specify '*' as the origin value to allow all origins
            cross-domain access.

         * **id** (*str*) -- A unique identifier for the rule.  The
           ID value can be up to 255 characters long.  The IDs help
           you find a rule in the configuration.

          * **allowed_header** (*list of str*) -- Specifies which
            headers are allowed in a pre-flight OPTIONS request via the
            Access-Control-Request-Headers header. Each header name
            specified in the Access-Control-Request-Headers header must
            have a corresponding entry in the rule. Amazon S3 will send
            only the allowed headers that were requested in a response.
            This can contain at most one '*' wildcard character.

         * **max_age_seconds** (*int*) -- The time in seconds that
           your browser is to cache the preflight response for the
           specified resource.

         * **expose_header** (*list of str*) -- One or more headers
           in the response that you want customers to be able to
           access from their applications (for example, from a
           JavaScript XMLHttpRequest object).  You add one
           ExposeHeader element in the rule for each header.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

      Returns a string containing the XML version of the CORS
      configuration as defined by S3.
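
      A sketch of the XML shape a one-rule CORS configuration
      serializes to. Element names follow the S3 CORS schema; the
      exact whitespace and element ordering boto emits may differ, and
      the rule values below are made up.

      ```python
      # One rule, expressed as (element name, value) pairs:
      rule = [('ID', 'example-rule'),
              ('AllowedOrigin', 'http://*.example.com'),
              ('AllowedMethod', 'GET'),
              ('MaxAgeSeconds', '3000'),
              ('ExposeHeader', 'x-amz-server-side-encryption')]
      # Wrap the rule elements in <CORSRule>, then <CORSConfiguration>:
      xml = ('<CORSConfiguration><CORSRule>'
             + ''.join('<%s>%s</%s>' % (tag, value, tag)
                       for tag, value in rule)
             + '</CORSRule></CORSConfiguration>')
      print(xml)
      ```
      
      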

class class boto.s3.cors.CORSRule(allowed_method=None, allowed_origin=None, id=None, allowed_header=None, max_age_seconds=None, expose_header=None)

   CORS rule for a bucket.

   Variables:
      * **id** -- A unique identifier for the rule.  The ID value
        can be up to 255 characters long.  The IDs help you find a
        rule in the configuration.

      * **allowed_method** -- An HTTP method that you want to allow
        the origin to execute.  Each CORSRule must identify at least
        one origin and one method. Valid values are:
        GET|PUT|HEAD|POST|DELETE

      * **allowed_origin** -- An origin that you want to allow
        cross-domain requests from. Each CORSRule must identify at
        least one origin and one method. The origin value can include
        at most one '*' wildcard character, for example
        "http://*.example.com". You can also specify '*' as the origin
        value to allow all origins cross-domain access.

      * **allowed_header** -- Specifies which headers are allowed in
        a pre-flight OPTIONS request via the
        Access-Control-Request-Headers header. Each header name
        specified in the Access-Control-Request-Headers header must
        have a corresponding entry in the rule. Amazon S3 will send
        only the allowed headers that were requested in a response.
        This can contain at most one '*' wildcard character.

      * **max_age_seconds** -- The time in seconds that your browser
        is to cache the preflight response for the specified resource.

      * **expose_header** -- One or more headers in the response
        that you want customers to be able to access from their
        applications (for example, from a JavaScript XMLHttpRequest
        object).  You add one ExposeHeader element in the rule for
        each header.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()


boto.s3.deletemarker
====================

class class boto.s3.deletemarker.DeleteMarker(bucket=None, name=None)

   endElement(name, value, connection)

   startElement(name, attrs, connection)


boto.s3.key
===========

class class boto.s3.key.Key(bucket=None, name=None)

   Represents a key (object) in an S3 bucket.

   Variables:
      * **bucket** -- The parent "boto.s3.bucket.Bucket".

      * **name** -- The name of this Key object.

      * **metadata** -- A dictionary containing user metadata that
        you wish to store with the object or that has been retrieved
        from an existing object.

      * **cache_control** -- The value of the *Cache-Control* HTTP
        header.

      * **content_type** -- The value of the *Content-Type* HTTP
        header.

      * **content_encoding** -- The value of the *Content-Encoding*
        HTTP header.

      * **content_disposition** -- The value of the
        *Content-Disposition* HTTP header.

      * **content_language** -- The value of the *Content-Language*
        HTTP header.

      * **etag** -- The *etag* associated with this object.

      * **last_modified** -- The string timestamp representing the
        last time this object was modified in S3.

      * **owner** -- The ID of the owner of this object.

      * **storage_class** -- The storage class of the object.
        Currently, one of: STANDARD | REDUCED_REDUNDANCY | GLACIER

      * **md5** -- The MD5 hash of the contents of the object.

      * **size** -- The size, in bytes, of the object.

      * **version_id** -- The version ID of this object, if it is a
        versioned object.

      * **encrypted** -- Whether the object is encrypted while at
        rest on the server.

   BufferSize = 8192

   DefaultContentType = 'application/octet-stream'

   RestoreBody = '<?xml version="1.0" encoding="UTF-8"?>\n      <RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01">\n        <Days>%s</Days>\n      </RestoreRequest>'

   add_email_grant(permission, email_address, headers=None)

      Convenience method that provides a quick way to add an email
      grant to a key. This method retrieves the current ACL, creates a
      new grant based on the parameters passed in, adds that grant to
      the ACL and then PUT's the new ACL back to S3.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: (READ, WRITE, READ_ACP,
           WRITE_ACP, FULL_CONTROL).

          * **email_address** (*string*) -- The email address
            associated with the AWS account you are granting the
            permission to.

   add_user_grant(permission, user_id, headers=None, display_name=None)

      Convenience method that provides a quick way to add a canonical
      user grant to a key.  This method retrieves the current ACL,
      creates a new grant based on the parameters passed in, adds that
      grant to the ACL and then PUT's the new ACL back to S3.

      Parameters:
         * **permission** (*string*) -- The permission being
           granted. Should be one of: (READ, WRITE, READ_ACP,
           WRITE_ACP, FULL_CONTROL).

          * **user_id** (*string*) -- The canonical user id
            associated with the AWS account you are granting the
            permission to.

          * **display_name** (*string*) -- An optional string
            containing the user's Display Name.  Only required on
            Walrus.

   base64md5

   base_user_settable_fields = set(['content-disposition', 'content-language', 'content-encoding', 'content-md5', 'cache-control', 'content-type'])

   change_storage_class(new_storage_class, dst_bucket=None, validate_dst_bucket=True)

      Change the storage class of an existing key. Depending on
      whether a different destination bucket is supplied or not, this
      will either move the item within the bucket, preserving all
      metadata and ACL info while changing the storage class, or it
      will copy the item to the provided destination bucket, also
      preserving metadata and ACL info.

      Parameters:
          * **new_storage_class** (*string*) -- The new storage class
            for the Key. Possible values are: STANDARD |
            REDUCED_REDUNDANCY

         * **dst_bucket** (*string*) -- The name of a destination
           bucket.  If not provided the current bucket of the key will
           be used.

         * **validate_dst_bucket** (*bool*) -- If True, will
           validate the dst_bucket by using an extra list request.

   close(fast=False)

      Close this key.

      Parameters:
         **fast** (*bool*) -- True if you want the connection to be
         closed without first reading the content. This should only be
         used in cases where subsequent calls don't need to return the
         content from the open HTTP connection. Note: as explained at
         http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection.getresponse,
         callers must read the whole response before sending a new
         request to the server. Calling Key.close(fast=True) and
         making a subsequent request to the server will work because
         boto will get an httplib exception and close/reopen the
         connection.

   closed = False

   compute_md5(fp, size=None)

      Parameters:
         * **fp** (*file*) -- File pointer to the file to MD5 hash.
           The file pointer will be reset to the same position before
           the method returns.

          * **size** (*int*) -- (optional) The maximum number of
            bytes to read from the file pointer (fp). This is useful
            when uploading a file in multiple parts where the file is
            being split in place into different parts. Fewer bytes may
            be available.
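
      A minimal sketch of what compute_md5 does: hash at most "size"
      bytes starting from the file pointer's current position, then
      seek back so the same data can be re-read for the actual upload.
      This is an illustrative reimplementation, not boto's code.

      ```python
      import hashlib
      from io import BytesIO

      def compute_md5(fp, size=None, bufsize=8192):
          start = fp.tell()          # remember the position to restore
          md5 = hashlib.md5()
          remaining = size
          while remaining is None or remaining > 0:
              chunk = fp.read(bufsize if remaining is None
                              else min(bufsize, remaining))
              if not chunk:
                  break
              md5.update(chunk)
              if remaining is not None:
                  remaining -= len(chunk)
          fp.seek(start)             # reset so the upload can re-read
          return md5.hexdigest()

      fp = BytesIO(b'hello world')
      digest = compute_md5(fp)
      print(digest, fp.tell())       # the pointer is back where it started
      ```
      
      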

   copy(dst_bucket, dst_key, metadata=None, reduced_redundancy=False, preserve_acl=False, encrypt_key=False, validate_dst_bucket=True)

      Copy this Key to another bucket.

      Parameters:
         * **dst_bucket** (*string*) -- The name of the destination
           bucket

         * **dst_key** (*string*) -- The name of the destination key

         * **metadata** (*dict*) -- Metadata to be associated with
           new key.  If metadata is supplied, it will replace the
           metadata of the source key being copied.  If no metadata is
           supplied, the source key's metadata will be copied to the
           new key.

          * **reduced_redundancy** (*bool*) -- If True, this will
            force the storage class of the new Key to be
            REDUCED_REDUNDANCY regardless of the storage class of the
            key being copied. The Reduced Redundancy Storage (RRS)
            feature of S3 provides lower redundancy at lower storage
            cost.

         * **preserve_acl** (*bool*) -- If True, the ACL from the
           source key will be copied to the destination key.  If
           False, the destination key will have the default ACL.  Note
           that preserving the ACL in the new key object will require
           two additional API calls to S3, one to retrieve the current
           ACL and one to set that ACL on the new object.  If you
           don't care about the ACL, a value of False will be
           significantly more efficient.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

         * **validate_dst_bucket** (*bool*) -- If True, will
           validate the dst_bucket by using an extra list request.

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         An instance of the newly created key object

   delete(headers=None)

      Delete this key from S3

   endElement(name, value, connection)

   exists(headers=None)

      Returns True if the key exists

      Return type:
         bool

      Returns:
         Whether the key exists on S3

   f = 'content-type'

   generate_url(expires_in, method='GET', headers=None, query_auth=True, force_http=False, response_headers=None, expires_in_absolute=False, version_id=None, policy=None, reduced_redundancy=False, encrypt_key=False)

      Generate a URL to access this key.

      Parameters:
         * **expires_in** (*int*) -- How long the url is valid for,
           in seconds

         * **method** (*string*) -- The method to use for retrieving
           the file (default is GET)

         * **headers** (*dict*) -- Any headers to pass along in the
           request

         * **query_auth** (*bool*) --

         * **force_http** (*bool*) -- If True, http will be used
           instead of https.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **expires_in_absolute** (*bool*) --

         * **version_id** (*string*) -- The version_id of the object
           to GET. If specified this overrides any value in the key.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

          * **reduced_redundancy** (*bool*) -- If True, this will set
            the storage class of the new Key to be REDUCED_REDUNDANCY.
            The Reduced Redundancy Storage (RRS) feature of S3
            provides lower redundancy at lower storage cost.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

      Return type:
         string

      Returns:
         The URL to access the key
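
      A sketch of the signature-v2 query-string authentication that
      generate_url (and the QueryString template on S3Connection)
      describes. The bucket, key and credentials below are made up.

      ```python
      import base64
      import hmac
      import time
      from hashlib import sha1
      try:
          from urllib import quote        # Python 2
      except ImportError:
          from urllib.parse import quote  # Python 3

      expires = int(time.time()) + 3600   # an expires_in of one hour
      # Sign the canonical string for a GET of /examplebucket/photo.jpg:
      string_to_sign = 'GET\n\n\n%d\n/examplebucket/photo.jpg' % expires
      sig = base64.b64encode(
          hmac.new(b'FAKE_SECRET', string_to_sign.encode('utf-8'),
                   sha1).digest())
      # Assemble the pre-signed URL using the QueryString layout above:
      url = ('https://examplebucket.s3.amazonaws.com/photo.jpg'
             '?Signature=%s&Expires=%d&AWSAccessKeyId=%s'
             % (quote(sig), expires, 'AKIDEXAMPLE'))
      print(url)
      ```
      
      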

   get_acl(headers=None)

   get_contents_as_string(headers=None, cb=None, num_cb=10, torrent=False, version_id=None, response_headers=None)

      Retrieve an object from S3 using the name of the Key object as
      the key in S3.  Return the contents of the object as a string.
      See get_contents_to_file method for details about the
      parameters.

      Parameters:
         * **headers** (*dict*) -- Any additional headers to send in
           the request

          * **cb** (*function*) -- a callback function that will be
            called to report progress on the download.  The callback
            should accept two integer parameters, the first
            representing the number of bytes that have been
            successfully transmitted from S3 and the second
            representing the size of the to be transmitted object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- If True, returns the contents of
           a torrent file as a string.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **version_id** (*str*) -- The ID of a particular version
           of the object. If this parameter is not supplied but the
           Key object has a "version_id" attribute, that value will be
           used when retrieving the object.  You can set the Key
           object's "version_id" attribute to None to always grab the
           latest version from a version-enabled bucket.

      Return type:
         string

      Returns:
         The contents of the file as a string
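
      The call pattern can be sketched as a small helper; the helper
      name is hypothetical, and the commented usage assumes a live
      connection and bucket:

```python
def fetch_text(key, version_id=None):
    # Thin wrapper over Key.get_contents_as_string; passing a
    # version_id pins a specific object version, overriding any
    # version_id already stored on the key object.
    return key.get_contents_as_string(version_id=version_id)

# Hypothetical usage against a live bucket:
#   key = conn.get_bucket('my-bucket').get_key('notes.txt')
#   text = fetch_text(key)
```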

   get_contents_to_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

      Retrieve an object from S3 using the name of the Key object as
      the key in S3.  Write the contents of the object to the file
      pointed to by 'fp'.

      Parameters:
         * **fp** (*file-like object*) -- The file object to write
           the contents to.

         * **headers** (*dict*) -- additional HTTP headers that will
           be sent with the GET request.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully received from S3 and the second representing
           the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- If True, the contents of the
           object's torrent file are written to 'fp' instead of the
           object itself.

         * **res_download_handler** -- If provided, this handler
           will perform the download.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **version_id** (*str*) -- The ID of a particular version
           of the object. If this parameter is not supplied but the
           Key object has a "version_id" attribute, that value will be
           used when retrieving the object.  You can set the Key
           object's "version_id" attribute to None to always grab the
           latest version from a version-enabled bucket.

   get_contents_to_filename(filename, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

      Retrieve an object from S3 using the name of the Key object as
      the key in S3.  Store contents of the object to a file named by
      'filename'. See get_contents_to_file method for details about
      the parameters.

      Parameters:
         * **filename** (*string*) -- The filename of where to put
           the file contents

         * **headers** (*dict*) -- Any additional headers to send in
           the request

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully received from S3 and the second representing
           the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- If True, the contents of the
           object's torrent file are stored instead of the object
           itself.

         * **res_download_handler** -- If provided, this handler
           will perform the download.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **version_id** (*str*) -- The ID of a particular version
           of the object. If this parameter is not supplied but the
           Key object has a "version_id" attribute, that value will be
           used when retrieving the object.  You can set the Key
           object's "version_id" attribute to None to always grab the
           latest version from a version-enabled bucket.

   get_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None)

      Retrieves a file from an S3 Key

      Parameters:
         * **fp** (*file*) -- File pointer to put the data into

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully received from S3 and the second representing
           the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **torrent** (*bool*) -- Flag for whether to get a torrent
           for the file

         * **override_num_retries** (*int*) -- If not None will
           override configured num_retries parameter for underlying
           GET.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

         * **version_id** (*str*) -- The ID of a particular version
           of the object. If this parameter is not supplied but the
           Key object has a "version_id" attribute, that value will be
           used when retrieving the object.  You can set the Key
           object's "version_id" attribute to None to always grab the
           latest version from a version-enabled bucket.

      Parameters:
         **headers** (*dict*) -- Headers to send when retrieving the
         file

   get_md5_from_hexdigest(md5_hexdigest)

      A utility function to create the 2-tuple (md5hexdigest,
      base64md5) from a precalculated md5_hexdigest.
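
      The shape of that 2-tuple can be reproduced with the standard
      library alone; this is an illustrative sketch of the format,
      not boto's internal code:

```python
import base64
import hashlib

def md5_tuple(data):
    # Build the (hex_md5, base64_md5) pair in the same shape that
    # get_md5_from_hexdigest and compute_md5 return: the hex digest
    # first, then the Base64 encoding of the raw digest bytes.
    digest = hashlib.md5(data).digest()
    return digest.hex(), base64.b64encode(digest).decode("ascii")
```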

   get_metadata(name)

   get_redirect()

      Return the redirect location configured for this key.

      If no redirect is configured (via set_redirect), then None will
      be returned.

   get_torrent_file(fp, headers=None, cb=None, num_cb=10)

      Get a torrent file (see get_file)

      Parameters:
         * **fp** (*file*) -- The file pointer of where to put the
           torrent

         * **headers** (*dict*) -- Headers to be passed

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the download.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully received from S3 and the second representing
           the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

   get_xml_acl(headers=None)

   handle_addl_headers(headers)

      Used by Key subclasses to do additional, provider-specific
      processing of response headers. No-op for this base class.

   handle_encryption_headers(resp)

   handle_restore_headers(response)

   handle_version_headers(resp, force=False)

   key

   make_public(headers=None)

   md5

   next()

      By providing a next method, the key object supports use as an
      iterator. For example, you can now say:

      for chunk in key:
         fp.write(chunk)

      All of the HTTP connection handling is done for you.
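
      The iterator protocol makes streaming copies short; a sketch
      (the helper name is made up) that works with any iterable of
      byte chunks, including a Key:

```python
def stream_to_file(key, fp):
    # Iterate the key (each iteration yields one buffer of bytes)
    # and write the chunks out, returning the total number of
    # bytes written.
    total = 0
    for chunk in key:
        fp.write(chunk)
        total += len(chunk)
    return total
```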

   open(mode='r', headers=None, query_args=None, override_num_retries=None)

   open_read(headers=None, query_args='', override_num_retries=None, response_headers=None)

      Open this key for reading

      Parameters:
         * **headers** (*dict*) -- Headers to pass in the web
           request

         * **query_args** (*string*) -- Arguments to pass in the
           query string (ie, 'torrent')

         * **override_num_retries** (*int*) -- If not None will
           override configured num_retries parameter for underlying
           GET.

         * **response_headers** (*dict*) -- A dictionary containing
           HTTP headers/values that will override any headers
           associated with the stored object in the response.  See
           http://goo.gl/EWOPb for details.

   open_write(headers=None, override_num_retries=None)

      Open this key for writing. Not yet implemented

      Parameters:
         * **headers** (*dict*) -- Headers to pass in the write
           request

         * **override_num_retries** (*int*) -- If not None will
           override configured num_retries parameter for underlying
           PUT.

   provider

   read(size=0)

   restore(days, headers=None)

      Restore an object from an archive.

      Parameters:
         **days** (*int*) -- The lifetime of the restored object (must
         be at least 1 day).  If the object is already restored then
         this parameter can be used to readjust the lifetime of the
         restored object.  In this case, the days param is with
         respect to the initial time of the request. If the object has
         not been restored, this param is with respect to the
         completion time of the request.

   send_file(fp, headers=None, cb=None, num_cb=10, query_args=None, chunked_transfer=False, size=None)

      Upload a file to a key into a bucket on S3.

      Parameters:
         * **fp** (*file*) -- The file pointer to upload. The file
           pointer must point at the offset from which you wish to
           upload, i.e. if uploading the full file, it should point
           at the start of the file. Normally when a file is opened
           for reading, the fp will point at the first byte.  See the
           size parameter below for more info.

         * **headers** (*dict*) -- The headers to pass along with
           the PUT request

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer. Providing a negative integer will cause your
           callback to be called with each buffer read.

         * **query_args** (*string*) -- (optional) Arguments to pass
           in the query string.

         * **chunked_transfer** (*boolean*) -- (optional) If true,
           we use chunked Transfer-Encoding.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where you are
           splitting the file up into different ranges to be uploaded.
           If not specified, the default behaviour is to read all
           bytes from the file pointer. Fewer bytes may be available.

   set_acl(acl_str, headers=None)

   set_canned_acl(acl_str, headers=None)

   set_contents_from_file(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, query_args=None, encrypt_key=False, size=None, rewind=False)

      Store an object in S3 using the name of the Key object as the
      key in S3 and the contents of the file pointed to by 'fp' as the
      contents. The data is read from 'fp' from its current position
      until 'size' bytes have been read or EOF.

      Parameters:
         * **fp** (*file*) -- the file whose contents to upload

         * **headers** (*dict*) -- Additional HTTP headers that will
           be sent with the PUT request.

         * **replace** (*bool*) -- If this parameter is False, the
           method will first check to see if an object exists in the
           bucket with the same key.  If it does, it won't overwrite
           it.  The default value is True which will overwrite the
           object.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to S3 and the second representing
           the size of the to be transmitted object.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

         * **md5** (*A tuple containing the hexdigest version of the
           MD5 checksum of the file as the first element and the
           Base64-encoded version of the plain checksum as the second
           element.  This is the same format returned by the
           compute_md5 method.*) -- If you have already computed the
           MD5 checksum prior to upload, pass it here to avoid
           computing it twice.  Otherwise, the checksum will be
           computed.

         * **reduced_redundancy** (*bool*) -- If True, this will set
           the storage class of the new Key to be REDUCED_REDUNDANCY.
           The Reduced Redundancy Storage (RRS) feature of S3 provides
           lower redundancy at lower storage cost.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where you are
           splitting the file up into different ranges to be uploaded.
           If not specified, the default behaviour is to read all
           bytes from the file pointer. Fewer bytes may be available.

         * **rewind** (*bool*) -- (optional) If True, the file
           pointer (fp) will be rewound to the start before any bytes
           are read from it. The default behaviour is False which
           reads from the current position of the file pointer (fp).

      Return type:
         int

      Returns:
         The number of bytes written to the key.
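
      The cb/num_cb contract above can be satisfied by any
      two-argument function; a sketch of a callback factory (the
      names here are illustrative, not part of boto) that records
      percentage progress:

```python
def make_progress_cb(log):
    # Return a callback matching the (transmitted, total) contract
    # described above; each call appends the percent complete to log.
    def cb(transmitted, total):
        pct = 100 * transmitted // total if total else 0
        log.append(pct)
    return cb

# Hypothetical usage with a live key:
#   progress = []
#   key.set_contents_from_file(fp, cb=make_progress_cb(progress), num_cb=20)
```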

   set_contents_from_filename(filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, encrypt_key=False)

      Store an object in S3 using the name of the Key object as the
      key in S3 and the contents of the file named by 'filename'. See
      set_contents_from_file method for details about the parameters.

      Parameters:
         * **filename** (*string*) -- The name of the file that you
           want to put onto S3

         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

         * **replace** (*bool*) -- If True, replaces the contents of
           the file if it already exists.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to S3 and the second representing
           the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

         * **md5** (*A tuple containing the hexdigest version of the
           MD5 checksum of the file as the first element and the
           Base64-encoded version of the plain checksum as the second
           element.  This is the same format returned by the
           compute_md5 method.*) -- If you have already computed the
           MD5 checksum prior to upload, pass it here to avoid
           computing it twice.  Otherwise, the checksum will be
           computed.

         * **reduced_redundancy** (*bool*) -- If True, this will set
           the storage class of the new Key to be REDUCED_REDUNDANCY.
           The Reduced Redundancy Storage (RRS) feature of S3 provides
           lower redundancy at lower storage cost.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

      Return type:
         int

      Returns:
         The number of bytes written to the key.
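
      A small upload helper (the name and the 'public-read' choice
      are illustrative) showing the typical pattern: create a Key in
      the bucket, then hand it the filename; the byte count returned
      by set_contents_from_filename is passed through:

```python
def upload_file(bucket, key_name, filename, public=False):
    # new_key() builds a Key object bound to this bucket;
    # 'public-read' is one of the canned ACL strings accepted by
    # the policy parameter.
    key = bucket.new_key(key_name)
    policy = 'public-read' if public else None
    return key.set_contents_from_filename(filename, policy=policy)
```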

   set_contents_from_stream(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, reduced_redundancy=False, query_args=None, size=None)

      Store an object using the name of the Key object as the key in
      cloud and the contents of the data stream pointed to by 'fp' as
      the contents.

      The stream object is not seekable and the total size is not
      known in advance.  This means we cannot specify the
      Content-Length and Content-MD5 in the header.  For huge
      uploads, this avoids the delay of calculating the MD5 up
      front, at the cost of being unable to verify the integrity of
      the uploaded data.

      Parameters:
         * **fp** (*file*) -- the file whose contents are to be
           uploaded

         * **headers** (*dict*) -- additional HTTP headers to be
           sent with the PUT request.

         * **replace** (*bool*) -- If this parameter is False, the
           method will first check to see if an object exists in the
           bucket with the same key. If it does, it won't overwrite
           it. The default value is True which will overwrite the
           object.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload. The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to S3 and the second representing
           the total number of bytes that need to be transmitted.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter, this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

         * **reduced_redundancy** (*bool*) -- If True, this will set
           the storage class of the new Key to be REDUCED_REDUNDANCY.
           The Reduced Redundancy Storage (RRS) feature of S3 provides
           lower redundancy at lower storage cost.

         * **size** (*int*) -- (optional) The maximum number of
           bytes to read from the file pointer (fp). This is useful
           when uploading a file in multiple parts where you are
           splitting the file up into different ranges to be uploaded.
           If not specified, the default behaviour is to read all
           bytes from the file pointer. Fewer bytes may be available.

   set_contents_from_string(string_data, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, encrypt_key=False)

      Store an object in S3 using the name of the Key object as the
      key in S3 and the string 'string_data' as the contents. See
      set_contents_from_file method for details about the parameters.

      Parameters:
         * **headers** (*dict*) -- Additional headers to pass along
           with the request to AWS.

         * **replace** (*bool*) -- If True, replaces the contents of
           the file if it already exists.

         * **cb** (*function*) -- a callback function that will be
           called to report progress on the upload.  The callback
           should accept two integer parameters, the first
           representing the number of bytes that have been
           successfully transmitted to S3 and the second representing
           the total size of the object.

         * **num_cb** -- (optional) If a callback is specified with
           the cb parameter this parameter determines the granularity
           of the callback by defining the maximum number of times the
           callback will be called during the file transfer.

         * **policy** ("boto.s3.acl.CannedACLStrings") -- A canned
           ACL policy that will be applied to the new key in S3.

         * **md5** (*A tuple containing the hexdigest version of the
           MD5 checksum of the file as the first element and the
           Base64-encoded version of the plain checksum as the second
           element.  This is the same format returned by the
           compute_md5 method.*) -- If you have already computed the
           MD5 checksum prior to upload, pass it here to avoid
           computing it twice.  Otherwise, the checksum will be
           computed.

         * **reduced_redundancy** (*bool*) -- If True, this will set
           the storage class of the new Key to be REDUCED_REDUNDANCY.
           The Reduced Redundancy Storage (RRS) feature of S3 provides
           lower redundancy at lower storage cost.

         * **encrypt_key** (*bool*) -- If True, the new copy of the
           object will be encrypted on the server-side by S3 and will
           be stored in an encrypted form while at rest in S3.

   set_metadata(name, value)

   set_redirect(redirect_location, headers=None)

      Configure this key to redirect to another location.

      When the bucket associated with this key is accessed from the
      website endpoint, a 301 redirect will be issued to the specified
      *redirect_location*.

      Parameters:
         **redirect_location** (*string*) -- The location to redirect
         to.
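
      Setting a redirect and reading it back can be sketched as
      follows (the helper name and target URL are hypothetical):

```python
def make_redirect(key, target):
    # Configure the key as a website redirect, then read the
    # stored location back to confirm it.
    key.set_redirect(target)
    return key.get_redirect()
```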

   set_remote_metadata(metadata_plus, metadata_minus, preserve_acl, headers=None)

   set_xml_acl(acl_str, headers=None)

   should_retry(response, chunked_transfer=False)

   startElement(name, attrs, connection)

   update_metadata(d)


boto.s3.prefix
==============

class boto.s3.prefix.Prefix(bucket=None, name=None)

   endElement(name, value, connection)

   provider

   startElement(name, attrs, connection)


boto.s3.multipart
=================

class boto.s3.multipart.CompleteMultiPartUpload(bucket=None)

   Represents a completed MultiPart Upload.  Contains the following
   useful attributes:

      * location - The URI of the completed upload

      * bucket_name - The name of the bucket in which the upload is
        contained

      * key_name - The name of the new, completed key

      * etag - The MD5 hash of the completed, combined upload

      * version_id - The version_id of the completed upload

      * encrypted - The value of the encryption header

   endElement(name, value, connection)

   startElement(name, attrs, connection)

class boto.s3.multipart.MultiPartUpload(bucket=None)

   Represents a MultiPart Upload operation.

   cancel_upload()

      Cancels a MultiPart Upload operation.  The storage consumed by
      any previously uploaded parts will be freed. However, if any
      part uploads are currently in progress, those part uploads might
      or might not succeed. As a result, it might be necessary to
      abort a given multipart upload multiple times in order to
      completely free all storage consumed by all parts.

   complete_upload()

      Complete the MultiPart Upload operation.  This method should be
      called when all parts of the file have been successfully
      uploaded to S3.

      Return type:
         "boto.s3.multipart.CompleteMultiPartUpload"

      Returns:
         An object representing the completed upload.

   copy_part_from_key(src_bucket_name, src_key_name, part_num, start=None, end=None, src_version_id=None, headers=None)

      Upload a part of this MultiPart Upload by copying the data
      from an existing S3 object.

      Parameters:
         * **src_bucket_name** (*string*) -- Name of the bucket
           containing the source key

         * **src_key_name** (*string*) -- Name of the source key

         * **part_num** (*int*) -- The number of this part.

         * **start** (*int*) -- Zero-based byte offset to start
           copying from

         * **end** (*int*) -- Zero-based byte offset to copy to

         * **src_version_id** (*string*) -- version_id of source
           object to copy from

         * **headers** (*dict*) -- Any headers to pass along in the
           request

   endElement(name, value, connection)

   get_all_parts(max_parts=None, part_number_marker=None, encoding_type=None)

      Return the uploaded parts of this MultiPart Upload.  This is a
      lower-level method that requires you to manually page through
      results.  To simplify this process, you can just use the object
      itself as an iterator and it will automatically handle all of
      the paging with S3.

   startElement(name, attrs, connection)

   to_xml()

   upload_part_from_file(fp, part_num, headers=None, replace=True, cb=None, num_cb=10, md5=None, size=None)

      Upload another part of this MultiPart Upload.

      Note: After you initiate a multipart upload and upload one or
        more parts, you must either complete or abort the multipart
        upload in order to stop being charged for storage of the
        uploaded parts. Only after you complete or abort the upload
        does Amazon S3 free the storage for the parts and stop
        charging you for them.

      Parameters:
         * **fp** (*file*) -- The file object you want to upload.

         * **part_num** (*int*) -- The number of this part.

      The other parameters are exactly as defined for the
      "boto.s3.key.Key" set_contents_from_file method.

      Return type:
         "boto.s3.key.Key" or subclass

      Returns:
         The uploaded part containing the etag.
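
      Splitting an object into part ranges for upload_part_from_file
      or copy_part_from_key is plain offset arithmetic; a sketch (the
      helper name is made up), assuming S3's 5 MB minimum size for
      every part but the last:

```python
def part_ranges(total_size, part_size=5 * 1024 * 1024):
    # Yield (part_num, start, end) tuples; parts are 1-indexed and
    # the end offsets are inclusive, matching copy_part_from_key's
    # start/end parameters.
    part_num, start = 1, 0
    while start < total_size:
        end = min(start + part_size, total_size) - 1
        yield part_num, start, end
        part_num += 1
        start = end + 1
```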

class boto.s3.multipart.Part(bucket=None)

   Represents a single part in a MultiPart upload. Attributes include:

      * part_number - The integer part number

      * last_modified - The last modified date of this part

      * etag - The MD5 hash of this part

      * size - The size, in bytes, of this part

   endElement(name, value, connection)

   startElement(name, attrs, connection)

boto.s3.multipart.part_lister(mpupload, part_number_marker=None)

   A generator function for listing parts of a multipart upload.


boto.s3.multidelete
===================

class boto.s3.multidelete.Deleted(key=None, version_id=None, delete_marker=False, delete_marker_version_id=None)

   A successfully deleted object in a multi-object delete request.

   Variables:
      * **key** -- Key name of the object that was deleted.

      * **version_id** -- Version id of the object that was deleted.

      * **delete_marker** -- If True, indicates the object deleted
        was a DeleteMarker.

      * **delete_marker_version_id** -- Version ID of the delete
        marker deleted.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

class boto.s3.multidelete.Error(key=None, version_id=None, code=None, message=None)

   An object that could not be deleted in a multi-object delete
   request.

   Variables:
      * **key** -- Key name of the object that was not deleted.

      * **version_id** -- Version id of the object that was not
        deleted.

      * **code** -- Status code of the failed delete operation.

      * **message** -- Status message of the failed delete
        operation.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

class boto.s3.multidelete.MultiDeleteResult(bucket=None)

   The status returned from a MultiObject Delete request.

   Variables:
      * **deleted** -- A list of successfully deleted objects.  Note
        that if the quiet flag was specified in the request, this list
        will be empty because only error responses would be returned.

      * **errors** -- A list of unsuccessfully deleted objects.

   endElement(name, value, connection)

   startElement(name, attrs, connection)
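
   A result can be partitioned into successes and failures by walking
   its two lists; a sketch (the helper name is made up) using only
   the attributes documented above:

```python
def summarize_delete(result):
    # result.deleted holds Deleted objects, result.errors holds
    # Error objects; collect the key names (and error codes) from
    # each list.
    ok = [d.key for d in result.deleted]
    failed = [(e.key, e.code) for e in result.errors]
    return ok, failed
```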


boto.s3.resumable_download_handler
==================================

class boto.s3.resumable_download_handler.ByteTranslatingCallbackHandler(proxied_cb, download_start_point)

   Proxy class that translates progress callbacks made by
   boto.s3.Key.get_file(), taking into account that we're resuming a
   download.

   call(total_bytes_uploaded, total_size)

class boto.s3.resumable_download_handler.ResumableDownloadHandler(tracker_file_name=None, num_retries=None)

   Handler for resumable downloads.

   Constructor. Instantiate once for each downloaded file.

   Parameters:
      * **tracker_file_name** (*string*) -- optional file name to
        save tracking info about this download. If supplied and the
        current process fails the download, it can be retried in a new
        process. If called with an existing file containing an
        unexpired timestamp, we'll resume the transfer for this file;
        else we'll start a new resumable download.

      * **num_retries** (*int*) -- the number of times we'll re-try
        a resumable download making no progress. (Count resets every
        time we get progress, so download can span many more than this
        number of retries.)

   MIN_ETAG_LEN = 5

   RETRYABLE_EXCEPTIONS = (<class 'httplib.HTTPException'>, <type 'exceptions.IOError'>, <class 'socket.error'>, <class 'socket.gaierror'>)

   get_file(key, fp, headers, cb=None, num_cb=10, torrent=False, version_id=None, hash_algs=None)

      Retrieves a file from a Key.

      Parameters:
         * **key** ("boto.s3.key.Key" or subclass) -- The Key object
           from which the file is to be downloaded

         * **fp** (*file*) -- File pointer into which data should be
           downloaded

         * **cb** (*function*) -- (optional) a callback function
           that will be called to report progress on the download.
           The callback should accept two integer parameters, the
           first representing the number of bytes that have been
           successfully transmitted from the storage service and the
           second representing the total number of bytes that need to
           be transmitted.

         * **num_cb** (*int*) -- (optional) If a callback is
           specified with the cb parameter this parameter determines
           the granularity of the callback by defining the maximum
           number of times the callback will be called during the file
           transfer.

         * **torrent** (*bool*) -- Flag for whether to get a torrent
           for the file

         * **version_id** (*string*) -- The version ID (optional)

         * **hash_algs** (*dictionary*) -- (optional) Dictionary
           mapping hash algorithm names to hashing classes that
           implement update() and digest(). Defaults to {'md5':
           hashlib.md5}.

      Param headers:
         Headers to send when retrieving the file.

      Raises ResumableDownloadException if a problem occurs during
         the transfer.

boto.s3.resumable_download_handler.get_cur_file_size(fp, position_to_eof=False)

   Returns size of file, optionally leaving fp positioned at EOF.


boto.s3.lifecycle
=================

class boto.s3.lifecycle.Expiration(days=None, date=None)

   When an object will expire.

   Variables:
      * **days** -- The number of days until the object expires

      * **date** -- The date when the object will expire. Must be in
        ISO 8601 format.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.s3.lifecycle.Lifecycle

   A container for the rules associated with a Lifecycle
   configuration.

   add_rule(id=None, prefix='', status='Enabled', expiration=None, transition=None)

      Add a rule to this Lifecycle configuration.  This only adds the
      rule to the local copy.  To install the new rule(s) on the
      bucket, you need to pass this Lifecycle config object to the
      configure_lifecycle method of the Bucket object.

      Parameters:
         * **id** (*str*) -- Unique identifier for the rule. The
           value cannot be longer than 255 characters. This value is
           optional. The server will generate a unique value for the
           rule if no value is provided.

         * **status** (*str*) -- If 'Enabled', the rule is currently
           being applied. If 'Disabled', the rule is not currently
           being applied.

         * **expiration** (*int*) -- Indicates the lifetime, in
           days, of the objects that are subject to the rule. The
           value must be a non-zero positive integer. An Expiration
           object instance may be passed instead.

         * **transition** (*Transition*) -- Indicates when an object
           transitions to a different storage class.

      Param prefix:
         Prefix identifying one or more objects to which the rule
         applies.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

      Returns a string containing the XML version of the Lifecycle
      configuration as defined by S3.
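
For example, a rule that transitions objects under a given prefix to
Glacier 30 days after creation might be assembled like this (the rule
id and prefix are placeholders):

```python
from boto.s3.lifecycle import Lifecycle, Transition

lifecycle = Lifecycle()
# Move everything under 'logs/' to Glacier 30 days after creation.
lifecycle.add_rule(id='archive-logs', prefix='logs/', status='Enabled',
                   transition=Transition(days=30,
                                         storage_class='GLACIER'))
xml = lifecycle.to_xml()
```

The resulting configuration is installed on a bucket by passing the
Lifecycle object to the configure_lifecycle method of
boto.s3.bucket.Bucket.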

class boto.s3.lifecycle.Rule(id=None, prefix=None, status=None, expiration=None, transition=None)

   A Lifecycle rule for an S3 bucket.

   Variables:
      * **id** -- Unique identifier for the rule. The value cannot
        be longer than 255 characters. This value is optional. The
        server will generate a unique value for the rule if no value
        is provided.

      * **prefix** -- Prefix identifying one or more objects to
        which the rule applies. If prefix is not provided, Boto
        generates a default prefix which will match all objects.

      * **status** -- If 'Enabled', the rule is currently being
        applied. If 'Disabled', the rule is not currently being
        applied.

      * **expiration** -- An instance of *Expiration*. This
        indicates the lifetime of the objects that are subject to the
        rule.

      * **transition** -- An instance of *Transition*.  This
        indicates when to transition to a different storage class.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.s3.lifecycle.Transition(days=None, date=None, storage_class=None)

   A transition to a different storage class.

   Variables:
      * **days** -- The number of days until the object should be
        moved.

      * **date** -- The date when the object should be moved.
        Should be in ISO 8601 format.

      * **storage_class** -- The storage class to transition to.
        The only valid value is GLACIER.

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()


boto.s3.tagging
===============

class boto.s3.tagging.Tag(key=None, value=None)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.s3.tagging.TagSet

   add_tag(key, value)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()

class boto.s3.tagging.Tags

   A container for the tags associated with a bucket.

   add_tag_set(tag_set)

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml()
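
A tag set is built locally and serialized the same way; the tag keys
and values below are placeholders:

```python
from boto.s3.tagging import Tags, TagSet

# A TagSet holds individual key/value pairs.
tag_set = TagSet()
tag_set.add_tag('environment', 'production')
tag_set.add_tag('team', 'data')

# Tags is the top-level container that serializes to a <Tagging>
# document.
tags = Tags()
tags.add_tag_set(tag_set)
xml = tags.to_xml()
```

The serialized configuration can then be applied to a bucket, e.g. via
boto.s3.bucket.Bucket.set_tags(tags).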


boto.s3.user
============

class boto.s3.user.User(parent=None, id='', display_name='')

   endElement(name, value, connection)

   startElement(name, attrs, connection)

   to_xml(element_name='Owner')
