Understanding AWS S3 with Examples

Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost web-based service designed for online backup and archiving of data and application programs. It allows you to upload, store, and download any type of file up to 5 TB in size.

Amazon S3 can be integrated with other applications and services offered by AWS, such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Block Store (Amazon EBS), Amazon S3 Glacier, and so on.

Use cases for AWS S3

  • File hosting: Companies often store their images, videos, audio, PDFs, DOCs, and other files in Amazon S3. This lets them serve files directly from Amazon S3 without managing on-premises infrastructure.
  • Storing data for mobile applications: Many companies use Amazon S3 to store mobile app data, which makes it easier to manage user data at scale.
  • Static website hosting: Users can host their static websites on Amazon S3, typically together with Amazon Route 53 (a sketch of enabling website hosting follows this list).
  • Video hosting: Companies upload their videos to Amazon S3, which can then be accessed from their websites. Amazon S3 can also be configured to support video streaming.
  • Backup: Users can keep backups of their data, which are stored securely and reliably in Amazon S3. Amazon S3 can also be configured to move old data to Amazon S3 Glacier for archiving, since Glacier costs less than standard S3 storage.
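As a concrete illustration of the static website hosting use case, here is a minimal sketch using boto3. The bucket name is hypothetical, the bucket must already exist, and it must allow public reads for the site to be served to browsers.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket that will serve the static site.
bucket = "example-static-site-bucket"

# Turn on static website hosting: requests to the bucket's website
# endpoint return index.html, and errors return error.html.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)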

Example of AWS S3

Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and, optionally, a version ID.

For example, in the URL https://DOC-EXAMPLE-BUCKET.s3.us-west-2.amazonaws.com/photos/puppy.jpg, DOC-EXAMPLE-BUCKET is the name of the bucket and /photos/puppy.jpg is the key.
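To illustrate, here is a minimal sketch that retrieves the same object by bucket name and key using boto3; the bucket and key are the documentation placeholders from the URL above.

import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Bucket name and key taken from the example URL above.
response = s3.get_object(Bucket="DOC-EXAMPLE-BUCKET", Key="photos/puppy.jpg")

# The object's bytes are returned as a streaming body.
data = response["Body"].read()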

AWS S3 Buckets

A bucket is a container in Amazon S3 where files are uploaded. To use Amazon S3 to store a file, we need to create at least one bucket; files (objects) are stored in buckets. A minimal bucket-creation sketch appears after the feature list below.

The following are a few features of buckets:

  • The bucket name must be globally unique, because the bucket namespace is shared by all AWS accounts.
  • Buckets can contain logically nested folders and subfolders (these are really key prefixes), but they cannot contain nested buckets.
  • By default, we can create a maximum of 100 buckets in a single account; this limit can be raised through a service quota increase.
  • The bucket name can contain lowercase letters, numbers, periods, and hyphens.
  • The bucket name must start with a letter or number, and it must be between 3 and 63 characters long.
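Here is a minimal bucket-creation sketch using boto3; the bucket name and Region are hypothetical, and the name must satisfy the rules above.

import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Outside us-east-1, the Region must be passed as a location constraint.
s3.create_bucket(
    Bucket="example-unique-bucket-name-12345",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)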

Buckets can be managed via the following:

  • REST-style HTTP interface
  • SOAP interface (legacy; deprecated and not recommended for new applications)

How to access AWS S3 buckets

Buckets can be accessed via HTTP URLs as follows:

  • http://<BUCKET_NAME>.s3.amazonaws.com/<OBJECT_NAME>
  • http://s3.amazonaws.com/<BUCKET_NAME>/<OBJECT_NAME>

In the preceding URLs, BUCKET_NAME is the name of the bucket that we provided while creating it, and OBJECT_NAME is the key of the object that we provided while uploading it.
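Because buckets and objects are private by default, a plain HTTP GET on these URLs is usually denied. A common way to grant temporary HTTP access is a pre-signed URL; here is a minimal sketch using boto3, with hypothetical bucket and key names.

import boto3

s3 = boto3.client("s3")

# Generate a URL that allows anyone holding it to GET the object
# for the next hour (3600 seconds).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "photos/puppy.jpg"},
    ExpiresIn=3600,
)
print(url)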

AWS S3 Objects

An object is a file stored in Amazon S3. Each object consists of the file's data, a key (its unique identifier within the bucket), and metadata, and it carries permissions that control which users can perform operations on it. Every object is stored in a bucket.
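Here is a minimal upload sketch using boto3; the local file path, bucket name, and key are hypothetical.

import boto3

s3 = boto3.client("s3")

# Upload a local file as an object; the key becomes its identifier
# within the bucket.
s3.upload_file(
    Filename="local/photos/puppy.jpg",
    Bucket="example-bucket",
    Key="photos/puppy.jpg",
)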

Objects can be managed via the following:

  • REST-style HTTP interface
  • SOAP interface (legacy; deprecated and not recommended for new applications)

How AWS S3 works

Amazon S3 is an object storage service that stores data as objects within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects.

To store our data in Amazon S3, we first create a bucket and specify a bucket name and AWS Region. Then, we upload our data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket.

S3 provides features that we can configure to support our specific use case. For example, we can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows us to restore objects that are accidentally deleted or overwritten.
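For example, S3 Versioning can be enabled on an existing bucket with a single API call; here is a minimal sketch using boto3, with a hypothetical bucket name.

import boto3

s3 = boto3.client("s3")

# Once enabled, every overwrite or delete keeps the previous
# version of the object recoverable.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)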

Buckets and the objects in them are private and can be accessed only if we explicitly grant access permissions. We can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access.

How to manage access to AWS S3

Amazon S3 provides features for auditing and managing access to our buckets and objects. By default, S3 buckets and the objects in them are private.

We have access only to the S3 resources that we create. To grant granular resource permissions that support our specific use case or to audit the permissions of our Amazon S3 resources, we can use the following features.

  • S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level (a sketch of applying this to a single bucket follows this list).
  • AWS Identity and Access Management (IAM) – Create IAM users for our AWS account to manage access to our Amazon S3 resources. For example, we can use IAM with Amazon S3 to control the type of access a user or group of users has to an S3 bucket that our AWS account owns.
  • Bucket policies – Use IAM-based policy language to configure resource-based permissions for our S3 buckets and the objects in them.
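Here is a minimal sketch of applying the Block Public Access settings to a single bucket using boto3; the bucket name is hypothetical.

import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for this bucket.
s3.put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)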

Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies that we attach to our resources (buckets and objects) are referred to as resource-based policies.

For example, bucket policies and access control lists (ACLs) are resource-based policies. We can also attach access policies to users in our account. These are called user policies. We can choose to use resource-based policies, user policies, or some combination of these to manage permissions to our Amazon S3 resources.

AWS S3 Bucket policy

A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that we can use to grant access permissions to our bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size.

Example of AWS S3 Bucket policy

The following example bucket policy shows the Effect, Principal, Action, and Resource elements. The policy allows Dave, a user in the account 123456789012, the s3:GetObject, s3:GetBucketLocation, and s3:ListBucket Amazon S3 permissions on the awsexamplebucket1 bucket.

{
    "Version": "2012-10-17",
    "Id": "ExamplePolicy01",
    "Statement": [
        {
            "Sid": "ExampleStatement01",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Dave"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1/*",
                "arn:aws:s3:::awsexamplebucket1"
            ]
        }
    ]
}
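To attach a bucket policy like the one above programmatically, here is a minimal sketch using boto3; the policy document is passed as a JSON string, and only the bucket owner can attach it.

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ExampleStatement01",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/Dave"},
            "Action": ["s3:GetObject", "s3:GetBucketLocation", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::awsexamplebucket1/*",
                "arn:aws:s3:::awsexamplebucket1",
            ],
        }
    ],
}

# Attach the policy to the bucket as its resource-based policy.
s3.put_bucket_policy(Bucket="awsexamplebucket1", Policy=json.dumps(policy))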

AWS S3 user policy

We can use IAM to manage access to our Amazon S3 resources. We can create IAM users, groups, and roles in our account and attach access policies to them granting them access to AWS resources, including Amazon S3.

Example of AWS S3 User policy

In this example, we want to grant an IAM user in our AWS account access to one of our buckets, awsexamplebucket1, and allow the user to add, update, and delete objects.

In addition to granting the s3:PutObject, s3:GetObject, and s3:DeleteObject permissions to the user, the policy also grants the s3:ListAllMyBuckets, s3:GetBucketLocation, and s3:ListBucket permissions.

These additional permissions are required by the console. Also, the s3:PutObjectAcl and s3:GetObjectAcl actions are required to be able to copy, cut, and paste objects in the console.

For an example walkthrough that grants permissions to users and tests them using the console, see Controlling access to a bucket with user policies in the Amazon S3 documentation.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::awsexamplebucket1"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::awsexamplebucket1/*"
        }
    ]
}
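To attach a user policy like the one above as an inline policy on an IAM user, here is a minimal sketch using boto3; the user name and policy name are hypothetical, and the user must already exist.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"},
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::awsexamplebucket1",
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject",
            ],
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
        },
    ],
}

# Attach the policy inline to an existing IAM user.
iam.put_user_policy(
    UserName="ExampleUser",
    PolicyName="ExampleS3BucketAccess",
    PolicyDocument=json.dumps(policy),
)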

How to understand AWS S3 policies

  • Resources – Buckets, objects, access points, and jobs are the Amazon S3 resources for which we can allow or deny permissions. In a policy, we use the Amazon Resource Name (ARN) to identify the resource. For more information, see Amazon S3 resources.
  • Actions – For each resource, Amazon S3 supports a set of operations. We identify the resource operations that we will allow (or deny) by using action keywords. For example, the s3:ListBucket permission allows the user to use the Amazon S3 GET Bucket (List Objects) operation.
  • Effect – The effect when the user requests the specific action: this can be either Allow or Deny. If we do not explicitly grant access to (allow) a resource, access is implicitly denied. We can also explicitly deny access to a resource. We might do this to make sure that a user can't access the resource, even if a different policy grants access.
  • Principal – The account or user who is allowed access to the actions and resources in the statement. In a bucket policy, the principal is the user, account, service, or other entity that is the recipient of this permission.
  • Condition – Conditions for when a policy is in effect. We can use AWS-wide keys and Amazon S3-specific keys to specify conditions in an Amazon S3 access policy (a minimal example follows this list).
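As an illustration of the Condition element, the following sketch attaches a bucket policy that denies requests unless they come from a specific IP range, using the AWS-wide aws:SourceIp condition key. The bucket name and CIDR range are hypothetical; note that a Deny like this applies to every principal, so an overly narrow range can also lock out the bucket owner.

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideOfficeRange",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            # The statement applies only when the request does NOT
            # originate from this CIDR range.
            "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))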