10 tips for securing data hosted on Amazon S3

The use of Amazon Simple Storage Service (S3) is becoming more and more widespread, covering a multitude of use cases: sensitive data repositories, security log storage, integration with backup tools, and more. We must therefore pay special attention to how we configure our buckets and how we expose them to the Internet.

In this post we will cover 10 security best practices that will help us manage our S3 buckets correctly.

Let’s get started.

1 – Block public access to S3 buckets across the organization

By default, buckets are private and can only be accessed by users of our own account, provided they have been granted the appropriate permissions.

Additionally, buckets have an “S3 Block Public Access” option that prevents them from ever being made public. This option can be enabled or disabled per bucket or for the whole AWS account. To prevent a user from deactivating it, we can create a Service Control Policy (SCP) in our organization so that no member account can do so.
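
As a sketch, assuming an existing AWS Organizations setup, an SCP like the following denies any change to the Block Public Access configuration; the policy name and root ID are hypothetical:

  import json
  import boto3

  organizations = boto3.client("organizations")

  # SCP that denies any change to the S3 Block Public Access settings,
  # both at the account level and at the bucket level.
  scp_document = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "DenyDisablingS3BlockPublicAccess",
              "Effect": "Deny",
              "Action": [
                  "s3:PutAccountPublicAccessBlock",
                  "s3:PutBucketPublicAccessBlock",
              ],
              "Resource": "*",
          }
      ],
  }

  policy = organizations.create_policy(
      Name="deny-s3-public-access-changes",  # illustrative name
      Description="Prevent disabling S3 Block Public Access",
      Type="SERVICE_CONTROL_POLICY",
      Content=json.dumps(scp_document),
  )

  # Attach the SCP to the organization root (hypothetical root ID).
  organizations.attach_policy(
      PolicyId=policy["Policy"]["PolicySummary"]["Id"],
      TargetId="r-examplerootid",
  )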

2 – Verify that no wildcards are used in the allow policy principals

All security policies must be governed by the principle of least privilege. To that end, we will avoid using the wildcard “*” when setting permissions: every time we grant access to a bucket, we will specify which principal may access the resource. It can be a range of IP addresses, an AWS account, a VPC, and so on, but a wildcard will never be used.
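
For instance, here is a minimal sketch of a bucket policy that names a specific AWS account as the principal instead of “*” (the bucket name and account ID are hypothetical):

  import json
  import boto3

  s3 = boto3.client("s3")

  # Bucket policy granting read access to one specific AWS account,
  # never to a "*" principal.
  bucket_policy = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "AllowReadFromTrustedAccount",
              "Effect": "Allow",
              "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical account
              "Action": "s3:GetObject",
              "Resource": "arn:aws:s3:::example-bucket/*",             # hypothetical bucket
          }
      ],
  }

  s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(bucket_policy))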

3 – Verify that no wildcards are used in the allow policy actions

Following the principle of least privilege, we will check that allow policies list exactly the actions that the identity we are granting access to needs to execute. For example, we will use s3:GetObject or s3:PutObject, but avoid s3:*, which allows all actions.
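
Following that idea, a minimal sketch of an identity policy restricted to exactly the actions an application needs (names and paths are illustrative) could look like this:

  import json
  import boto3

  iam = boto3.client("iam")

  # Identity policy allowing only the two S3 actions the application
  # needs, instead of the catch-all "s3:*".
  policy_document = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Action": ["s3:GetObject", "s3:PutObject"],
              "Resource": "arn:aws:s3:::example-bucket/app-data/*",  # hypothetical path
          }
      ],
  }

  iam.create_policy(
      PolicyName="example-app-s3-least-privilege",  # illustrative name
      PolicyDocument=json.dumps(policy_document),
  )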

4 – Enable GuardDuty to detect suspicious activity in S3 buckets

The GuardDuty service continuously monitors our buckets for possible security incidents. It allows us to detect requests from unusual sources, strange patterns of API calls trying to discover misconfigured buckets, and more.

GuardDuty generates findings to notify the security team, which can also be used to trigger an automated response to security incidents.
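
As a sketch, enabling a GuardDuty detector with S3 protection from boto3 could look like this (the publishing frequency is just an example value):

  import boto3

  guardduty = boto3.client("guardduty")

  # Create a detector with S3 protection enabled so that S3 data events
  # are analyzed for suspicious activity.
  response = guardduty.create_detector(
      Enable=True,
      DataSources={"S3Logs": {"Enable": True}},
      FindingPublishingFrequency="FIFTEEN_MINUTES",
  )
  print("Detector ID:", response["DetectorId"])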

5 – Use Amazon Macie to detect sensitive content

Macie uses machine learning to detect sensitive content in our buckets. By activating Macie at the organization level, we obtain a centralized console where we can evaluate our data and be alerted when it is public, unencrypted, or shared outside our organization.
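
A minimal sketch of turning Macie on for the current account with boto3 (the frequency is an example value):

  import boto3

  macie = boto3.client("macie2")

  # Enable Macie for this account; findings will start appearing in the
  # Macie console and can be aggregated at the organization level.
  macie.enable_macie(
      status="ENABLED",
      findingPublishingFrequency="FIFTEEN_MINUTES",
  )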

6 – Encrypt your data

It is vitally important that our data is encrypted at rest. Amazon S3 provides four methods to encrypt our data (a sketch of enabling default encryption follows the list):

  • SSE-S3 makes use of cryptographic keys managed by Amazon.
  • SSE-KMS uses the KMS service to encrypt/decrypt our data, which allows us to set permissions on who can use the encryption keys, log every action performed and use our own keys or those of Amazon.
  • SSE-C, with which we must store and manage our own keys.
  • Finally, we can use client-side encryption to encrypt and decrypt our data ourselves before uploading it to, or after downloading it from, S3.
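
As a sketch of the SSE-KMS option, default encryption can be set on a bucket like this (the bucket name and KMS key ARN are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Make SSE-KMS the default encryption for every new object in the bucket.
  s3.put_bucket_encryption(
      Bucket="example-bucket",
      ServerSideEncryptionConfiguration={
          "Rules": [
              {
                  "ApplyServerSideEncryptionByDefault": {
                      "SSEAlgorithm": "aws:kms",
                      "KMSMasterKeyID": "arn:aws:kms:eu-west-1:111122223333:key/example-key-id",
                  },
                  "BucketKeyEnabled": True,  # reduces KMS request costs
              }
          ],
      },
  )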

7 – Protect your data from accidental deletion

Amazon provides 99.999999999% (11 nines) durability for objects in the standard storage class, which are stored in at least 3 different Availability Zones.

This does not prevent an accidental deletion from making your data disappear, so we have several options to protect against it (see the sketch after this list):

  • Object versioning: a delete only adds a delete marker instead of permanently removing the object, and overwrites create new versions. This allows us to quickly recover any previous version of the object.
  • MFA Delete: requires a second authentication factor before a version can be permanently deleted.
  • S3 Object Lock: activates the WORM (write-once-read-many) model, so that objects are write-protected and cannot be deleted or overwritten.
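
As a sketch of the first and third options: versioning can be enabled on an existing bucket, while Object Lock is enabled when the bucket is created (bucket names and region are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Enable versioning on an existing bucket: deletes now add a delete
  # marker and previous versions remain recoverable. (MFA Delete can
  # additionally be configured by the root user with an MFA device.)
  s3.put_bucket_versioning(
      Bucket="example-bucket",
      VersioningConfiguration={"Status": "Enabled"},
  )

  # Object Lock is enabled at bucket creation time; this also turns
  # versioning on for the new bucket.
  s3.create_bucket(
      Bucket="example-locked-bucket",
      CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
      ObjectLockEnabledForBucket=True,
  )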

8 – Enable S3 Access Logs

Amazon S3 integrates with CloudTrail: every S3 API call can be logged as a data event and sent to CloudWatch for later analysis. CloudTrail can be enabled globally for the entire organization, and S3 additionally offers its own server access logs, so it is recommended that our critical buckets have this logging enabled.
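
A minimal sketch, assuming an existing trail and bucket names that are hypothetical, of enabling both CloudTrail S3 data events and S3 server access logs:

  import boto3

  cloudtrail = boto3.client("cloudtrail")
  s3 = boto3.client("s3")

  # Log every object-level S3 API call on the bucket as CloudTrail data events.
  cloudtrail.put_event_selectors(
      TrailName="example-trail",  # hypothetical existing trail
      EventSelectors=[
          {
              "ReadWriteType": "All",
              "IncludeManagementEvents": True,
              "DataResources": [
                  {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::example-bucket/"]}
              ],
          }
      ],
  )

  # Additionally, enable S3 server access logging to a dedicated log bucket.
  s3.put_bucket_logging(
      Bucket="example-bucket",
      BucketLoggingStatus={
          "LoggingEnabled": {
              "TargetBucket": "example-log-bucket",  # hypothetical log bucket
              "TargetPrefix": "access-logs/",
          }
      },
  )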

9 – Make a backup of your S3 data

Keep at least one backup of your critical data in more than one destination.

AWS provides Cross-Region Replication (CRR) functionality, with which we can completely replicate a bucket to another region. If an object is deleted in the source bucket, the copy is kept in the destination bucket.
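
A minimal sketch of a CRR rule, assuming versioning is already enabled on both buckets and that a replication IAM role exists (all names are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Replicate every object in the source bucket to a bucket in another
  # region. Versioning must already be enabled on both buckets.
  s3.put_bucket_replication(
      Bucket="example-bucket",
      ReplicationConfiguration={
          "Role": "arn:aws:iam::111122223333:role/example-replication-role",
          "Rules": [
              {
                  "ID": "replicate-everything",
                  "Status": "Enabled",
                  "Priority": 1,
                  "Filter": {},  # empty filter = the whole bucket
                  "DeleteMarkerReplication": {"Status": "Disabled"},
                  "Destination": {"Bucket": "arn:aws:s3:::example-backup-bucket"},
              }
          ],
      },
  )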

10 – Monitor S3 using Security Hub

Security Hub provides us with a global console where we can view the security status of our AWS accounts.

We can enable sets of compliance rules (security standards) that help us ensure our resources follow best-practice configurations. The S3 service benefits from these checks, which evaluate whether our buckets have Block Public Access enabled, encryption at rest, encryption in transit, and so on.
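
A minimal sketch of enabling Security Hub with its default standards from boto3:

  import boto3

  securityhub = boto3.client("securityhub")

  # Enable Security Hub in this account and subscribe to the default
  # security standards, whose controls include the S3 checks mentioned above.
  securityhub.enable_security_hub(EnableDefaultStandards=True)

  # List the standards that are now active.
  for subscription in securityhub.get_enabled_standards()["StandardsSubscriptions"]:
      print(subscription["StandardsArn"], subscription["StandardsStatus"])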

Conclusions

As we have seen, with these tips we can build a robust security strategy for our buckets: keeping the information protected against unauthorized access, encrypting our data, logging every activity that takes place in the buckets, and having a backup in case of disaster.

AWS provides us with a large number of tools and possibilities to help us do this, so we must know what they offer and how to configure them correctly.
