A Guide to S3 Logging

March 29, 2023

12/13/23: This post has been updated with details on leveraging Date Partitioning.
11/20/23: AWS has launched Date Partitioning for S3 Access Logs. I strongly recommend enabling this for your buckets. Going forward, it should enable Athena (and other) queries without post-processing.

Here’s what you should do about S3 Logging

  1. Turn on S3 Access Logs, with event-time Date Partitioning, for any non-public bucket. Save them for a rainy day, at which point you can query them with Athena (somewhat painfully at scale).
  2. Use CloudTrail Data Events with Event Selectors on buckets of reasonable volume where you require detective controls on reads or writes.
  3. If you have a high-volume bucket that needs detective controls, you’ll want to deploy a Lambda to process S3 Access Logs as they’re delivered.

About S3 Logging

S3, the first generally available AWS service, is over 15 years old. It stores over 100 trillion objects.

While S3 is well documented, good answers on logging are buried in the noise. Google, and you’ll find 100 blog posts, a half dozen AWS-published documents, and few clear answers.

Starting with the basics, there are three types of logs available for S3:

  1. CloudTrail management events, which cover bucket-level (control plane) operations like CreateBucket and PutBucketPolicy
  2. CloudTrail Data Events, which cover object-level (data plane) operations like GetObject and PutObject
  3. S3 Server Access Logs, S3’s native, web-server-style request logs, delivered to another bucket

Pro Tip: if you want to get tricky, you can set up SSE-KMS with a customer-managed key for S3. Because S3 must call KMS to decrypt on every read, the resulting KMS events in CloudTrail provide an alternative record of data plane activity (h/t Aidan Steele).
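
As a sketch of that setup, assuming a hypothetical bucket and key ARN (and noting that S3 Bucket Keys would undercut the trick by batching KMS calls):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key ARN; substitute your own.
# BucketKeyEnabled=False matters here: S3 Bucket Keys batch KMS calls,
# which would collapse per-object Decrypt events into far fewer log entries.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
                },
                "BucketKeyEnabled": False,
            }
        ]
    },
)
# Every object read now triggers a kms:Decrypt, which lands in CloudTrail.
```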

An unnecessary defense of logging

This post is not intended to convince you that logging is important. In short, logging is an obvious practice: recommended by AWS for S3 specifically, and required to varying extents by every compliance scheme and regulatory standard (e.g., NIST SP 800-53 Rev. 5).

However, it is worth assessing what your goals are for the logs you’re collecting. Are you gathering them to detect attackers? To support forensics? To enable threat hunting? Or just to check a compliance box?

Comparing S3 Logs

No matter your goals, you should set up CloudTrail. You should do it using an Organization Trail. You should log management events. Few controls in AWS are as universally applicable.
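
If you’re starting from zero, a minimal boto3 sketch of that baseline, assuming you run it from the organization’s management (or delegated administrator) account, with placeholder names:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# The destination bucket must already have a policy allowing CloudTrail delivery.
cloudtrail.create_trail(
    Name="org-trail",                      # placeholder name
    S3BucketName="my-org-cloudtrail-logs", # placeholder bucket
    IsOrganizationTrail=True,              # one trail covering every account in the org
    IsMultiRegionTrail=True,               # capture management events from all regions
)
cloudtrail.start_logging(Name="org-trail")  # trails do not log until started
```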

It gets more complicated when you start looking at CloudTrail Data Events and Server Access Logs. AWS’s “Logging options for Amazon S3” offers the best holistic comparison.

[Screenshot: the comparison table from AWS’s “Logging options for Amazon S3” documentation]

Data Events vs. Server Access Logs

| Data Events | S3 Server Access Logs |
| --- | --- |
| Support granular Event Selectors | Offer detail that is not available in Data Events |
| N/A | Log the object size and HTTP Referer |
| Configured via CloudTrail, and can easily be managed centrally | Have to be configured on each bucket |
| Logs can be routed to various and multiple systems | Can only log to another S3 bucket in the same region and account |
| Reliable and fast delivery (under 5 minutes, often much sooner) | Delivery is best effort, “within a few hours” |
| Generally well supported as a data source in your SIEM of choice | Require a Lambda for detective controls [1]; Date Partitioning (once enabled, post-2023) enables querying |
| Cost per Data Event, for storage (both in AWS and your SIEM), and often for SIEM ingestion | Cost only for storage in S3 |

[1] See “Working with Server Access Logs” below.

Mixed messages

Part of the confusion stems from the lack of a clear best-of-breed solution from AWS. By default, your account will have neither Data Events nor S3 Server Access Logs enabled. There is plenty of bad guidance floating around that, in summary, says “turn it all on.” That approach disregards and disrespects the real-world implications of cost, complexity, and the work required to operationalize these tools.

Even AWS’s own resources give mixed recommendations.

… and yes, so does ChatGPT

[Screenshot: ChatGPT recommending enabling all logging options]

In practice

Up top, we presented the guidelines for S3 logging. Applied, this might look like the following:

  1. To proactively enforce any control in the cloud, you should be codifying your infrastructure (e.g., with Terraform). This is key to mediating and auditing changes. In such an environment, you should implement standard modules for key services like S3; these modules offer an opportunity to set sane defaults and shared configuration.
    For S3, you can configure your module to enable S3 Access Logs by default (see the boto3 sketch after this list for the underlying API call). Take care: you’ll need to handle each account:region pair where you will be hosting S3 buckets, since access logs can only be delivered within the same account and region.

  2. With your buckets represented as code, you can use a SAST tool or linter to:
    a. detect use of the raw s3_bucket module instead of your hardened module
    b. prompt developers with guidance on when and how to configure CloudTrail data events
    c. detect when CloudTrail data events are enabled on a new bucket

  3. When you detect new buckets with CloudTrail data events enabled, take the opportunity to write the appropriate detections for unapproved or unexpected access.

  4. If you need detective controls on top of S3 Access Logs, you should set up a Lambda trigger on the logging bucket. The Lambda will need to parse each log file and implement detection logic over the entries (a toy parser appears under “Working with Server Access Logs” below). Reach out in CloudSecForum if you need sample code!

  5. To backstop your code-based detections, you can also rely on your Cloud Security Posture Management (or equivalent buzzword) tool to enforce an invariant on your desired logging.
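
As referenced in item 1, here is what that module default boils down to at the API level: a minimal sketch using boto3 (bucket names are placeholders; TargetObjectKeyFormat requires a reasonably recent boto3):

```python
import boto3

s3 = boto3.client("s3")

# The target bucket must be in the same account and region as the source bucket.
s3.put_bucket_logging(
    Bucket="my-app-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-access-logs-bucket",
            "TargetPrefix": "s3-access/my-app-bucket/",
            # Event-time Date Partitioning (launched at re:Invent 2023): logs are
            # keyed by when the request happened, not when the log was delivered.
            "TargetObjectKeyFormat": {
                "PartitionedPrefix": {"PartitionDateSource": "EventTime"}
            },
        }
    },
)
```

Recent versions of Terraform’s AWS provider expose the same setting on the aws_s3_bucket_logging resource, so your module default can stay pure HCL.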

Don’t forget to watch out for:

  1. CloudTrail Data Events get expensive fast: you pay for the data events themselves, for storage across both CloudTrail and your SIEM, and then for SIEM ingestion and processing.
  2. It can never be mentioned enough, but it’s easy to get bitten by cyclic logging (e.g., a bucket configured to deliver its access logs to itself, or data events recorded on your log buckets), especially if you move to a log-by-default posture.
  3. S3 Server Access Logs are easy to stow for a rainy day; make sure to enable Date Partitioning, or they’ll be hard to use in a pinch.

Working with Server Access Logs

By default, Server Access Logs are placed directly in the configured prefix, with object keys of the form TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString. This means that AWS Athena (and similar tools) cannot partition the data and must scan the entire set for every query. AWS documentation elides this complexity.

Date Partitioning: Date Partitioning for S3 Access Logs was launched at re:Invent 2023. I recommend enabling it in all cases, generally with event-time prefixes, to support accurate retroactive searching by activity time.

For logs stored without Date Partitioning: AWS has documented an ELT pattern built on Glue. As an alternative, I recommend using a Lambda trigger to automate moving log files under a daily (or more granular) prefix, which allows more trivial analysis with Athena; a sketch follows. You could also explore rolling up the log files.
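
Here is a minimal sketch of that Lambda, assuming an s3:ObjectCreated:* trigger on the logging bucket; the dt= prefix convention is my own illustration, not a fixed standard:

```python
import re
import urllib.parse

import boto3

s3 = boto3.client("s3")

# Delivered keys look like: TargetPrefixYYYY-mm-DD-HH-MM-SS-UniqueString
KEY_RE = re.compile(r"^(?P<prefix>.*?)(?P<date>\d{4}-\d{2}-\d{2})-(?P<rest>.+)$")

def handler(event, context):
    """Move each delivered access log under a dt=YYYY-MM-DD/ prefix."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if "dt=" in key:
            continue  # already moved; the copy below re-fires this trigger
        m = KEY_RE.match(key)
        if not m:
            continue  # not an access log delivery
        # Note: the date in the key is the delivery time, not the event time.
        new_key = f"{m['prefix']}dt={m['date']}/{m['date']}-{m['rest']}"
        s3.copy_object(Bucket=bucket, Key=new_key,
                       CopySource={"Bucket": bucket, "Key": key})
        s3.delete_object(Bucket=bucket, Key=key)
```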

You could also directly post-process these logs, starting from an example parser like the one from Dassana.
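
For a sense of what post-processing involves, here is a toy parser and detection in Python. The field order follows AWS’s documented server access log format (real lines carry more trailing fields), and the “suspicious” logic is purely a placeholder:

```python
import re

# Leading fields of a server access log line: the timestamp is bracketed
# and the Request-URI is quoted, so plain splitting on spaces won't do.
LOG_LINE = re.compile(
    r'(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] (?P<remote_ip>\S+) '
    r'(?P<requester>\S+) (?P<request_id>\S+) (?P<operation>\S+) (?P<key>\S+) '
    r'"(?P<request_uri>[^"]*)" (?P<status>\S+) (?P<error_code>\S+)'
)

def is_suspicious(line: str) -> bool:
    """Toy detection: flag object reads from outside a made-up internal range."""
    m = LOG_LINE.match(line)
    if not m:
        return False
    return (m["operation"] == "REST.GET.OBJECT"
            and not m["remote_ip"].startswith("10."))
```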

S3 Logging

When planning your logging, consider the following characteristics of each bucket:

  • Volume of writes expected
  • Volume of reads expected
  • Risk of unauthorized writes
  • Risk of unauthorized reads
  • Frequency of human/user access

Then, based on those parameters, proceed:

  1. Turn on S3 Access Logs, with event-time Date Partitioning, for any non-public bucket. Save them for a rainy day, at which point you can query them with Athena (somewhat painfully at scale).
  2. Use CloudTrail Data Events with Event Selectors on buckets of reasonable volume where you require detective controls on reads or writes (a sketch of scoping selectors follows this list).
  3. If you have a high-volume bucket that needs detective controls, you’ll want to deploy a Lambda to process S3 Access Logs as they’re delivered.
  4. To actually allow querying of S3 Access Logs stored without Date Partitioning, invest in post-processing.
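
As promised in item 2, a minimal sketch of scoping Data Events to a single bucket with advanced event selectors via boto3 (trail and bucket names are placeholders):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Scope object-level logging to one sensitive bucket rather than the whole account.
cloudtrail.put_event_selectors(
    TrailName="org-trail",  # placeholder; e.g., the organization trail from earlier
    AdvancedEventSelectors=[
        {
            "Name": "Object-level events for one sensitive bucket",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
                {"Field": "resources.ARN",
                 "StartsWith": ["arn:aws:s3:::my-sensitive-bucket/"]},
            ],
        }
    ],
)
```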

Thank you to:

  • Erik Steringer for prompting the inciting discussion in CloudSecForum, and all those who participated in the thread
  • Aidan Steele for the feedback!