AWS makes finding the size of an S3 bucket fairly unintuitive and hidden in the menus. Here’s how to find the total size, graph it in CloudWatch, or fetch it programmatically from the command line.
How to Find Bucket Size from the GUI
From the S3 Management Console, click on the bucket you wish to view. Under Management > Metrics > Storage, there’s a graph that shows the total number of bytes stored over time.
Additionally, you can view this metric in CloudWatch, along with the number of objects stored. You can use this to add the bucket size to a graph in a CloudWatch dashboard.
From the bucket overview page, you can also select all items and choose Actions > Get Total Size. But if the bucket has more than one page of items, you can't select everything at once, so the total won't reflect the bucket's actual size.
You can also view the bucket’s size from the Cost Explorer, because the billing department will, of course, have a very accurate measurement of your usage.
How to Get Bucket Size from the CLI
You can list the size of a bucket using the AWS CLI by passing the --summarize flag to aws s3 ls:

aws s3 ls s3://bucket --recursive --human-readable --summarize
This will loop over each item in the bucket and print the total number of objects and total size at the end. If you'd rather not flood your terminal with every filename in your bucket, you can pipe the output to tail:

aws s3 ls s3://bucket --recursive --human-readable --summarize | tail -2
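If you'd rather do the same count-and-sum programmatically, here's a rough sketch using boto3 (this assumes boto3 is installed and your AWS credentials are configured; "bucket" is a placeholder name):

```python
def summarize_pages(pages):
    """Sum object count and total size across list_objects_v2 result pages."""
    count = total = 0
    for page in pages:
        for obj in page.get("Contents", []):
            count += 1
            total += obj["Size"]
    return count, total

if __name__ == "__main__":
    import boto3  # assumes boto3 is installed and credentials are configured

    s3 = boto3.client("s3")
    # Paginate so buckets with more than 1,000 objects are fully counted
    pages = s3.get_paginator("list_objects_v2").paginate(Bucket="bucket")
    count, total = summarize_pages(pages)
    print(f"{count} objects, {total} bytes")
```

Like the CLI listing, this walks every object, so it will be slow on very large buckets.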
This will take a while if you have a very large bucket. You could use get-metric-data to fetch the size from CloudWatch instead, but the syntax is clunky.
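For reference, here's roughly what fetching the CloudWatch metric looks like from boto3, using the older but simpler get_metric_statistics call rather than get-metric-data (a sketch; it assumes boto3 is installed, credentials are configured, and the bucket's objects are in the StandardStorage class):

```python
def latest_value(datapoints):
    """Pick the most recent Average from a list of CloudWatch datapoints."""
    if not datapoints:
        return None
    return max(datapoints, key=lambda d: d["Timestamp"])["Average"]

if __name__ == "__main__":
    from datetime import datetime, timedelta, timezone
    import boto3  # assumes boto3 is installed and credentials are configured

    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": "bucket"},  # placeholder bucket name
            {"Name": "StorageType", "Value": "StandardStorage"},
        ],
        # The metric is only reported about once a day, so look back two days
        StartTime=now - timedelta(days=2),
        EndTime=now,
        Period=86400,
        Statistics=["Average"],
    )
    print(latest_value(resp["Datapoints"]))
```

This is much faster than listing every object, since CloudWatch has already done the summing for you.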
An easier method is to install s3cmd. It's not part of the AWS CLI, so you'll have to install it separately from your distro's package manager. On Debian-based systems like Ubuntu, that would be:
sudo apt-get install s3cmd
Once s3cmd is installed, you'll need to run the following command to link it to your account with your access key (you can generate a new one from "My Security Credentials"):

s3cmd --configure

Once it's configured, you can get the size of all of your buckets quickly with:
s3cmd du -H

5.708148956298828M  2 objects  s3://bucket/

This will display the size of large buckets much faster than recursively summing file sizes, as it fetches the actual disk space used. Plus, the output is human readable if you pass the -H flag, so you won't have to break out your calculator.