Fixing permissions on CloudTrail S3 objects


Where I work we aggregate all of our AWS CloudTrail logs from separate accounts into a single S3 bucket in a central account.

Yesterday I ran into a weird problem: our logging solution, ELK, would not process any files dated before a certain point in time.

Upon further investigation, I discovered that the missing period was from before we moved all of our existing files into the bucket from the AWS account they originally lived in. “Simple!” I thought to myself, “I’ll just update the permissions and allow ELK access to the files!”

I was wrong; it was not simple. But it had to be fixed anyway…

To simplify this explanation, I will use the following names:
Account-A – the original account the files came from
Account-B – the second account the files went to (the bucket lives here)

As I discovered, there are a few issues at play here:

  1. The files were pushed to the bucket in Account-B, not pulled from Account-A. This means the files are still owned by Account-A.
  2. The files show no ‘grantee’ permissions.
  3. AWS has a counter-intuitive (but deliberate) policy where the owner of an object is not the owner of the bucket, but the entity/account/service that created the object.

As a result, Account-B (and all users/roles within it) cannot forcefully take ownership of the files, and is unable to modify their permissions.
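
You can see the problem from Account-B’s side by trying to read the ACL of an affected object. A minimal sketch using the AWS Tools for PowerShell, with a hypothetical sample key (any affected CloudTrail object will do):

# Hypothetical sample key; substitute any affected object
$key = "AWSLogs/ACCOUNT-B_ID/CloudTrail/AWS_REGION_HERE/2015/01/01/example.json.gz"

# Run as Account-B (the bucket owner): this fails with Access Denied,
# because the bucket owner holds no READ_ACP grant on an object owned by Account-A
Get-S3ACL -BucketName "my-cloudtrail-bucket" -Key $key -StoredCredentials "profilebcreds"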

I also had to consider that I was working with tens of thousands of files here, so fixing this manually was not an option. I’d have to script it for batch processing.

So, there were a few things I had to do:

  1. Modify the S3 bucket policy to allow Account-A to access the bucket (merging into the existing policy, as shown in the snippet after this list).
  2. Grant a user/role within Account-A access to the files, using a user policy.
  3. Have the granted user in Account-A add permissions for Account-B on every affected file in the bucket (not every file is affected).
  4. Have Account-B take ownership of the files, for future use.
  5. Remove Account-A’s access to the bucket.
  6. Win?
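
Before editing the bucket policy in step 1, I’d pull down whatever policy is already on the bucket so the new statement can be merged into its existing "Statement" array instead of clobbering it. A minimal sketch, assuming the bucket and stored-credential profile names used later in this post:

# Fetch the current bucket policy as Account-B and save it for editing
Get-S3BucketPolicy -BucketName "my-cloudtrail-bucket" -StoredCredentials "profilebcreds" | Out-File policy.json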


The S3 bucket policy in Account-B, updated with an additional statement allowing Account-A access:

{
  "Sid": "AllowAccountAAccess",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::ACCOUNT-A_ID:root"
  },
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my-cloudtrail-bucket/*",
    "arn:aws:s3:::my-cloudtrail-bucket"
  ]
}
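
This is just the new statement; it belongs inside the existing policy’s "Statement" array. Assuming the merged document is saved locally as policy.json (a hypothetical filename), pushing it back could look like this. Note that Write-S3BucketPolicy replaces the entire bucket policy, so the file must contain the full policy, not only the new statement:

# Apply the merged policy as Account-B; this REPLACES the whole bucket policy
Write-S3BucketPolicy -BucketName "my-cloudtrail-bucket" -Policy (Get-Content policy.json -Raw) -StoredCredentials "profilebcreds"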

Added an IAM policy for the user/role in Account-A:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Example",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::my-cloudtrail-bucket",
        "arn:aws:s3:::my-cloudtrail-bucket/*"
      ]
    }
  ]
}
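
Note that both the bucket ARN and the object ARN (the /* one) are needed: object-level actions like PutObjectAcl are authorized against the object ARN. To attach this as an inline policy to the user in Account-A, something like the following should do it (the user name, policy name, and userpolicy.json file are hypothetical; Write-IAMUserPolicy maps to the IAM PutUserPolicy API):

# Attach the policy inline to the Account-A user that will fix the ACLs
Write-IAMUserPolicy -UserName "s3-fixer" -PolicyName "cloudtrail-bucket-access" -PolicyDocument (Get-Content userpolicy.json -Raw) -StoredCredentials "profileacreds"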

I wrote a PowerShell script to take care of the problem (note that it uses the AWS Tools for PowerShell!), and thought I would share it online too:

$region = "eu-west-1"
$accountAcreds = "profileacreds"
$accountBcreds = "profilebcreds"
$s3cloudtrailbucket = "my-cloudtrail-bucket"
$s3folder = "AWSLogs/ACCOUNT-B_ID/CloudTrail/AWS_REGION_HERE/2015/01/01/"

Set-DefaultAWSRegion $region
# List the affected objects as Account-B, the bucket owner
$files = Get-S3Object -BucketName $s3cloudtrailbucket -KeyPrefix $s3folder -StoredCredentials $accountBcreds | select Key

foreach ($file in $files) {
    # Original key, plus a temporary key to copy it to
    $oldfile = $file.Key
    $newfile = "newfolder/" + $file.Key

    # Using Account-A credentials, set the ACL on the object in the remote bucket
    # (grants the bucket owner, Account-B, full access - but not ownership)
    Set-S3ACL -BucketName $s3cloudtrailbucket -Key $oldfile -CannedACLName "bucket-owner-full-control" -StoredCredentials $accountAcreds
    # Using Account-B credentials, copy the object to a new location in the same bucket
    # (the new copy is owned by Account-B)
    Copy-S3Object -BucketName $s3cloudtrailbucket -DestinationBucket $s3cloudtrailbucket -Key $oldfile -DestinationKey $newfile -StoredCredentials $accountBcreds

    # OPTIONAL:
    # Using Account-B credentials, copy the object over ITSELF in the same location,
    # OVERWRITING it (takes ownership in place)
    #Copy-S3Object -BucketName $s3cloudtrailbucket -DestinationBucket $s3cloudtrailbucket -Key $oldfile -DestinationKey $oldfile -StoredCredentials $accountBcreds

    # The same steps using the AWS CLI; not required if using the PowerShell cmdlets above
    # Set the ACL on the object in the remote bucket
    #aws s3api put-object-acl --bucket $s3cloudtrailbucket --key $oldfile --acl bucket-owner-full-control --profile $accountAcreds
    # Copy the object to a new location
    #aws s3 cp s3://$s3cloudtrailbucket/$oldfile s3://$s3cloudtrailbucket/$newfile --profile $accountBcreds

    # Final cleanup of objects.
    # Using Account-B credentials, copy the newly owned object back to its original location
    Copy-S3Object -BucketName $s3cloudtrailbucket -DestinationBucket $s3cloudtrailbucket -Key $newfile -DestinationKey $oldfile -StoredCredentials $accountBcreds
    # Using Account-B credentials, delete the temporary object
    Remove-S3Object -BucketName $s3cloudtrailbucket -Key $newfile -StoredCredentials $accountBcreds -Force
}
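
Once everything is re-owned, don’t forget step 5: remove the temporary Account-A access again. If the "AllowAccountAAccess" statement was the only statement in the bucket policy, deleting the policy outright is enough; otherwise, re-apply the policy with that statement stripped out. A sketch of the simple case, reusing the variables from the script above:

# Remove the temporary bucket policy now that Account-A no longer needs access.
# Only safe if AllowAccountAAccess was the policy's only statement; otherwise,
# rewrite the policy without it using Write-S3BucketPolicy instead.
Remove-S3BucketPolicy -BucketName $s3cloudtrailbucket -StoredCredentials $accountBcreds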

So, did it work?

Yes it did. I processed tens of thousands of CloudTrail files (the bucket totals 110,000+ objects) in a few hours, and they now show up in ELK, ready for use.

I hope this is useful to someone.

Leave a comment if you found this useful.
