A temporary fix for Logstash S3 input authentication
A while ago I ran into an issue with Logstash and the ‘logstash-input-s3’ plugin: the manual authentication method didn't work properly.
My original configuration looked like this:
```
input {
  s3 {
    bucket => "mybucketname-logs-cloudtrail"
    access_key_id => "ACCESS_KEY_HERE"
    secret_access_key => "SECRET_KEY_HERE"
    region => "eu-west-1"
    codec => "cloudtrail"
    type => "cloudtrail"
    prefix => "AWSLogs/AWS_ACCOUNT_ID_HERE/CloudTrail/"
    temporary_directory => "/tmp/temp-cloudtrail_s3_temp"
    sincedb_path => "/tmp/temp-cloudtrail_s3_sincedb"
    debug => "true"
  }
}

output {
  elasticsearch {
    host => "ELASTICSEARCH_URL_HERE"
    protocol => "http"
  }
  stdout {
    codec => "rubydebug"
  }
}
```
This configuration didn't work (the problem was later reported as a bug), and it caused me a lot of headaches.
After much faffing about, and trial and error, the following configuration worked.

Within the logstash.conf file:
```
input {
  s3 {
    bucket => "evision-logs-cloudtrail"
    delete => false
    interval => 60 # seconds
    prefix => "AWSLogs/AWS_ACCOUNT_ID_HERE/CloudTrail/"
    type => "cloudtrail"
    codec => "cloudtrail"
    credentials => "/etc/logstash/s3_credentials.ini"
    sincedb_path => "/tmp/temp-cloudtrail_s3_sincedb"
  }
}

output {
  elasticsearch {
    host => "ELASTICSEARCH_URL_HERE"
    protocol => "http"
  }
  stdout {
    codec => "rubydebug"
  }
}
```
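Before restarting the service, it's worth checking that the edited file actually parses. A minimal sketch, assuming the Logstash 2.x package layout (the `/opt/logstash` path and config location are assumptions; adjust for your install):

```shell
# Validate the pipeline file before restarting Logstash.
# --configtest is the 2.x flag; newer releases use --config.test_and_exit.
# The binary path below is an assumption based on the 2.x package layout.
LS_BIN=/opt/logstash/bin/logstash
if [ -x "$LS_BIN" ]; then
  "$LS_BIN" --configtest -f /etc/logstash/logstash.conf
else
  echo "logstash binary not found at $LS_BIN - adjust LS_BIN for your install" >&2
fi
```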
As you can see, I created a separate “/etc/logstash/s3_credentials.ini” file.
It should be stressed that this is marked as a deprecated setup and will be removed at some point in the future. I have had no issues with it up to Logstash version 2.2, so it works. I'll soon be testing it with v2.3.1 as well.
Within the s3_credentials.ini file:
```
AWS_ACCESS_KEY_ID=PUTMYACCESSKEYHERE
AWS_SECRET_ACCESS_KEY=PUTMYSECRETKEYHERE
```
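One way to create that file is sketched below. It writes to the current directory for illustration (on the Logstash host it belongs at /etc/logstash/s3_credentials.ini, owned by the Logstash service user), and locks the permissions down since it holds live AWS keys. A handy side effect of the `KEY=value` format is that the same file is valid shell, so other scripts can source it:

```shell
# Create the credentials file and make sure only its owner can read the keys.
# Written to the current directory here for illustration; move it to
# /etc/logstash/s3_credentials.ini on the Logstash host.
cat > s3_credentials.ini <<'EOF'
AWS_ACCESS_KEY_ID=PUTMYACCESSKEYHERE
AWS_SECRET_ACCESS_KEY=PUTMYSECRETKEYHERE
EOF
chmod 600 s3_credentials.ini   # keys should not be world-readable

# The KEY=value lines are also valid shell, so the same file can feed other
# tools: 'set -a' exports every variable assigned while sourcing.
set -a
. ./s3_credentials.ini
set +a
echo "$AWS_ACCESS_KEY_ID"   # prints PUTMYACCESSKEYHERE
```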
This was originally to work around an issue I reported to Elastic here, and on GitHub here.
Again, I hope this is useful to someone.
If you found it useful, then why not leave a comment! 😉