Automating the AWS CloudWatch logs retention period for all log groups

CloudWatch Logs can become expensive if you keep logs forever. You have the option of exporting them to S3 for later use, but even then you may not want to retain logs in CloudWatch past a certain age. You can achieve this by individually changing the retention period of each log group.
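For a single log group, this is one API call. A minimal sketch with boto3 (the log group name below is just a placeholder):

import boto3

logs = boto3.client('logs')

# set a 90-day retention policy on one log group
# (the group name is a placeholder — use your own)
logs.put_retention_policy(
    logGroupName='/aws/lambda/my-function',
    retentionInDays=90
)

Doing this by hand for dozens of log groups gets tedious, which is where the automation below comes in.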

Workaround

You can use an EventBridge rule that triggers a Lambda function to automate this process.

Steps:

  1. Create a Lambda function with a Python runtime. Add the following code in the code editor:
import json
import boto3

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    client = boto3.client('logs')
    marker = None
    while True:
        # describe_log_groups returns at most 50 groups per call,
        # so follow nextToken until every page has been read
        if marker:
            response = client.describe_log_groups(nextToken=marker)
        else:
            response = client.describe_log_groups()
        for item in response['logGroups']:
            # apply a 90-day retention policy to every log group
            client.put_retention_policy(
                logGroupName=item['logGroupName'],
                retentionInDays=90
            )
            print(item['logGroupName'] + ": retention period changed")
        if "nextToken" in response:
            print("Next Token : {}".format(response['nextToken']))
            marker = response['nextToken']
        else:
            return {
                'statusCode': 200,
                'body': json.dumps('Retention policy updated for all log groups')
            }
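As a design note, boto3 can also handle the nextToken bookkeeping for you via its built-in paginator, which makes the loop a little shorter. A minimal sketch of the same logic:

import boto3

def lambda_handler(event, context):
    client = boto3.client('logs')
    # the paginator follows nextToken across pages automatically
    paginator = client.get_paginator('describe_log_groups')
    for page in paginator.paginate():
        for group in page['logGroups']:
            client.put_retention_policy(
                logGroupName=group['logGroupName'],
                retentionInDays=90
            )
    return {'statusCode': 200, 'body': 'done'}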

  2. Add the arn:aws:iam::aws:policy/CloudWatchFullAccess policy to your Lambda execution role.
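CloudWatchFullAccess works, but the function only calls two actions: logs:DescribeLogGroups and logs:PutRetentionPolicy. If you prefer least privilege, here is a sketch of an equivalent inline policy, built as a Python dict so you can json.dumps it into the role:

import json

# minimal inline policy covering only what the function calls
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:DescribeLogGroups",
            "logs:PutRetentionPolicy"
        ],
        "Resource": "*"
    }]
}
print(json.dumps(policy, indent=2))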

  3. Add an EventBridge trigger using the AWS Lambda console with Schedule expression: cron(0 18 ? * FRI *) and Event bus: default.

This will trigger the Lambda function every Friday at 6:00 PM UTC (EventBridge cron expressions are evaluated in UTC).
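If you would rather script this step than use the console, the same rule can be created with boto3. A sketch, with placeholder names and a placeholder function ARN:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# placeholder values — replace with your own
rule_name = 'set-log-retention-weekly'
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:set-log-retention'

# create (or update) the scheduled rule on the default event bus
rule = events.put_rule(
    Name=rule_name,
    ScheduleExpression='cron(0 18 ? * FRI *)',
    State='ENABLED'
)

# point the rule at the Lambda function
events.put_targets(
    Rule=rule_name,
    Targets=[{'Id': '1', 'Arn': function_arn}]
)

# allow EventBridge to invoke the function
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='allow-eventbridge-weekly',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)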

Limitations

The above code is region-specific: it only updates log groups in the region where the Lambda function runs. If you want it to update all regions in your account, then:

  1. Create a list of all regions and pass each one to boto3.client('logs', region_name=...). The list can be built programmatically:
from boto3.session import Session

s = Session()
# all regions where CloudWatch Logs is available
logs_regions = s.get_available_regions('logs')
  2. Loop over the function and the region list, passing each region to boto3 so it can update the retention period in every region (see the sketch below).
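Putting both steps together, a minimal sketch of the multi-region version, assuming the same 90-day retention (the try/except skips regions your account has not opted into, where the API calls would fail):

import boto3
from boto3.session import Session

def lambda_handler(event, context):
    session = Session()
    # loop over every region where CloudWatch Logs is offered
    for region in session.get_available_regions('logs'):
        client = session.client('logs', region_name=region)
        try:
            paginator = client.get_paginator('describe_log_groups')
            for page in paginator.paginate():
                for group in page['logGroups']:
                    client.put_retention_policy(
                        logGroupName=group['logGroupName'],
                        retentionInDays=90
                    )
        except Exception as e:
            # e.g. opt-in regions that are disabled for the account
            print("Skipping {}: {}".format(region, e))
    return {'statusCode': 200}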