DynamoDB backups are non-trivial: when you choose Export in the AWS console you land in Data Pipeline. Looking into this process, Data Pipeline actually creates an Elastic MapReduce cluster to copy the data to S3, which is very convoluted just to do a simple backup.
Here are some open-source alternatives for DynamoDB backups:
Script to perform backups of all DynamoDB tables:
(
# List table names; jq can iterate the array directly, which replaces
# the fragile grep/tr/cut chain.
for i in $(aws dynamodb list-tables | jq -r '.TableNames[]');
do
  echo "======= Starting backup of $i $(date) =========="
  python dynamodump.py -m backup -r ap-southeast-2 -s "$i"
done
# dynamodump writes into ./dump; stamp the directory with today's date.
mv dump dump.$(date +%d.%m.%Y)
ls -lR dump.$(date +%d.%m.%Y)
aws s3 sync dump.$(date +%d.%m.%Y) s3://dynamodump
) > dynamodump.log 2>&1
cat dynamodump.log | aws logs push --log-group-name dynamodump --log-stream-name nightlydump
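The jq expression in the loop exists only to pull the table names out of the JSON document that `aws dynamodb list-tables` prints. As a minimal sketch of what it extracts (the sample JSON and table names below are made up for illustration):

```python
import json

# Hypothetical sample of what `aws dynamodb list-tables` returns.
sample = '{"TableNames": ["users", "orders", "sessions"]}'

# jq -r '.TableNames[]' emits each element of this list, one per line;
# the shell loop then backs up each table in turn.
tables = json.loads(sample)["TableNames"]
for name in tables:
    print(name)
```

This is also why `jq -r` (raw output) is used: the table names come out unquoted, ready to be used as shell arguments.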