I wanted a simple solution for backing up my webserver, and the low cost of AWS S3 and Glacier is quite appealing. The solution I settled on was to use s3cmd (http://s3tools.org/s3cmd). After setting up an API key, I ran;

s3cmd --configure

and filled in the information it asks for.
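
A quick way to sanity-check the configuration is to create the bucket and then list it (serverbackup is the bucket name the script below uploads to):

s3cmd mb s3://serverbackup
s3cmd ls s3://serverbackup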

I created a new MySQL user that can read all tables for the backup, and the backup user also has read access to the /var/www directory.
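
The exact statements depend on your MySQL version, but a read-only user for mysqldump needs something along these lines (the user name and password here are the same placeholders used in the script):

mysql -u root -p <<'SQL'
CREATE USER 'user-BUP'@'localhost' IDENTIFIED BY 'PASSWORD';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON *.* TO 'user-BUP'@'localhost';
FLUSH PRIVILEGES;
SQL

With that user in place, this script runs nightly;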

#!/bin/sh

DATE=`date +%Y-%m-%d`

#make db backup and compress
mysqldump -uuser-BUP -pPASSWORD --all-databases --result-file=/home/user-BUP/database_BUP/all_databases_$DATE.sql
gzip /home/user-BUP/database_BUP/all_databases_$DATE.sql

#transfer to S3
s3cmd put --storage-class=STANDARD_IA /home/user-BUP/database_BUP/all_databases_$DATE.sql.gz s3://serverbackup

#remove db dump as we will have loads of them
rm /home/user-BUP/database_BUP/all_databases_$DATE.sql.gz

#compress websites
tar -czvf /home/user-BUP/database_BUP/all_websites_$DATE.tar.gz /var/www/html

#transfer websites to S3
s3cmd put --storage-class=STANDARD_IA /home/user-BUP/database_BUP/all_websites_$DATE.tar.gz s3://serverbackup

#remove website compress too as we will have loads of them and these will be large
rm /home/user-BUP/database_BUP/all_websites_$DATE.tar.gz
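
One thing to remember before wiring the script into cron: make it executable first (the path here is the one used in the crontab entry below):

chmod +x /home/mjv08/database_BUP/mysql_dump_script.sh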

So now that we can transfer data to S3, I added the script as a cron job;

0 00 * * * /home/mjv08/database_BUP/mysql_dump_script.sh > /home/mjv08/database_BUP/db_backup.log
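
If you also want errors in the log (mysqldump and s3cmd write theirs to stderr), the same entry with stderr redirected would be:

0 00 * * * /home/mjv08/database_BUP/mysql_dump_script.sh > /home/mjv08/database_BUP/db_backup.log 2>&1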

With the data now backing up nightly to S3, I set up a lifecycle rule to automagically transition S3 data to Glacier after 14 days.
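
For reference, an equivalent rule created with the AWS CLI would look something like this (this assumes the aws CLI is installed and configured; the rule ID is just a label):

aws s3api put-bucket-lifecycle-configuration --bucket serverbackup --lifecycle-configuration '{
  "Rules": [{
    "ID": "archive-after-14-days",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}]
  }]
}'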


Finally, after a few days (well, after 14 days), a quick check confirmed that the transition worked and all looked great.
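
The storage class of each object can also be checked from the command line; something like this lists every key with its storage class (anything older than 14 days should show GLACIER):

aws s3api list-objects-v2 --bucket serverbackup --query 'Contents[].[Key,StorageClass]' --output table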
