Cedeus DB backups


>> return to Cedeus_IDE


How to set up Automated Backups

The objective of this exercise is to have an automated backup process for user profiles and user-contributed data, with the backups copied to a portable medium at least once a week.

General Workflow to Create the Backups

The backup process consists of several steps. Usually these are:

  1. create a script that contains commands to
    • create a database dump or tar/zip the files in a particular folder
    • copy this dump file or zip archive to another machine from where it can easily be copied to a portable medium, e.g. tape
  2. create a crontab entry that runs the backup script(s) at some set interval, e.g. each night at 1am

Below are some personal notes on how to set things up:

Notifications

To get notified about the backups via email, the shell script may send emails via "mailx", i.e. Nail => see http://klenwell.com/press/2009/03/ubuntu-email-with-nail/

Postfix may work as well.

=> ToDo: Install mail program
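
Once a mail program is installed, a notification could be sent at the end of a backup script, for instance like this (a minimal sketch; the subject line and recipient address are placeholders):

# send the last lines of the backup log as a notification email
tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx -s "CedeusDB backup report" admin@example.com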

Example: cron Job that makes a Dump of the GeoNode DB

General info on how to create a crontab can be found here: https://help.ubuntu.com/community/CronHowto

  • create a shell script that contains the pg_dump instructions - see for example /home/ssteinig/pgdbbackup.sh on CedeusDB
  • test whether the script execution actually works. A simple script for testing may be this one (/home/ssteinig/touchy.sh):
#!/bin/bash 
touch /home/ssteinig/ftw.text
  • create a crontab entry for user ssteinig with "crontab -e",
    then add an entry such as "00 01 * * * sh /home/ssteinig/geonodegisdb93backup.sh" to run the dump script daily at 1am
    => if the user "postgres" is used to do the db dump, the crontab entry should be created for that user instead
  • check if cron is running with "sudo service cron status"; otherwise start it...
  • to see what the crontab contains, use "crontab -l"
  • to check if a cron job is executed, check the log: "sudo tail -f /var/log/syslog"

Dump example script geonodegisdb93backup.sh

#!/bin/bash
logfile="/home/ssteinig/geonode_db_backups/pgsql.log"
backup_dir="/home/ssteinig/geonode_db_backups"
touch "$logfile"

echo "Starting backup of databases " >> "$logfile"
dateinfo=$(date '+%Y-%m-%d %H:%M:%S')
timeslot=$(date '+%Y%m%d-%H%M')
# analyze the db first so statistics are up to date, then dump it in custom format (-F c)
/usr/bin/vacuumdb -z -h localhost -U postgres geonodegisdb93 >/dev/null 2>&1
/usr/bin/pg_dump -U postgres -i -F c -b geonodegisdb93 -h 127.0.0.1 -f "$backup_dir/geonodegisdb93-backup-$timeslot.backup"
echo "Backup and Vacuum complete on $dateinfo for database: geonodegisdb93 " >> "$logfile"
echo "Done backup of databases " >> "$logfile"
# sstein: email notification not used at the moment
# tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx blabla@blub.cl

This example is based on the shell script posted here: http://stackoverflow.com/questions/854200/how-do-i-backup-my-postgresql-database-with-cron . For a better Postgres dump script it may be worth looking here: https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

File transfer

To transfer files, I decided to create a new cedeus backup user on the receiving computer (20xxb...p).

A file transfer can be accomplished using scp or rsync e.g.:

  • "scp /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/"
    • However, an ssh key should be generated first so no password needs to be provided. A detailed description can be found at: http://troy.jdmz.net/rsync/index.html. However, later on I used this description: http://blogs.oracle.com/jkini/entry/how_to_scp_scp_and .
    • In short, do "ssh-keygen -t rsa -b 2048 -f /home/thisuser/cron/thishost-rsync-key". But do not provide a passphrase when generating the key, otherwise it will be asked for every time a connection is established.
    • Then copy the public key to the .ssh/ folder of the user on the other server (e.g. using scp), and add it to authorized_keys using "cat blabla_key.pub >> authorized_keys" (note, the permissions of authorized_keys should be restrictive, e.g. chmod 600, and one can optionally restrict the incoming IP - see http://troy.jdmz.net/rsync/index.html).
    • Then we would use "scp -i /home/ssteinig/cron/thishost-rsync-key /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/"
    • Note that it is probably necessary to initialize a server connection once (with whatever file), so that the host's ECDSA key fingerprint gets added to known_hosts.
  • for the use of rsync see the section below on "sync with CedeusGIS1"
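
Putting the steps together, a minimal sketch of the key setup and a passwordless copy (user, host, and paths follow the examples above and are placeholders):

# 1. generate a key pair without a passphrase (on the sending machine)
ssh-keygen -t rsa -b 2048 -f /home/ssteinig/cron/thishost-rsync-key

# 2. copy the public key to the receiving machine and append it to authorized_keys there
scp /home/ssteinig/cron/thishost-rsync-key.pub user@example.com:/home/backup_user/
ssh user@example.com "cat /home/backup_user/thishost-rsync-key.pub >> ~/.ssh/authorized_keys"

# 3. from now on, files can be copied without a password prompt
scp -i /home/ssteinig/cron/thishost-rsync-key /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/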

Performed CEDEUS Observatory Backups

A description of a test on how to back up and restore GeoNode data can be found under backup of geonode. That page was used as input for the backup details below.

Dump of the GeoNode DB - on CedeusDB

  • server: CedeusDB
  • cron job running nightly at 1:00am
  • using the script geonodegisdb93backup.sh
  • copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodedbbackups/

Dump of the GeoNode user db - on CedeusGeonode VM (13080)

  • server: CedeusGeoNode on geonode1204 VM
  • cron job running nightly at 1:10am
  • using the script geonodeuserdbbackup.sh
  • copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodeuserdbbackups/

Tar/zip of the (uploaded) GeoNode file data and docs - on CedeusGeonode VM (13080)

Data to backup

GeoNode settings and uploaded data change at different frequencies, or almost never. Hence it seems best to do a once-in-a-while backup of data that does not change much, and frequent backups for file uploads, styles, etc.

  • We do a once-in-a-while backup of things that do not change much, such as:
    1. GeoNode config: "sudo tar -cvzf /home/ssteinig/geonodeConfigBackup.tgz /etc/geonode"
    2. Django language strings: "sudo tar -cvzf /home/ssteinig/geonodei18nBackup.tgz /usr/local/lib/python2.7/dist-packages/geonode/locale/"
    3. GeoNode www folder (including the static subfolder and data folder): "sudo tar -cvzf /home/ssteinig/geonodeWWWBackup.tgz /var/www/geonode/" (note, this also includes the GeoNode upload folders, which are backed up daily, see below)
    4. There may also be data in /var/lib/geoserver/geonode-data/, for instance the printing setup file config.yaml. So one should also do a once-in-a-while backup: "sudo tar -cvzf /home/ssteinig/geonodeDataBackup.tgz /var/lib/geoserver/geonode-data/"
    => These tar files need to be copied by hand to CedeusGeoNode's /home/cedeusdbbackupuser/geonode_one_time_backup/, e.g. with "scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup"
  • We will back up a couple of folders that can change frequently:
    1. GeoServer data (i.e. rasters, gwc layers, map styles, etc.): "sudo tar -cvzf /home/ssteinig/geoserverDataBackup.tgz /usr/share/geoserver/data/"
      ... copied to /home/cedeusdbbackupuser/geoserverbackup/.
    2. GeoNode www-data uploads (i.e. raster data, pdfs, etc.): "sudo tar -cvzf /home/ssteinig/geonodeWWWUploadBackup.tgz /var/www/geonode/uploaded/"
      ... copied to /home/cedeusdbbackupuser/geonodewwwuploadbackup/.
    => these two frequent backups are performed in the shell script geonodewwwdatabackup.sh (see the sketch below)
    => ToDo: it is not yet clear to me whether I need to run the frequent backups using sudo, i.e. "sudo sh geonodewwwdatabackup.sh" (or via the root crontab). When testing the tar file generation with and without sudo using my normal login (on 10 Dec. 2014), the resulting tar archives had the same size, indicating that the content was the same.

Running cron shell script

The shell script geonodewwwdatabackup.sh is used to create frequent copies of the GeoNode and GeoServer data files. The tar commands themselves, in the script, are not run with sudo, as this would require typing the credentials. Instead the script as a whole should be run using "sudo" to get access to all the data folders. ToDo: However, as noted above, in a test with my standard login there was no difference in tar file size between running with and without sudo. Hence, I shall execute the script using my personal crontab instead of the admin/root crontab.

To copy the tar files to the CedeusGeoNode server with scp, we use the ssh login credentials that were already established for the GeoNode user db backup.
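
The script itself is not reproduced on this page; a minimal sketch of what geonodewwwdatabackup.sh might contain, based on the tar commands and target folders listed above (the ssh key path and the timestamp suffix are assumptions):

#!/bin/bash
# geonodewwwdatabackup.sh - sketch of the two frequent tar backups
timeslot=$(date '+%Y%m%d-%H%M')

# tar the frequently changing folders
tar -cvzf /home/ssteinig/geoserverDataBackup-$timeslot.tgz /usr/share/geoserver/data/
tar -cvzf /home/ssteinig/geonodeWWWUploadBackup-$timeslot.tgz /var/www/geonode/uploaded/

# copy the tar files to CedeusGeoNode using the already established ssh key
scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup-$timeslot.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup/
scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geonodeWWWUploadBackup-$timeslot.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geonodewwwuploadbackup/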

Tar backup summary

  • server: CedeusGeoNode on geonode1204 VM
  • cron job running nightly at 1:20am
    • using the script geonodewwwdatabackup.sh
    • copies the geoserver-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geoserverbackup/
    • copies the geonode-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodewwwuploadbackup/
  • requires manual tar ball creation and copying to CedeusGeoNode of
    • geonodeConfigBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • geonodei18nBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • geonodeWWWBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • perhaps: geonodeDataBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/

Backup of Elgg miCiudad - on CedeusGeonode VM (15080)

The official Elgg backup guide: http://learn.elgg.org/en/1.9/admin/backup-restore.html

Data to backup

  • the Elgg database as a mysql dump
  • the Elgg web folder as a tar archive
  • the Elgg data folder as a tar archive => the folder's files (e.g. in /elggdata/1/39/file/) cannot be accessed by the backup user sst..., as they are owned by the www-data user. This problem needs to be solved when creating the tar.

This does not work yet => To be able to back up the Elgg data directory I needed to grant my backup user (sst...) access rights to this folder, or use sudo. The Elgg data directory is owned by www-data, so I added my user to this group using "sudo usermod -a -G www-data ssteinig" - see also http://www.cyberciti.biz/faq/ubuntu-add-user-to-group-www-data/ . However, I had no success.

=> Hence, I am running the script as root via the root crontab instead - with "sudo crontab -e".
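
The script createmiciudadbackup.sh is not reproduced here; a minimal sketch of the three backup steps, assuming it runs from the root crontab (database name, credentials, and the local paths are placeholders):

#!/bin/bash
# createmiciudadbackup.sh - sketch; run as root so the www-data-owned files can be read
timeslot=$(date '+%Y%m%d-%H%M')

# 1. dump the Elgg database (user, password, and db name are placeholders)
mysqldump -u backupuser --password=secret elggdb | gzip > /root/elggdb-$timeslot.sql.gz

# 2. tar the Elgg web folder and the Elgg data folder (paths are placeholders)
tar -czf /root/elggwww-$timeslot.tgz /var/www/html/elgg/
tar -czf /root/elggdata-$timeslot.tgz /elggdata/

# 3. copy the three files to CedeusGeoNode
scp -i /root/.ssh/id_rsa /root/elggdb-$timeslot.sql.gz /root/elggwww-$timeslot.tgz /root/elggdata-$timeslot.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/miciudadbackups/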

Elgg backup summary

  • server: CedeusGeoNode on elgg VM (15080)
  • cron job running nightly at 1:45am
  • using the script createmiciudadbackup.sh
  • copies the three files to CedeusGeoNode into folder /home/cedeusdbbackupuser/miciudadbackups/

MySQL dump for Mediawiki(s) - on CedeusGeonode VM (22080 vs. 21080)

The official MediaWiki backup guide: http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki

Before writing the backup scripts, I actually changed the root passwords for the mysql DBs using "UPDATE mysql.user SET Password=PASSWORD('foobar') WHERE User='tom' AND Host='localhost';". Note, when changing the root password one needs to restart the mysql service or apply "FLUSH PRIVILEGES;" right after changing the password. However, it is probably even better to create a backup user that is used for doing the mysql dumps (see also http://www.cyberciti.biz/faq/mysql-change-user-password/).
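
Such a dump-only backup user could be created like this (a sketch; the user name and password are placeholders, and the grants cover what mysqldump typically needs):

mysql -u root -p <<'SQL'
-- create a user that can read all databases but not change them
CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'foobar';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backupuser'@'localhost';
FLUSH PRIVILEGES;
SQL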

Data to backup

What we need to back up:

  • the database: via a mysql dump; e.g. using gzip for a smaller file: "mysqldump -h hostname -u userid --password dbname | gzip > backup.sql.gz"
  • uploaded data/images/extensions etc. in /var/www/html/wiki/: create a tar ball

Mediawiki backup summary

CEDEUS Wiki

  • server: CedeusGeoNode on wikicedeus VM (22080)
  • cron job running nightly at 1:15am
  • using the script createcedeuswikibackup.sh
  • copies the two files to CedeusGeoNode into folder /home/cedeusdbbackupuser/cedeuswikibackups/

Stefan's Wiki

  • server: CedeusGeoNode on mediawiki VM (21080)
  • cron job running nightly at 1:40am
  • using the script createmywikibackup.sh
  • copies the two files to CedeusGeoNode into folder /home/cedeusdbbackupuser/stefanwikibackups/

Synchronization of backup files between CedeusGeoNode and CedeusGIS1

This file sync should serve to:

  • have a second backup location
  • make copies of the backup files to a portable drive (via USB) and/or to the Dell RD1000

To perform the folder synchronization we will use the "rsync" tool. For an introduction to rsync see http://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps

Sync summary

  • from server CedeusGeoNode to CedeusGIS1
  • cron job running nightly at 2:00am
  • using the script syncwithcedeusgis1.sh run by backup-user
  • synchronizes the backup files to CedeusGIS1 into the folder /home/ssteinig/backups_cedeusservers/ => sync here means: files deleted on the source are also deleted at the target (but not vice versa)
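
Since deletions are propagated from the source to the target, the central rsync call in syncwithcedeusgis1.sh presumably uses the --delete option; a minimal sketch (the key path, source folder, and host name are assumptions):

#!/bin/bash
# syncwithcedeusgis1.sh - sketch; mirrors the backup folders to CedeusGIS1
# -a preserves permissions/times, -z compresses, --delete removes files at the
# target that were deleted at the source (but not vice versa)
rsync -az --delete -e "ssh -i /home/cedeusdbbackupuser/.ssh/id_rsa" /home/cedeusdbbackupuser/ ssteinig@cedeusgis1:/home/ssteinig/backups_cedeusservers/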

Deletion of old files

Examples

An example for finding files older than a specific number of days that follow a particular naming pattern is:

find $BACKUP_DIR -maxdepth 1 -mtime +$DAYS_TO_KEEP -name "*-daily"

taken from http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

A shorter version is:

find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5

This searches for all(!) files in the particular folder that are older than 5 days. The search does not include subfolders, as the -maxdepth param is set to "1".

To delete the found files, one adds "-exec rm ..." at the end, as in this example:

find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5 -exec rm -rf '{}' ';'
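
The removeoldbackups.sh scripts mentioned below are not reproduced here; a minimal sketch of what such a script might look like for a 7-day retention (the folder list is an example):

#!/bin/bash
# removeoldbackups.sh - sketch; deletes backup files older than 7 days
days_to_keep=7
for dir in /home/cedeusdbbackupuser/geonodedbbackups \
           /home/cedeusdbbackupuser/geoserverbackup \
           /home/cedeusdbbackupuser/geonodewwwuploadbackup
do
  find "$dir" -maxdepth 1 -type f -mtime +$days_to_keep -exec rm -f '{}' ';'
done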

File deletion realized

  • GeoNode database on CedeusDB: script removeoldbackups.sh deletes files older than 7 days. Crontab running every Tuesday at 3am.
  • All backups on CedeusGeoNode: script removeoldbackups.sh deletes files older than 7 days - except for files in folder geonode_one_time_backup. Crontab running every day at 0:30am (before any backup).
  • GeoNode user db and tar files on GeoNode1204 VM: script ....
  • deactivated (as I am using rsync with the delete option): All backups on CedeusGIS1: script removeoldserverbackups.sh deletes files older than 7 days. Crontab running every day at 3am.

ToDo List

  • create a script to delete files older than 5 days
    • add deletion of the log files to these scripts as well
  • install mail program to get notified about backups and syncs
  • check how to use the RD1000