Cedeus DB backups


>> return to Cedeus_IDE


How to set up Automated Backups

The objective of this exercise is to have an automated backup process for user profiles and user-contributed data, with the backups copied to a portable medium at least once a week.

General Workflow to Create the Backups

The backup process consists of several steps. Usually these are:

  1. create a script that contains commands to
    • create a database dump =or= tar/zip the files in a particular folder
    • copy this dump file or archive to another machine from where it can be easily copied to a portable medium, e.g. tape
  2. create a crontab entry that runs the backup script(s) at some set interval, e.g. each night at 1am

Below are some personal notes on how to set things up:

Notifications

To get notified about the backups via email, the shell script may send emails via "mailx" - i.e. Nail. => see http://klenwell.com/press/2009/03/ubuntu-email-with-nail/

Btw. postfix may work as well.

=> ToDo: Install mail program
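
Once a mail program is installed, the notification step could look like the line below. This is only a sketch based on the (commented-out) mailx line in the dump script further down; the subject is an addition and the address is just the placeholder used there.

# send the last lines of the backup log as a notification email
# (assumes mailx is installed and configured; address is a placeholder)
tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx -s "CEDEUS DB backup report" blabla@blub.cl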

Example: cron Job that makes a Dump of the GeoNode DB

  • create a shell script that contains the pg_dump instructions - see for example /home/ssteinig/pgdbbackup.sh on CedeusDB
  • test whether the script and its execution actually work. A simple script for testing may be this one (/home/ssteinig/touchy.sh):
#!/bin/bash 
touch /home/ssteinig/ftw.text
  • create a crontab entry for user ssteinig with "crontab -e"
    then add an entry such as "00 01 * * * sh /home/ssteinig/geonodegisdb93backup.sh" to run the dump script daily at 1am
    => note that the db dump itself connects as the user "postgres" (see the script below)
  • check also whether the cron daemon is running with "sudo service cron status"; otherwise start it...
  • to see what the crontab contains use "crontab -l" (an example listing is sketched after this list)
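
For illustration, a sketch of what "crontab -l" could show on the CedeusGeoNode VM once the two backup jobs described further below are in place (the script locations in /home/ssteinig are an assumption); on CedeusDB the crontab would just contain the single 1:00am entry shown above.

# m  h  dom mon dow  command   (sketch of the CedeusGeoNode VM crontab)
10   1  *   *   *    sh /home/ssteinig/geonodeuserdbbackup.sh
20   1  *   *   *    sh /home/ssteinig/geonodewwwdatabackup.sh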

Dump example script geonodegisdb93backup.sh

#!/bin/bash
# nightly dump of the GeoNode PostGIS database "geonodegisdb93"
logfile="/home/ssteinig/geonode_db_backups/pgsql.log"
backup_dir="/home/ssteinig/geonode_db_backups"
touch "$logfile"

echo "Starting backup of databases " >> "$logfile"
# timestamps for the log entry and for the dump file name
dateinfo=`date '+%Y-%m-%d %H:%M:%S'`
timeslot=`date '+%Y%m%d-%H%M'`
# vacuum/analyze the database before dumping it (output discarded)
/usr/bin/vacuumdb -z -h localhost -U postgres geonodegisdb93 >/dev/null 2>&1
# dump in PostgreSQL's custom format (-F c) including large objects (-b);
# -i (ignore version) is obsolete but still accepted on 9.3
/usr/bin/pg_dump -U postgres -i -F c -b geonodegisdb93 -h 127.0.0.1 -f "$backup_dir/geonodegisdb93-backup-$timeslot.backup"
echo "Backup and Vacuum complete on $dateinfo for database: geonodegisdb93 " >> "$logfile"
echo "Done backup of databases " >> "$logfile"
# sstein: email notification not used at the moment
# tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx blabla@blub.cl

This example is based on the shell script posted here: http://stackoverflow.com/questions/854200/how-do-i-backup-my-postgresql-database-with-cron. For a better Postgres dump script it may be worth looking here: https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux
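
Since the dump is written in the custom format, restoring it would be done with pg_restore rather than psql. A hedged sketch, assuming the target database already exists (replace the file name with the dump to restore):

# restore a custom-format dump into the geonodegisdb93 database
# -c drops existing objects before recreating them; adjust as needed
pg_restore -U postgres -h 127.0.0.1 -d geonodegisdb93 -c /home/ssteinig/geonode_db_backups/geonodegisdb93-backup-YYYYMMDD-HHMM.backup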

File transfer

To transfer files I decided, for safety reasons, to create a new cedeus backup user on the receiving computer (20xxb...p).

A file transfer can be accomplished using scp or, better, rsync, e.g.:

  • "scp /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/"
    • However, an ssh key should be generated first so that no password needs to be provided. A detailed description can be found at: http://troy.jdmz.net/rsync/index.html
    • in short, do "ssh-keygen -t rsa -b 2048 -f /home/thisuser/cron/thishost-rsync-key". Do not provide a passphrase when generating it, otherwise it will always be asked for when establishing a connection.
    • Then copy the public key to the .ssh folder of the user on the other server (using scp) and add it to authorized_keys. (Note, the .ssh folder should be chmod 700 and authorized_keys chmod 600.)
    • Then we would use "scp -i /home/ssteinig/cron/thishost-rsync-key /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/"
    • note that it is probably necessary to initialize a server connection once (with whatever file), so that the remote host's ECDSA key fingerprint gets added to known_hosts.
  • having my ssh keys set up, the command for syncing the cedeusdb backup directory with rsync would be (see also the sketch after this list)
    • "...ToDo..."

Performed CEDEUS Observatory Backups

A description of a test on how to back up and restore GeoNode data can be found under backup of geonode. That page was used as input for the backup details below.

Dump of the GeoNode DB - on CedeusDB

  • server: CedeusDB
  • cron job running nightly at 1:00am
  • using the script geonodegisdb93backup.sh
  • copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodedbbackups/
    => ToDo: perhaps change this last step and copy it to cedeusgis1 for straight backup on a drive

Dump of the GeoNode user db - on CedeusGeonode VM (13080)

  • server: CedeusGeoNode on geonode1204 VM
  • cron job running nightly at 1:10am
  • using the script geonodeuserdbbackup.sh (a sketch is given after this list)
  • copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodeuserdbbackups/
    => ToDo: perhaps change this last step and copy it to cedeusgis1 for straight backup on a drive
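
The script geonodeuserdbbackup.sh is not reproduced on this page. A minimal sketch of what it could look like, assuming (hypothetically) that the GeoNode/Django user database is called "geonode" and that the dump is kept under /home/ssteinig; the ssh key and target folder are the ones named elsewhere on this page:

#!/bin/bash
# sketch of geonodeuserdbbackup.sh: dump the GeoNode user db and copy it to CedeusGeoNode
backup_dir="/home/ssteinig/geonode_userdb_backups"   # assumed local folder
timeslot=`date '+%Y%m%d-%H%M'`
# db name "geonode" is an assumption - adjust to the actual user db name
/usr/bin/pg_dump -U postgres -F c -b geonode -h 127.0.0.1 -f "$backup_dir/geonodeuserdb-backup-$timeslot.backup"
scp -i /home/ssteinig/.ssh/id_rsa "$backup_dir/geonodeuserdb-backup-$timeslot.backup" \
  cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geonodeuserdbbackups/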

Tar/zip of the (uploaded) GeoNode file data and docs - on CedeusGeonode VM (13080)

Data to backup

GeoNode settings and uploaded data change at different frequencies, or almost never. Hence it seems best to do a once-in-a-while backup of things that do not change much, and frequent backups of file uploads, styles, etc.

  • We do a once-in-a-while backup of things that do not change much, such as:
    1. GeoNode config: "sudo tar -cvzf /home/ssteinig/geonodeConfigBackup.tgz /etc/geonode"
    2. Django language strings: "sudo tar -cvzf /home/ssteinig/geonodei18nBackup.tgz /usr/local/lib/python2.7/dist-packages/geonode/locale/"
    3. GeoNode www folder (including the static subfolder and data folder): "sudo tar -cvzf /home/ssteinig/geonodeWWWBackup.tgz /var/www/geonode/" (note, this also includes the GeoNode upload folders, which are backed up daily, see below)
    4. There may also be data in /var/lib/geoserver/geonode-data/, for instance the printing setup file config.yaml. So one should also do a once-in-a-while backup: "sudo tar -cvzf /home/ssteinig/geonodeDataBackup.tgz /var/lib/geoserver/geonode-data/"
    => These tar files need to be copied by hand to CedeusGeoNode's /home/cedeusdbbackupuser/geonode_one_time_backup/, e.g. with "scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup"
  • We will back up a couple of folders that can change frequently:
    1. GeoServer (i.e. rasters, gwc layers, map styles, etc.): "sudo tar -cvzf /home/ssteinig/geoserverDataBackup.tgz /usr/share/geoserver/data/"
      ... copied to /home/cedeusdbbackupuser/geoserverbackup/.
    2. GeoNode www-data uploads (i.e. raster data, pdfs, etc): "sudo tar -cvzf /home/ssteinig/geonodeWWWUploadBackup.tgz /var/www/geonode/uploaded/"
      ... copied to /home/cedeusdbbackupuser/geonodewwwuploadbackup/.
    => these two frequent backups are performed in the shell script geonodewwwdatabackup.sh (see below)
    => ToDo: it is not clear to me yet if I need to run the frequent backups using sudo, i.e. "sudo sh geonodewwwdatabackup.sh". When testing the tar file generation with and without sudo using my normal login (on 10 Dec. 2014), the resulting tars had the same size, indicating that the content was the same.

Running cron shell script

The shell script geonodewwwdatabackup.sh is used to create frequent copies of the GeoNode and GeoServer data files. The tar commands themselves, inside the script, are not run with sudo, as this would require typing the credentials. Instead, the script should be run with "sudo" to get access to all the data folders. ToDo: However, as noted above, a test with my standard login showed no difference in tar file size between running with and without sudo. Hence, I shall execute the script from my personal crontab instead of the admin/root crontab.

To copy the tar files to the CedeusGeoNode server with scp, we use the ssh login credentials that were already established for the GeoNode user db backup.
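
The script itself is not reproduced on this page. A hedged sketch of what geonodewwwdatabackup.sh could look like, put together from the tar commands above and the target folders in the summary below (the tar file names and the key path follow the earlier examples; the real script may differ):

#!/bin/bash
# sketch of geonodewwwdatabackup.sh: frequent tar backups of GeoServer and GeoNode upload data
# 1. GeoServer data dir (rasters, gwc layers, map styles, ...)
tar -cvzf /home/ssteinig/geoserverDataBackup.tgz /usr/share/geoserver/data/
# 2. GeoNode www-data uploads (raster data, pdfs, ...)
tar -cvzf /home/ssteinig/geonodeWWWUploadBackup.tgz /var/www/geonode/uploaded/
# copy both tar files to CedeusGeoNode (ssh key as set up for the user db backup)
scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup.tgz \
  cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup/
scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geonodeWWWUploadBackup.tgz \
  cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geonodewwwuploadbackup/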

Tar backup summary

  • server: CedeusGeoNode on geonode1204 VM
  • cron job running nightly at 1:20am
    • using the script geonodewwwdatabackup.sh
    • copies the geoserver-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geoserverbackup/
    • copies the geonode-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodewwwuploadbackup/
  • requires manual tar ball creation and copying to CedeusGeoNode of
    • geonodeConfigBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • geonodei18nBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • geonodeWWWBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • perhaps: geonodeDataBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    => ToDo: perhaps change copy step and copy it to cedeusgis1 for straight backup on a drive

MySQL dump for Elgg miCiudad - on CedeusGeonode VM

blabla

Tar/zip of the (uploaded) Elgg miCiudad files - on CedeusGeonode VM

blabla

MySQL dump for Mediawiki(s) - on CedeusGeonode VM (22080)

The official MediaWiki backup guide: http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki

Before writing the backup scripts, I actually changed the root passwords for the mysql DBs using "UPDATE mysql.user SET Password=PASSWORD('foobar') WHERE User='tom' AND Host='localhost';" (a subsequent "FLUSH PRIVILEGES;" is needed for such a direct change to take effect). However, it is probably even better to create a backup user that is used for doing the mysql dumps.
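
A hedged sketch of how such a dedicated backup user could be created (the user name and password are placeholders; the grants are limited to what mysqldump typically needs):

# create a read-only MySQL user for the dumps (run once as root)
mysql -u root -p <<'SQL'
CREATE USER 'dbbackup'@'localhost' IDENTIFIED BY 'CHANGE_ME';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'dbbackup'@'localhost';
FLUSH PRIVILEGES;
SQL

The mysqldump calls below would then use this user instead of root.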

Data to backup

What do we need to back up:

  • database: via a mysql dump, e.g. using gzip for a smaller file: "mysqldump -h hostname -u userid --password dbname | gzip > backup.sql.gz"
  • uploaded data/images/extensions etc. in /var/www/html/wiki/: create a tar ball (a combined sketch of both steps is given after this list)
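
A hedged sketch of a script combining both steps; the wiki database name "wikidb", the backup user "dbbackup" and the local backup folder are assumptions, the wiki path is the one named above:

#!/bin/bash
# sketch of a MediaWiki backup: dump the wiki database and tar the wiki folder
backup_dir="/home/ssteinig/wiki_backups"   # assumed local folder
timeslot=`date '+%Y%m%d-%H%M'`
mkdir -p "$backup_dir"
# 1. compressed database dump (db name and user are placeholders)
mysqldump -h localhost -u dbbackup --password='CHANGE_ME' wikidb | gzip > "$backup_dir/wikidb-backup-$timeslot.sql.gz"
# 2. tar ball of the wiki installation (uploads, images, extensions, LocalSettings.php)
tar -cvzf "$backup_dir/wikiFilesBackup-$timeslot.tgz" /var/www/html/wiki/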