Cedeus DB backups

>> return to [[Cedeus_IDE]]

----
(Note: including the UPS installation, this work took me about three weeks, more or less full time - not counting the ToDo items of email notifications and log rotation.)

== How to set up Automated Backups ==

The ''Objective'' of this exercise is to have an automated backup process for user profiles and user-contributed data, which is copied to a portable medium at least once a week.

=== General Workflow to Create the Backups ===

Creating the backups involves several steps. Usually they consist of:
# create a script that contains commands to
#* create a database dump =or= tar/zip the files in a particular folder
#* copy this dump file or zip archive to another machine from where it can easily be copied to a portable medium, e.g. tape
# create a crontab entry that runs the backup script(s) at some set interval, e.g. each night at 1am
# create a crontab entry that triggers deletion of old backup files
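As a sketch, the steps above could be combined like this (all paths here are placeholders, not the real server paths used further below):

```shell
#!/bin/bash
# Skeleton of the general backup workflow (placeholder paths only).
# Step 1a: archive a data folder =or= dump a database.
data_dir="${DATA_DIR:-$(mktemp -d)}"      # folder to back up
backup_dir="${BACKUP_DIR:-$(mktemp -d)}"  # where the archive lands
timeslot=$(date '+%Y%m%d-%H%M')

tar -czf "$backup_dir/files-$timeslot.tgz" -C "$data_dir" .

# Step 1b: copy the archive to another machine (not executed in this sketch):
# scp "$backup_dir/files-$timeslot.tgz" backupuser@backuphost:/backups/

# Steps 2 and 3 are crontab entries, e.g.:
#   00 01 * * * sh /path/to/backupscript.sh
#   30 03 * * 2 find /backups -maxdepth 1 -mtime +7 -exec rm -f '{}' ';'
```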
Below are some personal notes on how to set things up:

=== Notifications ===
 
To get notified about the backups via email, the shell script may send emails via "mailx", i.e. Nail.

=> see http://klenwell.com/press/2009/03/ubuntu-email-with-nail/
  
Btw., postfix may work as well.
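Once a mail program is installed, the notification could be attached to the backup crontab entry itself; a hypothetical fragment (address and subject are placeholders):

```
# hypothetical crontab entry: run the dump at 1am, then mail the log tail
00 01 * * * sh /home/ssteinig/geonodegisdb93backup.sh && tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx -s "geonode db backup" admin@example.com
```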
=> '''ToDo''': Install mail program

=== Example: ''cron'' Job that makes a Dump of the GeoNode DB ===

General info on how to create a crontab can be found here: https://help.ubuntu.com/community/CronHowto

* create a shell script that contains the pg_dump instructions - see for example /home/ssteinig/pgdbbackup.sh on CedeusDB
 
* test if the script and its execution actually work. A simple script for testing may be this (/home/ssteinig/touchy.sh):
*: <code>
  #!/bin/bash
  touch /home/ssteinig/ftw.text</code>
* create a crontab entry for user ''ssteinig'' with "<code>crontab -e</code>"
*: then add an entry such as "<code>00 01 * * * sh /home/ssteinig/geonodegisdb93backup.sh</code>" to run the dump script daily at 1am
*: => when using the user "postgres" to do the db dump:
*:* check if the postgres user already has a password assigned (use ALTER... to do so: http://wiki.geosteiniger.cl/mediawiki-1.22.7/index.php/Setting_up_geonode#Some_PostgreSQL_commands )
*:* create a .pgpass file to provide the password: http://wiki.postgresql.org/wiki/Pgpass
*:*: Note, the .pgpass file should have chmod 0600. If it does not, then PostgreSQL will ask for a password.
* check if cron is running: "<code>sudo service cron status</code>" - otherwise start it
* to see what the crontab contains use "<code>crontab -l</code>"
* to check whether a cron job was executed, check the log: <code>sudo tail -f /var/log/syslog</code>
=== Dump example script geonodegisdb93backup.sh ===
<code>
#!/bin/bash
logfile="/home/ssteinig/geonode_db_backups/pgsql.log"
backup_dir="/home/ssteinig/geonode_db_backups"
touch $logfile

echo "Starting backup of databases " >> $logfile
dateinfo=`date '+%Y-%m-%d %H:%M:%S'`
timeslot=`date '+%Y%m%d-%H%M'`
/usr/bin/vacuumdb -z -h localhost -U postgres geonodegisdb93  >/dev/null 2>&1
/usr/bin/pg_dump -U postgres -i -F c -b geonodegisdb93 -h 127.0.0.1 -f $backup_dir/geonodegisdb93-backup-$timeslot.backup
echo "Backup and Vacuum complete on $dateinfo for database: geonodegisdb93 " >> $logfile
echo "Done backup of databases " >> $logfile
# sstein: email notification not used at the moment
# tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx blabla@blub.cl
</code>

This example is based on the shell script posted here: http://stackoverflow.com/questions/854200/how-do-i-backup-my-postgresql-database-with-cron
For a better Postgres dump script it may be worth looking here: https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux
=== File transfer ===
To transfer files, I decided to create a new cedeus backup user on the receiving computer (20xxb...p).

A file transfer can be accomplished using '''scp''' or '''rsync''', e.g.:
*: "<code>scp /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/</code>"
** However, an ssh key should be generated first so no password needs to be provided. A detailed description can be found at: http://troy.jdmz.net/rsync/index.html. However, later on I used this description: http://blogs.oracle.com/jkini/entry/how_to_scp_scp_and .
** in short, do "<code>ssh-keygen -t rsa -b 2048 -f /home/thisuser/cron/thishost-rsync-key</code>". But do '''not''' provide a pass phrase when generating it, otherwise you will always be asked for it when establishing a connection.
** Then copy the key to the other server user's .ssh/ folder (e.g. using scp), and add it to the authorized_keys file using "<code>cat blabla_key.pub >> authorized_keys</code>" (Note, the authorized_keys file should be chmod 700, and possibly restrict the incoming IP - see http://troy.jdmz.net/rsync/index.html).
** Then we would use "<code>scp -i /home/ssteinig/cron/thishost-rsync-key /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/</code>"
** note that it is probably necessary to initialize a server connection once (with whatever file), so the connection gets an ECDSA key fingerprint.
* for the use of rsync see the section below on "sync with CedeusGIS1"
== Performed CEDEUS Observatory Backups ==

A description of a test of how to backup and restore GeoNode data can be found under [[backup of geonode]]. That page was used as input for the backup details below.
=== Dump of the GeoNode DB - on CedeusDB ===

* server: CedeusDB
* cron job running nightly at 1:00am
* using the script ''geonodegisdb93backup.sh''
* copies the PG dump file to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/geonodedbbackups/''

=== Dump of the GeoNode user db - on CedeusGeonode VM (13080) ===

* server: CedeusGeoNode on geonode1204 VM
* cron job running nightly at 1:10am
* using the script ''geonodeuserdbbackup.sh''
* copies the PG dump file to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/geonodeuserdbbackups/''

=== Tar/zip of the (uploaded) GeoNode file data and docs - on CedeusGeonode VM (13080) ===

==== Data to backup ====
GeoNode settings and uploaded data change at different frequencies, or almost never. Hence it seems best to do a once-in-a-while backup of data that does not change much, and frequent backups of file uploads, styles, etc.

* We do a '''once-in-a-while''' backup of things that do not change much, such as:
*# GeoNode config: "<code>sudo tar -cvzf /home/ssteinig/geonodeConfigBackup.tgz /etc/geonode</code>"
*# Django language strings: "<code>sudo tar -cvzf /home/ssteinig/geonodei18nBackup.tgz /usr/local/lib/python2.7/dist-packages/geonode/locale/</code>"
*# GeoNode www folder (including the static subfolder and data folder): "<code>sudo tar -cvzf /home/ssteinig/geonodeWWWBackup.tgz /var/www/geonode/</code>" (note, this also includes the GeoNode upload folders, which are backed up daily, see below)
*# There may be data in ''/var/lib/geoserver/geonode-data/'', for instance the printing setup file config.yaml. So one should also do a once-in-a-while backup: "<code>sudo tar -cvzf /home/ssteinig/geonodeDataBackup.tgz /var/lib/geoserver/geonode-data/</code>"
*# Image and satellite data that I stored under ''/var/www/geoserver/''. Do this only once in a while as the folders may be huge, e.g. the folder with the Sectra 2012 Santiago aerial images in GeoTIFF format (212 images) has a size of . Use: "<code>sudo tar -cvzf /home/ssteinig/geoserverImageDataBackup.tgz /var/www/geoserver/</code>"
*#: => The tar file can be created as a cron job [''currently disabled due to the impractical file size of 222GB'' !!!] on the 12th day of each month, at 12:40 (noon), as tar creation takes quite some time (so one cannot stay logged in to run it from the command line). The crontab command used is:
*#: <code>40 12 12 * * tar -cvzf /home/ssteinig/geoserverImageDataBackup.tgz /var/www/geoserver/ > /home/ssteinig/imagetar.log</code>
*: => These tar files need to be copied by hand to CedeusGeoNode's ''/home/cedeusdbbackupuser/geonode_one_time_backup/'', e.g. with "<code>scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup.tgz  cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup</code>"
* We '''backup''' a couple of folders that can change '''frequently''':
*# GeoServer data (i.e. rasters, gwc layers, map styles, etc.): "<code>sudo tar -cvzf /home/ssteinig/geoserverDataBackup.tgz /usr/share/geoserver/data/</code>"
*#: ... copied to ''/home/cedeusdbbackupuser/geoserverbackup/''.
*# GeoNode www-data uploads (i.e. raster data, pdfs, etc.): "<code>sudo tar -cvzf /home/ssteinig/geonodeWWWUploadBackup.tgz /var/www/geonode/uploaded/</code>"
*#: ... copied to ''/home/cedeusdbbackupuser/geonodewwwuploadbackup/''.
*: => these two '''frequent backups''' are performed by the shell script ''geonodewwwdatabackup.sh'' (see below)
*: => '''ToDo:''' it is not yet clear to me whether I need to run the frequent backups using '''sudo''', i.e. ''sudo sh geonodewwwdatabackup.sh'' (or the root crontab). When testing the tar file generation ''with'' and ''without sudo'' using my normal login (on 10 Dec. 2014), the resulting tar archives had the same size, indicating that the content was the same.
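The script ''geonodewwwdatabackup.sh'' itself is not listed on this page; a hypothetical reconstruction could look like this (the real source folders are named above and made overridable here, so the sketch can be tried anywhere; the scp lines are shown but not executed):

```shell
#!/bin/bash
# Hypothetical sketch of geonodewwwdatabackup.sh (not the original script).
# Real source folders: /usr/share/geoserver/data/ and /var/www/geonode/uploaded/
geoserver_data="${GEOSERVER_DATA:-$(mktemp -d)}"
geonode_uploads="${GEONODE_UPLOADS:-$(mktemp -d)}"
out_dir="${OUT_DIR:-$(mktemp -d)}"

tar -czf "$out_dir/geoserverDataBackup.tgz" -C "$geoserver_data" .
tar -czf "$out_dir/geonodeWWWUploadBackup.tgz" -C "$geonode_uploads" .

# copy to CedeusGeoNode with the key-based scp set up earlier (not executed here):
# scp -i /home/ssteinig/.ssh/id_rsa "$out_dir/geoserverDataBackup.tgz" cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup/
# scp -i /home/ssteinig/.ssh/id_rsa "$out_dir/geonodeWWWUploadBackup.tgz" cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geonodewwwuploadbackup/
```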
==== Running cron shell script ====

The shell script ''geonodewwwdatabackup.sh'' is used to create frequent copies of the GeoNode and GeoServer data files. The tar commands themselves, in the script, are not run with sudo, as this would require typing the credentials. Instead the script should be run using "sudo" to get access to all the data folders. '''ToDo''': However, as noted above, in a test with my standard login there was no difference in tar file size between using sudo and not using it. Hence, I shall execute the script using my personal crontab, instead of the admin/root crontab.

To copy the tar files to the CedeusGeoNode server with scp we use the ssh login credentials that were already established for the GeoNode userdb backup.

==== Tar backup summary ====
* server: CedeusGeoNode on geonode1204 VM
* cron job running nightly at 1:20am
** using the script ''geonodewwwdatabackup.sh''
** copies the geoserver-data tar file to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/geoserverbackup/''
** copies the geonode-data tar file to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/geonodewwwuploadbackup/''
* requires '''manual''' tar ball creation and copying to CedeusGeoNode of
** geonodeConfigBackup.tgz with copy to ''/home/cedeusdbbackupuser/geonode_one_time_backup/''
** geonodei18nBackup.tgz with copy to ''/home/cedeusdbbackupuser/geonode_one_time_backup/''
** geonodeWWWBackup.tgz with copy to ''/home/cedeusdbbackupuser/geonode_one_time_backup/''
** perhaps: geonodeDataBackup.tgz with copy to ''/home/cedeusdbbackupuser/geonode_one_time_backup/''

=== Backup of Elgg miCiudad - on CedeusGeonode VM (15080) ===
The official Elgg backup guide: http://learn.elgg.org/en/1.9/admin/backup-restore.html

==== Data to backup ====

* the elgg database as a mysql dump
* the elgg web folder as a tar file
* the elgg data folder as a tar file => the folder's files (e.g. in ''/elggdata/1/39/file/'') cannot be accessed by the backup user sst... They are owned by the www-data user. This problem needs to be solved when creating the tar.

This does not work yet => To be able to back up the elgg data directory I needed to grant my backup user (sst...) access rights to this folder or use ''sudo''. The Elgg data directory is owned by www-data, so I added my user to this group, using <code>sudo usermod -a -G www-data ssteinig</code> - see also http://www.cyberciti.biz/faq/ubuntu-add-user-to-group-www-data/ . However, I had no success.

=> Hence, I am running the script as '''root''' in the ''root crontab'' instead - with <code>sudo crontab -e</code> .
==== Elgg miCiudad backup summary ====

* server: CedeusGeoNode on elgg VM (15080)
* cron job running nightly at 1:45am
* using the script ''createmiciudadbackup.sh'' (run in the root crontab)
* copies the three files to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/miciudadbackups/''

=== Backup of Elgg (Observatory Homepage) - on CedeusGeonode ===

==== Data to backup ====
* elgg DB: ''elgg''
* elgg docs: ''/var/www/html/elgg1-11''
* the elgg data folder is under ''/usr/share/elgg/elggdata''
==== Elgg HP backup summary ====

* server: CedeusGeoNode
* cron job running nightly at 1:35am
* using the script ''home/MyUser/backupscripts_inuse/createelggbackup.sh'' (run in the root crontab; for the reason see above)
* copies the three files to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/elgghpbackups/''

=== MySQL dump for Mediawiki(s) - on CedeusGeonode VM (22080 vs. 21080) ===

The official Mediawiki backup guide: http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki

Before writing the backup scripts, I actually changed the root passwords for the mysql DBs using <code>UPDATE mysql.user SET Password=PASSWORD('foobar') WHERE User='tom' AND Host='localhost';</code> Note, when changing the root password one needs to restart the mysql service or apply <code>FLUSH PRIVILEGES;</code> right after changing the pw. However, it's probably even better to create a backup user that is used for doing the mysql dumps. (see also http://www.cyberciti.biz/faq/mysql-change-user-password/)
==== Data to backup ====

What we need to back up:
* database: via a mysql dump, using gzip for a smaller file: <code>mysqldump -h hostname -u userid --password dbname | gzip > backup.sql.gz</code>
* uploaded data/images/extensions etc. in ''/var/www/html/wiki/'': create a tar ball
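A minimal sketch of such a wiki backup script (a hypothetical reconstruction, not the actual script used below; DB name and credentials are placeholders, and the mysqldump line is shown but not executed since it needs a live MySQL):

```shell
#!/bin/bash
# Hypothetical sketch of a wiki backup script such as createcedeuswikibackup.sh.
wiki_dir="${WIKI_DIR:-$(mktemp -d)}"   # real folder: /var/www/html/wiki/
out_dir="${OUT_DIR:-$(mktemp -d)}"
timeslot=$(date '+%Y%m%d-%H%M')

# 1) database dump, gzipped (placeholder credentials; not executed here):
# mysqldump -h localhost -u backupuser --password=secret wikidb | gzip > "$out_dir/wikidb-$timeslot.sql.gz"

# 2) tar ball of the wiki folder (uploads, extensions, LocalSettings.php):
tar -czf "$out_dir/wikifiles-$timeslot.tgz" -C "$wiki_dir" .
```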
==== Mediawiki backup summary ====

CEDEUS Wiki
* server: CedeusGeoNode on wikicedeus VM (22080)
* cron job running nightly at 1:15am
* using the script ''createcedeuswikibackup.sh''
* copies the two files to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/cedeuswikibackups/''

Stefan's Wiki
* server: CedeusGeoNode on mediawiki VM (21080)
* cron job running nightly at 1:40am
* using the script ''createmywikibackup.sh''
* copies the two files to CedeusGeoNode into folder ''/home/cedeusdbbackupuser/stefanwikibackups/''
=== Synchronization of backup files between CedeusGeoNode and CedeusGIS1 ===

This file sync should serve to:
* have a second backup location
* make copies of the backup files to a portable drive (via USB) and/or to the Dell RD1000

To perform the folder synchronization we use the "rsync" tool. For an introduction to rsync see http://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps

==== Sync summary ====

* from server CedeusGeoNode to CedeusGIS1
* cron job running nightly at 3:00am
*: => Note, I had to move the sync time to 3am because the wiki backups now finish after 2am, while being scheduled for 1:40am. Perhaps this happens due to automatic time zone adjustment on the server/Wiki VMs?
* using the script ''syncwithcedeusgis1.sh'' run by the backup user
* synchronizes backup files to CedeusGIS1 into folder ''/home/ssteinig/backups_cedeusservers/'' => sync means: files deleted on the source are also deleted at the target (but not vice versa)
== Deletion of old files ==

=== Examples ===
An example for finding files older than a specific number of days that follow a particular naming pattern is
<pre>find $BACKUP_DIR -maxdepth 1 -mtime +$DAYS_TO_KEEP -name "*-daily"</pre>
taken from http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

A shorter version is:
<pre>find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5</pre>
This searches for all(!) files in the particular folder that are older than 5 days. The search does not include subfolders, as the ''-maxdepth'' param is set to "1".

To delete the found files one adds ''-exec rm''... at the end, as in this example:
<pre>find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5 -exec rm -rf '{}' ';'</pre>
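A self-contained demo of this deletion pattern (note: adding ''-type f'' here guards against ''find'' also matching the backup folder itself, which combined with ''rm -rf'' could delete the whole folder):

```shell
#!/bin/bash
# Demo of the deletion pattern used by the removeold*.sh scripts.
backup_dir=$(mktemp -d)   # the real scripts use the fixed backup folders
touch -d "10 days ago" "$backup_dir/old.backup"   # simulate an old dump
touch "$backup_dir/new.backup"                    # and a fresh one
# -type f: only files, never the folder itself; +5 = older than 5 days
find "$backup_dir" -maxdepth 1 -type f -mtime +5 -exec rm -f '{}' ';'
ls "$backup_dir"   # only new.backup remains
```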
=== File deletion realized ===

* GeoNode database on CedeusDB: script ''removeolddbbackups.sh'' deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to a log file.
* All backups on CedeusGeoNode (as backup user): script ''removeoldbackups.sh'' deletes files older than 7 days - except for files in folder ''geonode_one_time_backup''. Crontab running every day 0:30am (before any backup). Writes to the sync.log log file.
* GeoNode user db and tar files on GeoNode1204 VM: script ''removeoldgeonodedatabackups.sh'' deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to two different log files.
* Mediawiki / Stefan's wiki on MediaWiki VM: script ''removeoldstefanwikibackups.sh'' deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to a log file.
* Cedeuswiki on WikiCedeus VM: script ''removeoldcedeuswikibackups.sh'' deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to a log file.
* Elgg on Elgg VM: script ''removeoldelggbackups.sh'' deletes files older than 7 days. ''Root crontab'' running every Tuesday 3am. Writes to a log file.
* ''deactivated (as I am using rsync with the delete option)'': All backups on CedeusGIS1: script ''removeoldserverbackups.sh'' deletes files older than 7 days. Crontab running every day 3am.

== Deletion of log files / log rotate ==

Log files are deleted using a crontab entry. This should happen on the first day of the month at 18:00 for: cedeusgeonode, geonode1204 VM, mediawiki VM, wikicedeus VM, elgg VM (using sudo crontab!), cedeusdb
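A hypothetical crontab entry for this (the log path is an example; one such entry per machine):

```
# 1st of each month, 18:00: empty the backup log (path is an example)
0 18 1 * * truncate -s 0 /home/ssteinig/geonode_db_backups/pgsql.log
```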
== Installation of APC Smart UPS RT3000V ==

It would be nice if the servers were shut down in case the UPS battery runs out of power. Therefore it is best to install control software that communicates with the APC SURTD3000. The software delivered with it is named PowerChute, but unfortunately it comes only for SUSE, RedHat (rpm) or Windows systems, etc., and not for Ubuntu/Debian based systems (see [http://www.apcmedia.com/salestools/ASTE-6Z5QEV/ASTE-6Z5QEV_R49_EN.pdf?sdirect=true here]). So the possible solutions are:
# converting the *.rpm to a *.deb - but this was without much success.
# using [http://www.apcupsd.org/manual/ apcupsd] - but unfortunately the RT3000 model does not come with the newer open ''modbus'' control protocol, only with a proprietary protocol. However, I could still try a firmware update to enable communication with apcupsd via modbus.
# installing a VirtualMachine with OpenSUSE.
# buying the additional APC network card for a whopping 300 US$ - given the fact that the RT3000VA already costs 1700 US$ this is kind of a scam!

Hence I tried option number 3 - communication via the original PowerChute software installed on an OpenSUSE Virtual Machine (as I was running VMs already).

For this variant it is necessary to do a serial-port routing between the host server and the VM. How I did this is described [http://wiki.geosteiniger.cl/mediawiki-1.22.7/index.php/CEDEUS_Server_Setup#Enabling_serial_port_access_from_VM_.28for_APC_UPS.29 here] in [[CEDEUS Server Setup]].

I shall note that the UPS was actually connected to serial connector '''ttyS1''' on the host machine (and the VM) ... so ''not'' to ttyS0.
To install PowerChute on the OpenSuse 13.2 VM, I did the following:
# copied the PowerChute rpm to the VM
# navigated to the folder with ''install_pbeagent_linux.sh''
# ran the ''sh'' file, choosing the following settings:
#* 2 : RJ45 connection
#* 2 : NO (= no Share UPS, Interface Expander or Simple Signaling)
# the chosen user and pw were the usual ones
# selected ''/dev/ttyS1'' as serial port, as this port was the only one I installed anyway for the VM
# opened a web browser in the opensuse VM with http:// <localhost> :3052
#: => this actually forwarded me to the https connection address https://10.0.2.15:6547/

Notes:
* The PowerChute Agent server can be started using <code>/etc/init.d/PBEAgent start</code>, and stopped with <code>/etc/init.d/PBEAgent stop</code>.
* The PowerChute files are copied into ''/opt/APC/PowerChuteBusinessEdition/Agent/''
* To uninstall use <code>rpm -e pbeagent</code>
* To communicate with the Server or Console, unblock port 2161
=== Debugging Serial Port ===

http://www.tldp.org/HOWTO/Serial-HOWTO-16.html

When trying to connect with minicom, I got the message that no lockfile could be created for /dev/ttyS0 (permission denied). To check what is going on:
* inspect the current lock file: <code>vim /var/lock/LCK..ttyS0</code>
* I found there a process number (2221), which I looked up with <code>ps 2221</code>. This returned:
<code>
  PID TTY      STAT  TIME COMMAND
2221 ?        Sl    23:19 /bin/java/jre/1.6.0_37/bin/java -Dpicard.main.thread=blocking -classpa...
</code>
* so, the PowerChute agent was already blocking/using this port for communication. So I stopped the PowerChute agent server using <code>/etc/init.d/PBEAgent stop</code>

* I also ran <code>sudo lsof /dev/ttyS*</code> to see which ports are open. The result of this was:
<code>
COMMAND    PID    USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
apcupsd    1197    root    7u  CHR  4,64      0t0 1114 /dev/ttyS1
VBoxHeadl 24174 ssteinig  19u  CHR  4,64      0t0 1114 /dev/ttyS1
</code>
* so, I saw that apcupsd was actually using the port (as I had installed it before). Hence, I stopped the program with <code>sudo service apcupsd stop</code> and checked again with <code>sudo lsof /dev/ttyS*</code>, which showed that only VBox was using the port...
=> Hence, I rebooted the OpenSUSE VM, after which the PowerChute server ran again...
=== Script to run by PowerChute ===

==== Basics ====

PowerChute can run a script to shut down certain programs if battery power is low.

To set this up, go in the web interface to Shutdown > Shutdown Settings > and look for the ''Operating System and Application Shutdown'' section. Placing a file into the folder ''/opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/'' makes it available in the drop-down list.

However, the script needs to be executable by the application. That is, I did a <code>chmod 755 script.sh</code> so it can be executed by PowerChute. Note, it seems like the script is executed as ''root''.

A test script may look like this:
<code>
#!/bin/sh
touch /home/ssteinig/ftw.txt
ping 127.0.0.1 -c 5 | cat > /home/ssteinig/pingtest
</code>
Next, one needs to write a script that connects to the other VMs and shuts them down, e.g.:
<code>
ssh user@remote_computer sudo poweroff
</code>
(from http://ubuntuforums.org/showthread.php?t=2093192)

The problem is that this requires "sudo", and hence entering a sudo password. To solve this use "visudo" as described here: http://sleekmason.wordpress.com/fluxbox/using-etcsudoers-to-allow-shutdownrestart-without-password/ and further below.

The script that gets run on low power is:
* ''/opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/cedeusshutdown.sh''
==== Shutting down or running a script remotely ====

I created on each machine a ''new user'' with root privileges to run the scripts that perform server and VM shutdowns. To transfer the public ssh key file I needed to define the port for the VM access with capital "-P", e.g. "<code>scp -P 17022 /root/.ssh/id_rsa.pub ced-user@146.155.17.19:/home/ced-user/</code>". A file transfer that worked with the ssh key was: "<code>scp -P 17022 "ssh -i /root/.ssh/id_rsa" /home/ssteinig/pingtest.txt cedeuspoweroffuser@146.155.17.19:/home/cedeuspoweroffuser/</code>"

Info on the ''shutdown'' command itself can be found here: http://www.computerhope.com/unix/ushutdow.htm . The best option to stop the servers is <code>sudo shutdown -h now</code> (or instead some time like "+1" for in 1 minute). However, to avoid being prompted for a password one needs to run <code>sudo visudo</code> and then add
* under ''# Cmnd alias specification'' the line <code>Cmnd_Alias SHUTDOWNCNMDS = /sbin/shutdown, /sbin/reboot, /sbin/halt</code>
* under ''# Members of the admin group may...'' a line like <code>username ALL = NOPASSWD: SHUTDOWNCNMDS</code>
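Assembled, the relevant /etc/sudoers lines would look like this (edit only via <code>sudo visudo</code>; "username" is a placeholder for the power-off user):

```
# Cmnd alias specification
Cmnd_Alias SHUTDOWNCNMDS = /sbin/shutdown, /sbin/reboot, /sbin/halt
# Members of the admin group may gain root privileges
username ALL = NOPASSWD: SHUTDOWNCNMDS
```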
The shutdown command that I use in "a" script then looks like this:
<code>
ssh -i /root/.ssh/id_rsa -p 17022 -t cedeuspoweroffuser@146.155.17.19 sudo shutdown -h +1
</code>

However, for CedeusGeoNode and CedeusDB I have written scripts that get started from the opensuse VM, which first shut down all the VMs properly and then power off the server.

Important: the commands in the script run by the opensuse VM need to be run detached. Otherwise the console stays connected and, perhaps ???, I can not shut down the VM host server, because the VM keeps controlling it... or so. For how to detach see: http://unix.stackexchange.com/questions/30400/execute-remote-commands-completely-detaching-from-the-ssh-connection e.g. by using the "nohup" argument.
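The detach pattern can be tried locally (here with a harmless placeholder command instead of the remote <code>ssh ... sudo shutdown</code> call):

```shell
#!/bin/bash
# Demo: nohup + & detaches the command, so the calling shell (or ssh session)
# is free immediately instead of staying connected until the command ends.
marker=$(mktemp -u)   # path for a marker file (does not exist yet)
nohup sh -c "sleep 1; touch $marker" >/dev/null 2>&1 &
echo "launched, shell is free"   # prints immediately, before the touch happens
wait                              # (only for this demo: let the job finish)
```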
=== Shutdown summary ===

PowerChute on the opensuse VM triggers the script ''/opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/cedeusshutdown.sh''. This script in turn triggers other scripts, which are owned by the user ''c..pow..'', on:

* cedeusdb: ''cedeusdbshutdown.sh'' which shuts down:
*# Tilestream VM
*# CedeusDB
* cedeusgeonode: ''cedeusgeonodeshutdown.sh'' which shuts down:
*# GeoNode1204 VM
*# elgg VM
*# mediawiki VM
*# wikicedeus VM
*# opensuse132 VM
*# CedeusGeoNode

'''ToDo''': allow only ssh connections from this particular VM
== Complete VM Backups ==

On CedeusGeoNode:
* shut down the VMs before zipping them
* also copy the VM settings under ''/home/xxx/VirtualBox VMs/''
* to zip and copy:
# Wiki CEDEUS: <code>zip wikicedeus_vdi.zip wikicedeus.vdi</code>
# Elgg test: <code>zip elgg18pyp_vdi.zip elgg18pyp.vdi</code>
# Elgg with miCiudad: <code>zip elgg_vdi.zip elgg.vdi</code>
# GeoNode: <code>zip geonode1204b_vdi.zip geonode1204b.vdi</code>
# my wiki with documentation: <code>zip stefan_mediawiki_vdi.zip mediawiki.vdi</code>
# OpenSuse VM with UPS control software: <code>zip opensuse132_vdi.zip opensuse132.vdi</code>
# WalkYourPlace: <code>zip wypwps_vdi.zip wypwps.vdi</code>

On CedeusDB:
* perhaps the TileStream VM
* Nominatim: <code>zip nominatim_vdi.zip nominatim.vdi</code>
== ToDo List ==
* install a mail program to get notified about backups and syncs: see for instance http://klenwell.com/press/2009/03/ubuntu-email-with-nail/
* check how to use the RD1000
Latest revision as of 15:42, 11 August 2015

>> return to Cedeus_IDE


(Note, including UPS installation, this work took me 3 weeks, more or less full time - without the ToDo list of email notifications, and log rotation)

How to set up Automated Backups

The Objective of this exercise is to have an automated backup process of user-profiles and user contributed data, that is copied to a portable medium at least once a week.

General Workflow to Create the Backups

The backups contain several steps. Usually they consist of:

  1. create a script that contain commands to
    • create a database dump =or= tar/zip the files in a particular folder
    • copy this dump file or zip archive to another machine from where it can be easily copied to portable medium, i.e. tape
  2. create a cron tab entry that runs the backup script(s) at some set intervall, e.g. each night at 1am
  3. create a cron tab entry that triggers deletion of old backup files

Below now some personal notes on how to set things up:

Notifications

To get notified about the backups via email, a/the shell script may send emails via "mailx" - i.e Nail. => see http://klenwell.com/press/2009/03/ubuntu-email-with-nail/

Btw. postfix may work as well

=> ToDo: Install mail program

Example: cron Job that makes a Dump of the GeoNode DB

General infos on how to create a Cron tab can be found here: https://help.ubuntu.com/community/CronHowto

  • create a shell script that contains the pgdump instructions - see for example /home/ssteinig/pgdbbackup.sh on CedeusDB
  • test if script or script execution actually works. A simple script for testing may perhaps be this (/home/ssteinig/touchy.sh)
#!/bin/bash 
touch /home/ssteinig/ftw.text
  • create a cron-tab entry for user ssteinig with "crontab -e"
    then add entry such as "00 01 * * * sh /home/ssteinig/geonodegisdb93backup.sh" to run the dump script daily at 1am
    => when using the user "postgres" to do the db dump
  • check if cron is running: "sudo service cron status" otherwise start it...
  • to see what the cron tab contains use "crontab -l"
  • to check if a cron is executed check the log: sudo tail -f /var/log/syslog

Dump example script geonodegisdb93backup.sh

#!/bin/bash
logfile="/home/ssteinig/geonode_db_backups/pgsql.log"
backup_dir="/home/ssteinig/geonode_db_backups"
touch $logfile

echo "Starting backup of databases " >> $logfile
dateinfo=`date '+%Y-%m-%d %H:%M:%S'`
timeslot=`date '+%Y%m%d-%H%M'`
/usr/bin/vacuumdb -z -h localhost -U postgres geonodegisdb93  >/dev/null 2>&1
/usr/bin/pg_dump -U postgres -i -F c -b geonodegisdb93 -h 127.0.0.1 -f $backup_dir/geonodegisdb93-backup-$timeslot.backup
echo "Backup and Vacuum complete on $dateinfo for database: geonodegisdb93 " >> $logfile
echo "Done backup of databases " >> $logfile
# sstein: email notification not used at the moment
# tail -16 /home/ssteinig/geonode_db_backups/pgsql.log | mailx blabla@blub.cl

This example is based on the shell script posted here: http://stackoverflow.com/questions/854200/how-do-i-backup-my-postgresql-database-with-cron For a better Postgres dump script it may be worth to look here: https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

File transfer

To transfer files, I decided to create a new cedeus backup user on the receiving computer (20xxb...p).

A file transfer can be accomplished using scp or rsync e.g.:

  • "scp /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/"
    • However, an ssh key should be generated first so that no password needs to be provided. A detailed description can be found at: http://troy.jdmz.net/rsync/index.html. However, later on I used this description: http://blogs.oracle.com/jkini/entry/how_to_scp_scp_and .
    • in short, run "ssh-keygen -t rsa -b 2048 -f /home/thisuser/cron/thishost-rsync-key". But do not provide a passphrase when generating it, otherwise you will always be asked for it when establishing a connection.
    • Then copy the key to the .ssh/ folder of the user on the other server (e.g. using scp), and add it to the authorized_keys using "cat blabla_key.pub >> authorized_keys" (Note, authorized_keys should be chmod 600, and one may additionally restrict the incoming IP - see http://troy.jdmz.net/rsync/index.html).
    • Then we would use "scp -i /home/ssteinig/cron/thishost-rsync-key /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/"
    • note that it is probably necessary to initialize a server connection once (with whatever file), so that the connection gets an ECDSA key fingerprint.
  • for the use of rsync see the section below on "sync with CedeusGIS1"
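
The key-generation steps above can be tried out locally; the sketch below uses a throwaway directory instead of /home/thisuser/cron, and the final scp is shown only as a comment since it needs the remote host:

```shell
# Create a passphrase-less 2048-bit RSA key pair (-N "" = empty passphrase)
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$keydir/thishost-rsync-key"

# On the receiving server, the public key gets appended to authorized_keys:
cat "$keydir/thishost-rsync-key.pub" >> "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"

# The password-less copy would then be (not run here):
# scp -i "$keydir/thishost-rsync-key" /home/ssteinig/ftw.txt user@example.com:/home/backup_user/dbbackups/
```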

Performed CEDEUS Observatory Backups

A description of a test of how to back up and restore GeoNode data can be found under backup of geonode. That page was used as input for the backup details below.

Dump of the GeoNode DB - on CedeusDB

  • server: CedeusDB
  • cron job running nightly at 1:00am
  • using the script geonodegisdb93backup.sh
  • copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodedbbackups/

Dump of the GeoNode user db - on CedeusGeonode VM (13080)

  • server: CedeusGeoNode on geonode1204 VM
  • cron job running nightly at 1:10am
  • using the script geonodeuserdbbackup.sh
  • copies the PG dump file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodeuserdbbackups/

Tar/zip of the (uploaded) GeoNode file data and docs - on CedeusGeonode Vm (13080)

Data to backup

GeoNode settings and uploaded data change at different frequencies, or almost never. Hence it seems best to do a once-in-a-while backup of things that rarely change, and frequent backups of file uploads, styles, etc.

  • We do once-in-a-while backup of stuff that does not seem to change that much, such as:
    1. GeoNode config: "sudo tar -cvzf /home/ssteinig/geonodeConfigBackup.tgz /etc/geonode"
    2. Django language strings: "sudo tar -cvzf /home/ssteinig/geonodei18nBackup.tgz /usr/local/lib/python2.7/dist-packages/geonode/locale/"
    3. GeoNode www folder (including the static subfolder and data folder): "sudo tar -cvzf /home/ssteinig/geonodeWWWBackup.tgz /var/www/geonode/" (note, this also includes the GeoNode upload folders, which are backed up daily, see below)
    4. Eventually there are data in /var/lib/geoserver/geonode-data/, for instance the printing setup file config.yaml. So one should also do a once-in-a-while backup: "sudo tar -cvzf /home/ssteinig/geonodeDataBackup.tgz /var/lib/geoserver/geonode-data/"
    5. Image and satellite data that I stored under /var/www/geoserver/. Do this only once in a while as folders may be huge, e.g. the folder with the Sectra 2012 Santiago aerial images in GeoTIFF format (212 images) has a size of . Use: "sudo tar -cvzf /home/ssteinig/geoserverImageDataBackup.tgz /var/www/geoserver/"
      => The tar file can be created as a cron job [currently disabled due to the impractical file size of 222GB!!] on the 12th day of each month at 12:40 (noon), as the tar creation takes quite some time (so one cannot stay logged in and run it from the command line). The crontab command used is:
      40 12 12 * * tar -cvzf /home/ssteinig/geoserverImageDataBackup.tgz /var/www/geoserver/ > /home/ssteinig/imagetar.log
    => These tar files need to be copied by hand to CedeusGeoNode's /home/cedeusdbbackupuser/geonode_one_time_backup/, e.g. with "scp -i /home/ssteinig/.ssh/id_rsa /home/ssteinig/geoserverDataBackup.tgz cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup"
  • We will backup a couple of folders that can change frequently:
    1. GeoServer (i.e. rasters, gwc layers, map styles, etc.): "sudo tar -cvzf /home/ssteinig/geoserverDataBackup.tgz /usr/share/geoserver/data/"
      ... copied to /home/cedeusdbbackupuser/geoserverbackup/.
    2. GeoNode www-data uploads (i.e. raster data, pdfs, etc): "sudo tar -cvzf /home/ssteinig/geonodeWWWUploadBackup.tgz /var/www/geonode/uploaded/"
      ... copied to /home/cedeusdbbackupuser/geonodewwwuploadbackup/.
    => these two frequent backups are performed in the shell script geonodewwwdatabackup.sh (see below)
    => ToDo: it is not yet clear to me whether the frequent backups need to be run with sudo, i.e. sudo sh geonodewwwdatabackup.sh (or via the sudo crontab). When testing the tar file generation with and without sudo under my normal login (on 10 Dec. 2014), the resulting tar archives had the same size, indicating that the content was the same.
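
The two frequent tar backups described above boil down to the following pattern, shown here with throwaway folders; geonodewwwdatabackup.sh itself works on the real /usr/share/geoserver/data/ and /var/www/geonode/uploaded/ paths:

```shell
# Stand-ins for the real source and target folders
data_dir=$(mktemp -d)
backup_dir=$(mktemp -d)
echo "demo style" > "$data_dir/layer.sld"

# Create the tar archive (the real script may need sudo to read all folders)
tar -czf "$backup_dir/geoserverDataBackup.tgz" -C "$data_dir" .

# ...followed by an scp to CedeusGeoNode, e.g. (not run here):
# scp -i /home/ssteinig/.ssh/id_rsa "$backup_dir/geoserverDataBackup.tgz" \
#   cedeusdbbackupuser@146.155.17.19:/home/cedeusdbbackupuser/geoserverbackup/
```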

Running cron shell script

The shell script geonodewwwdatabackup.sh is used to create frequent copies of the GeoNode and GeoServer data files. The tar commands themselves, in the script, are not run with sudo, as this would require typing the credentials. Instead the whole script should be run using "sudo" to get access to all the data folders. ToDo: However, as noted above, a test with my standard login showed no difference in tar file size between using sudo and not using it. Hence, I shall execute the script from my personal crontab instead of the admin/root crontab.

To copy the tar files to CedeusGeoNode server with scp we use the ssh login credentials that were already established for the GeoNode userdb backup.

Tar backup summary

  • server: CedeusGeoNode on geonode1204 VM
  • cron job running nightly at 1:20am
    • using the script geonodewwwdatabackup.sh
    • copies the geoserver-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geoserverbackup/
    • copies the geonode-data tar file to CedeusGeoNode into folder /home/cedeusdbbackupuser/geonodewwwuploadbackup/
  • requires manual tar ball creation and copying to CedeusGeoNode of
    • geonodeConfigBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • geonodei18nBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • geonodeWWWBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/
    • perhaps: geonodeDataBackup.tgz with copy to /home/cedeusdbbackupuser/geonode_one_time_backup/

Backup of Elgg miCiudad - on CedeusGeonode VM (15080)

the official Elgg backup guide: http://learn.elgg.org/en/1.9/admin/backup-restore.html

Data to backup

  • the elgg database as mysql dump
  • the elgg web folder as tar
  • the elgg data folder as tar => the files in its subfolders (e.g. in /elggdata/1/39/file/) cannot be accessed by the backup user sst... They are owned by the www-data user. This problem needs to be solved when creating the tar.

This does not work yet => To be able to back up the elgg data directory I needed to grant my backup user (sst...) access rights to this folder, or use sudo. The Elgg data directory is owned by www-data, so I added my user to this group using sudo usermod -a -G www-data ssteinig - see also http://www.cyberciti.biz/faq/ubuntu-add-user-to-group-www-data/ . However, I had no success.

=> Hence, I am running the script as root in the root crontab instead - with sudo crontab -e .

Elgg miCiudad backup summary

  • server: CedeusGeoNode on elgg VM (15080)
  • cron job running nightly at 1:45am
  • using the script createmiciudadbackup.sh (run in root crontab)
  • copies the three files to CedeusGeoNode into folder /home/cedeusdbbackupuser/miciudadbackups/

Backup of Elgg (Observatory Homepage) - on CedeusGeonode

Data to backup

  • elgg DB: elgg
  • elgg docs: /var/www/html/elgg1-11
  • the elgg data folder is under /usr/share/elgg/elggdata

Elgg HP backup summary

  • server: CedeusGeoNode
  • cron job running nightly at 1:35am
  • using the script home/MyUser/backupscripts_inuse/createelggbackup.sh (run in root crontab; reason see above)
  • copies the three files to CedeusGeoNode into folder /home/cedeusdbbackupuser/elgghpbackups/

MySQL dump for Mediawiki(s) - on CedeusGeonode VM (22080 vs. 21080)

the official Mediawiki backup guide: http://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki

Before writing the backup scripts, I actually changed the root passwords for the mysql DBs using UPDATE mysql.user SET Password=PASSWORD('foobar') WHERE User='tom' AND Host='localhost';. Note, after changing the root password one needs to restart the mysql service or run FLUSH PRIVILEGES; right away. However, it's probably even better to create a dedicated backup user for the mysql dumps. (see also http://www.cyberciti.biz/faq/mysql-change-user-password/)

Data to backup

what do we need to backup:

  • database: via a mysql dump, e.g. gzipped for a smaller file: mysqldump -h hostname -u userid --password dbname | gzip > backup.sql.gz
  • uploaded data/images/extensions etc in /var/www/html/wiki/: create a tar ball
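
The two items combine into a short script along these lines. Database name and paths are placeholders; the mysqldump line is commented out since it needs live credentials, and a stand-in dump is used so the gzip pipeline itself can be exercised:

```shell
backup_dir=$(mktemp -d)          # stand-in for the real backup folder
timeslot=$(date '+%Y%m%d-%H%M')
dumpfile="$backup_dir/wikidb-$timeslot.sql.gz"

# Real call (needs a live server and, ideally, a dedicated backup user):
# mysqldump -h localhost -u wikibackup --password='...' wikidb | gzip > "$dumpfile"
printf -- '-- demo dump\n' | gzip > "$dumpfile"   # stand-in output

gzip -t "$dumpfile"   # verify the archive is intact before shipping it off

# Plus the tar ball of the wiki folder (not run here):
# tar -czf "$backup_dir/wiki-$timeslot.tgz" -C /var/www/html wiki
```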

Mediawiki backup summary

CEDEUS Wiki

  • server: CedeusGeoNode on wikicedeus VM (22080)
  • cron job running nightly at 1:15am
  • using the script createcedeuswikibackup.sh
  • copies the two files to CedeusGeoNode into folder /home/cedeusdbbackupuser/cedeuswikibackups/

Stefan's Wiki

  • server: CedeusGeoNode on mediawiki VM (21080)
  • cron job running nightly at 1:40am
  • using the script createmywikibackup.sh
  • copies the two files to CedeusGeoNode into folder /home/cedeusdbbackupuser/stefanwikibackups/

Synchronization of backup files between CedeusGeoNode and CedeusGIS1

this file sync should serve to:

  • have a second backup location
  • to make copies of the backup files to a portable drive (via USB) or/and to the Dell RD1000

To perform the folder synchronization we will use "rsync" tool. For an introduction to rsync see http://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps

Sync summary

  • from server CedeusGeoNode to CedeusGIS1
  • cron job running nightly at 3:00am
    => Note, I had to move the backup time to 3am, because the wiki backups now run after 2am despite being scheduled for 1:40am. Perhaps this happens due to automatic time zone adjustment on the server/Wiki VMs?
  • using the script syncwithcedeusgis1.sh run by backup-user
  • synchronizes backup files to CedeusGIS1 with folder /home/ssteinig/backups_cedeusservers/ => sync means: deleted files on the source are also deleted at the target (but not vice versa)

Deletion of old files

Examples

An example of finding files older than a specific number of days that follow a particular naming pattern is

find $BACKUP_DIR -maxdepth 1 -mtime +$DAYS_TO_KEEP -name "*-daily"

taken from http://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

A shorter version is:

find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5

This finds all(!) files in the given folder that are older than 5 days. The search does not include subfolders, as the -maxdepth parameter is set to "1".

To delete the found files, one appends -exec rm ... as in this example:

find /home/cedeusdbbackupuser/geonode_one_time_backup/ -maxdepth 1 -mtime +5 -exec rm -rf '{}' ';'
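
The effect of this command can be checked safely in a throwaway folder (touch -d, from GNU coreutils, backdates a file's modification time; -type f is added here so the folder itself can never match):

```shell
backup_dir=$(mktemp -d)
touch -d '10 days ago' "$backup_dir/old.backup"   # older than the 5-day cutoff
touch "$backup_dir/new.backup"                    # fresh file, should survive

find "$backup_dir" -maxdepth 1 -type f -mtime +5 -exec rm -rf '{}' ';'
```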

File deletion realized

  • GeoNode Database on CedeusDB : script removeolddbbackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to log file.
  • All backups on CedeusGeoNode (as backupuser): script removeoldbackups.sh deletes files older than 7 days - except for files in folder geonode_one_time_backup. Crontab running every day 0:30 am (before any backup). Writes to sync.log log file.
  • GeoNode user db and tar files on GeoNode1204 VM: script removeoldgeonodedatabackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to the 2 different log files.
  • Mediawiki / Stefan's wiki on MediaWiki VM: script removeoldstefanwikibackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to log file.
  • Cedeuswiki on WikiCedeus VM: script removeoldcedeuswikibackups.sh deletes files older than 7 days. Crontab running every Tuesday 3am. Writes to log file.
  • Elgg on Elgg VM: script removeoldelggbackups.sh deletes files older than 7 days. Root Crontab running every Tuesday 3am. Writes to log file.
  • deactivated (as I am using rsync with delete option): All backups on CedeusGIS1 : script removeoldserverbackups.sh deletes files older than 7 days. Crontab running every day 3am.

Deletion of log files / log rotate

Log files are deleted via a crontab entry. This happens on the first day of the month at 18:00 for: cedeusgeonode, geonode1204 VM, mediawiki VM, wikicedeus VM, elgg VM (using sudo crontab!), cedeusdb
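
Such a crontab entry could look like the following sketch (the log path is illustrative; truncating with "> file" keeps the file in place for the scripts that append to it):

```
# At 18:00 on day 1 of every month, empty the backup log
0 18 1 * * > /home/ssteinig/geonode_db_backups/pgsql.log
```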

Installation of APC Smart UPS RT3000V

It would be nice if the servers were shut down in case the UPS battery runs out of power. Therefore it is best to install control software that communicates with the APC SURTD3000. The delivered software is named PowerChute, but unfortunately it is only available for Suse, RedHat (rpm), Windows systems, etc., and not for Ubuntu/Debian based systems (see here). So the options are:

  1. converting the *.rpm to a *.deb - but this was without much success.
  2. to use apcupsd - but unfortunately the RT3000 model does not come with the newer open modbus control protocol, only with a proprietary one. However, I could still try a firmware update to enable communication with apcupsd via modbus.
  3. install a VirtualMachine with OpenSUSE.
  4. buy the additional APC network card for a whopping 300 US$ - given that the RT3000VA already costs 1700 US$, this is kind of a scam!

Hence I tried option 3 - communication via the original PowerChute software installed on an OpenSUSE Virtual Machine (as I was already running VMs).

For this variant it is necessary to route the serial port between the host server and the VM. How I did this is described in CEDEUS Server Setup.

I shall note that the UPS was actually connected to serial connector ttyS1 on the host machine (and the VM) ... so not ttyS0

To install PowerChute on the OpenSuse 13.2 VM, I did the following:

  1. copied powerchutes rpm to the VM
  2. navigated to folder with install_pbeagent_linux.sh
  3. ran the sh file, choosing the following settings
    • 2 : RJ45 connection
    • 2 : NO (= no Share UPS, Interface Expander or Simple Signaling)
  4. the chosen user and password were the usual ones
  5. selected /dev/ttyS1 as serial port, as this was the only port I had installed for the VM anyway
  6. opened a web browser in the opensuse VM at http://localhost:3052
    => this did forward me actually to the https connection address https://10.0.2.15:6547/

Notes:

  • The PowerChute Agent server can be started using /etc/init.d/PBEAgent start , and stopped with /etc/init.d/PBEAgent stop.
  • The PowerChute files are copied into /opt/APC/PowerChuteBusinessEdition/Agent/
  • To uninstall use rpm -e pbeagent
  • To communicate with the Server or Console, unblock port 2161

Debugging Serial Port

http://www.tldp.org/HOWTO/Serial-HOWTO-16.html

When trying to connect with minicom, I got the message that no lockfile could be created for /dev/ttyS0 (permission denied). To check what is going on:

  • inspect the current lock file: vim /var/lock/LCK..ttyS0
  • I found there a process number (2221), which I looked up with ps 2221. This returned:

 PID TTY      STAT   TIME COMMAND
2221 ?        Sl    23:19 /bin/java/jre/1.6.0_37/bin/java -Dpicard.main.thread=blocking -classpa...

  • so, the PowerChute agent was already blocking/using this port for communication. So I stopped the PowerChute agent server using /etc/init.d/PBEAgent stop
  • I also ran sudo lsof /dev/ttyS* to see which ports are open. The result was:

COMMAND     PID     USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
apcupsd    1197     root    7u   CHR   4,64      0t0 1114 /dev/ttyS1
VBoxHeadl 24174 ssteinig   19u   CHR   4,64      0t0 1114 /dev/ttyS1

  • so, I saw that apcupsd was actually using the port (as I had installed it before). Hence, I stopped the program with sudo service apcupsd stop and checked again with sudo lsof /dev/ttyS*, which showed that only VBox now used the port...

=> Hence, I rebooted the OpenSUSE VM, after which the PowerChute server ran again...

Script to run by PowerChute

Basics

PowerChute can run a script to shut down certain programs if battery power is low.

Therefore, go in the web interface to Shutdown > Shutdown Settings > and look for the Operating System and Application Shutdown section. Placing a file into the folder /opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/ makes it available in the drop-down list.

However, the script needs to be executable by the application. That means I did a chmod 755 script.sh so that it can be executed by PowerChute. Note, it seems the script is executed as root.

A test script may look like this:

#!/bin/sh
touch /home/ssteinig/ftw.txt
ping 127.0.0.1 -c 5 | cat > /home/ssteinig/pingtest

Next, one needs to write a script that connects to the other VMs and shuts them down, e.g.:

ssh user@remote_computer sudo poweroff

(from http://ubuntuforums.org/showthread.php?t=2093192)

The problem is that this requires "sudo", and hence entering a sudo password. To solve this, use "visudo" as described here: http://sleekmason.wordpress.com/fluxbox/using-etcsudoers-to-allow-shutdownrestart-without-password/ and further below.

The script that gets run on low power is:

  • /opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/cedeusshutdown.sh

Shutting down or running a script remotely

I created on each machine a new user with root privileges to run the scripts that perform server and VM shutdowns. To transfer the public ssh key file I needed to specify the port for the VM access with a capital "-P", e.g. "scp -P 17022 /root/.ssh/id_rsa.pub ced-user@146.155.17.19:/home/ced-user/". A file transfer that worked with the ssh key was: "scp -P 17022 -i /root/.ssh/id_rsa /home/ssteinig/pingtest.txt cedeuspoweroffuser@146.155.17.19:/home/cedeuspoweroffuser/"

Infos on the shutdown command itself can be found here: http://www.computerhope.com/unix/ushutdow.htm . The best option to stop the servers is sudo shutdown -h now (or some delay like "+1" for in 1 minute). However, to avoid being prompted for a password, one needs to run sudo visudo and then add

  • under # Cmnd alias specification the line Cmnd_Alias SHUTDOWNCNMDS = /sbin/shutdown, /sbin/reboot, /sbin/halt
  • under # Members of the admin group may... a line like username ALL = NOPASSWD: SHUTDOWNCNMDS
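
Assembled, the relevant /etc/sudoers fragment (edited only via visudo) would look like this, with "username" standing for the shutdown user:

```
# Cmnd alias specification
Cmnd_Alias SHUTDOWNCNMDS = /sbin/shutdown, /sbin/reboot, /sbin/halt

# Members of the admin group may gain root privileges
username ALL = NOPASSWD: SHUTDOWNCNMDS
```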

The shutdown command that I use in "a" script looks like this then:

ssh -i /root/.ssh/id_rsa -p 17022 -t cedeuspoweroffuser@146.155.17.19 sudo shutdown -h +1

However, for CedeusGeoNode and CedeusDB I have written scripts, started from the opensuse VM, which first shut down all the VMs properly and then power off the server.

Important: the commands in the script run by the opensuse VM need to be run detached. Otherwise the console stays connected and, perhaps(?), I cannot shut down the VM host server because the VM keeps controlling it... or so. For how to detach see: http://unix.stackexchange.com/questions/30400/execute-remote-commands-completely-detaching-from-the-ssh-connection e.g. by using the "nohup" argument.
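
The detach pattern itself can be illustrated locally; the real call wraps the remote shutdown command from above (shown as a comment) instead of the harmless stand-in:

```shell
# Remote form (not run here):
# ssh -i /root/.ssh/id_rsa -p 17022 cedeuspoweroffuser@146.155.17.19 \
#   'nohup sudo shutdown -h +1 >/dev/null 2>&1 &'

# Local stand-in exercising the same pattern: nohup plus & detaches the
# command from the terminal, so the ssh session (and the controlling VM)
# can go away while the command keeps running.
marker=$(mktemp)
nohup sh -c "echo detached > '$marker'" >/dev/null 2>&1 &
wait   # only here so the sketch can be checked; the real script does not wait
```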

Shutdown summary

PowerChute on the opensuse VM triggers the script /opt/APC/PowerChuteBusinessEdition/Agent/cmdfiles/cedeusshutdown.sh. This script in turn triggers other scripts, owned by the user c..pow.., on:

  • cedeusdb: cedeusdbshutdown.sh which shuts down:
    1. Tilestream VM
    2. CedeusDB
  • cedeusgeonode: cedeusgeonodeshutdown.sh which shuts down:
    1. GeoNode1204 VM
    2. elgg VM
    3. mediawiki VM
    4. wikicedeus VM
    5. opensuse132 VM
    6. CedeusGeoNode

ToDo: allow only ssh connections from this particular VM

Complete VM Backups

on CedeusGeoNode:

  • shut down the VMs before zipping them
  • also copy the VM settings under /home/xxx/VirtualBox VMs/
  • to zip and copy:
  1. Wiki CEDEUS: zip wikicedeus_vdi.zip wikicedeus.vdi
  2. Elgg test: zip elgg18pyp_vdi.zip elgg18pyp.vdi
  3. Elgg with miCiudad: zip elgg_vdi.zip elgg.vdi
  4. GeoNode: zip geonode1204b_vdi.zip geonode1204b.vdi
  5. my wiki with documentation: zip stefan_mediawiki_vdi.zip mediawiki.vdi
  6. OpenSuse VM with UPS control software: zip opensuse132_vdi.zip opensuse132.vdi
  7. WalkYourPlace: zip wypwps_vdi.zip wypwps.vdi

on CedeusDB:

  • perhaps TileStream VM
  • Nominatim: zip nominatim_vdi.zip nominatim.vdi

ToDo List