[hobbit] Backing up hobbit

Stef Coene stef.coene at docum.org
Fri Oct 19 09:13:51 CEST 2007


On Thursday 18 October 2007, Josh Luthman wrote:
> I've only had Hobbit running since last Monday.  I have restarted it twice
> to ensure that my configurations would take place (things like changing the
> WWW hostname).  I last restarted it yesterday and it has been running since
> yesterday, so I know if it is restarting it takes more than a day.  I have
> 40787 total core* files in ~/server and 569364 total core* files in
> ~/server/bin - couldn't possibly have restarted that many times!
Look at the timestamps of these files.  Each crash can create a core file, so 
every visit to the hobbit web site, every poll hobbit does, and every rrd 
update can crash and leave a core file behind.  I have never had a crash or 
core file myself, but in theory it can happen.
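As a sketch of that timestamp check (the function name and the
~hobbit/server path are my own examples, not part of any hobbit tool):

```shell
# newest_cores DIR -- list the most recent core files in DIR.
# If the timestamps cluster around page views, the poll interval, or
# rrd updates, that points at which component is crashing.
newest_cores() {
    ls -lt "$1"/core* 2>/dev/null | head -20
}

# e.g. newest_cores ~hobbit/server/bin
```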
We also use vmware, so if a hobbit server goes down, I copy the vmware guest 
that I use to deploy new installations, copy over the etc directory, go to 
the customer, pick a computer/desktop/laptop/server, install vmware player, 
and hobbit is running again.

> Stef - If you have two Hobbit servers and duplicate your actions, why do
> you note your actions?  My original plan was to tar the home directory of
> the hobbit user, but as
I don't have 2 hobbit servers, but more than 20, located at our customers.  
The bare minimum I need to re-create the same setup is the contents of etc 
and some extra information I collected during the installation (hostname, 
network settings, ...).
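A minimal way to capture that bare minimum is to archive just the etc
directory.  This helper is my own sketch (function name and paths are
assumptions, adjust to your install):

```shell
# snapshot_etc SRCDIR DESTFILE -- tar up SRCDIR/etc, the part needed
# to rebuild an identical hobbit server on a fresh installation.
snapshot_etc() {
    tar czf "$2" -C "$1" etc
}

# e.g. snapshot_etc ~hobbit/server /backup/hobbit-etc-$(date +%F).tar.gz
```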

> "Hobbit User" - I could use rsync and it would make backups though I
> normally don't use rsync as I like to have daily backups, in case I make a
> mistake on Monday, the backup is done Tuesday and I catch it on Wednesday -
> I can revert to Sunday with daily backups.  Rsync could have backed up my
> problem, making it useless in this scenario!  I have scripts that back up
> necessary components (like databases) and then finally tar with gzip
> compression and then SCP the file to a remote data center (I also use
> public keys to automate this).  I have found this works very well in my
> situation and has saved my life in the case of a MySQL database crash!
You don't have to rsync everything in the same way.  If you look at the hobbit 
server data, the stuff in the data directory takes up 99% of the disk space.  
That data can be rsync'd and overwritten daily.  For the server 
installation you can also use rsync, but do something like this:
rsync -auhv --delete ~hobbit/server/ <remote>:/backup/hobbit/server-`date +%a`
Every day of the week this writes to a different directory, so you keep a 
history of 7 days.
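The same rotation idea, wrapped in a function so it can be tried against
a local destination first (the function name, the --exclude of core
files, and the paths in the comment are my additions):

```shell
# backup_server SRC DEST -- rsync SRC into DEST/server-<weekday>.
# `date +%a` gives Mon, Tue, ... so the backup rotates over 7
# directories, one per weekday.
backup_server() {
    day=$(date +%a)
    mkdir -p "$2/server-$day"
    # -a archive mode, -u skip files newer on the receiver,
    # --delete drop files removed from the source,
    # --exclude keep the crash dumps out of the backup.
    rsync -auhv --delete --exclude='core*' "$1/" "$2/server-$day/"
}

# e.g. backup_server ~hobbit/server /mnt/backup/hobbit
```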

> Would it be safe for me to delete these core files and start working on
> this task from this day forward?  What can I use to read these core files? 
> I noticed they're not text files so I assume there is some bb utility to
> read them.  With the exception of these core* files, I would expect Hobbit
> to peak at 200MB which I could do in a ~3 minutes
You can delete the core files, but you should also try to find out why they 
are created.  If you use rsync, you can exclude these core files from being 
rsync'd.
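To find out why they are created, you can inspect a core file directly; 
it is a crash dump, not text, so there is no bb utility for it.  A sketch, 
assuming gdb is installed and using made-up file names:

```shell
# inspect_core CORE BINARY -- identify and backtrace a crash dump.
inspect_core() {
    # 'file' names the program that dumped the core, e.g.
    # "core file ... from 'hobbitd'".
    file "$1"
    # gdb prints a backtrace of where that program crashed.
    gdb -batch -ex bt "$2" "$1"
}

# e.g. inspect_core ~hobbit/server/core.12345 ~hobbit/server/bin/hobbitd
```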


Stef



More information about the Xymon mailing list