Thursday 12 April 2012

Modifying synbak to use lzop or xz for compression

For backing up a server I installed the synbak program on CentOS, partly because it was a standard package from rpmforge, and partly because it is uncomplicated to use: basically just shell scripts. I noticed that when I used gzip as the compression method, it tied up a whole CPU doing the compression, so I decided to add lzop support. It's very easy: just add these two lines to /usr/share/synbak/method/tar/tar.sh:

--- /usr/share/synbak/method/tar/tar.sh.orig    2010-06-10 05:18:14.000000000 +1000
+++ /usr/share/synbak/method/tar/tar.sh 2012-04-12 09:46:36.000000000 +1000
@@ -30,6 +30,8 @@
        tar)    backup_method_opts_default="--totals -cpf"  ; backup_name_extension="tar" ;;
        gz)     backup_method_opts_default="--totals -zcpf" ; backup_name_extension="tar.gz" ;;
        bz2)    backup_method_opts_default="--totals -jcpf" ; backup_name_extension="tar.bz2" ;;
+       lzo)    backup_method_opts_default="--use-compress-program=lzop --totals -cpf" ; backup_name_extension="tar.lzo" ;;
+       xz)     backup_method_opts_default="--use-compress-program=xz --totals -cpf" ; backup_name_extension="tar.xz" ;;
        *)      report_text extra_option_wrong && exit 1 ;;
   esac


You run it like this: synbak -s serverconfig -m tar -M lzo

I added xz support for good measure, but that part is untested. Make sure that backup_method_opts in your config file doesn't contain -z, or tar will throw an error about conflicting compression options.
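For reference, here is roughly what the patched lzo case makes tar do, as a standalone sketch you can try. I use gzip as the external compressor here since it is installed everywhere and the mechanics are identical; substitute lzop or xz. The /tmp paths are just for the demo.

```shell
# Make some test data to archive.
mkdir -p /tmp/synbak-demo/data
echo "hello" > /tmp/synbak-demo/data/file.txt

# Archive via an external compressor, the same mechanism the new
# lzo/xz cases use. Note: no -z here -- combining -z with
# --use-compress-program is exactly the conflict that makes tar fail.
tar --use-compress-program=gzip --totals \
    -cpf /tmp/synbak-demo/backup.tar.gz -C /tmp/synbak-demo data

# List the archive; for reading, tar runs the program with -d.
tar --use-compress-program=gzip -tf /tmp/synbak-demo/backup.tar.gz
```

The same --use-compress-program trick works with any filter-style compressor, which is why adding xz alongside lzop was a two-line change.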


Naturally you must have the lzop and/or xz packages installed.
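On CentOS that means something like the following (lzop came from rpmforge/EPEL rather than the base repositories at the time; exact package names may vary by release):

```shell
# Install the external compressors tar will call.
yum install lzop xz
```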


Using lzop, tar and lzop took about equal amounts of CPU time, tar was I/O bound most of the time, and backups finished faster. The memory footprint of lzop is also small. The downside is that the compressed output is larger, about 31% more in my case. Decompression is supposed to be very fast, which would be useful when restoring; I haven't needed it yet and hope I never will.
