Saturday, 14 December 2013

Adjusting the service shutdown sequence on Debian

I manage a few virtual Debian (Wheezy) machines that mount an iSCSI volume. These hold websites and mysql data.

The problem is that in the default shutdown sequence, umountiscsi.sh is called before cherokee (or apache2) and mysql are shut down, so the umount fails because the filesystem is busy and the machine doesn't shut down cleanly from the CLI, requiring a reset at the VMware console.

I searched around for a way to adjust the dependency-based boot and shutdown sequence, and after reading a lot of web and manual pages I finally understood how to do it.

First edit /etc/init.d/cherokee (or /etc/init.d/apache2) and /etc/init.d/mysql so that the line:

# Required-Stop:

contains umountiscsi.sh

The logic of Required-Stop is that the scripts named on the line (here umountiscsi.sh) must only run at shutdown after the current service (web server, database) has been stopped.
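For example, after editing, the LSB header block in /etc/init.d/mysql might look something like this (the existing Required-Start/Default-* values will be whatever your package shipped; only the addition of umountiscsi.sh to Required-Stop matters here):

### BEGIN INIT INFO
# Provides:          mysql
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog umountiscsi.sh
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start and stop the mysql database server daemon
### END INIT INFO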

Now run:

insserv -d -v umountiscsi.sh

Despite the warnings on the man page that this is too low level, this is the right command to use. It will rebuild the symlinks in the runlevel directories as well as regenerate the files /etc/init.d/.depend.{boot,start,stop}. Next time you shut down, the web server and database will be stopped before the iSCSI volume is unmounted.

This assumes that the default start and stop sequences expressed in the LSB INIT INFO headers are correct for you. If you have added services manually using update-rc.d at particular runlevels and sequence points, then I cannot guarantee that you will get the desired result.

Saturday, 7 December 2013

Shrinking an ext4 filesystem from the CentOS 5.x CD/DVD in rescue mode

Unfortunately this cannot be done, because the resize4fs program is missing from the rescue filesystem. You'll have to use a rescue distro, for example gparted-live, where ext4 support is already incorporated into an up-to-date resize2fs. CentOS/RHEL 5 ships with an old resize2fs, and the ext4 capabilities are in separate programs.

If it were a matter of expanding the filesystem this could be done on a live system.

If it were not the root filesystem, you might be able to unmount it and do the resize on the running system.

If it were not ext4 you could use resize2fs.

In any case a shrink involves downtime for that filesystem.
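For reference, a minimal sketch of the shrink from a rescue distro, assuming the filesystem is on /dev/sda2 and 20G is the target size (both placeholders). Remember to shrink the filesystem before shrinking any partition or LV underneath it, and leave a safety margin:

e2fsck -f /dev/sda2       # a full check is required before shrinking
resize2fs /dev/sda2 20G   # needs a resize2fs new enough to handle ext4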

There is however a fsck.ext4 in the rescue filesystem so you can still do fscks in rescue mode.

Friday, 11 October 2013

DOS 6.22 printing to Samba fails with Path not found

My workplace wanted an NT server for DOS 6.22 clients replaced by a Samba server. No problem. Disk shares worked fine. But when it came to printing, this happened:

>net use lpt1 \\samba-server\printername
>copy con lpt1
Testing
^Z

Blah blah Path not found

I looked at the log file for the client in question and found these lines:

[2013/10/01 14:02:22.820899,3] smbd/vfs.c:905(check_reduced_name) check_reduced_name [DEV/LPT1] [/var/spool/samba]
[2013/10/01 14:02:22.820941,3] smbd/vfs.c:944(check_reduced_name) check_reduce_name: couldn't get realpath for DEV/LPT1 (NT_STATUS_OBJECT_PATH_NOT_FOUND)

Ah, was it trying to create a file called DEV/LPT1 in /var/spool/samba? But there is no subdirectory called DEV. So I did this:

# cd /var/spool/samba
# ln -s . DEV

Voila, printing worked from DOS. I can only surmise that DOS sends the whole path rather than just LPT1 to Samba or indeed any SMB server.
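For completeness, the print share on the Samba side was nothing special; a generic stanza along these lines (not my exact config) is enough:

[printers]
   comment = All printers
   path = /var/spool/samba
   printable = yes
   browseable = no
   guest ok = yes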

Hope this helps you if you need to convert any DOS clients to use Samba instead of NT.

Thursday, 5 September 2013

Dumont d'Urville time?

When adding a new entry into Google Calendar on my Android phone, I noticed it defaulted to Dumont d'Urville time (UTC+10), although Sydney/Melbourne was also offered. What?!

A search shows that this is the time zone of a French Antarctic station which happens to be in the same time zone as the Eastern Australian seaboard. When I went into settings and chose Use Home Time Zone, all was good again. Pretty bizarre but amusing bug. I'm glad my appointments are in warmer climes.

It seems somebody has hit this bug before. Maybe it's because my phone has a rather old version of Android.

Sunday, 1 September 2013

Installing phpMyAdmin 3 on RHEL/CentOS 5.9 running PHP 5.3

If you are running the PHP 5.3 (php53-*) packages on RHEL/CentOS 5.9 and try to install the phpmyadmin package from CentOS Base or RPMForge, you will get messages about installing the ancient PHP 5.1 packages. This was not what I wanted to happen, so I looked for a ready made phpmyadmin package that would work with PHP 5.3. I was prepared to install from a phpMyAdmin tarball, but I hoped to avoid that because an official package would stay up to date with patches.

It wasn't well publicised, but the answer is to add the EPEL repository by downloading the epel-release RPM from here and installing it. After that you can do yum install phpMyAdmin3 (the case is important). This is the package you want.
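In short, something like this (the epel-release URL and version below are placeholders; use whatever the current EL5 package is):

rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
yum install phpMyAdmin3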

Saturday, 10 August 2013

The Russians are not that interested in you(r blog)

When I started publishing on this blog and others years ago, I was initially gratified to see in the statistics referrals from certain sites. Of course, my vanity was punished when I discovered that these sites had nothing at all about my posts.

Some searching showed it's a practice called referer spam. No human has actually read your post for those hits. A web bot has fetched your page, but supplied a fake referral URL. So when you look at your statistics you think somebody has linked to you. You click on the link and the spammers get another page view.

The most blatant fake referrals had query terms promoting dubious products. The less blatant ones had query terms with a unique ID that let the site know you had clicked on the link, so it could serve up some spam after that. Some spammers used URL shorteners to conceal the source of the fake referral. At some point about a third of my "hits" were from Russia. I don't think I have that many readers there, even though I provide a translation button. It also seems they have a way of detecting when a new blog has started up, so they can target a fledgling blog. After I had published the last post of a travel blog, they lost interest in it.

What can you do about these fake hits? Basically nothing. Just ignore the fake hits in your statistics. Don't click on the links. Don't reward them with traffic.

By the way, it isn't just the Russians doing it; there is one Korean site that attempts to get page views this way. Also, some spammers use servers or hijacked PCs in the US to make it look like the hits came from there.

Sunday, 14 July 2013

RAID UUID != filesystem UUID

I had copied a RAID filesystem onto larger disks using a second computer and wanted to mount it on the original computer with minimal disruption. To this end, I made the UUID of the new RAID array the same as that of the previous RAID, using the assemble command of mdadm like this:

mdadm -A /dev/md1 -U uuid --uuid=<old RAID UUID> /dev/sdb3 /dev/sdc3

(If the array is currently assembled, you have to stop it first with mdadm -S /dev/md1.)

This worked fine and I was able to assemble the RAID on the second computer using the /etc/mdadm.conf from the first computer.

At the same time I decided to change to mount by UUID, so I edited /etc/fstab to look like this:

/dev/disk/by-uuid/<RAID UUID> /home xfs defaults 1 2

Alas when I booted, the RAID array wouldn't mount and I had to reboot in rescue mode to fix it up.

I wondered why there was a discrepancy between the symlink to /dev/md1 in /dev/disk/by-uuid and the RAID UUID. Eventually, after some red herrings, I understood: RAID UUID != filesystem UUID. They are distinct UUIDs. The filesystem UUID is that of the XFS (or ext4, btrfs, whatever) filesystem on the array, and is what blkid reports when udev runs it to populate the by-uuid directory. The RAID UUID, on the other hand, is what mdadm uses to match against entries in /etc/mdadm.conf when assembling the array.
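The two UUIDs are easy to compare; these commands show which is which (assuming the array is /dev/md1):

blkid /dev/md1                        # filesystem UUID, used by /etc/fstab and /dev/disk/by-uuid
mdadm --detail /dev/md1 | grep UUID   # RAID UUID, matched against ARRAY lines in /etc/mdadm.conf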

The second lesson I learnt (again) is that you should not change two things in your config at once. I should have booted the machine up with /dev/md1 in /etc/fstab and then explored by-uuid later.

I could have kept the random RAID UUIDs generated by mdadm for the new disks and just run mdadm --examine --scan > /etc/mdadm.conf in rescue mode to regenerate the config, but I want to be able to swap in the old RAID disks in an emergency, and making the RAID UUIDs the same is one less step.

Thursday, 27 June 2013

A caching scheme to reduce downloads for Linux package updates

As I administer sites with a lot of (nearly) identical Linux hosts, I have thought a lot about reducing the amount of data downloaded for package updates. Even if your site has an unlimited data quota, downloading the same updates multiple times for multiple machines takes time.

I looked at various schemes suggested on the Internet. One is to set up a satellite repository. This is reliable but involves a lot of configuration. Also you may download a lot of package updates that are not used at your site.

Another class of scheme involves interposing a proxy. It could be a specialised proxy for the distro, or a general proxy like squid. This can work well but is sometimes defeated by load balancers or smart mirrors, where you are not guaranteed to hit the same repository host on every download.

Eventually I came up with a scheme that works on the three distros I have tried it on: CentOS, Debian (and derivatives) and openSUSE. The key observation is the fact that the updaters in these distros have a package cache. If packages are added to the cache on the side, the updater will skip downloading them. I describe this as a caching scheme, not a proxying scheme, as no change is made to the updater configuration files and normal, independent updates still work.

The scheme:

There are three parts to the scheme.

1. A download script runs on a designated master host during off-peak hours. We use the updater program in download-only mode so we only fetch packages that are needed. It's also necessary to set the repository to retain packages.

2. A synchronisation script runs on the other hosts to pull the new updates to their cache. This is run as soon as possible after the download.

3. Various utility scripts are used to fire off updates on all hosts. I make use of parallel ssh and ssh-agent to make life easy.

Let's use a concrete example to illustrate this.

CentOS:

1. Download:

The download script is this:

yum --disableexcludes=updates --downloadonly -y update

You need to install the auxiliary package yum-downloadonly. The --disableexcludes=updates option disables the exclude patterns that prevent installation of the kernel on this host. On this host I disabled kernel updates because the VirtualBox modules need recompiling and the machine needs rebooting whenever the kernel is updated, and as this is a server that engineers use constantly, I delay kernel updates until downtime can be scheduled to do them manually. Other software that may need recompilation on a kernel update includes proprietary video drivers.

To set the repository to retain packages, add

keepcache=1

to /etc/yum.conf
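The download then just needs a cron entry on the master; the file name and schedule below are only illustrative:

# /etc/cron.d/yum-download: fetch updates nightly at 02:00 on the master
0 2 * * * root yum --disableexcludes=updates --downloadonly -y update >/dev/null 2>&1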

2. Synchronisation:

Next I set up an rsync server. Initially I used rsync over ssh, but this requires setting up a login account, restricted commands, etc. for security reasons. Package updates are not sensitive data, so I just share the cache directory with rsyncd. Please consult your distro documentation on how to enable the rsyncd daemon; usually it's run out of xinetd. Here is /etc/rsyncd.conf:

secrets file = /etc/rsyncd.secrets
motd file = /etc/rsyncd.motd
read only = yes
list = yes
uid = nobody
gid = nobody

[yum]
comment = YUM cache
path = /var/cache/yum
filter = + *.rpm + */ - *

Note the filter expression to share RPM files and directories but not files of other types.
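As mentioned above, the daemon is usually run out of xinetd; on CentOS, enabling it is a matter of flipping the disable flag in the rsync stanza and reloading xinetd. The stanza typically looks like this:

# /etc/xinetd.d/rsync
service rsync
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}
# then: service xinetd reload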

The synchronisation script, call it syncfrommaster, on the other hosts is:

rsync -avi masterhost::yum /var/cache/yum

In the cron job that runs this command I insert a random delay so that the hosts don't all try to synchronise at the same time and overload the master.

sleep $(($RANDOM \% 900)); /root/bin/syncfrommaster

By the way, the % has to be escaped with \ because % is a special character in crontab entries.

I arrange for this job to be run an hour after the download script.

3. Update.

You have probably noticed that it is possible for an update to be published between the master pulling updates down and you applying them. In this case each host merely downloads the new package on its own, and nothing breaks. However you may wish to run a script to bring the cache up to date and let the hosts catch up.

#!/bin/sh
yum --disableexcludes=updates --downloadonly -y update
pssh -h /root/linux-servers -P -p 1 -t -1 /root/bin/syncfrommaster

The first command is just the download script again. The second command uses the parallel ssh program to fire off syncfrommaster on each host. Before you run this you need to run ssh-agent bash, followed by ssh-add, to make it possible to run pssh without supplying a login password for each host. This assumes you have already installed ssh keys on each host.

4. Finally you can update all the hosts in one go:

pssh -h /root/linux-servers -P -p 1 -t -1 yum update -y

You need the -y as pssh cannot run interactive commands. You might want to run

yum update

on the master host first to check that the updates will go as expected.

It is possible that the master doesn't have every package in the union of packages installed across all hosts. No harm done; a host that has packages the master lacks will simply download them on its own. Or you may wish to install those packages on the master so the next update is covered.

openSUSE:

1. Download:

zypper -n up -l -d

To set the repository to retain packages, run

zypper mr -k <repo>

for each repository whose packages must be retained, usually repo-update and any extras such as packman.

2. Synchronisation:

/etc/rsyncd.conf (I only show the stanza added to the existing config file)

[zypp]
comment = Zypp archive
path = /var/cache/zypp/packages
read only = yes
list = yes

Notice that read only and list appear in the zypp section here; these parameters can go in either the global section or a module section.

Synchronisation script:

rsync -avi masterhost::zypp /var/cache/zypp/packages

3. Update:

zypper up

With this info you can work out the catchup script and parallel ssh scripts. As in CentOS the -y flag also works to make zypper non-interactive.

Debian:

1. Download:

apt-get update; apt-get upgrade -d -y

2. Synchronisation:

/etc/rsyncd.conf:


[apt]
comment = APT archive
path = /var/cache/apt/archives
read only = yes
list = yes
filter = - lock

Synchronisation script:

rsync -avi masterhost::apt /var/cache/apt/archives

3. Update:

apt-get upgrade

and you can work out the catchup script and parallel ssh scripts. Again -y works to make apt-get upgrade non-interactive.

Notes:

To keep the explanation simpler I have not added the shell directives to redirect output to /dev/null. If cron jobs generate output, root (or you) will get mailed the output. This can be useful for debugging and keeping an eye on things, but gets tiresome after a while, given that this all works well and falls back to a normal update if the caching is not effective. You will need 2>&1 to redirect stderr as well.

You will notice that packages accumulate in the cache after a while. While Debian commendably has apt-get autoclean, the other two RPM-based distros do not have an analogous feature. You might just have a cron job remove packages older than a certain number of days.
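A rough example of such a cleanup job for the yum cache (30 days is an arbitrary threshold):

find /var/cache/yum -name '*.rpm' -mtime +30 -delete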

Thursday, 13 June 2013

Install tmm, matplotlib, scipy and numpy on CentOS/RHEL 5

I hope this post saves you some searching if you have to go through the same steps as I did.

A user at this site wanted to run tmm for simulations on CentOS 5 servers.

CentOS/RHEL 5 unfortunately comes with Python 2.4, which is too old for the required modules. Fortunately there is an easy way to install Python 2.6 side by side, and that is to get the python26 package from EPEL. If you do:
yum search python26
you will get a list of python26 packages. Aside from the base python26 package, I also needed python26-numpy. This will pull in any dependencies. When you have installed python26 you must remember to invoke python scripts with python26 or to edit the #! first line.
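That is, either of these, where myscript.py is a placeholder for your own script:

python26 myscript.py
# or change the script's first line to:
#!/usr/bin/env python26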

The scipy module was installed from source; that was the easiest way. Then I installed the tmm module. BTW, in all the Python extension installs I used python26 setup.py install rather than any package manager.

After that the test.py program in the tmm distribution ran. But my user pointed out that examples.py didn't run. This requires matplotlib. Ok, installed that from source. Now the function sample1() in examples.py ran, but generated no output. More searching showed that you have to specify the backend in ~/.matplotlib/matplotlibrc like this:

backend: TkAgg
interactive: true

(I'm leaving out some useless experiments involving the Postscript backend.) This generated an error about Tkinter not loading. I went back and looked at the setup step for matplotlib and noticed that when I ran:
python26 setup.py
it said I didn't have the Tkinter header files installed so it wasn't building the interface for the Tkinter backend. So I did yum install tk-devel, built and installed matplotlib again, and finally I got the example program to display a plot.

Whew!

Monday, 20 May 2013

Xsane generates large PDFs, use gscan2pdf instead

A friend and I both noticed that Xsane, which is the usual application used with flatbed scanners, generates large PDFs. An investigation showed this is an old problem. One of the comments, by the author in fact, suggested gscan2pdf.

Being a Perl program, this has a large set of dependencies, and there were reports that not all were available in the openSUSE repos, so I decided to install from source instead of using a contributed RPM. My strategy was to run perl Makefile.PL, see what missing dependencies were listed, and then install them, first from the standard openSUSE repos and otherwise from CPAN. Some dependencies only show up at runtime, so start it from a terminal and watch for diagnostic messages.

These dependencies were satisfied using zypper:
tiff
unpaper
ImageMagick-devel
sane-backends-devel
perl-PerlMagick
perl-Try-Tiny
perl-PDF-API2
perl-Font-TTF
perl-Goo-Canvas
perl-Gtk2-ImageView
perl-Config-General
perl-Log-Log4perl
perl-Readonly
These dependencies were satisfied using CPAN:
Gtk2-Ex-PodViewer Gtk2-Ex-Simple-List IO-stringy Set-IntSpan Proc-ProcessTable
These lists are not exhaustive because 1. I may have already had some installed and 2. I did not install dependencies for DjVu or OCR features. So regard this as a starting point.
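As a rough sketch of the two install passes (package names per the lists above, abridged; mapping distribution names like Gtk2-Ex-PodViewer to module names like Gtk2::Ex::PodViewer is the usual CPAN convention):

zypper install unpaper perl-PerlMagick perl-PDF-API2 perl-Gtk2-ImageView   # ...and the rest of the first list
cpan Gtk2::Ex::PodViewer Gtk2::Ex::Simple::List IO::Stringy Set::IntSpan Proc::ProcessTable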

In retrospect, installing from the openSUSE contributed RPM might have gone smoothly. Perhaps somebody can try that and report. If you are running Debian or derivatives you will probably have no problem getting all the dependencies.

Preliminary results show that gscan2pdf generates much smaller files than Xsane so this is what I'll be using from now on and I encourage you to try gscan2pdf instead of Xsane.

Friday, 8 March 2013

warn of SSL certificate expiry


So you have been tasked to write a cron job to warn when your site's SSL certificate(s) will soon expire and send email to the responsible person. An expired SSL certificate is at best embarrassing and at worst can cause significant business disruption. There are various ways to do this task.

Nagios has a plugin to check and warn about imminently expiring certificates.

If you have the openssl tools installed you can do it from the command line as shown in this blog post. You should be able to parse the returned date string with the GNU date command and convert it to a number of seconds since the Unix epoch for comparison with the current date to check if expiry is imminent.
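A minimal sketch of that command-line approach, assuming a 30-day warning threshold and example.com as a placeholder host:

expiry=$(echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -enddate | cut -d= -f2)
if [ "$(date -d "$expiry" +%s)" -lt "$(date -d '30 days' +%s)" ]; then
    echo "Certificate expires on $expiry" | mail -s "SSL expiry warning" root
fi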

I preferred to use Perl. There is a very useful Perl module Net::SSL::ExpireDate by Masaaki Hirose which provided exactly what I wanted. The example is self-explanatory.

On a Debian system other CPAN modules are required; fortunately all of them are already packaged in Debian, so you only have to install those with apt-get, then build, test and install Net::SSL::ExpireDate. Here are the packages I had to install; there may be other dependencies that were already installed on my system, so this list may not be exhaustive.

libclass-accessor-perl
libcrypt-openssl-x509-perl
libdatetime-perl
libtimedate-perl
libtime-duration-parse-perl
libuniversal-require-perl

The Perl script I wrote was 13 lines long.

Saturday, 2 March 2013

enable ext4 for RHEL/CentOS 5.6+ installs

This tip doesn't get enough publicity so I'm repeating it here:

Installing on ext4

Note the limitation on boot filesystems. I used it for the /home filesystem for better performance.

And if you forget during install time, you can still convert ext3 to ext4 afterwards, a search will find you many pages on how to do this.

Thursday, 21 February 2013

vncserver won't start on CentOS 5

CentOS/RHEL has a vncserver script which is configured to start instances for users specified in /etc/sysconfig/vncservers. The format is documented in the comments in that file.

I copied this file from a host that I was cloning, ran /etc/init.d/vncserver start, and it failed. Running the script again with sh -vx /etc/init.d/vncserver start showed me that it was failing on the first user at this line:
runuser -l ${USER} -c "cd ~${USER} && [ -f .vnc/passwd ] && vncserver :${DISP} ${VNCUSERARGS}"
Of course! I had forgotten to copy the .vnc/passwd file for each user across so the servers were failing to start. If this is a new machine, then you need to create passwords for each user first. It's a pity that the script doesn't inform you what really happened.
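For a new machine the setup is roughly this (usernames, display numbers and geometry below are placeholders):

# /etc/sysconfig/vncservers
VNCSERVERS="1:alice 2:bob"
VNCSERVERARGS[1]="-geometry 1280x1024"

# create .vnc/passwd for each listed user
su - alice -c vncpasswd
su - bob -c vncpasswd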

As an aside, I modified this line:

[ "$RETVAL" -ne 0 ] && break
to:


[ "$RETVAL" -ne 0 ] && echo "Warning: ${USER}:${DISP} not started"
so that the script would not die at the first user that caused an error.

But the script is not very sophisticated and doesn't allow, for example, starting VNC for just a single user that you've added to the configuration.

Sunday, 3 February 2013

You don't need iTunes

Well, with a provocative title like that I should explain that I have tricked you a bit. You don't need iTunes to reset a first generation iPod Shuffle (gumstick) if it has gone out of its mind. Mine was flashing the green and orange LEDs alternately.

The usual advice is to install iTunes and to repair the iPod filesystem. However if you don't mind resetting everything, and there is little to lose on a 512MB or 1GB Shuffle, you can go straight to this reset utility from Apple and not have to download and install the big iTunes software package. At the same time this will update the software to the final version released. As the article states, you may need to try a few times. Mine worked on the second attempt.

I don't think this tip has a large audience as most first generation Shuffles are dead now as those models were released in 2005. Mine's still going strong though. I wonder when the flash memory and rechargeable cells will wear out.

Incidentally I can highly recommend the open source shuffle DB software. With this Python script installed in the Shuffle, and a Python interpreter on the host, you don't need complicated tune management software. Just copy the files onto the Shuffle and then run the program to build a playlist that the Shuffle will understand.

Sunday, 13 January 2013

Is Trackerbird maxing out your CPU? Here's how to remove it

After my recent OS upgrade I found that Thunderbird would pop up a dialog box every now and then saying that a script had stopped responding. A top command showed Thunderbird was consuming all of one CPU. I have multiple cores but this would be a problem for people with a single core, and in addition isn't good for power consumption. Also the status bar at the bottom showed either thousands of items remaining to be indexed, or Initializing...

A search showed somebody with the same issue. My situation was worse. Even though I disabled Trackerbird from the Addons Manager, I could not shut Thunderbird down properly to run without this addon. Every time I started Thunderbird, it said I had to restart to disable Trackerbird. In addition, there was no Remove button. A search showed that the addon is a system one, and comes from tracker-miner-thunderbird. Removing the package with zypper fixed the problem once and for all. No more items remaining to index, no more maxing out one CPU. The package may be named differently in your distro; mine is openSUSE.
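On openSUSE the fix was a one-liner (adjust the package name if your distro calls it something else):

zypper remove tracker-miner-thunderbird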

I surmise that some of my mailboxes, being POPS or IMAPS connections that are not always connected, caused Trackerbird to spin its wheels trying to index the mailboxes.

As a personal gripe, why do Linux desktops foist indexing tools upon me without asking me if I want my data indexed or not? I have no use for indexing since I know where my documents are. I've also disabled akonadi, nepomuk and strigi because they were eating up CPU for no benefit to me. Answers to me on the back of a postage stamp.