Friday 28 December 2012

sox, rec, and play stopped working with ALSA?

I upgraded my openSUSE installation from 12.1 to 12.2 recently and to my chagrin, my audio recording script stopped working. I use references to ALSA devices such as hw:1. The error message was cryptic:

rec FAIL formats: can't open output file `hw:1': can not open audio device: Invalid argument

After a lot of searching, I found the answer. The root cause is that libsox2, which openSUSE 12.2 uses instead of libsox1, separates the Linux-specific ALSA support into its own shared object, which is not loaded unless you set the environment variable AUDIODRIVER=alsa. So this is what I had to do in my script:

export AUDIODRIVER=alsa
export AUDIODEV=hw:1
rec ... program.wav

You are supposed to be able to pass -t alsa in the options of rec, but I had no luck with that.

The same applies to the play command, which, like rec, is just a link to sox.

If you are just using the default sound device, you may not hit this bug as it may fall back to pulseaudio support. It's only when you must target a specific ALSA sound card that this bug manifests itself.

Here is the bug report that was the crucial clue.

Thursday 6 December 2012

Blacklist a command in bash

I have an alias called nf. Sometimes, due to fat fingers, I end up typing mf instead, which starts up Metafont (installed because I use TeX), and then I have to exit it. I got tired of this and added this alias to $HOME/.alias:

alias mf='echo "Use \\mf if you really want metafont"'

If I really want to run mf, which is rarely, I can type \mf at the command line as the \ stops alias expansion. Invocations from shell scripts and Makefiles are not affected as $HOME/.alias is only read in by interactive shells.
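The bypass can be demonstrated in a script too. Non-interactive shells need expand_aliases turned on before aliases fire, and eval forces a fresh parse so the alias defined above takes effect; the mf function below is a hypothetical stand-in for the real Metafont binary:

```shell
shopt -s expand_aliases                    # scripts don't expand aliases by default
mf() { echo "real metafont would run"; }   # hypothetical stand-in for the mf binary
alias mf='echo blocked'
eval 'mf'       # the alias fires and prints "blocked"
eval '\mf'      # backslash suppresses alias expansion; the function runs instead
```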

BTW, please do not use this technique to block shell users from executing certain commands by aliasing them to something else. It's trivial to bypass in just the way I've shown.

Wednesday 28 November 2012

cd to directory containing a file

Many times I have run a program on a file and then immediately, wanted to change to the directory containing it, to do more operations. For example:
vim path/to/some/dir/file
then I next want to do:
cd path/to/some/dir
I used to do
cd !$
and then use the up arrow key to recall the failed command, edit out the trailing filename and re-execute. Then I decided I should write a shell alias or function to do this. It has to be an alias or a function, not a script, as cd needs to run in the current shell.

This is what I came up with:
cdf() { cd ${1%/[^/]*}; }
This uses the remove matching suffix operation of parameter substitution, see the bash manual page for details. So now I can do:
cdf !$
and I will end up in the directory containing the file I just worked on.

I picked cdf as an abbreviation for change to directory of file, but you may prefer some other name.
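One caveat: if the argument contains no slash (a file in the current directory), the suffix pattern doesn't match and cdf tries to cd into the file itself. A sketch of a variant that guards against this (cdf2 is just a name I made up for the illustration):

```shell
# Variant of cdf that also handles a bare filename with no directory part
cdf2() {
    local dir=${1%/*}            # strip the trailing /filename
    [ "$dir" = "$1" ] && dir=.   # no slash at all: stay in the current directory
    cd "$dir"
}
```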

Friday 23 November 2012

VirtualBox host/guest conflict over audio device

I started a VirtualBox instance on my workstation that contains a CentOS instance to develop software. A bit later on I found this error from a periodic cron job I run on the workstation to record a radio program using the sound card:

rec FAIL formats: can't open input `hw:0': snd_pcm_open error: Device or resource busy
That error message comes from the rec program, part of the sox package. The first time it happened I thought the audio hardware had locked up, so I rebooted the workstation and the error went away. The next morning it happened again. I realised that the error went away the first time only because the reboot had shut down the CentOS guest.

I disabled the audio device in the guest OS and had no more failures recording from the sound card. In general if you do not need audio in the guest, disable the device so that the guest does not interfere with the host OS's use of it.

You can also disable it by editing the XML config file, but only when the guest OS is not running. This is the relevant line:
<AudioAdapter controller="AC97" driver="ALSA" enabled="false"/>

Thursday 1 November 2012

Filezilla, Domain OS (Apollo), and Cygwin

What do these three things have in common? Well, there's a story behind it.

At a site where I work there are a couple of ancient Apollo workstations running Domain OS, a Unix-like OS. This OS had an early networked filesystem where the super-root is called // and hosts have their filesystems underneath this. E.g. if you have two workstations named ws1 and ws2, their roots will be at //ws1 and //ws2.

Users needed to connect to the workstation filesystem using FTP. I proposed Filezilla as a replacement for an older, less friendly client. (A side note: you need to use active mode FTP when connecting to this old FTP server.) We could log in fine and get a directory listing, but when we tried to enter a directory by clicking on the folder icon, we would get an error like: /ws1 not found. Looking at the command stream it was obvious what was going wrong. Filezilla was issuing CWD /ws1 when it should be CWD //ws1.

How to get Filezilla to either 1. use relative paths for CWD, or 2. understand that the root is //, not /? There was no option to use relative paths; it seems that Filezilla always converts paths to absolute ones for CWD. By trial and error I discovered that Cygwin happens to use the same pathname convention: // is the super-root. So by setting the Server Type to Cygwin in the Site Manager entry for this Apollo workstation, Filezilla connections worked.

So that's the connection. Hope this tip helps you if you happen to have to connect to a Domain OS FTP server with Filezilla.

Monday 17 September 2012

SFTP is not FTP

Today I encountered another confused person who thought that to provide SFTP service, he had to install an FTP server.

I'm writing this post so that I can point people to it next time I encounter this misconception.

Yes, SFTP stands for Secure File Transfer Program and FTP stands for File Transfer Protocol, but there the similarity ends. SFTP runs over an ssh connection, which normally uses the single service port 22. FTP is a different protocol using two ports, normally 20 and 21 for data and command (I will not go into the complexity of active and passive modes here). They are not related. The Wikipedia entry for FTP explains it succinctly. SFTP servers are different from FTP servers, although there are clients, such as Filezilla, that can connect to both types of servers.

If you can ssh to a server, you can probably sftp also. I qualified that claim with "probably" because the sftp functionality has to be enabled and allowed to users.

SFTP is much much preferred over FTP due to encryption of the stream.

To complicate things there is a variant of FTP called FTPS which uses TLS to encrypt the stream.

Thursday 6 September 2012

What good is the --target option of cp?


If you look at the man page for cp(1) on operating systems where the GNU tools are used, such as Linux, you will see there is a third form that uses the -t option, or the equivalent long form --target-directory, which may be abbreviated to --target.
cp [OPTION]... -t DIRECTORY SOURCE...
So what good is this when you already can do:
cp [OPTION] SOURCE... DIRECTORY
Here's a reason. Suppose the SOURCE list is large and comes from a file or another command. So you have to use the xargs command to invoke cp as many times as necessary to consume the list, without running into command line argument limits. Assuming the source list is one per line, you could do something like this:
xargs -d '\n' cp -pr -t destdir < listofsources
The -t allows the destination directory to be placed before the source arguments in the cp command. Without it, you would have to resort to the interpolation feature of xargs, i.e. -I {}.
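For comparison, here is what the -I variant looks like. It still works (and copes with spaces in names, since -I treats each whole input line as one item), but it runs one cp per file instead of batching them. The file names here are made up for the demo:

```shell
# Create a throwaway source tree and a list of files, one per line
mkdir -p src dst
printf 'hello\n' > 'src/a file'        # a name with a space
printf 'world\n' > src/b
printf '%s\n' 'src/a file' src/b > listofsources

# -I {} substitutes each whole input line; cp is invoked once per line
xargs -I {} cp -p {} dst < listofsources
ls dst
```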

Thursday 12 July 2012

TXT_DB error number 2 when generating openvpn client certificates

You may have followed the openvpn quick start instructions either from the online tutorial or using the README file in easy-rsa where it asks you to go through these steps:

[edit vars with your site-specific info]
source ./vars
./clean-all
./build-dh
./pkitool --initca
./pkitool --server myserver
./pkitool client1
and then you get
failed to update database TXT_DB error number 2
at the last step. I did, and a web search mostly turned up suggestions to run ./clean-all again. But this article was the key. It's about openssl, but openvpn's easy-rsa is just a front-end to openssl. The important observation is that every certificate must have a unique CN in the database. In the file vars, this is controlled by KEY_CN. If you leave the settings read in from vars unchanged between generating the server cert and the client cert, the CNs collide. You could edit vars before generating the client certificate and re-source it, or you could do this before generating each client key:
KEY_CN=someuniqueclientcn ./pkitool client1
and you will stop getting that TXT_DB error.


I'm a bit surprised that the documentation for openvpn hasn't been updated to make this clear.


NB: It is also affected by the setting unique_subject = yes in the file keys/index.txt.attr, but I prefer not to go against the default setting.


Thursday 5 July 2012

How to get a list of installed software on RPM based systems

You might want to do this to know what packages to restore, or to discover the difference between two installations.


In this article, the suggested command is rpm -qa. That is correct, but it has a problem: it lists the package names without the architecture. If you are on an x86_64 system, there may be both x86_64 and i386 packages. If you use the generated list to (re-)install the software, you may end up getting both architectures. You would get extra packages or, worse, a conflict due to common pathnames in the two architectures.


Therefore we need to also output the architecture with the package name. For this we use the --queryformat option of rpm, or the shorter form --qf.


rpm -qa --qf '%{NAME}.%{ARCH}\n' > listofpackages
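To discover the difference between two installations, the sorted lists can be fed to comm. Here the two host lists are simulated with a few made-up package names standing in for real rpm -qa output:

```shell
# Stand-ins for 'rpm -qa --qf' output captured on two machines
printf 'bash.x86_64\nglibc.i386\nglibc.x86_64\n' | sort > host1.txt
printf 'bash.x86_64\nglibc.x86_64\nzsh.x86_64\n' | sort > host2.txt

# -3 suppresses common lines: column 1 is host1-only, column 2 is host2-only
comm -3 host1.txt host2.txt
```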

Friday 22 June 2012

Don't install VirtualBox on Windows from a network share

Why not, you ask? Well during the installation process, network device drivers are installed (the cause for the several popup warnings about unverified software). These break the network connection temporarily and of course if your installer package is on a network share, your installation stops dead. Duh, stupid me. So install it from a local disk. A USB stick is also ok.


There is actually a longer story in my case. I was upgrading versions. After I realised my mistake I installed from a local drive. But unknown to me damage had already been done. It worked fine for a while. Then one day I decided to do some disk cleaning. Hmm, how come I have both 4.1.14 and 4.1.16 installed? Never mind, I'll blow away 4.1.14. Everything appeared to be still ok afterwards.


Next I needed to enable bridged networking for the Learning Puppet VM. No matter what I did, I could not obtain an interface to bridge. A search turned up this troubleshooting advice in the VirtualBox manual. To paraphrase, one reason for no bridged network adapter is MaxNumFilters is too low. Well, that wasn't it. It was another reason: "The INF cache is corrupt". After I removed %windir%/inf/INFCACHE.1, rebooted and reinstalled VirtualBox, bridged networking became available. It must have been using the driver from the previous release of VirtualBox because I had a partial install of the current version.


So that advice again: Install the VirtualBox package on Windows from a local disk.


Friday 15 June 2012

Expanding an ext3 partition

As has been documented at howtoforge and many other places you can find with a search, resize2fs can expand your ext3 filesystem without losing data. (Expansion is actually just one case of resizing; you can also shrink.) You have to extend the containing partition first, using parted, or the manual way: deleting and recreating the partition with a higher cylinder boundary.


I just want to add a few comments.
1. The switching to ext2 and back mentioned in the howtoforge article isn't necessary any more.
2. You can do this on a partition that is not needed for system operation, like /home, without booting to a rescue disk. This is useful if you only have online access to the server. In fact I did the expansion in parallel with some (tested) RHEL package updates.
3. It works exactly as expected for SAN volumes. With a SAN it was very easy: all the SAN manager had to do when I requested an expansion was issue a command for the SAN to increase the "disk" size, and it finished the task in a few hours.


If you are using logical volume manager, then you have other options too.

Thursday 7 June 2012

Samba, SELinux and the homes share

As has been documented here and many other places, on Redhat, CentOS, Fedora and other distros that come with SELinux enabled, you have to enable the SELinux boolean samba_enable_home_dirs if you export the homes share. I did this but I still could not connect to the share. The usual error message is NT_STATUS_BAD_NETWORK_NAME.


It turns out that you also need samba_export_all_ro or, more likely, samba_export_all_rw enabled to mount the share. This was on CentOS 5.5; it could be that on this distro and version both come disabled by default.


Make sure though that SELinux is the cause. There are many other reasons for NT_STATUS_BAD_NETWORK_NAME. A quick way to check is to set SELinux to permissive temporarily to attempt the connection.

Saturday 2 June 2012

How to remove the last command line argument in a bash script

In a bash wrapper script I needed to pass a bunch of arguments to the program. No problem, I'll do that with "$@". Then I had a new requirement: if the last argument is of the form *.ppm, I want the stdout of the program to go to this file. But any previous arguments, i.e. options, should be passed to the program. So it boiled down to this:
if nth argument matches *.ppm
  program "1st arg" .. "n-1th arg" > "nth arg"
else
  program "arguments"
To get the last element of an array, you can do ${argv[-1]}. Oops, you cannot do this with ${@[-1]}. So we have to make a copy in a local variable first:
declare -a argv=("$@")
declare file=${argv[-1]}
But we still have to remove the last argument from $@. We can't just set it to the empty string; it would still exist and be seen by the program as an empty string. And no, we can't alter $@ itself, so we have to use unset on argv. But this doesn't work:
unset argv[-1]
Ok, so we have to get the index of the last argument. This is one less than the length of the array, which is ${#argv[@]} (the [@] is the quirky bash syntax for referring to the whole array). So we use $(( )) to do the arithmetic.
declare argc=$((${#argv[@]}-1))
So, putting it all together, the code looks like this:
declare -a argv=("$@")
declare argc=$((${#argv[@]}-1))
declare file=${argv[$argc]}
unset argv[$argc]
Then in the if branch we can write:
program ... "${argv[@]}" > "$file"
Whew!


There is a similar idea in the pop operation here, where we replace the contents of argv with all but the last element, using the subarray notation.
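Putting the whole wrapper together using the subarray pop instead of unset, a sketch; program here is a hypothetical stub that just echoes its arguments, standing in for the real program:

```shell
# Hypothetical stand-in for the real program being wrapped
program() { printf 'args: %s\n' "$*"; }

wrapper() {
    local argv=("$@") n=$#
    local file=${argv[n-1]}               # last argument
    if [[ $file == *.ppm ]]; then
        argv=("${argv[@]:0:n-1}")         # pop the last element, subarray style
        program "${argv[@]}" > "$file"    # stdout goes to the .ppm file
    else
        program "$@"
    fi
}
```

With this sketch, wrapper -a -b out.ppm redirects the program's output to out.ppm, while wrapper -a -b just prints it.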

Friday 18 May 2012

Need a Super key for Openbox?

This happened to me on Crunchbang Linux, but would for any desktop using Openbox. I had just installed Crunchbang on an old machine. When I started using the desktop I reached for the Super key (normally the Windows key) and found it missing. Oops, I have a very old keyboard on this machine. Openbox uses the Super key extensively. What to do?


The right Control key is seldom used so I decided to remap this to Super. A perusal of the xmodmap command showed the sequence of commands required.


First remove the control modifier from Control_R:


remove control = Control_R


Next assign it the mod4 qualifier, which is Super:


add mod4 = Control_R


Putting it all together in one command:


xmodmap -e 'remove control = Control_R' -e 'add mod4 = Control_R'


Put this command at the end of ~/.config/openbox/autostart.sh and it will be run at login.


Alternatively you can put those two commands in ~/.Xmodmap as described here. The step with keycode described on that page and many similar ones isn't necessary though.

Tuesday 15 May 2012

Installing Flash Player plugin on Precise Pangolin behind a proxy

You may find that the installation of the package flashplugin-installer fails if you are behind a proxy. Even though you may have instructed apt to use a proxy using a stanza like this:



Acquire {
        http {
                Proxy "http://proxy.example.com:8080/";
        };
};


the download fails. This is because the fetch script doesn't know about apt proxy settings. To fix this, do:


HTTP_PROXY=http://proxy.example.com:8080/
export HTTP_PROXY
apt-get --reinstall install flashplugin-installer


Another method is, when it fails, you will see the URL of the plugin tarball in the error messages. Download the tarball by other means and then do:


dpkg-reconfigure flashplugin-installer


Give it the directory where you have downloaded the plugin tarball to and it will unpack the tarball.

Thursday 10 May 2012

Force yum to determine the fastest mirrors again

We are going to ship this CentOS machine overseas where the fastest mirrors will be different. We want yum to work that out on deployment there. A quick search of the files in the package yum-fastestmirror found the cache file: /var/cache/yum/timedhosts.txt


Just delete or rename this file and yum will do a speed test the next time it is run.


The plugin yum-fastestmirror appears to be an EPEL addon, so may not be installed by default on RHEL or Fedora.

Wednesday 9 May 2012

Downloading updates to openSUSE during off-peak times using zypper

You might, like me, have an ISP that has peak and off-peak quotas. I prefer to consume off-peak quota whenever possible to leave more for waking hours.


My openSUSE installation gets regular package updates. Since I want every update in any case I looked for a way to download the packages ahead of time during off-peak. It turned out to be quite easy, as root I just created a file in /etc/cron.d/zypper-offpeak containing one line:



6 6 * * * root zypper -n up -l -d > /dev/null 2>&1


See man crontab for an explanation of the fields. This runs at 0606 in the morning. The -n allows it to run non-interactively, the -l auto-agrees with licenses (usually Adobe Reader plugin), and -d is the magic, meaning download only.


The packages go into the normal package cache, and next time you do a zypper up, they will install straight away. This makes for faster updates and the savings can be considerable for kernel-source and VirtualBox packages.


This has already been blogged here citing a different motivation. The correct answer is in one of the comments, apparently -d or --download-only was not in old versions of zypper.

Saturday 5 May 2012

Building openconnect for RHEL 5

This blog entry is of historical interest now. If you just want openconnect ready to run for RHEL/CentOS/clones, you can get openconnect 4.0 from EPEL now, thanks to David Woodhouse. (2012-07-26)

Openconnect is an open source client that can connect to Cisco's AnyConnect SSL VPN. There are packages for Fedora but none for RHEL 5. I needed one (actually for CentOS 5) so I set myself the task of building it from the latest Fedora 18 source RPM.


First of all install mock from the EPEL repo. You will need to track down the latest version of Mock 1.0, as Mock 1.1 has Python 2.5 constructs which won't work on RHEL 5.


You can't just do a mock rebuild right away. The Fedora SRPM has an MD5 sum which causes an error when extracting on RHEL 5. In any case we need to make some edits to the spec file. So first extract the Fedora 18 openconnect SRPM in the standard RHEL area, /usr/src/redhat:


rpm -i --nomd5 openconnect-...src.rpm


Now edit the spec file: remove the dependency on libproxy and change vpnc-script to vpnc. Also change the openssl dependency to the latest available for RHEL 5. Red Hat keeps the version number the same while backporting fixes, so it's ok that the required version decreases.
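A toy illustration of the kind of edits involved, using sed on a miniature made-up spec (the real openconnect spec is larger, and you may prefer editing it by hand):

```shell
# Miniature stand-in for the real spec file
cat > openconnect.spec <<'EOF'
Requires: libproxy
Requires: vpnc-script
Requires: openssl >= 1.0.0
EOF

# Drop the libproxy dependency, rename vpnc-script to vpnc
sed -i -e '/libproxy/d' -e 's/vpnc-script/vpnc/' openconnect.spec
cat openconnect.spec
```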


Create a new SRPM with


rpmbuild -bs openconnect.spec


Do the mock build on the new SRPM


mock openconnect-...src.rpm


You will encounter missing dependencies, so tell mock to install those in the fakeroot and try again. Eventually you will get a binary RPM in /var/lib/mock/epel-5-x86_64/result (i386 for 32-bit, of course).


Unfortunately the associated NetworkManager-openconnect cannot be ported to RHEL 5 due to requiring higher versions of X libraries. However it isn't too unfriendly to run openconnect from the command line (with root privilege, as it configures a TUN device):


openconnect ip_address_of_gateway


and it will prompt you for the username and password. You may want to look at command line options of openconnect to see what else is needed.


Probably the best you can do to make it user friendly is to use sudo to run it, and then create a GNOME or KDE launcher to run a terminal running it.

Wednesday 2 May 2012

Car MP3 player cares about partition type

In my previous adventure I used a USB stick to attempt to install Crunchbang Linux. After that I returned the USB stick to its normal duty. I am not sophisticated enough to have an iPod dock in my car, but the CD/MP3 player in my old bomb can accept USB sticks, so I use it to carry music for car trips.


But when I inserted the stick the player showed NO FILE or ERROR. What had I done?! Had I destroyed the stick?


Wait a moment. When I reformatted the USB stick for installing Crunchbang, it had changed the partition ID to Linux instead of FAT16. Neither the Linux kernel nor the system utilities cared that the partition ID was not DOS, they happily mounted the DOS filesystem or displayed the contents. But the car player did check the ID.


A quick edit of the partition ID with fdisk and the stick worked again. Phew!

Sunday 29 April 2012

Replacing Ubuntu with Crunchbang

With the release of Precise Pangolin, Lucid Lynx's (Netbook Remix) days on my old Asus EEE900 netbook are numbered. It wasn't clear that Precise would run well on this old hardware (Celeron, 1GB RAM, 16GB SSD, dual booting XP) and I wasn't keen on the ongoing UI simplifications in Ubuntu so I decided to switch to Crunchbang Statler. CLI holds no terror for me and I would benefit from the tons of up to date Debian packages.


The install from USB stick using unetbootin was anticlimactic. The whole process has been described here so I won't repeat it. The only hitch was that the 1GB USB stick I first tried has a weird "floppy" partition which the netbook tried to boot from. I succeeded when I used a more conventional 1GB USB stick, actually a broken USB MP3 player.


The result is a much more responsive netbook. I can even watch videos using VLC streaming over WiFi from a Samba share; this used to struggle on Lucid UNR. Mounting the Samba share was quite easy. The applications in Crunchbang may not be as integrated as on Ubuntu, but all the applications and connectivity options are there if you are willing to use the keyboard shortcuts instead of insisting on icons everywhere. It also shows how much more friendly Debian is these days when there is a bit of UI facade.


I think I will be quite happy to take the netbook on my next travels until I find a pad and BT keyboard combo that I like.

Friday 20 April 2012

Adobe Reader oops

If you have received an update on your Linux machine for the latest Adobe Reader plugin for Firefox, then you may see a stray file of this name in one of your directories: C:\nppdf32Log\debuglog.txt (yes, with the backslashes, because backslash is legal in Unix/Linux filenames). In my case it was in ~/Documents/


From a web search this appears to be a boo-boo by Adobe programmers, releasing a package with a debug feature enabled. It doesn't seem to hurt anything, just makes people suspicious that their system has been rooted. I'm sure another Reader release will be imminent.


Sigh, I always thought they were cowboys, to observe the constant stream of security patches to Reader and Flash Player. For Reader there are alternatives like okular or evince, but Flash Player is harder to replace.

Thursday 12 April 2012

Modifying synbak to use lzop or xz for compression

For backing up a server I installed the synbak program on CentOS, partly because it was a standard package from rpmforge, and partly because it is uncomplicated to use, basically just shell scripts. I noticed that when I used gzip as the compression method, it tied up a whole CPU doing the compression. So I decided to add lzop support. It's very easy, just add these two lines to /usr/share/synbak/method/tar/tar.sh:

--- /usr/share/synbak/method/tar/tar.sh.orig    2010-06-10 05:18:14.000000000 +1000
+++ /usr/share/synbak/method/tar/tar.sh 2012-04-12 09:46:36.000000000 +1000
@@ -30,6 +30,8 @@
        tar)    backup_method_opts_default="--totals -cpf"  ; backup_name_extension="tar" ;;
        gz)     backup_method_opts_default="--totals -zcpf" ; backup_name_extension="tar.gz" ;;
        bz2)    backup_method_opts_default="--totals -jcpf" ; backup_name_extension="tar.bz2" ;;
+       lzo)    backup_method_opts_default="--use-compress-program=lzop --totals -cpf" ; backup_name_extension="tar.lzo" ;;
+       xz)     backup_method_opts_default="--use-compress-program=xz --totals -cpf" ; backup_name_extension="tar.xz" ;;
        *)      report_text extra_option_wrong && exit 1 ;;
   esac


You run it like this: synbak -s serverconfig -m tar -M lzo

I added xz support for good measure, but this is untested. Make sure that backup_method_opts in your config file doesn't have -z or the tar command will throw an error.


Naturally you must have the lzop and/or xz packages installed.


Using lzop, tar and lzop take about equal amounts of CPU time, tar was mostly I/O bound, and backups finished faster. I also see that the memory footprint of lzop is small. The downside is that the compressed output is larger, about 31% more in my case. Decompression is supposed to be very fast, which is useful when restoring. I haven't had to decompress yet, and I hope I never will.

Thursday 5 April 2012

File AI for instant filesharing

While doing a search for cloud services to share files to a small number of recipients, I came across the interesting File AI service, which deserves to be better known, even though it's been running for a few years. It works by running a peer-to-peer Java applet in a browser on both sides. No software needs to be installed, as long as your browser supports Java.


It's very simple to use. To send a file, just go to fileai.com, click on Send and follow the instructions. You will need to click Run or Trust when the Java applet wants to run, because it's unsigned. (Your recipients will too.) Then you drag and drop the files you want to share into the "folder". You get a URL that you send to your recipients by email, etc. You can optionally specify a password for recipients before adding any files. You must keep the browser open until all the recipients have the file.


At the recipient end, they just click on your link, or paste it into a browser, click Run or Trust for the Java applet and the file is on the way.


Here is a blog post describing it and here is a YouTube video.


And in case you were wondering, yes it works behind firewalls.

Wednesday 4 April 2012

Old netbook as a tethered Internet gateway


I had an ancient Asus EEE700, useless even as a netbook for the road. I wondered if I could turn it into an emergency gateway, tethered to my Android smartphone, for when my ADSL connection goes down. It has Crunchbang Linux installed, which is basically Debian. Here's what I did, for the learning experience:


First you need to disable NetworkManager. The easiest way is to install the sysv-rc-conf package and use its ncurses interface to shut NetworkManager down. Next you need to configure usb0 (the tethered interface) and eth0 (the gateway interface) by editing /etc/network/interfaces. You might want to preserve a copy of this file so you can restore the default setup. Here are the stanzas used:


allow-hotplug usb0
iface usb0 inet dhcp


auto eth0
iface eth0 inet static
        address GW
        netmask 255.255.255.0

GW should be replaced by whatever gateway address is suitable for your LAN. If your ADSL router is 10.0.1.254, perhaps 10.0.1.253 is suitable.

Here are the rules to make it a NATting gateway, taken from Masquerading Made Simple. I have replaced ppp0 with usb0 everywhere:


iptables -F; iptables -t nat -F; iptables -t mangle -F
iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward


And to secure it:


iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i usb0 -j ACCEPT
iptables -P INPUT DROP   #only if the first two are successful
iptables -A FORWARD -i usb0 -o usb0 -j REJECT


You could save the setup using iptables-save, like this:


iptables-save > /etc/gateway.iptables


Then make a shell script to start the gateway for convenience:


#!/bin/sh
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables-restore < /etc/gateway.iptables


Now tether the netbook to the Android smartphone. When you do ip addr, you should see that it has obtained a private address from your smartphone, and your eth0 address should be static.


Next, on your workstation change the default gateway. From a Linux command line it's


route add default gw 10.0.1.253


When you access the Internet, you should see the traffic icons on your smartphone flash. In addition if you go to a website that tells you your IP address, like whatismyipaddress.com, you should see the external address of your smartphone.


When your ADSL service comes back, delete the route with:


route del default gw


Oh yes, this setup does double NAT, since your smartphone does one NAT.





Monday 2 April 2012

Open Source to the rescue (of Windows)

I needed to set up an XP workstation with wireless access using an old SMC USB wireless G dongle. This device is based on the Prism chipset.


The SMC drivers installed fine and detected the device. XP with all the latest updates supports WPA2. Unfortunately the ancient (2005) SMC management utility doesn't support WPA2, only WPA at best. The native wireless management service on XP, WZC was unstable and caused a crash as soon as it was started. I didn't want to weaken the AP security to WPA.


Happily I found an open source utility called wpa_supplicant, which runs on Linux, BSD, OS/X and Windows. I installed the Windows version of this, and after filling in the credentials, the wireless connection just worked over WPA2. The GUI version of the utility requires clicking on a button to launch the service. I could install the service version of the utility to start automatically at boot, but the user interaction is not onerous for this temporary setup.


Open Source rocks, again.

Thursday 29 March 2012

Tethering a Linux machine inside VirtualBox

Yes you can.


Today I needed to test an openconnect VPN connection while inside a LAN. At this site my desktop is Windows but I needed to check connectivity from outside for Linux users, using the openconnect and NetworkManager-openconnect packages.


Let's see, I could plug my smartphone into the USB port of the desktop, forward the USB connection to Fedora 16 inside VirtualBox and I should be able to connect to usb0 and I would have a WAN connection from outside. Right?


To cut to the chase, it just works.


In VirtualBox, make sure USB forwarding is enabled in the VM settings. Plug in the smartphone and turn on USB tethering. Ignore Windows' suggestions to install hardware drivers for your smartphone. When the VM is running, there will be a USB icon on the bottom bar; choose the USB device that is your smartphone. Windows will suggest installing a VirtualBox USB driver; do that. On Linux, a usb0 device should appear in the network manager, and after disabling the eth0 device (which forwards to Windows), you can connect to it. You should then get a DHCP lease from your smartphone and be connected to the outside world.


It seems you have to install the VirtualBox USB driver every time the VM is started, not sure why.


This should work on other distros. For example I know openconnect works on Debian and Ubuntu. It should also work for other VPN technologies, such as openvpn. The USB network driver is called cdc-ether, by the way.

Sunday 11 March 2012

Two gotchas with Postfix, Dovecot, Amavis and Clamav on Debian Squeeze

1. The first problem was when this error appeared in /var/log/mail.log:


Mar 10 16:56:39 mailhost amavis[2877]: (02877-01) (!)ClamAV-clamd av-scanner FAILED: CODE(0x358cef8) unexpected , output="/var/lib/amavis/tmp/amavis-20120310T165639-02877/parts: lstat() failed: Permission denied. ERROR\n" at (eval 103) line 594.


The problem is that clamav requires access to files created by amavis. We fix this by putting amavis and clamav in each other's group.



usermod -a -G clamav amavis
usermod -a -G amavis clamav

Then restart amavis and clamav-daemon.

2. The second problem was when postfix could not authenticate incoming SMTP connections by chaining to dovecot's auth process, resulting in this message in /var/log/mail.log:

Mar 10 18:28:14 mailhost postfix/smtpd[7217]: warning: SASL: Connect to /var/run/dovecot/auth-client failed: No such file or directory

The problem is that postfix runs chrooted by default on Squeeze and this socket is outside the chroot tree. To fix it, we tell dovecot to create the socket at this path instead:



/var/spool/postfix/private/auth-client
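On Squeeze (dovecot 1.x), the auth socket path is configured in the auth section of /etc/dovecot/dovecot.conf. A sketch of the relevant stanza; the mode/user/group values are typical for this setup, so check them against your installation:

```
auth default {
  socket listen {
    client {
      path = /var/spool/postfix/private/auth-client
      mode = 0660
      user = postfix
      group = postfix
    }
  }
}
```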


Then in /etc/postfix/main.cf we specify the path to the socket with:



smtpd_sasl_path = private/auth-client


It's a path relative to $queue_directory which is /var/spool/postfix.

Thursday 9 February 2012

Selecting a text region on Android 2.3+

I often need to select a region of text on my smartphone, to delete, or to copy to the clipboard. What to do when you have a small screen and few input devices, unlike a desktop? I figured it out.


Touch and hold the screen until the popup menu appears. Choose Select Word or Select All. Which one you use depends on how much you want to include or exclude. You will get two "paddles" surrounding the selection. Drag them to surround the region you want, then hit backspace to delete, or tap the screen to copy to clipboard. Depending on the app, there might be other options accessible from the menu to check the text in dictionary, etc.


This is actually explained in detail in this Android guide. I wish I had found that guide when I was first looking.


Hope this helps.