Sunday 17 October 2010

Retarded industrial designer

I would very much like to torture the designer of the Compaq NC6000 notebook computer. I had one with a dead Real Time Clock cell, and it took me a couple of hours, following instructions, to access the cell, which is deep inside the machine, in fact stuck underneath the body top cover. You even end up having to remove the CPU heatsink to lift that cover.


Do designers believe that these cells (CR2016 or CR2032) never die? This is not the first laptop I have encountered with a difficult-to-access RTC cell. I feel it's a conspiracy to create work for computer technicians.


If you are unlucky enough to have the same symptom (the computer will not remember the time), search the Internet for the NC6000 service manual, which has disassembly instructions. The cell I removed had solder tags, so I simply peeled off the tags and sticky-taped them onto a new cell. Hopefully it will last until the laptop becomes obsolete.

Thursday 14 October 2010

A look at Lubuntu 10.10

This review is from far in the past and well out of date, as Lubuntu has advanced a lot, but I'm leaving it here for posterity.

I'm always interested in lightweight distros for older machines. A while back I looked at Lubuntu 10.04. Lubuntu 10.10 came out recently, so I decided to see if the issues I mentioned had been fixed.

I burnt the image to a CD-RW and used it to boot an old 400MHz Celeron with 256MB RAM.

On the standard Ubuntu splash screen I saw there is now an option to install right away, so I picked that. However, the machine then churned away for many minutes. Finally I got a desktop with an Install icon. Silly me: I thought that if I picked Install to Disk it would not waste time setting up a desktop but would go straight into the installer, like Ubuntu does.

I clicked on the Install icon and, after a long wait with more CD reading, it said that Ubiquity had crashed. I must have clicked several times in succession, because one of the attempts did bring up the first stage of the install dialog, but that didn't last long and crashed too.

OK, maybe I should try it in a VirtualBox VM. Again, I assigned the same amount of RAM, 256MB, to the VM.

On the first try I used the host CD drive. It eventually brought up the desktop with the Install icon, but the desktop was unresponsive, showing a wristwatch wait icon. I noticed that it was doing a lot of probing on the USB port. Why?

On the second try I copied the ISO image to the host disk and booted the VM from that. This one also brought up the desktop, and then Ubiquity crashed again. Again there was a lot of activity on the USB port. What was it doing?

At this point I decided that this was all a waste of time. Sorry, I cannot recommend Lubuntu 10.10. If anything, it's a regression from 10.04. Maybe I don't have enough RAM, but other LXDE distros installed fine with that much. If you want an LXDE desktop, I recommend Mint 9 LXDE or Debian 5 LXDE, both of which installed fully on that Celeron.

Wednesday 19 May 2010

Setting up SSL for MySQL server

I needed to set up SSL for MySQL on RHEL because the remote backup solution I chose, MySQL-zrm, requires secure connections.


Most of the information about creating self-signed certificates and keys is on this page. You can use the script in example 2, but I discovered a couple of things:


First, the script creates both server and client certs and keys. You only need one or the other, not both, to have secure connections. The advantage of client certs and keys is that you can use them in GRANT statements. I chose to have only server certs and keys, so I truncated the script after the commands related to the server.
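
If you do generate client certs and keys, a GRANT can insist that the client present a valid certificate. A hypothetical example (the user, host and database names are made up):

GRANT ALL ON payroll.* TO 'backup'@'192.168.1.10'
    IDENTIFIED BY 'secret'
    REQUIRE X509;

REQUIRE SSL is the weaker form that works with a server-only setup: it demands an encrypted connection but no client certificate.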


Secondly, you might want to add the option -days N, where N is the number of days of validity, to the openssl req and ca commands, to suit your needs.
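
For instance, for roughly ten years of validity, the two signing commands in the script would gain -days options something like this (a sketch only; the remaining options depend on the script you copied, so adjust to match):

# Self-signed CA cert valid for ~10 years
openssl req -newkey rsa:2048 -days 3650 -nodes -x509 -keyout ca-key.pem -out ca-cert.pem
# Sign the server cert with the same validity
openssl ca -days 3650 -config openssl.cnf -out server-cert.pem -infiles server-req.pem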


Finally, you need to edit the source argument in the cp command to the actual location of the template openssl.cnf on your system. For my RHEL system, it is /etc/pki/tls/openssl.cnf. For RHEL 6, you also need to edit the replace command because the CA directory in openssl.cnf has changed from ./demoCA to /etc/pki/CA.


Put the script in a directory, say /etc/mysql, and run it from that directory with ./mkcert.sh. It will create a subdirectory, openssl, for the results. You will be prompted twice for cert attributes: once for the CA and once for the server cert.


When you have finished, add these lines to /etc/my.cnf in the [mysqld] section:

ssl-ca=/etc/mysql/openssl/ca-cert.pem
ssl-cert=/etc/mysql/openssl/server-cert.pem
ssl-key=/etc/mysql/openssl/server-key.pem


You should chown, chmod and chcon (SELinux) the contents of /etc/mysql for security. I did:

chown -R mysql:mysql /etc/mysql
chmod -R g-w,o= /etc/mysql
chcon -R -u system_u -r object_r -t mysqld_etc_t /etc/mysql

Then restart the mysql service and look in /var/log/mysqld.log for any errors regarding the certs. If there are none, you can check whether SSL is available with mysql:

mysql> show variables like '%ssl%';

The answer for have_ssl should be YES. If not, check the file paths.
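
To confirm that a real session actually gets encrypted, you can run a quick check from a client (the host and user are made up; the --ssl-ca path is the CA cert from above and must exist on the client machine):

mysql --ssl-ca=/etc/mysql/openssl/ca-cert.pem -h dbhost -u backup -p -e "SHOW STATUS LIKE 'Ssl_cipher'"

A non-empty Ssl_cipher value such as DHE-RSA-AES256-SHA means the session is encrypted; an empty value means the connection is in cleartext.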


If you need to start all over, it suffices to delete the directory /etc/mysql/openssl.


Friday 7 May 2010

Lubuntu: promising start but needs work

This review is from far in the past and well out of date, as Lubuntu has advanced a lot, but I'm leaving it here for posterity.

Lubuntu has joined the *buntu stable of distros and is supposed to run on lower-spec machines. I have experience with other LXDE-based distros like Mint-LXDE and openSUSE-LXDE, and with Crunchbang, which isn't actually LXDE but uses Openbox. That last one I've found quite good on low-memory netbooks. I decided to give Lubuntu a go.

My test hardware was a 400MHz Celeron with 256MB RAM, a 6GB hard disk and a 1024x768 screen. I burned a CD-RW and booted with it. It brings up the familiar splash screen with a language chooser on top of it. I picked Install to hard disk right away since I wanted to see what it was like booting for real from a hard disk.

First problem: various text error messages on the screen. Second problem, and this one is serious: it took ages reading the CD to start up a live GUI session with an icon to do the install. That's kind of self-defeating. If you have a low-resource machine, you don't want the user to grow old waiting for the installer to start. Why not go straight into a minimal X installer?

The live installer went through the standard 7-step Ubuntu setup rather sluggishly but finished fine. On rebooting from the hard disk it came up with an Openbox desktop which worked OK, considering the speed of the CPU. I note that Chromium is the promoted browser. The RAM footprint wasn't too bad; free showed something like 128MB actually used.

Overall it feels like it was rushed out to take advantage of the 10.04 LTS release publicity. Note that Lubuntu 10.04 is not an LTS release. So a pass for Lubuntu 10.04 from me, but "can do better" next term.

Monday 19 April 2010

It's the wireless switch, stupid

A friend of mine asked me to look at why an HP Mini 5101 netbook with Ubuntu Karmic on it wouldn't connect to a WiFi AP. As far as I could see, the best advice was to install bcmwl-kernel-source, which contains a non-GPLed driver and has to be compiled into a non-free kernel module. But I could not get it to see any APs.


Since the release of Ubuntu Lucid was imminent, I suggested trying Lucid Beta 2 on it and then installing the official release after April 29. It would provide a netbook remix and an LTS install too. (The Karmic install was a vanilla desktop install, not the best use of the small screen.) Installing off a USB memory stick was painless and fast. I connected it to a wired network and did:



apt-get install bcmwl-kernel-source



That did all the setup required, but it still wouldn't connect to any APs. Wait a moment, I thought: how do I know whether the RF transceiver is turned on or not? I remembered that many notebooks come with an RF kill switch. An examination of the keyboard showed no such function key. Time for an Internet search. Sure enough, there is an RF kill switch, and it's on the front edge. See the first picture in this photo review. If the RF is on, the LED glows blue.


After I flicked the switch on, all the neighbourhood APs were visible in NetworkManager. Duh. That will teach me to at least glance through the manual before installing.


And thank you, HP, for making an RF kill switch that looks so much like a lid latch. (Sarcasm.)

Wednesday 14 April 2010

Cloning a Linux distro inside a VirtualBox VM to a real machine

Recently I tried out LinuxMint Helena LXDE inside a VirtualBox virtual machine. Very nice it was too. I decided that I wanted it installed on a real machine. Being loath to do another install on the real machine, I decided to do a network clone, a technique I have refined over time.


The key idea is to boot both machines with a rescue CD and mount their respective disks. I use SystemRescueCD because it's very up to date, with all kinds of filesystems and modules. Then we create a network pipe with netcat and tar the source machine's filesystem across to the target machine. I won't specify every step in detail; you will have to fill in some of them yourself.


First boot the source machine and make a note of the partition filesystems and sizes. In the case of LinuxMint LXDE, it's one of the standard Ubuntu setups: / on /dev/sda1 and an extended partition /dev/sda2 containing swap on /dev/sda5. I noted the size of the swap, and that / was ext4.


Then boot the target machine. First set up the networking and note the address assigned to the network interface. Then, using fdisk, create the same layout. The target disk can be larger or smaller than the source: keep the swap partition the same size (I assume your VM was created with the same amount of memory as your real machine; otherwise use the 2xRAM rule) but adjust /dev/sda1 to suit. This is one advantage of this cloning method: you can increase the size of the disk. Then create the target filesystems:


mkfs.ext4 /dev/sda1
mkswap /dev/sda5


Now, on both systems, mount the root filesystem and any other filesystems under it. We have only /, so this will do:



mkdir /disk; mount /dev/sda1 /disk



Go to the top of the target filesystem and start a netcat listener piping into tar:



cd /disk; netcat -l 2222 | tar zxvf -



On the source machine, mount the source filesystem.



mkdir /disk; mount /dev/sda1 /disk



Go to the top of the source filesystem and start a tar feeding into netcat:



cd /disk; tar zcvf - * | netcat 192.168.1.123 2222



Substitute the real IP address of the target for 192.168.1.123 above. You will see the files being packed up and sent over. On the target machine you will see the files being extracted.


Netcat doesn't terminate at end of input, so you have to judge when there are no more files to be tar'red and interrupt it. Usually it's the last file alphabetically, something like /var/xxx or /vmlinuz, depending on the distro.
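
Some netcat variants can close the connection for you. If your rescue CD's netcat supports the -q option (netcat-traditional does; other variants use -N or -w for a similar effect), the sending side can be told to quit shortly after EOF:

cd /disk; tar zcvf - * | netcat -q 10 192.168.1.123 2222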


Now you have to reinstall GRUB on the target machine. LinuxMint Helena uses GRUB2, so



grub2-install --root-directory=/disk /dev/sda



You also have to adjust the UUIDs of the partitions or it will not boot. Look at the UUIDs in /disk/etc/fstab, then use tune2fs and mkswap to fix:



tune2fs -U xxxx-xxxx /dev/sda1
mkswap -U yyyy-yyyy /dev/sda5



where xxxx-xxxx and yyyy-yyyy are the long UUIDs. I recommend cutting and pasting from /disk/etc/fstab to avoid keying errors.
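
You can check that the change took with blkid:

blkid /dev/sda1 /dev/sda5

The UUID= values it prints should now match the ones in /disk/etc/fstab.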


Another thing you might want to fix up is the naming of the Ethernet interface. Edit /etc/udev/rules.d/70-persistent-net.rules, comment out the entry for eth0, which will be for a pcnet32, the default "hardware" for VirtualBox VMs, and promote the eth1 entry to eth0. This is cosmetic, since the system will work fine with eth1 as the network interface.
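
After the edit, the file will contain something like this (the MAC addresses here are made up, and I have trimmed the match keys for brevity):

# VirtualBox pcnet32 interface from the VM, no longer present
# SUBSYSTEM=="net", ATTR{address}=="08:00:27:aa:bb:cc", KERNEL=="eth*", NAME="eth0"
# The real NIC, promoted from eth1 to eth0
SUBSYSTEM=="net", ATTR{address}=="00:1e:37:dd:ee:ff", KERNEL=="eth*", NAME="eth0"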


Other distros may require more fixing up of the sound or video configuration. Fortunately LinuxMint (and Ubuntu-based distros in general) works these things out and adapts more or less automatically.


Now you should be able to boot the target machine. Have fun.


It is also possible to do the reverse, but as the VM is inside a private network, you have to use bridging or ssh forwarding to pipe the tar stream into the VM. This is one way to virtualise a real install.
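
As a sketch of the ssh variant: if the VM's sshd is reachable from the real machine, say via a VirtualBox NAT rule forwarding host port 2222 to the guest's port 22 (a made-up setup), the netcat listener step is replaced by piping tar straight through ssh:

cd /disk; tar zcvf - * | ssh -p 2222 root@vmhost 'cd /disk && tar zxvf -'

where vmhost is the machine hosting the VM.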

Thursday 8 April 2010

Poor man's Model in PHP, Smarty and ADODB

Model-View-Controller (MVC) is a well-established architecture for web applications. Even if you are not using an established framework because your web app is small, it still pays to adopt some features of MVC, because they make your app cleaner.


If you are using a templating language like Smarty with PHP, you are already getting some of the View aspect. I recently realised that there is a simple way to get a Model of sorts. Typically with Smarty you would do a whole lot of assignments like this to pass values into the View:




$page->assign('empid', $empid);
$page->assign('empname', $empname);



I realised that with ADODB (or ADODBLite) I could turn on associative fetch, which makes the DB SELECT statement return rows of hashes, e.g.




$employee = Array('id' => '101', 'name' => 'Joe Bloggs', ...)
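
A minimal ADODB sketch that produces such a hash (the connection details and the employees table are made up for illustration):

$db = ADONewConnection('mysql');
$db->Connect('localhost', 'dbuser', 'dbpass', 'payroll');
$db->SetFetchMode(ADODB_FETCH_ASSOC);   // return rows as hashes, not numeric arrays
$employee = $db->GetRow('SELECT id, name FROM employees WHERE id = ?', array(101));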



Then you pass the whole array into a Smarty template with one assignment, instead of one for each field, like this:




$page->assign('employee', $employee);



Inside the template you can refer to the fields as:




{$employee.id} {$employee.name}



But that's not all. You can also get the input fields to return their values in an array, like this:




<input name="employee[name]" title="Employee name" type="text" value='{$employee.name|escape:"quotes"}' />

In the PHP, you would get the array of values back with:

$employee = $_REQUEST['employee'];

and it's an associative array similar to the one you put in. You can even, with care, use that array to build an INSERT or UPDATE query for the database. What you have to watch out for is that the array has both numeric and symbolic keys when ADODB returns it. You can also use the array to carry other fields, such as error flags and messages, provided you are careful not to write those back to the database. One way would be to filter out keys of a particular form, say those starting with _. In the HTML above, take care to apply htmlspecialchars() or the Smarty "escape" modifier as necessary.
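
As a sketch of that filtering idea (the table name and the use of ADODB's AutoExecute are my own illustration, not from the original app):

$employee = $_REQUEST['employee'];
$fields = array();
foreach ($employee as $key => $value) {
    // Skip numeric keys and helper keys such as _error or _message
    if (is_int($key) || $key[0] === '_') {
        continue;
    }
    $fields[$key] = $value;
}
// ADODB builds the UPDATE statement from the remaining fields
$db->AutoExecute('employees', $fields, 'UPDATE', 'id = ' . (int)$employee['id']);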

Access modem web admin page from inside IPCop protected LAN

I recently installed an ADSL2+ modem for a client. The previous ADSL1 modem had been set up by the ISP many years ago. Nowadays it's bring your own modem.


I used a patch cable to access the modem at the factory address 192.168.1.254, then put it in place as a transparent bridge in front of IPCop. I remembered that there was a trick to allow internal machines to access the web interface of the modem in its final position, but it wasn't until later that I worked it out.


First of all, IPCop's eth1 needs an address on the same subnet as the modem. On a PPPoE interface on IPCop, the assigned IP address is 1.1.1.1. The exact address is irrelevant to the PPPoE discovery process, but it isn't one that can reach the modem. We could hack the PPPoE script to assign one that suits us, like 192.168.1.1, but I preferred to leave that alone and instead assign an additional IP address to eth1, like this:





ip addr add 192.168.1.1/24 dev eth1 


This by itself is not enough, because packets intended for 192.168.1.254 from internal machines will reach the modem, but the modem's replies have no way to get back to the web browser. All we need is a masquerade rule in iptables, similar to the one that handles NAT for traffic with the outside world.



iptables -t nat -A REDNAT -o eth1 -j MASQUERADE



And that's it. These were the two lines I put in /etc/rc.d/rc.local. If you go to http://192.168.1.254/ from an inside machine, you get the modem's web admin page.

Adjust the IP addresses to suit your modem, of course. And if your LAN already uses the subnet 192.168.1.0, you will have to change the modem's address first. Personally, I don't understand people who, when they have a choice, use common subnets like 192.168.0.0 or 192.168.1.0 for their SOHO LANs. One of these days they will want a VPN with another LAN, and then... There are vast address spaces in 10.0.0.0/8 and 172.16.0.0/12 for the taking.

In some other tutorials on the web you will see another method suggested: establishing an ssh forward to the modem. This has the advantage that you can control who gets access to the modem admin page by who has an ssh login on IPCop, assuming you don't trust the modem's login dialog to restrict access. The catch is that the default configuration of IPCop these days disallows ssh forwarding in IPCop's sshd; you have to enable it by editing /etc/ssh/sshd_config and turning on AllowTcpForwarding.
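
A sketch of that approach, assuming you have enabled AllowTcpForwarding and that IPCop's sshd is on its usual port 222 (the hostname ipcop is made up; use your IPCop's address):

ssh -p 222 -L 8080:192.168.1.254:80 root@ipcop

Then browsing to http://localhost:8080/ on the machine where you ran ssh brings up the modem's admin page.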