Friday 29 December 2017

How to change the video resolution in Raspbian PC/Mac Pixel

In case you are not aware, you can also get Raspbian Pixel for your PC or Mac. It's a 32-bit Debian Stretch distro with Raspbian enhancements, notably the Pixel desktop.

One problem is that the default video mode after install is 640x480, rather limiting. I searched a bit for how to change the video resolution but most articles were about Raspbian on the RPi. However this GRUB documentation was the key.

To find the modes available to you, interrupt GRUB's booting with c, then at the grub> prompt, type videoinfo. You will get a list of available modes. This depends on your (real or virtual) video card. In my case I was running under VirtualBox and had sufficient video memory configured. Note a video mode you want, then as root (either via sudo or getting a root shell) do the following:

Edit /etc/default/grub. Change the GRUB_GFXMODE entry, which is commented out by default, to something like:

GRUB_GFXMODE=1280x1024x32

Check that the file also contains

GRUB_GFXPAYLOAD_LINUX=keep

Run update-grub, which will rewrite the grub.cfg file. Reboot and enjoy your new video resolution.
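
Putting it together, here is a minimal sketch of the whole change done non-interactively (run via sudo; the mode is just an example, pick one that videoinfo listed for you, and editing the file by hand works just as well):

sudo sed -i 's/^#\?GRUB_GFXMODE=.*/GRUB_GFXMODE=1280x1024x32/' /etc/default/grub
grep -q '^GRUB_GFXPAYLOAD_LINUX=keep' /etc/default/grub ||
    echo 'GRUB_GFXPAYLOAD_LINUX=keep' | sudo tee -a /etc/default/grub
sudo update-grub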

Addendum: I've found that when the VirtualBox guest additions are installed, a different resolution is used. This can be configured with the program lxrandr. I have to investigate under what circumstances the display switches from the resolution inherited from grub to its own setting.

Saturday 23 December 2017

Are all your blogger blogs using https?

Blogger now allows you to force all http access to redirect to https access. But if you have a lot of blogs, how do you check which (historical) ones need to have this setting enabled in Settings > Basic? Wget to the rescue again. Assuming you have a list of http URLs in the file sites:

for s in $(cat sites)
do
  echo -n "$s " 1>&2
  wget --spider "$s" 2>&1
done | grep Location:


If the output is something like:

http://myblog.blogspot.com Location: https://myblog.blogspot.com/ [following]

that blog is fine.

The 1>&2 on the echo sends the URL to stderr so that it isn't filtered out by the grep.
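
If you prefer curl, an equivalent check might look like this (my own sketch, not part of the original loop; -sI performs a silent HEAD request and the Location header shows where the blog redirects):

for s in $(cat sites)
do
  printf '%s ' "$s"
  curl -sI "$s" | grep -i '^location:'
done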

Monday 27 November 2017

EFI System Partition in soft RAID1

One reason you might want to put the EFI System Partition (ESP) in a RAID1 array on a computer with Linux soft RAID is to have redundancy when booting. If one disk fails, you want the boot to continue from the other disk.

At first I thought this wasn't possible, since a RAID1 partition wouldn't have the specific FAT filesystem and GUID required by the specification. However, the fact that the CentOS 7 install media offered the choice of putting the ESP on a RAID1 array, and that it actually works, made me doubt my hypothesis.

The key to this is that the CentOS 7 installer uses RAID metadata format 1.0, which is located at the end of the partition. Thus it doesn't clash with the beginning of the partition, which is where the BIOS will check to see if the partition is an ESP. However, most Linux partition tools will detect it first as a RAID member, so it's not immediately obvious that it's an ESP.
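
For the record, a hand-built equivalent of what the installer does might look like the following sketch (device names are examples; the essential part is --metadata=1.0, which puts the RAID superblock at the end of the partition):

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
mkfs.vfat -F 32 /dev/md0
mount /dev/md0 /boot/efi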

There are some caveats to this scheme. All writing to the ESP must be done while it's mounted as a RAID array so that there is no discrepancy between the two members. If the only OS on the disks is Linux, this won't be a problem. But don't use this scheme if the ESP also boots other operating systems that don't know about Linux RAID.

For CentOS, when you look at the choice of boot devices in the BIOS, you should see two disk boot candidates, both labelled CentOS.

On the machines I used, HP z230 workstations, I found that I had to disable Legacy Boot or errors reading the boot sectors would be triggered.

The bottom line is I now have workstations with soft RAID1 whose disks are fully redundant. If one disk fails, the other will continue to boot and run with degraded arrays for each of the partitions.

Thursday 23 November 2017

grub2 error: failure reading sector 0x0 from 'hd0'. Press any key to continue

After I had installed CentOS 7 as the only OS on a HP z230 workstation in UEFI boot mode, I got this message before booting. It was actually the last of three errors:

error: failure reading sector 0xfc from 'hd0'. 
error: failure reading sector 0xe0 from 'hd0'. 
error: failure reading sector 0x0 from 'hd0'

Boot would resume from the hard disk after a timeout, but the pause was unacceptable and would worry users.

A search showed many articles like this but none solved my problem. I tried various things: refreshing grub.cfg, disabling the CD/DVD (thinking it might be trying to read the optical drive), and checking whether having the EFI System Partition (ESP) in a RAID1 array was disallowed. (I figured out how the ESP can work with RAID1, and its limitations, but that's for another blog entry.) None of my experiments worked.

However, the linked-to web page alluded to turning off Secure Boot, so I went into that part of the BIOS setup. I found that Secure Boot was already turned off, but there was a Legacy Boot setting there which was enabled, so I turned it off to see what would happen. Lo and behold, the error messages ceased and UEFI boot worked as expected. The Boot Order menu also stopped showing a Legacy section.

Since debugging the innards of the GRUB2 loader is beyond me, I can only surmise that the presence of Legacy Boot entries in the BIOS makes GRUB2 try to read the sectors in question, but since the disk is formatted with GPT partitions and UEFI is in force, the reads fail, for some definition of fail. Maybe somebody can figure out the significance of sectors 0xfc, 0xe0, and 0x0.

Thursday 16 November 2017

Dos and Don'ts of deploying sssd for authentication against Windows AD

New: For deployment on Redhat/CentOS 6, see here.

sssd (and realmd) in RedHat/CentOS 7 offers the chance to use Windows as a single authentication base. The RedHat manual was the most useful, but there were also good debugging tips on stackoverflow and similar forums. However, in deploying sssd I found some things worked for me and some didn't.
  • Do harmonise all the Windows and Linux login IDs. If there are users with two different IDs, then they'll have to bite the bullet and accept the change of one ID. Unfortunately domain logins cannot have aliases.
  • When you join Linux to AD using the realm command and an unprivileged account, you may encounter this 10 machine limit. Here's how to raise the limit.
  • Do use ntpd to keep all the clients in time sync, and specify the domain controllers as NTP servers in ntp.conf. I had an issue where one client wouldn't authenticate even though all its config files were identical to those on a working client. Finally I realised I had not enabled and started ntpd; it turned out to be clock skew, and Kerberos is sensitive to this.
  • Do enable GSSAPI with MIC in sshd. It really works and you can use putty to ssh to the server without specifying a password provided the Windows user has authenticated to the domain.
  • Do use AD security groups to restrict access to the Linux servers; otherwise all AD users can log in by default. This means that enrolling a new Linux user across all the servers is simply a matter of adding the user to your chosen security group (create one if necessary). Oddjobd will take care of creating the home directory on first login, which is very nice. I used the simple access_provider; I couldn't get the ad access_provider and ad_access_filter to work, but this is probably because I couldn't work out the correct LDAP strings. A sketch of the relevant sssd.conf settings appears after this list.
  • You can also use a security group to specify who can have extra privileges in sudo.
  • I used the deterministic hash scheme for mapping SIDs to UIDs because I didn't want to (and didn't have authority to) add attributes to the AD schema.
  • When migrating existing user accounts, make sure you find all the places a user might have files: not just /home but also /var/spool/cron and /var/spool/mail. Kick all the users off and kill all of their processes before you do the chown. Since after the switchover the names will map to the new UIDs, you can cd /home and run a loop: for u in *; do chown -R "$u" "$u"; done. Do the same for the cron and mail directories.
  • If you have software that must have simple login IDs, i.e. fred and not fred@example.com, then you should set use_fully_qualified_names = False. This implies you cannot have a default_domain_suffix. If you have a single domain, then you don't need domain suffixes. If you have multiple domains, then this is beyond my knowledge. I found that some applications cannot handle usernames of the domain form. Even the crontab command will create and require cron files of the domain form if domain suffixes are enabled.
  • I couldn't get the sssd idmap to work with Samba so I chose winbind. Also you have to use winbind if you have to support NTLM authentication.
  • New: If you are running 32-bit applications, you should also install the 32-bit libsss* shared libraries corresponding to the 64-bit ones, otherwise those applications may not be able to get user account info via PAM. This showed up in icfb, an old 32-bit Cadence executable, that worked for local users (in /etc/passwd) but failed for SSSD authenticated users.
  • New: If oddjob_mkhomedir doesn't work, as evidenced by no home directory created for a new login, check /var/log/messages. SELinux is probably blocking this. Either make the policy permissive, or create a policy for this.
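
For reference, here is a minimal sketch of the sssd.conf settings discussed in the list above. The domain and group names are placeholders, and realm join generates most of this file for you, so treat it as illustrative rather than a drop-in configuration:

[sssd]
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ad
# restrict logins to one AD security group
access_provider = simple
simple_allow_groups = linux-users
# deterministic SID-to-UID hashing, no AD schema changes needed
ldap_id_mapping = True
# plain "fred" rather than "fred@example.com"
use_fully_qualified_names = False
fallback_homedir = /home/%u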

Friday 15 September 2017

Use crontab to notify when a piece of software has been released

Sometimes I eagerly await the release of a distro version or a new version of a piece of software. But I don't want to remember to check constantly, so I wrote a script that can be run from cron to let me know.

#!/bin/sh
# watchurl: print a message once the given URL exists
case $# in
0|1)
       echo "Usage: $0 url message" 1>&2
       exit 1
       ;;
esac
url="$1"
shift
wget -q --spider "$url" && echo "$@"


As you can see, this runs wget in spider mode against the specified URL. wget itself produces no output, but if the URL exists the message is printed. Put this in crontab and the message will be mailed to you.

This script depends on knowing a URL that will exist when the release happens. Often you can guess the URL from previous releases. Here are a couple of examples, each is all on a single line:

1. Check at 0808 every day to see if AntiX 17 has been released:

8 8 * * * watchurl https://sourceforge.net/projects/antix-linux/files/Final/antiX-17/ AntiX 17 released

2. Check at 0809 every day to see if VirtualBox 5.1.30 has been released:

9 8 * * * watchurl https://www.virtualbox.org/download/hashes/5.1.30/MD5SUMS VirtualBox 5.1.30 has been released


Rebuild kernel modules for VirtualBox on Linux automatically

I prefer to install packages distributed by VirtualBox rather than packages built by the distro maintainers as I don't have to wait for a new package when a new kernel is released. Unfortunately this involves rebuilding the vbox kernel modules after the new kernel is booted. So I decided to devise a way to automate this.

First I wrote a script to check whether the vbox modules are loaded and, if not, to run the setup script. It's just this:

#!/bin/sh
sleep 120
if ! lsmod | grep -s vbox > /dev/null
then
       /usr/lib/virtualbox/vboxdrv.sh setup
fi


The sleep is needed because on some init systems cron comes up even before all the modules are loaded. Next, I installed a crontab entry that runs the script above once at bootup, using the @reboot notation:


@reboot /root/bin/vboxdrv

This goes into root's crontab, using crontab -e.

And voilĂ , that's all that's needed.

Friday 11 August 2017

Fix for format problem importing contacts from Yahoo into Outlook via CSV format

After some unpleasantness with Yahoo mail mistakenly labelling as spam my messages to friends addressed in the BCC field, which forced me to change my password, I decided to phase out my Yahoo mail account and use Outlook instead for sending out tidbits of interest. So I needed to export my contacts from Yahoo to Outlook.

I followed instructions found on the Internet for transferring contacts from Yahoo to Outlook, but Outlook kept saying that the imported CSV file was not in the required format.

So I decided to look at a contact export file from Outlook to see what the difference might be. The first thing I noticed was that it had the Byte Order Mark, U+FEFF, at the beginning.

So I processed the CSV file with this Linux pipeline; the unix2dos changes NL line endings to CR-NL for good measure:

unix2dos < yahoo_contacts.csv | uconv -f ascii -t utf-8 --add-signature > y.csv

uconv is from the ICU package. After that I imported y.csv with no problems. Success!

Friday 12 May 2017

A variety of mail address change protocols

I'm currently phasing out one email address and going through the accounts I have on websites. In the process I've encountered a lot of variations in protocol and praxis. I'll start with the most secure examples:
  • Confirmation required on both the old and new addresses.
  • Confirmation required on the new address and notification to the old one, or vice versa.
  • Can just edit and save, perhaps with notification afterwards. This is bad: anybody who gains access to the account can change it, and if you make a typo you're locked out.
  • No way to edit, have to ask support to change it. Surprisingly some large sites require this. One asked me to create a new account, then they would migrate the history to it.
  • There is a user-initiated process, but there is a glitch, such as the contact email being changed while the login ID remains the old email. Have to contact support again.
  • The user-initiated process doesn't work. Have to contact support.
And a lot of websites have no way to delete the account. All you can do is hope that your password is unique and encrypted securely, and that their database doesn't get stolen some day.

Thursday 9 March 2017

Don't halt boot if loopback mount fails

I mount the installation DVD image for my distribution with loopback mount so that I don't have to download packages if they are on the ISO image and up to date. To do this I have a line in /etc/fstab that looks like this:

/home/data/software/opensuse/openSUSE-Leap-42.2-DVD-x86_64.iso /srv/www/htdocs/42.2 iso9660 auto,ro,loop

The problem with this is that if for some reason the mount fails, say because the ISO file has been renamed, or the directory holding the image or the mountpoint is inaccessible, the boot process fails.

Enter the nofail option of systemd.mount. If the line is changed to this:

/home/data/software/opensuse/openSUSE-Leap-42.2-DVD-x86_64.iso /srv/www/htdocs/42.2 iso9660 auto,ro,loop,nofail,x-systemd.device-timeout=10

this prevents failure to mount from affecting the boot process. The problem can then be investigated after the machine has started up. The option x-systemd.device-timeout=10 specifies a shorter timeout than the default.

Thursday 2 March 2017

Another reason for 500 Server Error from Wordpress 4.7

I tried to log in to my local installation of Wordpress last night and, while the home page worked, the login page resulted in a blank screen and an HTTP 500 error in error_log.

I thought it might be the move to PHP 7 by Wordpress, although I was puzzled, since it worked the last time I used it. No luck: even after upgrading my PHP packages to PHP 7, the error persisted.

So I did what I should have done in the first place, I set display_errors = On in /etc/php7/apache2/php.ini. The error was then obvious:

PHP Fatal error:  Uncaught Error: Call to undefined function gzinflate()

An install of the php7-zlib package fixed that.

I think the reason it stopped working is that my web browsers started requesting gzip compression. That is why the page worked when I browsed it with w3m.

There are lots of reasons why Wordpress might result in a HTTP 500 error. The takeaway lesson is that you should enable display_errors to get more clues. Remember to set it back after you have fixed the problem.

Monday 27 February 2017

Batch service load threshold too low

In openSUSE Leap 42.2, the batch service doesn't start a job if the load average is above a threshold. This defaults to 0.8 in the source. This means that even if you have a multi-core multi-thread CPU which can handle the load, a job will not start until the CPU is fairly quiet.

Fortunately there is a command-line option to raise this threshold; see man 8 atd. What you have to do is

systemctl edit atd

and enter these lines:

[Service]
ExecStart=
ExecStart=/usr/sbin/atd -f -l 2


Then do

systemctl restart atd

A ps ax should show atd running with the new threshold. The empty first ExecStart clears the original command line, and the second is the one systemd will use to start the service. I have 4 cores so I chose a threshold of 2; you might choose a different value.

If you are running another distro that uses systemd, you should get the ExecStart command from the existing unit file, probably /usr/lib/systemd/system/atd.service, and add a -l load value to suit.

It would be nice if openSUSE could provide a setting for the threshold in /etc/sysconfig/atd in future.

Saturday 28 January 2017

Declaring the correct OS type to VMware Player/Fusion matters

I built a pair of CentOS 6 VMs.

The first was constructed from an OVA file exported from VirtualBox. When it was booted, there was no network adapter detected. A little search showed that I had to add the line:

ethernet0.virtualDev = "e1000"

to the .vmx file. After that it worked.

The second was built from the installation DVD. I expected to have to edit the .vmx file again, but an e1000 network adapter was provisioned for the VM.

Looking at the two VMs, the major difference was that the first had an OS type of Other, while the second was declared as RHEL 6 (which CentOS 6 is equivalent to). The Other type was probably because I had imported from an OVA file.

It seems that Player/Fusion is smart enough to provide a virtual e1000 adapter when the correct OS type is declared.

I expect that I will discover other aspects, such as the client tools, that will depend on this declaration when I continue with the configuration next week, so I will be fixing up the OS type of the first VM.
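
For reference, the OS type lives in the .vmx file as the guestOS setting, so it can also be fixed there; for a 64-bit CentOS 6 guest the line would be something like this (the exact value may vary with the product version):

guestOS = "rhel6-64"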

Friday 20 January 2017

How to rerun @reboot crontab entries without a reboot

Vixie cron and its descendants have a feature where an entry with the special time specification @reboot in place of the first five date and time fields indicates a one-shot action to be run when the machine is first booted.

But how do you test such entries without actually having to reboot the machine? My thinking was that crond must record somewhere that it has already run the @reboot jobs since boot.

Indeed, a quick search of the filesystem found the zero-length file /var/run/cron.reboot, used as a flag that the @reboot jobs have already been done.

So, to rerun @reboot jobs:

rm -f /var/run/cron.reboot

followed by:

systemctl restart crond

for systemd systems or

/etc/init.d/crond restart

for SysVinit systems.

Tuesday 10 January 2017

Configure Postfix to relay to Gmail with noanonymous

I am the only user on my home machine, so although I could configure my mail user agents, Thunderbird and alpine, to relay to Gmail directly, I preferred to set up Postfix as a relay.

There are many tutorials on how to do this, for example this one from Howtoforge, so I will not go over familiar territory. However, if you find that Gmail is giving you an authorization required error in your Postfix logs, you need this setting:

smtp_sasl_security_options = noanonymous

A lot of tutorials fail to mention this.
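
For context, here is a minimal sketch of the main.cf relay settings such tutorials walk you through, including the noanonymous option; the password map path is the conventional one, not necessarily what the tutorial uses:

relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous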

Also, if you find in the logs that Postfix is attempting to connect to the IPv6 address of Gmail, and you don't have an IPv6-capable connection with your ISP, then you might want to set this:

inet_protocols = ipv4

You may not notice this without looking at the logs, because Postfix retries with IPv4 after giving up on IPv6; the only symptom is a delay in relaying the mail.

Thursday 5 January 2017

Installing the Brother HL2130 printer driver on Linux

I recently upgraded to openSUSE Leap 42.2. Since I did a fresh install, one of the things I had to get working again was my Brother HL2130 laser printer. It's a very basic model but does what I want and is cheap to run. It connects to the host computer with USB.

Brother supplies drivers for Linux as both RPM and DEB packages; do a web search to find a download server. These are actually filters that convert Postscript to the Brother printer language. They integrate with the CUPS or the old LPD spooler systems.

After I did the install, nothing would print. The log showed the printer connecting and disconnecting from the USB port.

I looked at the files installed by the Brother packages and found, in the directory /usr/local/Brother/Printer/HL2130/cupswrapper/, the script cupswrapperHL2130-2.0.4. I ran it manually with sh -x cupswrapperHL2130-2.0.4 -i and noticed that it runs lpinfo towards the end to gather some information. Running lpinfo manually showed that it didn't detect the printer. I switched the printer on, and this time it was detected with the path usb://Brother/HL-2130%20series?serial=XXXXXXXX. I reran the install script and this time the installation worked: it had edited /etc/cups/printers.conf with the correct DeviceURI.

So the short solution is: Switch on the printer before installing the Brother driver packages so that detection and configuration of the printer can happen.
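
If you want to confirm detection before running the installer, CUPS can list the device URIs it sees; this is a standard CUPS command, not part of the Brother instructions:

lpinfo -v

With the printer switched on, the usb://Brother/... URI should appear in the output.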

This probably also applies to other Brother printer models using USB connectivity.

Later on a search showed somebody with the same problem and a manual solution.