Showing posts with label Debian. Show all posts

Thursday, 2 January 2025

Installing mplayer on dietpi on Raspberry Pi

In a previous article, I explained how to use mplayer to record Internet radio broadcasts non-interactively.

I got it working on my workhorse PC, but as mentioned in that article, it pegged one core at 100%. With 12 cores this wasn't a disaster, but I thought I could shove the job onto a less important computer and gain a backup means of recording. I had a very old Raspberry Pi 2B sitting idle.

I tried installing the latest Raspberry Pi OS, but I couldn't write the entire image to the micro SD card. I think the USB to micro SD adaptor got too hot and caused sector errors towards the end; maybe I should have limited the write rate. Anyway, I decided to use a lighter RPi distro: dietpi.

This installed and booted up fine. I had an issue with the default NTP server until I specified a regional pool. Then I encountered a series of problems:

Apt repos need to be signed

It couldn't update from the default Debian Bookworm archive because the signing key wasn't present. Normally the distro provides this, but dietpi only ships keys for the repos it uses (or provides an outdated key).

Cut to the chase: install all the relevant Bookworm repos; you can find a definitive list of them at various sites. If possible use a local mirror for the repos. Don't forget bookworm-security, which comes from security.debian.org, not a mirror.
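For reference, a typical Bookworm /etc/apt/sources.list looks something like this (substitute a local mirror for deb.debian.org where you can; trim the component list to taste):

```shell
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main contrib non-free non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
```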

When you do an apt update it will complain about various unsigned repos. Note down the key fingerprints for the next stage.

Apt-key is deprecated

Ignore any tutorials that talk about using apt-key to install the required keys. For security reasons, apt-key is deprecated. The new way of doing it is:

Download the GPG keys for all the repos missing keys. You'll need to find a suitable keyserver with the Debian keys.

Feed each through gpg to dearmor the keys and write the output to a suitably named file in /etc/apt/trusted.gpg.d/. Here's what I have:

root@DietPi:/etc/apt/trusted.gpg.d# ls
bookworm-security.gpg       debian-bookworm-archive.gpg  dietpi.asc
bullseye-security.gpg       debian-bookworm-stable.gpg   raspberrypi-archive-stable.gpg
deb-multimedia-keyring.asc  debian-bullseye-archive.gpg  raspbian-archive-keyring.gpg

I haven't given full details, to avoid duplicating material and because I may have forgotten some steps. You can find them in up-to-date tutorials.
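As an illustration of the dearmor step (the filenames are examples, not the exact ones I used): pipe the ASCII-armoured .asc key through gpg --dearmor to get the binary form apt reads. The snippet below fabricates armoured input with --enarmor just to demonstrate the round trip on any armoured data:

```shell
# Fabricate some armoured input; in practice sample.asc would be a key
# downloaded from a keyserver.
head -c 64 /dev/urandom > sample.bin
gpg --enarmor < sample.bin > sample.asc    # stand-in for a downloaded .asc key
gpg --dearmor < sample.asc > sample.gpg    # binary form that apt can read
# For a real key, as root:
#   gpg --dearmor < bookworm-key.asc > /etc/apt/trusted.gpg.d/bookworm-key.gpg
```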

You need Deb multimedia

Mplayer uses some codecs that are not supplied in Debian, so you have to get them from Deb Multimedia. Use a mirror if you can. You need to install the signing key for this in the same manner shown above.
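The extra repo is one more line in your sources (substitute a mirror hostname if you have one; main and non-free are the standard deb-multimedia components):

```shell
deb https://www.deb-multimedia.org bookworm main non-free
```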

Finally install mplayer

Do an apt update and then apt install mplayer. If you have any issues at the update step, fix those first. I came across issues like mirrors that no longer existed, or that didn't include their Debian repo domain in their certificate's list of alternate names, which caused verification failures.

Any other package with problems can also block the installation. For example, libgomp1 had a spurious dependency; I hacked /var/lib/dpkg/status with a text editor to bypass it.

What made this exercise worthwhile

When I used mplayer on the RPi to record an Internet radio station, I was surprised to find it didn't eat 100% of a core as on my workstation. So I ran strace on mplayer on my workstation and saw it looping on reads of file descriptor 0 (stdin). Recalling that mplayer reads single keystrokes from the controlling terminal to control playback, I reasoned that it must be attempting this even in non-interactive mode and looping on failure. The -noconsolecontrols and -slave options to mplayer were the answer: adding these to the command brought CPU usage back to normal.
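For the record, the recording command ends up looking something like this (the stream URL and output filename are placeholders, not from the original article):

```shell
# -noconsolecontrols stops mplayer polling stdin; -slave disables keyboard
# control entirely. Either (or both) stops the 100% CPU spin.
mplayer -noconsolecontrols -slave -really-quiet \
        -dumpstream -dumpfile capture.mp3 \
        'http://radio.example.com/live.mp3'    # placeholder URL
```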

Friday, 29 December 2017

How to change the video resolution in Raspbian PC/Mac Pixel

In case you are not aware, you can also get Raspbian Pixel for your PC or Mac. It's a 32-bit Debian Stretch distro with Raspbian enhancements, notably the Pixel desktop.

One problem is that the default video mode after install is 640x480, which is rather limiting. I searched a bit for how to change the video resolution, but most articles were about Raspbian on the RPi. However this GRUB documentation was the key.

To find the modes available to you, interrupt GRUB's booting with c, then at the grub> prompt, type videoinfo. You will get a list of available modes. This depends on your (real or virtual) video card. In my case I was running under VirtualBox and had sufficient video memory configured. Note a video mode you want, then as root (either via sudo or getting a root shell) do the following:

Edit /etc/default/grub. Change the GRUB_GFXMODE entry (commented out by default) to, for example:

GRUB_GFXMODE=1280x1024x32

Check that the file also contains

GRUB_GFXPAYLOAD_LINUX=keep

Run update-grub, which will rewrite the grub.cfg file. Reboot and enjoy your new video resolution.
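The two edits can be scripted; the sed below is shown against a sample file so you can eyeball the result before touching the real /etc/default/grub (the mode string is just my example):

```shell
# Demonstrate the edits on a sample file; in practice run the sed against
# /etc/default/grub as root, then update-grub.
printf '#GRUB_GFXMODE=640x480\n' > grub.sample
sed -i -E 's/^#?GRUB_GFXMODE=.*/GRUB_GFXMODE=1280x1024x32/' grub.sample
grep -q '^GRUB_GFXPAYLOAD_LINUX=keep' grub.sample || \
    echo 'GRUB_GFXPAYLOAD_LINUX=keep' >> grub.sample
```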

Addendum: I've found that when the VirtualBox guest additions are installed, a different resolution is used. This can be configured with the program lxrandr. I have yet to investigate under what circumstances the display switches from the resolution inherited from GRUB to its own setting.

Thursday, 27 June 2013

A caching scheme to reduce downloads for Linux package updates

As I administer sites with many (nearly) identical Linux hosts, I have thought a lot about reducing the download volume for package updates. Even if you have an unlimited data quota, downloading the same updates multiple times for multiple machines takes time.

I looked at various schemes suggested on the Internet. One is to set up a satellite repository. This is reliable but involves a lot of configuration, and you may download many package updates that are never used at your site.

Another class of scheme involves interposing a proxy, either one specialised for the distro or a general proxy like squid. This can work well but is sometimes defeated by load balancers or smart mirrors, where you are not guaranteed to hit the same repository on every download.

Eventually I came up with a scheme that is applicable to the three distros I have tried it on: CentOS, Debian (and derivatives) and openSUSE. The key observation is that the updaters in these distros have a package cache. If packages are added to the cache out of band, the updater skips downloading them. I describe this as a caching scheme, not a proxying scheme, as no change is made to the updater configuration files and normal, independent updates still work.

The scheme:

There are three parts to the scheme.

1. A download script runs on a designated master host during off-peak hours. We use the updater program in download-only mode, so only packages that are actually needed get fetched. It's also necessary to set the repository to retain packages.

2. A synchronisation script runs on the other hosts to pull the new updates to their cache. This is run as soon as possible after the download.

3. Various utility scripts are used to fire off updates on all hosts. I make use of parallel ssh and ssh-agent to make life easy.

Let's use a concrete example to illustrate this.

CentOS:

1. Download:

The download script is this:

yum --disableexcludes=updates --downloadonly -y update

You need to install the auxiliary package yum-downloadonly. The --disableexcludes=updates disables the exclusion patterns that prevent kernel installation on this host. I disabled kernel updates there because each one requires a reboot and a rebuild of the VirtualBox modules, and as this is a server that engineers use constantly, I delay kernel updates until downtime can be scheduled to do them manually. Other software that may need recompilation on kernel update includes proprietary video drivers.

To set the repository to retain packages, add

keepcache=1

to /etc/yum.conf

2. Synchronisation:

Next I set up an rsync server. Initially I used rsync over ssh, but for security reasons this requires setting up a login account, restricted commands, and so on. Package updates are not sensitive data, so I just share the cache directory with rsyncd. Please consult your distro documentation on how to enable the rsyncd daemon; usually it's run out of xinetd. Here is /etc/rsyncd.conf:

secrets file = /etc/rsyncd.secrets
motd file = /etc/rsyncd.motd
read only = yes
list = yes
uid = nobody
gid = nobody

[yum]
comment = YUM cache
path = /var/cache/yum
filter = + *.rpm + */ - *

Note the filter expression to share RPM files and directories but not files of other types.

The synchronisation script, call it syncfrommaster, on the other hosts is:

rsync -avi masterhost::yum /var/cache/yum

In the cron job that runs this command I insert a random delay so that the hosts don't all try to synchronise at the same time and overload the master.

sleep $(($RANDOM \% 900)); /root/bin/syncfrommaster

Note that % must be escaped with \ in crontab entries.

I arrange for this job to be run an hour after the download script.
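In crontab terms (the times are my own examples), the schedule looks like:

```shell
# /etc/crontab on the master: download during off-peak hours
30 3 * * * root yum --disableexcludes=updates --downloadonly -y update
# /etc/crontab on each client: sync an hour later, staggered up to 15 minutes
30 4 * * * root sleep $(($RANDOM \% 900)); /root/bin/syncfrommaster
```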

3. Update.

You have probably noticed that an update may be published in the window between the master pulling updates down and a host applying them. In that case the host merely downloads the new package on its own, and nothing breaks. However you may wish to run a script to bring the cache up to date and let the hosts catch up.

#!/bin/sh
yum --disableexcludes=updates --downloadonly -y update
pssh -h /root/linux-servers -P -p 1 -t -1 /root/bin/syncfrommaster

The first command is just the download script again. The second makes use of the parallel ssh program to fire off syncfrommaster on each host. Before you run this you need to run ssh-agent bash, followed by ssh-add, to make it possible to run pssh without supplying a login password for each host. This assumes you have already installed ssh keys on each host.

4. Finally you can update all the hosts in one go:

pssh -h /root/linux-servers -P -p 1 -t -1 yum update -y

You need the -y as pssh cannot run interactive commands. You might want to run

yum update

on the master host first to check that the updates will go as expected.

It is possible that the master doesn't have all the packages in the union of packages installed across all hosts. No harm done: a host with distinct packages will download them on its own. Or you may wish to install those packages on the master so the next update is covered.

openSUSE:

1. Download:

zypper -n up -l -d

To set the repository to retain packages, run

zypper mr -k <repo>

for each repository whose packages must be retained, usually repo-update and any extras such as packman.

2. Synchronisation:

/etc/rsyncd.conf (I only show the stanza added to the existing config file)

[zypp]
comment = Zypp archive
path = /var/cache/zypp/packages
read only = yes
list = yes

Notice that read only and list appear in the [zypp] stanza here; these options can go either in the global section or per module.

Synchronisation script:

rsync -avi masterhost::zypp /var/cache/zypp/packages

3. Update:

zypper up

With this info you can work out the catch-up and parallel ssh scripts. As in CentOS a non-interactive flag is needed; for zypper use the global -n option, as in the download command above.

Debian:

1. Download:

apt-get update; apt-get upgrade -d -y

2. Synchronisation:

/etc/rsyncd.conf:


[apt]
comment = APT archive
path = /var/cache/apt/archives
read only = yes
list = yes
filter = - lock

Synchronisation script:

rsync -avi masterhost::apt /var/cache/apt/archives

3. Update:

apt-get upgrade

and you can work out the catchup script and parallel ssh scripts. Again -y works to make apt-get upgrade non-interactive.

Notes:

To keep the explanation simpler I have not added the shell directives to redirect output to /dev/null. If cron jobs generate output, root (or you) will get mailed the output. This can be useful for debugging and keeping an eye on things, but gets tiresome after a while, given that this all works well and falls back to a normal update when the caching is not effective. You will also need 2>&1 to redirect stderr.

You will notice that packages accumulate in the cache after a while. While Debian commendably has apt-get autoclean, the two RPM-based distros have no direct analogue. You might just have a cron job remove packages older than a certain number of days.
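Such a cron job could run something along these lines (the directory and age threshold are illustrative):

```shell
# Delete cached packages untouched for 60+ days; adjust the directory for
# your distro: /var/cache/yum, /var/cache/zypp/packages or
# /var/cache/apt/archives (and *.deb for the latter).
CACHE_DIR=/var/cache/yum
find "$CACHE_DIR" -name '*.rpm' -mtime +60 -delete 2>/dev/null || true
```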

Wednesday, 4 April 2012

Old netbook as a tethered Internet gateway


I had an ancient Asus EEE700, useless even as a netbook for the road. I wondered if I could turn it into an emergency gateway through my Android smartphone for when my ADSL connection goes down. It has Crunchbang Linux installed, which is basically Debian. Here's what I did, for the learning experience:


First you need to disable NetworkManager. The easiest way is to install the sysv-rc-conf package and use its ncurses interface to shut NetworkManager down. Next, configure usb0 (the tethered interface) and eth0 (the gateway interface) by editing /etc/network/interfaces. You might want to preserve a copy of this file so you can restore the default setup. Here are the stanzas used:


allow-hotplug usb0
iface usb0 inet dhcp


auto eth0
iface eth0 inet static
        address GW
        netmask 255.255.255.0

GW should be replaced by whatever gateway address is suitable for your LAN. If your ADSL router is 10.0.1.254, perhaps 10.0.1.253 is suitable.

Here are the rules to make it a NATting gateway, taken from Masquerading Made Simple. I have replaced ppp0 with usb0 everywhere:


iptables -F; iptables -t nat -F; iptables -t mangle -F
iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE
echo 1 > /proc/sys/net/ipv4/ip_forward


And to secure it:


iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i usb0 -j ACCEPT
iptables -P INPUT DROP   # only if the first two are successful
iptables -A FORWARD -i usb0 -o usb0 -j REJECT


You could save the setup using iptables-save, like this:


iptables-save > /etc/gateway.iptables


Then make a shell script to start the gateway for convenience:


#!/bin/sh
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables-restore < /etc/gateway.iptables


Now tether the netbook to the Android smartphone. When you do ip addr, you should see that it has obtained a private address from your smartphone, and your eth0 address should be static.


Next, on your workstation change the default gateway. From a Linux command line it's


route add default gw 10.0.1.253


When you access the Internet, you should see the traffic icons on your smartphone flash. In addition if you go to a website that tells you your IP address, like whatismyipaddress.com, you should see the external address of your smartphone.


When your ADSL service comes back, delete the route with:


route del default gw


Oh yes, this setup does double NAT, since your smartphone already does NAT once.