Saturday, 8 February 2020

Moving VirtualBox VM directory in VirtualBox 6

I have some VMs on my Linux machine which run under VirtualBox 6. When I first created the VMs, I accepted the default directory "VirtualBox VMs" in $HOME. After a while this long directory name containing a space irked me and I looked into ways of renaming it. I found lots of old articles that suggested steps like dumping the old VM and re-importing it, or detaching the VM disks and reattaching them.

It turns out that in VirtualBox 6 on Linux (and perhaps earlier versions, I cannot say), it's trivial. The configuration is held in the file $HOME/.config/VirtualBox/VirtualBox.xml. With VirtualBox not running, I changed all instances of "VirtualBox VMs" to "VMs" in that file, and of course moved the directory itself. Upon starting VirtualBox, everything worked as before.

If you have N VMs there will be N+1 instances of the old directory name in that XML file, one for each VM and one for the base directory of VMs that are created.
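
If you prefer to do it from the command line, the whole change amounts to something like this minimal sketch (assuming the new directory is to be $HOME/VMs; back up the XML first and adjust the names to taste):

cp ~/.config/VirtualBox/VirtualBox.xml ~/.config/VirtualBox/VirtualBox.xml.bak
sed -i 's|VirtualBox VMs|VMs|g' ~/.config/VirtualBox/VirtualBox.xml
mv ~/"VirtualBox VMs" ~/VMs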

Tuesday, 24 December 2019

How to remove the msftdata flag on a GPT partition

You've bought this large disk for use on Linux and it's been preformatted for Windows. Usually there is a msftres partition (which you can leave alone, in case you need it later, as it takes up only a tiny fraction of the disk capacity) and a msftdata partition. You can create an ext4 filesystem (or whatever Linux filesystem you like) on it and it will mount properly, but a listing of the partitions still shows it as a Microsoft one. This annoys you.

The relevant documentation for parted shows that msftdata is a flag. Unfortunately you can't simply turn the flag off. The documentation states:
This flag can only be removed within parted by replacing it with a competing flag, such as boot or msftres.
So the solution is (assuming the partition is number 2):
set 2 boot on
set 2 boot off
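
For the record, the same thing can be done non-interactively, assuming the disk is /dev/sdb (run as root, and double-check the device name first):

parted /dev/sdb set 2 boot on
parted /dev/sdb set 2 boot off
parted /dev/sdb print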

Sunday, 8 December 2019

Discrete TTL IC desk calculator

Today a memory came back to me of an old project that was a spectacular case of unfortunate timing. The British magazine Practical Electronics designed a desk calculator using TTL ICs. Unfortunately, just as the 1st (or 2nd) of the 11 parts was published, MOS chips from Texas Instruments implementing an entire calculator came onto the market, and with better specs. The technology was obsolete even before it was fully published. PE decided to publish the project anyway for its pedagogical benefit. Here is the cover of the July 1972 issue and the first 2 pages of that series:


The series has been archived here. It gets a mention in the EPE (Everyday Practical Electronics, the two magazines having been merged sometime before) 50-year retrospective of 2014 (parts 1 and 2).

The component list is interesting. They weren't even low-power Schottky TTL packages, just standard TTL. It must have run a bit warm. 145 silicon diodes? Sounds like a microcode matrix.

I couldn't have built one anyway; it was far beyond my means at the time. But it was interesting to read how the functions were implemented. I wonder if anybody other than the series author actually made one and wrote it up.

Tuesday, 11 June 2019

Photos got binned in Google Photos

You decide you want to free up storage on your phone, so you go into Google Photos and start deleting photos that you have already downloaded onto your personal computer. Some of those photos you have also shared in a particular album on Google Photos by uploading them from your PC.

You notice that when you tap on the trash can icon it says that the photo will be deleted from all folders and albums you have shared it to. Surely Google would not go delete a photo that had gone through your PC?

Wrong! It turns out that Google matches up what you uploaded from your PC with what's on your phone and knows it's the same photo. When you go to your shared album, the photos you deleted are in the bin too. If you have already emptied the bin, either on the phone or on your PC, or 60 days have elapsed, those photos are gone from your album. Your only recourse is to re-upload from your PC.

I can understand the reason for this synchronised deletion. Say you shared a photo and now regret it; it would be logical to delete it from all the places where it exists. However, as implemented, Google is being heavy-handed. There is no setting I can see to turn this behaviour off. To avoid accidental deletion from your album when you clear space on your phone, you should not use the trash can icon. Instead, use the actions Save to device followed by Delete from device. This is far less convenient, although the latter can at least be done in bulk. Another way is to use a different app to delete the photos from your device.

I have not experimented to see if the matching depends on some metadata in the JPEG that could be stripped out from your PC's copy. Maybe the device name?
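
If you wanted to test that idea, one way would be to strip all the metadata from the PC's copy before re-uploading, for example with exiftool (the filename is purely illustrative):

exiftool -all= IMG_1234.jpg

exiftool keeps the untouched original alongside as IMG_1234.jpg_original, so nothing is lost by experimenting.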

Thursday, 23 May 2019

Upgrading from Puppet 4 to Puppet 6

One day I discovered that the puppetlabs-release-pc1 repository had stopped working. After some searching I learnt that these repositories had been deprecated. Things sure move fast in the configuration management world. Only seems like yesterday that I migrated from 3 to 4.

A bit of reading convinced me that an upgrade to 6 should be straightforward, as the pain of the language update was over, and at this site I did not use advanced features like puppetdb. The one big area of difference was the change from the puppet cert commands to the puppetserver ca commands.

The machines comprise a whole bunch of CentOS 6 and 7 servers, and one Debian Stretch server. First on the master I removed the old release RPM with:

yum remove -y puppetlabs-release-pc1

Then I installed the new puppet release RPM:

rpm -i puppet-release-el-6.rpm

followed by a

yum update -y

This updated the puppetserver and puppet-agent packages to the latest version. The server started fine and agents could connect to it. So on all the CentOS machines, I did the same thing and updated the puppet-agent package. The package should be puppet-release-el-7.rpm for the CentOS 7 machines of course.

On Debian Stretch it was a bit trickier. I had installed the Debian Puppet packages which use non-standard directory paths. So I had to first remove the old Stretch packages before adding the APT repository with:

dpkg -i puppet-release-stretch.deb

Then I installed the Puppetlabs agent package:

apt update
apt install -y puppet-agent

On the first run of puppet agent --test --noop it couldn't contact the master. This is because puppet.conf now lives under /etc/puppetlabs (at /etc/puppetlabs/puppet/puppet.conf) rather than /etc/puppet, so the fresh file had no server setting, and I don't use a CNAME of puppet for the master here. I added the lines:

[agent]
server = master.example.com

and it connected to the master. But this time it generated a new PEM certificate and then complained about a mismatch with the master's copy. I located the old PEM and copied it over the new one, and the mismatch went away. This is, as mentioned, due to the Debian package using non-standard directory paths. By the way, the first run also loaded all the configuration files from the master into the local cache, which is now in a standard directory. At some point I should clean up the old non-standard puppet directories.
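
For reference, the fix amounts to something like the following, assuming the old Debian ssldir was /var/lib/puppet/ssl and the new one is /etc/puppetlabs/puppet/ssl (the certname agent.example.com is illustrative; check where your old ssldir actually lives before copying):

cp /var/lib/puppet/ssl/certs/agent.example.com.pem /etc/puppetlabs/puppet/ssl/certs/agent.example.com.pem
cp /var/lib/puppet/ssl/private_keys/agent.example.com.pem /etc/puppetlabs/puppet/ssl/private_keys/agent.example.com.pem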

On the master the puppetserver ca commands didn't work, giving a 403 not allowed error. A bit of searching turned up this blog entry. However, the instructions that say to modify auth.conf can't be used verbatim: if you add the indicated rule, the server will not start, which I suspect is because it duplicates an existing rule. Instead, what you should do is add the line:

allow: master.example.com   # <- add your puppet master certname

to the existing rule for the match:

path: "/puppet-ca/v1/certificate_status"

Restarting the server worked and I could run commands like puppetserver ca list --all. Business as usual again.

Monday, 13 May 2019

Advances in microcontrollers — a potted viewpoint

I wrote this piece to explain to myself the advances in microcontroller technology before I restarted my electronics making hobby with a project using a retro MCU, the 8042, from the 8048 family. It was originally embedded in the writeup of the project but I think it stands on its own well enough.

Semiconductor logic technology

MCUs of the 8048 era used NMOS logic. This was already an improvement over PMOS logic, which required higher voltages, so interfacing with TTL had become easier. However, NMOS logic has since been supplanted by CMOS logic, which contributes to several of the advantages below.

Power consumption

MCUs of that era drew tens of milliamps. This obviously limited battery operation. Nowadays MCUs can draw microamps in quiescent mode. So battery and solar cell operation become possible. Cue the rise of IoT devices. This partly offsets concerns about the power consumption of lots of these devices in consumer hands.

Power handling

The MCUs of the past could only drive a small number of TTL unit loads, so interface chips were needed if you wanted to drive heavier loads. Now MCU pins can drive currents of the order of tens of milliamps, so they can be connected directly to displays, saving chips.

Clock rate

Clock rates of that era were around 1-10 MHz. Now MCUs can easily be clocked 5-10 times faster.

Clock drive

Clock drive inputs were already getting simpler. The 8080 microprocessor required a complicated set of clock signals, usually provided by an auxiliary chip, the 8224. The 8085 simplified that by taking the clock circuit on chip.

Later on, for non-timing-critical applications, the quartz crystal could be replaced by a lower-cost ceramic resonator, or even an RC circuit. Today there are chips that have the RC components on chip, which saves another pin or two, and the designer doesn't even have to think about the clock.

Program storage

In that era, programs were stored in mask ROM or EPROM. Mask ROM was only for mass production, so testing and small runs had to be done with EPROM. So either one had to connect an external EPROM (see Greater integration below) or the MCU had to have a quartz window for UV erasure, increasing the chip price. If you were quite sure the program worked, you could use an OTP version that was programmed in the same way, but only once.

The first advance was the move to EEPROM, which meant that the program memory could be reprogrammed electrically, eliminating the quartz window. The next step, integrating the EEPROM on the same chip as the MCU, contributed to the advantages below.

Greater integration

One advantage MCUs had over microprocessors was that peripherals were integrated on the same chip. However, this ran up against the number of package pins. It was not unusual to multiplex the buses to the external memory and peripherals to conserve pins, or to use expansion chips like the 8243. With EEPROM and peripherals on chip, the buses no longer needed to be exposed to the outside world. This means that more pins can be dedicated to peripherals, or the MCU can go into a package with fewer pins, even as few as 8. At the other end of the spectrum, the smaller pin footprints of SMD packages allow more pins to the outside world if desired.

CPU program and data bit width

Now that the EEPROM is on chip, there no longer needs to be a relationship between the width of the program memory and the width of the data memory. For instance, PIC program words are 12 bits (or other odd sizes, depending on the family) while the data memory is 8 bits wide. This assumes a Harvard architecture, where the program and data stores are separate.

Glue technologies

These days instead of using discrete logic ICs to interface MCUs to other devices, one can use CPLDs or an even older technology, GALs. These are reprogrammable, sometimes even in situ, allowing you to correct logic errors without circuit board rework.

However, CPLDs cater more to professional products, which are transitioning to 3.3V or lower, so there are fewer options for connecting to 5V logic, and the packaging won't be DIP but PLCC or SMD.

Some hobbyists have cunningly used MCUs like AVRs as glue logic. One recent Z80 SBC used an AVR to provide program memory and interfaces. Yes, it does feel strange to use a more powerful micro as a slave to an old one.

Rise of serial protocols

Another way to conserve pins and interconnections is to transfer data serially. Nowadays many peripherals are interfaced over serial protocols, reducing the number of lines compared to parallel buses. Protocols like I2C, SPI, and various custom ones are used. This is helped by the increased clock rates of MCUs, so transfer time isn't an issue. Sometimes the chip hardware or instruction set has assists for these protocols.

Rise of modules

With standard protocols, manufacturers could make modules to simplify design. Instead of, for example, designing the drive circuitry for an array of seven-segment LEDs, one can get a module which incorporates the LEDs, multiplexing, current limiting, brightness control, and even keyboard input. One example is the TM1637 chip found on many cheap 4-digit LED displays. There are only 4 pins to connect: two for power, plus clock and data.

You can see this in use in this clock, where I reduced the hassle of interfacing seven-segment LEDs, with all the attendant drive transistors and resistors, by using a cheap off-the-shelf serially driven display board. But this requires writing bit-banging code and ensuring that the MCU has enough spare processor cycles.

The whole concept of modules and specific implementations like Arduino shields makes life easy for the experimenter. Less soldering is required.

Toolchains

As MCU architectures became more sophisticated, they became better targets for higher-level languages like C. The demands C makes on the architecture (a usable stack and pointer access to data memory, for instance) mean that you could not support C on an 8048 or 8042; on the 8051 family, you can. The increase in program memory sizes and clock speeds means that it's no longer as crucial to program in assembler, and one can afford to "waste" memory and CPU power in return for being able to write clearer code and larger programs. The Arduino ecosystem allows experimenters to connect their computer to the board, then edit a sketch, compile and test in a tight loop.

Improvements in software tools mean that it has become easier to write compilers; it's no longer an arcane art. The open source movement provides more candidates to choose from, and community participation makes the software better.

Construction techniques

While this is not directly related to MCUs, construction choices are different now. The breadboard is still useful, but for production you design PCBs with CAD tools and send the plot files over the Internet for fabrication. No need for the old-school resist pens, or photoresist and exposure masks, and the noxious chemicals to prepare, etch, and clean the board. Other steps like silk-screen printing, solder masks, and drilling are thrown in. The factory can even pre-solder standard SMD components, or custom components you provide, for a price of course.

Monday, 6 May 2019

gcc cross-compiler for 16-bit x86 processors

This may be of interest to embedded developers targeting the ancient 16-bit Intel x86 processors.

A port of gcc targeting the ia16 architecture has existed for some years, but there is now a developer (TK Chia) maintaining the port and trying to add support for more features, like far pointers. The source is on GitHub, but there is also an Ubuntu PPA for bionic and xenial, for both i386 and amd64 host architectures.

I don't have Ubuntu, but I do have a VM instance of AntiX 17 (32-bit), so being a lazy person I decided to see if I could just install the deb packages directly on Debian. I found that the bionic packages were too new for AntiX but the xenial packages worked. These were the minimum packages I needed in order to have no unsatisfied dependencies (the install command is sketched after the list):

binutils-ia16-elf
gcc-ia16-elf
libi86-ia16-elf
libnewlib-ia16-elf
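
Installing them was just a dpkg -i of the downloaded files, along these lines (the filenames are illustrative; use the actual versioned names from the PPA):

dpkg -i binutils-ia16-elf_*.deb gcc-ia16-elf_*.deb libi86-ia16-elf_*.deb libnewlib-ia16-elf_*.deb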


I compiled the standard hello.c program.

$ cat hello.c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
       printf("Hello world\n");
       exit(0);
}

$ ia16-elf-gcc -S hello.c

and got:

$ cat hello.s
        .arch i8086,jumps
       .code16
       .att_syntax prefix
#NO_APP
       .section        .rodata
.LC0:
       .string "Hello world"
       .text
       .global main
       .type   main, @function
main:
       pushw   %bp
       movw    %sp,    %bp
       movw    $.LC0,  %ax
       pushw   %ax
       pushw   %ss
       popw    %ds
       call    puts
       addw    $2,     %sp
       movw    $0,     %ax
       pushw   %ax
       pushw   %ss
       popw    %ds
       call    exit
       .size   main, .-main
       .ident  "GCC: (GNU) 6.3.0"


Compiling to a binary with:

$ ia16-elf-gcc -o hello hello.c

worked, but the binary isn't executable on Linux; file identifies it as a DOS executable:

$ file hello
hello: COM executable for DOS

I didn't explore any further as I don't have a current need for this compiler so I don't know what the library routine did for putchar.

You'd have to tweak the linker and libraries to generate code for an embedded target. But this looks promising.