Tuesday 24 December 2019

How to remove the msftdata flag on a GPT partition

You've bought this large disk for use on Linux and it's been preformatted for Windows. Usually there is a msftres partition (which you can leave alone, in case you need it later, as it takes up only a tiny fraction of the disk capacity) and a msftdata partition. You can format the latter as ext4 or whatever Linux filesystem you like and it will mount properly, but a partition listing still shows it as a Microsoft partition. This annoys you.

The relevant documentation for parted shows that msftdata is a flag. Unfortunately you can't just set the flag off. The document states:
This flag can only be removed within parted by replacing it with a competing flag, such as boot or msftres.
So the solution is (assuming the partition is number 2):
set 2 boot on
set 2 boot off
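For the record, here is the whole trick rehearsed on a throwaway image file, so you can try it without touching a real disk. The image name, layout, and partition number are made up for the demonstration; on a real disk you would substitute the device (e.g. /dev/sdb) and the partition number shown by print.

```shell
# Rehearse the workaround on a scratch image (no root needed).
truncate -s 64M disk.img
parted -s disk.img mklabel gpt
parted -s disk.img mkpart mydata ext4 1MiB 32MiB
parted -s disk.img set 1 msftdata on   # simulate the factory formatting
parted -s disk.img set 1 boot on       # the competing flag replaces msftdata
parted -s disk.img set 1 boot off      # clear it; msftdata does not return
parted -s disk.img print               # the flags column should now be empty
rm disk.img
```

The same two set commands, minus the scaffolding, are what you would run against the real partition.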

Sunday 8 December 2019

Discrete TTL IC desk calculator

Today a memory came back to me of an old project that was a spectacular case of unfortunate timing. The British magazine Practical Electronics designed a desk calculator using TTL ICs. Unfortunately, just as the 1st (or 2nd) of the 11 parts was published, MOS chips from Texas Instruments implementing an entire calculator came onto the market, and with better specs. The technology was obsolete even before it was published. PE decided to publish the project anyway for its pedagogical benefit. Here is the cover of the July 1972 issue and the first 2 pages of the series:


The series has been archived here. It gets a mention in the EPE (Everyday Practical Electronics, the two magazines having been merged sometime before) 50-year retrospective of 2014 (parts 1 and 2).

The component list is interesting. They weren't even low-power Schottky TTL packages, just standard TTL; it must have run a bit warm. 145 silicon diodes? Sounds like a microcode matrix.

I couldn't have built one anyway, as it was far beyond my means at the time, but it was interesting to read how the functions were implemented. I wonder if anybody other than the series author actually made one and wrote it up.

Tuesday 11 June 2019

Photos got binned in Google Photos

You decide you want to free up storage on your phone, so you go into Google Photos and start deleting photos that you have already downloaded onto your personal computer. Some of those photos you have also shared in a particular folder on Google Photos by uploading from your PC.

You notice that when you tap on the trash can icon, it says that the photo will be deleted from all folders and albums you have shared it to. Surely Google would not go and delete a photo that had gone through your PC?

Wrong! It turns out that Google matches up what you uploaded from your PC with what's on your phone and knows it's the same photo. When you go to your shared album, the photos you deleted are in its bin too. If you have already emptied the bin, either on the phone or on your PC, or 60 days have elapsed, those photos are gone from your album. Your only recourse is to re-upload them from your PC.

I can understand the reason for this synchronised deletion. Say you shared a photo and now regret it. It would be logical to delete it from all the places where it exists. However, as implemented, Google is being heavy-handed. There is no setting I can see to turn this behaviour off. To avoid accidental deletion from your album when you clear space on your phone, do not use the trash can icon. Instead, use the actions Save to device, followed by Delete from device. This is less convenient, but the latter can be done in bulk. Another way is to use a different app to delete the photos from your device.

I have not experimented to see if the matching depends on some metadata in the JPEG that could be stripped out from your PC's copy. Maybe the device name?

Thursday 23 May 2019

Upgrading from Puppet 4 to Puppet 6

One day I discovered that the puppetlabs-release-pc1 repository had stopped working. After some searching I learnt that these repositories had been deprecated. Things sure move fast in the configuration management world; it only seems like yesterday that I migrated from Puppet 3 to 4.

A bit of reading convinced me that an upgrade to 6 should be straightforward, as the pain of the language update was over, and at this site I did not use advanced features like PuppetDB. The one area of major difference was the change from the puppet cert commands to the puppetserver ca commands.

The machines comprise a whole bunch of CentOS 6 and 7 servers and one Debian Stretch server. First, on the master, I removed the old release RPM with:

yum remove -y puppetlabs-release-pc1

Then I installed the new puppet release RPM:

rpm -i puppet-release-el-6.rpm

followed by a

yum update -y

This updated the puppetserver and puppet-agent packages to the latest version. The server started fine and agents could connect to it. So on all the CentOS machines I did the same thing to update the puppet-agent package. The release package is puppet-release-el-7.rpm for the CentOS 7 machines, of course.

On Debian Stretch it was a bit trickier. I had installed the Debian Puppet packages, which use non-standard directory paths, so I had to first remove the old Stretch packages before adding the APT repository with:

dpkg -i puppet-release-stretch.deb

Then I installed the Puppetlabs agent package:

apt update
apt install -y puppet-agent

On the first run of puppet agent --test --noop it couldn't contact the master. This is because the puppetlabs packages read puppet.conf from under /etc/puppetlabs rather than the Debian location /etc/puppet (I don't use a CNAME of puppet here). I added the lines:

[agent]
server = master.example.com

and it connected to the master. But this time it generated a new PEM certificate and then reported a mismatch with the master's copy. I located the old PEM and copied it over the new one, and the mismatch went away. This again is due to the Debian package using non-standard directory paths. Incidentally, the first run also loaded all the configuration files from the master into the local cache, which is now in a standard directory. At some point I should clean up the old non-standard puppet directories.
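In case it helps someone, the copy went something like this. The paths are from memory and are assumptions to verify on your own system: the Debian packages kept their SSL tree under /var/lib/puppet/ssl, while the puppetlabs packages expect /etc/puppetlabs/puppet/ssl.

```shell
# Hypothetical sketch: carry the old agent certificate (and its key)
# across to the puppetlabs directory layout. Check that both trees
# actually exist at these paths before copying anything.
host=$(hostname -f)
old=/var/lib/puppet/ssl
new=/etc/puppetlabs/puppet/ssl
cp -p "$old/certs/$host.pem"        "$new/certs/$host.pem"
cp -p "$old/private_keys/$host.pem" "$new/private_keys/$host.pem"
```

Alternatively you can delete the new SSL directory on the agent and re-sign it from the master, but that means revoking the old certificate first.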

On the master the puppetserver ca commands didn't work, giving a 403 not allowed error. A bit of searching turned up this blog entry. However, the instructions that say to modify auth.conf can't be followed verbatim: if you add the indicated rule, the server will not start. I suspected this was because it duplicated an existing rule. Instead, what you should do is add the line:

allow: master.example.com   # <- add your puppet master certname

to the existing rule for the match:

path: "/puppet-ca/v1/certificate_status"
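Put together, the modified rule in puppetserver's auth.conf (HOCON format) would look something like this sketch. The method list, sort-order, and rule name are reproduced from memory of a stock auth.conf and may differ in your version; only the allow line is the actual addition, and the certname is of course an example.

```hocon
{
    match-request: {
        path: "/puppet-ca/v1/certificate_status"
        type: path
        method: [get, put, delete]
    }
    allow: master.example.com   # <- the added line; use your master's certname
    sort-order: 500
    name: "puppetlabs certificate status"
}
```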

Restarting the server worked and I could run commands like puppetserver ca list --all. Business as usual again.

Monday 13 May 2019

Advances in microcontrollers — a potted viewpoint

I wrote this piece to explain to myself the advances in microcontroller technology before I restarted my electronics making hobby with a project using a retro MCU, the 8042, from the 8048 family. It was originally embedded in the writeup of the project but I think it stands on its own well enough.

Semiconductor logic technology

MCUs of the 8048 era used NMOS logic. This was already an improvement over PMOS logic, which required higher voltages, so interfacing with TTL had become easier. However, NMOS has now been supplanted by CMOS logic, which contributes to some of the advantages below.

Power consumption

MCUs of that era drew tens of milliamps. This obviously limited battery operation. Nowadays MCUs can draw microamps in quiescent mode. So battery and solar cell operation become possible. Cue the rise of IoT devices. This partly offsets concerns about the power consumption of lots of these devices in consumer hands.

Power handling

The MCUs of the past could only drive a small number of TTL unit loads, so interface chips were needed if you wanted to drive larger loads. Now MCUs are capable of driving higher currents, of the order of tens of milliamps per pin, and can be connected directly to displays, saving chips.

Clock rate

Clock rates of that era were around 1-10 MHz. Now MCUs can easily be clocked 5-10 times faster.

Clock drive

Clock drive inputs were already getting simpler. The 8080 microprocessor required a complicated set of clock signals, usually provided by an auxiliary chip, the 8224. The 8085 simplified that by taking that clock circuit on chip.

Later on, for non-timing critical applications, the quartz crystal could be replaced by a lower cost ceramic resonator, or even an RC circuit. Today there are chips that have the RC components on chip which saves another pin or two and the designer doesn't even have to think about the clock.

Program storage

In that era, programs were stored in mask ROM or EPROM. Mask ROM was only for mass production, so testing and small runs had to be done with EPROM. So either one had to connect an external EPROM (see Pin count below) or the MCU had to have a quartz window for UV erasure, increasing chip price. If you were quite sure the program worked, you could use an OTP version that was programmed in the same way, but only once.

The first advance was moving to EEPROM. This meant that the program memory could be reprogrammed electrically, eliminating the quartz window. Then the next step, combining the EEPROM with the MCU on the same chip, contributed to the advantages below.

Greater integration

One advantage MCUs had over microprocessors was that peripherals were integrated on the same chip. However this ran up against the number of package pins. It was not unusual to multiplex the buses to the external memory and peripherals to conserve pins. Or use expansion chips like the 8243. With EEPROM and peripherals on chip, the buses no longer needed to be exposed to the outside world. This means that more pins can be dedicated to peripherals. Or you could put MCUs in packages with fewer pins, even as low as 8. At the other end of the spectrum, smaller pin footprints of SMD ICs allow more pins to the outside world if desired.

CPU program and data bit width

Now that the EEPROM is on chip, there no longer needs to be a relationship between the width of the program memory and that of the data memory. For instance, PIC program words are 12 bits wide in the baseline family, and other unusual sizes in later families. This assumes a Harvard computer architecture, where the program and data stores are separate.

Glue technologies

These days instead of using discrete logic ICs to interface MCUs to other devices, one can use CPLDs or an even older technology, GALs. These are reprogrammable, sometimes even in situ, allowing you to correct logic errors without circuit board rework.

However, CPLDs cater more to professional products, which are transitioning to 3.3V or lower, so there are fewer options for connecting to 5V logic; also the packaging won't be DIP, but PLCC or SMD.

Some hobbyists have cunningly used MCUs like AVRs as glue logic. One recent Z80 SBC used an AVR to provide program memory and interfaces. Yes, it does feel strange to use a more powerful micro as a slave to an old one.

Rise of serial protocols

Another way to conserve pins and interconnections is to transfer data serially. Nowadays many peripherals are interfaced with serial protocols reducing lines compared to parallel protocols. Protocols like I2C, SPI, and custom ones are used. This is assisted by the increased clock rate of MCUs so transfer time isn't an issue. Sometimes the chip hardware or instruction set has assists for these protocols.

Rise of modules

With standard protocols, manufacturers could make modules to simplify design. Instead of, for example, designing drive circuitry for an array of seven-segment LEDs, one can get a module which incorporates the LEDs, multiplexing, current limiting, brightness control, and even keyboard input. One example is the TM1637 chip found on many cheap 4-digit LED displays. There are only 4 pins to connect: two for power, plus clock and data.

You can see this in use in this clock, where I avoided the hassle of interfacing seven-segment LEDs, with all the attendant drive transistors and resistors, by using a cheap off-the-shelf serially driven display board. The trade-off is that it requires writing bit-banging code and ensuring that the MCU has enough spare processor cycles.

The whole concept of modules and specific implementations like Arduino shields makes life easy for the experimenter. Less soldering is required.

Toolchains

As MCU architectures became more sophisticated, they became better targets for higher level languages like C. The architectural requirements of C mean that you could not support it on an 8048 or 8042; on the 8051 family, you can. The increases in program memory size and clock speed mean that it's no longer as crucial to program in assembler, and one can afford to "waste" memory and CPU power in return for being able to write clearer code and larger programs. The Arduino ecosystem lets experimenters connect their computer to the board, then edit a sketch, compile, and test in a tight loop.

Improvements in software tools mean that it has become easier to write compilers; it's no longer an arcane art. The open source movement provides more candidates to choose from, and community participation makes the software better.

Construction techniques

While this is not directly related to MCUs, construction choices are different now. The breadboard is still useful, but for production, you design PCBs with CAD tools and send the plot files over the Internet for fabrication. No need for the old-school resist pens, or photoresist and exposure masks, and noxious chemicals to prepare, etch, and clean the board. Other steps like silk screen printing, solder masks, and drilling are thrown in. The factory can even presolder standard SMD components, or custom components you provide, for a price of course.

Monday 6 May 2019

gcc cross-compiler for 16-bit x86 processors

This may be of interest to embedded developers targeting the ancient 16-bit Intel x86 processors.

A port of gcc targeting the ia16 architecture has existed for some years, and there is now a developer (TK Chia) maintaining this port and trying to add support for more features, like far pointers. The source is on GitHub, but there is also an Ubuntu PPA for bionic and xenial, for both i386 and amd64 architecture hosts.

I don't have Ubuntu but I do have a VM instance of AntiX 17 (32-bit), so, being a lazy person, I decided to see if I could just install the deb packages directly on my Debian-derived system. I found that the bionic packages were too new for AntiX but the xenial packages worked. These were the minimum packages I needed to avoid unsatisfied dependencies:

binutils-ia16-elf
gcc-ia16-elf
libi86-ia16-elf
libnewlib-ia16-elf


I compiled the standard hello.c program.

$ cat hello.c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    printf("Hello world\n");
    exit(0);
}

$ ia16-elf-gcc -S hello.c

and got:

$ cat hello.s
        .arch i8086,jumps
        .code16
        .att_syntax prefix
#NO_APP
        .section        .rodata
.LC0:
        .string "Hello world"
        .text
        .global main
        .type   main, @function
main:
        pushw   %bp
        movw    %sp,    %bp
        movw    $.LC0,  %ax
        pushw   %ax
        pushw   %ss
        popw    %ds
        call    puts
        addw    $2,     %sp
        movw    $0,     %ax
        pushw   %ax
        pushw   %ss
        popw    %ds
        call    exit
        .size   main, .-main
        .ident  "GCC: (GNU) 6.3.0"


Compiling to a binary with:

$ ia16-elf-gcc -o hello hello.c

worked, but the binary isn't executable on Linux; file shows what it is:

$ file hello
hello: COM executable for DOS

I didn't explore any further as I don't have a current need for this compiler, so I don't know what the library routine (puts, in the listing above) does for output.

You'd have to tweak the linker and libraries to generate code for an embedded target. But this looks promising.

Saturday 16 March 2019

My experience using a PCB fab service (Pcbway)

When I counted a dozen 8085 CPUs in my retro tech stash, I decided to do an SBC project with them. The goals were to learn to order PCBs online, among several other things. Coincidentally, about this time a Pcbway rep contacted me asking if I would like to sample their service and post public feedback.

The design I submitted is not mine; it's a completed 8085 SBC design published by Sergey Kiselev, see my project page for the links. This means I am confident that there are no mistakes in the placement and routing. It is not a complex design: only a handful of ICs thanks to the use of a GAL to replace a lot of discrete logic, two layers, 0.1 inch pitch ICs, and through-hole components. One step at a time; I'll learn to use Kicad/pcbnew for my own designs later.

A personal note: a PCB fab service over the Internet is something only dreamt of back in the days when I started making PCBs, first with resist pens, then later with photoresist, while dealing with noxious chemicals like ferric chloride and xylene. Well, the Internet didn't even exist then. The disadvantage of course is the turnaround time, so you should be fairly sure of your circuit before you commit to a PCB. The old tailor's or carpenter's adage, measure twice, cut once, is very apt here.


Submitting the order through the website

Like many fabs, registering an account on the Pcbway site will get you starter credit.

At this point assume you have breadboarded and debugged your circuit, have laid out the PCB and are ready to generate the Gerber and drill files.

The website is attractive and the steps are easy to follow. Just click on the Quote and Order button and it will take you through the steps. There are help buttons for the various entry fields. Pcbway seems to offer a great variety of options for the boards; I accepted the defaults as I had no special requirements. Maybe one day I'll design boards requiring advanced features. Like other fabs, Pcbway gives the magical size of 100x100 mm special pricing (the size stems from the limit of the free version of Eagle EDA; Kicad is always free and has no limit), so usually people start with 5 or 10 boards within this limit.

I noticed that Pcbway offers to upgrade you from HASL (Hot Air Solder Levelling) to the more expensive ENIG (Electroless Nickel Immersion Gold) at their discretion for no extra charge. This Pcbway page explains the two technologies. Searching on those acronyms will find you more explanations. You can veto giving them discretion if you wish.

For Kicad 5, the pcbnew documentation explains how to generate Gerber and drill files; the steps can also be found in Pcbway's tutorial. For other EDA software there are abundant tutorials on the Internet. Package the generated files (usually 9) as a zip archive and upload it to the website.

At this point Pcbway will review your files. I uploaded on a Sunday night, so either they have software doing the checks, or there are reviewers working shifts, which is plausible, as a big fab like Pcbway must get a continuous stream of orders from all over the world, thanks to the Internet. In any case the review finished within an hour and I was notified by email that I could go back to the website and pay to start the fabrication process.

There are many choices for postage depending on how impatient you are or how urgent the work is. Of course, the faster, the more expensive, and postage can cost more than the fab charge. I took the tracked China EMS (e-packet) option, which costs a bit more than the cheapest China air option.

As an amusing digression, I expected that the package of 10 PCBs (1.6 mm x 10 = 16 mm, plus packaging) would be too thick to go through my mailbox slot. Our postal delivery workers have heavy workloads, so if they can't fit a package into your mailbox, they don't ring the doorbell but leave a card telling you to collect it at the post office, where you fret impatiently in a queue behind dozens of people paying their bills because they haven't learnt to pay by Internet. ☹️
Card has become a verb in this country. 😊 Funny talking about carding on forums, not so funny when you are the unfortunate recipient. Fortunately we can also have parcels delivered to a parcel locker, collectable 24/7 using a code sent to us, and this is the delivery address I gave.


Monitoring the fabrication steps

Once your order is paid for, the process will start and you can monitor the steps it's going through. Here is a window snapshot. I learnt something today: the order in which the steps are done. The one acronym new to me was AOI, which turns out to stand for Automated Optical Inspection. From beginning to end takes 1 to 2 days. BTW, Pcbway: this popup window doesn't work properly on my mobile (Chrome browser); most of the rest of the site does.
2019-03-27 edit: If you would like to learn more about the manufacturing process, there are lots of videos on YouTube, including some by Pcbway themselves. This one is a good start and here is another by a visitor. Be aware that boards with more than 2 layers go through extra steps, which some videos show. What I find interesting is how much automation and yet how much manual handling are involved. Search on "PCB manufacture" for many more videos.

The wait

There was a delay of one day between the end of production and the boards being shipped, then one more day before tracking information appeared. Beijing post office, huh? I wonder which of the two PCB factories did my order.
I tracked the progress of the parcel. What was my parcel doing between the 3rd and 12th of March? Standing on the Beijing airport tarmac with a thumb extended, hitchhiking a ride to Sydney? 😊
This goes to show that unless you are willing to pay a lot for postage, you should have several projects going to occupy yourself while waiting for PCBs.


Evaluation of received boards

Finally the boards arrived in my parcel locker about 2 weeks after the end of fabrication. For sure nothing was going to fall out with the huge amount of packing tape they used, hahaha.
Inside there were 11 PCBs, one extra just in case I suppose, in a waterproof bubble wrap bag. So how good were the PCBs? Here is the top of the PCB.
And a closer look. Both the tracks and the screen printing are crisp and well defined. Although mine is not a demanding board, I'm sure their process can handle thinner traces.
The bottom of the PCB.
Notice the well-defined dots in the legends for the jumpers.
At this point I was going to insert a picture of a partially soldered board, but that would only demonstrate my skill, or actually lack thereof, in soldering. So I reckon Pcbway has done an excellent job, and I would not hesitate to use their service again. The board does look like the preview later on this page, though perhaps a darker shade of green.

Apologies for the fish-eye distortion in the closeups. I had to use macro mode and put the lens really close to the board. You are viewing it perhaps double life-size on your screen.

If you have issues with the received product, be sure to contact customer service for resolution.


Other services

Pcbway hosts an online Gerber viewer which appears to be an instance of Mike Cousin's tracespace viewer, also available as a hosted service here. It's more responsive than other sites I've used. Go wild with it; you are not loading a remote server, as the rendering is done using the SVG capabilities of your web browser. Here is Pcbway's announcement. You can use Pcbway's viewer without an account. I used the Gerber viewer in Kicad 5, but the tracespace viewer can also show you the front and back of the board as you will receive them, with the silkscreens correctly oriented, not with one side reversed as in the Kicad viewer. The board looks so realistic that you sit there, admire it, take selfies with it, and forget that you still have to build the board later. 😊

Conclusion


I hope that my newbie experience ordering a PCB fab job has shown you how to get started. Whichever fab house you choose to go with, in fact, whatever circuit assembly process you choose to use, I wish you much success with your projects. May you never let the magic smoke inside chips escape.

Tuesday 29 January 2019

How to get fall through behaviour in bash case statements

As many people know, C has the famous (or infamous) fall-through behaviour in switch statements. Is something similar available in bash?

Indeed, just look at the man page:

Using ;& in place of ;; causes execution to continue with the list associated with the next set of patterns. Using ;;& in place of ;; causes the shell to test the next pattern list in the statement, if any, and execute any associated list on a successful match.

In other words, where you would leave out the break; in C, you would write ;& instead of ;; in bash. I hope you have a good use case for it (sorry for the pun).
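A contrived sketch to make the difference concrete (the pattern names here are invented for the demonstration):

```shell
#!/bin/bash
# With ;& the next branch's commands run unconditionally, exactly like
# a missing break in a C switch.
level=warn
case $level in
    error) echo error ;&   # would fall through to warn
    warn)  echo warn  ;&   # matches, then falls through to info
    info)  echo info  ;;   # the fall-through stops at the next ;;
    debug) echo debug ;;
esac
# Prints: warn, then info
```

The related ;;& terminator instead re-tests the remaining patterns, running every branch whose pattern also matches.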

Wednesday 23 January 2019

Upgrading DVD burner firmware from Linux via a VirtualBox VM

I was given a TSSTcorp SH-224DB DVD burner, which has a SATA interface. The firmware revision shown in dmesg was SB00. I decided to update it to SB01, the last available. The problem was that the updater was a Win32 executable. How to run this from Linux?

The drive in question is connected to my Linux machine via a USB-to-SATA dongle. I have a Windows XP VM, so I decided to see if it could access the drive. The first time, I tried connecting the drive, which appeared as /dev/sr1, to the VM. The update executable said it couldn't find any suitable drive.

After some reading I tried another tack: I allowed XP to access the dongle via USB forwarding. This time the update executable found the drive and duly updated the firmware.

The first attempt didn't work because VirtualBox was presenting a virtual DVD drive to the VM, but what the program wanted was access to the drive's SATA interface, albeit via the USB layer.

This may work for other drives for which only a Windows executable is available.

Wednesday 16 January 2019

MPLABX 5.0 IDE error: (1180) directory "build/default/production" does not exist

Posting this so that people can find the solution.

I experienced the same problem as posted here, where compilation would fail with this error if the project is on a network drive, an NFS mount in my case. Running make in the directory from the CLI gave the same error. Strange to say, the fix of copying the project to a local directory worked.

I don't know what the root cause is but this is another thing you could try.