The Ubuntu ZFS boot pool problem [Part I]

Sorry, as you can see from the title, this is another “not-ESP8266” article, so you can skip this if you’re only interested in our magical little wireless modules. However, if you’re running a system (laptop, server or workstation) loaded with Ubuntu and have the root/boot partitions mounted on a ZFS filesystem, you should at least stick around long enough to read through the problem description, below.


13th Oct 2022 – IMPORTANT NOTE:- Directly related to the “bpool problem”, there is a separate issue with Grub2 not supporting zstd compression for ZFS. So, if you are struggling with the bpool problem, do not, Do Not, DO NOT enable zstd compression as a “fix” on your bpool, otherwise your system will be rendered completely un-bootable (the standard quip is normally “Don’t try this at home, folks!”, but in this case it is most certainly “Don’t try this at work!”, otherwise you’ll be out of a job in short order).


If you landed here from a web search for “can’t upgrade Ubuntu – not enough space on bpool” (or something similar), then you’re in the right place. You can probably skip the introduction to the problem (you already know what you’re facing) and go directly to the solution.

Note that in these articles I am assuming that you are familiar with Linux and confident in your own abilities to administer your own system. Because we are dealing with filesystems, I am also going to assume that you have made and verified back-ups of your whole disk(s) before you follow any of the tips below. I am not responsible for your data. You are. The advice below is given in good faith after having been tested on several of my own machines (of different types), but I cannot be held responsible for any loss of data.

The bpool Problem

You see the familiar Software Updater pop-up, announcing that there are package updates available and prompting you to install them. You scan the list of updates and everything looks good, so you hit the “Install Now” button, but instead of a progress meter, you see a pop-up error window with the message:-

The upgrade needs a total of 127 M free space on disk '/boot'. Please free at least an additional 107 M of disk space on '/boot'.

The message continues with a couple of suggestions of how to increase available space in “/boot”, but basically at this point you are unable to install updates.

You’ve just run into the “bpool problem”.


Background

For the last few releases, Ubuntu has provided an easy method for installing root on a ZFS filesystem, giving you access to snapshots and the instantaneous rollback to previous versions which it enables (as well as the possibility of mirroring) without having to go through the long and tedious process of adding ZFS support after the actual install. It was a major move forwards and the “one-click” install process made it simple for anyone to benefit from ZFS technology (not just snapshots, rollbacks and mirrors, but raidz and ZFS send/receive back-ups, too). With all of those benefits, why wouldn’t you just use it by default?

Well, as a long-time ZFS user, I can tell you that it does come with a few drawbacks. It is, for instance, somewhat more difficult to judge the available space left on any physical device than it was in the pre-ZFS world (good ol’ “df -h” is very good at giving you an instant and pretty accurate impression of the current capacity of your disks). ZFS tends to muddy that simple view a little, until you get used to depending more on the output from “zpool list -v” and “zfs list -o space” instead of “df”.

Fragmentation is another issue. Die-hard Unix users will remember that “fragmentation” was always a problem for people who used that other operating system, not for Unixes (well, unless you started to run out of inodes, anyway). However, with ZFS you do need to keep an eye on that “FRAG” column in the output of “zpool list -v”; once it starts climbing to around the 50~60% level it’s definitely time to start planning for an upgrade and, once it’s past 70% it’s time to take some urgent action (usually, but not always, time to add more physical disk space — we’ll touch on de-fragmentation as a side-effect in the “Solutions” presented in part-II). Once it hits 80%, you’re in big trouble as, even with available free space, the system will be struggling to find usable space.
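That FRAG check is easy to script. The sample below is a sketch only; the “zpool list -v” output is invented for illustration (real output also includes per-vdev rows), but the awk one-liner works unchanged on a live pool if you pipe the real command into it:-

```shell
# Invented sample standing in for "zpool list -v" output; column 7 is FRAG.
sample_output='NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH
rpool   460G   310G   150G        -         -    63%    67%  1.00x  ONLINE
bpool  1.88G  1.52G   368M        -         -    71%    80%  1.00x  ONLINE'

# Strip the "%" from the FRAG column and flag any pool past the 60% mark.
warnings=$(echo "$sample_output" | awk 'NR > 1 {
    frag = $7; sub(/%/, "", frag)
    if (frag + 0 > 60)
        printf "WARNING: pool %s is %s%% fragmented - time to plan an upgrade\n", $1, frag
}')
echo "$warnings"
```

On a real system you would replace the echo of the sample with “zpool list -v | awk …” and maybe drop it into a cron job.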

These issues are a pain to come to terms with after all of these years of doing the odd “df” just to check that everything was okay, but they are still heavily outweighed by the advantages (did I mention copy-on-write or checksummed data yet?).

So, our simple ZFS installation method from Ubuntu is a real game changer and gives everyone the chance to ride on the ZFS bandwagon with minimal effort. Unfortunately, along with that simplicity and ease of installation comes another problem — sizing your filesystems. The installer is a one-size-fits-all deal; it looks at your disk size, makes a couple of fairly simplistic calculations to create a very simple partition table and then adds a spread of ZFS datasets on top of those physical partitions. The “bpool” (for “boot-pool”) is a single ZFS filesystem which occupies a dedicated partition just for the kernel/initramfs and EFI/grub data required to boot the system. The “rpool” (for “root-pool”) is the ZFS filesystem which contains datasets for /, /var and /usr (as well as their sub-directories) and the user home directories. It’s all very well laid out, so that a rollback to (for instance) a previous kernel version need not affect the user’s data. However, the installation script for ZFS currently has one major issue: it limits the size of the bpool physical partition to 2GB (it can be smaller, but not bigger), no matter what the size of your physical disk.

The major drawback that people are finding with this installation mode comes from one of the strengths of ZFS — the snapshot functionality. Put simply, if you make a snapshot of, say, your whole bpool, ZFS will never delete anything from that point in time. If you subsequently delete a kernel which was present when you took the snapshot, ZFS will keep the file itself hidden away in the snapshot and only remove the named link to the file from the normal /boot directory. If, at some later time, you decide that you no longer need that snapshot (because it is outdated and you know you’re never going to revert to that old kernel again), you can destroy the snapshot and only then will you recover that disk space. This is one of the reasons that space management on a ZFS filesystem is a bit of a minefield for new users and is exactly why the “bpool problem” exists.

Every time Ubuntu pushes out an update (even a non-kernel update), the installer will automatically create a snapshot of your filesystem, so that you can roll back to the previous version if things go horribly wrong. In the case of actual kernel updates, the snapshot can consist of tens, or even hundreds of megabytes of combined kernel and initramfs files. Given that we have an upper limit of 2GB on the size of /boot (the bpool filesystem), that space doesn’t go very far if you don’t actively manage the snapshots. Worse, the Software Updater will start to fail when there is no longer enough free space in /boot to fit another version of the kernel and, to be clear, even non-kernel updates will fail to be installed if there is a kernel update still in the queue (even if you de-select it manually). The error messages are clear and quite specific (and even give tips on how to ameliorate the issue). However, by the time Software Updater flags the problem, it can already be too late to easily fix it (and how many of us know which system-generated snapshots are safe to remove and which aren’t, anyway?).

Short-term band-aids

These suggestions are not a true solution; they are quick fixes to get you up and running again, with Software Updater working, in the shortest possible time.

Before making any changes which might permanently erase data from your filesystems, you should make a back-up of your whole disk (I’ll be repeating this at various points throughout these articles, because I really mean it — I cannot and will not be held responsible for any loss of data you might suffer while trying to follow the tips given here; I always assume that you have backed-up your data and that you have verified that those back-ups work. Batteries not included. May contain nuts.).

First, one quick fix which Software Updater lists in its output when this problem occurs — edit the /etc/initramfs-tools/initramfs.conf file and change the COMPRESS setting from “lz4” to “xz”. This won’t take effect until the next kernel update, but at that point the initramfs install will use the more efficient (compression, not speed) “xz” method. This can slice around 20MB off each of the initrd.img files in the /boot directory and so will save you about 40MB of space on each update (there are usually two versions, current and “.old”).
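If you prefer a one-liner to opening an editor, the change can be made with sed. The sketch below practises on a throw-away copy (with an assumed minimal file content) so it is safe to run anywhere; on the real system you would point the sed command at /etc/initramfs-tools/initramfs.conf as root instead:-

```shell
# Practise on a throw-away copy first; the real file is
# /etc/initramfs-tools/initramfs.conf (edit that one as root).
tmpconf=$(mktemp)
printf 'MODULES=most\nCOMPRESS=lz4\n' > "$tmpconf"   # minimal stand-in for the real file

# Switch the initramfs compressor from lz4 to xz (better compression, slower build).
sed -i 's/^COMPRESS=lz4$/COMPRESS=xz/' "$tmpconf"

result=$(grep '^COMPRESS=' "$tmpconf")
echo "$result"     # should now show COMPRESS=xz
rm -f "$tmpconf"
```

Remember that, as noted above, the change only takes effect the next time an initramfs is generated.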

The results of this second tip are much more difficult to predict in terms of exact space saved: playing snapshot whack-a-mole.

You can list your snapshots in bpool using:-

zfs list -t snap -r -o name,used,refer,creation bpool

This will get you a listing with entries that look something like this (but probably much longer):-

NAME              USED   REFER  CREATION
@autozsys_7lzjmg  197M   197M   Wed Dec 30 11:07 2020
@autozsys_9u20hc   17K   199M   Wed Jan 20  6:56 2021
@autozsys_6c4qgp    0B   199M   Thu Jan 21  6:12 2021

[Note that I have removed the pathname before the “@” symbol to fit the text without line wraps]

As you can see, the “used” and “refer” columns vary widely between the snapshots. The bottom entry (6c4qgp) has a “used” entry of zero bytes; surely that can’t be right, can it? Well, that’s the whack-a-mole function coming into play. In this particular case, there don’t appear to have been any changes to bpool between autozsys_9u20hc and autozsys_6c4qgp, so if we destroyed 6c4qgp, we wouldn’t get any free space released back to the filesystem. So where does the “refer” come from? That’s letting us know that even though there were no changes between the last two snapshots in our list, they both reference 199MB of data already being held in other, previous snapshots (or the filesystem itself). What this means for us is that while destroying 6c4qgp may not free up any space, it is very likely that destroying 7lzjmg or 9u20hc will cause 6c4qgp to inherit some of that referenced data, causing the “used” count for 6c4qgp to increase and the filesystem to gain back less free space than we expected (hence “whack-a-mole”; the used data count might just pop back up elsewhere).

Now, to be fair, because we’re dealing with /boot and the turnover is usually caused by the replacement of kernel/initramfs files, it is more than likely that destroying the oldest snapshot will simply delete the oldest of those kernel-related files, returning about 140MB of free space to the filesystem. Likely, but not certain, so don’t go destroying snapshots left, right and centre without checking their contents first.
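Rather than guessing, you can ask ZFS how much space a destroy would actually release: “zfs destroy -n” is a dry run and “-v” prints the “would reclaim” figure, without deleting anything. The sketch below uses the oldest snapshot name from the listing above (substitute your own) and is guarded so that it is a harmless no-op on a machine which doesn’t have that dataset:-

```shell
# Dry-run preview of a snapshot destroy: -n means "don't actually do it",
# -v prints how much space *would* be reclaimed.
SNAP="bpool/BOOT/ubuntu_g1cutr@autozsys_7lzjmg"   # oldest snapshot from the listing above

# ${SNAP%@*} strips the "@snapshot" part, leaving just the dataset name.
if zfs list -H "${SNAP%@*}" >/dev/null 2>&1; then
    output=$(zfs destroy -nv "$SNAP")
else
    output="dataset not present here; would have run: zfs destroy -nv $SNAP"
fi
echo "$output"
```

Only once you are happy with the reported figure (and the snapshot contents) would you re-run the command without the “-n”.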

LISTING OF /boot CAPACITY AND AVAILABLE SPACE
["df -h"]
Filesystem                 Size  Used Avail Use% Mounted
bpool/BOOT/ubuntu_g1cutr   145M  117M   29M  81% /boot

["zfs list"]
NAME                       USED  AVAIL     REFER  MOUNT
bpool/BOOT/ubuntu_g1cutr  1.52G  28.5M      116M  /boot

["zfs list -o space"]
NAME                      AVAIL   USED  USEDSNAP  USEDDS
bpool/BOOT/ubuntu_g1cutr  28.5M  1.52G     1.41G    116M

You can always check the contents of any given snapshot by noting where the ZFS pool is mounted and then navigating to the .zfs/snapshot directory at that point. So, for our bpool snapshots, we can see from the output of “df” or “mount” that bpool/BOOT/ubuntu_g1cutr is mounted at /boot. Listing /boot/.zfs/snapshot will give us a listing of directories which correspond to the snapshot names in the listing above. You can list each of those directories to see what files and directories are included in the snapshot. As /boot is actually quite small, you can easily do an “ls -i” on two of the snapshot directories and see which files have the same inodes and which are different (which gives a good indication of which files are shared between different snapshots and the current, live filesystem and which are unique to a given snapshot).
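The inode-comparison idea above can be sketched as a small loop. Because not everyone has a ZFS system to hand, this version simulates two snapshot directories with hard links in a scratch directory (the file names are invented); on a real system you would point snapA and snapB at two directories under /boot/.zfs/snapshot/ instead:-

```shell
# Simulated comparison of two snapshot directories by inode. On a real system
# these would be /boot/.zfs/snapshot/autozsys_XXXXXX/ directories instead.
workdir=$(mktemp -d)
mkdir "$workdir/snapA" "$workdir/snapB"

echo "kernel image"  > "$workdir/snapA/vmlinuz"        # present in both "snapshots"
ln "$workdir/snapA/vmlinuz" "$workdir/snapB/vmlinuz"   # hard link => same inode
echo "old initramfs" > "$workdir/snapA/initrd.img"     # unique to snapA

# "[ a -ef b ]" is true when both names resolve to the same device and inode,
# i.e. when the file is shared between snapshots rather than unique to one.
result=$(for f in "$workdir"/snapA/*; do
    name=$(basename "$f")
    if [ "$f" -ef "$workdir/snapB/$name" ]; then
        echo "shared: $name"
    else
        echo "unique to snapA: $name"
    fi
done)
echo "$result"
rm -rf "$workdir"
```

Files flagged as “unique” are the ones whose space would actually be recovered by destroying that particular snapshot.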

Snapshots are removed using the “zfs destroy” command, by the way, not by removing the snapshot directories.

Don’t forget that destroying any snapshot restricts your ability to recover to a known point in time, so I would urge you to err on the side of caution — if you’re not 100% certain of what you’re about to remove, don’t do it!

If you’re a user who likes to create their own snapshots (before a major upgrade, for instance) you might already be able to easily target some of your own snapshots as candidates for deletion (perhaps you already know that you’re not going to roll back to that ancient, previous release?).


The [partial and not very satisfactory] Solution

The stop-gap measures listed above are just that: short-term fixes which will give you some extra breathing space, but not a long-term remedy. To fix the bpool problem for good, we obviously need to add a substantial chunk of extra disk space.

One brutal, but simple way of getting back more bpool space (and a solution which is very topical with the release of 21.10 almost upon us) is to re-install Ubuntu after editing the ZFS section of the install script to bypass the problem. I’m not suggesting that this is the best or most versatile answer to the issue (in most cases it won’t be), but if you happen to be on the verge of upgrading, or perhaps have a machine where all of the application and user data is mounted from a fileserver rather than local disk, this may be an option (but, as always, I would recommend a full back-up of the system before taking such a drastic step).


Just in case you didn’t quite get that… ==WARNING== The following steps will delete -ALL- of the data on your disk. Do -NOT- proceed with the steps below if you are not prepared to have your disk(s) totally wiped of all existing data.


Booting from the Ubuntu install image will drop you into the Try-or-Install header page. Select “Try Ubuntu”, which will restart the desktop to the normal, live image. From there you can open a terminal session and become root using “sudo -s” (no password required).

Using whatever editor you’re most comfortable with, open /usr/local/share/ubiquity/zsys-setup.

On (or about) line number 267, you’ll find this code:-

[ ${bpool_size} -gt 2048 ] && bpool_size=2048

You just need to comment out that line completely for the simplest fix. Doing so will allow the bpool size calculation to grab a much larger chunk of your available disk space. Note that if your disk is 50GB or less, this isn’t going to help — you’ll still end up with roughly 2GB. Conversely, if your disk is 1TB or larger, you may end up with much more bpool space than you actually need, or want. In these cases, you might want to change line 267 to use some value other than 2GB; I found 10GB (ie:- replace “2048” with “10240”) to be satisfactory for my machines, but if you happen to be a kernel developer, you might want to bump that up even further.
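If you’d rather not fire up an editor in the live environment, the same change can be made with a sed one-liner. The sketch below practises on a scratch copy containing just that line, so you can see exactly what the substitution does; in the installer you would run the sed command against /usr/local/share/ubiquity/zsys-setup as root (and remember that the line number may drift between releases, so check it first):-

```shell
# Practise the edit on a scratch file holding a copy of the offending line;
# the real target is /usr/local/share/ubiquity/zsys-setup in the live session.
tmpscript=$(mktemp)
echo '[ ${bpool_size} -gt 2048 ] && bpool_size=2048' > "$tmpscript"

# Raise the cap from 2GB to 10GB rather than removing it entirely.
sed -i 's/2048/10240/g' "$tmpscript"

result=$(cat "$tmpscript")
echo "$result"
rm -f "$tmpscript"
```

Raising the cap (rather than commenting the line out) keeps the safety valve in place on very large disks, while still giving /boot room to breathe.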

Following the edit of the ubiquity file, you simply select “Install Ubuntu” from the desktop icon and proceed with the normal ZFS install.


Okay, that’s the gist of the bpool problem, along with a couple of suggestions for clawing back some disk space and the most drastic way of fixing the issue.

In the second part of this series, we’ll be looking at a couple of more reasonable methods of fixing existing installations without resorting to a full re-install (fair warning though — it will still involve booting from the install media, so is not entirely without risk).

And don’t forget …back up early and back up often!


An ESP32 project worth looking at…

…especially if you’re a farmer.

You might have noticed that it has been fairly quiet around here recently. That’s mainly because real life has been getting in the way quite a lot, but also because what little spare time I have has been taken up by a new project. I’m not ready to spill the beans on what that project is quite yet, but you can get a couple of fairly strong hints on what direction I’m going (!) by taking a look at the beautifully executed work of Matthias Hammer (and his daughter) on their autonomous equipment project. Note that Matthias isn’t limiting his work just to autonomous control of the tractor, but is also extending the project to automate the other agricultural equipment that he uses.

Matthias is using ESP32s in his project as replacements for the Arduinos used in the base AgOpenGPS project. He has a few videos up on YouTube giving a guided-tour through the equipment, as well as showing the tractor at work. If you’re at all interested in GPS/RTK and autonomous vehicles, his (short) videos, GitHub repository and the AgOpenGPS web site are all worth a visit (even if you’re not a farmer).


You could have used a 555!

Yup, the most frequent comment on Hackaday, “You could have used a 555 for that”. Well, I did. In fact, I used a pair of 555s …but probably not the sort you’re thinking of.

A pair of 555s

A few years back (see date-code photo, below), I found these batteries in a local equivalent of the “Dollar Store”. They only came in pairs, which made them more than twice as expensive as the normal, no-name AA alkaline cells, but with that branding I had to get at least one pack.

I used them (probably in one of my PIC projects back then) and then forgot about them, until I was scrambling to find some not-quite-so-dead batteries to check an ESP01S project a couple of weeks back and found them at the back of a drawer.

I popped the DVM probes on them (unloaded, of course) and was very surprised to see a healthy 1.5v reading.

Well okay, let’s fire this thing up and see what happens! What happened was that the ESP01S blue LED flashed, then …nothing. Turning one of them upside-down, the date code reminded me that it was very many moons ago that I last used them.

"555 - 2013-09"

They certainly didn’t owe me anything, but I wondered what the voltage had gone down to under load. Having had lots of experience with these el-cheapo batteries, I wasn’t at all surprised to see that the voltage across the cell swung well into negative territory when I held down the “On” switch. What still surprised me was that the voltage swung almost straight back to that “healthy” 1.5v reading once the load was removed. Usually these cells are hard-pressed to register a no-load voltage of 0.9 ~ 1.1v in their fully depleted states.

Maybe these were just catching their “second wind” …or maybe they do have a 555 hidden inside them after all!

-1.33v displayed on DVM when load applied
…with load
1.54v reading on DVM from single cell
No load

The ESP01S board wouldn’t stay powered with these batteries in the pack, so I had to hold down the “On” switch long enough to snap the photo.


No ESP01S modules were harmed in the production of this blog post, but it probably didn’t do my poor little 555s much good.

FreeBSD Notes: 13-Release and bectl

We’ve had several release candidates (at least one more than expected) of FreeBSD-13, of which I’ve tested the later ones as VMs on a 12.2-p6 host and found them to be lightweight and sprightly. So, when the final release was announced last week, I didn’t have any particular qualms about updating said 12.2 system to release 13.0. Having said that, FreeBSD makes it particularly easy to add belt-and-braces insurance to your updates nowadays, using “bectl”.

If you haven’t come across bectl before, I would heartily recommend that you check it out and read the manual page before you start any future upgrades. The “be” part of the name stands for “Boot Environment” and this extremely useful tool helps us to manage our ZFS filesystems to allow virtually risk-free changes to our root filesystem without needing to get deeply into the nitty-gritty of snapshots, rollbacks and boot menus. Yes, it only works through the magic of ZFS, but it’s just another reason why you definitely should be using this filesystem everywhere (even on your laptop).

“bectl” will enable us to quickly create a snapshot of our current root filesystem, add that snapshot to our “beastie” boot menu and, importantly, then allow us to create a completely new clone of the existing snapshot (which we can then upgrade or change in whatever way we want) and optionally make that updated filesystem available as the default boot for our machine. This means that we can install an upgrade (in my case, release 13.0) and boot it immediately, but still have the option of selecting our original root from the boot menu, if things don’t quite go to plan.

Before we start the actual upgrade, just a couple of words of warning…

  • As noted above, the system must be using root on ZFS.
  • It should be running the GENERIC kernel.
  • We should be starting from a recently updated version of the previous major release (ie:- 12.2-p6 if updating to 13.0).

How do we prepare our system in advance of the actual upgrade? It couldn’t be much easier. The “back-up” boot environment will be the root filesystem which we have now. We will create a new clone of this existing environment to work on during the 13.0 install (leaving the original untouched).

First, we check our current boot environment with:-

root#   bectl list -D

BE              Active Mountpoint Space Created
default         NR     /          10.2G 2020-02-16 18:02

The bectl list command shows the currently available boot environments and the “-D” argument shows the space used. The “Active” column shows an “N” for the boot environment which is in use right Now and an “R” for the boot environment which will be activated on the next Reboot. In our case, we only have one boot environment, so it is both active and selected to be used on the next reboot.

To create the cloned (work) boot environment, we do:-

root#   bectl create 13.0-RELEASE

root#   bectl activate 13.0-RELEASE

The “create” is obvious. The “activate” command tells the system that we want to use the new, “13.0-RELEASE” boot environment at the next reboot. When we use the “list” command again now, we should see something similar to this:-

root#   bectl list -D
BE              Active Mountpoint Space Created
13.0-RELEASE    R      -          369K  2021-04-22 08:32
default         N      /          10.2G 2020-02-16 18:02

…indicating that “default” is currently in use, but that “13.0-RELEASE” will be used on the next reboot. If everything looks okay (and there were no error messages from the bectl commands), we can go ahead and reboot our system. It should boot as normal and look exactly the same as before (we haven’t actually changed anything on the filesystem yet, so we’re still running 12.2-p6). However, if we were to do another bectl list -D we’d see that the “Active” column would now show “NR” for 13.0-RELEASE and just “-” for “default”.

Do not proceed with the upgrade steps below unless you see the “NR” active status for the “13.0-RELEASE” boot environment.

Having checked that we’re using the cloned root filesystem, we can now go ahead and use that other wonderfully useful utility, freebsd-update, to start the actual upgrade process:-

root#   freebsd-update -r 13.0-RELEASE upgrade

Note that “13.0-RELEASE” in this command is specifying which release we want freebsd-update to upgrade to, not where we want to install it. We simply follow the instructions which freebsd-update gives as it goes along — once the download and config-file merge steps have completed, it will prompt us to run “freebsd-update install”, reboot into the new kernel and then run “freebsd-update install” again (so we will need to reboot a couple of times to complete the full upgrade process).

If everything goes as planned, after the final reboot we should have a working FreeBSD 13.0 system.

My upgrade process went smoothly; all system utilities came back up and the VMs ran with no problems. Everything looked fine …until the first, automated, weekly reboot rolled around. The machine didn’t come back up. The console was complaining that there was a fatal ZFS error with the main user-data pool (zstore00 – not the root filesystem). I had to cycle power on the system to have it reboot and, when it came back up it did indeed show a checksum error on that pool. I cleared the error and initiated a scrub and once everything was stable and had been running for a couple of days, did a manual reboot. It failed again, in the same way.

The actual error shown was:-

uhub2: detached
uhub1: detached
umass0: detached
uhub3: detached
ubt0: detached
Solaris: WARNING: Pool zstore00 has encountered an uncorrectable I/O failure and has been suspended.

At that point the actual shutdown process hung solid and, as before, I needed to power cycle the system to have it come back up. I also tried updating the /boot/loader.conf file (as suggested in Graham Perrin’s very useful guide) and rebooted a couple of times manually, but to no avail.

Note that I had not run zfs upgrade on the pools, since this would have made it very difficult to roll back.

So, at this point I needed to get the system back up and running and decided to do that roll-back to 12.2. Here’s how you can do that…

Power cycle the machine (if you’re stuck with a hung system, as above) and once the beastie menu appears, wait a second for the additional options to appear at the bottom (it takes the boot process a moment or two to collect the boot environment listings) and then select the “Select Boot [E]nvironment…” option. You’ll get another, short menu where you can use #2 to cycle through the boot environment names until you see “default” displayed. At this point you can simply hit Enter to have the system select and boot into this saved environment.

Your system will now boot into the original 12.2-p6 root filesystem and work as before. The only minor issue that you have is that booting from the beastie menu only sets the boot environment for this one-time boot, so you must use bectl activate default to have your 12.2-p6 environment set as the permanent boot environment (if you do a bectl list -D at this point, “default” should now have “NR” showing in the Active column).

Looking a little closer at the listing you’ve just done, you’ll also notice that because we’ve actually got a complete 13.0 install on the alternate “13.0-RELEASE”, it now shows a much greater space-used than before (in my case around 12GB).

FUZIX (Unix-like OS) ported to the ESP8266

Well here’s some really interesting news. I have a job set-up to update “interesting” GitHub projects to local disk on a daily basis, so I can just do a quick listing every morning to see what, if anything, has changed. This morning the FUZIX project flagged changes (FUZIX is a minimal, unix-like OS for very small, resource-limited micros, which started off being targeted at the Z80 series). When I checked the repository to see what the updates actually were, I found:-

drwxrwxr-x 10 gaijin gaijin 20 Feb 18 05:38 ..
-rw-rw-r-- 1 gaijin gaijin 11536 Feb 18 05:38 Makefile
-rw-rw-r-- 1 gaijin gaijin 273 Feb 18 05:38 .gitignore
-rw-rw-r-- 1 gaijin gaijin 35670 Feb 18 05:38 filesys.c
-rw-rw-r-- 1 gaijin gaijin 252 Feb 18 05:38 dep.mk
drwxrwxr-x 2 gaijin gaijin 6 Feb 18 05:38 cpu-armm0
drwxrwxr-x 2 gaijin gaijin 24 Feb 18 05:38 platform-esp8266

Which led, in turn, to David Given’s marathon videos (there are roughly 30 hours of keyboard-bashing and puzzling-out-loud available right now, with another five videos still to be released on a daily basis) detailing his work to port FUZIX to the ESP8266. David’s web site has some further explanation of how far along the port currently is, along with some pre-compiled, loadable binaries for the ESP8266 (I haven’t had time to try them yet). All of David’s work is available from his GitHub repository.

You will need to attach an SD card to get the full FUZIX experience, but David says it is reasonably speedy on the ESP8266, with a boot time of just 4 seconds.

FUZIX version 0.4pre1
 Copyright (c) 1988-2002 by H.F.Bower, D.Braun, S.Nitschke, H.Peraza
 Copyright (c) 1997-2001 by Arcady Schekochikhin, Adriano C. R. da Cunha
 Copyright (c) 2013-2015 Will Sowerbutts will@sowerbutts.com
 Copyright (c) 2014-2020 Alan Cox alan@etchedpixels.co.uk
 Devboot
 80kB total RAM, 64kB available to processes (15 processes max)
 Enabling interrupts … ok.
 Scanning flash: 2591kB: hda: 
 SD drive 0: hdb: hdb1 hdb2 
 Mounting root fs (root_dev=18, ro): warning: mounting dirty file system, forcing r/o.
 OK
 Starting /init
 init version 0.9.0ac#1
 fsck-fuzix: 
 
 login: root
 Welcome to FUZIX.

David notes in the README (contained in the binary tar-file) that the system does have some limitations; first and foremost is that it is almost unusable running from flash, so you really do need to wire up an SD card (pinouts for this are included in the same README). Here is his ToDo list:-

  • userland binaries can’t find their error messages.
  • CPU exceptions should be mapped to signals.
  • single-tasking mode should be switched off (which would allow pipes to work).
  • someone needs to overhaul the SD SPI code who understands it.
  • not all the ROM routines are hooked up to userland binaries.

Right, I’m off to look for an SD card and a spare ESP. See you later!

Great price on an Atom Z8350-based system (plus a giggle)

As regular readers know, I occasionally take a dive into the world of mini-pcs (especially low cost models) in my quest to get the best bang for the buck/watt. On my travels through the depths of Aliexpress this morning, I came across a very reasonably priced Z8350-based system. It’s not going to blow the doors off, but I have a soft spot for these boxes, simply because they do so much for what they are.

This one is a design which is new to me, sporting more USB ports than normal, as well as both VGA and HDMI video ports. You’ll need to be careful though, as the retailer seems to be a bit confused about what hardware they’re selling …in the product “overview” they state it has 2 x USB3 and 2 x USB2, but the images clearly show 1 x USB3 and 5 x USB2 ports. The header of their advert also rather vaguely claims “support 2.5 inch HDD”, but (despite the extra height of this case compared to other models) there are no obvious SATA headers on the motherboard pictures and in the overview they only mention support of “external” disks.

They’re also a bit on the boastful side about the service they provide (hence the “giggle” in the title of this post — see image, below).

The price for the 4GB/32GB model is a modest $84.45 (plus $3 shipping to my part of the world). Although these boxes are sold as “Windows 10” machines, I certainly wouldn’t advise anyone to try running Windows on the 32GB eMMC model, as the initial download of updates will overflow the available storage space.

As I’ve mentioned before though, these machines make excellent little low-cost, low-power-usage servers for anyone doing 24/7 self hosting (I’ve even got a similar box with more than 12TB of USB disks hanging off the back as a back-up server on my home network). This version also comes with 4GB of main memory, which is 2GB more than my original Z8350 based system (which has been running 24/7 since Dec 2016).

Please note that I don’t own this exact system, so I can’t verify its actual hardware configuration or performance. I haven’t ever used the supplier SZMZ before either, so I can’t vouch for their delivery times or packing. Another point to note is that although SZMZ has a very impressive 100% feedback rating, it is only based on two orders (at the time of writing). All the same, this looks like a nicely built mini-pc (with more ventilation holes than equivalent models, if nothing else).


‡  —  As to the strangely worded phrase in their advert, I’m working on the assumption that what they’re trying to convey is that different builds and versions of this model are available  …or maybe the copywriter just took this chance to broadcast their views on men in general.

Will it mirror?

Pic of (old) Laptop with SSD attached to lid

Here’s another silly one for you. What do you do if the latest release of your OS of choice ships with ZFS, but you don’t have space in your laptop for a second disk? …Answer:- Reach for the velcro.

This Sony Vaio has a Core 2 Duo P8600 (Centrino 2) processor, so it’s not going to break any speed records, but it works well enough for day-to-day use. Courtesy of the Buffalo 500GB SSD taped to the lid, it now sports 1TB of disk space (500GB mirrored), which is probably well in excess of what the designers originally envisaged.

Centrino.2 sticker next to USB ports

This is one of those little “because I can” projects which I don’t necessarily recommend to anyone else. At the same time, an SSD is so much lighter than a normal hard-disk (even a 2.5″ one) that this is now an eminently practical solution if your old laptop happens to be running out of space (I do move the laptop around the house to work in different rooms at different times, but it generally doesn’t travel much further afield than a deck-chair out on the veranda).

So, will it mirror? Heck yes!
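For the record, turning a single-disk root pool into a mirror with the velcro’d SSD is essentially a one-liner. Here’s a dry-run sketch only — it prints the commands rather than running them, and the pool name “rpool” and the device names are invented examples, so check “zpool status” and “lsblk” for your actual layout before doing anything for real:-

```shell
#!/bin/sh
# Dry-run: prints the commands instead of executing them.
# "rpool", /dev/sda4 and /dev/sdb1 are hypothetical examples.
POOL=rpool          # the existing root pool
EXISTING=/dev/sda4  # partition currently backing the pool
NEWDEV=/dev/sdb1    # matching partition on the USB SSD

# "zpool attach" converts a single-disk vdev into a two-way mirror
# and resilvers the existing data onto the new device.
echo "zpool attach $POOL $EXISTING $NEWDEV"
echo "zpool status $POOL    # watch the resilver progress"
```

The attach (rather than add) sub-command is the important part; “zpool add” would stripe the new device instead of mirroring it.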

Should I mirror?

No, probably not. It’s much more sensible to use a periodic ZFS send/receive job to back up your work to an existing server; that way you don’t need to worry about the extra drain on your battery and you still have your work if your laptop is stolen or knocked off the deck of your yacht, mid-ocean (what, you mean that’s never happened to you?).
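If you do go the send/receive route, the job itself is only a couple of commands. Here’s a minimal dry-run sketch (it just prints what it would do; the dataset name “rpool/USERDATA”, the server name “backupbox” and the target “tank/laptop” are all placeholders for your own setup):-

```shell
#!/bin/sh
# Dry-run sketch: prints the commands instead of executing them.
# All of the names below are placeholders -- substitute your own.
DATASET=rpool/USERDATA        # what you want backed up
SERVER=backupbox              # the existing server mentioned above
TARGET=tank/laptop            # where it lands on the server
TODAY=$(date +%Y%m%d)

echo "zfs snapshot -r ${DATASET}@backup-${TODAY}"
# First run: send the snapshot in full.
echo "zfs send -R ${DATASET}@backup-${TODAY} | ssh ${SERVER} zfs receive -Fdu ${TARGET}"
# Subsequent runs: send only the blocks changed since the last backup
# (substitute the previous snapshot's name for "backup-PREVIOUS").
echo "zfs send -R -i @backup-PREVIOUS ${DATASET}@backup-${TODAY} | ssh ${SERVER} zfs receive -Fdu ${TARGET}"
```

The “-u” on the receive side stops the backed-up datasets being mounted on the server, which saves a lot of head-scratching over mountpoint clashes.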

One other consideration when thinking about installing Ubuntu on ZFS — currently (as of 21.04) Ubuntu will not allow you to edit the size of the disk partitions; you must accept their optimized defaults. Unfortunately, those defaults include a /boot pool which is much too small (typically 2GB). It will work fine for a couple of months, but with every apt upgrade the system automatically adds a snapshot of the boot pool. When the upgrade includes a kernel update, this means that tens, or even hundreds, of megabytes of storage can be consumed. Even when you set the system defaults to compress the kernels using “xz”, it doesn’t take too many updates before you start getting “not enough free space” messages from apt and it will refuse to continue with the update. This is not something a novice user can easily recover from (hint: deleting files on a ZFS partition doesn’t always return that free space to the system — it all depends on whether it is still being held by a snapshot).
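If you want to see where the space on an existing boot pool has gone, a couple of read-only commands will tell you (this is a sketch assuming Ubuntu’s default pool name of “bpool”; the guard just keeps it harmless on a machine without the ZFS tools installed):-

```shell
#!/bin/sh
# Read-only checks only -- nothing here modifies the pool.
# "bpool" is Ubuntu's default name for the boot pool.
if command -v zfs >/dev/null 2>&1; then
    zpool list bpool || true                              # overall capacity and free space
    zfs list -t snapshot -o name,used -r bpool || true    # which snapshots are holding the space
else
    echo "zfs tools not installed"
fi
```

The “used” column against each snapshot is the space that would actually be freed by destroying that snapshot, which is exactly the number apt cares about.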

The OLEBY sensor light’s hidden secret

Older, yellow OLEBY on stairs

I’m working on a simple little project (ESP01S-based) right now which needs to be able to sense a warm body nearby, so naturally I turned to my stock of IKEA OLEBY sensor lights (having hacked several in the past and having been impressed with their all round, mmm …cheapness). For such a bargain price, it was almost worth buying them just for the battery holder, but an ancient grey-beard like me really needed a couple for their intended purpose …to light the way to the toilet at 02:00. So I bought a small stock of them (not enough, as it turned out), ripped out all of the white LEDs, replaced them with a single, yellow one and added a CdS sensor on pin 9 of the BISS0001 chip to ensure that they only switch on at night. A couple of them have been sitting (out of kicking range) on the stairs for several years now and are worth their weight (without batteries) in gold.

Anyway, when the need came up for the warm-body-sensor, I immediately thought of the depleted stock of OLEBYs sitting in the drawer, still in their original packaging. As I mentioned above, I wanted the sensor to interface with an ESP01S, so that I could MQTT the heck out of any warm bodies that came into range in the middle of the night (I’m totally screwed if the intruder happens to be a zombie of course). The reason the OLEBY and ESP01S are such a good fit is that the sensor will be working in the middle of a field …and the bodies in question (zombies or not) may not always be human shaped. The field in question is outside of mains-extension-cable range, but is still fairly close to our house; close enough for an ESP to be able to piggy-back off our WiFi network. The idea is that the OLEBY will trigger as usual, but instead of turning on a bunch of LEDs, it’ll turn on the ESP8266 instead. The ESP will boot, latch the power switch on (as the OLEBY will time out if not re-triggered) and then quietly send an alert message to our MQTT server, which we can then act upon depending upon how close to harvest time it is (lights, noises, hand grenades, dynamite or 200W rendition of Slade’s “Merry Christmas” …no, you’re right, that last one is probably banned by the Geneva convention).

The bare (apart from all of the flux) OLEBY PCB

So, poking about in the (fairly manky) guts of a dismembered OLEBY (don’t they have any de-fluxing solution in the middle kingdom?) trying to find where the trace from pin-2 went before it hits the LED switching transistor/FET, I discovered something interesting. The brand-new batteries I’d just slotted into the thing measured a pretty reasonable 4.83 volts …but the output from pin-2 measured 5.1v. Eh?!?

In all of the times I’d had the backs off these things, I’d never really looked closely at anything very much beyond the BISS0001 chip or the LEDs, but it seemed like there was something quite interesting going on here. There’s no mention of a charge-pump in the BISS0001 datasheet, so what was happening?

The answer appears to be in that clump of components across at the left-hand side of the board, away from the sensitive BISS0001, where an electrolytic capacitor sits on the reverse (LED) side of the PCB. Something needs a little bit of smoothing (first clue). Now that I get the magnifier out, I can see that a three-pinned device which I’d assumed was a transistor driver for the LEDs is actually labelled as “U1” (second clue). And there, hidden in plain view right next to U1, are a fairly chunky diode and another component labelled as “L1”. Well, who’d have guessed it …the humble (and don’t forget cheap) OLEBY has a fixed-voltage, boost regulator inside it. No wonder the things never seem to lose any sensing range, no matter how dead the batteries get.

U1 is something of an enigma. There are no particularly legible markings (“E502”?) on the chip itself and, until today, I would have been willing to bet that there was no such beast as a three-pin boost regulator chip. To begin with, I was working on the assumption that it was probably a transistor being driven by a clock signal from the BISS0001. However, a Gewgull search first turned up the ON Semi NCP1402, five-pin, micropower regulator, where one pin is marked as “NC” (no connection …hah, maybe the “NCP” part of the NCP1402 stands for Non-Connected-Pins!) and yet another, the chip-enable pin, can be permanently tied to the output pin, so we have at least a theoretical three-pin boost regulator after all. A little more searching through supplier product listings produced a couple of entries for SOT23-3 devices, like the TI TPS613222. So there is such a thing as a three-pin, SMD, boost regulator chip after all. Not only that, but the link to the TI datasheet above will open at an example circuit which seems to be a perfect match for the OLEBY layout (although the actual pin assignments for the TPS613222 don’t match U1).

I’ve just checked the on-line IKEA catalogue for the OLEBY sensor light and, here in Japan anyway, they still have them in stock (although the colours seem to be limited to black, white and red …and the price seems to have gone up, too), but it may still be worthwhile picking up a few the next time you’re in your local store buying some kitchen cabinets, coz’ now you know you’ll be getting a handy-dandy, micropower regulator for your battery-driven projects as part of the deal (oh, and lots of extra flux, too).

If the non-zombie detector ever gets to the decent working prototype stage, I’ll publish another article with the details and link to it from this page (but don’t hold your breath).

FreeBSD Diaries — Adding bootstrap code to ZFS root disks

When you add a new disk device to the “zroot” pool (or whatever it is that you’ve named the ZFS pool where your root partition resides) you should also add bootstrap code to that specific disk, so that the system can actually boot from it should the other disk(s) in the pool suffer a hardware failure.

Assuming that you’re using disks partitioned using “gpart” and have an EFI partition, your disk might look something like this (using “gpart show da3”, for example):-

=>          34  7814037101  da3  GPT  (3.6T)
            34           6       - free -  (3.0K)
            40     1024000    1  efi  (500M)
       1024040    12582912    2  freebsd-swap  (6.0G)
      13606952   209715200    3  freebsd-zfs  (100G)
     223322152  7590714976    4  freebsd-zfs  (3.5T)
    7814037128           7       - free -  (3.5K)

You also use “gpart” to write the bootstrap code to your new disk. In this example, the command would be:-

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da3
partcode written to da3p1
bootcode written to da3

Note that “gpart” confirms that it has written to the disk.

PLEASE MAKE SURE that you change the disk device specifier (“da3” above) to specify -your- correct target disk device. This command will quite happily destroy filesystems if you get it wrong.

FreeBSD Diaries — DHCPD

While trying to install the DHCP server daemon on FreeBSD 12.1, I got the error:-

Failed to start dhcpd.  Could not find dhcp-sync in services file.

This actually means what it says.  If you happen to have syncing of DHCP lease information enabled between multiple servers (master and back-ups in the same domain), then you need to add this line to the /etc/services file:-

dhcpd-sync   8067/udp      # dhcpd(8) synchronisation

As the dhcpd (server process) runs as user “_dhcp”, you should probably make sure that “_dhcp” has write permission on the /var/db/dhcpd.leases file, too.
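In concrete terms, that permission check might look something like this dry-run sketch (it prints the commands rather than running them; whether you need the touch depends on whether the leases file already exists):-

```shell
#!/bin/sh
# Dry-run: prints the commands instead of executing them.
LEASES=/var/db/dhcpd.leases
echo "touch ${LEASES}"          # create the file if it doesn't exist yet
echo "chown _dhcp ${LEASES}"    # let the daemon write lease updates
```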

 

Hey WordPress, just a short note here (seeing as it’s next to impossible to write anything any longer) to let you know how much this user -detests- the new editor. Hope you’re not betting the farm on it.