As what as you like

  • Fixing an error in Xcode Instruments’ Leaks profile

    As part of our general effort to raise the quality of Hammerspoon, I’ve been working with @latenitefilms to track down some memory leaks, which can be very easy if you use the Leaks profile in Xcode’s “Instruments” tool. I tried this various ways, but I kept running into this error:


    After asking on the Apple Developer Forums, we got an interesting response from an Apple employee suggesting that code signing might be involved. One change later to skip codesigning on Profile builds, and Leaks is working again!

    So there we go, if you see “An error occurred trying to capture Leaks data” and “Unable to acquire required task port”, one thing to check is your code signing setup. I don’t know what specifically was wrong, but it’s easy enough to just not sign local debug/profile builds most of the time anyway.
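    If you build from the command line, one way to skip signing for a one-off Profile build is to override the code signing build settings. This is a sketch, not what the post itself did (the post changed the build configuration in Xcode), and the scheme name here is illustrative:

```shell
# Disable code signing for a single build via build setting overrides
# ("Hammerspoon" is an illustrative scheme name)
xcodebuild -scheme Hammerspoon -configuration Profile \
  CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO CODE_SIGNING_ALLOWED=NO
```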

  • Abusing Gmail as a ghetto dashboard

    I’m sure many of us receive regular emails from the same source - by which I mean things like a daily status email from a backup system, or a weekly newsletter from a blogger/journalist we like, etc.

    These are a great way of getting notified or kept up to date, but every one of these you receive is also a piece of work you need to do, to keep your Inbox under control. Gmail has a lot of powerful filtering primitives, but as far as I am able to tell, none of them let you manage this kind of email without compromise.

    My ideal scenario would be that, for example, my daily backup status email would keep the most recent copy in my Inbox, and automatically archive older ones. Same for newsletters - if I didn’t read last week’s one, I’m realistically never going to, so once it’s more than a couple of weeks stale, just get it out of my Inbox.

    Thankfully, Google has an indirect way of making this sort of thing work - Google Apps Script. You can trigger small JavaScript scripts to run every so often, and operate on your data in various Google apps, including Gmail.

    So, I quickly wrote this script and it runs every few hours now:

    // Configuration data
    // Each config should have the following keys:
    //  * age_min: maps to 'older_than:' in gmail query terms
    //  * age_max: maps to 'newer_than:' in gmail query terms
    //  * query: freeform gmail query terms to match against
    // The age_min/age_max values don't need to exist, given the freeform query value,
    // but age_min forces you to think about how frequent the emails are, and age_max
    // forces you to not search for every single email that matches the query
    // TODO:
    //  * Add a per-config flag that skips the archiving if there's only one matching thread (so the most recent matching email always stays in Inbox)
    var configs = [
      { age_min:"14d", age_max:"90d", query:"subject:(Benedict's Newsletter)" },
      { age_min:"7d",  age_max:"30d", query:"subject:gnubert" },
      { age_min:"1d",  age_max:"7d",  query:"subject:(Nightly clone to Thunderbay4 Successfully)" },
      { age_min:"1d",  age_max:"7d",  query:"from:Amazon subject:(Arriving today)" },
    ];

    function processInbox() {
      for (var config_key in configs) {
        var config = configs[config_key];
        Logger.log("Processing query: " + config["query"]);
        var threads = GmailApp.search("in:inbox " + config["query"] + " newer_than:" + config["age_max"] + " older_than:" + config["age_min"]);
        for (var thread_key in threads) {
          var thread = threads[thread_key];
          Logger.log("  Archiving: " + thread.getFirstMessageSubject());
          thread.moveToArchive();
        }
      }
    }

    (apologies for the very basic JavaScript - it’s not a language I have any real desire to be good at. Don’t @ me).
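    For what it’s worth, the search string handed to GmailApp.search() is easy to reason about in isolation. Here’s a minimal sketch of that construction as plain JavaScript (buildQuery is a hypothetical helper for illustration, not part of the script above):

```javascript
// Build the Gmail search string for one config entry
// (hypothetical helper illustrating the query construction above)
function buildQuery(config) {
  return "in:inbox " + config.query +
         " newer_than:" + config.age_max +
         " older_than:" + config.age_min;
}

var example = { age_min: "14d", age_max: "90d", query: "subject:(Benedict's Newsletter)" };
console.log(buildQuery(example));
// → in:inbox subject:(Benedict's Newsletter) newer_than:90d older_than:14d
```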

  • AmigaOS 4.1 Final Edition in Qemu

    So this is a fun one: some marvellous hackers, including Zoltan Balaton and Sebastien Mauer, have been working on Qemu to add support for the Sam460ex motherboard, a PowerPC system from 2010. Of particular interest to me is that this board received an official port of AmigaOS 4, the spiritual successor to AmigaOS, one of my very favourite operating systems.

    I’ll probably write more about this later, but for now, here is a simple screenshot of the install CD having just booted.

    Update: Zoltan has published a page with information about how to get it working, see here


  • Home networking like a pro - Part 1.1 - Network Storage Redux

    Back in this post I described having switched from a Mac Mini + DAS setup, to a Synology and an Intel NUC setup, for my file storage and server needs.

    For a time it was good, but I found myself wanting to run more server daemons, and the NUC wasn’t really able to keep up. The Synology was plodding along fine, but I made the decision to unify them all into a more beefy Linux machine.

    So, I bought an AMD Ryzen 5 1600 CPU and an A320M motherboard, 16GB of RAM and a micro ATX case with 8 drive bays, and set to work. That quickly proved to be a disaster because Linux wasn’t stable on the AMD CPU - I hadn’t even thought to check, because why wouldn’t Linux be stable on an x86_64 CPU in 2018?! With that lesson learned, I swapped out the board/CPU for an Intel i7-8700 and a Z370 motherboard.

    I didn’t go with FreeNAS as my previous post suggested I might, because ultimately I wanted complete control, so it’s a plain Ubuntu Server machine that is fully managed by Ansible playbooks. In retrospect it was a mistake to try and delegate server tasks to an appliance like the Synology, and it was a further mistake to try and deal with that by getting the NUC - I should have just cut my losses and gone straight to a Linux server. Lesson learned!

    Instead of getting lost in the weeds of purchase choices and justifications, let’s look at some of the things I’m doing to the server with Ansible.

    First up is root disk encryption - it’s nice to know that your data is private at rest, but a headless machine in a cupboard is not a fun place to be typing a password on boot. Fortunately I have two ways around this: firstly, a KVM (a Lantronix Spider), and secondly, you can add dropbear to the initramfs and ssh in to enter the password.

    Here’s the playbook tasks that put dropbear into the initramfs:

    - name: Install dropbear-initramfs
      apt:
        name: dropbear-initramfs
        state: present

    - name: Install busybox-static
      apt:
        name: busybox-static
        state: present

    # This is necessary because of
    - name: Add initramfs hook to fix cryptroot-unlock
      copy:
        dest: /etc/initramfs-tools/hooks/zz-busybox-initramfs-fix
        src: dropbear-initramfs/zz-busybox-initramfs-fix
        mode: 0744
        owner: root
        group: root
      notify: update initramfs

    - name: Configure dropbear-initramfs
      lineinfile:
        path: /etc/dropbear-initramfs/config
        regexp: 'DROPBEAR_OPTIONS'
        line: 'DROPBEAR_OPTIONS="-p 31337 -s -j -k -I 60"'
      notify: update initramfs

    - name: Add dropbear authorized_keys
      copy:
        dest: /etc/dropbear-initramfs/authorized_keys
        src: dropbear-initramfs/dropbear-authorized_keys
        mode: 0600
        owner: root
        group: root
      notify: update initramfs

    # The format of the ip= kernel parameter is: <client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>
    # It comes from
    - name: Configure boot IP and consoleblanking
      lineinfile:
        path: /etc/default/grub
        line: 'GRUB_CMDLINE_LINUX_DEFAULT="ip= loglevel=7 consoleblank=0"'
      notify: update grub
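    The tasks above notify “update initramfs” and “update grub” handlers, which the post doesn’t show; they would plausibly look something like this (an assumption on my part, sketched with Ansible’s command module):

```yaml
# Hypothetical handlers matching the notify: lines above (not from the post)
handlers:
  - name: update initramfs
    command: update-initramfs -u

  - name: update grub
    command: update-grub
```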

    While this does rely on some external files, the important one is zz-busybox-initramfs-fix, which works around a bug in the busybox build that Ubuntu is currently using. Rather than paste the whole script, you can see it here.

    The last task in the playbook configures Linux to boot with a particular networking config on a particular NIC, so you can ssh in. Once you’re in, just run cryptroot-unlock and your encrypted root is unlocked!
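    From another machine on the LAN, the unlock step looks something like this (a sketch - the port matches the DROPBEAR_OPTIONS above, and the hostname is the server’s):

```shell
# ssh into the dropbear running inside the initramfs, then unlock the root fs
ssh -p 31337 root@gnubert.local cryptroot-unlock
```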

    Another interesting thing I’m doing is using Borg for some backups. It’s a pretty clever backup system, and it works over SSH, so I use the following Ansible task to allow a particular SSH key to log in to the server as root, in a way that forces it to use Borg:

    - name: Deploy ssh borg access
      authorized_key:
        user: root
        state: present
        key_options: 'command="/usr/bin/borg serve --restrict-to-path /srv/tank/backups/borg",restrict'
        key: "ssh-rsa BLAHBLAH cmsj@foo"

    Now on client machines I can run:

    borg create --exclude-caches --compression=zlib -v -p -s ssh://gnuborg:22/srv/tank/backups/borg/foo/backups.borg::cmsj-{utcnow} $HOME

    and because gnuborg is defined in ~/.ssh/config it will use all the right ssh options (username, hostname and the SSH key created for this purpose):

    Host gnuborg
      User root
      Hostname gnubert.local
      IdentityFile ~/.ssh/id_rsa_herborg
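    For completeness, a couple of other commands that work against the same repository (a sketch - borg list and borg prune are real subcommands, but the retention policy shown is just an example, not what I actually use):

```shell
# List the archives in the repository
borg list ssh://gnuborg:22/srv/tank/backups/borg/foo/backups.borg

# Prune old archives, keeping 7 daily and 4 weekly backups
borg prune --keep-daily=7 --keep-weekly=4 ssh://gnuborg:22/srv/tank/backups/borg/foo/backups.borg
```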

  • Homebridge server monitoring

    Homebridge is a great way to expose arbitrary devices to Apple’s HomeKit platform. It has helped bridge the Google Nest and Netgear Arlo devices I have in my home into my iOS devices, since neither of those manufacturers appears to be interested in becoming officially HomeKit compatible.

    London has been having a little bit of a heatwave recently and it got me thinking about the Linux server I have running in a closet under the stairs - it has pretty poor airflow available to it, and I didn’t know how hot its CPU was getting.

    So, by the power of JavaScript, Homebridge and Linux’s /sys filesystem, I was able to quickly whip up a plugin for Homebridge that will read an entry from Linux’s temperature monitoring interface, and present it to HomeKit. In theory I could use it for sending notifications, but in practice I’m doing that via Grafana - the purpose of getting the information in HomeKit is so I can ask Siri what the server’s temperature is.

    The configuration is very simple, allowing you to configure one temperature sensor per instance of the plugin (but you could define multiple instances in your Homebridge config.json):

    {
        "accessory": "LinuxTemperature",
        "name": "gnubert",
        "sensor_path": "/sys/bus/platform/devices/coretemp.0/hwmon/hwmon0/temp1_input",
        "divisor": 1000
    }

    (gnubert is the hostname of my server).

    Below is a screenshot showing the server’s CPU temperature mingling with all of the Nest and Arlo items :)

