Archive

Hyper Key in macOS Sierra with Karabiner Elements

Over the last few years, various people have used Karabiner to remap Caps Lock to cmd+shift+opt+ctrl, which is such an unusual combination of modifier keys that it effectively makes Caps behave as a completely new modifier (which we have collectively called "Hyper", in reference to old UNIX workstation keyboards).

And for a time, it was good.

Then came macOS Sierra, which changed enough of the input layers of its kernel that Karabiner was unable to function. Thankfully, Karabiner's author, Fumihiko Takayama, began work on a complete rewrite of Karabiner, which is currently called Karabiner Elements.

Initially, Elements only supported very simple keyboard modifications - you could swap one key for another, but that was it. Various folk quickly got to work offering quick hacks to get a Hyper key working, and others tried to work around the missing support with other tools.

I'm very glad to say that it is now possible to do a proper Hyper remap with Karabiner Elements (and to be clear, none of this is my work; all credit goes to Fumihiko).

Here's how you can get it:

  • Download and install https://pqrs.org/latest/karabiner-elements-latest.dmg
  • Launch the Karabiner Elements app and go to the Misc tab to check which version you have. If it's less than 0.91.1, click either Check for updates or Check for beta updates until you are offered 0.91.1 or higher, then install that update and re-launch the Karabiner Elements app.
  • You probably want to remove the example entry in the Simple Modifications tab.
  • Edit ~/.config/karabiner/karabiner.json
  • Find the simple_modifications section, and right after it, paste in:
"complex_modifications": {
    "rules": [
        {
            "manipulators": [
                {
                    "description": "Change caps_lock to command+control+option+shift.",
                    "from": {
                        "key_code": "caps_lock",
                        "modifiers": {
                            "optional": [
                                "any"
                            ]
                        }
                    },
                    "to": [
                        {
                            "key_code": "left_shift",
                            "modifiers": [
                                "left_command",
                                "left_control",
                                "left_option"
                            ]
                        }
                    ],
                    "type": "basic"
                }
            ]
        }
    ]
},
  • As soon as you save the file, Elements will notice it has changed and reload its config. You should immediately have a working Hyper key 😁

If you're not confident in your ability to hand-merge JSON like this, and don't need anything from Elements other than the basic defaults plus Hyper, feel free to grab my config and drop it in ~/.config/karabiner/.
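If you'd rather script the merge than hand-edit the file, a little Python along these lines should do the trick. This is only a sketch: it assumes the karabiner.json layout where each entry in "profiles" carries its own simple_modifications section, so back the file up before letting anything rewrite it.

# Sketch: splice the "complex_modifications" rule above into
# ~/.config/karabiner/karabiner.json instead of hand-editing it.
# Assumes the layout where each profile has its own modifications; back up first.
import json
import os

HYPER_RULE = {
    "manipulators": [{
        "description": "Change caps_lock to command+control+option+shift.",
        "from": {"key_code": "caps_lock", "modifiers": {"optional": ["any"]}},
        "to": [{"key_code": "left_shift",
                "modifiers": ["left_command", "left_control", "left_option"]}],
        "type": "basic",
    }]
}

path = os.path.expanduser("~/.config/karabiner/karabiner.json")
with open(path) as f:
    config = json.load(f)

# Append the Hyper rule to every profile's complex_modifications rules
for profile in config.get("profiles", []):
    mods = profile.setdefault("complex_modifications", {})
    mods.setdefault("rules", []).append(HYPER_RULE)

with open(path, "w") as f:
    json.dump(config, f, indent=4)

As with hand-editing, Elements should notice the rewritten file and reload it straight away.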

Supplemental note for High Sierra

I've only tested this very briefly on High Sierra, but I had to disable SIP to get the Elements .kext to load. I'm not quite sure what's going on, but I reported it on GitHub. (Note that you can re-enable SIP after the kext has been loaded successfully once)

Update

Many people like to turn Caps into Hyper, but also have it behave as Escape if it is tapped on its own. As of Karabiner Elements 0.91.3 this appears to be possible by adding this to the manipulator:

"to_if_alone": [
    {
        "key_code": "escape",
        "modifiers": {
            "optional": [
                "any"
            ]
        }
    }
],

(thanks to Brett Terpstra for the sample of this)


New blog setup

Bye bye Blogger, hello GitHub Pages. This means no more comments, and probably a bunch of posts have terrible formatting at the moment, but at least my blog is now just static Markdown files I can edit nicely 😁

(Also the domain has changed; cmsj.net is the new tenshu.net :)


Changing GPG key

It's been 15 years and my GPG key is now looking hilariously out of date - a 1024-bit DSA key. Yuck. So, time to start over. Below is a statement about the change, with details of my new and old keys, signed by both keys. Since it will almost certainly not paste properly out of a browser, I have also uploaded it to GitHub: here.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hello world

My name is Chris Jones and I am changing my GPG key.
The original key fingerprint is: 6C99 9021 9B3A EC6D 4A28  7EE7 C574 7646 7313 2D75
The new key fingerprint is: 79D2 2E89 6591 210E 45F3  75D3 BCA2 36E2 E19F 727D

You should find this message signed by the new key, and that combined message signed by the old key, as proof that I am doing this. I have also updated my keybase.io account and secured a couple of signatures on the new key to start rebuilding my place in the web of trust.

Thank you for your time,

Chris
- -----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0

iQIcBAEBCAAGBQJXxffoAAoJECwUb1eIgw1qUMEP+wfaB84vYC4tHdo9OS9szaQv
cmNW0sIuDx/RQr4iYrcs+8QTQf2I3FPXUc6auhyF4J7Lntu67sTKI9zyfsDm0IW9
NbIzysTP1Y35lJPA12VM9O9IRaf5G7J57BKAmAuUbpnY7icIzA0MoD4SwCgcRXtA
QPZd8JVPDLaNpwb1O5rdQLBAdo+OUjF+bB8jZzzoORX0oVVdGhaGJFVhuq8aV1dY
0YB/nZr71ZApUVnvSZBfj1FHsgXZ5Fai70iI+oAox/Gj6BJ0IJhBIk5hLAzAiXoR
8EMmtqkWkKc8Jd3NonMzKRGF+qT+G3YuZIDmSptOZWjJ5volT25bYFknEBxPslwC
TiBGSs6rKz9RhfxGxCmM350zBIVFFCn+RNCWrgn/z4OJp4xSvZJ0IwqB/CwkrmUb
0ZhG44O2W5lpuSCDh1dhCsiryq4JeSiUy1GENyHl8eIkXjzDOjKTt6OT8wYFFPL7
XheyIfvMbRNh86o79Skch6Qoyh7nvVALAwsLVHKSDtQRzHbVF6ED9h2ISxdABiZ2
CkiJ95bf8JQeNVoqLJ78uwSYN96AyGPXfMQKG45SavgkzNLyqeoI1iMJE2yYuVIy
Z9XUaKhoDI9ERLps+Fw6NY+v2BVSKTvl5MDEDHmfvjK0m3x6C4tF3/QdgwRsJmvE
2XMWXLrBkcZx9tnYNFIG
=9PwA
- -----END PGP SIGNATURE-----
-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0

iQEcBAEBCAAGBQJXxff1AAoJEPH//8xn2jUBNOcIAL4fqY/DViwNj2/Va4ePEg1w
9uUWhGnMUcG4CsQGkhtODU6Qg45inrWnG0VE8jwhGilP4w5tQoIe+m53cUp5m/Rv
ueRBvfDxBw/SrF6eFZ1SGkXv6kcUkOYjueKsDtxObaX9dN7PrDUljtZWpGTzE77k
5EWPGfUT89oXa2eGwYnr6t7t9f76cO9eKFck7rIWT+p1tzmF6amm7IjoS8gsjfSb
lPk3PoC0G71wSseh7iesIgw+vZRZ7tYg59RdpwWLmZjQJVhMzW/QpX87CPAM2m0A
OcyivXLlbaSZ58AsHZSIA4ZjeoDnlWNsFHemBUSAOMa03b4JtgnGbTaHTZhiLc8=
=/lkZ
-----END PGP SIGNATURE-----

Raspberry Pi project: PiBus

The premise of this project is simple - my family lives in London, there's a bus route that runs along our road, and we use it a lot to take the kids places. The reality of using the bus is a little more complex - getting three kids ready to go out is a nightmare, and it's not made any easier by having to check a travel app on your phone to see how close the bus is.

I don't think there is much I can do to make the preparation of the children any less manic, but I can certainly do something about the visibility of information about buses, mainly thanks to the excellent open APIs/data provided by Transport for London.

So, armed with their API, a Python 3 interpreter and a Raspberry Pi, I set out to make a little box for the kitchen wall which will show when the next 3 buses are due to arrive outside our house.

The code itself is easy enough to throw together because Python has libraries for everything (it also helps if you don't bother to design a decent architecture!): Requests to fetch the bus data from TfL, json/iso8601 to parse the data, Pillow to render it as an image, and APScheduler to give it a simple run-loop.
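To give a flavour of the fetching side, here's a minimal sketch. It uses the endpoint and field names (timeToStation, lineName) from TfL's current Unified API, which may well differ from what the pibus code actually calls, and the stop ID and route number are placeholders you'd swap for your own.

# Minimal sketch of fetching arrivals from TfL. The endpoint and field names
# are from the current Unified API and may differ from what pibus uses;
# STOP_ID and BUS_ROUTE are placeholders.
import requests

STOP_ID = "490008660N"   # hypothetical StopPoint id for the stop outside the house
BUS_ROUTE = "141"        # hypothetical route number

def next_arrivals(limit=3):
    url = "https://api.tfl.gov.uk/StopPoint/{}/Arrivals".format(STOP_ID)
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    arrivals = [a for a in response.json() if a["lineName"] == BUS_ROUTE]
    arrivals.sort(key=lambda a: a["timeToStation"])
    # timeToStation is in seconds; the display wants whole minutes
    return [a["timeToStation"] // 60 for a in arrivals[:limit]]

if __name__ == "__main__":
    print(next_arrivals())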

The question then becomes how to display the data. The easiest answer would be a little LCD screen, but that brings with it the downside of having a backlight in the kitchen, which would be ugly and distracting, and it also raises the question of viewing angles. Another answer would be some kind of physical indicator, but that requires skills I don't have time for. Instead, I decided to look for an E-Ink display (think Kindle) - it would let me display simple images without producing light.

The first option I looked at was the PaPiRus, but it's in the window between its crowdfunding drive having finished and being available to buy. The only other option I could find was the E-Paper HAT from Percheron Electronics, which also started life as a crowdfunding project, but is actually available to buy.

Unfortunately, these displays are super fragile, which I discovered by destroying the first one, but Neil at Percheron was super helpful and I quickly had a new display and some tips about how to avoid cracking it.

My visualisation of this data isn't going to win any awards for beauty, but it serves its purpose by showing a big number to tell us how many minutes we have, and I managed to minimise the number of times you see the white-black-white refresh cycle of the eInk display by using partial screen updates. Here are some photos of the project in various stages of construction:
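For the curious, here's a rough idea of what the Pillow side can look like: draw the minutes-remaining number as a large glyph on a 1-bit image sized for the panel. The 264x176 resolution and the font size are assumptions for illustration, not values taken from the pibus code.

# Rough sketch of rendering the minutes-remaining number with Pillow as a
# 1-bit image for an e-paper panel. The 264x176 resolution and font size are
# assumptions, not values from the pibus code.
from PIL import Image, ImageDraw, ImageFont

WIDTH, HEIGHT = 264, 176
FONT = ImageFont.truetype("Courier.ttf", 120)  # any TTF dropped into the project directory

def render_minutes(minutes):
    image = Image.new("1", (WIDTH, HEIGHT), 1)   # 1-bit image, white background
    draw = ImageDraw.Draw(image)
    text = str(minutes)
    # Centre the number on the panel
    left, top, right, bottom = draw.textbbox((0, 0), text, font=FONT)
    w, h = right - left, bottom - top
    draw.text(((WIDTH - w) // 2, (HEIGHT - h) // 2 - top), text, font=FONT, fill=0)
    return image

The resulting image is what gets pushed to the e-paper driver, either as a full refresh or as a partial update of just the region that changed.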

Freshly assembled out of the box

The smallest USB WiFi adapter I've ever seen!

Sadly I had to make some modifications to the PiBow case to fit this particular rPi

Running one of the eInk display test programs

Initially I was rather hoping I could use the famous font that TfL (and London Transport before it) use, which is known as Johnston, but sadly they will not license the font outside their own use and use by contracted partners. There is a third party clone of the font, but it may have legal issues, presumably because TfL values their braaaaaand. Instead, I decided to just drop the idea of shipping a font with the code, and exported Courier.ttf from my laptop to the Pi directly.

This would have been nice, but I cannot have nice font things

I did briefly try Ubuntu Mono, which is a lovely font, but the zeros look like eyes and it freaked me out.

PIBUS IS WATCHING YOU

The display needs to handle various situations, most obviously when no data can be fetched from the API. Rather than get too bogged down in the details of whether our Internet connection is down, TfL's API servers are down, London is on fire, or it's just night time and there are no buses, I went for a simple message with a timestamp. Once this has been displayed, the code skips any further screen updates until it has valid data again. This makes it easy to see when a problem occurred.

Maybe aliens stole the Internet, maybe it's a bus strike. It doesn't matter.

I also render a small timestamp on valid data screens, showing when the last data fetch happened. This is mostly so I can be sure that the fetching code isn't stuck somehow. Once I trust the system a bit more, this can probably come out.
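Tying those pieces together, the run-loop might look roughly like this. It's a sketch only: next_arrivals() is the hypothetical fetch function from the earlier snippet, the two display helpers are print() stand-ins for the real e-paper drawing code, and the 30-second interval is an arbitrary choice.

# Rough sketch of the APScheduler run-loop and the "no data" handling described
# above. next_arrivals() is the hypothetical fetch from the earlier sketch; the
# display helpers here are print() stand-ins for the real e-paper drawing code.
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

have_data = True

def show_error(message):
    print("ERROR SCREEN:", message)          # stand-in for drawing the error screen

def show_arrivals(minutes):
    print("NEXT BUSES (minutes):", minutes)  # stand-in for the Pillow/e-paper path

def tick():
    global have_data
    try:
        minutes = next_arrivals()            # hypothetical fetch from the earlier sketch
    except Exception:
        if have_data:
            # Draw the error screen once, stamped with the time of failure,
            # then skip further refreshes until valid data returns
            show_error("No data since {:%H:%M}".format(datetime.now()))
            have_data = False
        return
    have_data = True
    show_arrivals(minutes)

scheduler = BlockingScheduler()
scheduler.add_job(tick, "interval", seconds=30)
scheduler.start()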

The final design, showing a fallback for when there is data for 0 < x < 3 buses

Data for three buses, plenty of time to get ready for the second one!

So there it is, project completed! Grab the code from https://github.com/cmsj/pibus, install the requirements on a Pi, give money to the awesome Percheron Electronics for the E-Paper HAT (and matching PiBow case), throw a font in the directory and edit the scripts for the bus stop and bus route that you care about!


Sending iMessages and SMS through Messages.app with AppleScript

I was searching around for ways to automate sending iMessages, so I could write a plugin for Hammerspoon. I found various scripts lurking around the place for sending iMessages, but I also found one that can send SMS if you have SMS Relay enabled (which means you need OS X 10.10 and an iPhone running iOS 8.1). I figured I'd collect them as a single post, to help future searchers, so without further ado, here are two stripped-down AppleScript snippets that let you control Messages.app to send either an iMessage or an SMS. Firstly, sending an iMessage:

tell application "Messages"
  send "This is an iMessage" to buddy "foo@bar.com" of (service 1 whose service type is iMessage)
end tell

The buddy address can be either an email or a phone number that's registered with Apple for use with iMessage. Secondly, sending an SMS:

tell application "Messages"
  send "This is an SMS" to buddy "+1234567890" of service "SMS"
end tell

Here, the buddy address should be a phone number. Simple! (and for the Hammerspoon users, you'll find hs.messages available in the next release, 0.9.23)
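If you'd rather drive these snippets from something other than AppleScript itself, one option is to shell out to osascript. The little Python wrapper below is just a convenience sketch around the iMessage snippet above - it isn't Hammerspoon's hs.messages API, and send_imessage is a name I've made up for illustration.

# Convenience sketch: run the iMessage AppleScript above via the stock
# osascript tool. send_imessage is a made-up helper, not part of any API.
import subprocess

def send_imessage(recipient, text):
    script = (
        'tell application "Messages" to send "{msg}" to buddy "{to}" '
        'of (service 1 whose service type is iMessage)'
    ).format(msg=text.replace('"', '\\"'), to=recipient)
    subprocess.run(["osascript", "-e", script], check=True)

send_imessage("foo@bar.com", "This is an iMessage")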


The curious Moto X pricing

Comparing the Moto X to the Nexus 4 is interesting in one particular respect - the price. The Nexus 4 (made by LG, sold by Google) had very respectable specs when it was launched, but its price was surprisingly low ($300 off contract). We were told this was because it was being sold very close to cost price. The Moto X (made by Motorola, which is owned by Google) has mid-range specs, but its price is surprisingly high ($200 up front *and* an expensive two-year contract). Overall, Motorola is probably getting something like $400-$600 for each Moto X sold, when you factor in the carrier subsidy. The inevitable question is why Google is happy to make almost no money off the Nexus 4, but wants its Motorola division to make a respectable margin on the Moto X.

  • Is it because doing otherwise would undermine the carriers' abilities to sell other phones, so they would refuse to do it?
  • Is it because Google wants the Motorola division to look good in their accounts, which is easier if you are selling mid-range phones for the kind of money that an iPhone sells for?
  • Something else?

Moving on from Terminator

Anyone who's been following Terminator knows this post has been a long time coming and should not be surprised by it.

As of a few days ago, I have handed over the reins of the project to the very capable Stephen J Boddy (a name that will be no stranger to followers of the Terminator changelogs - he has contributed a great deal over the last few years).

We're still working out the actual details of the handover, so for now the website is still here and I am still technically the owner of the Launchpad team that runs the project, but going forward all code/release decisions will come from Stephen and we'll move the various administrivia over to new ownership in due course.

Everyone please grab your bug trackers and your Python interpreters and go send Stephen patches and feature requests! :D


Some Keyboard Maestro macros

I've started using Keyboard Maestro recently and it is one impressive piece of software. Here are a couple of the macros I've written that aren't completely tied to my system:

Type current Safari URL - This will type the URL of the frontmost Safari tab/window into your current application. Handy if you're chatting with someone and want to paste them a hilarious YouTube URL without switching apps, copying the URL to the clipboard and switching back. It does not use the clipboard; it actually types the URL into the current application, so any modifier keys you hold will change what is being typed. I've configured the macro to fire when the specified hotkey is released, to minimise the chances of this happening.

Toggle Caffeine - Very simple, just toggles the state of Caffeine with a hotkey.


The (simplest) case against a new Mac Pro at WWDC

This is pretty simple really - unless Apple wants to launch a new Mac Pro and have it be out of date almost immediately, they need to wait until Intel has released Ivy Bridge Xeons, which won't be until next month at the earliest (and given the delays with Haswell, July seems unlikely). Also coming later this year on Intel's roadmap is the introduction of Thunderbolt 2.

Both of these things would seem like an excellent foundation for a new line of professional Macs.

Given the very short list of hardware model numbers that leaked ahead of today's WWDC keynote, my guess is that Apple is going to hold a pro-focused event in 2-4 months and refresh MacBook Pros, Mac Pros and hopefully the surrounding halo, like Thunderbolt displays (which are crying out for the new iMac-style case, the newer non-glossy screen, USB 3.0 and, soon, Thunderbolt 2) and the pro software Apple sells.

Having a pro-only event would also help calm the worries that Apple has stopped caring about high-value-low-volume professional users.


Thoughts on a modular Mac Pro

There have been some rumours recently that the next iteration of the Mac Pro is going to be modular, but we have had very little information about how this modularity might be expressed. In some ways the current Mac Pro is already quite modular - at least compared to every other Mac/MacBook. You have easy access to lots of RAM slots, multiple standards-compliant disk bays, PCI slots and CPU sockets. This affords the machine an extremely atypical level of upgradeability and expandability for a Mac (normal levels for a PC, though).

Even with that modularity in mind, the machine itself is fairly monolithic - if you do need more than 4 disk drives, or more PCI cards than it can take, you have limited or no expansion options. You could burn a PCI slot for a hardware disk controller and attach some disks to it externally, but you are quickly descending into an exploding mess of power supplies, cables and cooling fans.

If Apple decides to proceed along that route, the easiest and most obvious answer is that they slim down the main Pro itself and decree that all expansion shall take place over Thunderbolt (currently 10Gb/s bidirectional, but moving to 20Gb/s bidirectional later this year when the Thunderbolt 2 Falcon Ridge controllers launch). This is a reasonable option, but even though Thunderbolt is essentially an external PCI-Express bus, its available bandwidth is considerably lower than the peak levels found on an internal PCI-E bus (currently around 125Gb/s).

A much better option, it would seem to me, would be to go radically modular and expand the Mac itself, but how could that be possible? How can you just snap on some more PCI slots if you want those, or some more disks if that's what you need? I will say at this point that I have absolutely no concrete information and I am not an electronic engineer, so what you read below is poorly informed speculation and should be treated as that :)

I think the answer is Intel's QuickPath Interconnect (QPI), a high bandwidth (over 200Gb/s), low latency point-to-point communication bus for connecting the main components of an Intel based computer. If you have any Intel CPU since around 2009, you probably have a QPI bus being used in your computer. Looking at the latest iteration of their CPUs, QPI is always present - on the uniprocessor CPUs it is used on the chip package to connect the CPU core to the elements of the northbridge that have migrated into the CPU package (such as the PCI-Express controller); however, on these chips the QPI bus is not presented externally. On the multiprocessor-capable chips it is, and it is the normal way to interconnect the CPUs themselves, but it can be used for other point-to-point links, such as additional north bridges providing PCI-Express buses.

So you could buy a central module from Apple that contains 1, 2 or 4 CPUs (assuming Ivy Bridge Xeons) and all of the associated RAM slots, with maybe two minimal disk bays for the core OS to boot from, and a few USB 3.0 and Thunderbolt ports. For the very lightest of users, this would likely be a complete computer - you have some disk, some RAM and some CPUs, and assuming the Xeons carry integrated GPUs, the Thunderbolt ports can output video. It would not be much of a workstation, but it would essentially be a beefed-up Mac Mini.

I would then envision two kinds of modules that would stack on to the central module. The simplest kind would be something like a module with a disk controller chip and a load of disk bays; not needing the raw power of QPI, this would simply connect to the existing PCI-Express bus of the main module. There would clearly be a limit to how many of these modules you could connect, since there are a limited number of PCI-E lanes provided by any one controller (typically around 40 lanes on current chipsets), but with the second type of module you could then take the expansion up a considerable number of notches.

That second kind would have a large and dense connector carrying a QPI link. These modules could then attach whatever they wanted to the system - more CPUs (up to whatever maximum is supported by that generation of Xeon - likely 8 in Ivy Bridge), or very, very powerful IO modules.

My current working example of this is a module that is tasked with capturing multiple 4K video streams to disk simultaneously. This module would provide its own PCI-Express controller (linked back to the main module over QPI), internally connected to a number of video capture chips/cards and to one or more disk controller chips/cards, which would connect to a number of disk bays. It sounds a lot like what would happen inside a normal PC, just without the CPU/RAM, and that's because it's exactly that. This would allow for all of the video capture to happen within the module. It would be controlled as normal from the software running in the main module, which would be issuing the same instructions as if the capture hardware was on the main system PCI-E bus, causing the capture cards to use DMA to write their raw video directly to the disk controller exactly as if they were on the main system PCI-E bus. The difference would be that there would be no other hardware on the PCI-E bus, so you would be able to make reasonable promises around latency and bandwidth, knowing that no user is going to have a crazy extra set of cards in PCI slots competing for bandwidth - even if you have two of these modules capturing a really silly amount of video simultaneously.

It's a model for being able to do vast amounts of IO in parallel in a single computer. There would almost certainly need to be a fairly low limit on the number of QPI modules that could attach to the system, but being able to snap on even two or three modules would elevate the maximum capabilities of the Pro to levels far beyond almost any other desktop workstation.

As a prospective owner of the new Mac Pro, my two reasonable fears from this are:

  • They go for the Thunderbolt-only route and my desk looks like an awful, noisy mess
  • They go for the radical modularity and I can't afford even the core module

(While I'm throwing around random predictions, I might as well shoot for a name for the radical modularity model: I would stick with the Lightning/Thunderbolt IO names and call it Super Cell.)

Edit: I'd like to credit Thomas Hurst for helping to shape some of my thinking about QPI.