<h1>Mounting a separate partition for Scrypted NVR storage in Home Assistant OS</h1>
<p>Chris Jones, 2023-09-06</p>
<p>All of my smart home stuff is running in Home Assistant, and I'm using their OS on a Mini PC.</p>
<p>One of the things I'm running on that OS is the Scrypted Add-On to bridge my generic RTSP cameras into HomeKit, and to use Scrypted's NVR.</p>
<p>By default, Scrypted stores its NVR recordings in HassOS's <code>data</code> partition. I wanted to make sure that even if the NVR goes wild and fills the disk, the rest of my Home Assistant system wouldn't have to deal with running out of disk space, so I started looking for a way to configure HassOS and/or Scrypted to store the NVR recordings elsewhere. It turns out that isn't directly possible, but a very useful feature of HassOS makes it achievable anyway.</p>
<p>Specifically, that feature is the ability to import custom <code>udev</code> rules. With one of those I can ensure that a dedicated partition is mounted at the location the NVR recordings are written to, so even if Scrypted goes haywire and runs that partition out of space, the rest of the system will keep functioning normally.</p>
<p>The official documentation for how to import <code>udev</code> rules (amongst many other useful things) is <a href="https://github.com/home-assistant/operating-system/blob/dev/Documentation/configuration.md">here</a>, but the approximate set of steps is:</p>
<ul>
<li>Arrange for there to be a partition available on your Home Assistant OS machine, formatted as <code>ext4</code>, with the label <code>NVR</code>. I put a second disk in, so it's completely separate from the boot disk which Home Assistant OS may choose to modify later.</li>
<li>Format a USB stick as FAT32, with the volume label <code>CONFIG</code></li>
<li>Create a directory on that stick with the name <code>udev</code></li>
<li>In that <code>udev</code> folder create a plain text file called <code>80-mount-scrypted-nvr-volume.rules</code></li>
<li>Place the udev rules below in that file</li>
<li>Put the USB stick in your Home Assistant OS machine and, from a terminal, run <code>ha os import</code></li>
</ul>
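<p>The steps above can be sketched in shell. This is illustrative only: the <code>mkfs</code> commands are destructive, so they're shown commented out (adjust the device paths to your hardware first), and the demo writes the stick layout into a temporary directory rather than a real mounted stick.</p>

```shell
# Sketch of the disk/stick preparation above.
#   mkfs.ext4 -L NVR /dev/sdb1            # dedicated NVR partition (second disk)
#   mkfs.vfat -F 32 -n CONFIG /dev/sda1   # the USB stick

STICK="$(mktemp -d)"   # stand-in for wherever the CONFIG stick is mounted
mkdir -p "$STICK/udev"
# Write the rules file; the real contents are the udev rules in this post.
printf '# udev rules to mount the NVR partition\n' \
    > "$STICK/udev/80-mount-scrypted-nvr-volume.rules"
ls "$STICK/udev"
```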
<p>The contents of that rules file should be:</p>
<div class="highlight"><pre><span></span><code><span class="c1"># This will mount a partition with the label "NVR" to: /mnt/data/supervisor/addons/data/09e60fb6_scrypted/scrypted_nvr/</span>
<span class="c1"># Import partition info into environment variables</span>
<span class="n">IMPORT</span><span class="p">{</span><span class="n">program</span><span class="p">}</span><span class="o">=</span><span class="s2">"/usr/sbin/blkid -o udev -p %N"</span>
<span class="c1"># Exit if the partition is not a filesystem</span>
<span class="n">ENV</span><span class="p">{</span><span class="n">ID_FS_USAGE</span><span class="p">}</span><span class="o">!=</span><span class="s2">"filesystem"</span><span class="p">,</span><span class="w"> </span><span class="n">GOTO</span><span class="o">=</span><span class="s2">"abort_rule"</span>
<span class="c1"># Exit if the partition isn't for NVR data</span>
<span class="n">ENV</span><span class="p">{</span><span class="n">ID_FS_LABEL</span><span class="p">}</span><span class="o">!=</span><span class="s2">"NVR"</span><span class="p">,</span><span class="w"> </span><span class="n">GOTO</span><span class="o">=</span><span class="s2">"abort_rule"</span>
<span class="c1"># Store the mountpoint</span>
<span class="n">ENV</span><span class="p">{</span><span class="n">mount_point</span><span class="p">}</span><span class="o">=</span><span class="s2">"/mnt/data/supervisor/addons/data/09e60fb6_scrypted/scrypted_nvr/"</span>
<span class="c1"># Mount the device on 'add' action (e.g. it was just connected to USB)</span>
<span class="n">ACTION</span><span class="o">==</span><span class="s2">"add"</span><span class="p">,</span><span class="w"> </span><span class="n">RUN</span><span class="p">{</span><span class="n">program</span><span class="p">}</span><span class="o">+=</span><span class="s2">"/usr/bin/mkdir -p </span><span class="si">%E</span><span class="s2">{mount_point}"</span><span class="p">,</span><span class="w"> </span><span class="n">RUN</span><span class="p">{</span><span class="n">program</span><span class="p">}</span><span class="o">+=</span><span class="s2">"/usr/bin/systemd-mount --no-block --automount=no --collect $devnode </span><span class="si">%E</span><span class="s2">{mount_point}"</span>
<span class="c1"># Umount the device on 'remove' action (a.k.a unplug or eject the USB drive)</span>
<span class="n">ACTION</span><span class="o">==</span><span class="s2">"remove"</span><span class="p">,</span><span class="w"> </span><span class="n">ENV</span><span class="p">{</span><span class="n">dir_name</span><span class="p">}</span><span class="o">!=</span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="n">RUN</span><span class="p">{</span><span class="n">program</span><span class="p">}</span><span class="o">+=</span><span class="s2">"/usr/bin/systemd-umount </span><span class="si">%E</span><span class="s2">{mount_point}"</span>
<span class="c1"># Exit</span>
<span class="n">LABEL</span><span class="o">=</span><span class="s2">"abort_rule"</span>
</code></pre></div>
<p>It's likely a good idea to reboot the Home Assistant OS machine at this point.</p>
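<p>After the reboot, you can sanity-check the mount from the HAOS host shell. A rough sketch (only meaningful on the HAOS machine itself; anywhere else it will simply report not mounted):</p>

```shell
# Check whether a filesystem is mounted at the Scrypted NVR data path.
MOUNT_POINT="/mnt/data/supervisor/addons/data/09e60fb6_scrypted/scrypted_nvr"
if findmnt --mountpoint "$MOUNT_POINT" >/dev/null 2>&1; then
    MOUNTED=yes
else
    MOUNTED=no
fi
echo "NVR partition mounted: $MOUNTED"
```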
<p>Now you can go to Scrypted's web UI, into the Scrypted NVR plugin and configure it to use <code>/data/scrypted_nvr</code> as its NVR Recordings Directory. The Available Storage box there will show the correct free space for the new volume.</p>
<p>And that's it!</p>
<p>Notes:</p>
<ul>
<li>As far as I know, the mountpoint for the Scrypted add-on should be stable, but I can't promise this.</li>
<li>This should be very safe as it will ignore any partition that isn't labelled <code>NVR</code>.</li>
<li>This should work with removable disks (e.g. USB), however, the Scrypted addon will not be stopped if you unplug the disk, so I would strongly recommend not doing that without stopping Scrypted first.</li>
</ul>
<h1>Running Tailscale in Docker</h1>
<p>Chris Jones, 2023-08-19</p>
<p>I run most of my home services in Docker, and I decided it was time to migrate Tailscale from the host into Docker too.</p>
<p>This turned out to be an interesting journey, but I figured I'd talk about it here for anyone else hitting the same issues.</p>
<p>Here is my resulting Docker compose yaml:</p>
<div class="highlight"><pre><span></span><code><span class="w"> </span><span class="nt">tailscale</span><span class="p">:</span>
<span class="w"> </span><span class="nt">hostname</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">tailscale</span>
<span class="w"> </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">tailscale/tailscale:latest</span>
<span class="w"> </span><span class="nt">restart</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">unless-stopped</span>
<span class="w"> </span><span class="nt">network_mode</span><span class="p">:</span><span class="w"> </span><span class="s">"host"</span><span class="w"> </span><span class="c1"># Easy mode</span>
<span class="w"> </span><span class="nt">privileged</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">true</span><span class="w"> </span><span class="c1"># I'm only about 80% sure this is required</span>
<span class="w"> </span><span class="nt">volumes</span><span class="p">:</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/srv/ssdtank/docker/tailscale/data:/var/lib</span><span class="w"> </span><span class="c1"># tailscale/tailscale.state in here is where our authkey lives</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/dev/net/tun:/dev/net/tun</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket</span><span class="w"> </span><span class="c1"># This seems kinda terrible, but the daemon complains a lot if it can't connect to this</span>
<span class="w"> </span><span class="nt">cap_add</span><span class="p">:</span><span class="w"> </span><span class="c1"># Required</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">NET_ADMIN</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">NET_RAW</span>
<span class="w"> </span><span class="nt">environment</span><span class="p">:</span>
<span class="w"> </span><span class="nt">TS_HOSTNAME</span><span class="p">:</span><span class="w"> </span><span class="s">"lolserver"</span>
<span class="w"> </span><span class="nt">TS_STATE_DIR</span><span class="p">:</span><span class="w"> </span><span class="s">"/var/lib/tailscale"</span><span class="w"> </span><span class="c1"># This gives us a persistent entry in TS Machines, rather than Epehmeral</span>
<span class="w"> </span><span class="nt">TS_USERSPACE</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">false</span><span class="w"> </span><span class="c1"># Bizarrely, even if you bind /dev/net/tun in, you still need to tell the image to not use userspace networking</span>
<span class="w"> </span><span class="nt">TS_AUTH_ONCE</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">false</span><span class="w"> </span><span class="c1"># If you have a config error somewhere, and this is set to true, it'll be really hard to figure it out</span>
<span class="w"> </span><span class="nt">TS_ACCEPT_DNS</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">false</span><span class="w"> </span><span class="c1"># I don't want TS pushing any DNS to me.</span>
<span class="w"> </span><span class="nt">TS_ROUTES</span><span class="p">:</span><span class="w"> </span><span class="s">"10.0.88.0/24,10.0.91.0/24"</span><span class="w"> </span><span class="c1"># Docs say this is for accepting routes. Code says it's for advertising routes. Awesome.</span>
<span class="w"> </span><span class="nt">TS_EXTRA_ARGS</span><span class="p">:</span><span class="w"> </span><span class="s">"--advertise-exit-node"</span>
<span class="w"> </span><span class="nt">labels</span><span class="p">:</span>
<span class="w"> </span><span class="nt">com.centurylinklabs.watchtower.enable</span><span class="p">:</span><span class="w"> </span><span class="s">"true"</span>
</code></pre></div>
<p>Important things to note are:</p>
<ul>
<li><code>TS_STATE_DIR</code> is useful if you want a persistent node rather than an Ephemeral one (I'm not running this as part of some app deployment, this is LAN infrastructure)</li>
<li><code>TS_USERSPACE</code> shouldn't just always default to <code>true</code>, it should check if <code>/dev/net/tun</code> is available, but it doesn't, so you have to force it to <code>false</code> if you want kernel networking.</li>
<li><code>TS_AUTH_ONCE</code> is great, but if you have an error in the lower level networking setup, having this set to <code>true</code> will hide it on restarts of the container. I suggest keeping this <code>false</code>.</li>
<li><code>TS_ROUTES</code> is currently wrong in the documentation. It is described as being for <em>accepting</em> routes <em>from</em> other hosts, but it's actually for <em>advertising</em> routes <em>to</em> other hosts.</li>
</ul>
<h1>Lessons learned about using GitHub Actions to build macOS apps</h1>
<p>Chris Jones, 2021-04-12</p>
<h2>Introduction</h2>
<p><a href="https://www.hammerspoon.org/">Hammerspoon</a> now has <a href="https://github.com/Hammerspoon/hammerspoon/actions/workflows/ci_nightly.yml">per-commit development builds</a> generated automatically by a <a href="https://github.com/features/actions">GitHub Actions</a> <a href="https://github.com/Hammerspoon/hammerspoon/blob/master/.github/workflows/ci_nightly.yml">workflow</a>.</p>
<p>This was a surprisingly slow and painful process to set up, so here are some things I learned along the way.</p>
<h2>I prefer scripts to actions</h2>
<p>There are <em>tons</em> of third party GitHub Actions available in …</p><h2>Introduction</h2>
<p><a href="https://www.hammerspoon.org/">Hammerspoon</a> now has <a href="https://github.com/Hammerspoon/hammerspoon/actions/workflows/ci_nightly.yml">per-commit development builds</a> generated automatically by a <a href="https://github.com/features/actions">GitHub Actions</a> <a href="https://github.com/Hammerspoon/hammerspoon/blob/master/.github/workflows/ci_nightly.yml">workflow</a>.</p>
<p>This was a surprisingly slow and painful process to set up, so here are some things I learned along the way.</p>
<h2>I prefer scripts to actions</h2>
<p>There are <em>tons</em> of third party GitHub Actions available in their <a href="https://github.com/marketplace?type=actions">marketplace</a>. Almost every time I use one, I come to regret it and end up switching to just running a bash script.</p>
<h2>More useful checkouts</h2>
<p>If you want to do anything beyond interacting with the current commit (e.g. access tag history), you'll find the default shallow checkout fails. Add the <code>fetch-depth</code> argument to <code>actions/checkout</code>:</p>
<div class="highlight"><pre><span></span><code><span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Checkout foo</span>
<span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/checkout@v2</span>
<span class="w"> </span><span class="nt">with</span><span class="p">:</span>
<span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">fetch-depth:0</span>
</code></pre></div>
<h2>Checking out a private repo from a public one is weirdly hard</h2>
<p>Since these development builds are signed, they need access to a signing key. GitHub has a system for sharing secrets with a repo, but it's limited to 64KB values. For anything else, you need to encrypt the secrets in a repo and set a repository secret with the passphrase.</p>
<p>It seemed to me like it would be a good idea to keep the encrypted secrets in a private repository that the build process would check out, so the ciphertext is never exposed to the gaze of the Internet.</p>
<p>Unfortunately, GitHub's <a href="https://docs.github.com/en/developers/apps/scopes-for-oauth-apps">OAuth scopes</a> only allow you to grant full read/write permission to every repository a user can access; there's no way to grant read-only access.</p>
<p>So, I decided it was safer to just try and be extra-careful about how I encrypt my secrets, and keep them in a public repository.</p>
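<p>The encrypt/decrypt step itself is simple. Here's a hedged sketch using <code>openssl</code> symmetric encryption; the filenames and passphrase are placeholders, and in CI the passphrase would come from a repository secret rather than being hardcoded:</p>

```shell
# Encrypt a signing certificate so only its ciphertext lives in a public
# repo; the passphrase is the single small value kept as a GitHub secret.
PASSPHRASE="example-passphrase"              # placeholder; use a secret in CI
printf 'fake certificate data\n' > cert.p12  # placeholder input file
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:"$PASSPHRASE" -in cert.p12 -out cert.p12.enc
# At build time, decrypt it back before importing into the keychain:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:"$PASSPHRASE" -in cert.p12.enc -out cert-decrypted.p12
```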
<h2>Code signing a macOS app in CI needs a custom keychain</h2>
<p>The default login keychain requires a password to unlock, so if you put a signing certificate there, your CI builds will hang indefinitely waiting for a password to be entered into a UI dialog you can't see.</p>
<p>I took some ideas from the <a href="https://github.com/devbotsxyz/import-signing-certificate">devbotsxyz action</a> and a couple of blog posts, to come up with <a href="https://github.com/Hammerspoon/hammerspoon/blob/master/scripts/github-ci-nightly-keychain.sh">my own script</a> to create a keychain, unlock it, import the signing certificate, disable the keychain's lock timeout, and allow codesigning tools to use the keychain without a password.</p>
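<p>Reduced to a sketch, the core of that approach looks like this. All names and passwords here are placeholders, and the <code>security</code> tool only exists on macOS, so the sketch deliberately falls through to a no-op elsewhere:</p>

```shell
# Create and configure a dedicated CI keychain for codesigning (macOS only).
if command -v security >/dev/null 2>&1; then
    KEYCHAIN="ci-build.keychain"
    KEYCHAIN_PASS="example-keychain-password"
    security create-keychain -p "$KEYCHAIN_PASS" "$KEYCHAIN"
    security unlock-keychain -p "$KEYCHAIN_PASS" "$KEYCHAIN"
    # set-keychain-settings with no timeout arguments disables auto-lock
    security set-keychain-settings "$KEYCHAIN"
    security import cert.p12 -k "$KEYCHAIN" -P "example-cert-password" -T /usr/bin/codesign
    # Let codesigning tools use the key without a UI password prompt
    security set-key-partition-list -S apple-tool:,apple: -s -k "$KEYCHAIN_PASS" "$KEYCHAIN"
    KEYCHAIN_READY=yes
else
    echo "security(1) not found; this sketch only does anything on macOS"
    KEYCHAIN_READY=no
fi
```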
<h2>Xcode scrubs the inherited environment</h2>
<p>Update: This is not actually true. When I wrote this item, I had forgotten that our build system included a Makefile, and it's <em>make</em>, not Xcode, that was scrubbing the environment.</p>
<p>Normally, you can use environment variables like <code>$GITHUB_ACTIONS</code> to determine if you're running in a CI-style situation. I use this for our test framework to <a href="https://github.com/Hammerspoon/hammerspoon/blob/master/Hammerspoon%20Tests/HSTestCase.m#L94">detect CI</a> so certain tests can be skipped.</p>
<p>Unfortunately, it seems like <code>xcodebuild</code> scrubs the environment when running script build phases, so instead I created an empty file on disk that the build scripts could check for:</p>
<div class="highlight"><pre><span></span><code><span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Workaround xcodebuild scrubbing environment</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">touch ../is_github_actions</span>
</code></pre></div>
<p>This allows us to skip things like uploading debug symbols to Sentry.</p>
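<p>The consuming side is then a one-line existence check in the build scripts. A minimal sketch (using a temporary path here rather than the real <code>../is_github_actions</code>):</p>

```shell
# Detect CI by checking for the marker file the workflow created.
MARKER="$(mktemp -d)/is_github_actions"   # real path: ../is_github_actions
touch "$MARKER"                           # done by the workflow step above
CI_DETECTED=no
if [ -f "$MARKER" ]; then
    CI_DETECTED=yes
    echo "CI detected: skipping debug symbol upload"
fi
```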
<h2>You can't upload artifacts from strange paths</h2>
<p>The <code>actions/upload-artifact</code> action will refuse to upload any artifacts that have <code>../</code> or <code>./</code> in their path. I assume this is for security reasons, but it's trivially worked around: move or copy the files you want into the runner's <code>$PWD</code> and upload them from there:</p>
<div class="highlight"><pre><span></span><code><span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Prepare artifacts</span>
<span class="w"> </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">mv ../archive/ ./</span>
<span class="w"> </span><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Upload artifact</span>
<span class="w"> </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">actions/upload-artifact@v2</span>
<span class="w"> </span><span class="nt">with</span><span class="p">:</span>
<span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">foo</span>
<span class="w"> </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">archive/foo</span>
</code></pre></div>
<h2>It's pretty easy to verify your code signature, Gatekeeper acceptance, entitlements and notarization status</h2>
<p>For Hammerspoon these are part of a more complex <a href="https://github.com/Hammerspoon/hammerspoon/blob/master/scripts/libbuild.sh">release script library</a>, but in essence these are the commands that you can use to either check return codes, or outputs, for whether your app is as signed/notarized/entitled as you expect it to be:</p>
<div class="highlight"><pre><span></span><code><span class="c1"># Check valid code signature</span>
<span class="k">if</span><span class="w"> </span>!<span class="w"> </span>codesign<span class="w"> </span>--verify<span class="w"> </span>--verbose<span class="o">=</span><span class="m">4</span><span class="w"> </span><span class="s2">"/path/to/Foo.app"</span><span class="w"> </span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"FAILED: Code signature check"</span>
<span class="k">fi</span>
<span class="c1"># Check valid code signing entity</span>
<span class="nv">MY_KNOWN_GOOD_ENTITY</span><span class="o">=</span><span class="s2">"Authority=Developer ID Application: Jonny Appleseed (ABC123ABC)"</span>
<span class="nv">ACTUAL_SIGNER</span><span class="o">=</span><span class="k">$(</span>codesign<span class="w"> </span>--display<span class="w"> </span>--verbose<span class="o">=</span><span class="m">4</span><span class="w"> </span><span class="s2">"/path/to/Foo.app"</span><span class="w"> </span><span class="m">2</span>><span class="p">&</span><span class="m">1</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>grep<span class="w"> </span>^Authority<span class="w"> </span><span class="p">|</span><span class="w"> </span>head<span class="w"> </span>-1<span class="k">)</span>
<span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">ACTUAL_SIGNER</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>!<span class="o">=</span><span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">MY_KNOWN_GOOD_ENTITY</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span><span class="o">]</span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"FAILED: Code signing authority"</span>
<span class="k">fi</span>
<span class="c1"># Check Gatekeeper acceptance</span>
<span class="k">if</span><span class="w"> </span>!<span class="w"> </span>spctl<span class="w"> </span>--verbose<span class="o">=</span><span class="m">4</span><span class="w"> </span>--assess<span class="w"> </span>--type<span class="w"> </span>execute<span class="w"> </span><span class="s2">"/path/to/Foo.app"</span><span class="w"> </span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"FAILED: Gatekeeper acceptance"</span>
<span class="k">fi</span>
<span class="c1"># Check Entitlements match</span>
<span class="nv">EXPECTED</span><span class="o">=</span><span class="k">$(</span>cat<span class="w"> </span>/path/to/source/Foo.entitlements<span class="k">)</span>
<span class="nv">ACTUAL</span><span class="o">=</span><span class="k">$(</span>codesign<span class="w"> </span>--display<span class="w"> </span>--entitlements<span class="w"> </span>:-<span class="w"> </span><span class="s2">"/path/to/Foo.app"</span><span class="k">)</span>
<span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">ACTUAL</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>!<span class="o">=</span><span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">EXPECTED</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span><span class="o">]</span><span class="p">;</span><span class="w"> </span><span class="k">then</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="s2">"FAILED: Entitlements"</span>
<span class="k">fi</span>
</code></pre></div>
<p>I do these even on local release builds, to ensure nothing was missed before pushing out a release, but they also make sense to do in CI.</p>
<h2>That's it</h2>
<p>Not a ground-shaking set of things to learn, but combined they took several hours to figure out, so maybe this post saves someone else some time.</p>
<h1>EasyThreeD X1 Heated Bed Mod</h1>
<p>Chris Jones, 2021-01-12</p>
<p>(If you don't want to read this whole thing, skip to the end of the post for a tl;dr version)</p>
<p>I was lucky enough to get a <a href="https://amzn.to/3i8l8Sz">Labists X1</a> 3D printer for Christmas a few weeks ago, and it's the first 3D printer I've had or really even interacted with.</p>
<p>It's been a fascinating journey so far, learning about how to calibrate a printer, how to use slicers, and how to start making my own models.</p>
<p>Something that became obvious fairly quickly though, was that the printer would be more reliable with a heated bed. I've been able to get reliable prints via the use of rafts, but that adds time to prints and wastes filament, so I decided to see if I could mod the printer to have a heated bed.</p>
<p>I started Googling and quickly discovered that my printer is actually a rebadged <a href="https://www.easythreed.com/h-col-1223.html">EasyThreed X1</a> and that EasyThreed sell a <a href="https://www.aliexpress.com/i/4000911088465.html">hotbed accessory</a> for the X1, but it's an externally powered/controlled device. That's fine in theory, but I have quickly gotten very attached to being able to completely remotely control the printer via <a href="https://octoprint.org/">Octoprint</a>. So, the obvious next step was to try and mod the printer to be able to drive the heater directly.</p>
<p>Looking inside the controller box showed a pretty capable circuit board:</p>
<p><img alt="EasyThreed X1 controller board" src="http://cmsj.net/easythreed_x1_controller.jpg"></p>
<p>but it was instantly obvious that next to the power terminal for the extruder heater, was a terminal labelled <code>HOT-BED</code>:</p>
<p><img alt="Hot bed power terminal" src="http://cmsj.net/easythreed_x1_terminals.jpg"></p>
<p>Next on my journey of discovery was the communication info that Octoprint was sending/receiving, among which I saw:</p>
<p><code>Recv: echo:Marlin 1.1.0-RC3</code></p>
<p>which quickly led me to the <a href="https://github.com/MarlinFirmware/Marlin">Marlin</a> open source project which, crucially, is licensed as GPL. For those who don't know, GPL means that since Labists have given me a binary of Marlin in the printer, they have to give me the source code if I ask for it.</p>
<p>I reached out to Labists and they were happy to supply the source, then I also emailed EasyThreed to ask if I could have their source for the X1 as well (and while I was at it, their X3 printer, which looks a lot like the X1, but ships with a heated bed already as part of the product). They sent me the source with no real issues, so I grabbed the main Marlin repo, checked out the tag for <code>1.1.0-RC3</code> and started making branches for the various Labists/EasyThreed source trees I'd acquired. Since their changes were a bit gratuitous in places (random whitespace changes, DOS line endings, tabs, etc) I cleaned them up quite a bit to try and isolate the diffs to code/comment changes.</p>
<p>Since it's all GPL, I've republished their code with my cleanups:</p>
<ul>
<li><a href="https://github.com/cmsj/Marlin/tree/1.1.0-RC3-Labists-X1">Labists X1</a></li>
<li><a href="https://github.com/cmsj/Marlin/tree/1.1.0-RC3-EasyThreeD-X1">EasyThreed X1</a></li>
<li><a href="https://github.com/cmsj/Marlin/tree/1.1.0-RC3-EasyThreeD-X3">EasyThreed X3</a></li>
</ul>
<p>The specific diffs aren't particularly important (although the Labists firmware does have some curious changes, like disabling thermal runaway protection), but by reading up a bit on configuring Marlin, and comparing the differences between the X3 and the X1, it seemed like very little would need to change to enable the bed heater and its temperature sensor (a header for which is also conveniently present on the controller board).</p>
<p>At this point in the investigation I had:</p>
<ul>
<li>A controller board with:<ul>
<li>A power terminal for a bed heater</li>
<li>A header for a bed temperature sensor</li>
</ul></li>
<li>Source for the controller firmware</li>
<li>Source for an extremely similar printer that has a bed heater</li>
<li>An external bed heater with power and sensor cables</li>
</ul>
<p>Not a bad situation to be in!</p>
<p>Diving into the firmware, I found that Marlin keeps most board-specific settings in <code>Configuration.h</code> and specifically, it contains <code>#define TEMP_SENSOR_BED 0</code>. The number that <code>TEMP_SENSOR_BED</code> is defined as, indicates to Marlin what type of temp sensor is attached (with <code>0</code> obviously meaning nothing is attached). The X3 has a value of <code>1</code> (a 100k thermistor), but I found that I could only get reliable readings with it set to <code>4</code> (a 10k thermistor).</p>
<p>Believe it or not, that's actually the only thing that <em>has</em> to change, but I also lowered <code>#define BED_MAXTEMP 150</code>, because 150C seems kind of high. This define sets the temperature at which Marlin will shut itself down as a safety measure. As far as I can tell, 50C-70C is a more realistic range for PLA, and even with ABS it seems as though 110C is recommended. I haven't printed ABS yet and don't have any real plans to, so I reduced the safety limit to 100C.
I also modified the build version strings in <code>Default_Version.h</code> so I'd be able to quickly tell in Octoprint if I had successfully uploaded a new firmware.</p>
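<p>Pulled together, the <code>Configuration.h</code> changes described above amount to the following. These are the values I used; treat them as a sketch and verify your own thermistor type before copying anything:</p>

```c
/* Marlin Configuration.h changes for the X1 bed heater mod */
#define TEMP_SENSOR_BED 4   /* 10k thermistor; the X3 firmware uses 1 (100k) */
#define BED_MAXTEMP 100     /* lowered from 150 as a safety margin for PLA */
```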
<p>Next came the challenge of building the firmware. I grabbed the latest Arduino IDE, but it failed to compile Marlin correctly (perhaps because I was using the macOS version of Arduino IDE). Labists helpfully included a Windows build of Arduino IDE 1.0.5 with their firmware source, which was able to build it. Arduino IDE is also GPL, but I haven't republished that yet because I haven't audited the archive for other things that I don't have rights to distribute.</p>
<p>To get the firmware to upload correctly to the X1, I had to set the board type in Arduino IDE to <code>Melzi</code> and select the COM port for its USB interface, except its USB interface wasn't showing up and Windows' Device Manager couldn't find a driver for it. Some Googling for the USB VID/PID of that device led me to the manufacturer of the CH340 chipset and <a href="http://www.wch.cn/download/ch341ser_exe.html">their drivers</a>.</p>
<p>Finally the moment of truth - was I about to destroy a controller board with a bad firmware/driver? I clicked the Upload button, waited for it to complete, attached the controller to my Octoprint machine again and.......</p>
<p><code>Recv: echo:Marlin 1.1.0-RC3-cmsj</code></p>
<p>Success! I then waited for Octoprint to start communicating with the printer and monitoring temperatures...</p>
<p><code>Recv: ok T:24.2 /0.0 B:23.6 /0.0 T0:24.2 /0.0 @:0 B@:0</code></p>
<p>For those of you who aren't familiar with Octoprint/GCode, the <code>T:24.2</code> is the temp sensor in the extruder and the <code>B:23.6</code> is the reading from the bed sensor! Another success!</p>
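<p>If you want to script against those reports, pulling the bed temperature out of an M105 response line is straightforward. A quick sketch:</p>

```shell
# Extract the bed temperature from a Marlin M105 temperature report.
LINE='ok T:24.2 /0.0 B:23.6 /0.0 T0:24.2 /0.0 @:0 B@:0'
BED_TEMP=$(printf '%s\n' "$LINE" | grep -o 'B:[0-9.]*' | cut -d: -f2)
echo "Bed is at ${BED_TEMP}C"   # prints "Bed is at 23.6C"
```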
<p>After replacing the X1's 30W power supply with a 60W variant so it could power the motors <em>and</em> the heater, I asked it to heat up to 50C, and after a little while....</p>
<p><code>Recv: ok T:28.1 /0.0 B:49.9 /50.0 T0:28.1 /0.0 @:0 B@:127</code></p>
<p>Perfect!</p>
<p>And here is the first test print I did, to make sure everything else was still working:</p>
<p><img alt="Calibration cubes" src="http://cmsj.net/easythreed_x1_cubes.jpg"></p>
<p>The cubes on the left are from before the heated bed, where I was having to level the bed closer to the nozzle to get enough adhesion and the cube on the right is the first print with the heated bed. I think the results speak for themselves - much better detail retention. It's not visible, but the "elephant's foot" is gone too!</p>
<p>This has been a super rewarding journey, and I'm incredibly grateful to all the people in the 3D printing community upon whose shoulders I am standing. It's a rare and beautiful thing to find a varied community of products, projects and people, all working on the same goals and producing such high quality hardware and software along the way.</p>
<h1>And now the tl;dr version</h1>
<p>If you want to do this mod to your X1, here are some things you should know, and some things you will need:</p>
<ul>
<li>I am not responsible for your printer. This is a physical and firmware mod, please be careful and think about what you're doing.</li>
<li>Buy the official <a href="https://www.aliexpress.com/i/4000911088465.html">hotbed accessory</a>, open its control box and unplug the temperature sensor cable. If for some reason you use a different hotbed, it needs to be 12V, draw no more than 30W, and your temp sensor will need to be something that Marlin can understand via the <code>TEMP_SENSOR_BED</code> define.</li>
<li>Buy a 12V 5A barrel plug power supply (I used <a href="https://amzn.to/3oM2VN3">this one</a> but there are a million options). Use this from now on to power your X1.</li>
<li>Grab the modified Marlin source from my GitHub repo:</li>
<li>Either <a href="https://github.com/cmsj/Marlin/tree/1.1.0-RC3-EasyThreeD-X1-cmsj">EasyThreed X1</a> - see the precise changes from EasyThreed's firmware <a href="https://github.com/cmsj/Marlin/compare/1.1.0-RC3-EasyThreeD-X1...cmsj:1.1.0-RC3-EasyThreeD-X1-cmsj">here</a></li>
<li>Or <a href="https://github.com/cmsj/Marlin/tree/1.1.0-RC3-Labists-X1-cmsj">Labists X1</a> - this has <a href="https://github.com/cmsj/Marlin/compare/1.1.0-RC3-Labists-X1...cmsj:1.1.0-RC3-Labists-X1-cmsj">more changes</a> than the EasyThreed version, since I pulled back in some of Labists' changes, but left thermal runaway protection enabled.</li>
<li>Install the CH340 USB Serial drivers. There seem to be lots of places to get these from; I used <a href="http://www.wch.cn/download/ch341ser_exe.html">these</a>.</li>
<li>Install Arduino IDE 1.0.5 - still available from the bottom of <a href="https://www.arduino.cc/en/main/OldSoftwareReleases">this page</a></li>
<li>In Arduino IDE, open the <code>Marlin.ino</code> file from the <code>Marlin</code> directory and click the ✔ button on the toolbar; this will compile the source so you can check everything is installed correctly.</li>
<li>If you plan to print ABS, you might want to increase the <code>BED_MAXTEMP</code> define to something higher than <code>100</code>.</li>
<li>Remove the bed-levelling screws from your X1, swap the original bed for the heated one.</li>
<li>Open the controller box of your X1, plug the bed's thermal sensor into the controller board in the <code>TB1</code> header.</li>
<li>Wire the bed's power into the green <code>HOT-BED</code> terminal. For the best results you probably want to unsolder the original power cable from the bed and use something thinner and more flexible (but at the very least you need something longer).</li>
<li>Reassemble the controller box and run all the wires neatly. I recommend you manually move the bed around to make sure neither the power nor temp sensor wires snag on anything.</li>
<li>Connect the controller box's USB port to your PC, and in Arduino IDE click the ➡ button to compile and upload the firmware. Wait until it says <code>Upload complete</code>.</li>
<li>In theory, you're done! Check the temperature readings in some software that can talk to the printer (Octoprint, Pronterface, etc.), tell it to turn the bed heater on and make sure the temps rise to the level you asked for. I would definitely encourage you to do this while next to the printer, in case something goes dangerously wrong!</li>
</ul>Update: Failing to create an app2019-05-20T00:00:00+01:002019-05-20T17:53:11+01:00Chris Jonestag:cmsj.net,2019-05-20:/2019/05/20/app-creation-failed-update1.html<p><a href="/2019/04/24/app-creation-failed.html">Previously</a> I wrote about how I'd tried to create an app, but ultimately failed because I wasn't getting the results I wanted out of the macOS CoreAudio APIs.</p>
<p>Thanks to some excellent input from <a href="https://twitter.com/DavidLublin">David Lublin</a> I refactored the code to be able to switch easily between different backend audio APIs, and <a href="https://github.com/cmsj/HotMic/blob/master/HotMic/Audio%20Backends/THMBackEndAVFCapture.m">implemented a replacement</a> for CoreAudio using AVFoundation's AVCaptureSession and it seems to work!</p>
<p>I'd still like to settle back on CoreAudio at some point, but for now I can rest assured that whenever the older versions of SoundSource stop working, I still have a working option.</p>Overengineered email migration2019-05-20T00:00:00+01:002019-05-20T20:20:44+01:00Chris Jonestag:cmsj.net,2019-05-20:/2019/05/20/imapsync-docker.html<p>I recently had the need to migrate someone in my family off an old ISP email account, onto a more modern email account, without simply shutting down the old account. The old address has been given out to people/companies for at least a decade, so it's simply not practical to stop receiving its email.</p>
<p>Initially, I used the ISP's own server-side filtering to forward emails on to the new account and then delete them; however, all of the fantabulous anti-spam technologies in use these days conspired to make it unreliable.</p>
<p>So instead, I decided that since I can access IMAP on both accounts, and I have a server at home running all the time, I'd just use some kind of local tool to fetch any emails that show up on the old account and move them to the new one.</p>
<p>After some investigation, I settled on <a href="https://imapsync.lamiral.info/">imapsync</a> as the most capable tool for the job. It's ultimately "just" a Perl script, but it's fantastically well maintained by Gilles Lamiral. It's Open Source, but I'm a big fan of supporting FOSS development, so I happily paid the 60€ Gilles asks for.</p>
<p>My strong preference these days is always to run my local services in Docker, and fortunately Gilles publishes an <a href="https://hub.docker.com/r/gilleslamiral/imapsync/">official imapsync Dockule</a> so I set to work in Ansible to orchestrate all of the pieces I needed to get this running.</p>
<p>The first piece was a simple bash script that calls imapsync with all of the necessary command line options:</p>
<div class="highlight"><pre><span></span><code><span class="ch">#!/bin/bash</span>
<span class="c1"># This is /usr/local/bin/imapsync-user-isp-fancyplace.sh</span>
/usr/bin/docker<span class="w"> </span>run<span class="w"> </span>-u<span class="w"> </span>root<span class="w"> </span>--rm<span class="w"> </span>-v/root/.imap-pass-isp.txt:/isp-pass.txt<span class="w"> </span>-v/root/.imap-pass-fancyplace.txt:/fancyplace-pass.txt<span class="w"> </span>gilleslamiral/imapsync<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>imapsync<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--host1<span class="w"> </span>imap.isp.net<span class="w"> </span>--port1<span class="w"> </span><span class="m">993</span><span class="w"> </span>--user1<span class="w"> </span>olduser@isp.net<span class="w"> </span>--passfile1<span class="w"> </span>/isp-pass.txt<span class="w"> </span>--ssl1<span class="w"> </span>--sslargs1<span class="w"> </span><span class="nv">SSL_verify_mode</span><span class="o">=</span><span class="m">1</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--host2<span class="w"> </span>imap.fancyplace.com<span class="w"> </span>--port2<span class="w"> </span><span class="m">993</span><span class="w"> </span>--user2<span class="w"> </span>newuser@fancyplace.com<span class="w"> </span>--passfile2<span class="w"> </span>/fancyplace-pass.txt<span class="w"> </span>--ssl2<span class="w"> </span>--sslargs2<span class="w"> </span><span class="nv">SSL_verify_mode</span><span class="o">=</span><span class="m">1</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--automap<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--nofoldersizes<span class="w"> </span>--nofoldersizesatend<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--delete1<span class="w"> </span>--noexpungeaftereach<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--expunge1
</code></pre></div>
<p>Please test this with the <code>--dry</code> option if you ever want to do this - the <code>--automap</code> option worked incredibly well for me (even translating between languages for folders like "Sent Messages"), but check that for yourself.</p>
<p>What this script will do is start a Docker container and run imapsync within it, which will then check all folders on the old IMAP server and sync any found emails over to the new IMAP server <em>and then delete them from the old server</em>. This is unfortunately necessary because the old ISP in question has a pretty low storage limit and I don't want future email flow to stop because we forgot to go and delete old emails. imapsync appears to be pretty careful about making sure an email has synced correctly before it deletes it from the old server, so I'm not super worried about data loss.</p>
<p>The IMAP passwords are read from files that live in /root/ on my server (with <code>0400</code> permissions) and they are mounted through into the container. For the new IMAP account, this is a "per-device" password rather than the main account password, so it won't change, and is easy to revoke.</p>
<p>This isn't a complete setup yet though, because after doing one sync, imapsync will exit and Docker will obey its <code>--rm</code> option and delete the container. What we now need is a regular trigger to run this script and while this used to mean cron, nowadays it could also mean a <a href="https://www.freedesktop.org/software/systemd/man/systemd.timer.html">systemd timer</a>. So, I created a simple systemd service file which gets written to <code>/etc/systemd/system/imapsync-user-isp-fancyplace.service</code> and enabled in systemd:</p>
<div class="highlight"><pre><span></span><code><span class="k">[Unit]</span>
<span class="na">Description</span><span class="o">=</span><span class="s">User IMAP Sync</span>
<span class="na">After</span><span class="o">=</span><span class="s">docker.service</span>
<span class="na">Requires</span><span class="o">=</span><span class="s">docker.service</span>
<span class="k">[Service]</span>
<span class="na">Type</span><span class="o">=</span><span class="s">oneshot</span>
<span class="na">ExecStart</span><span class="o">=</span><span class="s">/usr/local/bin/imapsync-user-isp-fancyplace.sh</span>
<span class="na">Restart</span><span class="o">=</span><span class="s">no</span>
<span class="na">TimeoutSec</span><span class="o">=</span><span class="s">120</span>
</code></pre></div>
<p>and a systemd timer file which gets written to <code>/etc/systemd/system/imapsync-user-isp-fancyplace.timer</code>, and then both enabled and started in systemd:</p>
<div class="highlight"><pre><span></span><code><span class="k">[Unit]</span>
<span class="na">Description</span><span class="o">=</span><span class="s">Trigger User IMAP Sync</span>
<span class="k">[Timer]</span>
<span class="na">Unit</span><span class="o">=</span><span class="s">imapsync-user-isp-fancyplace.service</span>
<span class="na">OnUnitActiveSec</span><span class="o">=</span><span class="s">10min</span>
<span class="na">OnUnitInactiveSec</span><span class="o">=</span><span class="s">10min</span>
<span class="na">Persistent</span><span class="o">=</span><span class="s">true</span>
<span class="k">[Install]</span>
<span class="na">WantedBy</span><span class="o">=</span><span class="s">timers.target</span>
</code></pre></div>
<p>This will trigger every 10 minutes and start the specified service, which executes the script that starts the Dockule to sync the email. Simple!</p>
<p>And just to show a useful command, you can check when the timer last triggered, and when it will trigger next, like this:</p>
<div class="highlight"><pre><span></span><code><span class="c1"># systemctl list-timers</span>
<span class="n">NEXT</span><span class="w"> </span><span class="n">LEFT</span><span class="w"> </span><span class="n">LAST</span><span class="w"> </span><span class="n">PASSED</span><span class="w"> </span><span class="n">UNIT</span><span class="w"> </span><span class="n">ACTIVATES</span>
<span class="n">Mon</span><span class="w"> </span><span class="mi">2019</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">20</span><span class="w"> </span><span class="mi">17</span><span class="p">:</span><span class="mi">38</span><span class="p">:</span><span class="mi">13</span><span class="w"> </span><span class="n">BST</span><span class="w"> </span><span class="mi">27</span><span class="n">s</span><span class="w"> </span><span class="n">left</span><span class="w"> </span><span class="n">Mon</span><span class="w"> </span><span class="mi">2019</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">20</span><span class="w"> </span><span class="mi">17</span><span class="p">:</span><span class="mi">28</span><span class="p">:</span><span class="mi">13</span><span class="w"> </span><span class="n">BST</span><span class="w"> </span><span class="mi">9</span><span class="nb">min</span><span class="w"> </span><span class="n">ago</span><span class="w"> </span><span class="n">imapsync</span><span class="o">-</span><span class="n">user</span><span class="o">-</span><span class="n">isp</span><span class="o">-</span><span class="n">fancyplace</span><span class="o">.</span><span class="n">timer</span><span class="w"> </span><span class="n">imapsync</span><span class="o">-</span><span class="n">user</span><span class="o">-</span><span class="n">isp</span><span class="o">-</span><span class="n">fancyplace</span><span class="o">.</span><span class="n">service</span>
<span class="p">[</span><span class="n">snip</span><span class="w"> </span><span class="n">unrelated</span><span class="w"> </span><span class="n">timers</span><span class="p">]</span>
<span class="mi">9</span><span class="w"> </span><span class="n">timers</span><span class="w"> </span><span class="n">listed</span><span class="o">.</span>
<span class="n">Pass</span><span class="w"> </span><span class="o">--</span><span class="n">all</span><span class="w"> </span><span class="n">to</span><span class="w"> </span><span class="n">see</span><span class="w"> </span><span class="n">loaded</span><span class="w"> </span><span class="n">but</span><span class="w"> </span><span class="n">inactive</span><span class="w"> </span><span class="n">timers</span><span class="p">,</span><span class="w"> </span><span class="n">too</span><span class="o">.</span>
</code></pre></div>Failing to create an app2019-04-24T00:00:00+01:002019-04-24T23:01:32+01:00Chris Jonestag:cmsj.net,2019-04-24:/2019/04/24/app-creation-failed.html<p>I've just published <a href="https://github.com/cmsj/HotMic/">https://github.com/cmsj/HotMic/</a> which contains a very good amount of a macOS app I had hoped to complete and sell for a couple of bucks on the Mac App Store.</p>
<p>However, I failed, primarily because I don't know enough CoreAudio, and because I burned almost all of the time I had for writing the app fighting with things that, it turns out, were never going to work.</p>
<p>So, chalk that one up to experience I guess. Maybe the next person who has this idea will find my repo and spend their allotted time getting it to work :)</p>
<p>For the curious, the app's purpose was to be a Play Through mechanism for OS X. What is a Play Through app? It means the app reads audio from one device (e.g. a microphone or a Line In port) and writes it out to a different device (e.g. your normal speakers). This lets you use your Mac as a very limited audio mixer. I want it so the Line Out from my PC can be connected to my iMac - then all of my computer audio comes out of one set of speakers with one keyboard volume control setup.</p>
<p>For the super curious, I'd be happy to get back to working on the app if someone who knows more about Core Audio than I do wants to get involved!</p>Abusing Gmail as a ghetto dashboard2018-07-12T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2018-07-12:/2018/07/12/gmail-as-dashboard.html<p>I'm sure many of us receive regular emails from the same source - by which I mean things like a daily status email from a backup system, or a weekly newsletter from a blogger/journalist we like, etc.</p>
<p>These are a great way of getting notified or kept up to date, but every one of these you receive is also a piece of work you need to do, to keep your Inbox under control. Gmail has a lot of powerful filtering primitives, but as far as I am able to tell, none of them let you manage this kind of email without compromise.</p>
<p>My ideal scenario would be that, for example, my daily backup status email would keep the most recent copy in my Inbox, and automatically archive older ones. Same for newsletters - if I didn't read last week's one, I'm realistically never going to, so once it's more than a couple of weeks stale, just get it out of my Inbox.</p>
<p>Thankfully, Google has an indirect way of making this sort of thing work - <a href="https://developers.google.com/apps-script/">Google Apps Script</a>. You can trigger small JavaScript scripts to run every so often, and operate on your data in various Google apps, including Gmail.</p>
<p>So, I quickly wrote <a href="https://gist.github.com/cmsj/0d12c452277f32704f347c7fe117215a">this script</a> and it runs every few hours now:</p>
<div class="highlight"><pre><span></span><code><span class="c1">// Configuration data</span>
<span class="c1">// Each config should have the following keys:</span>
<span class="c1">// * age_min: maps to 'older_than:' in gmail query terms</span>
<span class="c1">// * age_max: maps to 'newer_than:' in gmail query terms</span>
<span class="c1">// * query: freeform gmail query terms to match against</span>
<span class="c1">//</span>
<span class="c1">// The age_min/age_max values don't need to exist, given the freeform query value,</span>
<span class="c1">// but age_min forces you to think about how frequent the emails are, and age_max</span>
<span class="c1">// forces you to not search for every single email tha matches the query</span>
<span class="c1">//</span>
<span class="c1">// TODO:</span>
<span class="c1">// * Add a per-config flag that skips the archiving if there's only one matching thread (so the most recent matching email always stays in Inbox)</span>
<span class="kd">var</span><span class="w"> </span><span class="nx">configs</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nx">age_min</span><span class="o">:</span><span class="s2">"14d"</span><span class="p">,</span><span class="w"> </span><span class="nx">age_max</span><span class="o">:</span><span class="s2">"90d"</span><span class="p">,</span><span class="w"> </span><span class="nx">query</span><span class="o">:</span><span class="s2">"subject:(Benedict's Newsletter)"</span><span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nx">age_min</span><span class="o">:</span><span class="s2">"7d"</span><span class="p">,</span><span class="w"> </span><span class="nx">age_max</span><span class="o">:</span><span class="s2">"30d"</span><span class="p">,</span><span class="w"> </span><span class="nx">query</span><span class="o">:</span><span class="s2">"from:hello@visualping.io subject:gnubert"</span><span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nx">age_min</span><span class="o">:</span><span class="s2">"1d"</span><span class="p">,</span><span class="w"> </span><span class="nx">age_max</span><span class="o">:</span><span class="s2">"7d"</span><span class="p">,</span><span class="w"> </span><span class="nx">query</span><span class="o">:</span><span class="s2">"subject:(Nightly clone to Thunderbay4 Successfully)"</span><span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nx">age_min</span><span class="o">:</span><span class="s2">"1d"</span><span class="p">,</span><span class="w"> </span><span class="nx">age_max</span><span class="o">:</span><span class="s2">"7d"</span><span class="p">,</span><span class="w"> </span><span class="nx">query</span><span class="o">:</span><span class="s2">"from:Amazon subject:(Arriving today)"</span><span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="p">];</span>
<span class="kd">function</span><span class="w"> </span><span class="nx">processInbox</span><span class="p">()</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="p">(</span><span class="kd">var</span><span class="w"> </span><span class="nx">config_key</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="nx">configs</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="kd">var</span><span class="w"> </span><span class="nx">config</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nx">configs</span><span class="p">[</span><span class="nx">config_key</span><span class="p">];</span>
<span class="w"> </span><span class="nx">Logger</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">"Processing query: "</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">config</span><span class="p">[</span><span class="s2">"query"</span><span class="p">]);</span>
<span class="w"> </span><span class="kd">var</span><span class="w"> </span><span class="nx">threads</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nx">GmailApp</span><span class="p">.</span><span class="nx">search</span><span class="p">(</span><span class="s2">"in:inbox "</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">config</span><span class="p">[</span><span class="s2">"query"</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="s2">" newer_than:"</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">config</span><span class="p">[</span><span class="s2">"age_max"</span><span class="p">]</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="s2">" older_than:"</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">config</span><span class="p">[</span><span class="s2">"age_min"</span><span class="p">]);</span>
<span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="p">(</span><span class="kd">var</span><span class="w"> </span><span class="nx">thread_key</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="nx">threads</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="kd">var</span><span class="w"> </span><span class="nx">thread</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nx">threads</span><span class="p">[</span><span class="nx">thread_key</span><span class="p">];</span>
<span class="w"> </span><span class="nx">Logger</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">" Archiving: "</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="nx">thread</span><span class="p">.</span><span class="nx">getFirstMessageSubject</span><span class="p">());</span>
<span class="w"> </span><span class="nx">thread</span><span class="p">.</span><span class="nx">markRead</span><span class="p">();</span>
<span class="w"> </span><span class="nx">thread</span><span class="p">.</span><span class="nx">moveToArchive</span><span class="p">();</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">}</span>
<span class="p">}</span>
</code></pre></div>
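<p>The TODO in the comments above — keeping the most recent matching email in the Inbox — only needs a tiny helper. A sketch (the function name is mine, and it assumes <code>GmailApp.search</code> returns threads newest-first, which matches Gmail's default ordering):</p>

```javascript
// Given the threads a query matched (newest first), return only the
// ones to archive. With keepNewest set, the most recent thread is
// skipped so it stays in the Inbox.
function threadsToArchive(threads, keepNewest) {
  if (keepNewest && threads.length > 0) {
    return threads.slice(1); // drop the newest thread from the archive list
  }
  return threads;
}

console.log(threadsToArchive(["newest", "older", "oldest"], true)); // → [ 'older', 'oldest' ]
```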
<p>(apologies for the very basic JavaScript - it's not a language I have any real desire to be good at. Don't @ me).</p>Fixing an error in Xcode Instruments's Leaks profile2018-07-12T00:00:00+01:002023-11-22T17:30:59+00:00Chris Jonestag:cmsj.net,2018-07-12:/2018/07/12/xcode-leaks-error.html<p>As part of our general effort to try and raise the quality of Hammerspoon, I've been working with <a href="https://twitter.com/latenitefilms">@latenitefilms</a> to track down some memory leaks, which can be very easy if you use the Leaks profile in Xcode's "Instruments" tool. I tried this various ways, but I kept running into this error:</p>
<p><img alt="Screenshot" src="http://cmsj.net/2018-07-12-xcode-leaks-error.png"></p>
<p>After asking on the <a href="https://forums.developer.apple.com/thread/104011">Apple Developer Forums</a> we got an interesting response from an Apple employee suggesting that code signing might be involved. One change later, to stop codesigning Profile builds, and Leaks was working again!</p>
<p>So there we go, if you see "An error occurred trying to capture Leaks data" and "Unable to acquire required task port", one thing to check is your code signing setup. I don't know what specifically was wrong, but it's easy enough to just not sign local debug/profile builds most of the time anyway.</p>AmigaOS 4.1 Final Edition in Qemu2018-07-05T00:00:00+01:002023-11-22T17:30:38+00:00Chris Jonestag:cmsj.net,2018-07-05:/2018/07/05/amigaos41-in-qemu.html<p>So this is a fun one, some marvellous hackers, including Zoltan Balaton and Sebastien Mauer, have been working on Qemu to add support for the <a href="https://en.wikipedia.org/wiki/Sam460ex">Sam460ex motherboard</a>, a PowerPC system from 2010. Of particular interest to me is that this was a board which received an official port of Amiga OS 4, the spiritual successor to AmigaOS, one of my very favourite operating systems.</p>
<p>I'll probably write more about this later, but for now, here is a simple screenshot of the install CD having just booted.</p>
<p><em>Update</em>: Zoltan has published a page with information about how to get it working, <a href="http://zero.eik.bme.hu/~balaton/qemu/amiga/">see here</a></p>
<p><img alt="Screenshot" src="http://cmsj.net/2018-07-05-amigaos-qemu.png"></p>Home networking like a pro - Part 1.1 - Network Storage Redux2018-07-04T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2018-07-04:/2018/07/04/home-pro-part1-update.html<p>Back in <a href="/2017/06/22/home-pro-part-1-nas.html">this post</a> I described having switched from a Mac Mini + DAS setup, to a Synology and an Intel NUC setup, for my file storage and server needs.</p>
<p>For a time it was good, but I found myself wanting to run more server daemons, and the NUC wasn't really …</p><p>Back in <a href="/2017/06/22/home-pro-part-1-nas.html">this post</a> I described having switched from a Mac Mini + DAS setup, to a Synology and an Intel NUC setup, for my file storage and server needs.</p>
<p>For a time it was good, but I found myself wanting to run more server daemons, and the NUC wasn't really able to keep up. The Synology was plodding along fine, but I made the decision to unify them all into a more beefy Linux machine.</p>
<p>So, I bought an AMD Ryzen 5 1600 CPU and an A320M motherboard, 16GB of RAM and a micro ATX case with 8 drive bays, and set to work. That quickly proved to be a disaster because Linux wasn't stable on the AMD CPU - I hadn't even thought to check, because why wouldn't Linux be stable on an x86_64 CPU in 2018?! With that lesson learned, I swapped out the board/CPU for an Intel i7-8700 and a Z370 motherboard.</p>
<p>I didn't go with FreeNAS as my previous post suggested I might, because ultimately I wanted complete control, so it's a plain Ubuntu Server machine that is fully managed by Ansible playbooks. In retrospect it was a mistake to try and delegate server tasks to an appliance like the Synology, and it was a further mistake to try and deal with that by getting the NUC - I should have just cut my losses and gone straight to a Linux server. Lesson learned!</p>
<p>Instead of getting lost in the weeds of purchase choices and justifications, let's look at some of the things I'm doing to the server with Ansible.</p>
<p>First up is root disk encryption - it's nice to know that your data is private at rest, but a headless machine in a cupboard is not a fun place to be typing a password on boot. Fortunately I have two ways around this: firstly, a KVM (a Lantronix Spider), and secondly, dropbear can be added to the initramfs so you can ssh in and enter the password.</p>
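<p>The only extra piece dropbear needs is an ssh public key to accept. On Ubuntu of this era the <code>dropbear-initramfs</code> package reads it from a file like the one below (the key and restrictions shown are illustrative - use your own, and remember to run <code>update-initramfs -u</code> after changing it):</p>

```
# /etc/dropbear-initramfs/authorized_keys (hypothetical example key)
no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3NzaC1lZDI1NTE5... admin@laptop
```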
<p>Here are the playbook tasks that put dropbear into the initramfs:</p>
<div class="highlight"><pre><span></span><code><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install dropbear-initramfs</span>
<span class="w"> </span><span class="nt">apt</span><span class="p">:</span>
<span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">dropbear-initramfs</span>
<span class="w"> </span><span class="nt">state</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">present</span>
<span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Install busybox-static</span>
<span class="w"> </span><span class="nt">apt</span><span class="p">:</span>
<span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">busybox-static</span>
<span class="w"> </span><span class="nt">state</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">present</span>
<span class="c1"># This is necessary because of https://bugs.launchpad.net/ubuntu/+source/busybox/+bug/1651818</span>
<span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Add initramfs hook to fix cryptroot-unlock</span>
<span class="w"> </span><span class="nt">copy</span><span class="p">:</span>
<span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/etc/initramfs-tools/hooks/zz-busybox-initramfs-fix</span>
<span class="w"> </span><span class="nt">src</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">dropbear-initramfs/zz-busybox-initramfs-fix</span>
<span class="w"> </span><span class="nt">mode</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">0744</span>
<span class="w"> </span><span class="nt">owner</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">root</span>
<span class="w"> </span><span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">root</span>
<span class="w"> </span><span class="nt">notify</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">update initramfs</span>
<span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Configure dropbear-initramfs</span>
<span class="w"> </span><span class="nt">lineinfile</span><span class="p">:</span>
<span class="w"> </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/etc/dropbear-initramfs/config</span>
<span class="w"> </span><span class="nt">regexp</span><span class="p">:</span><span class="w"> </span><span class="s">'DROPBEAR_OPTIONS'</span>
<span class="w"> </span><span class="nt">line</span><span class="p">:</span><span class="w"> </span><span class="s">'DROPBEAR_OPTIONS="-p</span><span class="nv"> </span><span class="s">31337</span><span class="nv"> </span><span class="s">-s</span><span class="nv"> </span><span class="s">-j</span><span class="nv"> </span><span class="s">-k</span><span class="nv"> </span><span class="s">-I</span><span class="nv"> </span><span class="s">60"'</span>
<span class="w"> </span><span class="nt">notify</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">update initramfs</span>
<span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Add dropbear authorized_keys</span>
<span class="w"> </span><span class="nt">copy</span><span class="p">:</span>
<span class="w"> </span><span class="nt">dest</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/etc/dropbear-initramfs/authorized_keys</span>
<span class="w"> </span><span class="nt">src</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">dropbear-initramfs/dropbear-authorized_keys</span>
<span class="w"> </span><span class="nt">mode</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">0600</span>
<span class="w"> </span><span class="nt">owner</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">root</span>
<span class="w"> </span><span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">root</span>
<span class="w"> </span><span class="nt">notify</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">update initramfs</span>
<span class="c1"># The format of the ip= kernel parameter is: <client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf></span>
<span class="c1"># It comes from https://git.kernel.org/pub/scm/libs/klibc/klibc.git/tree/usr/kinit/ipconfig/README.ipconfig?id=HEAD</span>
<span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Configure boot IP and consoleblanking</span>
<span class="w"> </span><span class="nt">lineinfile</span><span class="p">:</span>
<span class="w"> </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">/etc/default/grub</span>
<span class="w"> </span><span class="nt">regexp</span><span class="p">:</span><span class="w"> </span><span class="s">'GRUB_CMDLINE_LINUX_DEFAULT'</span>
<span class="w"> </span><span class="nt">line</span><span class="p">:</span><span class="w"> </span><span class="s">'GRUB_CMDLINE_LINUX_DEFAULT="ip=10.0.88.11::10.0.88.1:255.255.255.0:gnubert:enp0s31f6:none</span><span class="nv"> </span><span class="s">loglevel=7</span><span class="nv"> </span><span class="s">consoleblank=0"'</span>
<span class="w"> </span><span class="nt">notify</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">update grub</span>
</code></pre></div>
<p>While this does rely on some external files, the important one is <code>zz-busybox-initramfs-fix</code>, which works around <a href="https://bugs.launchpad.net/ubuntu/+source/busybox/+bug/1651818">a bug</a> in the busybox build that Ubuntu is currently using. Rather than paste the whole script into this post, I've published it <a href="https://gist.github.com/cmsj/515fbf602f983e796ea11f95ce32d537">as a Gist</a>.</p>
<p>The last task in the playbook configures Linux to boot with a particular networking config on a particular NIC, so you can ssh in. Once you're in, just run <code>cryptroot-unlock</code> and your encrypted root is unlocked!</p>
<p>Another interesting thing I'm doing, is using <a href="https://github.com/borgbackup">Borg</a> for some backups. It's a pretty clever backup system, and it works over SSH, so I use the following Ansible task to allow a particular SSH key to log in to the server as root, in a way that forces it to use Borg:</p>
<div class="highlight"><pre><span></span><code><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Deploy ssh borg access</span>
<span class="w"> </span><span class="nt">authorized_key</span><span class="p">:</span>
<span class="w"> </span><span class="nt">user</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">root</span>
<span class="w"> </span><span class="nt">state</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">present</span>
<span class="w"> </span><span class="nt">key_options</span><span class="p">:</span><span class="w"> </span><span class="s">'command="/usr/bin/borg</span><span class="nv"> </span><span class="s">serve</span><span class="nv"> </span><span class="s">--restrict-to-path</span><span class="nv"> </span><span class="s">/srv/tank/backups/borg",restrict'</span>
<span class="w"> </span><span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="s">"ssh-rsa</span><span class="nv"> </span><span class="s">BLAHBLAH</span><span class="nv"> </span><span class="s">cmsj@foo"</span>
</code></pre></div>
<p>Now on client machines I can run <code>borg create --exclude-caches --compression=zlib -v -p -s ssh://gnuborg:22/srv/tank/backups/borg/foo/backups.borg::cmsj-{utcnow} $HOME</code> and because <code>gnuborg</code> is defined in <code>~/.ssh/config</code> it will use all the right ssh options (username, hostname and the SSH key created for this purpose):</p>
<div class="highlight"><pre><span></span><code>Host gnuborg
User root
Hostname gnubert.local
IdentityFile ~/.ssh/id_rsa_herborg
</code></pre></div>Homebridge server monitoring2018-07-02T00:00:00+01:002023-11-22T17:30:11+00:00Chris Jonestag:cmsj.net,2018-07-02:/2018/07/02/homebridge-server-monitoring.html<p><a href="https://github.com/nfarina/homebridge">Homebridge</a> is a great way to expose arbitrary devices to Apple's HomeKit platform. It has helped bridge the Google Nest and Netgear Arlo devices I have in my home, into my iOS devices, since neither of those manufacturers appear to be interested in becoming officially HomeKit compatible.</p>
<p>London has been …</p><p><a href="https://github.com/nfarina/homebridge">Homebridge</a> is a great way to expose arbitrary devices to Apple's HomeKit platform. It has helped bridge the Google Nest and Netgear Arlo devices I have in my home, into my iOS devices, since neither of those manufacturers appear to be interested in becoming officially HomeKit compatible.</p>
<p>London has been having a little bit of a heatwave recently and it got me thinking about the Linux server I have running in a closet under the stairs - it has pretty poor airflow available to it, and I didn't know how hot its CPU was getting.</p>
<p>So, by the power of JavaScript, Homebridge and Linux's <code>/sys</code> filesystem, I was able to quickly whip up <a href="https://github.com/cmsj/homebridge-linux-temperature">a plugin</a> for Homebridge that will read an entry from Linux's temperature monitoring interface, and present it to HomeKit. In theory I could use it for sending notifications, but in practice I'm doing that via <a href="https://grafana.com/">Grafana</a> - the purpose of getting the information in HomeKit is so I can ask Siri what the server's temperature is.</p>
<p>The configuration is very simple, allowing you to configure one temperature sensor per instance of the plugin (but you could define multiple instances in your Homebridge <code>config.json</code>):</p>
<div class="highlight"><pre><span></span><code><span class="p">{</span>
<span class="w"> </span><span class="nt">"accessory"</span><span class="p">:</span><span class="w"> </span><span class="s2">"LinuxTemperature"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"gnubert"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"sensor_path"</span><span class="p">:</span><span class="w"> </span><span class="s2">"/sys/bus/platform/devices/coretemp.0/hwmon/hwmon0/temp1_input"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"divisor"</span><span class="p">:</span><span class="w"> </span><span class="mi">1000</span>
<span class="p">}</span>
</code></pre></div>
<p>(<code>gnubert</code> is the hostname of my server).</p>
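<p>Under the hood, the plugin doesn't need to do much more than read and scale that file. Here's a minimal sketch of the read path (a hypothetical <code>read_linux_temperature</code> helper of my own naming, not the plugin's actual code), assuming the hwmon file contains an integer in millidegrees Celsius:</p>

```python
def read_linux_temperature(sensor_path: str, divisor: int = 1000) -> float:
    """Read a Linux hwmon-style sensor file and scale the raw value.

    hwmon temp*_input files contain an integer in millidegrees Celsius,
    hence the default divisor of 1000 (mirroring the plugin's
    "sensor_path" and "divisor" config keys).
    """
    with open(sensor_path) as f:
        raw = int(f.read().strip())
    return raw / divisor
```

<p>With the config above, a raw reading of <code>42000</code> gets reported to HomeKit as 42&#176;C.</p>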
<p>Below is a screenshot showing the server's CPU temperature mingling with all of the Nest and Arlo items :)</p>
<p><img alt="Screenshot" src="http://cmsj.net/2018-07-02-server-temp-homekit.jpg"></p>A little bit of automation of the Trello Mac App2018-06-19T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2018-06-19:/2018/06/19/trello-mac-app-automation.html<p><a href="https://www.trello.com">Trello</a> have a Mac app, which I use for work and it struck me this morning that several recurring calendar events I have, which exist to remind me to review a particular board, would be much more pleasant if they contained a link that would open the board directly.</p>
<p>That …</p><p><a href="https://www.trello.com">Trello</a> have a Mac app, which I use for work and it struck me this morning that several recurring calendar events I have, which exist to remind me to review a particular board, would be much more pleasant if they contained a link that would open the board directly.</p>
<p>That would be easy if I used the Trello website, but I quite like the app (even though it's really just a browser pretending to be an app), so I went spelunking.</p>
<p>To cut a long story short, the Trello Mac app registers itself as a handler for <code>trello://</code> URLs, so if you take any <code>trello.com</code> board URL and replace the <code>https://</code> part with <code>trello://</code> you can use it as a link in your calendar (or anywhere else) and it will open the board in the app.</p>Homebridge in Docker, an adventure in networking2018-06-15T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2018-06-15:/2018/06/15/homebridge-in-docker.html<p><a href="https://github.com/nfarina/homebridge">Homebridge</a> is a great way of connecting <a href="https://www.npmjs.com/search?q=homebridge">loads</a> of devices that don't support Apple's <a href="https://www.apple.com/uk/ios/home/">HomeKit</a>, to your iOS devices. It consists of a daemon that understands the <a href="https://developer.apple.com/support/homekit-accessory-protocol/">HomeKit Accessory Protocol</a> and <a href="https://www.npmjs.com/search?q=homebridge">many plugins</a> that talk to other devices/services.</p>
<p>My home server is running Ubuntu, so installing Homebridge is fairly …</p><p><a href="https://github.com/nfarina/homebridge">Homebridge</a> is a great way of connecting <a href="https://www.npmjs.com/search?q=homebridge">loads</a> of devices that don't support Apple's <a href="https://www.apple.com/uk/ios/home/">HomeKit</a>, to your iOS devices. It consists of a daemon that understands the <a href="https://developer.apple.com/support/homekit-accessory-protocol/">HomeKit Accessory Protocol</a> and <a href="https://www.npmjs.com/search?q=homebridge">many plugins</a> that talk to other devices/services.</p>
<p>My home server is running Ubuntu, so installing Homebridge is fairly trivial, except I run all my services in <a href="https://www.docker.com">Docker</a> containers. To make things even more fun, I don't build or manage the containers by hand - the building is done by <a href="https://hub.docker.com/u/cmsj/">Docker Hub</a> and the containers are deployed and managed by <a href="https://www.ansible.com/">Ansible</a>.</p>
<p>So far so good, except that for a long time Homebridge used <a href="https://www.avahi.org/">Avahi</a> (an Open Source implementation of Apple's Bonjour host/service discovery protocol) to announce its devices. That presented a small challenge in that I didn't want to have Avahi running only in that container, so I had to bind mount <code>/var/run/avahi-daemon/</code> into the container.</p>
<p>I recently rebuilt my Homebridge container to pull it up to the latest versions of Homebridge and the plugins I use, but it was no longer announcing devices on my LAN, and there were no mentions of Avahi in its log. After some digging, it turns out that the HomeKit Accessory Protocol (HAP) library that Homebridge uses, now instantiates its own multicast DNS stack rather than using Avahi.</p>
<p>Apart from not actually working, this was great news: I could remove the <code>/var/run</code> bind mount from the container, making things more secure; I just needed to figure out why it wasn't showing up.</p>
<p>The HAP library that Homebridge uses, ends up depending on <a href="https://github.com/mafintosh/multicast-dns">this library</a> to implement mDNS and it makes <a href="https://github.com/mafintosh/multicast-dns/blob/master/index.js#L147">a very simple</a> decision about which network interface it should use. In my case, it was choosing the <code>docker0</code> bridge interface which explicitly isn't connected to the outside world. With no configuration options at the Homebridge level to influence the choice of interface, I had to solve the problem at the Docker network layer.</p>
<p>So, the answer was the following Ansible task to create a Docker network that is attached to my LAN interface (<code>bridge0</code>) and give it a small portion of a reserved segment in the IP subnet I use:</p>
<div class="highlight"><pre><span></span><code><span class="p p-Indicator">-</span><span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">Configure LANbridge network</span>
<span class="w"> </span><span class="nt">docker_network</span><span class="p">:</span>
<span class="w"> </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">lanbridge</span>
<span class="w"> </span><span class="nt">driver</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">macvlan</span>
<span class="w"> </span><span class="nt">driver_options</span><span class="p">:</span>
<span class="w"> </span><span class="nt">parent</span><span class="p">:</span><span class="w"> </span><span class="l l-Scalar l-Scalar-Plain">bridge0</span>
<span class="w"> </span><span class="nt">ipam_options</span><span class="p">:</span>
<span class="w"> </span><span class="nt">subnet</span><span class="p">:</span><span class="w"> </span><span class="s">'10.0.88.0/24'</span>
<span class="w"> </span><span class="nt">gateway</span><span class="p">:</span><span class="w"> </span><span class="s">'10.0.88.1'</span>
<span class="w"> </span><span class="nt">iprange</span><span class="p">:</span><span class="w"> </span><span class="s">'10.0.88.32/29'</span>
</code></pre></div>
<p>then change the task for the Homebridge container to use this network:</p>
<div class="highlight"><pre><span></span><code> network_mode: lanbridge
</code></pre></div>
<p>and now Homebridge is up to date, and working, plus I have a Docker network I can use in the future if any other containerised services need to be very close to the LAN.</p>Receiving remote syslog events with systemd2018-06-15T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2018-06-15:/2018/06/15/systemd-remote-syslog.html<p><a href="https://www.freedesktop.org/wiki/Software/systemd/">Systemd</a> includes <code>journald</code>, a fancy replacement for the venerable <code>syslog</code> daemon (and its descendants, <code>syslog-ng</code> and <code>rsyslog</code>).</p>
<p>One interesting, but frustrating, decision by <code>journald</code>'s maintainers is that it does not speak the syslog network protocol, so it's unable to receive remote syslog events. Remote syslog is a tremendously useful …</p><p><a href="https://www.freedesktop.org/wiki/Software/systemd/">Systemd</a> includes <code>journald</code>, a fancy replacement for the venerable <code>syslog</code> daemon (and its descendants, <code>syslog-ng</code> and <code>rsyslog</code>).</p>
<p>One interesting, but frustrating, decision by <code>journald</code>'s maintainers is that it does not speak the syslog network protocol, so it's unable to receive remote syslog events. Remote syslog is a tremendously useful feature for aggregating log data from many hosts on a network - I've always used it so my network devices can log somewhere I'm likely to look at, but I haven't been able to do that since <code>journald</code> arrived.</p>
<p>While there are many ways to skin this goose, the method I've chosen is a tiny Python daemon that listens on syslog's UDP port (514), does minimal processing of the data and then feeds it into <code>journald</code> via its API, to get the data as rich as possible (since one of <code>journald</code>'s strengths is that it can store a lot more metadata about a log entry).</p>
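<p>To make the shape of that "minimal processing" concrete, here's a sketch (not the actual daemon, which is linked in the next paragraph; the <code>parse_pri</code> and <code>serve</code> names are mine) of the first step: decoding the RFC 3164 <code>&lt;PRI&gt;</code> prefix, where the priority value is <code>facility * 8 + severity</code>:</p>

```python
import socket

def parse_pri(datagram: bytes):
    """Split a syslog datagram's <PRI> prefix into (facility, severity, text).

    RFC 3164 encodes priority as facility * 8 + severity, so e.g. <30> is
    facility 3 (daemon) at severity 6 (info). Unparseable input falls back
    to user/info (1, 6).
    """
    text = datagram.decode("utf-8", errors="replace")
    if text.startswith("<"):
        pri, sep, rest = text[1:].partition(">")
        if sep and pri.isdigit():
            value = int(pri)
            return value // 8, value % 8, rest
    return 1, 6, text

# The daemon itself would just bind UDP/514 and loop, handing each parsed
# message to a callback that forwards it to journald (e.g. via the
# systemd.journal module):
def serve(handler, host="0.0.0.0", port=514):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _addr = sock.recvfrom(8192)
        handler(*parse_pri(data))
```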
<p>So, <a href="https://gist.github.com/cmsj/e03b6d28325ce5c3d5b255256278a330">here is the source</a> for the daemon, and <a href="https://gist.github.com/cmsj/71f987d1129c5dc693243dd1aa5f8f4f">here is the systemd service file</a> that manages it - note that it runs as an unprivileged user, with the sole privilege escalation of being able to bind to low port numbers (something only root can do normally).</p>
<p>The daemon is certainly not perfect (patches welcome!), but it works. Here is a <code>journald</code> log entry from one of my UniFi access points:</p>
<div class="highlight"><pre><span></span><code><span class="n">Jun</span><span class="w"> </span><span class="mi">15</span><span class="w"> </span><span class="mi">21</span><span class="err">:</span><span class="mi">28</span><span class="err">:</span><span class="mi">26</span><span class="w"> </span><span class="n">gnubert</span><span class="w"> </span><span class="p">(</span><span class="ss">"U7PG2,802aa8d48ab3,v3.9.27.8537"</span><span class="p">)</span><span class="o">[</span><span class="n">23506</span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="nl">kernel</span><span class="p">:</span><span class="w"> </span><span class="o">[</span><span class="n">4251792.410000</span><span class="o">]</span><span class="w"> </span><span class="o">[</span><span class="n">wifi1</span><span class="o">]</span><span class="w"> </span><span class="nl">FWLOG</span><span class="p">:</span><span class="w"> </span><span class="o">[</span><span class="n">58855274</span><span class="o">]</span><span class="w"> </span><span class="n">BEACON_EVENT_SWBA_SEND_FAILED</span><span class="w"> </span><span class="p">(</span><span class="w"> </span><span class="p">)</span>
</code></pre></div>
<p>(the more syslog-obsessed among you will notice that I'm setting the <code>identifier</code> to the hostname of the device that sent the message. Internally, the <code>facility</code> is mapped correctly, as is the <code>priority</code>. The text of the message then appears, prepended by its <code>identifier</code>.)</p>Adventures in Lua stack overflows2018-04-13T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2018-04-13:/2018/04/13/lua-stack-adventures.html<p><a href="http://www.hammerspoon.org">Hammerspoon</a> is heavily dependent on <a href="http://www.lua.org">Lua</a> - it's the true core of the application, so it's unavoidable that we have to interact with Lua's C API in a lot of places. If you've never used it before, Lua's C API is designed to be very simple to integrate with other code …</p><p><a href="http://www.hammerspoon.org">Hammerspoon</a> is heavily dependent on <a href="http://www.lua.org">Lua</a> - it's the true core of the application, so it's unavoidable that we have to interact with Lua's C API in a lot of places. If you've never used it before, Lua's C API is designed to be very simple to integrate with other code, but it also places a fairly high burden on developers to integrate it properly.</p>
<p>One of the ways that Lua remains simple is by being stack based - when you give Lua a C function and make it available to call from Lua code, you have to conform to a particular way of working. The function arguments supplied by the user will be presented to you on a stack, and when your C code has finished its work, the return values must have been pushed onto the stack. Here's an example:</p>
<div class="highlight"><pre><span></span><code><span class="k">static</span><span class="w"> </span><span class="kt">int</span><span class="w"> </span><span class="nf">someUsefulFunction</span><span class="p">(</span><span class="n">lua_State</span><span class="w"> </span><span class="o">*</span><span class="n">L</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="c1">// Fetch our first argument from the stack</span>
<span class="w"> </span><span class="kt">int</span><span class="w"> </span><span class="n">someNumber</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">lua_tointeger</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">1</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Fetch our second argument from the stack</span>
<span class="w"> </span><span class="kt">char</span><span class="w"> </span><span class="o">*</span><span class="n">someString</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">lua_tostring</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">);</span>
<span class="w"> </span><span class="cm">/* Do some useful work here */</span>
<span class="w"> </span><span class="c1">// Push two return values onto the stack and return 2 so Lua knows how many return values we provided</span>
<span class="w"> </span><span class="n">lua_pushstring</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="s">"some result text"</span><span class="p">);</span>
<span class="w"> </span><span class="n">lua_pushinteger</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">42</span><span class="p">);</span>
<span class="w"> </span><span class="k">return</span><span class="w"> </span><span class="mi">2</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div>
<p>All simple enough.</p>
<p>In this scenario of calling from Lua→C, Lua creates a pseudo-stack for you, so while it's good practice to keep the stack neat and tidy (i.e. remove things from it that you don't need), it's not critical because apart from the return values, the rest of the stack is thrown away. That pseudo-stack only has 20 slots by default though, so if you're pushing a lot of return arguments, or using the stack for other things, you may need to use <code>lua_checkstack()</code> to grow it larger, up to the maximum (2048 slots).</p>
<p>Where things get more interesting, is when you're interacting with the Lua stack without having crossed a Lua→C boundary. For example, maybe you're in a callback function that's been triggered by some event in your C program, and now you need to call a Lua function that the user gave you earlier. This might look something like this:</p>
<div class="highlight"><pre><span></span><code><span class="kt">int</span><span class="w"> </span><span class="n">globalLuaFunction</span><span class="p">;</span>
<span class="kt">void</span><span class="w"> </span><span class="nf">someCallback</span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">aValue</span><span class="p">,</span><span class="w"> </span><span class="kt">char</span><span class="o">*</span><span class="w"> </span><span class="n">aString</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="c1">// Fetch a pointer to the shared Lua state object</span>
<span class="w"> </span><span class="n">lua_State</span><span class="w"> </span><span class="o">*</span><span class="n">L</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">some_shared_lua_state_provider</span><span class="p">();</span>
<span class="w"> </span><span class="c1">// Push onto the stack, the Lua function previously supplied by the user, from Lua's global registry</span>
<span class="w"> </span><span class="n">lua_rawgeti</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">LUA_REGISTRYINDEX</span><span class="p">,</span><span class="w"> </span><span class="n">globalLuaFunction</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Push the two arguments for the Lua function</span>
<span class="w"> </span><span class="n">lua_pushinteger</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">aValue</span><span class="p">);</span>
<span class="w"> </span><span class="n">lua_pushstring</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">aString</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Call the Lua function, telling Lua to expect two arguments</span>
<span class="w"> </span><span class="n">lua_call</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mi">0</span><span class="p">);</span>
<span class="w"> </span><span class="k">return</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div>
<p>Slightly more complex than the last example, but still manageable. Unfortunately in practice this is a fairly suboptimal implementation of a C→Lua call - storing things in the <code>LUA_REGISTRYINDEX</code> table is fine, but it's often nicer to use multiple tables for different things. The big problem here though is that <code>lua_call()</code> doesn't trap errors. If the Lua code raises an exception, Lua will <code>longjmp</code> to a panic handler and <code>abort()</code> your app.</p>
<p>So, writing this a bit more completely, we get:</p>
<div class="highlight"><pre><span></span><code><span class="kt">int</span><span class="w"> </span><span class="n">luaCallbackTable</span><span class="p">;</span>
<span class="kt">int</span><span class="w"> </span><span class="n">globalLuaFunctionRef</span><span class="p">;</span>
<span class="kt">void</span><span class="w"> </span><span class="nf">someCallback</span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">aValue</span><span class="p">,</span><span class="w"> </span><span class="kt">char</span><span class="o">*</span><span class="w"> </span><span class="n">aString</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="c1">// Fetch a pointer to the shared Lua state object</span>
<span class="w"> </span><span class="n">lua_State</span><span class="w"> </span><span class="o">*</span><span class="n">L</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">some_shared_lua_state_provider</span><span class="p">();</span>
<span class="w"> </span><span class="c1">// Push onto the stack, the table we keep callback references in, from Lua's global registry</span>
<span class="w"> </span><span class="n">lua_rawgeti</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">LUA_REGISTRYINDEX</span><span class="p">,</span><span class="w"> </span><span class="n">luaCallbackTable</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Push onto the stack, from our callback reference table, the Lua function previously supplied by the user</span>
<span class="w"> </span><span class="n">lua_rawgeti</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">-1</span><span class="p">,</span><span class="w"> </span><span class="n">globalLuaFunctionRef</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Push the two arguments for the Lua function</span>
<span class="w"> </span><span class="n">lua_pushinteger</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">aValue</span><span class="p">);</span>
<span class="w"> </span><span class="n">lua_pushstring</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">aString</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Protected call to the Lua function, telling Lua to expect two arguments</span>
<span class="w"> </span><span class="n">lua_pcall</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mi">0</span><span class="p">,</span><span class="w"> </span><span class="mi">0</span><span class="p">);</span>
<span class="w"> </span><span class="k">return</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div>
<p>OK, so this is looking better: we have our own table for neatly storing function references, and we'll no longer <code>abort()</code> if the Lua function throws an error.</p>
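<p>(For completeness: both examples assume that <code>luaCallbackTable</code> and <code>globalLuaFunctionRef</code> were populated somewhere else. A sketch of how that setup might look using <code>luaL_ref()</code> - this is illustrative, not Hammerspoon's actual code:)</p>
<div class="highlight"><pre><code>// One-time setup: create an empty table and anchor it in Lua's registry.
// luaL_ref() pops the value on top of the stack and returns an integer reference.
lua_newtable(L);
luaCallbackTable = luaL_ref(L, LUA_REGISTRYINDEX);

// Later, when the user supplies a callback (their Lua function is on top of the stack):
lua_rawgeti(L, LUA_REGISTRYINDEX, luaCallbackTable); // push our reference table
lua_insert(L, -2);                                   // move the table below the function
globalLuaFunctionRef = luaL_ref(L, -2);              // pops the function into the table
lua_pop(L, 1);                                       // remove the table; stack is balanced
</code></pre></div>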
<p>However, we now have a problem: we're leaking at least one item onto Lua's stack, and possibly two. Unlike in the Lua→C case, we are not operating within the safe confines of a pseudo-stack, so anything we leak here will stay on the stack permanently, and at some point that's likely to cause the stack to overflow.</p>
<p>Now here is the kicker - stack overflows are really hard to find, because by default you don't get a nice error. Your program will simply leak stack slots until the stack overflows, far from the place where the leak is happening, then segfault, and your backtraces will contain very normal-looking Lua API calls.</p>
<p>If we were to handle the stack properly, the above code would actually look like this (and note that we've gone from four Lua API calls in the first C→Lua example, to eight here):</p>
<div class="highlight"><pre><span></span><code><span class="kt">int</span><span class="w"> </span><span class="n">luaCallbackTable</span><span class="p">;</span>
<span class="kt">int</span><span class="w"> </span><span class="n">globalLuaFunctionRef</span><span class="p">;</span>
<span class="kt">void</span><span class="w"> </span><span class="nf">someCallback</span><span class="p">(</span><span class="kt">int</span><span class="w"> </span><span class="n">aValue</span><span class="p">,</span><span class="w"> </span><span class="kt">char</span><span class="o">*</span><span class="w"> </span><span class="n">aString</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="c1">// Fetch a pointer to the shared Lua state object</span>
<span class="w"> </span><span class="n">lua_State</span><span class="w"> </span><span class="o">*</span><span class="n">L</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">some_shared_lua_state_provider</span><span class="p">();</span>
<span class="w"> </span><span class="c1">// Find luaCallbackTable in the Lua registry, and push it onto the stack</span>
<span class="w"> </span><span class="n">lua_rawgeti</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">LUA_REGISTRYINDEX</span><span class="p">,</span><span class="w"> </span><span class="n">luaCallbackTable</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Find globalLuaFunctionRef in luaCallbackTable, and push it onto the stack</span>
<span class="w"> </span><span class="n">lua_rawgeti</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">-1</span><span class="p">,</span><span class="w"> </span><span class="n">globalLuaFunctionRef</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Remove luaCallbackTable from the stack *THIS WAS LEAKED IN THE ABOVE EXAMPLE*</span>
<span class="w"> </span><span class="n">lua_remove</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">-2</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Push the two arguments for the Lua function</span>
<span class="w"> </span><span class="n">lua_pushinteger</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">aValue</span><span class="p">);</span>
<span class="w"> </span><span class="n">lua_pushstring</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="n">aString</span><span class="p">);</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="p">(</span><span class="n">lua_pcall</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mi">0</span><span class="p">,</span><span class="w"> </span><span class="mi">0</span><span class="p">)</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="n">LUA_OK</span><span class="p">)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="c1">// Fetch the Lua error message from the stack</span>
<span class="w"> </span><span class="k">const</span><span class="w"> </span><span class="kt">char</span><span class="w"> </span><span class="o">*</span><span class="n">someError</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">lua_tostring</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">-1</span><span class="p">);</span>
<span class="w"> </span><span class="n">printf</span><span class="p">(</span><span class="s">"ERROR: %s</span><span class="se">\n</span><span class="s">"</span><span class="p">,</span><span class="w"> </span><span class="n">someError</span><span class="p">);</span>
<span class="w"> </span><span class="c1">// Remove the Lua error message from the stack *THIS WAS LEAKED IN THE ABOVE EXAMPLE*</span>
<span class="w"> </span><span class="n">lua_pop</span><span class="p">(</span><span class="n">L</span><span class="p">,</span><span class="w"> </span><span class="mi">1</span><span class="p">);</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="k">return</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div>
<p>Hammerspoon has been having problems like this for the last few months - lots of crash reports that, on the surface, look like completely valid code was executing. I have to admit that it took me a lot longer than it should have to realise that these were Lua stack overflows rather than my initial suspicion (C heap corruption), but we figured it out eventually and have hopefully fixed all of the leaks.</p>
<p>So, how did we discover that the problem was stack overflows, and how did we find all of the leaks without manually auditing every place where we make C→Lua transitions (of which there are over 100)? The answer to the first question is very simple: if you define <code>LUA_USE_APICHECK</code> when compiling Lua, it will do a little extra work to verify its own consistency. Crucially, this includes calling <code>abort()</code> with a helpful message when the stack overflows. We turned this on for developers in March and then released 0.9.61 with it enabled, in early April. It's not normally recommended to have the API checker enabled in production because it calls <code>abort()</code>, but we felt that it was important to get more information about the crashes we couldn't reproduce.</p>
<p>Within a few days we started getting crash reports with the words <code>stack overflow</code> in them (as well as a few other errors, which we were able to fix), but that is only half the battle.</p>
<p>Having discovered that we did definitely have a stack leak somewhere, how did we discover where it was? This did involve a little brute force effort, but thankfully not a full manual audit of all 107 C→Lua call sites. Instead, I wrote two macros:</p>
<div class="highlight"><pre><span></span><code><span class="cp">#define _lua_stackguard_entry(L) int __lua_stackguard_entry=lua_gettop(L);</span>
<span class="cp">#define _lua_stackguard_exit(L) assert(__lua_stackguard_entry == lua_gettop(L));</span>
</code></pre></div>
<p>These are very simple to use - you call <code>_lua_stackguard_entry()</code> just after you've obtained a pointer to the Lua state object, and then you call <code>_lua_stackguard_exit()</code> at every point where the function can return after that. It records the size of the stack (<code>lua_gettop()</code>) at the entry point and <code>assert()</code>s that it's the same at the exit point (<code>assert()</code> also calls <code>abort()</code> if something is wrong, so now we would get crash logs with the crash in the actual function where the leak is happening).
These entry/exit calls were then added to all 107 call sites 4 days after 0.9.61 was released, and I spent 3 evenings testing or manually verifying every site before releasing 0.9.65 (0.9.62-0.9.64 fixed some of the other bugs found by the API checker in the meantime).</p>
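<p>To make the usage concrete, here is the stack-safe example from earlier with the guard macros added (a sketch - because the error branch keeps the stack balanced, the depth matches at entry and exit on every path):</p>
<div class="highlight"><pre><code>void someCallback(int aValue, char* aString) {
    lua_State *L = some_shared_lua_state_provider();
    // Record the stack depth as soon as we have the Lua state
    _lua_stackguard_entry(L);

    lua_rawgeti(L, LUA_REGISTRYINDEX, luaCallbackTable);
    lua_rawgeti(L, -1, globalLuaFunctionRef);
    lua_remove(L, -2);
    lua_pushinteger(L, aValue);
    lua_pushstring(L, aString);
    if (lua_pcall(L, 2, 0, 0) != LUA_OK) {
        printf("ERROR: %s\n", lua_tostring(L, -1));
        lua_pop(L, 1);
    }

    // assert() that the depth matches the entry point - any leaked slot aborts here
    _lua_stackguard_exit(L);
}
</code></pre></div>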
<p>At the time of writing we're only 24 hours past the release of 0.9.65, but so far things are looking good - no strange Lua segfault crash reports as yet. There was one issue found today where I'd placed a <code>_lua_stackguard_exit()</code> call after a C statement that seemed unimportant, but actually caused an important object to be freed; that is <a href="https://github.com/Hammerspoon/hammerspoon/commit/95a13554c65568aca2ee6db040895c6345b01b50">already fixed</a> and will be included in 0.9.66.</p>
<p>Assuming we have now fixed the problem, after months of head-scratching and a few weeks of research, testing and coding, it turns out that across the 107 call sites we only had two stack leaks - <a href="https://github.com/Hammerspoon/hammerspoon/commit/2b7abf2b33e3ddb17d87e548725959a8bba1ac40#diff-d0e4e7c56ae114494056acc9758d118fR797">one was in the code that handles tab completion in Hammerspoon's Console window</a>, and <a href="https://github.com/Hammerspoon/hammerspoon/commit/f199351538d7b81bd4a01f349ddeb2e33e76d8e7">the other was in <code>hs.notify</code></a>. Hopefully you're all enjoying a more stable Hammerspoon experience, but I think we'll be leaving both the API checker and the stack guard macros enabled, since they make it very easy to find/fix these sorts of bugs. I'd rather get a smaller number of crashes sooner than have more months of head-scratching!</p>
<p>Discuss on <a href="https://twitter.com/cmsj/status/984592229472833536">Twitter</a> | Discuss on <a href="https://news.ycombinator.com/item?id=16826199">Hacker News</a></p>Getting battery data from AirPods in macOS2017-11-27T00:00:00+00:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2017-11-27:/2017/11/27/airpod-battery-ear-data.html<p>A recent <a href="https://github.com/Hammerspoon/hammerspoon/issues/1608">feature request</a> for <a href="http://www.hammerspoon.org">Hammerspoon</a> requested that we add support for reading battery information about AirPods (<a href="http://amzn.to/2zxsZSt">UK</a> <a href="http://amzn.to/2zxl2wn">US</a>).</p>
<p>Unfortunately because their battery status is quite complex (two earbuds and the case), this information is not reported via the normal IOKit APIs, but with a bit of poking around in …</p><p>A recent <a href="https://github.com/Hammerspoon/hammerspoon/issues/1608">feature request</a> for <a href="http://www.hammerspoon.org">Hammerspoon</a> requested that we add support for reading battery information about AirPods (<a href="http://amzn.to/2zxsZSt">UK</a> <a href="http://amzn.to/2zxl2wn">US</a>).</p>
<p>Unfortunately because their battery status is quite complex (two earbuds and the case), this information is not reported via the normal IOKit APIs, but with a bit of poking around in the results of <a href="http://stevenygard.com/projects/class-dump/">class-dump</a> for macOS High Sierra I was able to find some relevant methods/properties on <a href="https://developer.apple.com/documentation/iobluetooth/iobluetoothdevice">IOBluetoothDevice</a> that let you read information about the battery level of individual AirPods and the case, plus determine which of the buds are currently in an ear!</p>
<p>So, the next release of Hammerspoon should include <a href="https://github.com/Hammerspoon/hammerspoon/commit/e5738e8231b90b0506bbacf62cef6491364c5c22">this code</a> to expose all of this information neatly via <code>hs.battery.privateBluetoothBatteryInfo()</code> 😁</p>Happy 10th Birthday Terminator!2017-07-28T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2017-07-28:/2017/07/28/happy-10th-terminator.html<p>Today marks 10 years since the <a href="https://launchpad.net/terminator/+milestone/0.1">very first public release</a> of <a href="https://gnometerminator.blogspot.co.uk/p/introduction.html">Terminator</a>, a multiplexing terminal emulator project.</p>
<p>I started working on Terminator as a simple way to get 4 terminals to not overlap on my laptop screen. In the following years it grew many features and attracted a userbase I …</p><p>Today marks 10 years since the <a href="https://launchpad.net/terminator/+milestone/0.1">very first public release</a> of <a href="https://gnometerminator.blogspot.co.uk/p/introduction.html">Terminator</a>, a multiplexing terminal emulator project.</p>
<p>I started working on Terminator as a simple way to get 4 terminals to not overlap on my laptop screen. In the following years it grew many features and attracted a userbase I am very proud of.</p>
<p>As much as I would like to, I cannot claim most of the credit for Terminator surviving for a decade - I stepped away from the project a few years ago and handed the reins over to Stephen Boddy at a very crucial time - gtk2 was becoming ever more obsolete and our work on a gtk3 port was very incomplete. Stephen has driven the project forward and it now has a very good gtk3 version :)</p>
<p>So, thank you to Stephen and everybody else who contributed code/docs/translations/suggestions/bugs/etc over the last 10 years (you can see the most active folk <a href="https://launchpad.net/terminator/+topcontributors">here</a>).</p>
<p>I'd also like to note that according to Ubuntu's <a href="http://popcon.ubuntu.com">Popularity Contest</a> data, Terminator is installed on at least 56000 Ubuntu machines. Debian also has PopCon data, but the numbers there are <a href="https://qa.debian.org/popcon.php?package=terminator">a little less impressive</a> ;)</p>
<p>This was the first project of mine that reached any kind of significant audience, and is the first project of mine to have achieved a decade of active maintenance, so I am feeling pretty happy today!</p>USB Type C is awful2017-06-30T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2017-06-30:/2017/06/30/usb-c-is-awful.html<p>Intentionally inflammatory title there, but there are some valid reasons to be annoyed at USB Type C.</p>
<p>Firstly, I have discovered (the hard way) that although there are many cables on sale, the majority of Type C cables do not support even USB 3.0 speeds (so 5Gb/s) let …</p><p>Intentionally inflammatory title there, but there are some valid reasons to be annoyed at USB Type C.</p>
<p>Firstly, I have discovered (the hard way) that although there are many cables on sale, the majority of Type C cables do not support even USB 3.0 speeds (so 5Gb/s) let alone USB 3.1gen2 (so 10Gb/s) speeds. They are actually USB 2.0 (so 480Mb/s) cables.</p>
<p>I can understand that some <em>devices</em> with Type C connectors may only need USB 2.0 speeds, but for the <em>cables</em> to not all be USB 3.x seems crazy to me. Even <a href="https://www.apple.com/shop/product/MLL82AM/A/usb-c-charge-cable-2-m">Apple is doing this</a> - the charging cables for MacBooks (12" and Pros) with Type C ports, only support USB 2.0 speeds. If I had to guess, I'd say it's because they wanted the cables to be thin and easily bendable, which full USB 3.1 cables tend not to be.</p>
<p>Secondly, unlike Type A, which marks USB 3.x cables by having blue plastic inside the connectors, there is no way to tell what speed a Type C cable is by looking at it.</p>
<p>Thirdly, and perhaps most importantly, USB Type C is a <em>vastly</em> more powerful beast than previous versions - a modern Type C port can be capable of:</p>
<ul>
<li>40Gb/s Thunderbolt 3</li>
<li>DisplayPort</li>
<li>10Gb/s USB 3.1gen2</li>
<li>100W of (actively negotiated) power delivery in either direction</li>
</ul>
<p>The DisplayPort "alternate mode" can deliver 4K at 60Hz with USB3.1 at the same time, or 5K with USB2.0 at the same time, or 8K (compressed). When used as Thunderbolt, Type C can carry 5K video as well as the PCI data.</p>
<p>So while one tiny <em>connector</em> can do a whole bunch of really impressive things, the <em>cables</em> are now expected to do vastly more than even USB3.0 Type A cables, let alone USB 2.0, and it seems like that advanced capability isn't currently aligned with the history of USB as being an ultra-cheap, mass market affair.</p>
<p>Various awesome folk have put together <a href="https://docs.google.com/spreadsheets/u/1/d/1vnpEXfo2HCGADdd9G2x9dMDWqENiY2kgBJUu29f_TX8/pubhtml#">a spreadsheet</a> of the chargers/cables they've tested, and it seems like a serious chunk of the Type C cables currently on the market are junk. This is bad for everyone, especially users, who can buy what looks like the right cable, only to discover that their devices either don't work at all, work too slowly, or won't charge properly.</p>
<p>This post exists because I needed a USB3.0, three metre, Type A to Type C cable, and I bought one on Amazon, only to discover that it only supported USB2.0. After <em>far</em> too much searching, I eventually found an Anker cable which meets my requirements:</p>
<ul>
<li>3m/10ft: <a href="http://amzn.to/2t7scp2">UK</a>|<a href="http://amzn.to/2trgUyd">US</a></li>
<li>2m/6ft: <a href="http://amzn.to/2t82PU0">UK</a>|<a href="http://amzn.to/2s8DRm8">US</a></li>
<li>1m/3ft: <a href="http://amzn.to/2stsrbV">UK</a>|<a href="http://amzn.to/2snLzwR">US</a></li>
</ul>Home networking like a pro - Part 1 - Network Storage2017-06-22T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2017-06-22:/2017/06/22/home-pro-part-1-nas.html<h2>Introduction</h2>
<p>This is part one in a series of posts about some hardware I recommend (or otherwise!) for people who want to bring some semi-professional flair to their home network.</p>
<p>The first topic is storage - specifically, Network Attached Storage.</p>
<h2>Background</h2>
<p>For the last few years, I was running a Mac …</p><h2>Introduction</h2>
<p>This is part one in a series of posts about some hardware I recommend (or otherwise!) for people who want to bring some semi-professional flair to their home network.</p>
<p>The first topic is storage - specifically, Network Attached Storage.</p>
<h2>Background</h2>
<p>For the last few years, I was running a Mac Mini with two 3TB drives in a RAID1 array in a LaCie 2big Thunderbolt chassis (<a href="https://www.amazon.com/gp/product/B00KQD0HM2/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B00KQD0HM2&linkCode=as2&tag=cmsj-20&linkId=263d4ed10fb9c73f39e787d9266f3851">US</a> <a href="https://www.amazon.co.uk/gp/product/B00KYFU5YM/ref=as_li_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=B00KYFU5YM&linkCode=as2&tag=cmsj-21&linkId=9290f58318a0bfd748de49c6e53c8f2c">UK</a>), with the Mac running macOS Server to provide file sharing (AFP and SMB), and Time Machine backups for the rest of the network.</p>
<p>This was a very nice solution, in that the Mac was a regular computer, so I could make it do whatever I wanted, but it did have the drawbacks that the Thunderbolt chassis only had two drive bays, and I had trouble getting the Mac to run reliably for months at a time (I ran into GPU-related kernel panics, perhaps because it was attached to a TV rather than a monitor).</p>
<p>Around the time I was selecting the Mac/LaCie, most NAS devices in a similar price range were very underpowered ARM devices, and could do little more than share files, but in 2017 almost all NAS devices are much more powerful x86 devices that often have extensive featuresets (e.g. running containers, VMs, hardware accelerated video transcoding, etc.) so I decided it was time to switch.</p>
<h2>Solution</h2>
<p>I ended up choosing a Synology DS916+ (<a href="https://www.amazon.com/gp/product/B01EMZHLZU/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B01EMZHLZU&linkCode=as2&tag=cmsj-20&linkId=4bc9ac2f2590480e49aef6993329eb39">US</a> <a href="https://www.amazon.co.uk/gp/product/B01EMZHLZU/ref=as_li_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=B01EMZHLZU&linkCode=as2&tag=cmsj-21&linkId=a5a66e7532a51fb5478e07202daa05d2">UK</a>), popped one of the 3TB drives (Western Digital Red (<a href="https://www.amazon.com/gp/product/B008JJLW4M/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=B008JJLW4M&linkCode=as2&tag=cmsj-20&linkId=d2d3d7db9bcd9bdf879a213d405a343b">US</a> <a href="https://www.amazon.co.uk/gp/product/B008JJLW4M/ref=as_li_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=B008JJLW4M&linkCode=as2&tag=cmsj-21&linkId=c644f89fe12d6b3d39794ca77b65cb9a">UK</a>)) out of the LaCie and into the Synology, and set about migrating my data over. I then moved the other drive, and put in two more 3TB drives, all of which are running as a single Synology Hybrid RAID volume with a BTRFS filesystem (note that the Hybrid RAID really seems to just be RAID5).</p>
<p>I configured the Synology to serve files over both AFP and SMB, and enabled its support for Time Machine via AFP. I was also able to connect both of its Ethernet ports to my switch (a ZyXEL GS1900 24 port switch, which I will cover in an upcoming post) and enabled LACP on each end to bond the two connections into a single 2Gb link.</p>
<p>So, how did it work out?</p>
<p>The AFP file sharing is great, and works flawlessly. SMB is a little more complex, because recent versions of macOS tend to enforce encryption on SMB connections, which makes them go much slower, but this <a href="https://dpron.com/os-x-10-11-5-slow-smb/">can be disabled</a>. I tested Time Machine over SMB, which is officially supported by Synology, but is a very recent addition, and it proved to be unreliable, so that is staying on AFP for now.</p>
<p>Something I was particularly keen on, with the Synology, was that it has an "app store" and one of the available applications is Docker. I was running a few UNIX daemons on the Mac Mini which I wanted to keep, and Docker containers would be perfect for them, however, I discovered that the version of Docker provided by Synology is pretty old and I ran into a strange bug that would cause dockerd to consume all available CPU cycles.</p>
<p>For now, the containers are running on an Intel NUC (which will also be covered in an upcoming post) and the Synology is focussed on file sharing.</p>
<h2>Open Source</h2>
<p>Synology's NAS products are based on Linux, Samba, netatalk and a variety of other Open Source projects, with their custom management GUI on top. They do <a href="https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/">publish source</a>, but it's usually a little slow to arrive on the site, and it's not particularly easy (or in some cases even possible) to rebuild in a way that lets you actively customise the device.</p>
<h2>Conclusion</h2>
<p>Overall, I like the Synology, but I think if I'd known about the Docker issue, I might have built my own machine and put something like <a href="http://www.freenas.org/">FreeNAS</a> on it. More work, less support, but more flexibility.</p>
<p>The recent 5-8 drive Synologies now support running VMs, which seems like a very interesting prospect, since it ought to isolate you from Synology's choices of software versions.</p>Hyper Key in macOS Sierra with Karabiner Elements2017-06-13T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2017-06-13:/2017/06/13/karabiner-elements-sierra-hyper.html<p>Over the last few years, <a href="http://brettterpstra.com/2012/12/08/a-useful-caps-lock-key/">various</a> <a href="https://www.nadeau.tv/configure-hyper-key-osx/">people</a> have used <a href="https://pqrs.org/osx/karabiner/">Karabiner</a> to remap Caps Lock to cmd+shift+opt+ctrl, which is such an unusual combination of modifier keys, that it effectively makes Caps behave as a completely new modifier (which we have collectively called "Hyper", in reference to old UNIX …</p><p>Over the last few years, <a href="http://brettterpstra.com/2012/12/08/a-useful-caps-lock-key/">various</a> <a href="https://www.nadeau.tv/configure-hyper-key-osx/">people</a> have used <a href="https://pqrs.org/osx/karabiner/">Karabiner</a> to remap Caps Lock to cmd+shift+opt+ctrl, which is such an unusual combination of modifier keys, that it effectively makes Caps behave as a completely new modifier (which we have collectively called "Hyper", in reference to old UNIX workstation keyboards).</p>
<p>And for a time, it was good.</p>
<p>Then came macOS Sierra, which changed enough of the input layers of its kernel, that Karabiner was unable to function. Thankfully, Karabiner's author, Fumihiko Takayama, began work on a complete rewrite of Karabiner, which is currently called <a href="https://github.com/tekezo/Karabiner-Elements">Karabiner Elements</a>.</p>
<p>Initially, Elements only supported very simple keyboard modifications - you could swap one key for another, but that was it. Various folk quickly got to work offering <a href="https://github.com/tekezo/Karabiner-Elements/pull/170">quick hacks</a> to get a Hyper key to work, and others started to try to <a href="http://brettterpstra.com/2016/09/29/a-better-hyper-key-hack-for-sierra/">work around</a> the missing support, with other tools.</p>
<p>I'm very glad to say that it is now possible to do a proper Hyper remap with Karabiner Elements (and to be clear, none of this is my work, all credit goes to Fumihiko).</p>
<p>Here's how you can get it:</p>
<ul>
<li>Download and install <a href="https://pqrs.org/latest/karabiner-elements-latest.dmg">https://pqrs.org/latest/karabiner-elements-latest.dmg</a></li>
<li>Launch the Karabiner Elements app, go to the Misc tab and check which version you have, if it's less than 0.91.1, click either <code>Check for updates</code> or <code>Check for beta updates</code> until you get offered 0.91.1 or higher, then install that update and re-launch the Karabiner Elements app.</li>
<li>You probably want to remove the example entry in the Simple Modifications tab.</li>
<li>Edit <code>~/.config/karabiner/karabiner.json</code></li>
<li>Find the <code>simple_modifications</code> section, and right after it, paste in:</li>
</ul>
<div class="highlight"><pre><span></span><code><span class="nt">"complex_modifications"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"rules"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"manipulators"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Change caps_lock to command+control+option+shift."</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"from"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"key_code"</span><span class="p">:</span><span class="w"> </span><span class="s2">"caps_lock"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"modifiers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"optional"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"any"</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">},</span>
<span class="w"> </span><span class="nt">"to"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"key_code"</span><span class="p">:</span><span class="w"> </span><span class="s2">"left_shift"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"modifiers"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"left_command"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"left_control"</span><span class="p">,</span>
<span class="w"> </span><span class="s2">"left_option"</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">],</span>
<span class="w"> </span><span class="nt">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"basic"</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">]</span>
<span class="p">},</span>
</code></pre></div>
<ul>
<li>As soon as you save the file, Elements will notice it has changed, and reload its config. You should immediately have a working Hyper key 😁</li>
</ul>
<p>If you're not confident at your ability to hand-merge JSON like this, and don't need anything from Elements other than the basic defaults, plus Hyper, feel free to grab <a href="https://gist.githubusercontent.com/cmsj/23ca8a570c060e8ccb2a36cee70ed28b/raw/60cb590411193f24a56fb8f52f96093c5191ba22/karabiner.json">my config</a> and drop it in <code>~/.config/karabiner/</code>.</p>
<p><em>Supplemental note for High Sierra</em></p>
<p>I've only tested this very briefly on High Sierra, but I had to disable SIP to get the Elements <code>.kext</code> to load. I'm not quite sure what's going on, but I reported it <a href="https://github.com/tekezo/Karabiner-Elements/issues/777">on GitHub</a>. (Note that you can re-enable SIP after the kext has been loaded successfully once)</p>
<p><em>Update</em></p>
<p>Many people like to turn Caps into Hyper, but also have it behave as Escape if it is tapped on its own. As of Karabiner Elements 0.91.3 <a href="https://twitter.com/ttscoff/status/875029764377108480">this appears</a> to be possible by adding this to the manipulator:</p>
<div class="highlight"><pre><span></span><code><span class="nt">"to_if_alone"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"key_code"</span><span class="p">:</span><span class="w"> </span><span class="s2">"escape"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"modifiers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"optional"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="s2">"any"</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">}</span>
<span class="p">],</span>
</code></pre></div>
<p>(thanks to <a href="http://brettterpstra.com/">Brett Terpstra</a> for the sample of this)</p>New blog setup2017-06-09T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2017-06-09:/2017/06/09/new-blog.html<p>Bye bye Blogger, hello GitHub Pages. This means no more comments, and probably a bunch of posts have terrible formatting at the moment, but at least my blog is now just static Markdown files I can edit nicely 😁</p>
<p>(Also the domain has changed, cmsj.net is the new tenshu.net …</p><p>Bye bye Blogger, hello GitHub Pages. This means no more comments, and probably a bunch of posts have terrible formatting at the moment, but at least my blog is now just static Markdown files I can edit nicely 😁</p>
<p>(Also the domain has changed, cmsj.net is the new tenshu.net :)</p>Changing GPG key2016-08-30T00:00:00+01:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2016-08-30:/2016/08/30/changing-gpg-key.html<p>It's been 15 years and my GPG key is now looking hilariously out of date - 1024bit DSA key. Yuck.
So, time to start over. Below is a statement about the change, with details of my new and old keys, signed by both keys. Since it will almost certainly not paste …</p><p>It's been 15 years and my GPG key is now looking hilariously out of date - 1024bit DSA key. Yuck.
So, time to start over. Below is a statement about the change, with details of my new and old keys, signed by both keys. Since it will almost certainly not paste properly out of a browser, I have also uploaded it to GitHub: <a href="https://gist.githubusercontent.com/cmsj/3093f03e085239de0e64ea33d1b2bfab/raw/82c2cb9110e83f0aae4a8155aec954c3aa9aab3b/gpg-migration.txt.asc.asc">here</a>.</p>
<div class="highlight"><pre><span></span><code><span class="gh">-----BEGIN PGP SIGNED MESSAGE-----</span>
<span class="na">Hash</span><span class="o">:</span><span class="w"> </span><span class="s">SHA256</span>
<span class="na">- -----BEGIN PGP SIGNED MESSAGE-----</span>
<span class="na">Hash</span><span class="o">:</span><span class="w"> </span><span class="s">SHA256</span>
<span class="na">Hello world</span>
<span class="na">My name is Chris Jones and I am changing my GPG key.</span>
<span class="na">The original key fingerprint is</span><span class="o">:</span><span class="w"> </span><span class="s">6C99 9021 9B3A EC6D 4A28 7EE7 C574 7646 7313 2D75</span>
<span class="na">The new key fingerprint is</span><span class="o">:</span><span class="w"> </span><span class="s">79D2 2E89 6591 210E 45F3 75D3 BCA2 36E2 E19F 727D</span>
<span class="na">You should find this message signed by the new key, and that combined message signed by the old key, as proof that I am doing this. I have also updated my keybase.io account and secured a couple of signatures on the new key to start rebuilding my place in the web of trust.</span>
<span class="na">Thank you for your time,</span>
<span class="na">Chris</span>
<span class="na">- -----BEGIN PGP SIGNATURE-----</span>
<span class="na">Version</span><span class="o">:</span><span class="w"> </span><span class="s">GnuPG/MacGPG2 v2.0</span>
<span class="na">iQIcBAEBCAAGBQJXxffoAAoJECwUb1eIgw1qUMEP+wfaB84vYC4tHdo9OS9szaQv</span>
<span class="na">cmNW0sIuDx/RQr4iYrcs+8QTQf2I3FPXUc6auhyF4J7Lntu67sTKI9zyfsDm0IW9</span>
<span class="na">NbIzysTP1Y35lJPA12VM9O9IRaf5G7J57BKAmAuUbpnY7icIzA0MoD4SwCgcRXtA</span>
<span class="na">QPZd8JVPDLaNpwb1O5rdQLBAdo+OUjF+bB8jZzzoORX0oVVdGhaGJFVhuq8aV1dY</span>
<span class="na">0YB/nZr71ZApUVnvSZBfj1FHsgXZ5Fai70iI+oAox/Gj6BJ0IJhBIk5hLAzAiXoR</span>
<span class="na">8EMmtqkWkKc8Jd3NonMzKRGF+qT+G3YuZIDmSptOZWjJ5volT25bYFknEBxPslwC</span>
<span class="na">TiBGSs6rKz9RhfxGxCmM350zBIVFFCn+RNCWrgn/z4OJp4xSvZJ0IwqB/CwkrmUb</span>
<span class="na">0ZhG44O2W5lpuSCDh1dhCsiryq4JeSiUy1GENyHl8eIkXjzDOjKTt6OT8wYFFPL7</span>
<span class="na">XheyIfvMbRNh86o79Skch6Qoyh7nvVALAwsLVHKSDtQRzHbVF6ED9h2ISxdABiZ2</span>
<span class="na">CkiJ95bf8JQeNVoqLJ78uwSYN96AyGPXfMQKG45SavgkzNLyqeoI1iMJE2yYuVIy</span>
<span class="na">Z9XUaKhoDI9ERLps+Fw6NY+v2BVSKTvl5MDEDHmfvjK0m3x6C4tF3/QdgwRsJmvE</span>
<span class="na">2XMWXLrBkcZx9tnYNFIG</span>
<span class="na">=9PwA</span>
<span class="na">- -----END PGP SIGNATURE-----</span>
<span class="na">-----BEGIN PGP SIGNATURE-----</span>
<span class="na">Version</span><span class="o">:</span><span class="w"> </span><span class="s">GnuPG/MacGPG2 v2.0</span>
<span class="s">iQEcBAEBCAAGBQJXxff1AAoJEPH//8xn2jUBNOcIAL4fqY/DViwNj2/Va4ePEg1w</span>
<span class="s">9uUWhGnMUcG4CsQGkhtODU6Qg45inrWnG0VE8jwhGilP4w5tQoIe+m53cUp5m/Rv</span>
<span class="s">ueRBvfDxBw/SrF6eFZ1SGkXv6kcUkOYjueKsDtxObaX9dN7PrDUljtZWpGTzE77k</span>
<span class="s">5EWPGfUT89oXa2eGwYnr6t7t9f76cO9eKFck7rIWT+p1tzmF6amm7IjoS8gsjfSb</span>
<span class="s">lPk3PoC0G71wSseh7iesIgw+vZRZ7tYg59RdpwWLmZjQJVhMzW/QpX87CPAM2m0A</span>
<span class="s">OcyivXLlbaSZ58AsHZSIA4ZjeoDnlWNsFHemBUSAOMa03b4JtgnGbTaHTZhiLc8=</span>
<span class="s">=/lkZ</span>
<span class="gh">-----END PGP SIGNATURE-----</span>
</code></pre></div>Raspberry Pi project: PiBus2015-12-08T00:00:00+00:002023-11-22T17:29:51+00:00Chris Jonestag:cmsj.net,2015-12-08:/2015/12/08/raspberry-pi-project-pibus.html<p>The premise of this project is simple - my family lives in London, there's a bus route that runs along our road, and we use it a lot to take the kids places.
The reality of using the bus is a little more complex - getting three kids ready to go out …</p><p>The premise of this project is simple - my family lives in London, there's a bus route that runs along our road, and we use it a lot to take the kids places.
The reality of using the bus is a little more complex - getting three kids ready to go out is a nightmare, and it's not made any easier by checking a travel app on your phone to see how close the bus is.</p>
<p>I don't think there is much I can do to make the preparation of the children any less manic, but I can certainly do something about the visibility of information about buses, mainly thanks to the excellent open APIs/data provided by <a href="https://tfl.gov.uk/info-for/open-data-users/">Transport For London</a>.</p>
<p>So, armed with their API, a Python 3 interpreter and a <a href="https://www.raspberrypi.org/">Raspberry Pi</a>, I set out to make a little box for the kitchen wall which will show when the next 3 buses are due to arrive outside our house.</p>
<p><a href="https://github.com/cmsj/pibus">The code itself</a> is easy enough to throw together because Python has libraries for everything (it also helps if you don't bother to design a decent architecture!). <a href="http://docs.python-requests.org/en/latest/">Requests</a> to fetch the bus data from TfL, <a href="https://docs.python.org/3/library/json.html">json</a>/<a href="https://pypi.python.org/pypi/iso8601">iso8601</a> to parse the data, <a href="https://python-pillow.github.io/">Pillow</a> to render it as an image, and <a href="https://apscheduler.readthedocs.org/en/latest/">APScheduler</a> to give it a simple run-loop.</p>
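<p>The core of the fetch-and-parse step boils down to very little code. Here's a rough sketch of the time arithmetic; the function name and the input shape are simplified illustrations, not the exact pibus code or the exact TfL response format:</p>

```python
from datetime import datetime, timezone

def minutes_until(arrivals, now, limit=3):
    """Given ISO-8601 arrival times, return whole minutes until the
    next `limit` buses, soonest first. Arrivals in the past are skipped."""
    times = sorted(datetime.fromisoformat(t) for t in arrivals)
    return [int((t - now).total_seconds() // 60) for t in times if t > now][:limit]

now = datetime(2015, 12, 8, 14, 0, tzinfo=timezone.utc)
arrivals = [
    "2015-12-08T14:04:00+00:00",
    "2015-12-08T14:11:30+00:00",
    "2015-12-08T13:55:00+00:00",  # already gone
    "2015-12-08T14:27:00+00:00",
]
print(minutes_until(arrivals, now))  # -> [4, 11, 27]
```

<p>In the real script the arrivals list comes from a Requests call to the TfL API and the result is rendered with Pillow; this only shows the middle step.</p>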
<p>The question then becomes, how to display the data. The easiest answer would be a little LCD screen, but that brings with it the downside of having a backlight in the kitchen, which would be ugly and distracting, and it also raises the question of viewing angles. Another answer would be some kind of physical indicator, but that requires skills I don't have time for. Instead, I decided to look for an E-Ink display (think <a href="https://kindle.amazon.com/">Kindle</a>) - it would let me display simple images without producing light.</p>
<p>The first option I looked at was the <a href="https://www.kickstarter.com/projects/pisupply/papirus-the-epaper-screen-hat-for-your-raspberry-p">PaPiRus</a>, but it's in the window between its crowdfunding drive having finished and being available to buy. The only other option I could find was the <a href="http://www.percheron-electronics.uk/shop/e-paper-hat/">E-Paper HAT</a> from Percheron Electronics, which also started life as a crowdfunding project, but is actually available to buy.</p>
<p>Unfortunately, these displays are super fragile, which I discovered by destroying the first one, but Neil at Percheron was super helpful and I quickly had a new display and some tips about how to avoid cracking it.</p>
<p>My visualisation of this data isn't going to win any awards for beauty, but it serves its purpose by showing a big number to tell us how many minutes we have, and I managed to minimise the number of times you see the white-black-white refresh cycle of the eInk display with partial screen updates.
Here are some photos of the project in various stages of construction:</p>
<p><img alt="Freshly assembled out of the box" src="http://cmsj.net/IMG_6856.JPG">
<em>Freshly assembled out of the box</em></p>
<p><img alt="The smallest USB WiFi adapter I've ever seen" src="http://cmsj.net/IMG_6857.JPG">
<em>The smallest USB WiFi adapter I've ever seen!</em></p>
<p><img alt="Modified PiBow case" src="http://cmsj.net/IMG_6858.JPG">
<em>Sadly I had to make some modifications to the PiBow case to fit this particular rPi</em></p>
<p><img alt="Running an eInk test program" src="http://cmsj.net/IMG_6864.JPG">
<em>Running one of the eInk display test programs</em></p>
<p>Initially I was rather hoping I could use the famous font that TfL (and London Transport before it) use, which is known as Johnston, but sadly they will not licence the font outside their own use and use by contracted partners. There is a third party clone of the font, but it may have legal issues, presumably because TfL values their braaaaaand. Instead, I decided to just drop the idea of shipping a font with the code, and exported Courier.ttf from my laptop to the Pi directly.</p>
<p><img alt="TfL font" src="http://cmsj.net/IMG_6897.JPG">
<em>This would have been nice, but I cannot have nice font things</em></p>
<p>I did briefly try Ubuntu Mono, which is a lovely font, but the zeros look like eyes and it freaked me out.</p>
<p><img alt="PIBUS IS WATCHING YOU" src="http://cmsj.net/Screenshot_2015-12-08_14.03.28.png">
<em>PIBUS IS WATCHING YOU</em></p>
<p>The display needs to handle various different situations, most obviously, when no data can be fetched from the API. Rather than get too bogged down in the details of whether our Internet connection is down, TfL's API servers are down, London is on fire, or it's just night time and there are no buses, I went for a simple message with a timestamp. Once this has been displayed, the code skips any further screen updates until it has valid data again. This makes it easy to see when a problem occurred.</p>
<p><img alt="Date message" src="http://cmsj.net/IMG_7010.JPG">
<em>Maybe aliens stole the Internet, maybe it's a bus strike. It doesn't matter.</em></p>
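<p>The "show the error once, then stop redrawing until data is valid again" behaviour can be sketched like this. The class and callback names are stand-ins of my own, not names from the pibus source; <code>render</code> and <code>render_error</code> represent the real e-ink drawing code:</p>

```python
class ScreenUpdater:
    """Gate screen redraws: draw the error screen once when data goes
    away, skip redraws while it's still away, resume on valid data."""

    def __init__(self, render, render_error):
        self.render = render
        self.render_error = render_error
        self.error_shown = False

    def update(self, buses):
        if buses is None:                # fetch failed
            if not self.error_shown:     # draw the timestamped error once...
                self.render_error()
                self.error_shown = True  # ...then skip further redraws
        else:
            self.render(buses)           # valid data: always redraw
            self.error_shown = False
```

<p>Skipping the redraws also avoids the eInk white-black-white flash while nothing useful has changed.</p>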
<p>I also render a small timestamp on valid data screens too, showing when the last data fetch happened. This is mostly so I can be sure that the fetching code isn't stuck somehow. Once I trust the system a bit more, this can probably come out.</p>
<p><img alt="Final design" src="http://cmsj.net/IMG_7012.JPG">
<em>The final design, showing a fallback for when there is data for 0 &lt; x &lt; 3 buses</em></p>
<p><img alt="Three buses" src="http://cmsj.net/IMG_7013.JPG">
<em>Data for three buses, plenty of time to get ready for the second one!</em></p>
<p>So there it is, project completed! Grab the code from <a href="https://github.com/cmsj/pibus">https://github.com/cmsj/pibus</a>, install the requirements on a Pi, give money to the awesome Percheron Electronics for the E-Paper HAT (and matching PiBow case), throw a font in the directory and edit the scripts for the bus stop and bus route that you care about!</p>Sending iMessages and SMS through Messages.app with AppleScript2015-02-20T00:00:00+00:002018-09-18T16:55:02+01:00Chris Jonestag:cmsj.net,2015-02-20:/2015/02/20/send-imessage-and-sms-with-applescript.html<p>I was searching around for ways to automate sending iMessages, so I could write a plugin for <a href="http://www.hammerspoon.org/">Hammerspoon</a>. I found various scripts lurking around the place for sending iMessages, but I also found one that can send SMS if you have SMS Relay enabled (which means you need OS X …</p><p>I was searching around for ways to automate sending iMessages, so I could write a plugin for <a href="http://www.hammerspoon.org/">Hammerspoon</a>. I found various scripts lurking around the place for sending iMessages, but I also found one that can send SMS if you have SMS Relay enabled (which means you need OS X 10.10 and an iPhone running iOS 8.1).
I figured I'd collect them as a single post, to help future searchers, so without further ado, here are two stripped down AppleScript snippets that let you control Messages.app to send either an iMessage, or an SMS.
Firstly, sending an iMessage:</p>
<div class="highlight"><pre><span></span><code><span class="k">tell</span> <span class="nb">application</span> <span class="s2">"Messages"</span>
<span class="nv">send</span> <span class="s2">"This is an iMessage"</span> <span class="k">to</span> <span class="nv">buddy</span> <span class="s2">"foo@bar.com"</span> <span class="k">of</span> <span class="p">(</span><span class="nv">service</span> <span class="mi">1</span> <span class="nb">whose</span> <span class="nv">service</span> <span class="nv">type</span> <span class="ow">is</span> <span class="nv">iMessage</span><span class="p">)</span>
<span class="k">end</span> <span class="k">tell</span>
</code></pre></div>
<p>The buddy address can be either an email or a phone number that's registered with Apple for use with iMessage.
Secondly, sending an SMS:</p>
<div class="highlight"><pre><span></span><code><span class="k">tell</span> <span class="nb">application</span> <span class="s2">"Messages"</span>
<span class="nv">send</span> <span class="s2">"This is an SMS"</span> <span class="k">to</span> <span class="nv">buddy</span> <span class="s2">"+1234567890"</span> <span class="k">of</span> <span class="nv">service</span> <span class="s2">"SMS"</span>
<span class="k">end</span> <span class="k">tell</span>
</code></pre></div>
<p>Here, the buddy address should be a phone number.
Simple!
(and for the Hammerspoon users, you'll find hs.messages available in the next release, 0.9.23)</p>The curious Moto X pricing2013-08-02T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-08-02:/2013/08/02/quick-thought-on-moto-x.html<p>Comparing the Moto X to the Nexus 4 is interesting in one particular respect - the price.
The Nexus 4 (made by LG, sold by Google) had very respectable specs when it was launched, but its price was surprisingly low ($300 off contract). We were told this was because it was …</p><p>Comparing the Moto X to the Nexus 4 is interesting in one particular respect - the price.</p>
<p>The Nexus 4 (made by LG, sold by Google) had very respectable specs when it was launched, but its price was surprisingly low ($300 off contract). We were told this was because it was being sold very close to cost price.</p>
<p>The Moto X (made by Motorola, which is owned by Google) has mid-range specs, but its price is surprisingly high ($200 up front <em>and</em> an expensive two year contract). Overall, Motorola is probably getting something like $400-$600 for each Moto X sold, when you factor in the carrier subsidy.</p>
<p>The inevitable question is why Google is happy to make almost no money off the Nexus 4, but wants to have its Motorola division make a respectable margin on the Moto X.</p>
<ul>
<li>Is it because doing otherwise would undermine the carriers' abilities to sell other phones, so they would refuse to do it?</li>
<li>Is it because Google wants the Motorola division to look good in their accounts, which is easier if you are selling mid-range phones for the kind of money that an iPhone sells for?</li>
<li>Something else?</li>
</ul>Moving on from Terminator2013-07-17T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-07-17:/2013/07/17/moving-on-from-terminator.html<p>Anyone who's been following Terminator knows this post has been a long time coming and should not be surprised by it.</p>
<p>As of a few days ago, I have handed over the reins of the project to the very capable Stephen J Boddy (a name that will be no stranger …</p><p>Anyone who's been following Terminator knows this post has been a long time coming and should not be surprised by it.</p>
<p>As of a few days ago, I have handed over the reins of the project to the very capable Stephen J Boddy (a name that will be no stranger to followers of the Terminator changelogs - he has contributed a great deal over the last few years).</p>
<p>We're still working out the actual details of the handover, so for now the website is still here and I am still technically the owner of the Launchpad team that runs the project, but going forward all code/release decisions will come from Stephen and we'll move the various administrivia over to new ownership in due course.</p>
<p>Everyone please grab your bug trackers and your python interpreters and go send Stephen patches and feature requests! :D</p>Some Keyboard Maestro macros2013-07-17T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-07-17:/2013/07/17/some-keyboard-maestro-macros.html<p>I've started using <a href="http://www.keyboardmaestro.com/main/">Keyboard Maestro</a> recently and it is one impressive piece of software.
Here are a couple of the macros I've written that aren't completely tied to my system:</p>
<p><a href="https://dl.dropboxusercontent.com/u/20103940/Type%20current%20Safari%20URL.kmmacros">Type current Safari URL</a>
- This will type the URL of the frontmost Safari tab/window into your current application. Handy …</p><p>I've started using <a href="http://www.keyboardmaestro.com/main/">Keyboard Maestro</a> recently and it is one impressive piece of software.
Here are a couple of the macros I've written that aren't completely tied to my system:</p>
<p><a href="https://dl.dropboxusercontent.com/u/20103940/Type%20current%20Safari%20URL.kmmacros">Type current Safari URL</a>
- This will type the URL of the frontmost Safari tab/window into your current application. Handy if you're chatting with someone and want to paste them a hilarious YouTube URL without switching apps, copying the URL to the clipboard and switching back.
- It does not use the clipboard, it actually types the URL into the current application, so any modifier keys you hold will change what is being typed. I've configured the macro to fire when the specified hotkey is released, to minimise the chances of this happening.</p>
<p><a href="https://dl.dropboxusercontent.com/u/20103940/Toggle%20Caffeine.kmmacros">Toggle Caffeine</a>
- Very simple, just toggles the state of <a href="http://lightheadsw.com/caffeine/">Caffeine</a> with a hotkey.</p>The (simplest) case against a new Mac Pro at WWDC2013-06-10T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-06-10:/2013/06/10/the-case-against-new-mac-pro-at-wwdc.html<p>This is pretty simple really - unless Apple wants to launch a new Mac Pro and have it be out of date almost immediately, they need to wait until Intel has released Ivy Bridge Xeons, which won't be until next month at the earliest (and given the delays with Haswell, July …</p><p>This is pretty simple really - unless Apple wants to launch a new Mac Pro and have it be out of date almost immediately, they need to wait until Intel has released Ivy Bridge Xeons, which won't be until next month at the earliest (and given the delays with Haswell, July seems unlikely). Also coming later this year on Intel's roadmap is the introduction of Thunderbolt 2.</p>
<p>Both of these things would seem like an excellent foundation for a new line of professional Macs.</p>
<p>Given the very short list of hardware model numbers that leaked ahead of today's WWDC keynote, my guess is that Apple is going to hold a pro focused event in 2-4 months and refresh MacBook Pros, Mac Pros and hopefully the surrounding halo like Thunderbolt displays (which are crying out for the new iMac style case, the newer non-glossy screen, USB3.0 and soon, Thunderbolt 2) and the pro software Apple sells.</p>
<p>Having a pro-only event would also help calm the worries that Apple has stopped caring about high-value-low-volume professional users.</p>Thoughts on a modular Mac Pro2013-06-10T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-06-10:/2013/06/10/thoughts-on-modular-mac-pro.html<p>There have been some rumours recently that the next iteration of the Mac Pro is going to be modular, but we have had very little information about how this modularity might be expressed.
In some ways the current Mac Pro is already quite modular - at least compared to every other …</p><p>There have been some rumours recently that the next iteration of the Mac Pro is going to be modular, but we have had very little information about how this modularity might be expressed.</p>
<p>In some ways the current Mac Pro is already quite modular - at least compared to every other Mac/MacBook. You have easy access to lots of RAM slots, you have multiple standards-compliant disk bays, PCI slots and CPU sockets. This affords the machine an extremely atypical level of upgradeability and expandability, for a Mac. Normal levels for a PC though.</p>
<p>Even with that modularity in mind, the machine itself is fairly monolithic - if you do need more than 4 disk drives, or more PCI cards than it can take, you have limited or no expansion options. You could burn a PCI slot for a hardware disk controller and attach some disks to it externally, but you are quickly descending into an exploding mess of power supplies, cables and cooling fans.</p>
<p>If Apple decides to proceed along that route, the easiest and most obvious answer is that they slim down the main Pro itself and decree that all expansion shall take place over Thunderbolt (currently 10Gb/s bidirectional, but moving to 20Gb/s bidirectional later this year when the Thunderbolt 2 Falcon Ridge controllers launch). This is a reasonable option, but even though Thunderbolt is essentially an external PCI-Express bus, its available bandwidth is considerably lower than the peak levels found on an internal PCI-E bus (currently around 125Gb/s).</p>
<p>A much better option, it would seem to me, would be to be able to go <strong><em>radically</em></strong> modular and expand the Mac itself, but how could that be possible? How can you just snap on some more PCI slots if you want those, or some more disks if that's what you need?</p>
<p>I will say at this point that I have absolutely no concrete information and I am not an electronic engineer, so what you read below is poorly informed speculation and should be treated as that :)</p>
<p>I think the answer is Intel's QuickPath Interconnect (QPI), a high bandwidth (over 200Gb/s), low latency point-to-point communication bus for connecting the main components of an Intel based computer.</p>
<p>If you have any Intel CPU since around 2009, you probably have a QPI bus being used in your computer. Looking at the latest iteration of their CPUs, QPI is always present - on the uniprocessor CPUs it is used on the chip package to connect the CPU core to the elements of the northbridge that have migrated into the CPU package (such as the PCI-Express controller); however, on these chips the QPI bus is not presented externally. On the multiprocessor-capable chips it is, and is the normal way to interconnect the CPUs themselves, but it can be used for other point-to-point links, such as additional north bridges providing PCI-Express busses.</p>
<p>So you could buy a central module from Apple that contains 1, 2 or 4 CPUs (assuming Ivy Bridge Xeons) and all of the associated RAM slots, with maybe two minimal disk bays for the core OS to boot from, and a few USB3.0 and Thunderbolt ports. For the very lightest of users, this would likely be a complete computer - you have some disk, some RAM, CPUs and, assuming the Xeons carry integrated GPUs, the Thunderbolt ports can output video. It would not be much of a workstation, but it would essentially be a beefed up Mac Mini.</p>
<p>I would then envision two kinds of modules that would stack on to the central module. The simplest kind would be something like a module with a disk controller chip and a load of disk bays and, not needing the raw power of QPI, this would simply connect to the existing PCI-Express bus of the main module. There would clearly be a limit to how many of these modules you could connect, since there are a limited number of PCI-E lanes provided by any one controller (typically around 40 lanes on current chipsets), but with the second type of module, you could then take the expansion up a considerable number of notches.</p>
<p>That second kind would have a large and dense connector carrying a QPI link. These modules could then attach whatever they wanted to the system - more CPUs (up to whatever maximum is supported by that generation of Xeon - likely 8 in Ivy Bridge), or very very powerful IO modules. My current working example of this is a module that is tasked with capturing multiple 4K video streams to disk simultaneously.</p>
<p>This module would provide its own PCI-Express controller (linked back to the main module over QPI), internally connected to a number of video capture chips/cards and to one or more disk controller chips/cards which would connect to a number of disk bays. It sounds a lot like what would happen inside a normal PC, just without the CPU/RAM, and that's because it's exactly that.</p>
<p>This would allow for all of the video capture to be happening within the module. It would be controlled as normal from the software running in the main module, which would be issuing the same instructions as if the capture hardware was on the main system PCI-E bus, causing the capture cards to use DMA to write their raw video directly to the disk controller exactly as if they were on the main system PCI-E bus. The difference would be that there would be no other hardware on the PCI-E bus, so you would be able to make reasonable promises around latency and bandwidth, knowing that no user is going to have a crazy extra set of cards in PCI slots, competing for bandwidth. Even if you have two of these modules capturing a really silly amount of video simultaneously. It's a model for being able to do vast amounts of IO in parallel in a single computer.</p>
<p>There would almost certainly need to be a fairly low limit on the number of QPI modules that could attach to the system, but being able to snap on even two or three modules would elevate the maximum capabilities of the Pro to levels far beyond almost any other desktop workstation.</p>
<p>As a prospective owner of the new Mac Pro, my two reasonable fears from this are:</p>
<ul>
<li>They go for the Thunderbolt-only route and my desk looks like an awful, noisy mess</li>
<li>They go for the radical modularity and I can't afford even the core module</li>
</ul>
<p>(While I'm throwing around random predictions, I might as well shoot for a name for the radical modularity model. I would stick with the Lightning/Thunderbolt IO names and call it Super Cell)
Edit: I'd like to credit Thomas Hurst for helping to shape some of my thinking about QPI.</p>My attempt at iPad repair.2013-05-01T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-05-01:/2013/05/01/my-attempt-at-ipad-repair.html<p>[<a href="//storify.com/cmsj/my-attempt-at-ipad-repair">View the story "My attempt at iPad repair" on Storify</a>]</p>Alfred 2 clipboard history2013-04-30T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-04-30:/2013/04/30/alfred-2-clipboard-history.html<p>The toweringly awesome Alfred 2 app for OS X has a great clipboard history browser. This is how I suggest you configure and use it:</p>
<ul>
<li>Map a hotkey to the viewer (I suggest making it something involving the letter V, since Cmd-V is a normal Paste. I use Cmd-Shift-Option-Ctrl V …</li></ul><p>The toweringly awesome Alfred 2 app for OS X has a great clipboard history browser. This is how I suggest you configure and use it:</p>
<ul>
<li>Map a hotkey to the viewer (I suggest making it something involving the letter V, since Cmd-V is a normal Paste. I use Cmd-Shift-Option-Ctrl V because I have my Caps Lock key mapped to Cmd-Shift-Option-Ctrl)</li>
<li>Turn off the option to show snippets at the top of the Clipboard History, because snippets are a whole different thing and not relevant to pasting history</li>
<li>Turn on the option to auto-paste when you hit Enter on a given item</li>
</ul>
<p>With these options all configured, all you have to do is hit the hotkey, select the old clipboard item you want and hit Enter. It will then be pasted into the active window.</p>
<p>This is also useful to preview the current contents of the clipboard before pasting (which is always a good idea if you're pasting into a sensitive terminal or a work IRC channel and want to avoid spamming some random/harmful nonsense in).</p>Terminator 0.97 released!2013-04-30T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-04-30:/2013/04/30/terminator-097-released.html<h2>The present:</h2>
<p>It's been a very long road since Terminator 0.96 back in September 2011, but I'm very happy to announce that Terminator 0.97 was released over breakfast this morning.
There's a reasonable amount of change, but almost all of it is bug fixes and translations.
Here is …</p><h2>The present:</h2>
<p>It's been a very long road since Terminator 0.96 back in September 2011, but I'm very happy to announce that Terminator 0.97 was released over breakfast this morning.
There's a reasonable amount of change, but almost all of it is bug fixes and translations.
Here is the changelog:</p>
<ul>
<li>Allow font dimming in inactive terminals</li>
<li>Allow URL handler plugins to override label text for URL context menus</li>
<li>When copying a URL, run it through the URL handler first so the resulting URL is copied, rather than the original text</li>
<li>Allow users to configure a custom URL handler, since the default Gtk library option is failing a lot of users in non-GNOME environments</li>
<li>Allow rotation of a group of terminals (Andre Hilsendeger)</li>
<li>Add a keyboard shortcut to insert a terminal's number (Stephen J Boddy)</li>
<li>Add a keyboard shortcut to edit the window title (Stephen J Boddy)</li>
<li>Add an easy way to balance terminals by double clicking on their separator (Stephen J Boddy)</li>
<li>Add a plugin to log the contents of terminals (Sinan Nalkaya)</li>
<li>Support configuration of TERM and COLORTERM (John Feuerstein)</li>
<li>Support reading configuration from alternate files (Pavel Khlebovich)</li>
<li>Allow creation of new tabs in existing Terminator windows, using our DBus API</li>
<li>Support the Solarized colour palettes (Juan Francisco Cantero Hutardo)</li>
<li>Translation support for the Preferences window</li>
<li>Lots of translation updates (from our fantastic translation community)</li>
<li>Lots of bug fixes</li>
</ul>
<p>My sincere thanks to everyone who helped out with making this release happen.</p>
<h2>The future:</h2>
<p>So. Some of you might be wondering why this release isn't called 1.0, as it was tagged for a while in the development code. The main reason is that I just wanted to get a release out, without blocking on the very few remaining bugs/features targeted for the 1.0 release. I hope we'll get to the real 1.0 before very long (and certainly a lot quicker than the gap between 0.96 and 0.97!)</p>
<p>However, I do think that the Terminator project is running out of steam. Our release cadence has slowed dramatically and I think we should acknowledge that. It's entirely my fault, but it affects all of the userbase.</p>
<p>I am planning on driving Terminator to the 1.0 release, but the inevitable question is what should happen with the project after that.</p>
<p>The fact is that, like the original projects that inspired Terminator (gnome-multi-term, quadkonsole, etc.), technology is moving under our feet and we need to keep up or we will be obsolete and unable to run on modern open source desktops.</p>
<p>There is a very large amount of work required to port Terminator to using both Gtk3 and the GObject Introspection APIs that have replaced PyGtk. Neither of these porting efforts can be done in isolation and to make matters more complicated, this also necessitates porting to Python 3.</p>
<p>I am not sure that I can commit to that level of effort in a project that has, for my personal needs, been complete for about 5 years already.</p>
<p>With that in mind, if you think you are interested in the challenge, and up to the task of taking over the project, please talk to me (email cmsj@tenshu.net or talk to Ng in #terminator on Freenode). My suggestion would be that a direct, feature-complete port to Python3/Gtk3/GObject would immediately bump the version number to 2.0 and then get back to thinking about features, bug fixes and improving what we already have.</p>Some more awesome Alfred 2 workflows2013-04-11T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-04-11:/2013/04/11/some-more-awesome-alfred-2-workflows.html<p>I keep finding super handy little things to do with Alfred 2 and so I thought I'd post some more:</p>
<ul>
<li><a href="http://www.alfredforum.com/topic/1582-alleyoop-update-alfred-workflows/?hl=alleyoop">Alleyoop</a> - updates installed plugins (if the workflow author supports it, which many currently do not). I hope this will be a temporary workaround until a centralised workflow repository is created …</li></ul><p>I keep finding super handy little things to do with Alfred 2 and so I thought I'd post some more:</p>
<ul>
<li><a href="http://www.alfredforum.com/topic/1582-alleyoop-update-alfred-workflows/?hl=alleyoop">Alleyoop</a> - updates installed plugins (if the workflow author supports it, which many currently do not). I hope this will be a temporary workaround until a centralised workflow repository is created.</li>
<li><a href="http://www.alfredforum.com/topic/1211-battery-view-summary-stats-about-your-laptop-battery/?hl=battery">Battery</a> - shows all the vital stats of your MacBook's battery without having to run an app or a Terminal command.</li>
<li><a href="http://www.alfredforum.com/topic/1805-share-using-mountain-lion-built-in-sharing-version-17/?hl=%2Bbuilt+%2Bsharing">Built-in Sharing</a> - lets you share files directly to all the social services that OS X supports.</li>
<li><a href="http://www.alfredforum.com/topic/1551-paste-current-url-from-safari-into-focussed-application/?hl=%2Bpaste+%2Bcurrent+%2Bsafari">Paste current Safari URL</a> - a workflow I wrote, which pastes the URL of Safari's currently visible webpage into the application you are using. No need to flip back and forth to copy and paste the URL.</li>
<li><a href="https://github.com/bevesce/unicode-symbols-search/raw/master/Symbols.alfredworkflow">Symbols</a> - very easy, visual way to search the Unicode tables for a symbol you're looking for (e.g. arrows, hearts, snowmen, biohazard warning signs, etc.)</li>
<li><a href="https://github.com/LeEnno/alfred-terminalfinder/blob/master/TerminalFinder.alfredworkflow?raw=true">TerminalFinder</a> - lets you quickly get a Terminal for the Finder window you're looking at.</li>
</ul>
<p>I imagine there will be more to come, the total number of workflows is <strong>exploding</strong> at the moment!</p>Alfred 2 workflows2013-03-16T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-03-16:/2013/03/16/alfred-2-workflows.html<p>Since I started using OS X as my primary desktop, I've loved Spotlight for launching apps and finding files. I resisted trying any of the replacement apps, for fear of the bottomless pit of customisation that they seemed to offer.
With the very recent release of Alfred 2, I was …</p><p>Since I started using OS X as my primary desktop, I've loved Spotlight for launching apps and finding files. I resisted trying any of the replacement apps, for fear of the bottomless pit of customisation that they seemed to offer.
With the very recent release of Alfred 2, I was finally tempted to try it by the previews of their Workflow feature.
The idea is that you can add new commands to Alfred by writing scripts in bash/python/ruby/php and then neatly package them up and share them with others. I was expecting to write a few myself and share them, but the user community has been spinning up so quickly that they've already covered everything I was going to write.
Instead, I decided to spend some time writing about the workflows I'm using so far:</p>
<ul>
<li><a href="http://www.alfredforum.com/topic/940-google-search-in-line-results-workflow/">Google Search</a> - get live results from Google as you type. It's not always what I want when I'm searching, but it's a very quick way to get some insight into the results available.</li>
<li><a href="http://www.alfredforum.com/topic/1041-create-new-task-in-omnifocus-inbox/">New OmniFocus Inbox Task</a> - Very quick way to create a new task for later triage</li>
<li><a href="http://www.alfredforum.com/topic/826-ssh-with-smart-hostname-autocompletion/">Open SSH</a> - This collects all your hosts from SSH's known_hosts file, config file and local network, then opens terminal windows for you to ssh to the host you choose.</li>
<li><a href="http://www.alfredforum.com/topic/202-parallels-desktop-workflow/">Parallels Desktop</a> - Easy way to start/resume your Parallels virtual machines.</li>
<li><a href="http://www.alfredforum.com/topic/375-rate-itunes-track/">Rate iTunes Track</a> - does what it sounds like, rate the current iTunes track.</li>
<li><a href="http://www.alfredforum.com/topic/942-screen-sharing-with-automatic-network-discovery/">Screen Sharing</a> - quickly VNC to the hosts on your network that are advertising it (including iCloud hosts if you have Back To My Mac configured)</li>
<li><a href="http://www.alfredforum.com/topic/476-toggle-vpn/">VPN Toggle</a> - get on/off your corporate network quickly.</li>
</ul>
<p>Lots more on the Alfred 2 forums. At some point it would be nice to see this unified into some kind of integrated search/download feature of Alfred 2.</p>
<hr>
<p>Update: (2013-04-12) I've written a <a href="http://www.tenshu.net/2013/04/some-more-awesome-alfred-2-workflows.html">second post</a> that covers a few more workflows I've discovered since this one.</p>How the death of Google Reader looked in my Twitter timeline2013-03-14T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-03-14:/2013/03/14/how-death-of-google-reader-looked-in-my.html<p>[<a href="//storify.com/cmsj/how-the-death-of-google-reader-looked-in-my-twitte">Read this post on Storify</a>]</p>LCD and a crazy disk chassis2013-02-06T00:00:00+00:002023-11-22T17:28:51+00:00Chris Jonestag:cmsj.net,2013-02-06:/2013/02/06/lcd-and-crazy-disk-chassis.html<p>If you saw my <a href="http://www.tenshu.net/2013/01/funky-lcd4linux-python-module.html">recent post</a> on some preparatory work I'd been doing for the arrival of an LCD status panel for my HP Microserver, it's probably no surprise that there is now a post talking about its arrival :)</p>
<p>Rather than just waste the 5.25" bay behind the LCD …</p><p>If you saw my <a href="http://www.tenshu.net/2013/01/funky-lcd4linux-python-module.html">recent post</a> on some preparatory work I'd been doing for the arrival of an LCD status panel for my HP Microserver, it's probably no surprise that there is now a post talking about its arrival :)</p>
<p>Rather than just waste the 5.25" bay behind the LCD, I wanted to try and put some storage in there, particularly since the Microserver's BIOS can be <a href="http://www.avforums.com/forums/networking-nas/1521657-hp-n36l-microserver-updated-ahci-bios-support.html">modified</a> to enable full AHCI on the 5th SATA port.</p>
<p>I recently came across the Icy Box <a href="http://www.raidsonic.de/en/products/ssd.php?we_objectID=8206">IB-RD2121StS</a>, a hilarious piece of hardware. It's the size and shape of a normal 3.5" SATA disk, but the back opens up to take two 2.5" SATA disks. These disks can then be exposed either individually, or as a combined RAID volume (levels 0 or 1). Since I happen to have a couple of 1TB 2.5" disks going spare, this seemed like the perfect option, as well as being so crazy that I couldn't not buy it!</p>
<p>The LCD is a red-on-black pre-made 5.25" bay insert from <a href="http://www.lcdmodkit.com/">LCDModKit</a>. It has an <a href="http://www.harbaum.org/till/lcd2usb/index.shtml">LCD2USB</a> controller, which means it's very well supported by projects like <a href="http://ssl.bulix.org/projects/lcd4linux/">lcd4linux</a> and <a href="http://www.lcdproc.org/">lcdproc</a>. It comes with an internal USB connector (intended to connect directly to a motherboard USB header), except the Microserver's internal USB port is a regular external Type A port. Fortunately converters are easy to come by.</p>
<p>Something I hadn't properly accounted for in my earlier simulator work is that the real hardware only has space for 8 user-definable characters and I was using way more than that (three of my own custom icons, but lcd4linux's split bars and hollow graphs use custom characters too). Rather than curtail my own custom icons, I chose to stop using hollow graphs, which seems to have worked.</p>
<p><img alt="Icy Box enclosure" src="http://cmsj.net/IMG_5588.jpg">
<em>The Icy Box enclosure</em></p>
<p><img alt="Ta-da! The back opens up" src="http://cmsj.net/IMG_5592.jpg">
<em>Ta-da! The back opens up</em></p>
<p><img alt="Selector switch" src="http://cmsj.net/IMG_5590.jpg">
<em>Selector switch for which type of volume/RAID you want</em></p>
<p><img alt="Icy Box and LCD" src="http://cmsj.net/IMG_5598.jpg">
<em>Marrying the Icy Box and the LCD. Only a small amount of metalwork required</em></p>
<p><img alt="Box and LCD installed" src="http://cmsj.net/IMG_5600.jpg">
<em>Icy Box and LCD being installed</em></p>
<p><img alt="Finished installed" src="http://cmsj.net/IMG_5606.jpg">
<em>Finished install!</em></p>Funky lcd4linux python module2013-01-26T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2013-01-26:/2013/01/26/funky-lcd4linux-python-module.html<p>I've got an LCD on the way, to put in my fileserver and show some status/health info.
Rather than wait for the thing to arrive I've gone ahead and started making the config I want with lcd4linux.
Since the LCD I'm getting is only 20 characters wide and 4 …</p><p>I've got an LCD on the way, to put in my fileserver and show some status/health info.
Rather than wait for the thing to arrive I've gone ahead and started making the config I want with lcd4linux.
Since the LCD I'm getting is only 20 characters wide and 4 lines tall, there is not very much space, so I've had to get pretty creative with how I'm displaying information.
One thing I wanted was to show the percentage used of the various disks in the machine, but since I have at least 3 mount points, that would either mean scrolling text (ugly) or consuming ¾ of the display (inefficient).
It seemed like a much nicer idea to use a single line to represent the space used as a percentage and simply display each of the mounts in turn, but unfortunately lcd4linux's "Evaluator" syntax is not expressive enough to implement this directly, so I faced the challenge of either writing a C plugin or passing the functionality off to a Python module.
I tend to think this feature ought to be implemented as a C plugin, since that would make it easier to use, but I'm unlikely to bother with that because I prefer Python, so I went with a Python module :)
The code is <a href="https://github.com/cmsj/lcd4linux_rotator">on github</a> and the included README.md covers how to use it in an lcd4linux configuration.
At some point soon I'll post my lcd4linux configuration - just as soon as I've figured out what to do with the precious 4th line. In the meantime, here is a video of the rotator plugin operating on the third line (the first line being disk activity and the second line being network activity):</p>
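<p>The rotation logic itself is simple. Here's a minimal Python sketch of the idea (not the actual module - the names and bar layout are invented for illustration): cycle through the mount points, rendering each one as a 20-character usage line.</p>

```python
import itertools
import os


def usage_bar(mount, width=20):
    """Render one mount point as a label, a bar of '#' marks and a percentage."""
    st = os.statvfs(mount)
    used = 1 - (st.f_bavail / st.f_blocks)
    label = ("root" if mount == "/" else mount)[:6]
    bar_width = width - len(label) - 7  # room for brackets and ' NN%'
    filled = round(used * bar_width)
    return "%s[%s%s]%3d%%" % (label, "#" * filled, " " * (bar_width - filled),
                              int(used * 100))


def rotator(mounts):
    """Yield one display line per call, showing each mount in turn."""
    for mount in itertools.cycle(mounts):
        yield usage_bar(mount)
```

<p>Each time lcd4linux polls the module, it would pull the next line from the generator, so a single display row cycles through all of the mounts.</p>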
<p>Update: I figured out what to do with the fourth line:</p>
<p>That's another Python module, this time a port of Chris Applegate's <a href="http://www.qwghlm.co.uk/toys/dailymail/">Daily Mail headline generator</a> from JavaScript to Python. Code is on <a href="https://github.com/cmsj/dailymail">github</a>.
As promised, the complete lcd4linux config is available (also on github) <a href="https://gist.github.com/4694242">here</a>.</p>Using Caps Lock as a new modifier key in OS X2012-11-27T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2012-11-27:/2012/11/27/using-caps-lock-as-new-modifier-key-in.html<p>Update: I have moved this post to its own page, see <a href="http://www.tenshu.net/p/fake-hyper-key-for-osx.html">http://www.tenshu.net/p/fake-hyper-key-for-osx.html</a> for the latest version.</p>Paperless workflow2012-07-01T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2012-07-01:/2012/07/01/paperless-workflow.html<h2><strong>Introduction</strong></h2>
<p>This is going to be quite a long post, but hopefully interesting to a particular crowd of people.
I'm going to tell you all about how I have designed and built a paperless workflow for myself.</p>
<h2><strong>Background</strong></h2>
<p>This came about some months ago when I needed to find several …</p><h2><strong>Introduction</strong></h2>
<p>This is going to be quite a long post, but hopefully interesting to a particular crowd of people.
I'm going to tell you all about how I have designed and built a paperless workflow for myself.</p>
<h2><strong>Background</strong></h2>
<p>This came about some months ago when I needed to find several important documents that were spread through the various organised files that I keep things in. The search took much longer than I would have liked, partly because I am not very efficient at putting paper into the files.</p>
<p>You could suggest that I just get better at doing that, but even if I were to do that, it still only makes me quicker at finding paperwork from the files on my shelf. If I want to really kick things up a gear, the files need to be electronic, accessible from anywhere and powerfully searchable.</p>
<h2><strong>The hardware</strong></h2>
<p>I started thinking about what I would want. Obviously a scanner was going to be the first pre-requisite of being able to digitise my papers, but what kind to get? After investigating what other people had already said about paperless workflows, it seemed like the ScanSnap range of scanners was a popular choice, but they are quite expensive and it's one more thing on my desk. Instead I decided to go for a multi-function inkjet printer - they have scanners that are good enough, and even though they're bigger than a ScanSnap, I'm also getting a printer in the bargain.</p>
<p>So which one to get? Well that depended on which features were important. My highest priority in this project was that the process of taking a document from paper to my laptop had to be as simple as possible, so in the realms of scanning devices, that means you need one which can automatically scan both sides of the paper.</p>
<p>This turns out to be quite rare in multi-function printers, but after a great deal of research, I found the Epson Stylus Office BX635FWD which has a duplex ADF (Automatic Document Feeder), is very well supported in MacOS X, and is a decent printer (which, for bonus points, supports Apple's AirPrint and Google's Cloud Print standards).</p>
<p>The setup of the Epson was extremely pleasing - it has a little LCD screen and various buttons, which meant that I could power it up and join it to my WiFi network without having to connect it to a computer via USB at all. I then added it as a printer on my laptop (which was easy since the printer was already announcing itself on the WiFi network) and OS X was happy to do both printing and scanning over WiFi.</p>
<p>I then investigated the Epson software for it and found that I didn't have to install a giant heap of drivers and applications; I could pick and choose which things I had. Specifically I was interested in whether I could react to the Scan button being pressed on the printer, even though it was not connected via USB. It turns out that this is indeed possible, via a little application called EEventManager. With that set up to process the scans to my liking (specifically, Colour, 300DPI, assembled into a PDF and saved into a particular temporary directory), the hardware stage of the project was over.</p>
<p>With the ability to turn paper into a PDF with a couple of button presses on the printer itself, I was ready to figure out what to do with it next.</p>
<h2><strong>The software</strong></h2>
<p>As people with a focus on paperless workflows (such as <a href="http://www.macsparky.com/">David Sparks</a>) have rightly pointed out, there are several stages to a paperless workflow - capture, processing and recall. At this point I had the capture stage sorted, so the next one is processing.</p>
<p>When you have a PDF with scanned images inside it, you obviously can't do anything with the text on the pages, it's not computer-readable text, it's a picture, but it turns out that it is possible to tell the PDF what the words are and where they are on the page, which makes the text selectable. So my attention turned to OCR (Optical Character Recognition) software. I didn't engage in a particularly detailed survey because I came across a great deal on Nuance's PDF Converter For Mac product and was so impressed with its trial copy that I snapped up the deal and forged ahead. I hear good things about PDFPen, but I've never tried it.</p>
<h2><strong>Automation</strong></h2>
<p>Having a directory full of scanned documents and some OCR software is a good place to be, but it's not a <em>great</em> place to be unless you can automate it. Fortunately, OS X has some pretty excellent automation tools.</p>
<p>The magic all happens in a single Automator workflow configured as a Folder Action on the directory that EEventManager is saving the PDFs into:</p>
<p><img alt="Workflow" src="http://3.bp.blogspot.com/-sLYIV-dYOiY/T_Byq6UQyMI/AAAAAAAAAKQ/m50LF3zChRg/s640/ocr-archive-workflow.png"></p>
<p>It will find any PDF files in that temporary folder, then loop over them, opening each one in Nuance PDF Converter, running the OCR function and saving the PDF. Each file is then moved to an archive directory and renamed to a generic date/time-based filename. That's it.</p>
<h2><strong>That's it</strong></h2>
<p>Like I said, that's it. If you've been paying attention, at this point you'll say "but wait, you said there was a third part of a paperless workflow - you need tools to recall the documents later!". You would be right to say that, but the good news is that OS X solves this problem for you with zero additional effort.</p>
<p>As soon as the PDF is saved with the computer-readable text that the OCR function produces, it is indexed by the system's search system - Spotlight. Now all you need to do is hit Cmd-Space and type some keywords, you'll see all your matching documents and be able to get a preview. You can also open the search into a Finder window and see larger previews, change the sorting, edit the search terms, etc.</p>
<h2><strong>Future work</strong></h2>
<p>While that is it, there are future things I'd like to do - specifically I don't currently have an easy way to pull in attachments from emails, or downloaded PDFs, I have to go and drag them into the archived folder and optionally rename them. However, if you have your email hooked into the system email client (Mail.app) then it is being indexed by Spotlight, including attachments, so there's no immediate hurry to figure out a solution for that.</p>
<p>I do also like the idea of detecting specific keywords (e.g. company names) in the documents and using those to file the PDFs in subdirectories, but I'm not sure if I actually need/want it, so for now I'm sticking with one huge directory of everything.</p>Photo import workflow2012-07-01T00:00:00+01:002023-11-22T17:23:28+00:00Chris Jonestag:cmsj.net,2012-07-01:/2012/07/01/photo-import-workflow.html<h2>Introduction</h2>
<p>Since I'm writing about workflows today, I thought I'd also quickly chuck in a guide to how I get the photos and movies that I've taken with my iPhone, onto my laptop and specifically, imported into Aperture.
The Mechanics
This requires a few moving parts to produce a final …</p><h2>Introduction</h2>
<p>Since I'm writing about workflows today, I thought I'd also quickly chuck in a guide to how I get the photos and movies that I've taken with my iPhone onto my laptop and, specifically, imported into Aperture.</p>
<h2>The Mechanics</h2>
<p>This requires a few moving parts to produce a final workflow. The high-level process is:</p>
<ol>
<li>Plug iPhone into a USB port</li>
<li>Copy photos from the iPhone into a temporary directory, deleting them as they are successfully retrieved</li>
<li>Import the photos into Aperture, ensuring they are copied into its library and deleted from the temporary directory</li>
</ol>
<p>Simple, right? Well yes and no.</p>
<h2>Retrieval from iPhone</h2>
<p>This really ought to be easier than it is, but at least it is possible.
Aperture can import photos from devices, but it doesn't seem to offer the ability to delete them from the device after import. That alone makes it not even worth bothering with if you don't want to build up a ton of old photos on your phone.
OS X does ship with a tool that can import photos from camera devices and delete the photos afterwards: AutoImporter.app. You won't find it without looking hard, though. It lives at:
<code>/System/Library/Image Capture/Support/Application/AutoImporter.app</code></p>
<p>If you run that tool, you will see no window, just a dock icon and some menus. Go into its Preferences and you will be able to choose a directory to import to, and choose whether or not to delete the files:
<img alt="prefs" src="http://cmsj.net/autoimporter.png"></p>
<p>Easy!</p>
<h2>Importing into Aperture</h2>
<p>This involves using Automator to build a Folder Action workflow for the directory that AutoImporter is pulling the photos into. All it does is check whether AutoImporter is still running (waiting if so), then launch Aperture and tell it to import everything from that directory into a particular Project, and then delete the source files:
<img alt="Aperture autoimport workflow" src="http://cmsj.net/aperture-autoimport-workflow.png"></p>
<h2>That's it!</h2>
<p>Really, that's all there is. Now whenever you plug in your iPhone, all of the pictures and movies you've taken recently, will get imported into Aperture for you to process, archive, touch-up, export or whatever else it is that you do with your photos and movies.</p>A sysadmin talks OpenSSH tips and tricks2012-02-07T00:00:00+00:002018-09-20T10:16:45+01:00Chris Jonestag:cmsj.net,2012-02-07:/2012/02/07/sysadmin-talks-openssh-tips-and-tricks.html<h1>My take on more advanced SSH usage</h1>
<p>I've seen a few articles recently on sites like HackerNews which claimed to cover some advanced SSH techniques/tricks. They were good articles, but for me (as a systems administrator) didn't get into the really powerful guts of OpenSSH.</p>
<p>So, I figured that …</p><h1>My take on more advanced SSH usage</h1>
<p>I've seen a few articles recently on sites like HackerNews which claimed to cover some advanced SSH techniques/tricks. They were good articles, but for me (as a systems administrator) didn't get into the really powerful guts of OpenSSH.</p>
<p>So, I figured that I ought to pony up and write about some of the more advanced tricks that I have either used or seen others use. These will most likely be relevant to people who manage tens/hundreds of servers via SSH. Some of them are about actual configuration options for OpenSSH, others are recommendations for ways of working with OpenSSH.</p>
<h2>Generate your ~/.ssh/config</h2>
<p>This isn't strictly an OpenSSH trick, but it's worth noting. If you have other sources of knowledge about your systems, automation can do a lot of the legwork for you in creating an SSH config. A perfect example here would be if you have some kind of database which knows about all your servers - you can use that to produce a fragment of an SSH config, then download it to your workstation and concatenate it with various other fragments into a final config. If you mix this with distributed version control, your entire team can share a broadly identical SSH config, with allowance for each person to have a personal fragment for their own preferences and personal hosts. I can't recommend this sort of collaborative working enough.</p>
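<p>As a concrete (if simplified) illustration, here's a Python sketch of the generation step. The inventory format is invented for the example - the real source would be whatever server database you have - but the shape of the output is what matters:</p>

```python
# Invented inventory format, for illustration -- the real source would be
# your server database/CMDB.
SERVERS = [
    {"name": "web1", "fqdn": "web1.company.com", "user": "deploy"},
    {"name": "db1", "fqdn": "db1.company.com", "user": "postgres"},
]


def config_fragment(servers):
    """Emit one Host stanza per inventory record."""
    stanzas = ["Host %s %s\n    Hostname %s\n    User %s\n"
               % (s["name"], s["fqdn"], s["fqdn"], s["user"]) for s in servers]
    return "\n".join(stanzas)


def assemble(*fragments):
    """Concatenate the generated fragment with personal fragments
    into one final ~/.ssh/config."""
    return "\n".join(f.strip("\n") for f in fragments) + "\n"
```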
<h2>Generate your ~/.ssh/known_hosts</h2>
<p>This follows on from the previous item. If you have some kind of database of servers, teach it the SSH host key of each (usually something like <code>/etc/ssh/ssh_host_rsa_key.pub</code>) then you can export a file with the keys and hostnames in the correct format to use as a <code>known_hosts</code> file, e.g.:</p>
<p><code>server1.company.com,10.0.0.101 ssh-rsa BLAHBLAHCRYPTOMUMBO</code></p>
<p>You can then associate this with all the relevant hosts by including something like this in your <code>~/.ssh/config</code>:</p>
<div class="highlight"><pre><span></span><code>Host *.mycompany.com
UserKnownHostsFile ~/.ssh/generated_known_hosts
StrictHostKeyChecking yes
</code></pre></div>
<p>This brings some serious advantages:</p>
<ul>
<li>Safer - because you have pre-loaded all of the host keys and specified strict host key checking, SSH will prompt you if you connect to a machine and something has changed.</li>
<li>Discoverable - if you have tab completion, your shell will let you explore your infrastructure just by prodding the Tab key.</li>
</ul>
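<p>The export itself is trivial once your database knows the host keys. A Python sketch, again with an invented inventory format (note that the host patterns at the start of each known_hosts line are comma-separated):</p>

```python
def known_hosts_file(servers):
    """Format inventory records as known_hosts entries:
    'name,ip keytype base64-key' per line."""
    lines = ["%s,%s %s" % (s["fqdn"], s["ip"], s["pubkey"]) for s in servers]
    return "\n".join(lines) + "\n"
```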
<h2>Keep your private keys private</h2>
<p>This seems like it ought to be more obvious than it perhaps is... the private halves of your SSH keys are very privileged things. You should treat them with a great deal of respect. Don't put them on multiple machines (SSH keys are cheap to generate and revoke) and don't back them up.</p>
<h2>Know your limits</h2>
<p>If you're going to write a config snippet that applies to a lot of hosts you can't match with a wildcard, you may end up with a very long <code>Host</code> line in your ssh config. It's worth remembering that there is a limit to the length of lines: 1024 characters. If you're going to need to exceed that, you will have to just have multiple <code>Host</code> sections with the same options.</p>
<h2>Set sane global defaults</h2>
<div class="highlight"><pre><span></span><code>HashKnownHosts no
Host *
GSSAPIAuthentication no
ForwardAgent no
</code></pre></div>
<p>These are very sane global defaults:</p>
<ul>
<li>Known hosts hashing is good for keeping your hostnames secret from people who obtain your <code>known_hosts</code> file, but is also really very inconvenient as you are also unable to get any useful information out of the file yourself (such as tab completion). If you're still feeling paranoid you might consider tightening the permissions on your <code>known_hosts</code> file as it may be readable by other users on your workstation.</li>
<li>GSSAPI is very unlikely to be something you need; it's just slowing things down if it's enabled.</li>
<li>Agent forwarding can be tremendously dangerous and should, I think, be actively and passionately discouraged. It ought to be a nice feature, but it requires that you trust remote hosts unequivocally as if they had your private keys, because functionally speaking, they do. They don't actually have the private key material, but any sufficiently privileged process on the remote server can connect back to the SSH agent running on your workstation and request it respond to challenges from an SSH server. If you keep your keys unlocked in an SSH agent, this gives any privileged attacker on a server you are logged into, trivial access to any other machine your keys can SSH into. If you somehow depend on using agent forwarding with Internet facing servers, please re-consider your security model (unless you are able to robustly and accurately argue why your usage is safe, but if that is the case then you don't need to be reading a post like this!)</li>
</ul>
<h2>Notify useful metadata</h2>
<p>If you're using a Linux or OSX desktop, you either have something like <code>notify-send(1)</code> or Growl for desktop notifications. You can hook this into your SSH config to display useful metadata to yourself. The easiest way to do this is via the <code>LocalCommand</code> option:</p>
<div class="highlight"><pre><span></span><code>Host *
PermitLocalCommand yes
LocalCommand /home/user/bin/ssh-notify.sh %h
</code></pre></div>
<p>This will call the <code>ssh-notify.sh</code> script every time you SSH to a host, passing the hostname you gave as an argument. In the script you probably want to ensure you're actually in an interactive terminal and not some kind of backgrounded batch session - this can be done trivially by ensuring that <code>tty -s</code> returns zero. Now the script just needs to go and fetch some metadata about the server you're connecting to (e.g. its physical location, the services that run on it, its hardware specs, etc.) and format it into a command that will display a notification.</p>
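<p>The notify script can be written in anything executable. Here's a Python sketch of the shape it might take - the metadata table is a stand-in for whatever server database you'd really query:</p>

```python
#!/usr/bin/env python3
import os
import subprocess
import sys

# Stand-in metadata table -- really this would query your server database
# for location, services, hardware specs, etc.
METADATA = {
    "server1.company.com": "Rack 4, London DC. Runs: postgres, nginx.",
}


def notification(host):
    """Build the (title, body) pair for the desktop notification."""
    return "SSH: %s" % host, METADATA.get(host, "No metadata on record.")


def main():
    # Equivalent of the `tty -s` check: only notify for interactive sessions.
    if not os.isatty(sys.stdin.fileno()):
        return
    title, body = notification(sys.argv[1])
    subprocess.call(["notify-send", title, body])  # or a Growl notifier on OS X


if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```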
<h2>Sidestep overzealous key agents</h2>
<p>If you have a lot of SSH keys in your ssh-agent (e.g. more than about 5) you may have noticed that SSHing to machines which want a password, or those where you wish to use a specific key that isn't in your agent, can be quite tricky. The reason for this is that OpenSSH currently seems to talk to the agent in preference to obeying command line options (i.e. <code>-i</code>) or config file directives (i.e. <code>IdentityFile</code> or <code>PreferredAuthentications</code>). You can force the behaviour you are asking for with the <code>IdentitiesOnly</code> option, e.g.:</p>
<div class="highlight"><pre><span></span><code>Host server1.company.com
IdentityFile /some/rarely/used/ssh.key
IdentitiesOnly yes
</code></pre></div>
<p>(on a command line you would add this with <code>-o IdentitiesOnly=yes</code>)</p>
<h2>Match hosts with wildcards</h2>
<p>Sometimes you need to talk to a lot of almost identically-named servers. Obviously SSH has a way to make this easier or I wouldn't be mentioning this. For example, if you needed to ssh to a cluster of remote management devices:</p>
<div class="highlight"><pre><span></span><code>Host *.company.com management-rack-??.company.com
User root
PreferredAuthentications password
</code></pre></div>
<p>This will match anything ending in <code>.company.com</code> and also anything that starts with <code>management-rack-</code> and then has two characters, followed by <code>.company.com</code>.</p>
<h2>Per-host SSH keys</h2>
<p>You may have some machines where you have a different key for each machine. By naming them after the fully qualified domain names of the hosts they relate to, you can skip over a more tedious SSH config with something like the following:</p>
<div class="highlight"><pre><span></span><code>Host server-??.company.com
IdentityFile /some/path/id_rsa-%h
</code></pre></div>
<p>(the <code>%h</code> will be substituted with the FQDN you're SSHing to. The <code>ssh_config</code> man page lists a few other available substitutions.)</p>
<h2>Use fake, per-network port forwarding hosts</h2>
<p>If you have network management devices which require web access that you normally forward ports for with the <code>-L</code> option, consider constructing a fake host in your SSH config which establishes all of the port forwards you need for that network/datacentre/etc:</p>
<div class="highlight"><pre><span></span><code>Host port-forwards-site1.company.com
Hostname server1.company.com
LocalForward 1234 10.0.0.101:1234
</code></pre></div>
<p>This also means that your forwards will be on the same port each time, which makes saving certificates in your browser a reasonable undertaking. All you need to do is <code>ssh port-forwards-site1.company.com</code> (using nifty Tab completion of course!) and you're done. If you don't want it tying up a terminal you can add the options <code>-f</code> and <code>-N</code> to your command line, which will establish the ssh connection in the background.</p>
<p>If you're using programs which support SOCKS (e.g. Firefox and many other desktop Linux apps) you can use the <code>DynamicForward</code> option to send traffic over the SSH connection without having to add <code>LocalForward</code> entries for each port you care about. Combined with a browser extension such as FoxyProxy (which lets you configure multiple proxies based on wildcard/regexp URL matches), this makes for a very flexible setup.</p>
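<p>A minimal sketch of the SOCKS variant (the alias, hostname and port here are made up):</p>

```
Host socks-site1.company.com
Hostname server1.company.com
DynamicForward 1080
```

<p>Then point FoxyProxy (or your app's SOCKS proxy setting) at localhost port 1080.</p>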
<h2>Use an SSH jump host</h2>
<p>Rather than have tens/dozens/hundreds/etc of servers holding their SSH port open to the Internet and being battered with brute force password cracking attempts, you might consider having a single host listening (or a single host per network perhaps), which you can proxy your SSH connections through.</p>
<p>If you do consider something like this, you must resist the temptation to place private keys on the jump host - to do so would utterly defeat the point.</p>
<p>Instead, you can use an old, but very nifty trick that completely hides the jump host from your day-to-day usage:</p>
<div class="highlight"><pre><span></span><code>Host jumphost.company.com
ProxyCommand none
Host *.company.com
ProxyCommand ssh jumphost.company.com nc -q0 %h %p
</code></pre></div>
<p>You might wonder what on earth that is doing, but it's really quite simple. The first <code>Host</code> stanza just means we won't use any special commands to connect to the jump host itself. The second <code>Host</code> stanza says that in order to connect to anything ending in <code>.company.com</code> (but excluding <code>jumphost.company.com</code> because it just matched the previous stanza) we will first SSH to the jump host and then use <code>nc(1)</code> (i.e. netcat) to connect to the relevant port (<code>%p</code>) on the host we originally asked for (<code>%h</code>). Your local SSH client now has a session open to the jump host which is acting like it's a socket to the SSH port on the host you wanted to talk to, so it just uses that connection to establish an SSH session with the machine you wanted. Simple!</p>
<p>For those of you lucky enough to be connecting to servers that have OpenSSH 5.4 or newer, you can replace the jump host <code>ProxyCommand</code> with:</p>
<p><code>ProxyCommand ssh -W %h:%p jumphost.company.com</code></p>
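<p>In the stanza layout used above, that looks like:</p>

```
Host jumphost.company.com
ProxyCommand none
Host *.company.com
ProxyCommand ssh -W %h:%p jumphost.company.com
```

<p>This also avoids needing <code>nc</code> to be installed on the jump host at all.</p>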
<h2>Re-use existing SSH connections</h2>
<p>Some people swear by this trick, but because I'm very close to my servers and have a decent CPU, the setup time for connections doesn't bother me. Folks who are many milliseconds from their servers, or who don't have unquenchable techno-lust for new workstations, may appreciate saving some time when establishing SSH connections.</p>
<p>The idea is that OpenSSH can place connections into the background automatically, and re-use those existing secure channels when you ask for new <code>ssh(1)</code>, <code>scp(1)</code> or <code>sftp(1)</code> connections to hosts you have already spoken to. The configuration I would recommend for this is:</p>
<div class="highlight"><pre><span></span><code>Host *
ControlMaster auto
ControlPath ~/.ssh/control/%h-%r-%p
ControlPersist 600
</code></pre></div>
<p>This will do several things:</p>
<ul>
<li><code>ControlMaster auto</code> will cause OpenSSH to establish the "master" connection sockets as needed, falling back to normal connections if something is wrong.</li>
<li>The <code>ControlPath</code> option specifies where the connection sockets will live. Here we are placing them in a directory and giving them filenames that consist of the hostname, login username and port, which ought to be sufficient to uniquely identify each connection. If you need to get more specific, you can place this section near the end of your config and have explicit <code>ControlPath</code> entries in earlier <code>Host</code> stanzas.</li>
<li><code>ControlPersist 600</code> causes the master connections to die if they are idle for 10 minutes. The default is that they live on as long as your network is connected - if you have hundreds of servers this will add up to an awful lot of <code>ssh(1)</code> processes running on your workstation! Depending on your needs, 10 minutes may not be long enough.</li>
</ul>
<p><em>Note:</em> You should make the <code>~/.ssh/control</code> directory ahead of time and ensure that only your user can access it.</p>
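<p>For example:</p>

```shell
# Create the socket directory and make it private to your user
mkdir -p ~/.ssh/control
chmod 700 ~/.ssh/control
```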
<h2>Cope with old/buggy SSH devices</h2>
<p>Perhaps you have a bunch of management devices in your infrastructure and some of them are a few years old already. Should you find yourself trying to SSH to them, you might find that your connections don't work very well. Perhaps your SSH client is too new and is offering algorithms their creaky old SSH servers can't abide. You can strip the long default list of algorithms down to ones that a particular device supports, e.g.:</p>
<div class="highlight"><pre><span></span><code>Host power-device-1.company.com
HostKeyAlgorithms ssh-rsa,ssh-dss
</code></pre></div>
<h2>That's all folks</h2>
<p>Those are the most useful tips and tricks I have for now. Hopefully someone will read this and think "hah! I can do <strong><em>much</em></strong> more advanced stuff than that!" and one-up me :)</p>
<p>Do feel free to comment if you do have something sneaky to add, I'll gladly steal your ideas!</p>Evil shell genius2012-01-23T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2012-01-23:/2012/01/23/evil-shell-genius.html<p>Jono Lange was committing acts of great evil in Bash earlier today. I gave him a few pointers and we agreed that it was sufficiently evil that it deserved a blog post.
So, if you find yourself wishing you could get pretty desktop notifications when long-running shell commands complete, see …</p><p>Jono Lange was committing acts of great evil in Bash earlier today. I gave him a few pointers and we agreed that it was sufficiently evil that it deserved a blog post.
So, if you find yourself wishing you could get pretty desktop notifications when long-running shell commands complete, see his post <a href="http://code.mumak.net/2012/01/undistract-me.html">here</a> for the details.</p>HP Microserver Remote Access helper2012-01-06T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2012-01-06:/2012/01/06/hp-microserver-remote-access-helper.html<p>I've only had the Remote Access card installed in my HP Microserver for a few hours and already I am bored of accessing it by first logging into the web UI, then navigating to the right bit of the UI, then clicking a button to download a .jnlp file and …</p><p>I've only had the Remote Access card installed in my HP Microserver for a few hours and already I am bored of accessing it by first logging into the web UI, then navigating to the right bit of the UI, then clicking a button to download a .jnlp file and then running that with javaws(1).
Instead, I have written some Python that will login for you, fetch the file and execute javaws. Much better!
You can find the code: <a href="http://bazaar.launchpad.net/~cmsj/+junk/microserver/view/head:/vkvm.py">here</a> and you'll want to have python-httplib2 installed.</p>HP Microserver Remote Access Card2012-01-05T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2012-01-05:/2012/01/05/hp-microserver-remote-access-card.html<p>I've been using an HP ProLiant Microserver (N36L) as my fileserver at home, for about a year and it's been a really reliable little workhorse.
Today I gave it a bit of a spruce up with 8GB of RAM and the Remote Access Card option.
Since it came with virtually …</p><p>I've been using an HP ProLiant Microserver (N36L) as my fileserver at home, for about a year and it's been a really reliable little workhorse.
Today I gave it a bit of a spruce up with 8GB of RAM and the Remote Access Card option.
Since it came with virtually no documentation, and since I can't find any reference online to anyone else having had the same issue I had, I'm writing this post so Google can help future travellers.
When you are installing the card, check in the BIOS's PCI Express options that you have set it to automatically choose the right graphics card to use. I had hard coded it to use the onboard VGA controller.
The reason for this is that the RAC card is actually a graphics card, so the BIOS needs to be able to activate it as the primary card.
If you don't change this setting, what you will see is the RAC appear to work normally, but its vKVM remote video feature will only ever show you a green screen window, with the words "OUT OF RANGE" in yellow letters.
Annoyingly, I thought this was my 1920x1080 monitor confusing things, so it took me longer to fix this than it should have, but there we go.</p>What is the value of negative feedback on the Internet?2011-10-11T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2011-10-11:/2011/10/11/what-is-value-of-negative-feedback-on.html<p>I'm sure we've all been there - you buy something on eBay or from a third party on Amazon, and what you get is either rubbish or not what you asked for.
The correct thing to do is to talk to the seller first to try and resolve your problem, and …</p><p>I'm sure we've all been there - you buy something on eBay or from a third party on Amazon, and what you get is either rubbish or not what you asked for.
The correct thing to do is to talk to the seller first to try and resolve your problem, and then when everything is said and done, leave feedback rating the overall experience.
Several times in the last year I have gone through this process and ended up feeling the need to leave negative feedback. The most obvious case was some bluetooth headphones I'd bought from an eBay seller in China that were so obviously fake that it was hilarious he was even trying to convince me I was doing something wrong.
In each of these cases, I have been contacted shortly after the negative feedback to ask if I will remove the feedback in return for a full/partial refund.
This has tickled the curious side of my brain into wanting to know what the value of negative feedback is. The obvious way to find out would be to buy items at a range of prices, then leave negative feedback and see how far the sellers are prepared to go to preserve their reputations.
The obvious problem here is that this would be an unethical and unfair way to do science. Perhaps it would be possible to crowd-source anecdotes until they count as data?</p>Dear Apple2011-10-06T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2011-10-06:/2011/10/06/dear-apple.html<p>I just woke up here in London and saw the news about Steve Jobs. It's early and, as usual for this time of day, my seven month old son is playing next to me. He has no concept of what my iPhone is, but it holds his fascination like none …</p><p>I just woke up here in London and saw the news about Steve Jobs. It's early and, as usual for this time of day, my seven month old son is playing next to me. He has no concept of what my iPhone is, but it holds his fascination like none of his brightly coloured toys do. Only iPad can cause him to abandon his toys and crawl faster.
I'd like to thank you all, including Steve, for your work. You have brought technology to ordinary people in a way that delights them without them having to know why.
Please keep doing that for a very long time</p>Terminator 0.96 released2011-09-23T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2011-09-23:/2011/09/23/terminator-096-released.html<p>I've just pushed up the release tarball and PPA uploads for Terminator 0.96. It's mainly a bug fix release, but it does include a few new features. Many thanks to the various community folks who have contributed fixes, patches, bugs, translations and branches to this release. The changelog is …</p><p>I've just pushed up the release tarball and PPA uploads for Terminator 0.96. It's mainly a bug fix release, but it does include a few new features. Many thanks to the various community folks who have contributed fixes, patches, bugs, translations and branches to this release. The changelog is below:</p>
<p>terminator 0.96:</p>
<p>* Unity support for opening new windows (Lucian Adrian Grijincu)
* Fix searching with infinite scrollback (Julien Thewys #755077)
* Fix searching on Ubuntu 10.10 and 11.04, and implement searching by regular expression (Roberto Aguilar #709018)
* Optimise various low level components so they are dramatically faster (Stephen Boddy)
* Fix various bugs (Stephen Boddy)
* Fix cursor colours (#700969) and a cursor blink issue (Tony Baker)
* Improve and extend drag&drop support to include more sources of text, e.g. Gtk file chooser path buttons (#643425)
* Add a plugin to watch a terminal for inactivity (i.e. silence)
* Fix loading layouts with more than two tabs (#646826)
* Fix order of tabs created from saved layouts (#615930)
* Add configuration to remove terminal dimensions from titlebars (patch from João Pinto #691213)
* Restore split positions more accurately (patch from Glenn Moss #797953)
* Fix activity notification in active terminals. (patch from Chris Newton #748681)
* Stop leaking child processes if terminals are closed using the context menu (#308025)
* Don't forget tab order and custom labels when closing terminals in them (#711356)
* Each terminal is assigned a unique identifier and this is exposed to the processes inside the terminal via the environment variable TERMINATOR_UUID
* Expand dbus support to start covering useful methods. Also add a commandline tool called 'remotinator' that can be used to control Terminator from a terminal running inside it.
* Fix terminal font settings for users of older Linux distributions</p>Migrations2011-07-16T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2011-07-16:/2011/07/16/migrations.html<p>To the cloud!
I'm officially done hosting my own Wordpress blog. Not because it's particularly hard, but because it's quite boring. I would have done a straight export/import into a wordpress.com blog, but their options for hosting on a personal domain are pretty insane - if you want to …</p><p>To the cloud!
I'm officially done hosting my own Wordpress blog. Not because it's particularly hard, but because it's quite boring. I would have done a straight export/import into a wordpress.com blog, but their options for hosting on a personal domain are pretty insane - if you want to host your blog on domain.com or www.domain.com you have to just point the entire domain at the wordpress.com DNS servers.
I'm not prepared to trust my domain to a bunch of PHP bloggers, so instead I've shoved the blog over to Blogger (by way of a very helpful <a href="http://wordpress2blogger.appspot.com/">online conversion tool</a>), but this still presents a few niggles around URLs.
You can have Blogger send 404s to another vhost, so for now I just have a tiny little vhost somewhere else which uses mod_rewrite to catch the old page names and attempt to catch the blog post names. Ideally I'd fetch all the old post URLs and make a proper map to the new ones, but I can't really be bothered to do that, so I just went for the approximate:</p>
<div class="highlight"><pre><span></span><code>RewriteRule ^/archives/(\[0-9\]{4})/(\[0-9\]{2})/(\[0-9\]{2})/(\[a-zA-Z0-9\\-\]{1,39}).\*$ http://www.tenshu.net/$1/$2/$4.html \[R=301,L\]
</code></pre></div>
<p>Another obvious sticking point is that Wordpress categories become Blogger labels, so another rewrite rule can take care of them (although not so much if you've used nested categories, but again I can't really be bothered to account for that):</p>
<div class="highlight"><pre><span></span><code>RewriteRule<span class="w"> </span>^/archives/category/(.)(.\*)<span class="w"> </span>http://www.tenshu.net/search/label/<span class="cp">${</span><span class="n">upmap</span><span class="p">:</span><span class="err">$</span><span class="mi">1</span><span class="cp">}</span>$2<span class="w"> </span>\[R=301,L\]
</code></pre></div>
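<p>The <code>upmap</code> referenced there is a <code>RewriteMap</code> whose definition didn't make it into the post; presumably it upper-cases the first captured character, since Wordpress categories are lower-case while Blogger labels are capitalised. Apache's internal <code>toupper</code> function does exactly that (this line goes in server or vhost context, not .htaccess):</p>

```
RewriteMap upmap int:toupper
```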
<p>Also cloudified so far is the DNS for tenshu.net - I'm trying out Amazon's Route53 and it seems to be pretty good so far. Next up will be email and then I can pretty much entirely stop faffing around running my own infrastructure :)</p>Monitoring an Apple Airport Express/Extreme with Munin2011-01-29T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2011-01-29:/2011/01/29/monitoring-apple-airport-expressextreme.html<p>So you have an Apple Airport (Express or Extreme), or a Time Capsule, and you want to monitor things like the signal levels of the connected clients? I thought so! That's why I wrote this post, because I'm thoughtful like that.
While it's not necessary, I'd like to mention that …</p><p>So you have an Apple Airport (Express or Extreme), or a Time Capsule, and you want to monitor things like the signal levels of the connected clients? I thought so! That's why I wrote this post, because I'm thoughtful like that.
While it's not necessary, I'd like to mention that this was made possible by virtue of Apple having put out an <a href="http://support.apple.com/kb/DL1186" title="Apple's SNMP MIB file for their Airport products">SNMP MIB file</a>. Without that, finding the relevant OIDs would have been sufficiently boring that I wouldn't have bothered with this, so yay for that (even if the MIB is suspiciously ancient).
So if you don't need the MIB file, what do you need?</p>
<ul>
<li><a href="http://munin-monitoring.org" title="Munin. Monitoring for lazy people!">Munin</a></li>
<li>python</li>
<li>Net-SNMP's python bindings (in Debian/Ubuntu the package name is <a href="apt:libsnmp-python" title="Click here to install libsnmp-python">libsnmp-python</a>)</li>
<li>my <a href="http://bazaar.launchpad.net/~cmsj/+junk/munin-plugins/view/head:/snmp__airport" title="Airport SNMP plugin for Munin">airport munin plugin</a></li>
</ul>
<p>Having all of those things, how do you use it? Simple!</p>
<ul>
<li>Place the munin plugin somewhere (doesn't really matter where, but the munin package probably put the other plugins in <code>/usr/share/munin/plugins/</code>)</li>
<li>Make sure you have a hostname or IP address for your Airport(s). If you have more than one you should either make sure they have static IPs configured, or that the one doing DHCP has static leases configured for all the other Airports.</li>
<li>Create a symlink for each of the types of graph for each of your Airports. Assuming that your Munin machine can resolve your Airport as 'myairport' you'd want to make the following symlinks:
<ul>
<li><code>cd /etc/munin/plugins/</code></li>
<li><code>ln -s /path/to/snmp__airport snmp_myairport_airport_clients</code></li>
<li><code>ln -s /path/to/snmp__airport snmp_myairport_airport_signal</code></li>
<li><code>ln -s /path/to/snmp__airport snmp_myairport_airport_noise</code></li>
<li><code>ln -s /path/to/snmp__airport snmp_myairport_airport_rate</code></li>
</ul>
</li>
</ul>
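<p>If typing four near-identical <code>ln -s</code> commands offends you, a loop does the same job (the plugin path here is an assumption; substitute wherever you actually put it):</p>

```shell
# Run from /etc/munin/plugins/ - creates one symlink per graph type
PLUGIN=/usr/share/munin/plugins/snmp__airport
for graph in clients signal noise rate; do
    ln -sf "$PLUGIN" "snmp_myairport_airport_${graph}"
done
```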
<p>There is an explicit assumption that your SNMP community is the default of 'public'. If it's not then you'll need to hack the script. Otherwise, you're done! Now you win pretty graphs showing lots of juicy information about your Airport. Yay! You're welcome ;)</p>Old and new: Mixing irssi and iPhones for fun and no profit2010-12-14T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-12-14:/2010/12/14/old-and-new-mixing-irssi-and-iphones.html<h1>Introduction</h1>
<p>I use irssi for IRC and an iPhone for pocket Internets; These two choices are both excellent, but they're not terribly compatible - typing in irssi on an iPhone via SSH is quite slow and annoying.</p>
<p>Obviously the thing to do is run an iPhone IRC client, but then I'm …</p><h1>Introduction</h1>
<p>I use irssi for IRC and an iPhone for pocket Internets; These two choices are both excellent, but they're not terribly compatible - typing in irssi on an iPhone via SSH is quite slow and annoying.</p>
<p>Obviously the thing to do is run an iPhone IRC client, but then I'm signing in and out all the time and I have multiple nicknames - what I want is a way to be connected to the same IRC session as normal, but from my phone using the excellent IRC client Colloquy. By taking advantage of several different pieces of Free Software, this is entirely doable! When we're not connected to IRC, messages which trigger irssi's highlight will be forwarded to the iPhone as a Push Notification.</p>
<h1>Preparation</h1>
<p>These are the tools we are going to use to make this happen:</p>
<ul>
<li>A patched irssi (don't worry though, the patch is <em>tiny</em>)</li>
<li>An irssi script (just some Perl really)</li>
<li>irssi-proxy</li>
<li>stunnel</li>
<li>Colloquy Mobile (from the iPhone App Store)</li>
</ul>
<p>Throughout I will be assuming that you're running Ubuntu 10.04 (Lucid Lynx) as this is the currently most recent LTS release and thus most suited to servers. Also it's what I run, and this is my fun evening project :)</p>
<p>Although it will not be necessary to download it, I would like to note the original location of the patch and script that this method relies on. You can obtain both <a href="http://static.ssji.net/colloquy_push.pl.txt" title="Colloquy Push script">here</a>.</p>
<p>Instead, we are going to install a patched irssi from one of my PPAs, but if you do not care for this idea, the above URL will let you build your own patched irssi and contains the colloquy_push.pl script.</p>
<h1>Installation</h1>
<p>These commands will install the patched irssi and the colloquy_push.pl script:</p>
<div class="highlight"><pre><span></span><code>sudo add-apt-repository ppa:cmsj/irssi-colloquy-push
sudo apt-get update
sudo apt-get install irssi
</code></pre></div>
<p>(If you don't have <code>add-apt-repository</code> available, it's in the <code>python-software-properties</code> package).</p>
<h1>Configuration</h1>
<h2>irssi-proxy</h2>
<p>The first step is to load irssi-proxy. This is distributed as a plugin library in the irssi package, you can load it with:</p>
<div class="highlight"><pre><span></span><code><span class="o">/</span><span class="nb">load</span><span class="w"> </span><span class="n">proxy</span>
<span class="o">/</span><span class="n">set</span><span class="w"> </span><span class="n">irssiproxy_bind</span><span class="w"> </span><span class="mf">127.0</span><span class="o">.</span><span class="mf">0.1</span>
<span class="o">/</span><span class="n">set</span><span class="w"> </span><span class="n">irssiproxy_password</span><span class="w"> </span><span class="n">PICKAGOODPASSWORD</span>
<span class="o">/</span><span class="n">set</span><span class="w"> </span><span class="n">irssiproxy_ports</span><span class="w"> </span><span class="n">network1</span><span class="o">=</span><span class="mi">31337</span><span class="w"> </span><span class="n">network2</span><span class="o">=</span><span class="mi">31338</span><span class="w"> </span><span class="n">network3</span><span class="o">=</span><span class="mi">31339</span>
</code></pre></div>
<p>Obviously you'll need to replace <code>PICKAGOODPASSWORD</code> with a password, preferably a good one. Also you'll need to replace <code>network1</code>/<code>network2</code>/<code>network3</code> with the names of the networks you've configured in irssi (which you can see with the command <code>/network list</code>) and switch them to different ports if you want.</p>
<p>Finally you should run <code>/save</code> so irssi writes out its config file with all of these changes. Et voila, we have a running proxy, but as you noticed, we forced it to listen on 127.0.0.1, so we can't yet connect to it from the Internet. The reason we've done this is that irssi-proxy is not able to directly offer encrypted connections. It would be a bad idea to allow our proxy password and general IRC traffic to flow around unencrypted (even though many IRC server connections are unencrypted).</p>
<h2>Stunnel</h2>
<p>Stunnel is a very simple tool that lets you add SSL support to anything listening on a TCP socket. To get started, install the <code>stunnel4</code> package and edit <code>/etc/default/stunnel4</code> and change <code>ENABLED=0</code> to <code>ENABLED=1</code>.</p>
<p>Now we need to construct /etc/stunnel/stunnel.conf. The default contains various options we don't really care about, but one important one is the <code>cert =</code> line - we need an SSL certificate for this to work. You can either buy one or generate your own (a so-called "snake-oil" certificate). There are many guides to generating a .crt file and this is left as an exercise for the reader. With that file in place somewhere, edit stunnel.conf to point at it.</p>
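<p>For the impatient, a throwaway self-signed certificate can be generated along these lines (the paths, subject and lifetime are all arbitrary; stunnel is happy with the key and certificate concatenated into one PEM file):</p>

```shell
# Self-signed cert + key pair, valid ~10 years, no passphrase
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=myserver.example.com" \
    -keyout /tmp/stunnel.key -out /tmp/stunnel.crt
cat /tmp/stunnel.key /tmp/stunnel.crt > /tmp/stunnel.pem
chmod 600 /tmp/stunnel.pem
```

<p>Point the <code>cert =</code> line at the resulting <code>.pem</code> file. Clients will of course warn about the untrusted certificate.</p>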
<p>The final step for stunnel is to add port configurations. Jump to the bottom of the file and add a section like this for each of the ports irssi_proxy is listening on:</p>
<div class="highlight"><pre><span></span><code><span class="k">[myfirststunnel]</span>
<span class="na">accept</span><span class="o">=</span><span class="s">123.123.123.123:31337</span>
<span class="na">connect</span><span class="o">=</span><span class="s">127.0.0.1:31337</span>
</code></pre></div>
<p>What we have done here is told stunnel to listen on our public IP on the same port that it will then connect to on 127.0.0.1. This might seem confusing, but I think it makes sense that the port numbers stay directly mapped between tunnels and proxy ports. Restart the stunnel4 service and you should see the appropriate ports being listened on.</p>
<h2>colloquy_push.pl</h2>
<p>This is the irssi script which glues all the magic together - it receives special commands from the iPhone version of Colloquy and uses those to pass on Push Notifications when necessary. To load it, type <code>/script load colloquy_push.pl</code> and you probably want to symlink <code>/usr/share/irssi/scripts/colloquy_push.pl</code> into <code>~/.irssi/scripts/autorun/</code>.</p>
<h2>Colloquy</h2>
<p>Now configure a new IRC Connection in Colloquy on your iPhone. Enter its hostname/IP and the port you have stunnel listening on (the port settings are in Advanced) and enable SSL. Finally, set Push Notifications to On and you're done.</p>
<h1>Shortcomings</h1>
<p>The script, while excellent, has one or two drawbacks - it's not yet able to detect when you're watching irssi, so it may well send lots of notifications to your phone unnecessarily (I'm looking into expanding it to detect if you're running in screen/tmux and are attached), also it doesn't have any concept of sleeping hours, so you may get woken up by notifications! Nonetheless, this is an excellent way to use your awesome iPhone and not sacrifice the magnificence of irssi!</p>GStreamer thread oddness2010-10-28T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-10-28:/2010/10/28/gstreamer-thread-oddness.html<p>I sometimes find myself in a place where there are a number of Icecast streams going out at once and I'm interested in finding better ways of monitoring these. It seems like a nice option would be a window showing a visualisation of each stream.
I quickly whipped up some …</p><p>I sometimes find myself in a place where there are a number of Icecast streams going out at once and I'm interested in finding better ways of monitoring these. It seems like a nice option would be a window showing a visualisation of each stream.
I quickly whipped up some Python to do this, but it almost always locks up when I run it, and I'm not sure if I've done something fundamentally wrong or if I've found a bug somewhere.
If you are a gstreamer expert, please take a look at <a href="http://bazaar.launchpad.net/~cmsj/%2Bjunk/icecastvis/annotate/head%3A/icecastvisualiser.py" title="Some python">this code</a> and let me know what I should do next! If you know a gstreamer expert, please try and bribe them to read this post ;)</p>Lifesaver for Maverick2010-09-21T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-09-21:/2010/09/21/lifesaver-for-maverick.html<p>I think that enough of the planets have aligned in the shape of a failboat that I have been able to successfully upload a source package of Lifesaver to its PPA for Maverick.
I might be wrong though, we'll find out shortly when Launchpad processes the ridiculous output of several …</p><p>I think that enough of the planets have aligned in the shape of a failboat that I have been able to successfully upload a source package of Lifesaver to its PPA for Maverick.
I might be wrong though, we'll find out shortly when Launchpad processes the ridiculous output of several ridiculous tools.
Seriously Debian/Ubuntu developers, <em>please</em> sort this out. I really don't care about the intricacies of your workflow - just make it easy for me to be an upstream developer pushing packages into a PPA. Don't make me wade through a sea of hundreds of build tools, dscs, origs, diffs, etc. Just make a bundle and shove it into Launchpad. One command. bzr2ppa in a working directory. Done.
I'm quite sure the failures I had were due either to my incorrect use of some tool or other, or an incorrect setup, but I contend that I shouldn't have to care. Such a tool just needs to know that there's a debian/ that works and a PPA waiting. Make it happen. Go. Now. Are we there yet?
GRRRRRRRRRRRRRR!
(Rant over, the package uploaded and will presumably build shortly, enjoy!)</p>Terminator 0.95 released!2010-08-24T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-08-24:/2010/08/24/terminator-095-released.html<p>This release is mostly to bring a couple of important compatibility fixes with the newest pre-release of VTE, but we also have some updated translations, improved error handling and two new features for you. The features are a URL handler plugin for Maven by Julien Nicolaud and a DBus server …</p><p>This release is mostly to bring a couple of important compatibility fixes with the newest pre-release of VTE, but we also have some updated translations, improved error handling and two new features for you. The features are a URL handler plugin for Maven by Julien Nicolaud and a DBus server that was the result of some work with Andrea Corbellini - for now the only thing this is useful for is opening additional Terminator windows without spawning a new process, but we'll be exploring options in the future to allow more control and interaction with Terminator processes.</p>Adventures in Puppet2010-08-04T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-08-04:/2010/08/04/adventures-in-puppet.html<p>I'm very slowly learning and exploring the fascinating world of Puppet for configuration management. As I go I'm going to try and blog about random things I discover. Partially for my own future reference, partially to help me crystalise my knowledge and partially to help you.
The first post is …</p><p>I'm very slowly learning and exploring the fascinating world of Puppet for configuration management. As I go I'm going to try and blog about random things I discover. Partially for my own future reference, partially to help me crystalise my knowledge and partially to help you.
The first post is coming up immediately, I'm just writing this post as an opening bookend :)</p>Adventures in Puppet: concat module2010-08-04T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-08-04:/2010/08/04/adventures-in-puppet-concat-module.html<p>R.I. Pienaar has a Puppet module on github called "concat". Its premise is very simple, it just concatenates fragments of text together into a particular file.
I'm sure that a more seasoned Puppet veteran would have had this running in no time, but since it introduced some new concepts …</p><p>R.I. Pienaar has a Puppet module on github called "concat". Its premise is very simple, it just concatenates fragments of text together into a particular file.
I'm sure that a more seasoned Puppet veteran would have had this running in no time, but since it introduced some new concepts for me, I thought I'd throw up some notes of how I'm using it. I was particularly interested in an example usage I saw which lists the puppet modules a system is using in its /etc/motd, but because of the way Ubuntu handles constructing the motd, I needed to slightly rework the example.</p>
<p>In Ubuntu, the /etc/motd file is constructed dynamically when you log in - this is done by pam_motd, which executes the scripts in /etc/update-motd.d/. One of those scripts (99-footer) will simply append the contents of /etc/motd.tail to /etc/motd after everything else - my example will take advantage of this. If you are already using motd.tail, you could just have this puppet system write to a different file and then drop another script into /etc/update-motd.d/ to append the contents of that different file.</p>
<p>This is what I did:</p>
<ul>
<li>git clone http://github.com/ripienaar/puppet-concat.git</li>
<li>Move the resulting git branch to /etc/puppet/modules/concat and add it to my top-level site manifest that includes modules</li>
<li>Create a class to manage /etc/motd.tail. In my setup this ends up being /etc/puppet/manifests/classes/motd.pp, which is included by my default node, but your setup is probably different. This is what my class looks like:</li>
</ul>
<div class="highlight"><pre><span></span><code><span class="w"> </span><span class="k">class</span><span class="w"> </span><span class="na">motd</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="k">include</span><span class="w"> </span><span class="na">concat</span><span class="p">::</span><span class="na">setup</span>
<span class="w"> </span><span class="nv">$motdfile</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s">"/etc/motd.tail"</span>
<span class="w"> </span><span class="na">concat</span><span class="p">{</span><span class="nv">$motdfile:</span>
<span class="w"> </span><span class="na">owner</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="na">root</span><span class="p">,</span>
<span class="w"> </span><span class="na">group</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="na">root</span><span class="p">,</span>
<span class="w"> </span><span class="na">mode</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="mi">644</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="na">concat</span><span class="p">::</span><span class="na">fragment</span><span class="p">{</span><span class="s">"motd_header"</span><span class="p">:</span>
<span class="w"> </span><span class="na">target</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="nv">$motdfile,</span>
<span class="w"> </span><span class="na">content</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="s">"\nPuppet modules: "</span><span class="p">,</span>
<span class="w"> </span><span class="na">order</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="mi">10</span><span class="p">,</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="na">concat</span><span class="p">::</span><span class="na">fragment</span><span class="p">{</span><span class="s">"motd_footer"</span><span class="p">:</span>
<span class="w"> </span><span class="na">target</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="nv">$motdfile,</span>
<span class="w"> </span><span class="na">content</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="s">"\n\n"</span><span class="p">,</span>
<span class="w"> </span><span class="na">order</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="mi">90</span><span class="p">,</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="c"># used by other modules to register themselves in the motd</span>
<span class="w"> </span><span class="k">define</span><span class="w"> </span><span class="na">motd</span><span class="p">::</span><span class="na">register</span><span class="p">(</span><span class="nv">$content="",</span><span class="w"> </span><span class="nv">$order=20)</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="nv">$content</span><span class="w"> </span><span class="o">==</span><span class="w"> </span><span class="s">""</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nv">$body</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$name</span>
<span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="k">else</span><span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nv">$body</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nv">$content</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="na">concat</span><span class="p">::</span><span class="na">fragment</span><span class="p">{</span><span class="s">"motd_fragment_$name"</span><span class="p">:</span>
<span class="w"> </span><span class="na">target</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="s">"/etc/motd.tail"</span><span class="p">,</span>
<span class="w"> </span><span class="na">content</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="s">"$body "</span><span class="p">,</span>
<span class="w"> </span><span class="na">order</span><span class="w"> </span><span class="o">=></span><span class="w"> </span><span class="nv">$order</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">}</span>
</code></pre></div>
<p>So that's quite a mouthful. Let's break it down:</p>
<ul>
<li>We have to include concat::setup so the concat module can... set... up :)</li>
<li>We set a variable pointing at the location of the file we want to manage</li>
<li>We instantiate the concat module for that file and set properties like the ownership and mode</li>
<li>We call concat::fragment for two specific fragments we want in the output - a header and a footer. Since everything ends up on a single line, these are the phrase "Puppet modules: " and "\n\n" respectively. They're forced to act as header and footer by the "order" parameter - by using a low number for the header and a high number for the footer, we get the layout we expect.</li>
<li>Outside this class we define motd::register, which other modules will call; the content they supply is handed to concat::fragment with a default order of 20 (higher than the value we used for the header and lower than the footer's)</li>
</ul>
<p>Finally, in each of my modules I include the line:
<code>motd::register{"someawesomemodule":}</code></p>
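A module that wants to control its text or position could also use the optional parameters of the define above - a hypothetical example (the resource name and content here are made up):

```puppet
# hypothetical: register with explicit content and a custom position
motd::register { 'web':
  content => 'web(apache2)',
  order   => 30,
}
```

With an order of 30 the fragment still lands between the header (10) and the footer (90).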
<p>and now when I ssh to a node, I see a line like:
<code>Puppet modules: web ssh</code></p>
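If you're using the alternative approach mentioned earlier - having Puppet write to a file other than motd.tail - the extra update-motd.d script could be as small as this (a hypothetical sketch; the script and file names are made up):

```shell
#!/bin/sh
# hypothetical /etc/update-motd.d/98-puppet-modules
# pam_motd runs each script in /etc/update-motd.d/ at login and
# concatenates their stdout into /etc/motd - 99-footer does exactly
# this with /etc/motd.tail
if [ -f /etc/motd.puppet ]; then
    cat /etc/motd.puppet
fi
```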
<p>It's a fairly simple little thing, but quite pleasing and from here out it's almost zero effort - just adding the motd::register calls to each module.</p>Adventures in Puppet: Tangled Strings2010-08-04T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-08-04:/2010/08/04/adventures-in-puppet-tangled-strings.html<p>I am trying to do as much management on my new VM servers as possible with Puppet, but these are machines I still frequently log on to, and not everything is managed by Puppet, so it's entirely possible that in a fit of forgetfulness I will start editing a file …</p><p>I am trying to do as much management on my new VM servers as possible with Puppet, but these are machines I still frequently log on to, and not everything is managed by Puppet, so it's entirely possible that in a fit of forgetfulness I will start editing a file that Puppet is managing and then be annoyed when my changes are lost next time Puppet runs.
Since prior preparation and planning prevents pitifully poor performance, I decided to do something about this.
Thus, I present a VIM plugin called TangledStrings, which I'm distributing as a Vimball (.vba) you can download from its <a href="http://launchpad.net/tangledstrings" title="TangledStrings">project page</a> on Launchpad. For more information on Vimball formatted plugins, see <a href="http://vimdoc.sourceforge.net/htmldoc/pi_vimball.html" title="Vimball Documentation">this page</a>. To install the plugin, simply:</p>
<ul>
<li>vim tangledstrings.vba</li>
<li>Follow the instructions from Vimball to type: :so %</li>
</ul>
<p>By default, TangledStrings will show a (configurable) warning message when you load a Puppet-owned file:
<a href="http://www.tenshu.net/wp-content/uploads/2010/08/puppetstrings_alert.png"><img src="http://www.tenshu.net/wp-content/uploads/2010/08/puppetstrings_alert.png" title="tangledstrings_alert" class="aligncenter size-full wp-image-11573" width="403" height="127" /></a>
This message can be disabled, and you can choose to enable a persistent message in the VIM status line instead:
<a href="http://www.tenshu.net/wp-content/uploads/2010/08/tangledstrings_statusline.png"><img src="http://www.tenshu.net/wp-content/uploads/2010/08/tangledstrings_statusline.png" title="tangledstrings_statusline" class="aligncenter size-full wp-image-11574" width="403" height="127" /></a>
(or you could choose to enable both of these methods).
For more information, see the documentation included in the Vimball which you can display with the VIM command:</p>
<div class="highlight"><pre><span></span><code>:help TangledStrings
</code></pre></div>
<p>Suggestions, improvements, patches, etc. are most welcome! Email me or use Launchpad to file bugs and propose merges.</p>TangledStrings 1.0 released!2010-08-04T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-08-04:/2010/08/04/tangledstrings-10-released.html<p>I'm very pleased to announce the release of 1.0 of TangledStrings, a VIM plugin to help Puppet users avoid the confusion and frustration of editing a file that Puppet is managing and subsequently losing ones changes as it is replaced by Puppet's version.</p>Delightful Hybridisation2010-07-29T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-29:/2010/07/29/delightful-hybridisation.html<p>I've probably mentioned before that I really like the music of <a href="http://www.hybridsoundsystem.com/" title="Hybrid">Hybrid</a>, so I figured I'd pimp some stuff of theirs that's floated onto the web recently:</p>
<ul>
<li>A <a href="http://www.hybridsoundsystem.com/2010/07/19/glastonbury-recordings/" title="Hybrid at Glastonbury 2010">couple of videos</a> from their live set at Glastonbury this year (I'm very gutted that I didn't make it to see them …</li></ul><p>I've probably mentioned before that I really like the music of <a href="http://www.hybridsoundsystem.com/" title="Hybrid">Hybrid</a>, so I figured I'd pimp some stuff of theirs that's floated onto the web recently:</p>
<ul>
<li>A <a href="http://www.hybridsoundsystem.com/2010/07/19/glastonbury-recordings/" title="Hybrid at Glastonbury 2010">couple of videos</a> from their live set at Glastonbury this year (I'm very gutted that I didn't make it to see them)</li>
<li>Their latest Frisky Radio mixtape just hit Soundcloud, <a href="http://soundcloud.com/hybridsoundsystem/hybrid-june-2010" title="Hybrid June 2010 Frisky Radio Mix">here</a>.</li>
</ul>
<p>Enjoy!</p>Dream a little dream of me2010-07-17T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-17:/2010/07/17/dream-little-dream-of-me.html<p>Last night I had a lovely meal out and then saw Inception with Rike and some friends.
I've really enjoyed all of Christopher Nolan's previous films and I think he does an excellent job of creating surprising and compelling stories.
I'm not really going to say anything about the plot …</p><p>Last night I had a lovely meal out and then saw Inception with Rike and some friends.
I've really enjoyed all of Christopher Nolan's previous films and I think he does an excellent job of creating surprising and compelling stories.
I'm not really going to say anything about the plot, other than to advise you avoid reading anything about it until you've seen it - not because there's anything particularly secret, but because it's nice to not have any preconceptions about what might happen.
For me, the best films have me leaving the cinema totally caught up in their world, my mind reeling with the possibilities of what they have explored. Inception achieved this, and I want to see it again, although preferably at home on Bluray so I can hear every word of dialogue properly - a surprising shortcoming of one of London's flagship cinemas.</p>Random puppetry2010-07-14T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-14:/2010/07/14/random-puppetry.html<p>I was talking to a colleague earlier about Puppet and its ability to install packages. I'd not really given it much thought beyond using it to install packages on classes of machines, but he mentioned one particular package which gets updated quite frequently, but is extremely low risk to update …</p><p>I was talking to a colleague earlier about Puppet and its ability to install packages. I'd not really given it much thought beyond using it to install packages on classes of machines, but he mentioned one particular package which gets updated quite frequently, but is extremely low risk to update - tzdata. By setting this to "ensure => latest" rather than "ensure => present" I can forget about ever having to upgrade that package again \o/
Simple really, but it hadn't occurred to me.</p>Pick a letter, any letter2010-07-07T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-07:/2010/07/07/pick-letter-any-letter.html<p>Earlier on my laptop suffered a slight mishap which resulted in a key popping off. I examined the mechanism and it didn't obviously go back on by itself, so I googled around a little and landed on the helpful chaps at <a href="http://www.laptopkey.com/installation_guides.php">laptopkey.com</a>. I watched the video that pertains to …</p><p>Earlier on my laptop suffered a slight mishap which resulted in a key popping off. I examined the mechanism and it didn't obviously go back on by itself, so I googled around a little and landed on the helpful chaps at <a href="http://www.laptopkey.com/installation_guides.php">laptopkey.com</a>. I watched the video that pertains to my exact model, figured out which bits of metal had been slightly bent and a few minutes later I had everything back in working order.
It's almost a shame I didn't need to buy anything from them in return for using their helpful video ;)</p>My python also spins webs2010-07-06T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-06:/2010/07/06/my-python-also-spins-webs.html<p>With Terminator 0.94 released I'm turning my little brain onto an idea I have for a web service and obviously I'm sticking with python.
Clearly writing all the web gubbins by hand is mental, so I'm playing with Flask, a microframework for web apps. So far I'm really liking …</p><p>With Terminator 0.94 released I'm turning my little brain onto an idea I have for a web service and obviously I'm sticking with python.
Clearly writing all the web gubbins by hand is mental, so I'm playing with Flask, a microframework for web apps. So far I'm really liking it, but it's taken a while to figure out Flask and sqlalchemy.
I'm not at all convinced that this is going to be in any way scalable, but it's a nice way to test my idea :)</p>Who wants to see something really ugly?2010-07-06T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-06:/2010/07/06/who-wants-to-see-something-really-ugly.html<p>I think it should be abundantly clear from my postings here that I'm not a very good programmer, and this means I give myself a lot of free rope to do some very stupid things.
I'm in constant need of debugging information and in Terminator particularly where we have lots …</p><p>I think it should be abundantly clear from my postings here that I'm not a very good programmer, and this means I give myself a lot of free rope to do some very stupid things.
I'm in constant need of debugging information, particularly in Terminator, where we have lots of objects all interacting and reparenting all the time. We've had a simple dbg() method for a long time, but I was getting very bored of typing out dbg('Class::method:: Some message about %d' % foo), so I decided to see what could be done about inferring the Class and method parts of the message.
It turns out that python is very good at introspecting its own runtime, so back in January, armed with my own stupidity and some help from various folks on the Internet, I came up with the following:</p>
<div class="highlight"><pre><span></span><code><span class="err">#</span><span class="w"> </span><span class="k">set</span><span class="w"> </span><span class="n">this</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="k">true</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">enable</span><span class="w"> </span><span class="n">debugging</span><span class="w"> </span><span class="k">output</span>
<span class="n">DEBUG</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">False</span>
<span class="err">#</span><span class="w"> </span><span class="k">set</span><span class="w"> </span><span class="n">this</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="k">true</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">additionally</span><span class="w"> </span><span class="n">list</span><span class="w"> </span><span class="n">filenames</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="n">debugging</span>
<span class="n">DEBUGFILES</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">False</span>
<span class="err">#</span><span class="w"> </span><span class="n">list</span><span class="w"> </span><span class="k">of</span><span class="w"> </span><span class="n">classes</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">show</span><span class="w"> </span><span class="n">debugging</span><span class="w"> </span><span class="k">for</span><span class="p">.</span><span class="w"> </span><span class="n">empty</span><span class="w"> </span><span class="n">list</span><span class="w"> </span><span class="n">means</span><span class="w"> </span><span class="n">show</span><span class="w"> </span><span class="ow">all</span><span class="w"> </span><span class="n">classes</span>
<span class="n">DEBUGCLASSES</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="err">[]</span>
<span class="err">#</span><span class="w"> </span><span class="n">list</span><span class="w"> </span><span class="k">of</span><span class="w"> </span><span class="n">methods</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">show</span><span class="w"> </span><span class="n">debugging</span><span class="w"> </span><span class="k">for</span><span class="p">.</span><span class="w"> </span><span class="n">empty</span><span class="w"> </span><span class="n">list</span><span class="w"> </span><span class="n">means</span><span class="w"> </span><span class="n">show</span><span class="w"> </span><span class="ow">all</span><span class="w"> </span><span class="n">methods</span>
<span class="n">DEBUGMETHODS</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="err">[]</span>
<span class="n">def</span><span class="w"> </span><span class="n">dbg</span><span class="p">(</span><span class="nf">log</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="ss">""</span><span class="p">)</span><span class="err">:</span>
<span class="w"> </span><span class="ss">"""Print a message if debugging is enabled"""</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="nl">DEBUG</span><span class="p">:</span>
<span class="w"> </span><span class="n">stackitem</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">inspect</span><span class="p">.</span><span class="n">stack</span><span class="p">()</span><span class="o">[</span><span class="n">1</span><span class="o">]</span>
<span class="w"> </span><span class="n">parent_frame</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">stackitem</span><span class="o">[</span><span class="n">0</span><span class="o">]</span>
<span class="w"> </span><span class="k">method</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">parent_frame</span><span class="p">.</span><span class="n">f_code</span><span class="p">.</span><span class="n">co_name</span>
<span class="w"> </span><span class="k">names</span><span class="p">,</span><span class="w"> </span><span class="n">varargs</span><span class="p">,</span><span class="w"> </span><span class="n">keywords</span><span class="p">,</span><span class="w"> </span><span class="n">local_vars</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">inspect</span><span class="p">.</span><span class="n">getargvalues</span><span class="p">(</span><span class="n">parent_frame</span><span class="p">)</span>
<span class="w"> </span><span class="k">try</span><span class="err">:</span>
<span class="w"> </span><span class="n">self_name</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">names</span><span class="o">[</span><span class="n">0</span><span class="o">]</span>
<span class="w"> </span><span class="n">classname</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">local_vars</span><span class="o">[</span><span class="n">self_name</span><span class="o">]</span><span class="p">.</span><span class="n">__class__</span><span class="p">.</span><span class="n">__name__</span>
<span class="w"> </span><span class="ow">except</span><span class="w"> </span><span class="nl">IndexError</span><span class="p">:</span>
<span class="w"> </span><span class="n">classname</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="ss">"noclass"</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="nl">DEBUGFILES</span><span class="p">:</span>
<span class="w"> </span><span class="n">line</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">stackitem</span><span class="o">[</span><span class="n">2</span><span class="o">]</span>
<span class="w"> </span><span class="n">filename</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">parent_frame</span><span class="p">.</span><span class="n">f_code</span><span class="p">.</span><span class="n">co_filename</span>
<span class="w"> </span><span class="n">extra</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="ss">" (%s:%s)"</span><span class="w"> </span><span class="o">%</span><span class="w"> </span><span class="p">(</span><span class="n">filename</span><span class="p">,</span><span class="w"> </span><span class="n">line</span><span class="p">)</span>
<span class="w"> </span><span class="k">else</span><span class="err">:</span>
<span class="w"> </span><span class="n">extra</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="ss">""</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="n">DEBUGCLASSES</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="err">[]</span><span class="w"> </span><span class="ow">and</span><span class="w"> </span><span class="n">classname</span><span class="w"> </span><span class="ow">not</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="nl">DEBUGCLASSES</span><span class="p">:</span>
<span class="w"> </span><span class="k">return</span>
<span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="n">DEBUGMETHODS</span><span class="w"> </span><span class="o">!=</span><span class="w"> </span><span class="err">[]</span><span class="w"> </span><span class="ow">and</span><span class="w"> </span><span class="k">method</span><span class="w"> </span><span class="ow">not</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="nl">DEBUGMETHODS</span><span class="p">:</span>
<span class="w"> </span><span class="k">return</span>
<span class="w"> </span><span class="k">try</span><span class="err">:</span>
<span class="w"> </span><span class="k">print</span><span class="w"> </span><span class="o">>></span><span class="w"> </span><span class="n">sys</span><span class="p">.</span><span class="n">stderr</span><span class="p">,</span><span class="w"> </span><span class="ss">"%s::%s: %s%s"</span><span class="w"> </span><span class="o">%</span><span class="w"> </span><span class="p">(</span><span class="n">classname</span><span class="p">,</span><span class="w"> </span><span class="k">method</span><span class="p">,</span><span class="w"> </span><span class="nf">log</span><span class="p">,</span><span class="w"> </span><span class="n">extra</span><span class="p">)</span>
<span class="w"> </span><span class="ow">except</span><span class="w"> </span><span class="nl">IOError</span><span class="p">:</span>
<span class="w"> </span><span class="n">pass</span>
</code></pre></div>
<p>How's about that for shockingly bad? ;)
It also adds a really impressive amount of overhead to the execution time.
I added the DEBUGCLASSES and DEBUGMETHODS lists so I could cut down on the huge amount of output - these are hooked up to command line options, so you can do something like "terminator -d --debug-classes=Terminal" and only receive debugging messages from the Terminal module.
I'm not exactly sure what I hope to gain from this post, other than ridicule on the Internet, but maybe, just maybe, someone will pop up and point out how stupid I am in a way that turns this into a 2 line, low-overhead function :D</p>A good day2010-07-04T00:00:00+01:002023-11-22T17:32:03+00:00Chris Jonestag:cmsj.net,2010-07-04:/2010/07/04/good-day-2.html<p>Today has been about creating, not consuming. Apart from half-watching Primal Fear with Rike, I have spent the day fixing bugs in Terminator and playing with the Akai Synthstation app on my iPad. I suspect I'm not going to be ruling the clubs anytime soon, and the UI is pretty …</p><p>Today has been about creating, not consuming. Apart from half-watching Primal Fear with Rike, I have spent the day fixing bugs in Terminator and playing with the Akai Synthstation app on my iPad. I suspect I'm not going to be ruling the clubs anytime soon, and the UI is pretty dreadful for composing music, but it has a good library of sounds and synth mangling knobs :)
I even filmed myself playing some of the parts and edited them together into a little music video, but it's really very poor ;)
Rike's going to be out for most of tomorrow, so I have to decide between doing more of what I've been doing today, playing PS3 games or going out myself. Tricky!</p>Terminator 0.94 released!2010-07-04T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-07-04:/2010/07/04/terminator-094-released.html<p>Lots of bug fixes and some improvements to the preferences are in this release, as well as a couple of new plugins for watching terminals for activity, or taking screenshots of individual terminals.
See the changelog for full details.</p>The Lawnmower Man2010-06-08T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-06-08:/2010/06/08/lawnmower-man.html<h1>Introduction</h1>
<p>This website shares a server with various other network services that form the foundation of my online life (i.e. IRC and Email) and I've been running into capacity issues in the last few months, so I'm running an experiment whereby I upgrade to brand new hardware (Quad Core …</p><h1>Introduction</h1>
<p>This website shares a server with various other network services that form the foundation of my online life (i.e. IRC and Email) and I've been running into capacity issues in the last few months, so I'm running an experiment whereby I upgrade to brand new hardware (Quad Core i7, 8GB of RAM) and partition the available resources across virtual machines so the various network services are isolated into logical security zones.</p>
<h1>Whining</h1>
<p>I have plenty of experience using Xen for this sort of thing, but that's becoming more and more irrelevant in newer kernels/distributions. As much as I think that's a shame and a stupid upstream decision, I can't change it, so I need to move on to KVM and libvirt.</p>
<h1>Resolution</h1>
<p>So, with the beefy new server booted up in a -server kernel and a big, empty LVM Volume Group I got to work creating some virtual machines. This post is mainly a reminder to myself of the things I need to do for each VM :)</p>
<h1>Action</h1>
<p>These are the steps I used to make a VM with 1GB of RAM, 10GB / and 1GB of swap:</p>
<h3>Create an LVM Logical Volume</h3>
<div class="highlight"><pre><span></span><code>lvcreate -L11G -n somehostname VolumeGroup
</code></pre></div>
<h3>Create a VM image and libvirt XML definition</h3>
<div class="highlight"><pre><span></span><code>ubuntu-vm-builder kvm lucid --arch amd64 --mem=1024 --cpus=1 \
--raw=/dev/VolumeGroup/somehostname --rootsize=10240 --swapsize=1024 \
--kernel-flavour=server --hostname=somehostname \
--mirror=http://archive.ubuntu.com/ubuntu/ --components=main,universe \
--name 'Chris Jones' --user cmsj --pass 'ubuntu' --bridge virbr0 \
--libvirt qemu:///system --addpkg vim --addpkg ssh --addpkg ubuntu-minimal
</code></pre></div>
<p>Catchy command, huh? ;)</p>
<h3>Wait</h3>
<p>(building the VM will take a few minutes)</p>
<h3>Modify the libvirt XML definition for performance</h3>
<p>The best driver for disk/networking is the paravirtualised "virtio" driver. I found that ubuntu-vm-builder had already configured the networking to use this, but not the disk, so I modified the disk section to look like this:</p>
<div class="highlight"><pre><span></span><code><span class="nt"><disk</span><span class="w"> </span><span class="na">type=</span><span class="s">'block'</span><span class="w"> </span><span class="na">device=</span><span class="s">'disk'</span><span class="nt">></span>
<span class="w"> </span><span class="nt"><source</span><span class="w"> </span><span class="na">dev=</span><span class="s">'/dev/VolumeGroup/somehostname'</span><span class="nt">/></span>
<span class="w"> </span><span class="nt"><target</span><span class="w"> </span><span class="na">dev=</span><span class="s">'vda'</span><span class="w"> </span><span class="na">bus=</span><span class="s">'virtio'</span><span class="nt">/></span>
<span class="nt"></disk></span>
</code></pre></div>
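For comparison, the virtio network stanza that ubuntu-vm-builder had already produced looks roughly like this (a sketch from memory rather than a copy of a real definition; the bridge name matches the command line above):

```xml
<interface type='bridge'>
  <source bridge='virbr0'/>
  <model type='virtio'/>
</interface>
```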
<h3>Modify the libvirt XML definition for emulated serial console</h3>
<p>I don't really want to use VNC to talk to the console of my VMs, so I add the following to the <devices> section of the XML definition to make a virtualised serial port and consider it a console:</p>
<div class="highlight"><pre><span></span><code><span class="w"> </span><span class="nt"><serial</span><span class="w"> </span><span class="na">type=</span><span class="s">'pty'</span><span class="nt">></span>
<span class="w"> </span><span class="nt"><target</span><span class="w"> </span><span class="na">port=</span><span class="s">'0'</span><span class="nt">/></span>
<span class="w"> </span><span class="nt"></serial></span>
<span class="w"> </span><span class="nt"><console</span><span class="w"> </span><span class="na">type=</span><span class="s">'pty'</span><span class="nt">></span>
<span class="w"> </span><span class="nt"><target</span><span class="w"> </span><span class="na">port=</span><span class="s">'0'</span><span class="nt">/></span>
<span class="w"> </span><span class="nt"></console></span>
</code></pre></div>
<h3>Modify the libvirt XML definition for a better CPU</h3>
<p>I'm running this on an Intel Core i7 (Nehalem), but the newest CPU model libvirt defines is a Core2Duo, so we'll ask for that as a minimum in the root of the &lt;domain&gt; section:</p>
<div class="highlight"><pre><span></span><code><span class="nt"><cpu</span><span class="w"> </span><span class="na">match=</span><span class="s">'minimum'</span><span class="nt">></span>
<span class="w"> </span><span class="nt"><model></span>core2duo<span class="nt"></model></span>
<span class="nt"></cpu></span>
</code></pre></div>
<h3>Import the XML definition into the running libvirt daemon</h3>
<div class="highlight"><pre><span></span><code><span class="n">virsh define /etc/libvirt/qemu/somehostname.xml</span>
</code></pre></div>
<h3>Mount the VM's root filesystem</h3>
<p>The Logical Volume we created is treated as a whole disk, not a mountable partition, but dmsetup can expose the partitions within it as block devices of their own, and these mappings should still be present after running ubuntu-vm-builder:</p>
<div class="highlight"><pre><span></span><code>mkdir /mnt/tmpvmroot
mount /dev/mapper/VolumeGroup-somehostnamep1 /mnt/tmpvmroot
</code></pre></div>
<h3>Fix fstab in the VM</h3>
<p>Edit /mnt/tmpvmroot/etc/fstab and replace every reference to hda with vda (i.e. s/hda/vda/), to match the virtio disk we configured earlier.</p>
<h3>Configure serial console in the VM</h3>
<p>Edit /mnt/tmpvmroot/etc/init/ttyS0.conf and place the following in it:</p>
<div class="highlight"><pre><span></span><code><span class="w"> </span>#<span class="w"> </span><span class="nv">ttyS0</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="nv">getty</span>
<span class="w"> </span>#
<span class="w"> </span>#<span class="w"> </span><span class="nv">This</span><span class="w"> </span><span class="nv">service</span><span class="w"> </span><span class="nv">maintains</span><span class="w"> </span><span class="nv">a</span><span class="w"> </span><span class="nv">getty</span><span class="w"> </span><span class="nv">on</span><span class="w"> </span><span class="nv">ttyS0</span><span class="w"> </span><span class="nv">from</span><span class="w"> </span><span class="nv">the</span><span class="w"> </span><span class="nv">point</span><span class="w"> </span><span class="nv">the</span><span class="w"> </span><span class="nv">system</span><span class="w"> </span><span class="nv">is</span>
<span class="w"> </span>#<span class="w"> </span><span class="nv">started</span><span class="w"> </span><span class="k">until</span><span class="w"> </span><span class="nv">it</span><span class="w"> </span><span class="nv">is</span><span class="w"> </span><span class="nv">shut</span><span class="w"> </span><span class="nv">down</span><span class="w"> </span><span class="nv">again</span>.
<span class="w"> </span><span class="nv">start</span><span class="w"> </span><span class="nv">on</span><span class="w"> </span><span class="nv">stopped</span><span class="w"> </span><span class="nv">rc</span><span class="w"> </span><span class="nv">RUNLEVEL</span><span class="o">=</span>[<span class="mi">2345</span>]
<span class="w"> </span><span class="nv">stop</span><span class="w"> </span><span class="nv">on</span><span class="w"> </span><span class="nv">runlevel</span><span class="w"> </span>[<span class="o">!</span><span class="mi">2345</span>]
<span class="w"> </span><span class="nv">respawn</span>
<span class="w"> </span><span class="k">exec</span><span class="w"> </span><span class="o">/</span><span class="nv">sbin</span><span class="o">/</span><span class="nv">getty</span><span class="w"> </span><span class="o">-</span><span class="nv">L</span><span class="w"> </span><span class="mi">115200</span><span class="w"> </span><span class="nv">ttyS0</span><span class="w"> </span><span class="nv">xterm</span>
</code></pre></div>
<p>Edit /mnt/tmpvmroot/boot/grub/menu.lst and look for the commented "defoptions" line. It stays commented (update-grub parses it anyway); change it to:</p>
<div class="highlight"><pre><span></span><code> # defoptions=console=ttyS0 console=tty0
</code></pre></div>
<p>(the default "quiet splash" is not useful for servers IMHO)</p>
<h3>Unmount the VM's root filesystem</h3>
<div class="highlight"><pre><span></span><code>umount /mnt/tmpvmroot
rmdir /mnt/tmpvmroot
</code></pre></div>
<h3>Start the VM</h3>
<div class="highlight"><pre><span></span><code>virsh start somehostname
</code></pre></div>
<h3>SSH into the VM</h3>
<p>I didn't specify any networking details to ubuntu-vm-builder, so the machine will boot and try to get an address from DHCP. By default you'll have a bridge device for libvirt called virbr0 and dnsmasq will be running, so watch syslog for the VM getting its address.</p>
<div class="highlight"><pre><span></span><code>ssh cmsj@192.168.122.xyz
</code></pre></div>
<p>you should now be in your VM! Now all you need to do is configure it to do things and then fix its networking. My plan is to switch the VMs to static IPs and then use NAT to forward connections from public IPs to the VMs, but you could bridge them onto the host's main ethernet device and assign public IPs directly to the VMs.</p>Python decisions2010-06-03T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-06-03:/2010/06/03/python-decisions.html<p>Every time I find myself hacking on some Python I find myself second guessing all sorts of tiny design decisions and so I figure the only way to get any kind of perspective on them is to talk about them. Either I'll achieve more clarity through constructing explanations of what …</p><p>Every time I find myself hacking on some Python I find myself second guessing all sorts of tiny design decisions and so I figure the only way to get any kind of perspective on them is to talk about them. Either I'll achieve more clarity through constructing explanations of what I was thinking, or people will comment with useful insights. Hopefully the latter, but this is hardly the most popular blog in the world ;)
So. What shall we look at first. Well, I just hacked up a tiny script last night to answer a simple question:</p>
<p><em>Is most of my music collection from the 90s?</em></p>
<p>Obviously what I want to do here is examine the ID3 tags of the files in my music collection and see how they're distributed. A quick search with apt showed that Ubuntu 10.04 has two python libraries for dealing with ID3 tags and a quick play with each suggested that the one with the API most relevant to my interests was <a href="apt:python-eyed3" title="Install eyeD3">eyeD3</a>. After a few test iterations of the script I was getting bored of waiting for it to silently scan the roughly 4000 MP3s I have, so I did another quick search and found a <a href="apt:python-progressbar" title="Install progressbar">progress bar</a> library.
So that's all of the motive and opportunity established, now let's examine the means to the end. If you want to follow along at home, the whole script is <a href="/wp-content/uploads/2010/06/musicdecades.py_.txt" title="musicdecades.py.txt">here</a>.</p>
<div class="highlight"><pre><span></span><code><span class="kn">import</span> <span class="nn">sys</span>

<span class="k">try</span><span class="p">:</span>
    <span class="kn">import</span> <span class="nn">eyeD3</span>
    <span class="kn">import</span> <span class="nn">progressbar</span> <span class="k">as</span> <span class="nn">pbar</span>
<span class="k">except</span> <span class="ne">ImportError</span><span class="p">:</span>
    <span class="nb">print</span><span class="p">(</span><span class="s2">"You should make sure python-eyed3 and python-progressbar are installed"</span><span class="p">)</span>
    <span class="n">sys</span><span class="o">.</span><span class="n">exit</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span>
</code></pre></div>
<p>First off this is the section where I'm importing the two non-default python libraries that I depend on. I want to provide a good experience when they're not installed, so I catch the exception and tell people the Debian/Ubuntu package names they need, and exit gracefully. I rename the progressbar module as I import it just because "progressbar" is annoyingly long as a name, and I don't like doing "from foo import *".
Skipping further on, we find the code that extracts the ID3 year tag:</p>
<div class="highlight"><pre><span></span><code><span class="n">year</span> <span class="o">=</span> <span class="n">tag</span><span class="o">.</span><span class="n">getYear</span><span class="p">()</span> <span class="ow">or</span> <span class="s1">'Unknown'</span>
</code></pre></div>
<p>This is something I'm really not sure about the "correctness" of; one of the reasons I went with the eyeD3 library was that the getYear() method returns None if it can't find any data, but I don't really want to capture the result, test it explicitly and set it to "Unknown" if it's None, so I went with the above code, which only needs a single line and is (IMHO) highly readable.
This is ultimately the crux of the entire program - we've now collected the year, so we can work out which decade it's from:</p>
<div class="highlight"><pre><span></span><code><span class="k">if</span> <span class="n">year</span> <span class="o">!=</span> <span class="s1">'Unknown'</span><span class="p">:</span>
    <span class="n">year</span> <span class="o">=</span> <span class="s2">"</span><span class="si">%s</span><span class="s2">0s"</span> <span class="o">%</span> <span class="nb">str</span><span class="p">(</span><span class="n">year</span><span class="p">)[:</span><span class="mi">3</span><span class="p">]</span>
</code></pre></div>
<p>If this isn't an unknown year we chop the final digit off and replace it with "0s", turning e.g. 1994 into the decade label "1990s". Job done!
Next up, another style question. Rather than store the year we just processed I want to know how many of each decade have been found, so the obvious choice is a dict where the keys are the decades and the values are the number of times each decade has been found. One option would be to pre-fill the dict with all the decades, each with a value of zero, but that seems redundant and ugly, so instead I start out with an empty dict. This presents a challenge - if we find a decade that isn't already a key in the dict (which will frequently be the case) we need to notice that and add it. We could do this by pre-emptively testing the dict with its has_key() method, but that struck me as annoyingly wordy, so I went with:</p>
<div class="highlight"><pre><span></span><code><span class="k">try</span><span class="p">:</span>
    <span class="n">years</span><span class="p">[</span><span class="n">year</span><span class="p">]</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="k">except</span> <span class="ne">KeyError</span><span class="p">:</span>
    <span class="n">years</span><span class="p">[</span><span class="n">year</span><span class="p">]</span> <span class="o">=</span> <span class="mi">1</span>
</code></pre></div>
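<p>For comparison, the same tally can be kept without catching an exception at all, using dict.get() with a default, or collections.defaultdict (both already in the standard library back then). A quick sketch with made-up data:</p>

```python
from collections import defaultdict

tracks = ['1990s', '1990s', '2000s', 'Unknown']  # hypothetical decades

# dict.get() supplies a zero for decades we haven't seen yet...
years = {}
for decade in tracks:
    years[decade] = years.get(decade, 0) + 1

# ...and defaultdict(int) makes the dict supply the zero itself
counts = defaultdict(int)
for decade in tracks:
    counts[decade] += 1
```

<p>Whether either of those is clearer than the try/except version is exactly the kind of style question this post is about.</p>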
<p>If we are incrementing a year that isn't already in the dict, python will raise a KeyError, at which point we know what's happened and know the correct value is 1, so we just set it explicitly. Seems like the simplest solution, but is it the sanest?
The only other thing I wanted to say is a complaint - having built up the dict I then want to print it nicely, so I have a quick list comprehension to produce a list of strings of the format "19xx: yy" (i.e. the decade and the final number of tracks found for that decade), which I then join together using:</p>
<div class="highlight"><pre><span></span><code><span class="s1">', '</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">OUTPUT</span><span class="p">)</span>
</code></pre></div>
<p>which I hate! Why can't I do:</p>
<div class="highlight"><pre><span></span><code><span class="n">OUTPUT</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="s1">', '</span><span class="p">)</span>
</code></pre></div>
<p>(where "OUTPUT" is the list of strings). If that were possible, what I'd actually do is tack the .join() onto the end of the list comprehension and a single line would turn the dict into a printable string.
So there we have it, my thoughts on the structure of my script. I'd also add that I've become mildly obsessive about getting good scores from pylint on my code, which is why it's rigorously formatted, docstring-ed and why the variable names in the __main__ section are in capitals.
What are your thoughts?
Oh, and the answer is no, most of my music is from the 2000s. The 1990s come in second :)</p>gtk icon cache search tool2010-05-13T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-05-13:/2010/05/13/gtk-icon-cache-search-tool.html<p>Earlier on this evening I was asking the very excellent Ted Gould about a weird problem with my Gtk+ icon theme - an app I'd previously installed by hand in /usr/local/, but subsequently removed, had broken icons because Gtk+ was looking in /usr/local/share/icons/ instead of /usr/share …</p><p>Earlier on this evening I was asking the very excellent Ted Gould about a weird problem with my Gtk+ icon theme - an app I'd previously installed by hand in /usr/local/, but subsequently removed, had broken icons because Gtk+ was looking in /usr/local/share/icons/ instead of /usr/share/icons/.
We did a little digging and realised I had an icon theme cache file in /usr/local/ that was overriding the one in /usr/. A bit of deleting later and it's back, but in the process we whipped up a little bit of python to print out the filename of an icon given an icon name.</p>
<div class="highlight"><pre><span></span><code><span class="ch">#!/usr/bin/python</span>
<span class="c1"># gtk-find-icon by Chris Jones &lt;cmsj@tenshu.net&gt;</span>
<span class="c1"># Copyright 2010. GPL v2.</span>
<span class="kn">import</span> <span class="nn">sys</span>
<span class="kn">import</span> <span class="nn">gtk</span>

<span class="n">THEME</span> <span class="o">=</span> <span class="n">gtk</span><span class="o">.</span><span class="n">IconTheme</span><span class="p">()</span>
<span class="n">ICON</span> <span class="o">=</span> <span class="n">THEME</span><span class="o">.</span><span class="n">lookup_icon</span><span class="p">(</span><span class="n">sys</span><span class="o">.</span><span class="n">argv</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span>
                         <span class="n">gtk</span><span class="o">.</span><span class="n">ICON_SIZE_MENU</span><span class="p">,</span>
                         <span class="n">gtk</span><span class="o">.</span><span class="n">ICON_LOOKUP_USE_BUILTIN</span><span class="p">)</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">ICON</span><span class="p">:</span>
    <span class="nb">print</span><span class="p">(</span><span class="s2">"None found"</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
    <span class="nb">print</span><span class="p">(</span><span class="n">ICON</span><span class="o">.</span><span class="n">get_filename</span><span class="p">())</span>
</code></pre></div>Hybrid - Disappear Here2010-04-19T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-04-19:/2010/04/19/hybrid-disappear-here.html<p>It's a while since I wrote anything in the Music category of this blog, and since everything has been about my software projects recently, I figure it's time to mix things up a little. It's also just a few weeks since the release of the latest studio album by probably …</p><p>It's a while since I wrote anything in the Music category of this blog, and since everything has been about my software projects recently, I figure it's time to mix things up a little. It's also just a few weeks since the release of the latest studio album by probably my favourite band of the last few years, <a href="http://www.hybridsoundsystem.com/" title="Hybrid">Hybrid</a>.
The album is called Disappear Here and it's pretty damn good - I was going to describe the tracks individually, but that's what reviewers typically do and it always sounds insufferably poncy, so I suggest you just go to the <a href="http://www.disappearhere.info" title="Disappear Here">album's site</a> and listen to the damn thing yourself ;)
They also post semi-frequent hour long DJ mixes on their <a href="http://soundcloud.com/hybridsoundsystem">Soundcloud page</a>, which I would recommend!</p>Writing Terminator plugins2010-04-18T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-04-18:/2010/04/18/writing-terminator-plugins.html<h2>Terminator Plugin HOWTO</h2>
<p>One of the features of the new 0.9x series of Terminator releases that hasn't had a huge amount of announcement/discussion yet is the plugin system. I've posted previously about the decisions that went into the design of the plugin framework, but I figured now would …</p><h2>Terminator Plugin HOWTO</h2>
<p>One of the features of the new 0.9x series of Terminator releases that hasn't had a huge amount of announcement/discussion yet is the plugin system. I've posted previously about the decisions that went into the design of the plugin framework, but I figured now would be a good time to look at how to actually take advantage of it.
While the plugin system is really generic, so far there are only two points in the Terminator code that actually look for plugins - the Terminal context menu and the default URL opening code. If you find you'd like to write a plugin that interacts with a different part of Terminator, please let me know, I'd love to see some clever uses of plugins and I definitely want to expand the number of points that plugins can hook into.
The basics of a plugin</p>
<hr>
<p>A plugin is a class in a .py file in terminatorlib/plugins or ~/.config/terminator/plugins, but not all classes are automatically treated as plugins. Terminator will examine each of the .py files it finds for a list called 'available' and it will load each of the classes mentioned therein.
Additionally, it would be a good idea to import terminatorlib.plugin as that contains the base classes that other plugins should be derived from.
A quick example:</p>
<div class="highlight"><pre><span></span><code><span class="kn">import</span> <span class="nn">terminatorlib.plugin</span> <span class="k">as</span> <span class="nn">plugin</span>

<span class="n">available</span> <span class="o">=</span> <span class="p">[</span><span class="s1">'myfirstplugin'</span><span class="p">]</span>

<span class="k">class</span> <span class="nc">myfirstplugin</span><span class="p">(</span><span class="n">plugin</span><span class="o">.</span><span class="n">SomeBasePluginClass</span><span class="p">):</span>
    <span class="n">etc</span><span class="o">.</span>
</code></pre></div>
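<p>To make the discovery rules above concrete, here is a rough sketch of how such a loader could work - written in modern Python purely for illustration, and not Terminator's actual loading code:</p>

```python
import importlib.util
import os

def load_plugins(plugin_dir):
    """Instantiate every class named in each module's 'available' list."""
    plugins = []
    for name in sorted(os.listdir(plugin_dir)):
        if not name.endswith('.py'):
            continue
        spec = importlib.util.spec_from_file_location(
            name[:-3], os.path.join(plugin_dir, name))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Classes not listed in 'available' are ignored entirely
        for classname in getattr(module, 'available', []):
            plugins.append(getattr(module, classname)())
    return plugins
```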
<p>So now let's move on to the simplest type of plugin currently available in Terminator, a URL handler.
URL Handlers</p>
<hr>
<p>This type of plugin adds new regular expressions to match text in the terminal that should be handled as URLs. We ship an example of this with Terminator, it's a handler that adds support for the commonly used format for Launchpad. Ignoring the comments and the basics above, this is ultimately all it is:</p>
<div class="highlight"><pre><span></span><code><span class="k">class</span> <span class="nc">LaunchpadBugURLHandler</span><span class="p">(</span><span class="n">plugin</span><span class="o">.</span><span class="n">URLHandler</span><span class="p">):</span>
    <span class="n">capabilities</span> <span class="o">=</span> <span class="p">[</span><span class="s1">'url_handler'</span><span class="p">]</span>
    <span class="n">handler_name</span> <span class="o">=</span> <span class="s1">'launchpad_bug'</span>
    <span class="n">match</span> <span class="o">=</span> <span class="s1">'\\b(lp|LP):?\s?#?[0-9]+(,\s*#?[0-9]+)*\\b'</span>

    <span class="k">def</span> <span class="nf">callback</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">url</span><span class="p">):</span>
        <span class="k">for</span> <span class="n">item</span> <span class="ow">in</span> <span class="n">re</span><span class="o">.</span><span class="n">findall</span><span class="p">(</span><span class="sa">r</span><span class="s1">'[0-9]+'</span><span class="p">,</span> <span class="n">url</span><span class="p">):</span>
            <span class="k">return</span><span class="p">(</span><span class="s1">'https://bugs.launchpad.net/bugs/%s'</span> <span class="o">%</span> <span class="n">item</span><span class="p">)</span>
</code></pre></div>
<p>That's it! Let's break it down a little to see the important things here:</p>
<ul>
<li>inherit from plugin.URLHandler if you want to handle URLs.</li>
<li>include 'url_handler' in your capabilities list</li>
<li>URL handlers must specify a unique handler_name (no enforcement of uniqueness is performed by Terminator, so use some common sense with the namespace)</li>
<li>Terminator will call a method in your class called callback() and pass it the text that was matched. You must return a valid URL which will probably be based on this text.</li>
</ul>
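<p>Before wiring a pattern like this into a plugin, you can exercise it directly with the re module - a standalone sanity check, not part of the plugin API:</p>

```python
import re

# The pattern from the example above, written here as a raw string
# (equivalent to the '\\b...' escaping used in the plugin source)
match = r'\b(lp|LP):?\s?#?[0-9]+(,\s*#?[0-9]+)*\b'

hit = re.search(match, 'Fixed in trunk, see LP: #12345 for details')
matched = hit.group(0)  # the text Terminator would hand to callback()
# the callback then rebuilds a URL from the digits:
url = 'https://bugs.launchpad.net/bugs/%s' % re.findall(r'[0-9]+', matched)[0]
```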
<p>and that's all there is to it really. Next time you start terminator you should find the pattern you added gets handled as a URL!
Context menu items</p>
<hr>
<p>This type of plugin is a little more involved, but not hugely so, and as with URLHandler we ship an example in terminatorlib/plugins/custom_commands.py - a plugin that allows users to add custom commands to be sent to the terminal when selected. This also introduces a second aspect of writing more complex plugins: storing configuration. Terminator's shiny new configuration system (based on the excellent ConfigObj) exposes some API for plugins to use for loading and storing their configuration. The nuts and bolts here are:</p>
<div class="highlight"><pre><span></span><code><span class="kn">import</span> <span class="nn">gtk</span>

<span class="kn">import</span> <span class="nn">terminatorlib.plugin</span> <span class="k">as</span> <span class="nn">plugin</span>
<span class="kn">from</span> <span class="nn">terminatorlib.config</span> <span class="kn">import</span> <span class="n">Config</span>

<span class="n">available</span> <span class="o">=</span> <span class="p">[</span><span class="s1">'CustomCommandsMenu'</span><span class="p">]</span>

<span class="k">class</span> <span class="nc">CustomCommandsMenu</span><span class="p">(</span><span class="n">plugin</span><span class="o">.</span><span class="n">MenuItem</span><span class="p">):</span>
    <span class="n">capabilities</span> <span class="o">=</span> <span class="p">[</span><span class="s1">'terminal_menu'</span><span class="p">]</span>
    <span class="n">config</span> <span class="o">=</span> <span class="kc">None</span>

    <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
        <span class="bp">self</span><span class="o">.</span><span class="n">config</span> <span class="o">=</span> <span class="n">Config</span><span class="p">()</span>
        <span class="n">myconfig</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">plugin_get_config</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="vm">__class__</span><span class="o">.</span><span class="vm">__name__</span><span class="p">)</span>
        <span class="c1"># Now extract valid data from sections{}</span>

    <span class="k">def</span> <span class="nf">callback</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">menuitems</span><span class="p">,</span> <span class="n">menu</span><span class="p">,</span> <span class="n">terminal</span><span class="p">):</span>
        <span class="n">menuitems</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">gtk</span><span class="o">.</span><span class="n">MenuItem</span><span class="p">(</span><span class="s1">'some jazz'</span><span class="p">))</span>
</code></pre></div>
<p>This is a pretty simplified example, but it's sufficient to insert a menu item that says "some jazz". I'm not going to go into the detail of hooking up a handler to the 'activate' event of the MenuItem or other PyGTK mechanics, but this gives you the basic detail. The method that Terminator will call from your class is again "callback()" and you get passed a list you should add your menu structure to, along with references to the main menu object and the related Terminal. As the plugin system expands and matures I'd like to be more formal about the API that plugins should expect to be able to rely on, rather than having them poke around inside classes like Config and Terminal. Suggestions are welcome :)
Regarding the configuration storage API - the value returned by Config.plugin_get_config() is just a dict; it's whatever is currently configured for your plugin's name in the Terminator config file. There's no validation of this data, so you should check that what you get back is sane before using it. You can then set whatever you want in this dict, pass it to Config().plugin_set_config() with the name of your class, and then call Config().save() to flush this out to disk (I recommend that you be quite liberal about calling save()).
Wrap up</p>
<hr>
<p>Right now that's all there is to it. Please get in touch if you have any suggestions or questions - I'd love to ship more plugins with Terminator itself, and I can think of some great ideas. Probably the most useful thing would be something to help customise Terminator for heavy ssh users (see the earlier fork of Terminator called 'ssherminator')</p>Terminator 0.93 released!2010-04-15T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-04-15:/2010/04/15/terminator-093-released.html<p>Another week, another release focused on squashing as many bugs as possible. There's also one feature in this release - a patch from Kees Cook to add a preferences UI for the alternate_screen_scroll setting.
Please keep those bug reports coming, the response to the 0.9x series has been fantastic!</p>Terminator 0.92 released2010-04-07T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-04-07:/2010/04/07/terminator-092-released.html<p>Hot on the heels of 0.91 we have a new release for you. This is another bugfix release, stomping on as many regressions from 0.14 as we can find. Many, many thanks to all of the people who have been in touch with the project to tell us …</p><p>Hot on the heels of 0.91 we have a new release for you. This is another bugfix release, stomping on as many regressions from 0.14 as we can find. Many, many thanks to all of the people who have been in touch with the project to tell us about the things that are affecting them. If you find more regressions/bugs, please let us know!
Also in this release the Palette section of the Profile editor in the Preferences GUI is now fully active, which means that all of the config file options should now be fully editable in the GUI.</p>Terminator 0.90 released!2010-03-31T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-03-31:/2010/03/31/terminator-090-released.html<p>After lots of work we're really very proud to announce that the completely re-worked Terminator 0.90 is now available! Hopefully we haven't introduced too many new bugs in exchange for the much requested features of being able to save layouts!</p>Terminator 0.91 released2010-03-31T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-03-31:/2010/03/31/terminator-091-released.html<p>Unfortunately I overlooked some very annoying bugs during the 0.90 release process. This is a quick release to address them. apologies to those affected.</p>Heads up, new Terminator incoming2010-03-30T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-03-30:/2010/03/30/heads-up-new-terminator-incoming.html<p>Ok folks, I suck for not getting Terminator 0.90 released earlier and I suck for not having a bunch of bug fixes for 0.14 in Ubuntu Lucid.
I'm going to fix both tonight by releasing 0.90 and begging the lovely Ubuntu Universe folks to grant an exception …</p><p>Ok folks, I suck for not getting Terminator 0.90 released earlier and I suck for not having a bunch of bug fixes for 0.14 in Ubuntu Lucid.
I'm going to fix both tonight by releasing 0.90 and begging the lovely Ubuntu Universe folks to grant an exception to get it into Lucid.
Here's hoping everything goes smoothly!</p>An adventure with an HP printer/scanner and Ubuntu2010-03-15T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-03-15:/2010/03/15/adventure-with-hp-printerscanner-and.html<p>For a while now I've been thinking about some ideas for a project that will require a scanner. No problem you think, scanners of various kinds have been supported in Linux for a long time.
I dislike ordering hardware online because of the shipping lag and because I'm a sucker …</p><p>For a while now I've been thinking about some ideas for a project that will require a scanner. No problem you think, scanners of various kinds have been supported in Linux for a long time.
I dislike ordering hardware online because of the shipping lag and because I'm a sucker for the retail experience, so I was checking out which devices would work with Ubuntu and which devices were on sale in my local computer supermarket. The latter was a depressingly short list, and the former was getting annoying to search for, but I stumbled on the idea of a multi-function printer. It turns out that it's cheaper to buy a scanner as part of a printer than it is to buy a scanner on its own (granted the resolution of the scanner isn't quite as good, but it's more than sufficient for my needs). The reason for this is undoubtedly that the manufacturers are expecting to make up their money by selling me ink cartridges every few months.
As I started to look at models of multi-function printers, one thing became apparent almost immediately - HP have done a lot of leg work on this. I quickly found a bunch of info on their site about how they basically support all of their stuff on Linux, including a page which specifically listed popular distros and which versions worked with which printers.
I decided pretty much immediately that I wanted to support this, so off I went to the shop to buy an HP. They had the decent looking F4580 for around £40, so I nabbed that and set off home.
When I got home I fired up my laptop running Lucid and plugged the new device in. Less than 10 seconds later I was told it was ready for printing, and I fired up Robert Ancell's excellent new Simple Scan to see what configuration I would need to do to make that work.... the answer being none, it scanned a page first time.
Now smug with the ease with which that had worked I started installing the HP driver software on a popular proprietary operating system so I could use it to configure the printer's WiFi feature (something I assumed I couldn't do from within Ubuntu - an assumption that turns out to have been wrong). Ten minutes later it was still finishing off the install process, but eventually I did get the printer hooked up to our wireless network.
Back to the Lucid machine, I told it to add a new printer, it immediately saw the HP announcing itself on the network and let me quickly add that and I could print over wifi. Pretty nifty stuff.
Then I started poking around with HP's Linux Imaging and Printing software (HPLIP) and noticed that there was an "hp-toolbox" that wasn't installed. This is the tool I should have used to configure the wifi network on the printer; It also shows the ink levels and lets you kick off scanning/printing/cleaning type jobs. Out of sheer curiosity I went into hp-toolbox's preferences and changed it from using xsane to simple-scan, and told it to start a Scan. I wasn't expecting it to work because the device wasn't connected via USB, but it turns out that not only does the device support scanning over WiFi, it works in Linux. It's not quite as fast as a direct hookup, but it's certainly significantly more convenient!
So there we have it, out of the box I was up and running within 10 seconds of plugging the device in, and if I'd known to just install hp-toolbox I would have had everything running wirelessly a few minutes later. This being compared to installing CDs and dealing with great gobs of driver/application mess (I've seen HP's Windows drivers and it's no fun trying to persuade them to update themselves, or to persuade them not to prompt you to register every week). A huge, epic victory for Linux and Ubuntu - and one that I seem to find with much random consumer hardware these days. A few years ago this post would have been full of complicated commands and scripts and compilation as I described how to make the device work, but now all I can do is be smug about how easy it was :D
Win.</p>Terminator 0.90beta3 released2010-03-15T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-03-15:/2010/03/15/terminator-090beta3-released.html<p>We've been hard at work over the last 7 months preparing a whole new core for Terminator and it's getting close to being ready, so this is a beta release intended for testing only. Ubuntu packages have been uploaded to our test PPA (<a href="https://launchpad.net/~gnome-terminator/+archive/test">https://launchpad.net/~gnome-terminator/+archive/test</a>) and …</p><p>We've been hard at work over the last 7 months preparing a whole new core for Terminator and it's getting close to being ready, so this is a beta release intended for testing only. Ubuntu packages have been uploaded to our test PPA (<a href="https://launchpad.net/~gnome-terminator/+archive/test">https://launchpad.net/~gnome-terminator/+archive/test</a>) and a tarball is available from <a href="http://mairukipa.tenshu.net/~cmsj/terminator/">http://mairukipa.tenshu.net/~cmsj/terminator/</a>.
Please provide any feedback about this release to our bug tracker at <a href="https://bugs.launchpad.net/terminator/">https://bugs.launchpad.net/terminator/</a> or our IRC channel, #terminator on irc.freenode.net.</p>
<p>Caveats:</p>
<ul>
<li>config files from 0.14 and earlier are currently ignored by 0.90 because the config file format has changed.</li>
<li>we now have a very basic ability to save and restore layouts, but this feature is very new and likely to contain many bugs</li>
</ul>Final approach for Terminator epic-refactor2010-01-21T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-01-21:/2010/01/21/final-approach-for-terminator-epic.html<p>I'm done hacking on the Terminator epic-refactor branch for the evening and the following has been achieved today (in chronological order):</p>
<ul>
<li>Fix a bug in handling URLs dropped on the window</li>
<li>Implement directional navigation</li>
<li>Implement geometry hinting</li>
<li>Fix a bug in group emitting that caused "Broadcast off" and "Broadcast to …</li></ul><p>I'm done hacking on the Terminator epic-refactor branch for the evening and the following has been achieved today (in chronological order):</p>
<ul>
<li>Fix a bug in handling URLs dropped on the window</li>
<li>Implement directional navigation</li>
<li>Implement geometry hinting</li>
<li>Fix a bug in group emitting that caused "Broadcast off" and "Broadcast to all" to become inverted</li>
<li>Implement WM_URGENT bell handler</li>
</ul>
<p>I'm <em>really</em> happy with how this is going. All that is left for feature parity with trunk, I think, is some keyboard shortcut handlers.
I'd still love to get more testing results to make sure I haven't missed anything, but at this rate I'm expecting to be able to land the epic-refactor branch on trunk this weekend, after five and a half months.
Then I'm going to write a tool to convert old config files and we can think about putting out a 0.90 beta release. Exciting stuff!</p>This is your captain speaking, Terminator has now landed!2010-01-21T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-01-21:/2010/01/21/this-is-your-captain-speaking.html<p>I managed to finish off what I thought were the last few missing keyboard shortcuts during my lunch break today, but then realised that I'd missed two, but I was so excited and short of time that I decided to just go ahead and land the branch anyway!
So there …</p><p>I managed to finish off what I thought were the last few missing keyboard shortcuts during my lunch break today, but then realised that I'd missed two, but I was so excited and short of time that I decided to just go ahead and land the branch anyway!
So there it is - trunk is now completely refactored and full of exciting new bugs. I noticed while I was working from it this afternoon that the transparency setting code wasn't working, but I expect I can get that cleared up tonight :)
Now a bunch of bug fixing and a config converter and we can release!
Thanks to everyone who has been testing so far.</p>Terminator 0.90 progress2010-01-19T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-01-19:/2010/01/19/terminator-090-progress.html<p>Further to my previous post I thought I'd post a quick update about how things are progressing. I mentioned in my previous post that I knew of several things that were not yet working in the Epic Refactor branch:</p>
<ul>
<li>-e and -x command line options</li>
<li>all forms of drag & drop …</li></ul><p>Further to my previous post I thought I'd post a quick update about how things are progressing. I mentioned in my previous post that I knew of several things that were not yet working in the Epic Refactor branch:</p>
<ul>
<li>-e and -x command line options</li>
<li>all forms of drag & drop</li>
<li>directional navigation</li>
<li>some keyboard shortcuts</li>
</ul>
<p>I'm pleased to say that the first two of these are now taken care of, but the latter two are still to be done. I'm less pleased to say that I haven't had much external feedback about this branch yet, but I suspect that most people who might be interested probably don't read my blog ;)
So if you know people who like Terminator and enjoy testing things out, all they need to do is:
</p>
<div class="highlight"><pre><span></span><code>bzr branch lp:~cmsj/terminator/epic-refactor
cd epic-refactor
./terminator
</code></pre></div>
<p>and give some feedback!</p>Testing Terminator 0.902010-01-05T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2010-01-05:/2010/01/05/testing-terminator-090.html<p>You might have seen my recent posts about the epic refactoring that has been going on in the Terminator codebase for the last few months.
I think it's finally time that we get some more eyeballs on it, mainly so I can check that I haven't massively screwed something up …</p><p>You might have seen my recent posts about the epic refactoring that has been going on in the Terminator codebase for the last few months.
I think it's finally time that we get some more eyeballs on it, mainly so I can check that I haven't massively screwed something up. I know there is lots of missing functionality right now, and probably a bunch of subtle bugs, but I could use your help quantifying exactly what these are!
If you're inclined to help, please branch <em>lp:~cmsj/terminator/epic-refactor</em>, cd into it and run <em>./terminator</em>, then use it like you always would and file bugs, preferably indicating clearly in the bug that you're using this branch and not trunk (maybe tag the bug '<strong>epicrefactor</strong>').
Things I know are broken right now:</p>
<ul>
<li>-e and -x command line options</li>
<li>all forms of drag & drop</li>
<li>directional navigation</li>
<li>some keyboard shortcuts</li>
</ul>
<p>Things I know are missing because they're not coming back:</p>
<ul>
<li>Extreme tabs mode (sorry, it's just too insane to support)</li>
<li>GNOME Terminal profile reading (I'm trying to simplify our crazy config system and dropping GConf is a good way to achieve that)</li>
<li>Config file reading. At some point I'll write something that migrates old Terminator configs to the new format, but for now you'll have to live without your old config file. The new one isn't documented yet either, but it is a whole bunch better!</li>
</ul>
<p>Now would also be a great time to start writing plugins for Terminator and telling me about them. I'm happy to ship good plugins, but more importantly I want feedback about the weaknesses/strengths of our plugin system. Right now you can only hook into URL mangling and the terminal context menu, but the latter of those gives you pretty serious flexibility I think. Obviously one massive weakness is a lack of documentation about the plugin API, but I'll get to that, I promise!
So there we have it, another step along the way to me being able to merge this branch into trunk and put out a real release of 0.90 and then eventually 1.0!</p>Python wanderings, part two2009-12-31T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-12-31:/2009/12/31/python-wanderings-part-two.html<h2>2. Plugging it all in</h2>
<p>Sometimes we get feature requests and merge proposals for features that are clearly useful for someone, but not appropriate for the general use cases. It's always unfortunate to have to say no to these folks, but we have a slim menu UI and I'm wary …</p><h2>2. Plugging it all in</h2>
<p>Sometimes we get feature requests and merge proposals for features that are clearly useful for someone, but not appropriate for the general use cases. It's always unfortunate to have to say no to these folks, but we have a slim menu UI and I'm wary of cluttering it with niche features. Still, turning away legitimate users is something I don't like doing, so for a while we've been considering how to fix this.
The obvious answer is that we should support plugins, and I've been working on such a system for my epic refactoring. This is a quick wander through some thoughts I've had.
I started out by googling for python plugin systems; one of the top hits was <a href="http://lucumr.pocoo.org/2006/7/3/python-plugin-system">this page</a> by Armin Ronacher. In it he demonstrates a plugin system in under 40 lines of Python. It's simple and flexible, but there are some issues, such as making doctest very sad.
I asked about this in #python and was politely informed that I was Doing It Wrong. I chatted for a while with the helpful residents and came away with a list of plugin frameworks to look at, namely twisted.plugin and zope.interface.
Pulling in external dependencies is a big deal for us - many of our users are on Ubuntu or similar desktops with lots of python packages already installed, but some are not using GNOME or a Linux desktop at all, so we have to be sure that we need a library before we depend on it.
After playing a little with both of the options I came to the conclusion that while they are both really well made and capable, they are far more formal than we need, and the added dependency issues continued to concern me.
I revisited Armin's plugin system and removed the use of .__subclasses__() that was breaking doctest and offending #python; instead, each .py file declares a list which the plugin system extracts, treating any classes mentioned in that list as plugins. I also extended it to always instantiate the plugins and to look for the plugin files in both the system directories and the user's home directory.
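As a rough sketch of that approach (using today's importlib rather than the import machinery of the time, and with hypothetical names like PluginRegistry, available and capabilities, none of which are the actual Terminator API), the core idea looks something like this:

```python
# Hypothetical sketch of a list-based plugin loader: each plugin file
# declares an 'available' list naming its plugin classes, and the
# registry imports the files, instantiates those classes and keeps
# one instance of each.
import importlib.util
import os

class PluginRegistry:
    def __init__(self, plugin_dirs):
        self.plugin_dirs = plugin_dirs
        self.instances = {}

    def load_plugins(self):
        for directory in self.plugin_dirs:
            if not os.path.isdir(directory):
                continue
            for filename in sorted(os.listdir(directory)):
                if not filename.endswith('.py'):
                    continue
                path = os.path.join(directory, filename)
                spec = importlib.util.spec_from_file_location(filename[:-3], path)
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
                # Only classes named in the module's 'available' list are plugins
                for name in getattr(module, 'available', []):
                    if name not in self.instances:
                        self.instances[name] = getattr(module, name)()

    def get_plugins_by_capability(self, capability):
        # e.g. 'url_handler' or 'context_menu'
        return [plugin for plugin in self.instances.values()
                if capability in getattr(plugin, 'capabilities', [])]
```

Searching both the system directories and the user's home directory is then just a matter of which paths go into plugin_dirs.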
<a href="http://bazaar.launchpad.net/%7Ecmsj/%2Bjunk/terminator-epic-refactor/annotate/head%3A/terminatorlib/plugin.py">This plugin system</a> is currently hooked into two places in the branch, URL mangling and the context menu. This allows plugins to add support for new URL types (e.g. we just added support for Launchpad code URLs like lp:~cmsj/+junk/terminator-epic-refactor), and insert new options into the context menu. I'm not sure if we need to go further, but if you would like to hook into other parts let me know - it's pretty easy to arrange now :)</p>Python wanderings, part one2009-12-23T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-12-23:/2009/12/23/python-wanderings-part-one.html<p>As mentioned in my earlier post about refactoring Terminator, I want to talk about some of the things I've learned about Python and programming in the last few months. If I were you I wouldn't place any great significance in anything I'm about to say - after all I'm a rank …</p><p>As mentioned in my earlier post about refactoring Terminator, I want to talk about some of the things I've learned about Python and programming in the last few months. If I were you I wouldn't place any great significance in anything I'm about to say - after all I'm a rank amateur in the field of programming.
This is going to be a multi-part post so I at least get something out there, rather than leaving it to rot forever in my drafts folder.</p>
<h2>1. Solving global warming^W variables</h2>
<p>I have objects that represent terminal widgets, objects that represent widgets that contain terminals, objects that contain configuration, and one master object that functions as the brains of the operation.
Inevitably these objects need to know about each other, but how to achieve that? The brain object is simply called 'Terminator' and almost every other part of the system needs to know about it, same with the config object, and Terminator needs to know about all of the terminal objects, etc. The dependencies are all over the place and one aim of the re-factor was to separate all these parts out and decouple them, but ultimately I was never going to get away from different objects needing to know about each other.
So how to go about it? As far as I know the options are:</p>
<ul>
<li>
<p>pass around object references (every time you create something, pass it your references to all the bits it needs)</p>
<ul>
<li>Pros: no hacks or tricks involved</li>
<li>Cons: makes every __init__() more complicated, means passing references that an object doesn't need other than to pass to its children.</li>
</ul>
</li>
<li>
<p>use global variables</p>
<ul>
<li>Pros: they're global</li>
<li>Cons: everyone seems to hate global variables, perhaps because it's an implicit dependency not an explicit one, or because of potential namespace collisions, or maybe other reasons.</li>
</ul>
</li>
<li>
<p>use singletons</p>
<ul>
<li>Pros: explicit dependency</li>
<li>Cons: often seems to involve hackery to get the singleton object reference</li>
</ul>
</li>
</ul>
<p>In my searching around I came across a fourth option that somewhat relates to singletons... the Borg pattern.
This is a very simple idea - it's a class that always instantiates to the same thing. You don't need a factory or function or something that gives you a reference to the singleton, you just instantiate a class and it's the same as all of the others you've instantiated of the same class.
Best of all, the Borg pattern is incredibly simple in Python. Like, really simple. Don't believe me? Click <a href="http://code.activestate.com/recipes/66531/">here</a>. Yep, four lines of code. Technically it's probably a bit ugly, but the resulting code feels very clean.
So now I have the Borg pattern in use for the main class, a class that provides all the configuration, a class that discovers plugins and lets them be referenced, and a fairly new class I'm experimenting with that acts as a factory for all of my classes, as a way to break any possibility of circular module dependencies.
Reality has to bite though: the Borg isn't a panacea, and one has to be very careful about how one creates Borg objects. I chose to create a base class called Borg which Terminator, Config, Factory and PluginRegistry all derive from, but this turns out to have been a very short-sighted way to abstract out the common four lines. It wasn't until I started giving Config functions that allow it to be accessed as a dict that I realised all of my Terminator, Config, Factory and PluginRegistry instances shared one state, as opposed to each type being distinct. It's also terrifyingly important that the subclasses of Borg not use class attributes. Any attributes defined by these classes <em>must</em> be initialised to None so they are instance variables, and only <em>after</em> you've called Borg.__init__(self) in your own __init__() can you set up your attributes however you want, because they are then part of the shared state.
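For reference, the recipe really is that small. Here is a sketch of it, together with a hypothetical Config subclass (not the real Terminator class) showing the two rules just described: each subclass needs its own shared-state dict, and attributes get set up only after joining the shared state:

```python
# The Borg pattern: every instance shares one __dict__, so every
# instantiation yields an object with the same state.
class Borg:
    _shared_state = {}
    def __init__(self):
        self.__dict__ = self._shared_state

# Hypothetical subclass: it must supply its OWN shared-state dict,
# otherwise Config, Factory etc. all collapse into one shared state.
class Config(Borg):
    _shared_state = {}
    def __init__(self):
        self.__dict__ = self._shared_state
        # Set up attributes only after joining the shared state, and
        # only if an earlier instance hasn't already created them.
        if not hasattr(self, 'values'):
            self.values = {}
```

With this, setting Config().values['font'] in one place makes it visible through every other Config() instance, while plain Borg instances keep a distinct state of their own.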
On the whole I am happy with the Borg pattern. I've written test code to ensure that all of the assumptions I explicitly made are guaranteed, and all of the implicit assumptions I've discovered I made are also safe. Nonetheless, it's not a completely clean solution and I find myself wishing it was somehow a primitive of the language.</p>Random musical linkage2009-12-23T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-12-23:/2009/12/23/random-musical-linkage.html<p>I listen to a lot of electronic music and one of the nice things about that is the intermingling of other music by way of samples.
Case in point, I'm currently watching The Ballet Boyz's production of The Rite Of Spring with Rike, and I suddenly realised that the string …</p><p>I listen to a lot of electronic music and one of the nice things about that is the intermingling of other music by way of samples.
Case in point, I'm currently watching The Ballet Boyz's production of The Rite Of Spring with Rike, and I suddenly realised that the string motif playing at the entrance of The Elders is sampled and looped to great effect by the hypnotic Physical World on Freeland's album Now & Them.
This kind of thing makes me smile :)</p>Epic Terminator refactoring afoot2009-12-19T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-12-19:/2009/12/19/epic-terminator-refactoring-afoot.html<p>The current bzr repository for Terminator began its life in November 2006 with the simplest possible implementation of the concept of packing multiple terminals into one window. In the 3 years since then we have expanded and extended the code in a variety of directions to produce a moderately compelling …</p><p>The current bzr repository for Terminator began its life in November 2006 with the simplest possible implementation of the concept of packing multiple terminals into one window. In the 3 years since then we have expanded and extended the code in a variety of directions to produce a moderately compelling feature set, but one that is really obviously incomplete.
In the same time period we've also seen a really gratifying amount of adoption - I believe our active userbase numbers in the thousands if not tens of thousands. I am forced to largely estimate these numbers for all the usual FOSS reasons, but it's all based on one real metric - Ubuntu has about a million users reporting popcon data and over 10,000 of those have Terminator installed. I don't actually think that they all use it, but nonetheless it's the kind of number that makes you think "hey maybe I need to be doing more for these folks".
And I do think that, and I am trying to do more.
Back in August I took a serious look at where we are and came to the same old conclusions - we lack one or two headline features that people keep asking for (barely a week goes by when I don't get asked how someone can save a particular layout of terminals). These features are very subtle and deeply problematic with the existing code architecture - we've just been hacking in features as we can without any regard for architecture or future maintainability.
I decided that I'd had enough of being confused and frustrated by the status quo and so I started a side branch in my Launchpad /+junk/ folder called "epic-refactor" with the aim of refactoring all of Terminator from scratch. I'd read through every line of existing code and figure out what we were actually doing and how it could fit together more sensibly, then sketch that out in the barest form possible while I experimented with various Python techniques to arrive at an architecture that makes sense for our project, then port over the existing code feature-by-feature to the new architecture.
It's now a little over four months since I started the epic refactor and looking at where I stand today I am really happy. It's not ready to be merged into trunk yet, but the amount of work to get it there is less than the amount of work I've done on it so far. I don't want to put a timescale on it, but I hope to be calling for some wider testing in 2-3 months or less.
Once we are acceptably close to feature parity with the current releases I'll merge the epic-refactor branch over and we can start to push forward with implementing the features that everyone wants, and finally get to the point of being able to comfortably release a 1.0 version.
I'd always thought that I'd hand over maintainership after a 1.0 version, but the last 4 months have been a whirlwind of programming discovery, so I might very well just stick around and see what people want on the road to a 2.0 release. Alternatively, the work I've been doing in the last few days on a plugin system might mean that I can kick back and watch everyone else implement crazy, awesome and sublime features I'd never thought of!
I'll be back with more when I have written a configuration subsystem for epic-refactor, because by then I'll be wanting your help to test!
I'm also going to write a separate post shortly about some of the interesting Python paradigms and ideas I've hit upon along the way. I'm sure none of it will be a revelation to anyone with serious programming chops, but for a rank amateur like me it would have been useful to have read four months ago ;)</p>Terminator 0.14 released!2009-12-03T00:00:00+00:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-12-03:/2009/12/03/terminator-014-released.html<p>This has been another long gap release unfortunately, but we do have some goodies for you.
Stephen Boddy is back with some excellent re-working of the UI relating to grouping terminals, Kees Cook brings us some clever work relating to window geometry and myself and others have been fixing bugs …</p><p>This has been another long gap release unfortunately, but we do have some goodies for you.
Stephen Boddy is back with some excellent re-working of the UI relating to grouping terminals, Kees Cook brings us some clever work relating to window geometry and myself and others have been fixing bugs for you. We hope you enjoy this release, especially those of you who hated the "I'll be back" notifications from 0.13!</p>
<p>Home Page: <a href="http://www.tenshu.net/terminator">http://www.tenshu.net/terminator</a>
Launchpad Page: <a href="http://launchpad.net/terminator">http://launchpad.net/terminator</a>
Ubuntu PPAs: <a href="https://launchpad.net/~gnome-terminator/+archive/ppa">https://launchpad.net/~gnome-terminator/+archive/ppa</a></p>Rise of the Floating Fonters2009-10-20T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-10-20:/2009/10/20/rise-of-floating-fonters.html<p>For about two years now I've been using a 127dpi laptop screen as my primary computer display. It's a comfortable thing to be looking at, and after much playing around I've settled on 6.5pt as my ideal application font size.
No problems with that, right? Fontconfig says font sizes …</p><p>For about two years now I've been using a 127dpi laptop screen as my primary computer display. It's a comfortable thing to be looking at, and after much playing around I've settled on 6.5pt as my ideal application font size.
No problems with that, right? Fontconfig specifies that font sizes are doubles (high-precision floating point numbers), but not all libraries/applications follow this.
In my testing of Karmic I've found two such things that particularly stick out:</p>
<ul>
<li>
<p>notify-osd</p>
<ul>
<li>Assumes font sizes are whole numbers, so ends up using a tiny font</li>
</ul>
</li>
<li>
<p>Gwibber</p>
<ul>
<li>Assumes font sizes are integers and completely fails to run if they are not</li>
</ul>
</li>
</ul>
<p>Obviously this won't do, so I've checked that we have <a href="https://bugs.launchpad.net/ubuntu/+source/notify-osd/+bug/396736">filed</a> <a href="https://bugs.launchpad.net/gwibber/+bug/383759">bugs</a> (and in the case of Gwibber, a patch), but I seem to be meeting some resistance, or this just isn't considered to be a high priority.
Thus a new Launchpad team is born, <a href="https://launchpad.net/~floatingfonters">The Floating Fonters</a>, for exiles such as myself who won't kowtow to the integers. We even have <a href="https://launchpad.net/~floatingfonters/+archive/floatingfixes">a PPA with fixed versions</a> of notify-osd and Gwibber, but no guarantees are included!</p>Thinkpad USB keyboard2009-10-17T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-10-17:/2009/10/17/thinkpad-usb-keyboard.html<p>Yesterday I took delivery of a new Thinkpad USB keyboard because I've started putting my laptop up on a stand to be next to its external monitor.
In so doing I needed a keyboard and the ten quid Logitech was making me very sad, hence the Thinkpad one.
If you've …</p><p>Yesterday I took delivery of a new Thinkpad USB keyboard because I've started putting my laptop up on a stand to be next to its external monitor.
In so doing I needed a keyboard and the ten quid Logitech was making me very sad, hence the Thinkpad one.
If you've ever typed on a Thinkpad you immediately know why this keyboard is awesome :)</p>
<p><img alt="keyboard" src="http://www.tenshu.net/wp-content/uploads/2009/10/l_1600_1200_865448E8-0EFC-41CC-A0F3-82920E4CD51E.jpeg">
<img alt="keyboard" src="http://www.tenshu.net/wp-content/uploads/2009/10/l_1600_1200_F60F9354-27C3-4F49-A275-021CAC524592.jpeg"></p>I'm waving I'm waving2009-10-06T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-10-06:/2009/10/06/i-waving-i-waving.html<p>Do you wave? If so, apply some superposition to cmsjtenshu@googlewave.com</p>You know that Alan "popey" Pope?2009-09-29T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-29:/2009/09/29/you-know-that-alan-pope.html<p>I've always said he's a really amazing guy, you know.</p>Lifesaver 1.1 released!2009-09-25T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-25:/2009/09/25/lifesaver-11-released.html<p>This release incorporates some bug fixes and some new features: Animation, FriendFeed support, Configuration via gconf keys, Improved visual layout.</p>
<p>Visit <a href="http://launchpad.net/lifesaver">http://launchpad.net/lifesaver</a> for downloads and information about packages for Ubuntu 9.10 (Karmic)</p>Double posts2009-09-24T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-24:/2009/09/24/double-posts.html<p>I've just chucked in a new wordpress plugin to syndicate my Launchpad project announcements (so Lifesaver and Terminator) as regular posts here so I don't need to keep writing release announcements in two places. This does mean that all the old Terminator announcements are doubled up here. I'm not sure …</p><p>I've just chucked in a new wordpress plugin to syndicate my Launchpad project announcements (so Lifesaver and Terminator) as regular posts here so I don't need to keep writing release announcements in two places. This does mean that all the old Terminator announcements are doubled up here. I'm not sure if I'll leave it or go back and remove the dupes. Probably nobody cares anyway ;)</p>Thinkpad kernel module in Ubuntu 9.10 (Karmic)2009-09-18T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-18:/2009/09/18/thinkpad-kernel-module-in-ubuntu-910.html<p>The Ubuntu Kernel Team has decided to remove the tp_smapi module from our kernel for 9.10 (Karmic Koala) because the author chooses to remain anonymous and it is therefore impossible to be sure that the code is not based on incorrectly obtained information.
Slightly annoying perhaps, but ultimately a …</p><p>The Ubuntu Kernel Team has decided to remove the tp_smapi module from our kernel for 9.10 (Karmic Koala) because the author chooses to remain anonymous and it is therefore impossible to be sure that the code is not based on incorrectly obtained information.
Slightly annoying perhaps, but ultimately a decision that's hard to argue with and fortunately one that's pretty easy to work around. I wish the author would clear things up once and for all because the tp_smapi module is desperately important for Thinkpad owners wishing to protect the life of their laptop battery. Given that my new X301 is tasked with a lifetime of 3 years, I am particularly keen to protect its battery!
The source for the module is still in the archive (and my understanding is that it will stay there, we just don't want to ship it by default) as tp-smapi-source, packaged by Evgeni Golov (a thoroughly decent chap who is the current owner of the Thinkpad X300 I've posted about previously). You can install it with the command:</p>
<div class="highlight"><pre><span></span><code>sudo apt-get install tp-smapi-source
</code></pre></div>
<p>Then run:</p>
<div class="highlight"><pre><span></span><code>sudo module-assistant
</code></pre></div>
<p>and select the tp-smapi module to build and install. You are now just a quick:</p>
<div class="highlight"><pre><span></span><code>sudo modprobe tp_smapi
</code></pre></div>
<p>away from having battery charge control options in /sys/devices/platform/smapi/BAT0/
Woo! If I get a chance I'll try and produce a version of the package which uses DKMS (Dynamic Kernel Module Support, Dell's kernel module management system which makes sure that additional modules like this are rebuilt automagically whenever you get a kernel update).</p>Brewing Lifesaver 1.12009-09-16T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-16:/2009/09/16/brewing-lifesaver-11.html<p>If you're using the Lifesaver PPA I mentioned in my previous post you should shortly get offered a package of 1.1 which I'm almost ready to release.
I want to push it into the PPA first to get a little testing before officially tagging the release. Please file bugs …</p><p>If you're using the Lifesaver PPA I mentioned in my previous post you should shortly get offered a package of 1.1 which I'm almost ready to release.
I want to push it into the PPA first to get a little testing before officially tagging the release. Please file bugs if you spot any!
There are a few little bug fixes, the first external code contribution, and configuration is now done via gconf so you can change the fonts/colours and search keywords. I did also add a source for FriendFeed, but it's currently disabled because I don't want to extend the amount of time it takes to collect data from the web too much.</p>New project released - Lifesaver2009-09-15T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-15:/2009/09/15/new-project-released-lifesaver.html<p>For a few days now I've been hacking away on a new project: Lifesaver.
The idea is really simple - it's a screensaver for GNOME that displays recent posts about Ubuntu from Twitter and Identi.ca. That's it!
The code and bugs and downloads live <a href="https://launchpad.net/lifesaver">on Launchpad</a>, and packages for Ubuntu …</p><p>For a few days now I've been hacking away on a new project: Lifesaver.
The idea is really simple - it's a screensaver for GNOME that displays recent posts about Ubuntu from Twitter and Identi.ca. That's it!
The code and bugs and downloads live <a href="https://launchpad.net/lifesaver">on Launchpad</a>, and packages for Ubuntu 9.10 (Karmic) are in <a href="https://launchpad.net/~cmsj/+archive/lifesaver">my PPA</a>.
Please let me know what you think!
Obligatory screenshot:
<img alt="Lifesaver" src="/wp-content/uploads/2009/09/2009-09-14-lifesaver.png">
<strong>Update</strong>: I had to remove the Jaunty package from the PPA because it turns out that I've been using GooCanvas features that aren't available in the Jaunty version of GooCanvas. Sorry Jaunty users, you'll have to wait until next month when Karmic is released and you upgrade!</p>Lifesaver 1.0 released!2009-09-14T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-09-14:/2009/09/14/lifesaver-10-released.html<p>This is mainly a release to get some wider testing, so I expect to make a new release soon fixing any problems. Give it a try!</p>Terminator 0.13 released!2009-06-23T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-06-23:/2009/06/23/terminator-013-released.html<p>After a wait of really far too long, we're proud to present Terminator 0.13 with a bunch of bug fixes, some minor new features and various optimisations. Downloads in the usual place!</p>Terminator 0.13 released!2009-06-23T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-06-23:/2009/06/23/terminator-013-released_23.html<p>I'm very pleased to announce the release of Terminator 0.13!
You can find the download on <a href="http://launchpad.net/terminator/trunk/0.13/+download/terminator_0.13.tar.gz">Launchpad</a> or see more at the <a href="http://www.tenshu.net/terminator/">Homepage</a>. Enjoy!</p>I've got a, but I'm not a b2009-05-25T00:00:00+01:002018-09-18T16:55:01+01:00Chris Jonestag:cmsj.net,2009-05-25:/2009/05/25/i-got-but-i-not-b.html<p>Just to subvert the absurd nonsense that Stuart Langridge has been enjoying recently, I present:</p>
<div class="highlight"><pre><span></span><code><span class="n">cmsj</span><span class="nv">@tenshu</span><span class="err">:</span><span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">failthing</span><span class="err">$</span><span class="w"> </span><span class="n">wc</span><span class="w"> </span><span class="o">-</span><span class="n">l</span><span class="w"> </span><span class="p">..</span><span class="o">/</span><span class="n">randomlist</span><span class="p">.</span><span class="n">txt</span>
<span class="mi">411894</span><span class="w"> </span><span class="p">..</span><span class="o">/</span><span class="n">randomlist</span><span class="p">.</span><span class="n">txt</span>
</code></pre></div>
<p>My favourite so far would have to be:
<em>"I've got meg, but I'm not a megalopenis"</em></p>