Archive

Photo import workflow

Introduction

Since I'm writing about workflows today, I thought I'd also quickly chuck in a guide to how I get the photos and movies I've taken with my iPhone onto my laptop and, specifically, imported into Aperture.

The Mechanics

This requires a few moving parts to produce a final workflow. The high-level process is:

  1. Plug iPhone into a USB port
  2. Copy photos from the iPhone into a temporary directory, deleting them as they are successfully retrieved
  3. Import the photos into Aperture, ensuring they are copied into its library and deleted from the temporary directory

Simple, right? Well yes and no.

Retrieval from iPhone

This really ought to be easier than it is, but at least it is possible. Aperture can import photos from devices, but it doesn't seem to offer the ability to delete them from the device after import. That alone makes it not even worth bothering with if you don't want to build up a ton of old photos on your phone. OS X does ship with a tool that can import photos from camera devices and delete them afterwards - AutoImporter.app - but you won't find it without looking hard. It lives at: /System/Library/Image Capture/Support/Application/AutoImporter.app
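If you'd rather not dig through Finder to get there, you can launch it straight from Terminal with open(1):

open "/System/Library/Image Capture/Support/Application/AutoImporter.app"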

If you run that tool, you will see no window, just a dock icon and some menus. Go into its Preferences and you will be able to choose a directory to import to, and choose whether or not to delete the files after import.

Easy!

Importing into Aperture

This involves using Automator to build a Folder Action workflow for the directory that AutoImporter is pulling the photos into. All it does is check to see if AutoImporter is still running and if so wait, then launch Aperture and tell it to import everything from that directory into a particular Project, and then delete the source files.
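The waiting step is the only fiddly bit. In a Run Shell Script action it might look something like this (a sketch, assuming the importer's process name is AutoImporter):

# Wait until AutoImporter has finished pulling photos off the phone
while ps ax | grep -v grep | grep -q AutoImporter; do
    sleep 5
done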

That's it!

Really, that's all there is. Now whenever you plug in your iPhone, all of the pictures and movies you've taken recently will get imported into Aperture for you to process, archive, touch up, export or whatever else it is that you do with your photos and movies.


A sysadmin talks OpenSSH tips and tricks

My take on more advanced SSH usage

I've seen a few articles recently on sites like Hacker News which claimed to cover some advanced SSH techniques/tricks. They were good articles, but for me (as a systems administrator) they didn't get into the really powerful guts of OpenSSH.

So, I figured that I ought to pony up and write about some of the more advanced tricks that I have either used or seen others use. These will most likely be relevant to people who manage tens/hundreds of servers via SSH. Some of them are about actual configuration options for OpenSSH, others are recommendations for ways of working with OpenSSH.

Generate your ~/.ssh/config

This isn't strictly an OpenSSH trick, but it's worth noting. If you have other sources of knowledge about your systems, automation can do a lot of the legwork for you in creating an SSH config. A perfect example here would be if you have some kind of database which knows about all your servers - you can use that to produce a fragment of an SSH config, then download it to your workstation and concatenate it with various other fragments into a final config. If you mix this with distributed version control, your entire team can share a broadly identical SSH config, with allowance for each person to have a personal fragment for their own preferences and personal hosts. I can't recommend this sort of collaborative working enough.
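As a minimal sketch, assuming your fragments live in ~/.ssh/config.d/ and a hypothetical export-ssh-config tool that queries your server database:

# Rebuild ~/.ssh/config from the generated fragment plus any local ones
export-ssh-config > ~/.ssh/config.d/10-generated
cat ~/.ssh/config.d/* > ~/.ssh/config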

Generate your ~/.ssh/known_hosts

This follows on from the previous item. If you have some kind of database of servers, teach it the SSH host key of each one (usually the contents of something like /etc/ssh/ssh_host_rsa_key.pub) and you can then export a file with the keys and hostnames in the correct format to use as a known_hosts file, e.g.:

server1.company.com,10.0.0.101 ssh-rsa BLAHBLAHCRYPTOMUMBO
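The export step itself can be trivial. A sketch, assuming your database can dump CSV lines of the form hostname,ip,key:

# Turn "hostname,ip,key" lines into known_hosts format
awk -F, '{ print $1 "," $2 " " $3 }' servers.csv > ~/.ssh/generated_known_hosts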

You can then associate this with all the relevant hosts by including something like this in your ~/.ssh/config:

Host *.mycompany.com
    UserKnownHostsFile ~/.ssh/generated_known_hosts
    StrictHostKeyChecking yes

This brings some serious advantages:

  • Safer - because you have pre-loaded all of the host keys and specified strict host key checking, SSH will prompt you if you connect to a machine and something has changed.
  • Discoverable - if you have tab completion, your shell will let you explore your infrastructure just by prodding the Tab key.

Keep your private keys private

This seems like it ought to be more obvious than it perhaps is... the private halves of your SSH keys are very privileged things. You should treat them with a great deal of respect. Don't put them on multiple machines (SSH keys are cheap to generate and revoke) and don't back them up.
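Since keys really are cheap, there's no excuse not to mint a dedicated key per workstation (the filename here is just an example):

# Generate a new key for this machine only
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_$(hostname -s)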

Know your limits

If you're going to write a config snippet that applies to a lot of hosts you can't match with a wildcard, you may end up with a very long Host line in your SSH config. It's worth remembering that there is a limit to the length of lines: 1024 characters. If you need to exceed that, you will just have to use multiple Host stanzas with the same options.

Set sane global defaults

HashKnownHosts no
Host *
    GSSAPIAuthentication no
    ForwardAgent no

These are very sane global defaults:

  • Known hosts hashing is good for keeping your hostnames secret from people who obtain your known_hosts file, but is also really very inconvenient as you are also unable to get any useful information out of the file yourself (such as tab completion). If you're still feeling paranoid you might consider tightening the permissions on your known_hosts file as it may be readable by other users on your workstation.
  • GSSAPI is very unlikely to be something you need; if it's enabled it's just slowing things down.
  • Agent forwarding can be tremendously dangerous and should, I think, be actively and passionately discouraged. It ought to be a nice feature, but it requires that you trust remote hosts unequivocally, as if they had your private keys, because functionally speaking they do. They don't actually hold the private key material, but any sufficiently privileged process on the remote server can connect back to the SSH agent running on your workstation and ask it to respond to challenges from an SSH server. If you keep your keys unlocked in an SSH agent, this gives any privileged attacker on a server you are logged into trivial access to any other machine your keys can SSH into. If you somehow depend on using agent forwarding with Internet-facing servers, please reconsider your security model (unless you are able to robustly and accurately argue why your usage is safe, but if that is the case then you don't need to be reading a post like this!)

Notify useful metadata

If you're using a Linux or OS X desktop, you have something like notify-send(1) or Growl for desktop notifications. You can hook this into your SSH config to display useful metadata to yourself. The easiest way to do this is via the LocalCommand option:

Host *
    PermitLocalCommand yes
    LocalCommand /home/user/bin/ssh-notify.sh %h

This will call the ssh-notify.sh script every time you SSH to a host, passing the hostname you gave as an argument. In the script you probably want to ensure you're actually in an interactive terminal and not some kind of backgrounded batch session - this can be done trivially by checking that tty -s returns zero. The script then just needs to fetch some metadata about the server you're connecting to (e.g. its physical location, the services that run on it, its hardware specs, etc.) and format it into a command that will display a notification.
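A minimal sketch of such a script, assuming notify-send(1) and a hypothetical server-info command that queries your database of systems knowledge:

#!/bin/sh
# ssh-notify.sh - show metadata about the host we are connecting to
tty -s || exit 0    # only notify for interactive sessions
host="$1"
info=$(server-info "$host" 2>/dev/null)
notify-send "SSH: $host" "${info:-no metadata available}"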

Sidestep overzealous key agents

If you have a lot of SSH keys in your ssh-agent (e.g. more than about 5) you may have noticed that SSHing to machines which want a password, or with which you wish to use a specific key that isn't in your agent, can be quite tricky. The reason for this is that OpenSSH currently seems to talk to the agent in preference to obeying command line options (i.e. -i) or config file directives (i.e. IdentityFile or PreferredAuthentications). You can force the behaviour you are asking for with the IdentitiesOnly option, e.g.:

Host server1.company.com
    IdentityFile /some/rarely/used/ssh.key
    IdentitiesOnly yes

(on a command line you would add this with -o IdentitiesOnly=yes)

Match hosts with wildcards

Sometimes you need to talk to a lot of almost identically-named servers. Obviously SSH has a way to make this easier or I wouldn't be mentioning this. For example, if you needed to ssh to a cluster of remote management devices:

Host *.company.com management-rack-??.company.com
    User root
    PreferredAuthentications password

This will match anything ending in .company.com and also anything that starts with management-rack- and then has two characters, followed by .company.com.

Per-host SSH keys

You may have some machines where you have a different key for each machine. By naming them after the fully qualified domain names of the hosts they relate to, you can skip over a more tedious SSH config with something like the following:

Host server-??.company.com
    IdentityFile /some/path/id_rsa-%h

(the %h will be substituted with the FQDN you're SSHing to. The ssh_config man page lists a few other available substitutions.)
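Generating those per-host keys is easy to script. A sketch, assuming a list of FQDNs in hosts.txt:

# Create and install a dedicated keypair for each host
while read h; do
    ssh-keygen -t rsa -f /some/path/id_rsa-$h -C "key for $h"
    ssh-copy-id -i /some/path/id_rsa-$h.pub "$h"
done < hosts.txt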

Use fake, per-network port forwarding hosts

If you have network management devices which require web access, and you normally forward ports to them with the -L option, consider constructing a fake host in your SSH config which establishes all of the port forwards you need for that network/datacentre/etc:

Host port-forwards-site1.company.com
    Hostname server1.company.com
    LocalForward 1234 10.0.0.101:1234

This also means that your forwards will be on the same port each time, which makes saving certificates in your browser a reasonable undertaking. All you need to do is ssh port-forwards-site1.company.com (using nifty Tab completion of course!) and you're done. If you don't want it tying up a terminal you can add the options -f and -N to your command line, which will establish the ssh connection in the background.

If you're using programs which support SOCKS (e.g. Firefox and many other desktop Linux apps) you can use the DynamicForward option to send traffic over the SSH connection without having to add LocalForward entries for each port you care about. Used with a browser extension such as FoxyProxy (which lets you configure multiple proxies based on wildcard/regexp URL matches), this makes for a very flexible setup.
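As a sketch, extending the fake host above (the local SOCKS port 1080 is an arbitrary choice):

Host port-forwards-site1.company.com
    Hostname server1.company.com
    DynamicForward 1080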

Use an SSH jump host

Rather than have tens/dozens/hundreds/etc of servers holding their SSH port open to the Internet and being battered with brute force password cracking attempts, you might consider having a single host listening (or a single host per network perhaps), which you can proxy your SSH connections through.

If you do consider something like this, you must resist the temptation to place private keys on the jump host - to do so would utterly defeat the point.

Instead, you can use an old, but very nifty trick that completely hides the jump host from your day-to-day usage:

Host jumphost.company.com
    ProxyCommand none
Host *.company.com
    ProxyCommand ssh jumphost.company.com nc -q0 %h %p

You might wonder what on earth that is doing, but it's really quite simple. The first Host stanza just means we won't use any special commands to connect to the jump host itself. The second Host stanza says that in order to connect to anything ending in .company.com (but excluding jumphost.company.com because it just matched the previous stanza) we will first SSH to the jump host and then use nc(1) (i.e. netcat) to connect to the relevant port (%p) on the host we originally asked for (%h). Your local SSH client now has a session open to the jump host which is acting like it's a socket to the SSH port on the host you wanted to talk to, so it just uses that connection to establish an SSH session with the machine you wanted. Simple!

For those of you lucky enough to have OpenSSH 5.4 or newer on your workstation, you can replace the jump host ProxyCommand with the client's built-in -W option, which also removes the need for nc(1) on the jump host:

ProxyCommand ssh -W %h:%p jumphost.company.com

Re-use existing SSH connections

Some people swear by this trick, but because I'm very close to my servers and have a decent CPU, the setup time for connections doesn't bother me. Folks who are many milliseconds from their servers, or who don't have unquenchable techno-lust for new workstations, may appreciate saving some time when establishing SSH connections.

The idea is that OpenSSH can place connections into the background automatically, and re-use those existing secure channels when you ask for a new ssh(1), scp(1) or sftp(1) connection to a host you have already spoken to. The configuration I would recommend for this would be:

Host *
    ControlMaster auto
    ControlPath ~/.ssh/control/%h-%r-%p
    ControlPersist 600

This will do several things:

  • ControlMaster auto will cause OpenSSH to establish the "master" connection sockets as needed, falling back to normal connections if something is wrong.
  • The ControlPath option specifies where the connection sockets will live. Here we are placing them in a directory and giving them filenames that consist of the hostname, login username and port, which ought to be sufficient to uniquely identify each connection. If you need to get more specific, you can place this section near the end of your config and have explicit ControlPath entries in earlier Host stanzas.
  • ControlPersist 600 causes the master connections to die if they are idle for 10 minutes. The default is that they live on as long as your network is connected - if you have hundreds of servers this will add up to an awful lot of ssh(1) processes running on your workstation! Depending on your needs, 10 minutes may not be long enough.

Note: You should make the ~/.ssh/control directory ahead of time and ensure that only your user can access it.
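For example:

mkdir -p ~/.ssh/control
chmod 700 ~/.ssh/control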

Cope with old/buggy SSH devices

Perhaps you have a bunch of management devices in your infrastructure and some of them are a few years old already. Should you find yourself trying to SSH to them, you might find that your connections don't work very well. Perhaps your SSH client is too new and is offering algorithms their creaky old SSH servers can't abide. You can strip down the long default list of algorithms to just the ones a particular device supports, e.g.:

Host power-device-1.company.com
    HostKeyAlgorithms ssh-rsa,ssh-dss

That's all folks

Those are the most useful tips and tricks I have for now. Hopefully someone will read this and think "hah! I can do much more advanced stuff than that!" and one-up me :)

Do feel free to comment if you do have something sneaky to add, I'll gladly steal your ideas!


Evil shell genius

Jono Lange was committing acts of great evil in Bash earlier today. I gave him a few pointers and we agreed that it was sufficiently evil that it deserved a blog post. So, if you find yourself wishing you could get pretty desktop notifications when long-running shell commands complete, see his post here for the details.


HP Microserver Remote Access helper

I've only had the Remote Access card installed in my HP Microserver for a few hours and already I am bored of accessing it by first logging into the web UI, then navigating to the right bit of the UI, then clicking a button to download a .jnlp file and then running that with javaws(1). Instead, I have written some Python that will log in for you, fetch the file and execute javaws. Much better! You can find the code here, and you'll want to have python-httplib2 installed.
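If you just want the gist of what the script does, it boils down to something like this (a hypothetical shell sketch - the URLs, form fields and credentials are placeholders, not the card's real endpoints):

# Log in, fetch the vKVM .jnlp file and launch it
curl -s -c /tmp/rac-cookies -d "user=admin&password=secret" http://rac-address/login.cgi
curl -s -b /tmp/rac-cookies -o /tmp/vkvm.jnlp http://rac-address/vkvm.jnlp
javaws /tmp/vkvm.jnlp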


HP Microserver Remote Access Card

I've been using an HP ProLiant Microserver (N36L) as my fileserver at home for about a year and it's been a really reliable little workhorse. Today I gave it a bit of a spruce up with 8GB of RAM and the Remote Access Card option. Since it came with virtually no documentation, and since I can't find any reference online to anyone else having had the same issue I had, I'm writing this post so Google can help future travellers.

When you are installing the card, check in the BIOS's PCI Express options that it is set to automatically choose the right graphics card to use. I had hard-coded it to use the onboard VGA controller. This matters because the RAC card is actually a graphics card, so the BIOS needs to be able to activate it as the primary card. If you don't change this setting, the RAC will appear to work normally, but its vKVM remote video feature will only ever show you a green screen window with the words "OUT OF RANGE" in yellow letters.

Annoyingly, I thought this was my 1920x1080 monitor confusing things, so it took me longer to fix than it should have, but there we go.


What is the value of negative feedback on the Internet?

I'm sure we've all been there - you buy something on eBay or from a third party on Amazon, and what you get is either rubbish or not what you asked for. The correct thing to do is to talk to the seller first to try and resolve your problem, and then when everything is said and done, leave feedback rating the overall experience.

Several times in the last year I have gone through this process and ended up feeling the need to leave negative feedback. The most obvious case was some Bluetooth headphones I'd bought from an eBay seller in China that were so obviously fake that it was hilarious he was even trying to convince me I was doing something wrong. In each of these cases, I have been contacted shortly after leaving the negative feedback to ask if I will remove it in return for a full/partial refund.

This has tickled the curious side of my brain into wanting to know what the value of negative feedback is. The obvious way to find out would be to buy items at various different prices, leave negative feedback and see how far the sellers are prepared to go to preserve their reputations. The obvious problem is that this would be an unethical and unfair way to do science. Perhaps it would be possible to crowd-source anecdotes until they count as data?


Dear Apple

I just woke up here in London and saw the news about Steve Jobs. It's early and, as usual for this time of day, my seven-month-old son is playing next to me. He has no concept of what my iPhone is, but it holds his fascination like none of his brightly coloured toys do. Only the iPad can cause him to abandon his toys and crawl faster. I'd like to thank you all, including Steve, for your work. You have brought technology to ordinary people in a way that delights them without them having to know why. Please keep doing that for a very long time.


Terminator 0.96 released

I've just pushed up the release tarball and PPA uploads for Terminator 0.96. It's mainly a bug fix release, but it does include a few new features. Many thanks to the various community folks who have contributed fixes, patches, bugs, translations and branches to this release. The changelog is below:

terminator 0.96:

* Unity support for opening new windows (Lucian Adrian Grijincu)
* Fix searching with infinite scrollback (Julien Thewys #755077)
* Fix searching on Ubuntu 10.10 and 11.04, and implement searching by regular expression (Roberto Aguilar #709018)
* Optimise various low level components so they are dramatically faster (Stephen Boddy)
* Fix various bugs (Stephen Boddy)
* Fix cursor colours (#700969) and a cursor blink issue (Tony Baker)
* Improve and extend drag&drop support to include more sources of text, e.g. Gtk file chooser path buttons (#643425)
* Add a plugin to watch a terminal for inactivity (i.e. silence)
* Fix loading layouts with more than two tabs (#646826)
* Fix order of tabs created from saved layouts (#615930)
* Add configuration to remove terminal dimensions from titlebars (patch from João Pinto #691213)
* Restore split positions more accurately (patch from Glenn Moss #797953)
* Fix activity notification in active terminals (patch from Chris Newton #748681)
* Stop leaking child processes if terminals are closed using the context menu (#308025)
* Don't forget tab order and custom labels when closing terminals in them (#711356)
* Each terminal is assigned a unique identifier and this is exposed to the processes inside the terminal via the environment variable TERMINATOR_UUID
* Expand dbus support to start covering useful methods. Also add a commandline tool called 'remotinator' that can be used to control Terminator from a terminal running inside it.
* Fix terminal font settings for users of older Linux distributions


Migrations

To the cloud! I'm officially done hosting my own Wordpress blog. Not because it's particularly hard, but because it's quite boring. I would have done a straight export/import into a wordpress.com blog, but their options for hosting on a personal domain are pretty insane - if you want to host your blog on domain.com or www.domain.com you have to point the entire domain at the wordpress.com DNS servers. I'm not prepared to trust my domain to a bunch of PHP bloggers, so instead I've shoved the blog over to Blogger (by way of a very helpful online conversion tool).

This still presents a few niggles around URLs. You can have Blogger send 404s to another vhost, so for now I just have a tiny little vhost somewhere else which uses mod_rewrite to catch the old page names and attempt to catch the blog post names. Ideally I'd fetch all the old post URLs and make a proper map to the new ones, but I can't really be bothered to do that, so I just went for the approximate:

RewriteRule ^/archives/([0-9]{4})/([0-9]{2})/([0-9]{2})/([a-zA-Z0-9\-]{1,39}).*$ http://www.tenshu.net/$1/$2/$4.html [R=301,L]

Another obvious sticking point is that Wordpress categories become Blogger labels, so another rewrite rule can take care of them (although not so much if you've used nested categories, but again I can't really be bothered to account for that):

RewriteRule ^/archives/category/(.)(.*) http://www.tenshu.net/search/label/${upmap:$1}$2 [R=301,L]
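The upmap used there is a RewriteMap which upper-cases the first character, since Blogger labels are capitalised. Presumably you'd define it in the vhost config with something like this (a sketch using Apache's internal toupper function):

RewriteMap upmap int:toupper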

Also cloudified so far is the DNS for tenshu.net - I'm trying out Amazon's Route53 and it seems to be pretty good so far. Next up will be email and then I can pretty much entirely stop faffing around running my own infrastructure :)


Monitoring an Apple Airport Express/Extreme with Munin

So you have an Apple Airport (Express or Extreme), or a Time Capsule, and you want to monitor things like the signal levels of the connected clients? I thought so! That's why I wrote this post, because I'm thoughtful like that.

While it's not necessary, I'd like to mention that this was made possible by virtue of Apple having put out an SNMP MIB file. Without that, finding the relevant OIDs would have been sufficiently boring that I wouldn't have bothered with this, so yay for that (even if the MIB is suspiciously ancient). So if you don't need the MIB file, what do you need?

Having all of those things, how do you use it? Simple!

  • Place the munin plugin somewhere (it doesn't really matter where, but the munin package probably put the other plugins in /usr/share/munin/plugins/)
  • Make sure you have a hostname or IP address for your Airport(s). If you have more than one you should either make sure they have static IPs configured, or that the one doing DHCP has static leases configured for all the other Airports.
  • Create a symlink for each of the types of graph for each of your Airports. Assuming that your Munin machine can resolve your Airport as 'myairport' you'd want to make the following symlinks:

cd /etc/munin/plugins/
ln -s /path/to/snmp__airport snmp_myairport_airport_clients
ln -s /path/to/snmp__airport snmp_myairport_airport_signal
ln -s /path/to/snmp__airport snmp_myairport_airport_noise
ln -s /path/to/snmp__airport snmp_myairport_airport_rate

There is an explicit assumption that your SNMP community is the default of 'public'. If it's not then you'll need to hack the script. Otherwise, you're done! Now you win pretty graphs showing lots of juicy information about your Airport. Yay! You're welcome ;)
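One more assumption worth spelling out: the Munin master needs to be told that the Airport exists as a node. A sketch of the usual virtual-node entry in /etc/munin/munin.conf, assuming the plugins run on the Munin host itself:

[myairport]
    address 127.0.0.1
    use_node_name no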