ORG-mode and Fossil

I'm always moaning about how I have too many ideas and not enough time, so it's quite important to me to manage my time efficiently.

My biggest concern, with many projects on the go at once (and I don't just mean fun personal projects; I'm including things my family depends upon me for as well), is that I'll forget something important I'm supposed to do. I'm also concerned that I'll forget a fun personal project, so that when I do get a moment, I can't think of anything to do - or that I spend my time on something that gives me a poorer reward for the available resources than I could have had.

Therefore, I've always been a big fan of "To Do" lists in one form or another. I've tried a few apps to manage TODOs for me, from the excellent personal information management facilities of the Palm Pilot to Things on the Mac, but I've tended to find such things restrictive. For a long time I had a complex OmniOutliner setup that also computed my timesheets with AppleScript, which suited me well; indeed, I've still yet to completely migrate all of the content out of that file (tricky now I no longer have a Mac, but I've looked at the underlying XML file and it seems reasonably parseable), and I think it still contains some notes about ARGON that I've not written up anywhere else!

However, I've had the most success with text files, adding hierarchic structure with headings, so it was fairly natural for me to eventually give Org Mode a try. For those not in the know, this is an Emacs package designed to help you organise things with hierarchically-structured plain text files. You write heading lines prefixed with an asterisk, indicating the level of nesting by adding more asterisks, and Org helps by syntax-highlighting the header lines, hiding entire subtrees so you can see the large-scale structure, providing editing operations to cut and paste entire subtrees (properly adjusting the levels to match where you paste the subtree, too), and so on.

But that's just the start. That's what it inherits from the Outline Mode it's based on.

What Org Mode adds on top of that is really hard to list. You can add workflow tags (TODO -> INPROGRESS -> STALLED -> DONE, for instance; you get to define your own little state machine), along with optional priorities, to mark some headings as tasks requiring attention (and obtain a report in the "agenda view" of all headings in certain states, ordered by priority, for instance). You can attach tags to headlines (and use them as filters in the agenda). You can attach arbitrary key-value metadata lists to headings (which are folded down into a single line, and opened up on request, so they don't clutter it), and use those to annotate things with deadlines, or scheduled dates, and have a calendar view in your agenda. Or use the key-value properties to filter the agenda view. Or have Org Mode automatically record a log book of state transitions of a task in the metadata. Or take metadata keys out and display them as extra columns in the hierarchy of headings, in a manner reminiscent of OmniOutliner. You can embed links to other files that can be opened in Emacs; if it's an org-mode file you can link to a heading by a unique ID, or you can link to any old text file by line number or by searching for nearby text. There's a feature you can use, while editing any file, to create an Org Mode heading containing text you are prompted for and a timestamped link to the place you were at in the file you were editing, inserted under a specified heading of a specified org-mode file, so you can trivially create tasks referencing the file you're working on. Or you can embed executable elisp code to perform arbitrarily complex operations.
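
For the curious, a fragment of such a file might look something like this (a minimal invented sketch, showing workflow states, a priority, tags, a properties drawer, and a deadline):

```org
* Projects
** TODO [#A] Write up the remaining ARGON notes            :ARGON:
   DEADLINE: <2011-07-01 Fri>
** INPROGRESS Migrate content out of the old OmniOutliner file
   :PROPERTIES:
   :SOURCE:   old-home-dir/timesheets.oo3
   :END:
** DONE Tidy up the home directory
* Household
** STALLED Sort out the loft insulation
```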

I'd been using Org Mode for a while, but I wasn't really using it properly; I had a whole bunch of .org files for different areas of my life, but it was sometimes difficult to fit things into the taxonomy. However, lately, I've had a big tidy-up of my home directory.

I've migrated old projects from Subversion or Git into Fossil, for a start, so now all of my projects - open-source ones at Kitten Technologies, and personal ones - are in their own Fossil repos, which means they have their own ticket trackers for their individual tasks. But each and every one of them has a heading in my new single tasks.org file, which is a unified repository of things I should, or would like to, think about. Fossil projects have a single heading, tagged with "FOSSIL", that lists the place in my home directory where I have the repo working copy, and the URL of the master repository on my server; this exists to prevent me from forgetting about a project.

I've migrated our long-standing home wiki (mainly a repository of recipes and other such domestic stuff) into the inbuilt wiki of the Fossil repo I already use to store documentation about the house, such as network configuration, a PDF of the plans from the Land Registry, and stuff like that; and the ticket tracker in that repo is now the domestic TODO list. Running the Fossil web interface for that repo off of the home fileserver means that Sarah and I can share the wiki and task list. And I've configured the Fossil user roles so that anonymous users can't see anything too sensitive.

So in general I've moved as much as I can to Fossil repositories, combining versioned file storage and ticket tracking with a wiki as appropriate; and my tasks.org exists to act as an index to all of them, and to store actual task list items for things that don't naturally map to a fossil repo, although I may find ways to deal with those as well (for instance, I have a fossil repo I use to store my org files, encrypted password database, household budget, address book, and the like, that I'm not using the ticket tracker on; that could be used as a place to put my general administrative tasks as tickets).

However, although putting tickets in the repositories that store individual projects is conceptually neat, and allows for third parties to interact with my task list for open-source projects on Kitten Technologies, it does mean that I have a lot of separate task lists. tasks.org means I won't forget about any of them, but I still have no simple way of knowing what the most urgent or interesting task out of all of my twenty-five repositories is. That's not a great problem in itself, but the next logical step will be to use the automation facilities of Fossil to pull out the tickets from all of my repos and to add them into tasks.org as tasks beneath the corresponding Fossil project heading (including the ticket URL so I can go and edit them easily), so I can see them all consolidated on the agenda view...
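
Since a Fossil repository is itself a SQLite database, a first cut at that consolidation step might look like the sketch below. This is a speculative illustration, not working tooling: it assumes the default Fossil ticket schema (a `ticket` table with `tkt_uuid`, `status`, and `title` columns) and the usual `/tktview/` URL shape, and all paths and URLs are made up.

```python
import sqlite3

def tickets_to_org(rows, base_url):
    """Render (uuid, status, title) ticket rows as org-mode subheadings,
    each with a link back to the ticket in the Fossil web interface."""
    lines = []
    for uuid, status, title in rows:
        # Map Fossil's ticket status onto a simple org workflow state.
        state = "DONE" if status in ("Closed", "Fixed") else "TODO"
        lines.append(f"** {state} {title}")
        lines.append(f"   [[{base_url}/tktview/{uuid[:10]}][ticket {uuid[:10]}]]")
    return "\n".join(lines)

def fetch_tickets(repo_path):
    """Read tickets straight out of a Fossil repository file.
    Assumes the default ticket schema; custom schemas may differ."""
    con = sqlite3.connect(repo_path)
    try:
        return con.execute(
            "SELECT tkt_uuid, status, title FROM ticket").fetchall()
    finally:
        con.close()

if __name__ == "__main__":
    # Hypothetical repo path and URL, for illustration only.
    rows = fetch_tickets("/home/me/fossils/ugarit.fossil")
    print(tickets_to_org(rows, "https://example.org/ugarit"))
```

The output could then be spliced into tasks.org beneath the matching "FOSSIL" heading by a cron job.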

Part of this process which has been interesting, though, is digging out various old TODO lists (such as the aforementioned OmniOutliner file) and project directories scattered over archives of old home directories and consolidating them. I've found various projects I'd forgotten about, and neatly filed them as current projects or into my archive tree as old projects (and, oh, how I look forward to being able to put things like that as archives into Ugarit, automatically cross-referenced by their metadata...). Having brought everything together and assembled an index reduces the horrible, lingering feeling of having lost or forgotten something...

AVR microcontrollers and Arduinos

I'm a fan of the Atmel AVR microcontroller. The main competitors in its area are the older 8051 and PIC architectures, which have less pleasant instruction sets and are generally harder to program.

Ease of programming is key. Most AVRs can be programmed via an SPI link, which is just four digital I/O pins following a widespread standard that most microcontrollers can drive, and there are widespread interfaces to drive an SPI bus from a PC. It's almost as good as the LPC2000 series 32-bit microcontrollers' asynchronous serial programming interface, which can be driven from an RS-232 port with a little bit of level shifting. I'm also a fan of the LPC2000s, but they fit into a higher-powered niche than the AVRs!

A long time ago I did some AVR development professionally, with a programming board driven from a PC parallel port by some Windows software. I still have the board, and a Windows PC with a parallel port and the software installed sitting under a desk, but the "activation energy" of getting the PC powered up and plugged into a keyboard and monitor, and digging out the board, and having to deal with Windows-based development software and all that has stopped me from doing anything with AVRs for a while, given my shortage of time.

However, Sarah has tasked me with developing some electronics for her, as part of a project she's working on. And it looked like the easiest way of doing what's required would be to drop an AVR in.

But rather than dig out the Windows-based dev environment, I've just picked up a USBtiny ISP kit for less money than my original AVR dev system cost. It runs off of a USB port, and supports an entirely open-source AVR toolchain that I can run on my laptop. Inside, it's just an AVR itself, with a USB interface on one end and an SPI interface on the other; everything that I need in one neat little package.

As a plus, it has a cable coming out that I can plug into a header on the board the AVR is part of; my old dev board needed me to pull the chip out of its circuit and put it into the board to program it. Pah!

But while I was there, I also picked up an Arduino Uno. This is a little gadget that has been taking the hobbyist electronics world by storm lately; it's basically an AVR on a board with an inbuilt USB programming interface and a bunch of female headers to make it easy to wire up to various things, and some software to let you program it in C easily with a useful library. There's a wide range of boards that plug directly into the headers to do all sorts of fun stuff, too.

Now, I'm a bit disdainful of the Arduino; given the ability to program bare AVRs directly and to assemble my own circuits on protoboard, I can easily do all sorts of stuff that Arduinos can't, at a fraction of the cost.

However, they're great for beginners, as they are plug-and-play devices; you can get started without touching a soldering iron or having to work out which pin is which. My disdain is purely personal; I think they're a great thing for the community as a whole 🙂

So why am I getting one, I hear you ask? Well, I have a wife who wants to be able to control LEDs and a six-year-old daughter who is passionate about building a robot, so I'll be glad to have an easy-to-use module I can just hand them rather than needing to build AVR boards for them all the time; but mainly, I plan to use it as a Bus Pirate clone by putting a FORTH on it, along with some words to do things like I2C and SPI...

Time

Tired of lying in the sunshine
Staying home to watch the rain
You are young and life is long
And there is time to kill today
And then one day you find
Ten years have got behind you
No one told you when to run
You missed the starting gun
Pink Floyd - Time

I've always felt rather cursed with the fact that I have an addiction to designing things. It's bad enough knowing that I can easily design, in half an hour, something that will take a week to actually make - and that I can do that designing while walking or driving or in the shower or lying in bed, while I can only actually do any making when free of distractions... I try to make the best of it, writing the best of my ideas up on this blog when I get time, in the hope that some of them will inspire others in some way, as I can't bear the thought of them all being lost. I believe that ideas are cheap, especially for me, so there's no point in hoarding them - I can always come up with more!

However, the past few years have been worse than ever; I've been critically short of time, so I'm lucky to get a day a month to sit down and make things. I knew that parenthood would take up a lot of my time, but I didn't reckon on pregnancy and childbirth making my wife an invalid, or our house flooding, or all the knock-on effects of these things. I'm running a Cub pack on my own, because nobody else can spare the time to help me; I'm already barely keeping up with the basic requirements of running the pack, and I can't put in any less time without shutting the whole thing down (which would weigh very heavily on my heart, as I love working with those kids, and couldn't bear to let them down). That takes up two or three evenings a week. And I lose a lot of evenings or weekend days helping Sarah build her career, to keep her sane (I don't want her being stuck in a dead-end life of childcare) and to help relieve our financial pressures. I lose three lunch breaks a week to transporting Jean. I'm barely keeping up with keeping the house clean; it gets worse all week and I catch up at the weekend if there is time. And yet most of the things that are taking up my time are the kinds of things I can do while still designing things in my head, so the creative output hasn't slowed that much, even though the time I have to follow up any of the ideas has nearly vanished. There just really isn't much time for me in the week; my one safe escape valve is my weekly visit to Bristol Hackspace on a Thursday after work, where I have two hours.

But then a second problem kicks in: When I do get some time without pressure, I often don't actually want to concentrate on things right away. Over the bank holiday weekend I got about a day to myself (in a few chunks of several hours here and there), and I think I spent at least the first three hours playing Cyber Empires; I only felt up to doing something mindless. After that I got stuck in and did some work on a couple of Ugarit tickets (4363bc7631 and 34e21d597f)... But it's too easy to spend my two hours in Bristol each week just nattering to people!

I've found I'm starting to get self-conscious about it. I'm feeling embarrassed about telling people about the fun ideas I've had, because I know they know I probably won't ever execute them.

There are too many people who go around being smug about the great ideas they have that they can't implement because they lack the skill (often, these folks feel that implementation is a mindless task to be given to hired goons). But you can't properly design something unless you can imagine every step of its construction; knowing the limits of the medium is essential to designing something that pushes those limits to their best... Otherwise it's no better than a child triumphantly saying they have designed the best car ever, one that drives at a hundred miles an hour AND flies AND has a laser gun AND has a fridge full of cakes in the boot. That's not a design; it's a requirements document (of sorts).

I don't want people thinking of me like that. Every time I've updated the ARGON web site I've put in more and more pessimistic estimates of my hope of ever implementing it. When I started it, it looked like a tractable project I could slowly work on over several years; now it looks like something I might manage to do in my retirement, at best. That makes me sad. I'm not a person who designs things they can't build (except when I'm doing science-fiction worldbuilding, at least...); I'm a person who just doesn't have the time to build the things they design...

zmiku: An automation daemon

A few years ago, I wrote my own service monitoring system for my servers and networks; I did this because Nagios, the common choice, was just too complicated for my tastes and didn't cleanly fit my needs. And so, The Eye of Horus was born, and has been monitoring my servers ever since. I open-sourced it, but I've not migrated it to the new Kitten Technologies infrastructure yet, so I don't have a link.

A design goal for Horus was to limit what needed to be installed on the monitored servers; it's a Python script, run from cron, which sshes into the servers and runs shell commands directly, sucking the results back from standard output. The configuration file format is easy to work with, and the system is modular: the Python script spits out a status file listing the status of all the services, which a set of CGIs uses to produce HTML reports on demand and to update rrdtool logs of measurables, and it produces a list of changes to be fed to a notification system.

However, it has some rough edges - I decided to make the shell commands run on the remote servers all output a standard format of report, which means mangling the output of commands such as pidof with sed and awk in order to produce them, which is a pain to do portably. In general, support for generating different commands to get the same report on different platforms is poor, too. I never got around to implementing hysteresis in the change detectors to put a service that's rapidly failing and recovering into an "unstable" state. And it's written in Python, when I've migrated all of my new development into Scheme.

I was tinkering with the idea of a straight rewrite in Scheme, with the rough edges fixed up, when I noticed a convergence with some of my other projects beginning to form.

I've long wanted to have a system where some small lightweight computer (perhaps a Raspberry Pi), attached to the home LAN, drives speakers as a music player, streaming music from the home file server. There's off the shelf stuff to do that, but I wanted to go a little further and also provide a text-to-speech notification system; the box would also have a queue of messages. If the queue was not empty, it would pause the music (perhaps with a nice fade), emit an announcement ding sound, then play the messages in turn via a text-to-speech engine. I had previously had some success in helping my wife manage her adult ADHD by putting a cronjob on her Mac that used the "say" command to remind her when it was time to have lunch and the like, as she easily gets too absorbed in something on her laptop and forgets everything else; I thought it would be good to extend that so it worked if she wasn't near her laptop, by making it part of a house-wide music system composed of music streamers in many rooms. And it would be a good place to route notifications from systems like Horus, too. And as the house we lived in then had a very long driveway, we could have a sensor at the end of the drive speak a notification if a car entered the driveway (in the new house, we have a similar requirement for a doorbell that can be heard in distant rooms...). And so on.

But that started to raise design issues similar to those of the notification system in Horus: sometimes a single event causes a lot of notifications to be generated, which spam the user when you really just want a single notification that tells them all they need to know. Horus has some domain-specific knowledge about which services depend on which, listed in the configuration file, and it automatically suppresses failures that "are to be expected" given root failures, but it could be smarter (for instance, if the failure occurs after the root service has been checked and found fine, but before the child services have been checked, then it will notify of the failure of all the child services, rather than noticing the suspicious trend).

And when multiple events occur in the same time span, yet are unrelated so such tricks can't be applied, some notion of priority and rate limiting needs to be applied. If ten thousand notifications suddenly appear in the queue in a single second, what's the system to do? Clearly, it will start fading the music down the very instant a notification arrives, but by the time it then gets to start talking a second later, it may have received a lot of messages; now it needs to decide what to do. Repeated messages of the same "type" should be summarised somehow. A single high-priority message should be able to cut through a slew of boring ones. And so on.
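
The queue-draining step could be sketched along these lines (a toy illustration only: the priority numbers, message types, and summary wording are all invented, with lower numbers meaning more urgent):

```python
from collections import Counter

def summarise(pending):
    """Collapse a burst of (priority, type, text) notifications into a short
    list of spoken messages: most urgent first, repeats of a type counted."""
    # Count how many messages of each (priority, type) arrived in the burst.
    counts = Counter((prio, kind) for prio, kind, _ in pending)
    # Remember the first text seen for each group, to use as its exemplar.
    first_text = {}
    for prio, kind, text in pending:
        first_text.setdefault((prio, kind), text)
    out = []
    for (prio, kind), n in sorted(counts.items()):
        text = first_text[(prio, kind)]
        out.append(text if n == 1
                   else f"{text} ({n} similar {kind} messages)")
    return out
```

A real implementation would also need rate limiting over time, not just within one burst, but the grouping idea is the same.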

At the same time, I was looking into home automation and security systems. There you have a bunch of sensors, and actions you want to trigger (often involving yet more notifications...) in response to events. And similarly I wanted to try and automate failover actions; host failure notifications in Horus should trigger certain recovery activities - but only if the failure state lasts for more than a threshold period, to make sure expensive operations are not triggered based on transient failures.

Programming these "rules", be they for automation, analysing the root cause of failures from a wide range of inter-dependent service statuses, or deciding how best to summarise a slew of messages, is often complex, as they deal with asynchronous inputs and the timing relationships between them; specialist programming models, generally based around state machines, help a great deal.

Also, having a common infrastructure for hosting such "reactive behaviour" would make it possible to build a distributed fault-tolerant implementation, which would be very useful for many of the above problems...

So, I have decided, it would be a good idea to design and build an automation daemon. It'll be a bit of software that is started (with reference to a configuration file specifying a bunch of state machines), and then sits there waiting for events. Events can be timers expiring, or external events that come from sensors; and the actions of the state machines might be to trigger events themselves, or to activate external actuators (such as the text-to-speech engine or a server reboot). And a bunch of daemons configured to cooperate would all synchronise to the same state in lock-step; if daemons drop out of the cluster, then all that will happen is that sensors and external actions attached to that daemon will become unavailable, and state machines which depend on them will be notified. In the event of a network partition separating groups of daemons, the states can diverge; a resolution mechanism will need to be specified for when they re-merge.
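
One possible shape for such a configured state machine, ignoring all the distribution and clustering concerns, might be (a toy sketch; the state, event, and action names are invented, not zmiku's actual design):

```python
class Machine:
    """A tiny event-driven state machine: transitions are keyed by
    (current state, event) and give (new state, action to trigger)."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state, action = self.transitions[key]
            return action  # the daemon would dispatch this to an actuator
        return None        # events with no transition are ignored

# A service monitor that only raises the alarm after two consecutive
# failed checks, damping transient blips.
monitor = Machine("up", {
    ("up",      "check-failed"): ("suspect", None),
    ("suspect", "check-failed"): ("down",    "notify-failure"),
    ("suspect", "check-ok"):     ("up",      None),
    ("down",    "check-ok"):     ("up",      "notify-recovery"),
})
```

Timer expiries would just be another kind of event fed into handle(), which is what makes "only act if the failure lasts longer than a threshold" rules easy to express in this model.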

Having that in place would mean that building a service monitoring system would merely involve writing a sensor that runs check commands, plus a standard state machine for monitoring a service (with reference to the state machines of the services it depends on), generating suitable events for other consumers of service state to use - and human-level notification events in a standard format recognised by a general human notification handler running in the same automation daemon cluster.

The shared infrastructure would also make it easy to integrate automation systems.

Now, this is a medium-term project as what I have is working OK for now and I'm focussing on Ugarit development at the moment, but I am now researching complex event processing systems to start designing a reliable distributed processing model for it. And I've chosen a name: "zmiku", the Lojban word for "automatic" or "automaton"; its goal is, in general, to automate complex systems. As I apply it to more problems, I'd like to bring in tools from the artificial intelligence toolbox to make it able to automate things in "smarter" ways; I feel that many of these techniques are currently difficult to employ in many cases where automation is required, so it would be good to make them more available.

The sorry state of keyboard interfaces

Back in the Dark Ages, keyboards were simple devices. Putting too much processing power in them would raise the cost unacceptably; they were kept as simple devices that told the host computer when buttons were pressed and released, and the host computer had the job of converting that into meaningful information such as entered data or commands.

In particular, keyboards didn't even know what was printed on their buttons. They told the computer what button was pressed as a "scan code", which was loosely tied to the key's position on the keyboard. Keyboards for different alphabets had different things printed on the keys, but generated the same scan codes regardless; the computer had to be told what "keyboard map" to use to convert those scan codes into letters.

This wasn't a problem in the days of the original PC and AT keyboard interfaces, and the later PS/2 interface, where only one keyboard could be plugged into a computer; telling the computer what kind of keyboard you had was a one-off job.

However, by the time USB came to be, microcontrollers were sufficiently cheap that one capable of managing the USB interface to a keyboard would easily have been able to manage its own mapping to a standard set of codes based on what the key did rather than where on the keyboard it was, avoiding the need for keyboard maps. But they didn't. Oh no. Instead, they standardised a new set of scan codes for the positions of keys on a "standard layout", regardless of what was printed on them. And of course, keyboards that don't follow the standard layout (such as compact laptop keyboards, or ergonomic ones, or keyboard emulators such as chorders) generate the scan codes for keys based on where they would be in a standard layout, meaning that the scan codes aren't really relating to anything sensible at all.

And meaning that we still need keyboard maps on the computers.

This becomes a real pain when you have more than one keyboard, which is easily done with USB - and is increasingly becoming the norm, as a laptop (with its own keyboard) is used as a desktop computer (with a nicer, external, keyboard). For a while I was using an Apple laptop, but mainly as a VM host for a NetBSD VM. The Apple laptop had an Apple keyboard, but I plugged in a USB PC keyboard. When using Mac OS software outside my VM, the laptop had the correct keymap but the external keyboard did not; when using my VM, it was the other way around. The situation sucked.

Also, I'm sure people who work with multiple languages would love to have multiple keyboards that they can switch between easily depending on what language they're typing, without having to reconfigure their keyboard map when they do so. As a nerd, I would love to be able to buy a small keypad covered in extra function keys and have it work alongside my normal keyboard (maybe even foot pedals!). How about specialist keyboards with function keys for tasks like computer-aided design?

So, here's my proposal: keyboards should identify their buttons with Unicode strings, and a type flag (glyph or function), and an optional position flag for duplicated keys (chosen from nine options: top left, top middle, top right, center, etc; a tenth value can be used for non-duplicated keys). When you press a key with "H" printed on it, the keyboard should say "glyph H is down". When you press the left shift key, the keyboard should say "function shift (bottom left) is down".

Keys with more than one glyph printed on them, corresponding to what should happen when that key is pressed with combinations of modifier keys, can be handled by the keyboard also providing a "modifier table". If I press shift+5 in the hope of getting the % sign printed above 5 on my keyboard, the keyboard should note that a shift key is pressed, then that 5 is pressed; but the modifier table should note that the keyboard's key caps will be giving the user the impression that this combination should produce a "%".
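
The host side of the modifier-table idea could be sketched like this (a hypothetical illustration of the proposal, not any real HID mechanism; the table contents and function names are invented):

```python
# The keyboard would report this table alongside its key names, describing
# what its key caps promise for each modifier + key combination.
MODIFIER_TABLE = {
    ("shift", "5"): "%",
    ("shift", "h"): "H",
}

def resolve(held_modifiers, key):
    """Return the glyph the user expects from pressing `key` while the
    given modifier functions are held, per the keyboard's own table."""
    for mod in held_modifiers:
        if (mod, key) in MODIFIER_TABLE:
            return MODIFIER_TABLE[(mod, key)]
    return key  # no table entry: the key means its own glyph

# resolve({"shift"}, "5") -> "%", matching what's printed above the 5
```

The point is that the table travels with the keyboard, so the host never needs a separately-configured keymap.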

Function keys can be given any name, including useful keys such as "help", "cut", "copy", and "paste". And you can have as many soft-bindable F-keys as you want. All these rich function names can be passed through to software as-is, letting apps bind functionality to appropriately-named keys; to make this easier, there should be a shared vocabulary of function key names to avoid synonyms cropping up.

This would be easy to implement.

This would make keyboards plug-and-play.

This would make it easy to use multiple keyboards on the same computer.

This would open up new markets for keyboards with heaps of special function keys.

This could be done in a backwards-compatible manner by making keyboards expose the old USB scancodes by default, along with a note that they can be switched into Sensible Mode if the host computer supports it.

Lobby the USB implementors forum to put this into the next version of the USB HID specification now!

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales