Vomit-induced implementations of the 9P protocol in Chicken Scheme

Last Saturday, I came down with what I suspect was norovirus. The rest of the family (apart from the baby) had come down with it on Thursday, and as I'd spent the past few days mopping up after them, this was probably unavoidable - although I'd tried my best by wearing a respirator when performing clean-up operations (it was also nice not to have to smell what I was clearing up...).

But it meant I spent Monday off of work recuperating. I was too weak and exhausted to do any work from home, but I was bored senseless just lying there on the sofa, so I decided to try to extend Chicken Scheme's 9p egg - a client implementation of the 9P2000 file server protocol - to also be able to act as a server.

This is something I want for Ugarit; it means that a Chicken Scheme app will be able to provide a virtual filesystem that can be mounted from a computer and used like your "real" filesystem. In particular, I want to be able to let people access their backed-up snapshots from a Ugarit vault as a live read-only filesystem, rather than needing to go in and manually "restore" their desired files back into the filesystem to access them. And it'll really come into its own when I implement archive mode, as it will make it possible to actually use the Ugarit archive seamlessly.

Unfortunately, being rather fuzzy-headed, I kept making rookie mistakes, but I eventually managed to get the core protocol implementation working. In doing so, I found out that a 9P server that puts incorrect lengths in the stat structures returned from directory reads causes 9P mounts in Linux to "hang" in a way that can't be unmounted; you have to power-cycle the machine, as it won't even shut down cleanly. So be careful of that when mounting the few public 9P servers out there!
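To make that failure mode concrete, here's a sketch (in Python, for illustration - not the 9p egg's actual Scheme code) of packing a single 9P2000 stat entry as the protocol defines it. A directory read returns a buffer of these entries concatenated, and the client walks the buffer by trusting each entry's leading size[2] field, which must be exactly the number of bytes that follow it:

```python
import struct

def p9_string(s: str) -> bytes:
    """9P strings are a 2-byte little-endian length followed by UTF-8 data."""
    data = s.encode("utf-8")
    return struct.pack("<H", len(data)) + data

def p9_stat(name: str, *, qtype=0, qvers=0, qpath=0, mode=0o644,
            atime=0, mtime=0, length=0, uid="", gid="", muid=""):
    """Pack a 9P2000 stat entry: size[2] type[2] dev[4] qid[13]
    mode[4] atime[4] mtime[4] length[8] name[s] uid[s] gid[s] muid[s].
    The leading size field counts the bytes AFTER itself; getting it
    wrong is what sends the Linux v9fs client off into the weeds."""
    qid = struct.pack("<BIQ", qtype, qvers, qpath)      # type[1] vers[4] path[8]
    body = (struct.pack("<HI", 0xFFFF, 0xFFFFFFFF)      # type, dev: "don't care" = ~0
            + qid
            + struct.pack("<IIIQ", mode, atime, mtime, length)
            + p9_string(name) + p9_string(uid)
            + p9_string(gid) + p9_string(muid))
    return struct.pack("<H", len(body)) + body

entry = p9_stat("README.txt", length=42, uid="user", gid="users")
size = struct.unpack_from("<H", entry)[0]
assert size == len(entry) - 2    # the invariant a directory read must maintain
```

If that size field over- or under-counts, the client's walk through the buffer lands mid-entry, which is consistent with the unkillable-mount behaviour described above.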

In order to test it, and as a utility for Chicken apps that would like to provide a 9P "control/status filesystem" in the manner of wmii et al, I started to write a simplified high-level virtual filesystem server library on top of the core server implementation. At the point where I made this status update to my friends in the Chicken Scheme IRC channel, directory listings weren't working (they now are), but you can see the idea - create a filesystem object from Scheme and register files and directories in it, and it appears as a live filesystem under UNIX.
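The flavour of that high-level interface, sketched in Python (the names here are invented for illustration; the real library is Scheme and its API may well differ):

```python
# Toy sketch of the idea: register files whose contents come from
# procedures, so every read reflects the application's live state.
class VirtualFS:
    def __init__(self):
        self._files = {}                # path -> zero-argument content thunk

    def register_file(self, path, read):
        """read() is invoked on each read, so contents are always current."""
        self._files[path] = read

    def read(self, path):
        return self._files[path]()

    def listdir(self, dirpath):
        """List immediate children, as a 9P directory read would present them."""
        prefix = dirpath.rstrip("/") + "/"
        return sorted(p[len(prefix):] for p in self._files
                      if p.startswith(prefix) and "/" not in p[len(prefix):])

fs = VirtualFS()
fs.register_file("/status/uptime", lambda: b"3 days\n")
fs.register_file("/status/load", lambda: b"0.42\n")
```

The server side then just answers 9P walk/open/read requests out of this structure; the filesystem the user mounts is a live view onto the application's own data.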

Now that I'm feeling a bit better, I've realised several other rookie errors I made (not ones that cause bugs, I hope, but ones that complicated the code unnecessarily) - I'll fix those up before I submit all of my changes to the 9p egg's maintainer for merging in...

Then it'll be time to start on the Ugarit integration. THAT will be fun 🙂

Spring Cleaning

I've spent more time building infrastructure than using it, I suspect. I love building infrastructure, so I've often built it because I can; however, with everything that's happened in the past six years, I've ended up struggling to maintain the infrastructure I already had. So I've had to change tack and become much more pragmatic about my infrastructure astronautics, such as getting rid of my limited company and migrating from a tightly-bound cluster to a single box for my hosting platform.

This has given me some time to tidy up and simplify the infrastructure I want to keep.

So this weekend, I got around to rebuilding the Kitten Technologies web site. This is where I publish my open-source creations; they were all version-controlled in Subversion, and I had a PHP site with some static pages plus a dynamically generated project browser that would pull files called VERSION.txt and README.txt out of the SVN repositories to build a description page, offer up tarballs of all released versions for downloading, and link to an SVN web interface for browsing the repo. I'd wanted to get around to implementing ticket tracking at some point so folks could submit tickets.

However, for a while I've ached to migrate to Fossil for version control, mainly because it has integral ticket tracking and a wiki for each project, along with integral repository browsing; it provides a fully-featured project Web site, and it's a distributed VCS to boot, which is also useful. But I wanted it all to still look like a nice integrated site for my projects.

So what I've done is to write a Fossil skin stylesheet with my new look in it, then to build the wrapper site using the same CSS (eg, by using Fossil's names for div classes and overall page structure), based on Hyde; the CSS is actually generated from an scss master file that Hyde processes as part of the static site, which the Fossil repos just refer to. My deployment script rolls the skin out to all of my repositories whenever it's updated, so they are all kept magically in sync.

It still has a few rough edges (I want to improve the navigation with a consistent site-wide nav bar above the Fossil menu bar, with the current project highlighted; this will be slightly more complex, as I'll need to make the script modify the skin for each project to highlight the correct one) and I am still incapable of making non-ugly CSS, but it means that Kitten Technologies is now live on Fossil. I've a lot of projects still to migrate, but after I've done the "fiddly" ones that need some level of manual tweaking, I hope to produce a script to automate handling all the rest.

Secondly, I've been tidying up the home fileserver. It was down for some time for various reasons, which means that I've ended up with a new archive of photos, music, and PDFs forming on my laptop. I pulled our music collection out of the backups onto my laptop, too, which meant that I then had a diverging fork of that (as there was some new music on the file server since the last backup, which I later retrieved from the disks), so the re-unification of all those tens of gigabytes of files has been fiddly. But, it's now largely done, which is great; and there's now precisely one master copy of everything, and the home wiki is back up to date and pruned of outdated TODO items from several years ago.

However, this has increased my desire to implement Ugarit's archival mode. Rather than manually curating directory structures to organise my stuff (and the pain of merging changes to them), I'd love to just be able to pour files into a Ugarit library and tag them with metadata (maybe some time after the original import, if I'm in a hurry at the time), then create virtual filesystem views on that which reflect things like "All my music, organised by artist/album/title" or "All my photos, organised by who is in them, year, then event title". Combine that with the proposed Ugarit replication backend, and it will even manage replicas on my laptop as well as the home fileserver, all kept seamlessly in sync; having a home fileserver was easy when I worked from home on a desktop machine so I could just permanently mount the filesystem from the server, but it's a bit trickier with a modern laptop-based lifestyle.

Also, as the archive is already backed up into Ugarit, migrating it into a Ugarit "library" will be fast and efficient - Ugarit will automatically recognise that it already has the content of the files, and just need to upload the metadata!
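That works because Ugarit's storage is content-addressed: a chunk of data is keyed by a hash of its bytes, so importing bytes the vault has already seen costs a lookup rather than an upload. A minimal illustration of the principle (Ugarit's real chunking and keying are more sophisticated than this sketch):

```python
import hashlib

# Toy content-addressed store: data is keyed by its hash, so re-importing
# bytes the store has already seen costs nothing but the lookup.
class ContentStore:
    def __init__(self):
        self.chunks = {}
        self.uploads = 0

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.chunks:      # only genuinely new content is uploaded
            self.chunks[key] = data
            self.uploads += 1
        return key

store = ContentStore()
k1 = store.put(b"holiday-photo.jpg contents")
k2 = store.put(b"holiday-photo.jpg contents")   # already present: no upload
assert k1 == k2 and store.uploads == 1
```

Migrating already-backed-up files into a library is the second `put` here: the bytes are recognised, and only the new metadata needs storing.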

I think with that and my workshop sorted out, I'm done with spring cleaning - my urgent tasks are now sorting out paperwork for my Cub pack, fixing an offline external disk on my fileserver, getting Ethernet to the workshop so I can do useful computer work in there (and move the home fileserver out of poking range of the baby, who loves to turn it off), resurrecting my salmonella install, hacking Ugarit, ring casting, getting the foundry working so I can cast bronze, and wearable computer work! Not to mention endless minor DIY things in the house - we've got pictures to put up, dents in the plasterboard walls to fill, a flue to install for the fire, walls to repaint, ...

My new workshop

I took the day off of work on my birthday, to do something I'd been dying to do since we moved in - get my workshop set up to a state where I can actually use it.

To begin with, I had a load of things to put away. The floor was covered in boxes that needed unpacking, but as soon as I'd cleared enough to get sufficient access, I put up my big shelf.

It goes on the wall above my welding bench:

Here's the wall I'd like the shelf on, above the welding bench

Drilling into masonry can be a pain, especially coarse breeze blocks like those, which are made up of lots of tiny stones joined together with cement; the bit tends to wander into a convenient gap between stones rather than ploughing through the wall where I want it to go. So the only thing to do is to break out my serious drill, which I call Vera:

This is Vera

Vera is an SDS+ drill, which means it has a special chuck and takes special bits. The chuck fixing is actually designed for hammer drilling, unlike standard drill chucks, which means the drill can apply a much more significant and reliable hammering force. As such, it glides through walls like this the way a normal drill glides through plywood.

So in no time I had each bracket mounted with 6x40mm screws in 10mm diameter wall plugs:

One bracket up Two brackets up

I could then lift the shelf into place:

The shelf in place

However, Vera's SDS+ chuck can't drive ordinary bits (I do have an SDS+ to ordinary chuck adapter, but Vera would really be overkill for the next step), so I used my cordless drill to predrill holes for the screws into the bottom of the shelf. I like to think of this drill as Vera's filthy little sister, as it's fast and easy. The observant will also notice that it goes up to eleven:

Vera's filthy little sister

Having done that, I screwed the shelf onto the brackets so it won't budge:

The shelf screwed to the brackets

With the shelf up I could then put even more stuff away. I've made a little guided tour movie:

There's still more to do - I need to get Ethernet cabling down there so I can get Internet access, and I need to fix the leaking flat roof, and do something about the draughty eaves and the ivy creeping in. But now that the floor is clear and things are in useful places, I can actually use the workshop, which is great.

So two days later, I performed my first project. We needed a coathook for children's coats and bags, and we found the perfect design in a shop, except it was made to hang over the top of a door rather than to be mounted on the wall.

Not a problem when you own metal working tools.

First off, I used the angle grinder to chop off the bits that go over the top of the door, then, with them out of the way, went in and neatly chopped the long metal bits off close to the part we wanted:

Unwanted metal bits chopped off

Then I used a centre punch to mark where I needed to drill at each end:

Punched mark where I need to drill

To begin with, I drilled a 2mm hole, as that's a lot easier to drill accurately by hand than the 5mm hole I needed:

2mm hole drilled

Then I drilled it out to 5mm:

5mm hole drilled

And then the screw could fit in:

Screw in place

And it was done:

The finished product

Server upgrade

I host a heap of web sites (including this blog), email domains, source control repositories, mailing lists, and various other things (such as one of the official Chicken Scheme egg mirrors, a Jabber server, and an IRC server with bots). I do this with a combination of dedicated server hardware which I hire space, power, and connectivity for in London for the primary stuff, and a virtual private server in California for backup services and rapid DNS lookups from the USA.

This is a costly hobby, but it gives us a platform upon which to do interesting things, and lets me help other people out with free hosting; as I need to put in the time and money to run the infrastructure anyway, the spare capacity on it is essentially free.

The most demanding part is server upgrades. Periodically, I buy a new physical server, install it with all the software it will need, put it alongside the current hardware in the data centre, and transfer the data and settings across and configure everything that needs configuring on the new server until it works just like the old, then switch them over. I do this when the current hardware is getting full or overloaded or unreliable or just plain out of date, as I don't trust in-place updates of the core system software - it's too easy to end up with NOTHING working.

However, this has been overdue for several years. I bought the new hardware (this time, with a contribution from my biggest user of disk space!) nearly two years ago, and installed it in the rack nearly a year ago, but only yesterday did I get the chance to spend a day sitting next to it in London coaxing it into readiness then doing the final switch over...

It didn't go entirely to plan, of course. I'd previously written a script that used rsync to copy all the user data over; the first time I ran it, it copied everything, and subsequent runs only had to copy the differences. The idea was that I would have less downtime while copying the data from the old server to the new (which has to happen with both servers offline, so that nothing can change during the copying process) if only the final changes remained to be copied. However, I realised that the accounts of my biggest user of disk space weren't covered by my script, as they had been slightly hacked about to accommodate their growth.

And the whole process of moving the software configuration was made more complex by the fact that I had previously been running two servers in a kind of symbiotic cluster, in order to meet the load with the hardware of the time. Nowadays 64-bit multi-core behemoths with gigabytes of RAM are cheaply available and well supported by NetBSD, so everything can be done on one box. This is a much simpler setup, but it means that I had to undo the complexity of the previous setup when transferring everything across!

I ran into a few other unexpected problems, too; I noticed that the clock on the new server was terribly wrong, despite it running NTP. I did a manual ntpdate, and then, just in case, another to check that it was now only a few milliseconds out - but it was already half a second out again! It quickly became apparent that the clock was ticking about one second in every two seconds of real time...
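Incidentally, what ntpdate measures boils down to a single SNTP exchange; here's a rough Python sketch of the idea (the host is a placeholder, and a real client samples several times and corrects for round-trip delay):

```python
import socket, struct, time

NTP_EPOCH_DELTA = 2208988800   # seconds between 1900-01-01 and 1970-01-01

def parse_sntp_time(packet: bytes) -> float:
    """Pull the server's transmit timestamp (bytes 40-47 of a 48-byte
    SNTP response) and convert it to a Unix timestamp."""
    secs, frac = struct.unpack("!II", packet[40:48])
    return secs - NTP_EPOCH_DELTA + frac / 2**32

def clock_offset(host="pool.ntp.org"):
    """One-shot SNTP query: roughly how ntpdate estimates the local
    clock's offset (ignoring round-trip delay, for brevity)."""
    request = b"\x23" + 47 * b"\x00"     # LI=0, VN=4, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(request, (host, 123))
        response, _ = sock.recvfrom(48)
    return parse_sntp_time(response) - time.time()
```

Run twice a minute apart, an offset growing by tens of seconds is exactly the "half a second out again" symptom: the local clock itself is ticking at the wrong rate.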

Looking in the output of sysctl -a, it became apparent that I had a choice of time counter sources: it was using the TSC, but I also had an HPET, a clock interrupt, an APIC clock, and the good old 8254; my machine was brimming with alternative clocks. I tried switching to the HPET with sysctl -w kern.timecounter.hardware=hpet0 and suddenly time was running as expected. I popped that in /etc/sysctl.conf so it would come back on reboots, resynched the clocks, and everything's been fine since. I can only presume that the kernel was reading the CPU clock speed wrong, or that some kind of dynamic clock scaling was happening, so that the (CPU-based) TSC wasn't having its ticks converted to seconds properly.

I had a big setback with the email setup: NetBSD comes with Postfix as part of the base system, but I wanted a more recent version from packages, and I got tangled up in which version was being run in various situations and which configuration file was being used, which took a while to sort out. And then of course there's Mailman, the mailing-list server software, which is complicated by needing write access to its filesystem-based state when run from the mail system (for incoming mail) or the Web server (for the web interface); it uses lots of setgid binaries and group-writable files and the like, and so always takes a lot of fiddling to get working properly.

But... I did it. And so, having completed my tax returns earlier this year (which is what freed up the time to prepare for and do this mission), I have now gotten rid of all the major obligations that have been hanging over me for the past few years.

I still need to visit London again - I've left the old servers running alongside the new in case I missed any files that need to be transferred; I'll give people a chance to check I've not missed any of their stuff before remotely powering them down (to save electricity, which I pay for) and coming in to take them (to free up the space). But that's relatively easy!

British Gas

Ok, I'm starting to get annoyed with British Gas, so it's time for a RANT.

When we moved into the new home, it had prepayment meters for gas and electricity. For those not familiar, these are meters where, rather than having the meter read and being billed for what you have used, you have to take a smartcard to a shop and pay to get it "charged" with credit, which you then take home and insert into the meter. The meter keeps a credit balance, which it subtracts from as you use energy, and when it hits zero, you get switched off until you put more in.

This is annoying, as you have to keep remembering to top it up, and it's also more expensive; you get charged more per unit of energy for the privilege of all that extra infrastructure. It's usually an arrangement one enters into when having trouble paying the energy bills, as it makes you unable to use more than you can afford; and if you owe the supplier money, you can get put onto prepayment meters at an even more inflated rate, so that each top-up pays off the debt as well as buying your energy.

So, we of course wanted OFF. To do this we first need to take over the gas and electricity supply from the previous folks, then when the account's in our name, we can get it transferred. The gas and electricity accounts are both run by British Gas, who helpfully send a letter to "The Homeowner" at the address telling us to get in touch with them as soon as possible so we can take over the account, as the previous folks had told them they'd moved out; in particular, it warned us that the previous owner's smartcards may be configured to repay debts, so we'd need our own ones to be charged a more reasonable rate.

So we sign up with them, as advised, and are told it'll take about twenty-eight days for "the paperwork" to happen, and then we'll get our new gas and electricity smartcards shortly after that.

So we plod on, using the previous owner's smartcards which, I note, are particularly expensive for gas at least; we're putting in £20 top-ups of gas at least once a week, and often twice.

Then we get a letter asking us to ring them to confirm some details, which I do, and am thanked for the details, and told the account take-over can now REALLY start, and will probably be finished around the end of February, and we should be able to get our meters swapped over to normal ones after that.

Well, it's now March, and I've rung up to ask about that, only to find out that only the electricity is in our name; the gas is still in the previous people's name. Apparently "an error" was made originally. Anyway, apparently that's being cleared up, and they're trying to get me set up for a credit meter, and I was on hold for ages, and got cut off, so rang up again, and was on hold for ages, then was told it was taking a while so they'd ring me back...

That was a few hours ago. I'm still waiting. And we're still running on the previous owner's smartcards, paying inflated rates. It'll have been two months soon.

WHY does this have to be so SLOW? Why can't they just put the account into our name immediately, when we ring up? Why does it need to take 28 days at all, let alone take the month and a half it's been so far?

While we've been waiting for the gas and electricity to be fully transferred, British Telecom have managed to lay an entire NEW PHONE LINE to our house, complete with digging trenches and everything, and Andrews and Arnold (AAISP) set up broadband on it. Both actions requiring ACTUAL WORK to happen rather than just changing some entries in a database and posting us some smartcards. AAISP's contribution to this alone is probably similar to what British Gas had to do; adding us to their accounts database and contracting BT to put an ADSL linecard in at the exchange, then posting us a configured router, and it took them one week, most of which was waiting for BT to do their bit...


Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales