The scene: my family and I are in a Little Chef. Jean, bless her, has got food all over herself, so I go out to the van in the car park to get wet wipes.
I unlock the back, hop in, go to the box of stuff, and start rooting about for wipes. I feel a slight motion and, still rooting, wonder if it's a strong wind rocking the van, or somebody bumping into it while getting into an adjacent car. Then I feel a bigger rocking motion and look outside to see, to my horror, the world moving... the van's rolling backwards, with me in the cargo bay and nobody in the front!
5:30am and I haven't slept a wink yet! I really need to sort out my lifestyle so I get (a) exercise and (b) time to think every day. Time to think is important for me; if I don't get enough, then when I go to bed, I lie there and think. Lots.
Tonight's thoughts have included:
- Some ideas about how whole-program transformations in CHROME (eg, the macroexpander/partial evaluator and the OO system) might be handled. The OO system needs to be a whole-program transformation, rather than just some macros using the normal macroexpander, since things like inheritance graphs and method lists for generic functions need to be accumulated together. Most Lisps handle that with macros that destructively update shared data structures, but I'm trying to design a system without rampant mutation, so I need a whole-program code walk instead. Clearly, since we want to be able to write macros that produce generic functions, methods, and the like, this pass needs to run AFTER normal macro expansion, but before the compiler gets its hands on the result.
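To make the ordering concrete, here's a rough Python sketch (not CHROME itself; all names and the form representation are invented for illustration): macroexpansion runs to completion first, then a whole-program pass folds over the expanded top-level forms, building up the method tables functionally instead of destructively updating a global structure.

```python
# Hypothetical sketch of the pass ordering: expand everything first, then a
# whole-program walk accumulates the OO system's global knowledge (here just
# method lists per generic function) without mutating shared state.

def macroexpand_all(form):
    # Stand-in for the real macroexpander: pretend the input is already
    # fully expanded into core forms.
    return form

def accumulate_oo(forms):
    # Fold over all top-level forms, returning a fresh table each step
    # rather than destructively updating a global one.
    methods = {}
    for form in forms:
        if form[0] == "defmethod":
            _, gf_name, predicate, body = form
            entry = methods.get(gf_name, ()) + ((predicate, body),)
            methods = {**methods, gf_name: entry}
    return methods

program = [
    ("defmethod", "area", "circle?", "pi*r*r"),
    ("defmethod", "area", "square?", "s*s"),
]
expanded = [macroexpand_all(f) for f in program]
tables = accumulate_oo(expanded)  # whole-program pass, after expansion
assert tables["area"] == (("circle?", "pi*r*r"), ("square?", "s*s"))
```

The point is only the shape: the accumulation pass sees the whole expanded program at once, which is exactly what per-macro expansion can't give you without mutation.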
- Some ideas about separating the generic function/method system - the dispatch part of Lispy OO - from the classes-inheriting thing. Subtype relationships used to dispatch GFs should be doable with plain predicates - my-record-type? etc. - or with more complex predicate expressions on groups of arguments, so we can support multivariate typeclasses in the Haskell sense, giving a rich interface/implementation system as well as a traditional records-with-single-inheritance class system. To do this properly we also need declarations that one predicate implies another - that (integer? x) implies (number? x), say - so that a method on numbers will be used for integers, yet a more specific integer method can override it. I'm not sure how decidable the "most specific method wins" rule can be with complex multivariate type predicates, though. Must experiment and ask knowledgeable formal logic folks.
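Here's a toy model of that predicate-dispatch idea in Python rather than Lisp (every name below is invented for illustration): methods are guarded by plain predicates, and a declared "implies" relation lets the most specific applicable method win.

```python
# Predicates playing the role of (integer? x) and (number? x).
def is_integer(x): return isinstance(x, int)
def is_number(x): return isinstance(x, (int, float))

# Declared implication: is_integer implies is_number, the analogue of
# declaring that (integer? x) implies (number? x).
implies = {is_integer: {is_number}}

def more_specific(p, q):
    # p is strictly more specific than q if p is declared to imply q.
    return q in implies.get(p, set())

methods = [
    (is_number, lambda x: "general number method"),
    (is_integer, lambda x: "integer-specific method"),
]

def dispatch(x):
    applicable = [(p, f) for p, f in methods if p(x)]
    # Pick an applicable method that no other applicable method is
    # strictly more specific than.
    for p, f in applicable:
        if not any(more_specific(q, p) for q, _ in applicable if q is not p):
            return f(x)
    raise TypeError("no applicable method")

assert dispatch(3) == "integer-specific method"
assert dispatch(2.5) == "general number method"
```

With a single implication chain this is trivially decidable; the interesting question in the entry is what happens when the guards are arbitrary multivariate predicate expressions, where "no other method is more specific" may have no unique answer.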
- Thoughts about future computer architectures. The drive is for more speed, but these days most CPUs sit idle, waiting for memory to feed them code and data, or (much more often) for the disk, network, or user. The only places where the CPU maxes out tend to be highly parallelisable tasks: server stuff handling lots of requests at once, games and other audiovisual things, and scientific number crunching. This suggests to me that a future direction of growth would be one or more high-bang-per-buck MISC processors embedded directly into SRAM chips (sort of like a CPU with an onboard cache... but the other way around, since the CPU would be much smaller than the SRAM array), bonded to the same chip carrier module as a set of DRAMs. The CPU-and-SRAM chips and the DRAMs would be designed together as a tightly-coupled integrated unit: short traces for maximum speed, and no standardised modular interfaces between them (like DIMMs and CPU socket standards), so the internal interface can evolve rapidly. The whole CPU+SRAM+DRAM unit is then pluggable into a standardised socket, of which motherboards will have several. The result? Lots of low-power-consumption, reasonably fast CPU cores with high speed access to high speed memory. And for those demanding games/media/scientific tasks? The higher-end modules will have FPGAs on as well...
- Forget nasty, complex, unpredictable memory caches: have a nonuniform physical address space (with regions of memory of varying speed) and let the OS use the virtual memory system to allocate pages to regions based upon application hints and/or access statistics. Not having cache tag and management facilities frees up more chip area for actual memory.
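The placement policy could be as simple as "hottest pages in the fastest region". A minimal sketch, with made-up region names and capacities, of how an OS might map pages to regions from access statistics:

```python
# Invented memory regions, fastest first: (name, capacity in pages).
regions = [("fast-sram", 2), ("dram", 4), ("slow-dram", 100)]

def place_pages(access_counts):
    # access_counts: {page_id: access count}. Sort pages hottest-first and
    # fill the fastest regions first, spilling colder pages downwards.
    placement = {}
    pages = sorted(access_counts, key=access_counts.get, reverse=True)
    i = 0
    for name, capacity in regions:
        for page in pages[i:i + capacity]:
            placement[page] = name
        i += capacity
    return placement

counts = {"stack": 900, "code": 700, "heap": 300, "mmap-file": 10}
p = place_pages(counts)
assert p["stack"] == "fast-sram" and p["code"] == "fast-sram"
assert p["heap"] == "dram"
```

A real kernel would do this incrementally with page migration rather than a batch sort, but the decision it makes each time is essentially this one.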
- We've been wondering about getting goats lately. Goats are useful creatures; they produce milk (which can be turned into cheese) and they produce decent wool (just not in the quantities sheep produce it). Their milk and cheese don't make Sarah ill the way cow-derived versions do. Plus, we need something to come and graze our paddock. We've been doing a little bit of research and apparently two goats would be about right for the space we have. We'd need to put an inner layer of fence around the paddock to keep them in while still allowing the public footpath, and we'd need a little shed for them to shelter in. But thinking about setting things up in the paddock, I'm now wondering if it would be a good idea to build a duck run in there too, down at the bottom by the stream, all securely fenced against foxes and mink and with a duck-house up on stilts in case of flooding, but with a little pond dug out for them (connected to the stream by a channel with a grille over it to prevent escapage). It would be a convenient place to have the ducks, and it would make a good home for them, I think.
It's now 6am. Do I try and go to sleep, or try and last the day out? Hmmm...
Web server upgrade (by alaric)
Whew. On Monday I upgraded some of the software on my primary web server, since it was running some old stuff with security holes in it.
The www/apache2 package in NetBSD now seemed to conflict with devel/subversion-base, since Apache 2 required devel/apr and the two were conflicting packages. So I had to upgrade to www/apache22. Fair enough.
One recompile later, and I start apache, and start checking out different web applications I host to see if they all still work...
...and my browser times out. Hmm, OK. I go to an open ssh window to look at the log files, and it's frozen.
I quickly check that the network hasn't failed, then resign myself to the fact that my server has just dropped off the net. It won't even ping, and I can't reach any of the services it forwards to the backend server either, so the network stack is totally down.
So that evening I head down to the datacentre and take a look... to find that it's died handling the exit() syscall from Apache - apparently an assertion failure inside knote_destroy or something.
Reboot. Start Apache. Start taking a look at sites.
Kerboom! It dies again in the same way.
Hmmm... Clearly, my three-year-old NetBSD 2.0 kernel is none too happy with Apache 2.2. It looks like Apache is doing something that triggers a bug in the kernel; knotes are event notification things, so I bet Apache is doing some kind of asynchronous I/O that exercises a bug in the kernel code implementing it, leaving the process's knote state invalid, so the kernel panics when it tries to tear down the process state after termination.
So I reboot it again, stop Apache starting, and leave it at that for the time being. No web service, but everything else works.
Then this evening (the day after), I returned, now with a shiny NetBSD 4.0 install CD in hand. Nervously I backed up some critical directories, then bit the bullet and did an upgrade.
And, to my delight, it was nearly seamless. The NetBSD installer upgraded and rebooted into a nearly perfectly working system. All my existing software, compiled under 2.0, ran fine under 4.0's 2.0 emulation, with the mysterious exception of net/bind9, which wouldn't start. A quick cd /usr/pkgsrc/net/bind9; make install later, and it was starting fine. Even Apache worked without hosing the system!
I had to compile a custom kernel with routing enabled, to allow the NAT that the server provides between the single public IP of the love.warhead.org.uk cluster and the backend server infatuation; then a quick reboot and that was working too.
All in all a successful mission, and it only took an hour or two. I still need to recompile all of my packages, but only to avoid the risk of there being a problem in the 2.0 emulation. While I was there I recompiled sudo, just because it's nice to be able to rely on it.
I've just finished reading A Commonsense Approach to the Theory of Error-Correcting Codes. The book does exactly what it says on the tin; it explains error correcting codes in terms of linear modulo-2 algebra, only getting into Galois fields and all that in an appendix.
And, as such, it does little more than scratch the surface. It only goes into Reed-Solomon codes that can correct single word errors, for example. But hey, I'm not complaining - it's done a great job of giving me an intuitive understanding of Hamming codes, cyclic codes (such as CRCs), the single-word-correcting RS codes, and so on. And I've learnt a lot about linear feedback shift registers.
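In that spirit, here's the classic Hamming(7,4) code worked through in the book's modulo-2 style (a sketch of mine, not the book's code): four data bits get three parity bits, and a single-bit error's syndrome reads out the 1-based position of the flipped bit.

```python
def hamming74_encode(d):
    # Encode four data bits into a 7-bit codeword; ^ is addition mod 2.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def hamming74_correct(c):
    # Recompute the three parity checks; the syndrome is the position of
    # the single flipped bit (0 means no error detected).
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1     # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                     # flip one bit in transit
assert hamming74_correct(code) == [1, 0, 1, 1]
```

Everything here really is just linear algebra over {0,1}: encoding is multiplication by a generator matrix, and the syndrome is the parity-check matrix applied to the received word.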
But it strikes me that the whole field of error-correcting codes is a bit insular. The maths required to really grasp it is complex; finite fields are bizarre things. While lots of people can experiment with things like data compression or basic encryption, error-correcting codes - the third cornerstone of low-level coding technology - remain quite inscrutable.