
The Eye Of Horus

As I mentioned in passing before, I've been writing my own server status monitoring package, The Eye Of Horus, because I wanted to better monitor my own servers.

Well, I installed it today, both to get started with some actual monitoring and to try it out in a real environment before releasing it properly, and the first thing I found was that the load on my primary server was high. As in, around 5. And a bit of digging revealed that it was Postfix being kept busy - delivering spam.

So I upgraded Postfix on it, and on my backup mail server, to the most recent version in pkgsrc, and added a bunch of SMTP-level anti-spam checks to take the load off SpamAssassin - and pow, system load has dropped back to reasonable levels.
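I won't reproduce my exact main.cf changes here, but the SMTP-level checks were along these lines (the restriction names are standard Postfix ones; the specific RBL is just an example, not necessarily what I used):

```
# main.cf - illustrative SMTP-level checks of the sort described above
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_non_fqdn_sender,
    reject_unknown_sender_domain,
    reject_rbl_client zen.spamhaus.org
```

The point is that rejecting the obvious junk during the SMTP conversation is far cheaper than accepting it and making SpamAssassin score the whole message.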

The Eye Of Horus has saved the day already!

It's not yet as featureful as Nagios, but it has a better architecture, so it's easier to configure and has the potential to overtake Nagios in the feature stakes. I've written an optional module for it that logs statistics (load average, disk space, etc.) to RRDtool databases, and hooks into the Web status display CGI so it can link to graphs produced from RRDtool, which is pretty nice.
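For the curious, the stats-logging module boils down to something like this - a simplified Python sketch with invented names, not the actual Horus code:

```python
# Sketch of the kind of collector the RRDtool module uses: gather the
# numbers, then hand them to `rrdtool update` (N means "timestamp now").
import os
import shutil

def collect_stats(path="/"):
    """Gather load averages and disk usage for one filesystem."""
    load1, load5, load15 = os.getloadavg()
    usage = shutil.disk_usage(path)
    return {
        "load1": load1,
        "load5": load5,
        "load15": load15,
        "disk_used_pct": 100.0 * usage.used / usage.total,
    }

def rrd_update_args(stats, rrd_file="load.rrd"):
    """Build the argument list for an `rrdtool update` invocation."""
    values = "N:{load1}:{load5}:{load15}".format(**stats)
    return ["rrdtool", "update", rrd_file, values]
```

The CGI side then just runs `rrdtool graph` over the accumulated databases when someone asks for a graph.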

Merging BitTorrent and HTTP

I've been kicking an idea around for a while now, so I thought I'd blog it, rather than just sit on it then feel frustrated when somebody else has it and gets RICH and FAMOUS and POPULAR...

Basically, BitTorrent makes publishing large files on the Web much less of a burden on the server than HTTP. If I put a 10MB file up on an HTTP server and give out the URL, everyone who fetches the file will transfer 10MB from my server. The same 10MB, over and over again.

If, however, I run a BitTorrent seed on my 10MB file, connecting to a tracker server, and give people the .torrent file describing my file and naming the tracker, then people with BitTorrent clients can connect to the tracker and find a list of connected clients with parts of that file (initially, just my seed client), and start fetching chunks of the file from them. As soon as a few people are downloading my file at once, they can actually start sharing chunks between themselves - my seed sends a chunk to one client, then my seed and that client are both available to send chunks to more clients. This reduces the load on my server a LOT, and thus reduces the cost of publishing large files.
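To put rough numbers on the saving: in the best case, a healthy swarm means my seed only has to upload about one full copy of the file, no matter how many people download it. A toy comparison (the "ideal" figure is the theoretical best case, not a guarantee):

```python
# Server upload cost for N downloads of a 10 MB file, comparing plain
# HTTP with an idealised BitTorrent swarm where peers redistribute
# every chunk the seed sends out.
FILE_MB = 10

def http_upload_mb(n_clients):
    return FILE_MB * n_clients   # every client pulls the full file from me

def ideal_seed_upload_mb(n_clients):
    return FILE_MB               # the seed only ever sends one copy

# → for 100 downloads: HTTP = 1000 MB from my server, ideal seed = 10 MB
```

Reality sits somewhere between the two, but the gap only widens as the file gets more popular.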

Lovely stuff.

However, it's complex. Rather than dump a file in a directory on my web server and give out the URL, I have to run a tracker server, create a .torrent file, run a seed client, and distribute the .torrent file (perhaps by copying it to a directory on a web server and giving out the resulting URL).

It strikes me that one could probably write an extension to HTTP, implemented by an Apache module, that:

  1. If a GET request for a file comes in with a special header stating that the client supports it, engage this special behaviour; otherwise, send the file as normal. The server may also be configured to send the file as normal if its size is below a certain limit.
  2. Have a tracker built into the server. (I think the tracker protocol runs over HTTP anyway?)
  3. If one does not already exist, automatically generate a .torrent for the file, naming the server itself as the tracker, and send that as the response body.

Clients and web browsers that support it could then automatically fetch static files using BitTorrent from servers that support it, while still maintaining perfect backwards compatibility between mixtures of old and new servers and clients, and without needing any extra admin effort (beyond perhaps installing and enabling the Apache module).
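The decision in step 1 is simple enough to sketch. The header name and helper below are made up for illustration - there's no such module or registered header, this is just the shape of the logic:

```python
# Hypothetical server-side decision for the HTTP/BitTorrent hybrid:
# answer with a generated .torrent only when the client opted in and
# the file is big enough to be worth swarming.
MIN_TORRENT_SIZE = 1 * 1024 * 1024   # below 1 MiB, just serve the file

def choose_response(headers, file_size):
    """Return 'torrent' to send a generated .torrent as the response
    body, or 'file' to serve the file as a normal HTTP response."""
    client_supports = headers.get("X-Accept-Torrent") == "1"
    if client_supports and file_size >= MIN_TORRENT_SIZE:
        return "torrent"
    return "file"
```

Old clients never send the header, so they always get the plain file - which is exactly the backwards compatibility property I want.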

As far as I can tell, it'd be better than Web Seeding.

Kitten Technologies

For some time now, I've been sitting on the domain kitten-technologies.co.uk, intended as an outlet for my "intellectual property" - whereas Snell Systems is me for hire to do bespoke stuff, Kitten Technologies is meant to be my more generic packaged outputs; all open source stuff for now, although I have plans for some more commercial things later.

Anyway, I've slowly been working towards a fairly decent automatic release management site, based around all the projects being in a Subversion repository and having standardised filenames at the top level of each project root (LICENCE.txt, README.txt, VERSION.txt, etc.).
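The standardised filenames are what make the automation possible - for instance, a release script can derive a tarball name straight from VERSION.txt. A simplified sketch (my own illustration, not the actual site code):

```python
# Given a checked-out project root with the standard top-level files,
# derive the name of the release tarball from VERSION.txt.
from pathlib import Path

def release_tarball_name(project_root):
    root = Path(project_root)
    version = (root / "VERSION.txt").read_text().strip()
    return f"{root.name}-{version}.tar.gz"
```

The same convention lets the site pull the licence and description for each project's pages without any per-project configuration.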

But with the successful upgrade of my server infrastructure to Apache 2, I can run Subversion over HTTP, meaning I can finally allow public Subversion access (with the option of giving other people commit access to individual projects in future). So I've now got the project management page up to a state where I'm not ashamed of it any more.

So, for example, I've recently been messing around with a server status monitoring package, a bit like Nagios but done in a way I prefer, which I've called The Eye Of Horus.

There is a main project information page, and a download page which links to the latest official release and to a nightly dev snapshot tarball, gives the public read-only Subversion URL (http://svn.kitten-technologies.co.uk/horus/trunk/), and links to a Subversion browser to look at the revision histories of everything.

Right now there's only me working on any of the projects, but if others collaborate (I have a few potential takers for Horus, since it seems there's a lot of minor dissatisfaction with Nagios), I can give them Subversion commit access, and set up project mailing lists as required; but I may integrate issue tracking into the Kitten Tech site itself, if it seems useful to let others submit bug reports and the like.

One Size Fits All

On Monday, I happened to be discussing some ARGON stuff with a friend, and he pointed out that what I'm trying to do, in many ways, is to find a one-size-fits-all solution for a lot of problems, and that this is often dangerous since you can end up making a nasty compromise.

He's right - part of the challenge in designing ARGON has been to find ways to avoid nasty compromises. So I thought I'd describe a few techniques I've been using.


T H Huxley

Dear fellow geologist,

I found a thing of exquisite beauty in the Oxfam book shop in Stroud yesterday. I couldn't believe my eyes when I saw it - I spent a whole 12 quid on it, so I expect you all to ogle it and go ooooooo lots!

T H Huxley

Isn't it just the best? Notice the date if you will on this next photo!

1895!

1895!!!!!!! Wow - not only is it one of our oldest books, but just wow! I found it because I was looking for a present for Al, who collects old technical and scientific books.

For us RSM peeps, T H Huxley is very very important 🙂

I am horrified at the Wikipedia article on the guy - yo, geologists that are better than me, go and put our history into the article please!

I would give a link telling people about T H Huxley and stuff to do with the Huxley school, the RSM and Imperial College, but I can't find a decent write-up or the pages don't work - why am I not surprised? If any peeps know of anything good I could link to, let me know?

Thanks

Oh, and isn't the book lovely - for those who can't read the title on the book, it's Science and Education Essays by T. H. Huxley. It's volume III though, so now I suppose I have to hunt the others down to make a nice set!

Sarah is very happy - geology love and old book love combined in one precious bundle - Sarah skips away into the sunset to read, wondering vaguely if it should be done with rubber gloves on? Hmmm... maybe I should get Al to do more reading of old books.

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales