The Tangled Tale of My Career

I've been feeling a bunch of despair, boredom and ennui about my career lately; so I've decided to attack it in the way I know best - by writing about it.

When I was little, I wanted to be a scientist. Of course, that wasn't what I actually wanted - but popular media had created this vision of a mad scientist labouring in a lab, making weird and wonderful things. I later learnt that what I wanted to be was called an "Inventor", which is a kind of engineer: someone who applies science to make things. The goal of building a fantastic machine is to get that fantastic machine, not to test some hypothesis - so it's definitely not science.

This was fed, in part, by the reading material I had around the house. My maternal grandparents had been an electrical engineer and a science teacher, respectively, so I had a mixture of O-level science textbooks and old cloth-bound tomes about electrical power distribution networks. Plus, my mother was a science fiction and fantasy fan, so I had stories like E. E. 'Doc' Smith's Lensman books (epic space battles, written by an engineer, so tending to linger on the technical details) inspiring me. I saw technology as a way of solving real problems; learning about it was fascinating, the challenges were engaging, and - even better - I found I had a knack for solving them. The future looked exciting!

The early days

Somewhere in my first years of school, my mother bought me a teach-yourself-electronics kit that included a breadboard and a bunch of 7400-series digital logic chips. One part of the board was carefully set up with a row of DIP switches with pull-up resistors, and a set of 5mm red LEDs, to serve as inputs and outputs, while the other part of the board hosted a variety of different circuits to demonstrate the principles. I had a lot of fun going through the exercises with her!

Then, when I was about seven or eight, she bought me a ZX Spectrum+ home computer, which we plugged into our black-and-white TV, and I started to learn to program in BASIC. The manual for the Spectrum included a block diagram of the system, and - combined with reading all the classic Usborne books I could get my hands on - it got me wondering about building my own computer from scratch. In my mid teens I rescued an I/O board from a minicomputer on its way to a skip, and extracted a Z80 processor chip, which I hooked up to a clock generator and got talking to RAM; but (lacking an EPROM burner) I was stumped by the complexity of building a programming panel to manually fill the RAM, so it never ran anything. I read the legendary book The Soul of a New Machine from the local library, and marvelled at the tales of a microcoded 32-bit processor with hardware memory protection. I tried to design my own processor, but at the time I lacked the theory behind synchronous logic, so the best I could come up with was a complex system of my familiar Z80 processors, working in parallel to effectively provide a semi-hardware, semi-software virtual machine with a very high-level instruction set.

Books were a huge inspiration. I drained the local public library's technical department and, when I could get access, the local technical college's library as well. Many of my birthday and Christmas presents were books I requested. I read about programming languages like Forth, Ada, Lisp and Prolog; computer networks; distributed databases; all amazing stuff.

Around then, I upgraded to an original IBM PC (rescued from going into a skip at the local technical college), and was writing code using Turbo C++. I read every book on computing I could find in the local library - and then started on the library at the local technical college. I also devoured the instruction leaflets for the college's VAX cluster (although I never managed to get access to it).

The computers I had access to - both the ZX Spectrum+ and my PC running DOS - lacked any real multitasking. They would always be following a single program, which spent most of its time waiting for user input; they were strictly reactive devices that would simply sit there waiting for the trigger they were expecting before doing the next thing in a pre-programmed sequence. However, I was reading about multi-user systems, and dreaming of computers that could manage a set of responsibilities, dynamically reacting to events - what might nowadays be considered data-driven, goal-seeking behaviour. I was really excited when I got hold of a copy of Prolog Programming for Artificial Intelligence sometime around age ten; I was building, in my mind, a model of computers as much more autonomous, powerful, helpful, capable devices.

The Internet

Around the time I was fourteen, I got a dialup Internet connection (at 9600 baud!). This gave me access to a vast new range of reading material, and also got me really thinking about networks of computers. What if a bunch of computers connected by a network could share information between themselves to collectively perform the functions assigned to them, distributing workloads across their resources? The idea of being able to expand your "computer" by just adding more computers to it on a network was enticing, and opportunities for highly available systems with redundant components came to mind. This required a substantial re-think of how communication between parts of a software system worked - passing pointers to objects in memory wouldn't work between computers with separate memories. Handling failure in a distributed system seemed a trivial matter of failing over to other hardware and retrying operations - until you had to deal with changing stored information; so I started to hoover up information about distributed databases and two-phase commit.
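The two-phase commit idea I was reading about can be sketched as a toy in-memory illustration (the `Participant` class and voting flags here are purely hypothetical - real implementations must also log each step durably to survive crashes):

```python
# Toy sketch of two-phase commit: a coordinator asks every participant
# to "prepare" (vote), and only commits if all of them vote yes.
class Participant:
    def __init__(self, name, will_vote_yes=True):
        self.name = name
        self.will_vote_yes = will_vote_yes
        self.state = "idle"

    def prepare(self):
        # Phase 1: stage the change and vote yes or no.
        self.state = "prepared" if self.will_vote_yes else "aborted"
        return self.will_vote_yes

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: collect votes (stops early on the first "no").
    if all(p.prepare() for p in participants):
        # Phase 2: everyone voted yes, so tell everyone to commit.
        for p in participants:
            p.commit()
        return True
    # At least one "no" vote: tell everyone to abort.
    for p in participants:
        p.abort()
    return False
```

The key property is that no participant commits unless every participant has promised it can; the price is that a prepared participant is stuck holding locks until the coordinator makes a decision.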

And the Internet gave me my first taste of a computing culture. At the time, it was widely believed that C, or this new upstart C++, were the Best Languages because they were Fastest, and that a skilled programmer was somebody intimately familiar with the details of the hardware that would run their program; other languages, with their array-bounds checking, were fine for learning to program and for simple tasks, but Real Programmers used C or, for the real Inner Circle, assembly language - to get the ultimate performance and power over the machine. However, I was fascinated by ways for programmers to cede control to the machine so it could figure things out itself, automatically reacting to situations on the fly rather than needing every case hand-coded. I was excited about how exception handling could save a programmer from explaining how to handle every possible error in every possible case. I wondered how automation could be expressed as state machines, which seemed better suited to unending processes like controlling a machine than the traditional block-structured programming that seemed more focussed on things with a beginning and an end. And so I joined a great culture war, arguing with people who said things like "An interpreted language will never take off" (under the misapprehension that the kinds of advanced language features I was talking about - such as polymorphism - could never be compiled). Oh, how wrong they were, in this day and age when so much code is written in JavaScript, Python, PHP, Ruby, Java, and friends!
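The state-machine style of automation can be sketched as a table-driven loop (the states and events here are an invented example of a machine controller, not anything from a real system):

```python
# A tiny table-driven state machine: the states and transitions are
# data, so an unending control loop just looks up what to do next
# instead of being written as a block-structured sequence of steps.
TRANSITIONS = {
    ("idle", "button_pressed"): "running",
    ("running", "button_pressed"): "idle",
    ("running", "overheat"): "cooling",
    ("cooling", "cooled_down"): "idle",
}

def step(state, event):
    # Events with no transition defined leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["button_pressed", "overheat", "cooled_down"]:
    state = step(state, event)
# After the three events the machine is back in "idle".
```

Because the loop never "finishes", this shape fits an open-ended process - a machine controller or a server - far more naturally than code organised around a beginning, a middle, and an end.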

What if, I thought, you could introduce computers to each other via the network, by pre-sharing some secret keys so they can trust each other as being part of the same cluster and not some interloper, and protect their communications from an untrusted network? And they then shared a single database (or set of databases - it's a matter of perspective) between themselves, providing a single global logical state? And that state included software to handle events, such as users clicking buttons in user interfaces, or external machinery, or service requests from third-party clients across the network? And then the computers in that "cluster" looked up how to handle the events that impinged upon them, or if they were busy punted the request across the network to another computer that had more available resources? And what if we wrote that software in a language designed to capture the nature of the problem directly and let the computer try to figure out the best solution, rather than just as a series of instructions expressed in terms of computer innards, that we hoped would solve the problem? What if we could install such software on a bunch of standard off-the-shelf computers on a network, rather than needing expensive specialist hardware to do it? And so, slowly, I began to brew a design for a complete re-think of the infrastructure software was built upon - that, in my mid-teens, came together into ARGON.
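The event-routing part of that idea can be illustrated with a toy sketch - the `Node` class, its capacity model, and the "punt to the least-loaded peer" rule are purely illustrative inventions, not the actual ARGON design:

```python
# Toy sketch of cluster event routing: a node handles an event itself
# if it has spare capacity, otherwise punts it to the least-loaded peer.
class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.load = 0
        self.handled = []

    def handle(self, event, peers=()):
        # Take the event ourselves if we have headroom (or nowhere to punt).
        if self.load < self.capacity or not peers:
            self.load += 1
            self.handled.append(event)
            return self.name
        # Otherwise forward it to the peer with the most spare capacity.
        target = min(peers, key=lambda n: n.load)
        return target.handle(event)

a = Node("a", capacity=1)
b = Node("b", capacity=2)
who = [a.handle(e, [b]) for e in ["click1", "click2", "click3"]]
# who == ["a", "b", "b"]: node a takes one event, then punts to b.
```

A real cluster would of course need authenticated, encrypted links between the nodes (the pre-shared keys mentioned above) and shared state rather than per-node lists; this only shows the shape of the workload-punting idea.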

I was excited! I saw a better vision of computing, one that would make it easier to write faster, more scalable, and more reliable software. And the growth of the Internet had led to a boom in open source development, as anybody with an interest could easily join a project. Mailing lists and Usenet buzzed with cool ideas; I helped out with the release of OpenDOS, and contributed to the "PNG extensions" addendum to the now-ubiquitous PNG file format (my ever-changing surname in the credits of those references is another story entirely...). I contributed a library for low-level hardware access to the DJGPP project. It was easy for an enthusiastic person to be part of something great.


I went to University to study Computer Science. This was a logical choice, as I already knew most of it. I also got my first job, working in the exciting new field of Web development (this was the 1990s, so that was a booming growth industry). I worked with good people who sought to do a good job for our clients, so we did a lot of learning and research into new technologies, and we created innovative solutions. This was great for me - I spent a significant proportion of my pay on the legendary O'Reilly books documenting most of the new Internet technologies.

The Web boom brought interest in new approaches to things; new languages and technologies were eagerly adopted, and a new event-driven (by HTTP requests) model of software development made it acceptable to think about models of computation beyond the POSIX-style "long-running process with input and output channels" that had previously dominated the software industry. I quickly found that actual front-end Web development was fairly tedious - particularly since the connectionless HTTP model meant you had to find ways to manage form state through complex processes - but that just drove me into backend development; I specialised in infrastructure for Web app state management, databases, and hosting and network infrastructure.

I put my technical problem-solving skills to use; in design meetings, it was usually me leading, asking the questions needed to pin down our requirements and proposing solutions that met them. I gained a reputation as a problem-solver, and people would bring me in on things when they were stumped (this became self-perpetuating; many times, the act of explaining the problem to me helped people realise the solution themselves, so my involvement was sometimes merely to ask a few questions). And in my spare time I did my degree, integrated what I was learning into my designs for ARGON, read books, and built my own server hosting infrastructure.

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales