Category: Computing

Social Networking (by )

Although I'm not a big fan of Facebook, I occasionally feel an urge to update my real social network: my FOAF profile at http://www.snell-pym.org.uk/alaric-foaf.rdf. I've not made that link clickable, to save people the horror of having their confused browser show them a pile of raw RDF. This time, since I've been reminded by somebody that my PGP identity has been a bit unmaintained, I've been putting my key out on keyservers, updating the identities attached to it, and putting signatures on my FOAF documents, then linking to them with the Web of Trust ontology so it's all linked properly in RDF. My PGP key ID is 7371086A.

The reason I'm not a big fan of Facebook and other social network sites is that they're centralised. I have to give all my data to some third party and rely on them to keep their servers running! It's the same problem that afflicts most instant messaging systems, like MSN Messenger or whatever they call it these days (Live Something): I have to rely on the kindness of a third party to keep it going, and I have to trust them with my stuff.

Read more »

Type systems (by )

There are a number of type systems out there in different programming languages. In fact, there are zillions, but they boil down to a few basic approaches.

Read more »

n2n (by )

n2n looks like a lovely piece of technology.

It's basically a VPN system, but quite different from existing VPN technologies. Existing VPNs work by creating a point-to-point link between two systems: usually a personal computer on an untrusted, remote, and frequently changing network, and a router which then routes or bridges traffic (depending on the layer the VPN operates on) to other VPN clients and/or a physical private network.

The usual configuration is that there's a network with some resources on it that can't be exposed to the open Internet - insecure file sharing or network management services, for example - with an access device connected both to that network and the public Internet. Remote computers can connect to the access device via the Internet and thus be virtually and securely connected to the private network, so they can access the resources therein as if they were physically plugged into it. All of this happens over an encrypted link that they must authenticate to set up, keeping third parties from reading or injecting traffic.

But the conventional VPN approach doesn't work so well for more complex setups. I, for example, have two private networks with various servers and workstations on them, an isolated server, and two roaming laptops. It would be nice if I could set up varying levels of trusted connectivity between them. The isolated server should really appear to be local to the first private network, which could be done with a conventional VPN, except that a permanent connection would require the isolated server to try to set the VPN up on boot and, if it goes down due to network problems or the access server on the private network rebooting, retry the connection automatically. Likewise, I'd like some level of routing between the two private networks, with a bit of packet filtering to tailor the precise trust relationship; I'd have to choose one network's router to be the VPN server and the other the client, set up another auto-reconnecting VPN, and set up routing across it. Then I'd have the laptops also connect to a VPN server on one of the private networks, or perhaps the isolated server, to then use routing across the VPN links between the two private networks in order to reach everything they should be able to.

In practice, I'd probably pick the best connected private network to be the hub, and run a VPN server on it, and have everything else connect to that. Traffic between a laptop and the other private network would go via the hub, causing double bandwidth consumption at the hub and increasing latency. If the hub goes down, the whole network is fragmented.

Plus, mainstream VPN protocols are a pain to configure and use, as they tend to use strange protocols like GRE.

But n2n is much better than all that.

Uniqueness Typing (by )

Ever since I was a kid, I've been interested in exploring uniqueness typing as a paradigm for mutation in a programming language.

The principle is simple: mutating operations - assignment, I/O, etc - are a pain. Both for the implementers of the language, who are limited in what optimisations can be performed when the values of things can shift around beneath them and when any given part of the program may have side effects so order of execution must be preserved, and for the programmers in the language, who have to deal with bugs and complex behaviour that just don't happen when everything is referentially transparent.
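To make the idea concrete, here's a minimal sketch in Python. The names (`Unique`, `update`, `UseAfterConsume`) are hypothetical illustrations, and a real uniqueness type system (as in Clean, say) would enforce this statically at compile time rather than at run time; the point is just that once a handle to a value is consumed, no other live reference can observe the mutation, so a destructive in-place update is indistinguishable from building a fresh value.

```python
class UseAfterConsume(Exception):
    """Raised when a unique value is used after being consumed."""

class Unique:
    """Toy run-time model of a uniqueness-typed reference."""
    def __init__(self, value):
        self._value = value
        self._alive = True

    def _consume(self):
        if not self._alive:
            raise UseAfterConsume("unique value already consumed")
        self._alive = False
        return self._value

    def update(self, fn):
        """Consume this handle, mutate the value in place, and return a
        fresh handle. Since the old handle is now dead, the destructive
        update is unobservable elsewhere - referential transparency is
        preserved from the program's point of view."""
        v = self._consume()
        fn(v)  # safe to mutate: we held the only live reference
        return Unique(v)

buf = Unique([1, 2, 3])
buf2 = buf.update(lambda xs: xs.append(4))  # buf is consumed here
# a second buf.update(...) would now raise UseAfterConsume
```

The compiler analogue of this check is what lets the implementation turn a "copy and modify" on a unique array into an in-place write.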

Read more »

Insomnia (by )

yawn

5:30am and I haven't slept a wink yet! I really need to sort out my lifestyle so I get (a) exercise and (b) time to think every day. Time to think is important for me; if I don't get enough, then when I go to bed, I lie there and think. Lots.

Tonight's thoughts have included:

  1. Some ideas about how whole-program transformations (eg, the macroexpander/partial evaluator and the OO system) in CHROME might be handled. The OO system needs to be a whole-program transformation rather than just some macros using the normal macroexpander since things like inheritance graphs and method lists for generic functions need to be accumulated together; most Lisps handle that with macros that destructively update data structures, but I'm trying to design a system without rampant mutation, so need a whole-program code walk to do this. Clearly, since we want to be able to write macros that produce generic functions, methods, and the like, we need to do this AFTER normal macro expansion, but before the compiler gets its hands on it.
  2. Some ideas about separating the generic function/method system - the dispatch part of Lispy OO - from the classes-inheriting thing. Subtype relationships that are used to dispatch GFs should be doable with plain predicates - pair? my-record-type? etc. Or more complex predicate expressions on groups of arguments, so we can support multivariate typeclasses in the Haskell sense, as a rich interface/implementation system as well as a traditional records-with-single-inheritance class system. To do this properly we also need declarations that one predicate implies another - (number? x) -> (integer? x) - so that a method on numbers will be used for integers, yet a more specific integer method can override it. I'm not sure how decidable the "most specific method wins" thing can be with complex multivariate type predicates, though. Must experiment and ask knowledgeable formal logic folks.
  3. Thoughts about future computer architectures. The drive is for more speed, but these days, most CPUs are idle waiting for memory to feed them code and data, or (much more often) for the disk, network, or user to feed them. The only places where the CPU maxes out tend to be highly parallelisable tasks - server stuff handling lots of requests at once, games and other audiovisual things, and scientific number crunching. This suggests to me that a future direction of growth would be one or more high-bang-per-buck MISC processors embedded directly into SRAM chips (sort of like a CPU with an onboard cache... but the other way around, since the CPU would be much smaller than the SRAM array) which are bonded to the same chip carrier module as a set of DRAMs. One or more of CPU-and-SRAM and some DRAM chips are then all designed together as a tightly-coupled integrated unit for maximum speed due to short traces and the lack of standardised modular interfaces between them (like DIMMs and CPU socket standards) meaning that the interface can evolve rapidly. The whole CPU+SRAM+DRAM unit is then pluggable into a standardised socket, which motherboards will have several of. The result? Lots of cores of low power consumption reasonably fast CPU with high speed access to high speed memory. And for those demanding games/media/scientific tasks? The higher-end modules will have FPGAs on as well...
  4. Forget nasty complex unpredictable memory caches: have a nonuniform physical address space (with regions of memory of varying speed) and let the OS use the virtual memory system to allocate pages to regions based upon application hints and/or access statistics. Not having cache tag and management facilities makes for more chip area to put actual memory in.
  5. We've been wondering about getting goats lately. Goats are useful creatures; they produce milk (which can be turned into cheese) and they produce decent wool (just not in the quantities sheep produce it). Their milk and cheese don't make Sarah ill the way cow-derived versions do. Plus, we need something to come and graze our paddock. We've been doing a little bit of research and apparently two goats would be about right for the space we have. We'd need to put an inner layer of fence around the paddock to keep them in while still allowing the public footpath, and we'd need a little shed for them to shelter in. But thinking about setting things up in the paddock, I'm now wondering if it would be a good idea to build a duck run in there too, down at the bottom by the stream, all securely fenced against foxes and mink and with a duck-house up on stilts in case of flooding, but with a little pond dug out for them (connected to the stream by a channel with a grille over it to prevent escapage). It would be a convenient place to have the ducks, and it would make a good home for them, I think.
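The predicate-dispatch idea in point 2 can be sketched briefly. This is a hypothetical illustration, not CHROME code: `Generic`, `IMPLIES`, and the predicate names are invented for the example. Methods are keyed by plain predicates, declared implications (`is_integer` implies `is_number`) give the specificity ordering, and the most specific applicable method wins - with the caveat, as noted above, that this ordering may not stay decidable once predicates get richer.

```python
def is_number(x):  return isinstance(x, (int, float))
def is_integer(x): return isinstance(x, int)

# Declared implications: each key predicate implies the predicates in its set.
IMPLIES = {is_integer: {is_number}}

def more_specific(p, q):
    """p is more specific than q if p is declared to imply q."""
    return q in IMPLIES.get(p, set())

class Generic:
    """A generic function dispatching on predicates, not classes."""
    def __init__(self):
        self.methods = []  # list of (predicate, implementation)

    def method(self, pred):
        def register(fn):
            self.methods.append((pred, fn))
            return fn
        return register

    def __call__(self, x):
        applicable = [(p, f) for p, f in self.methods if p(x)]
        if not applicable:
            raise TypeError("no applicable method")
        # Most specific wins: a method whose predicate implies every
        # other applicable predicate overrides them.
        for p, f in applicable:
            if all(p is q or more_specific(p, q) for q, _ in applicable):
                return f(x)
        raise TypeError("ambiguous applicable methods")

describe = Generic()

@describe.method(is_number)
def _(x): return "some number"

@describe.method(is_integer)
def _(x): return "an integer"
```

Here `describe(3)` picks the integer method even though both predicates match, because of the declared implication, while `describe(3.5)` falls back to the number method.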

It's now 6am. Do I try and go to sleep, or try and last the day out? Hmmm...


Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales