Category: Computing

IPv6 versus NAT (by )

I was enthusiastic about IPv6 when I first read of it, in the late 1990s. Mainly, I liked the autoconfiguration, and the inbuilt support for anycast and multicast, which are used to great effect: there is a standard IPv6 address for "my nearest time server" and the like, which has various benefits.

However, it comes at a cost. It's a whole new Internet that has to be built alongside the existing one and a careful handover done with complex mechanisms to let them coexist transparently. And the better autoconfiguration of IPv6 isn't that useful in the presence of recent developments such as automatic IPv4 address assignment, mDNS for finding things, and of course, good old DHCP for managed networks.

And it's not working. More than a decade has passed, and IPv6 is still a toy. It's extra work to set up, and the IPv4/IPv6 migration mechanisms you need in order to still access the IPv4 Internet actually break existing stuff, mainly because the IPv6 side isn't being maintained well (so it often breaks without being noticed), and because hosts using those mechanisms will prefer IPv6 over IPv4 if it's advertised (as otherwise IPv6 would never get used, since almost everything that offers IPv6 also offers IPv4).

So there's little motivation for people to bother turning on IPv6 - it's more work, and it breaks your Internet access (or, if you're a service provider, unless you're careful it gives you an alternate way to access your site that is more work to maintain, yet breaks more often, because you won't be putting as much effort into maintaining it). This means that the critical feedback loop - people wanting IPv6 because there are good things that are only on IPv6 - will never kick in. It'd be stupid to try to be IPv6-only, but until useful things are IPv6-only, there's little incentive to even support IPv6 alongside IPv4.

Now, the main reason people say we should move to IPv6 is because of the IPv4 address space exhaustion. But there are other solutions.

The widespread one is Network Address and Port Translation (or "NAT" for short). Under NAT, an entire network has a single public IPv4 address, and the devices inside the network are assigned addresses from a special private range (which can be reused in every private network). Outgoing connections get their source address and port rewritten so they all appear to come from that one public address, and when the replies come back, they're mapped back to the private address of the actual device. This means an entire network (which could be an entire organisation with millions of PCs, or an entire ISP with millions of customers) can use just one public IP (or a few, if they need more ports to support all the connections at once).
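
Conceptually, the bookkeeping looks something like this (a hypothetical Python sketch, not any real NAT implementation; the addresses are documentation examples):

    # Hypothetical sketch of a NAT device's translation table.
    import itertools

    PUBLIC_IP = "203.0.113.5"            # the network's single public address (example value)
    _next_port = itertools.count(40000)  # pool of public-side ports to hand out

    # public port -> (private ip, private port) of the device that owns the connection
    nat_table = {}

    def translate_outgoing(private_ip, private_port):
        """Rewrite an outgoing packet's source: return the public (ip, port) to use."""
        public_port = next(_next_port)
        nat_table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def translate_incoming(public_port):
        """Map a reply arriving on a public port back to the private device, if known."""
        return nat_table.get(public_port)  # None means "no connection wanted this"

    # e.g. a PC at 192.168.1.10:51515 connects out...
    src = translate_outgoing("192.168.1.10", 51515)
    # ...and the reply to that public port finds its way back:
    assert translate_incoming(src[1]) == ("192.168.1.10", 51515)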

There are issues with this - the NAT device needs to remember which external ports are used by which connections, and it needs to keep track of whether those connections are still in use so it can re-use the ports. But if a device is switched off or unplugged or dies, it will never explicitly close its connections, so the NAT device has to discard connections that just aren't used for a long time, assuming the owner to have died. However, this means that long-lived connections that aren't used much tend to get killed. Since NAT is so widespread now, most apps that open those kinds of connections nowadays send "keep-alives" - empty messages whose only job is to keep the connection alive so the NAT device doesn't forget it.
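
For instance, here's roughly how an application can ask the OS to do this at the TCP level (a Python sketch; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT knobs are Linux-specific, and example.org is just a stand-in server):

    import socket

    # Sketch: enable TCP keepalives on a long-lived, mostly-idle connection so the
    # NAT device keeps seeing traffic on it. The TCP_* constants are Linux-specific.
    sock = socket.create_connection(("example.org", 80))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # turn keepalives on
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)  # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30) # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)    # failed probes before giving up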

And it also means that devices behind NAT can't accept incoming connections; the NAT device only lets outgoing connections out and remembers the return path for replies - if an incoming connection arrives, it has no way of knowing which device "wants" it unless it's been specifically configured with a "port forward". Standards like UPnP exist to allow devices to find their nearest NAT router and ask for a port forward to be set up, but they suck for various reasons I shan't elaborate right now.
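
For the curious, the "find your nearest NAT router" half of UPnP is just a multicast HTTP-ish request over UDP, known as SSDP; a rough Python sketch of that discovery step (the actual port-mapping request is a further SOAP exchange, not shown here):

    import socket

    # Sketch: SSDP discovery of a UPnP Internet Gateway Device on the local network.
    MSEARCH = (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: 2\r\n"
        "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1\r\n"
        "\r\n"
    ).encode("ascii")

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(3)
    s.sendto(MSEARCH, ("239.255.255.250", 1900))
    try:
        while True:
            data, addr = s.recvfrom(65507)
            print(addr, data.split(b"\r\n", 1)[0])  # each responding gateway sends an HTTP-style reply
    except socket.timeout:
        pass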

This isn't a great issue, though. As a laptop user, I am resigned to being behind NAT most of the time. Almost everything I do from my laptop is based around connecting out to remote servers, and for the exceptions, I have an N2N VPN that lets my peers connect to me via an encrypted IP-level relay server. My long-lived SSH connections have keepalives turned on. It works out OK in practice.

However, I think it could easily be improved...

Before NAT became popular, the standard way of doing the same thing was via a SOCKS5 proxy. This worked much like NAT - you'd have a network using private addresses, and a single border device on that network that also had an Internet connection with a public IP. The border device ran some software - the SOCKS5 proxy.

When applications on devices inside the network wanted to connect to somewhere outside of the local network, rather than trying to reach it directly, they'd instead connect to the SOCKS5 proxy. Over that connection they'd send a request for the connection to be forwarded on. The SOCKS5 proxy would then open a connection, from its public IP address, to the destination server. It would then forward traffic between the two halves of the connection, making the device's connection to the SOCKS5 server in effect be a connection to the remote server - and back again in the opposite direction.
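
The wire protocol for that exchange is tiny; here's a rough Python sketch of the client's side, based on RFC 1928 (no authentication, connecting by hostname; 192.0.2.1 is a made-up proxy address, and a real client would read replies more carefully):

    import socket, struct

    def socks5_connect(proxy_host, proxy_port, dest_host, dest_port):
        """Minimal sketch of a SOCKS5 CONNECT (RFC 1928): no auth, hostname addressing."""
        s = socket.create_connection((proxy_host, proxy_port))
        s.sendall(b"\x05\x01\x00")                  # version 5, offering one auth method: "none"
        ver, method = s.recv(2)
        assert ver == 5 and method == 0, "proxy insists on authentication"
        name = dest_host.encode()
        s.sendall(b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", dest_port))
        reply = s.recv(4)                           # VER, REP, RSV, ATYP
        assert reply[1] == 0, "proxy refused the connection"
        if reply[3] == 1:                           # skip the bound-address field: IPv4...
            s.recv(4 + 2)
        elif reply[3] == 4:                         # ...IPv6...
            s.recv(16 + 2)
        else:                                       # ...or a domain name with a length byte
            s.recv(s.recv(1)[0] + 2)
        return s                                    # reads and writes now reach dest_host:dest_port

    # e.g., via a hypothetical gateway at 192.0.2.1:
    # conn = socks5_connect("192.0.2.1", 1080, "example.org", 80)
    # conn.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")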

So it basically did the job of NAT, except that it required the devices to know about SOCKS5, and to know where the SOCKS5 server was. NAT won, as it was transparent: the NAT box just pretended to be a router offering access to the Internet (the "default route" you have to put in when manually configuring a network, or configured automatically via DHCP or PPP). SOCKS5 didn't really require you to modify the application (although many applications did add support for SOCKS5), as it was possible to write a "socksify" tool that pretended to be the OS's normal interface to the network (the "sockets API"), but which actually made connections via SOCKS where applicable.

But SOCKS5 doesn't have NAT's problems with keepalives. And it has a big advantage over NAT - the SOCKS5 protocol lets a client request an incoming connection, in which case the SOCKS5 server opens a port on its public side, reports that port's address back to the app, and then notifies it when the connection is taken up. It's a bit limited, as it only lets a single connection in (while a proper listening socket accepts multiple connections).
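
For reference, here's a rough sketch of that incoming-connection request (the BIND command in RFC 1928); it assumes the proxy reports IPv4 addresses in its replies, and skips careful error handling:

    import socket, struct

    def socks5_bind(proxy_host, proxy_port):
        """Sketch of SOCKS5 BIND (RFC 1928): ask the proxy to accept ONE incoming connection."""
        s = socket.create_connection((proxy_host, proxy_port))
        s.sendall(b"\x05\x01\x00")                         # no-authentication handshake, as before
        assert s.recv(2) == b"\x05\x00"
        # BIND request; the address/port fields hint at the expected peer, 0.0.0.0:0 = "anyone"
        s.sendall(b"\x05\x02\x00\x01" + socket.inet_aton("0.0.0.0") + struct.pack(">H", 0))
        first = s.recv(10)                                 # reply 1: where the proxy is now listening
        public_addr = socket.inet_ntoa(first[4:8])
        public_port = struct.unpack(">H", first[8:10])[0]
        print("tell your peer to connect to %s:%d" % (public_addr, public_port))
        second = s.recv(10)                                # reply 2: sent once the peer has connected
        assert second[1] == 0
        return s                                           # the connection is now relayed to the peer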

Also, SOCKS5 actually makes it easier to adopt IPv6. When an outgoing connection is requested, the app can specify an IPv4 address, an IPv6 address, or a hostname - and in the latter case, the SOCKS5 server could in principle find an IPv6 server at that hostname (with an AAAA record) and open an IPv6 connection, even though the application has connected to the SOCKS5 server via IPv4 - or vice versa, if the client connects to it via IPv6.
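
As a sketch of what a SOCKS5 server could do with a hostname request (an illustration of the idea, not any particular server's code): resolve the name and try any IPv6 addresses first, falling back to IPv4:

    import socket

    def open_outbound(hostname, port):
        """Sketch of a proxy's outbound policy: prefer IPv6 when the name resolves to one."""
        candidates = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
        candidates.sort(key=lambda c: 0 if c[0] == socket.AF_INET6 else 1)  # AAAA results first
        for family, socktype, proto, _canonname, sockaddr in candidates:
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(sockaddr)
                return sock          # an IPv6 connection if one worked, else IPv4
            except OSError:
                continue
        raise OSError("no usable address for %s" % hostname)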

And unlike NAT, SOCKS5 has a login phase: each connection can supply a username and password to identify the user. Under NAT, all you have is the private IP address of the device. This means that SOCKS5 servers can give better connections to more important users, and better log who did what (where that matters).
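
That login phase is a little sub-protocol of its own (RFC 1929), run right after the proxy agrees to username/password authentication; roughly:

    import socket

    def socks5_login(sock, username, password):
        """Sketch of SOCKS5 username/password authentication (RFC 1929)."""
        sock.sendall(b"\x05\x01\x02")            # offer exactly one auth method: username/password (2)
        _ver, method = sock.recv(2)
        assert method == 2, "proxy doesn't want username/password auth"
        user, pwd = username.encode(), password.encode()
        sock.sendall(b"\x01" + bytes([len(user)]) + user + bytes([len(pwd)]) + pwd)
        _ver, status = sock.recv(2)
        assert status == 0, "login rejected"     # 0x00 = success, anything else = failure
        # ...normal CONNECT/BIND requests follow on the same connection

    # usage sketch, against a hypothetical authenticated gateway:
    # s = socket.create_connection(("192.0.2.1", 1080)); socks5_login(s, "alice", "secret")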

So perhaps it's time for a SOCKS5 comeback. The protocol has been extended to support IPv6, but I think it could do with a bit more sprucing up to make it more powerful and modern. Here's what I'd suggest:

  • Proper listening socket support. It should be possible to request a listening socket and, if your request is accepted, then be sent a message every time a client connects; but rather than your connection then becoming the relayed client connection, the accept message just gives you a magic token identifying the connection. You can then open another connection to the SOCKS5 server and, rather than requesting an outgoing connection, offer up the magic token to accept the incoming connection and have it relayed. Or just reply on the original listening-socket connection to reject the request.

  • Listening sockets should be able to request a specific port to listen on, along with a flag specifying whether they're happy to accept a different port, or should just give up if they can't have the one they requested. Such a request might be rejected because the port is already in use, or because certain listening ports are reserved for specific users.

  • Better UDP support. The current UDP support in SOCKS5 amounts to asking the SOCKS5 server to set up a UDP relay. All your UDP traffic must then be sent to an IP and port the SOCKS5 server gives in its reply, with a header added to each datagram saying where it should go; this eats up some of the limited space in a UDP packet. It'd be nice if UDP packets could instead be tunnelled over the SOCKS5 connection itself, as TCP connections are, with suitable framing (one possible framing is sketched after this list).

  • Ubiquitous support for SOCKS5-over-SSL in clients and servers. Then it can be used as a simple VPN - offer a SOCKS5 server on the public side of your SOCKS5 relay, too, that lets authenticated users who are outside the office connect in to access servers on the private network. Or just trust your internal network less, since some SOCKS5 connections are more trustworthy than others (they can be authenticated to a specific user). For this use, it'd be nice if a SOCKS5 server could announce (when it's connected to) what addresses it provides access to - for a normal Internet gateway, it'd reply "all addresses"; for a VPN, it'd just report the private IP range.

  • Better support in devices. SOCKS5 should be a standard feature of the sockets library, not something you need to hack in under an app. SOCKS5 should be in smartphones and tablet computers. There should be the option to specify a list of SOCKS5 servers as well as a default route (they can be connected to and asked what address ranges they provide, and connections made via them accordingly). DHCP servers should announce SOCKS5 proxies (there doesn't seem to be a DHCP option for SOCKS5 proxies; am I looking in the right place?).
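
As a sketch of the UDP-tunnelling idea from the list above - entirely hypothetical, a guess at how a "SOCKS6" framing might look rather than anything any server implements today - one could reuse RFC 1928's existing UDP request header, but carry each datagram over the TCP connection behind a two-byte length prefix:

    import struct

    def frame_datagram(dest_host, dest_port, payload):
        """Hypothetical framing: RFC 1928's UDP header, length-prefixed onto the TCP stream."""
        name = dest_host.encode()
        header = (b"\x00\x00"                              # RSV
                  + b"\x00"                                # FRAG (no fragmentation)
                  + b"\x03" + bytes([len(name)]) + name    # ATYP 3 = domain name
                  + struct.pack(">H", dest_port))
        body = header + payload
        return struct.pack(">H", len(body)) + body         # 2-byte big-endian length prefix

    def read_datagram(stream):
        """Inverse: pull one framed datagram back off a file-like TCP stream."""
        (length,) = struct.unpack(">H", stream.read(2))
        return stream.read(length)

    # e.g. frame_datagram("time.example.org", 123, ntp_request_bytes)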

I think that extending SOCKS5 in the above way (to make SOCKS6!), then getting a good implementation of it open-sourced under a BSD license and thence into device OSes as standard, would be a LOT less work than migrating to IPv6, while also offering an improvement over IPv4 with NAT - and yet able to coexist happily with IPv4+NAT, as non-SOCKS devices can still be NATed via the default route.

So, how about it? If somebody volunteers to write a decent "SOCKS Next Generation" server (using nice scalable event-driven IO and all that) and client, I'll volunteer to help you as best I can, and write up a proper draft RFC for the enhanced protocol. If we can get the server into consumer and small-office ADSL routers (whose manufacturers seem to be quite open to adding extra features to the brochures), along with having them advertise themselves as such via a DHCP option that clients listen to, it can become ubiquitous and useful; then we can work on getting ISPs to support it (making sure our SOCKS server is happy to pass connections on to an upstream SOCKS server, for when we are proxying to an ISP's own private network). I reckon that'd be a few weeks' development time, at most; then it's all about the lobbying to get it accepted into OSes and routers.

Fame and glory await!

Wearable computers (by )

One of my too many projects is to make a wearable computer.

Lots of people are interested in making wearables, but nobody's yet come up with one that hits the "sweet spot" of decent functionality while being unobtrusive enough not to be a pain.

Well, I'm a nerd, so I'm far happier to put up with obtrusiveness to get my pervasive cognitive-assistance fix... I've been fascinated by pervasive computers since I was a kid; I read about Steve Roberts' recumbent bicycle as a youngster, as well as plenty of fiction about brain implants and the like.

Read more »

I had a Dream (by )

Actually I've been having lots of very vivid dreams, which doesn't bode well for sugar levels, but I haven't got the results of the glucose tests back yet - by this time in Jean's pregnancy I had gestational diabetes. But then I often have vivid dreams - many of them are what is termed lucid, and I have some sort of control over them. Part of this is the fact that when pain levels are high I don't actually go to sleep properly, so I am in a sort of resting trance. They have benefits, but it makes it harder for your body to repair itself from injuries - this isn't mumbo jumbo, this was straight out of the doctor's mouth at the pain clinic when they attempted to medicate my sleep when we lived back in Essex.

Anyway, I thought I really needed to share last night's dream. It starts with me trying to get to a PhD interview at Reading University - the PhD is about modelling other solar systems and exoplanets etc... I have no idea if Reading does this sort of thing, but it was Reading in the dream. The only issue was that Alaric was running late, so instead of having a nice sedate drive to the interview we had to hijack a state-of-the-art plane from the local army-base-type place.

As we took off I noticed the tail wasn't actually attached to the plane; the whole thing was segments held together a bit like a kite - the tail itself looked remarkably like a crayfish's, or something lobstery, only in shiny metal.

We get to the university and I am late - I haven't read the notes on what the thing is actually about, but they agree to see me anyway as there is only one other candidate - an undergraduate astrophysics girl. I then proceed to think on the spot and tell them that they need to reassess everything. I tell them that what they need is a lovely large database with a nice archive mode - this is sort of a giant wiki with the ability to pull meta-structures from the data, such as phase diagrams. You see, I don't just want to make a database of the planets and the physics - why not add all of mineralogy and astronomy?

Why not have layers where people can choose the data to run their simulations and the like? In the dream I'm in a pale yellow room with aging equipment and they are like - we don't have the money to pay the programmers and our stuff never quite works.

Of course it doesn't, I crow - you're not programmers and you just use whichever language you happen to have picked up. Then I tell them not to worry - I'll make the database - I'll make the initial system and we can have people adding their own stuff!

It would be massive and everyone would argue about what got added, which is where the archive system would come in - they could just take a previous theory etc... With this we could easily extrapolate the composition of planets around other stars. It could have the ability to swap between notations, so no more issues over what a Chemist calls a metal compared to a Geologist compared to an Astrophysicist etc...

I have to say at this point there was dissent among the interviewers - there is of course the problem of who the data belongs to, and would we have to pay and keep it secret - that would hamstring the project - it would kill a lot of their grants dead etc... alter the peer review system. Subjects that I have touched on before whilst awake!

But then I point out that it would have commercial applications and launch into a whole thing about the gaming industry being a growth sector and how you could build games engines on this thing! (Again, this is something we are sort of doing anyway in the real world, but not with real physics.)

I point out that scifi authors and the like would love to get their mitts on such a database too - for it would make world building a lot simpler and you could make smaller custom ones.

They were still like: but we need someone to make all this, and we just wanted a data monkey to enter numbers into spreadsheets. I laugh and say I can build it for them (I can't, but I'm planning on using an advanced version of Alaric's Ugarit).

Anyway it ended with me negotiating to mainly work from home and stuff.

Part of me is now going - this needs to become real! We need to have this database - an extension of an idea I had a few years ago! And Alaric was like, that is exactly the sort of thing the archive mode of Ugarit would be good for. The arrogance of me in the dream was a surprise though. Besides, last time I had a PhD interview I told the person their project was recording the wrong things - which didn't go down too well :/ And this dream is just that writ large - plus there is no way I am going to be doing anything academic for a while either - but it was a cool dream nonetheless!

Accurate budgeting (by )

If you are paid monthly, then it makes sense to work on a monthly budget. Many expenses are paid monthly, so this works out quite nicely.

However, some things are paid quarterly, or even yearly. If those things are big enough that they can't just disappear into the noise of your monthly budget, you need to budget for a share of them each month, and put that money aside somewhere to save up for the annual costs.

And some things are paid weekly, or (worse) every four weeks. We used to have a self-storage room that cost us about two hundred pounds every four weeks, which was a royal pain as sometimes this meant we paid £200 a month, and sometimes £400. It was hard to lose THAT in the noise.
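
For what it's worth, the sums to smooth that out are simple - a four-weekly bill recurs about thirteen times a year, so the fair monthly share is a bit more than the headline figure. A quick Python sketch (the numbers are just examples):

    # Sketch: turn a cost paid every N days into an average monthly budget figure.
    def monthly_share(amount, period_days):
        per_year = amount * 365.25 / period_days
        return per_year / 12

    print(round(monthly_share(200, 28), 2))      # a £200-every-four-weeks bill ≈ £217.41 a month
    print(round(monthly_share(120, 365.25), 2))  # a £120 annual bill = £10.00 a month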

So, I decided to write some software to work all this out for me.

Read more »

Portable computers (by )

I like the iPad hardware; shame about the crippled iOS software. Similarly, I like the Kindle hardware - shame about the restrictive DRM system.

But the biggest shame is that I'm actually tempted to own three or more different computing devices, purely because of different situational specialisations in the hardware. A smartphone (or, more ideally, a wearable computer) for real-time pervasive tasks. A tablet device for portably viewing stuff on (whether it's ideally electronic ink or a nice colour LCD really depends on what I'm viewing). A laptop for actually working on... ideally a small one for portability and a larger one for power (both in terms of CPU+RAM and in terms of more screen real estate). Plus remote servers that store a significant part of my data, since it needs to be available to others in some way (this blog, my email, etc).

But this sucks - there's a lot of duplication of hardware (mass storage and CPU power) there, when I'll only really be using one device at a time. And there's an annoying duplication of data that needs to be "synched" between things. And an annoying variation of user interface, as different devices often have very different models of storage management.

Here's what I'd love to have:

In my pocket sits a smartphone. It might have a Blackberry-style keyboard or an iPhone-style touchscreen, depending on taste. It has a small computer to run its apps, and a battery, and the usual Bluetooth/USB/GSM/etc interfaces. And it has a wodge of Flash to store my stuff.

Maybe it has enough Flash to store all my stuff. Maybe it doesn't, in which case I might also carry a featureless cuboid that contains batteries and a lot of flash, or even a hard disk, or some combination of the two. In that case, my phone is slaved to it - using its own internal flash to cache resources fetched from the storage box, and accessing it via Bluetooth or some more advanced personal-area radio network; but when it's plugged into the storage box via USB, it can access it more speedily, and suckle power from the larger battery.

Perhaps I'm lucky enough to also possess a head-up display (which also functions as a headset for audio), and/or a chord keyer, that talk to the smartphone via radio or wired links. They're just extra I/O devices that plug into it, though.

But maybe I own a tablet computer, or an electronic paper device. They might have their own internal storage, but I'd slave them to my phone, or direct to my storage box (slaving to the phone, if the phone is slaved to the storage box, just slaves them to the storage box, in effect), and use their local flash as a cache. Also, if I don't like their file-browsing interfaces, I can just use the phone to select a file and "send" it to any willing device reachable through the slaving relationships, which causes it to access the file (from the original source) and open it up. The file isn't actually "sent", it just seems that way (except faster, and with a single central copy if I start editing it).

But what of the laptop? Or my big, powerful, desktop machine? Let them be slaved to my phone or my storage box, too. Take my home directory and installed apps from my central storage. Then there's no synching of address books and all that. There might be files in my phone that only laptop-scale software can manage, which I then can't open on the go, but I can at least use my portable devices to email a copy to a colleague who needs one, and to open simpler types of files that happen to be associated with the same project. They can talk to my storage box via a wireless link within range, or I can hook my storage box, phone, etc. up to it via cables for high-speed communications and power distribution.

What will it take to do this? Some cleverness to negotiate which device should supply power to which when they're joined by a cable (easy if one of them has access to mains power; trickier to decide if they're both battery devices - perhaps the default should be not to share power at all unless asked to by the user, or unless the battery of one device is flat, turning it into a non-self-powered device). But it's mainly down to standardising file systems and protocols. Working at the "USB mass storage device" level is a bad idea, as only one device can have such a filesystem mounted at a time. It needs to be more like NFS. And, mainly, devices and desktop OSes need to let go of managing their own filesystems and learn to use a shared standard for home directory layout - which will NEVER happen for legacy systems, but at least they can mount your mobile data store as "My Documents" or something like that, and perhaps make some effort to invisibly sync between their own personal-information databases and whatever's in there.

There'll still be some need for syncing - my email and blog, and shared work stuff like Git repos and group calendars, have to be on servers somewhere. And there's a good reason to mirror my portable storage to somewhere else whenever I can, as a backup. But the less syncing, the better!
