The Long Game (by alaric)
I've written before about my plethora of projects and how I'm trying to spend more time on them, and to focus on ones that can produce immediate rewards (such as Ugarit) at the cost of longer-term ones (such as ARGON).
However, I have projects I can't even start on without access to massive resources. I have them Far Out Beyond The Back Burner, just in case I gain the resources required to start them within my lifetime, but without any great expectation of doing so.
I'm listing them in an approximate order, based on which ones I think would be easiest to start, and which would in turn make later ones more approachable.
I'm hoping for a proper Drexlerian revolution of molecular manufacturing. A post-scarcity economy of cheap diamond and home production of anything you can download or design a plan for, as long as it doesn't require exotic atoms (which only really rules out nuclear devices; no big deal).
Post-scarcity is always a relative term, however. Sure, we'd be in a world where we could use solar power to directly convert our own waste products back into all the goods we currently hunger for; where a small patch of land gives you enough space to plant a tiny nanotech seed (one that anybody else on Earth can make for you at practically zero cost), grow yourself a solar array, and then use its energy to harvest raw materials from the ground and air to build yourself a house providing a level of material luxury beyond what even the richest humans alive right now can have. But we'd still need some kind of economy to buy land in the first place, and to buy skills and services, from designing things you can tell your home to build for you to entertainment.
I hope that my skills as a designer of intricate systems would be held in high regard in such a world, so that I wouldn't need to spend too much time working. As a molecular assembly unit can just be fed a design and sit making it overnight, I wouldn't need to spend my time laboriously making complex machinery; I'd want to focus as much as possible on designing the machinery and software for my next steps.
However, although I'll still need money to buy services, I have some plans that would require large amounts of material, and that might be expensive as the human population rises. So I'd focus my available resources on building one of those tiny nanotechnological seeds and firing it into space, to start converting asteroids into nano-replicators, under control from a nice radio dish I'd command my house to grow. I wouldn't be the only person to think of this, and I could expect territorial claims to start appearing around the solar system pretty sharpish, so it would be good to start quickly.
I might stay living on Earth, or try to build a large spacecraft and relocate to orbit if that's practical. However, my physical location will be largely irrelevant, and more so in later stages of the plan.
Having to work so that I can hire the services of humans to fill in gaps in my design skills, or just to save me time so I can progress my plans faster, is a bottleneck. It's also a risk, as the rest of the human race may react irrationally to the emergence of a post-scarcity world and start wiping itself out. One way out (which is rather speculative, as I don't know if it would work) would be to turn all that asteroid mass I'm converting with my space probes into solar-powered computers and set them the task of evolving intelligence in a simulated neural network or rule engine. Rather than doing lots of hard thinking about the nature of intelligence, I'd brute-force it - a massively parallel genetic algorithm trying to find a configuration of the simulation that can answer questions I feed into it. I'd train it on a mixture of my own questions and exercises from textbooks in fields of interest to me. With a large enough training set, I should be able to evolve a general function from questions posed in English (with access to the background knowledge implied by the kinds of textbooks I trained it on) to answers in English. If it worked, I would have an artificial intelligence, without an artificial sentience.
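A massively parallel genetic algorithm of that kind can be sketched in miniature. This toy version just evolves bitstrings; the fitness function is a stand-in (a real system would score candidate networks on how well they answer training questions), and all names are illustrative:

```python
import random

def evolve(fitness, genome_len=32, pop_size=100, generations=200):
    """Toy genetic algorithm: truncation selection, single-point
    crossover, occasional mutation. 'fitness' stands in for scoring
    how well a candidate configuration answers training questions."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)   # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.05:           # rare point mutation
                i = random.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Stand-in fitness: count of 1-bits in the genome.
best = evolve(fitness=sum)
```

The same skeleton scales to the asteroid-computer scenario only in spirit: the expensive part there is evaluating fitness, which is why the plan calls for so much parallel hardware.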
That difference is quite profound. Artificial sentience opens up ethical questions: should it have the rights of a person? But I have no need to create a mind in the image of my own, with desires and awareness of time and sensory capacity and a continuity of consciousness based on memory of past events. All I need is a function from question to answer, that can be embedded into software that needs it. I can ask it questions beyond the scope of its training (if I manage to evolve it to be sufficiently general) by including appropriate textbook material in the question.
Firstly, I could use it to solve problems simply by posing them as questions. But I could also use it for intelligent automation: systems could react to events by feeding the nature of the event, along with background information about the situation and relevant history, into a question about the best course of action to follow to meet some defined goal.
Weak life extension
I may be lucky enough to get this far within my lifespan as it stands, but I don't want to push that luck any further, so I will have been learning (or assembling reference material for my AI) enough about human biology to cure ailments, and to decelerate or reverse the process of aging, in case I need a bit more time to complete the next stage.
We think by exchanging pulses between the neurons in our brains. A neuron is a cell that, beyond the normal structures required of a functioning cell, contains one or more long thread-like structures called axons, which enable it to connect to other neurons elsewhere in the brain; the connection points are called synapses. We're still a bit vague on exactly what happens inside the synapses; we have an idea of their properties, but we can't really test it well enough to see if it's complete. Hopefully nanotechnology will let us put probes inside working neurons and examine them better.
But fully mapping the function of the synapse can come later. I'll start with a lower-hanging fruit: mapping the functioning of the axon.
Signals travel through axons at about eight metres per second. Signals travel through copper cable at about two hundred million metres per second. If I could inject nanomachines into my cranium that would trace out the neurons, finding the synapses and the axons that join them together into a network, and bypass the axons with insulated copper cables carrying electronic signals directly between the synapses, I would significantly increase the speed at which I thought.
The danger would be timing dependencies in the brain. If a neuron fires, sending a pulse down a long axon while the same pulse also travels via shorter axons through one or more extra synaptic junctions, then changing the speed of propagation down the axons without changing the processing speed of the synapses would alter the relative timings with which the effects of the initial firing arrive at the destination. So I'd start by having my electronic bypasses insert a delay to simulate the original axons exactly, then try selectively decreasing it in various parts of my brain to see what happened (with an automatic return to normal timings after fifteen seconds, like when you change the resolution on your display and the OS isn't sure whether you can actually see the dialog asking if the result is OK).
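That revert-on-timeout safeguard is the same pattern an OS uses for display-mode changes. A minimal sketch, with all names and the shortened safety window purely illustrative:

```python
import threading
import time
import types

class DelayExperiment:
    """Apply a reduced axon-bypass delay, reverting automatically after
    a timeout unless the experimenter confirms the new setting is safe.
    All names and timings here are illustrative."""
    def __init__(self, region, revert_after=15.0):
        self.region = region
        self.saved_delay = region.delay
        self.timer = threading.Timer(revert_after, self.revert)

    def apply(self, new_delay):
        self.region.delay = new_delay
        self.timer.start()            # countdown to automatic rollback

    def confirm(self):
        self.timer.cancel()           # still sane: keep the new delay

    def revert(self):
        self.region.delay = self.saved_delay

# Demo on a stand-in brain region, with a much shortened window:
region = types.SimpleNamespace(delay=1.0)
experiment = DelayExperiment(region, revert_after=0.1)
experiment.apply(0.2)                 # try a faster setting...
time.sleep(0.3)                       # ...fail to confirm in time...
print(region.delay)                   # → 1.0: original timings restored
```

The point of the design is that the safe state is restored by default; confirming the change is the action that requires a working mind.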
In the worst case, I'd have to take time to study the synapse so I could model it in an electronic system and thus create a timing-perfect electronic model of my brain, but that would take longer. Such a model is necessary for later steps in the plan anyway, but it would be nice to reap the benefits of accelerated consciousness sooner, in order to make better use of my time.
It's hard to say how fast I could make myself go. The hard limit (if the response of the synapses was irrelevant to the speed of thought, and axon delays were the limiting factor) would be that I would think two hundred million divided by eight, which is twenty five million, times as fast. At that speed, anything that wasn't moving at a good fraction of the speed of light would appear immobile to me. I would seem to be frozen, stuck in an immobile body, and I'd probably go mad from boredom and claustrophobic panic. So I wouldn't do that. Since I'd already tapped all my axons, I'd divert my peripheral nervous system to a virtual body in a 3D computer simulation. Then I could do all the thinking and planning and designing and reading and writing I wanted to. Of course, fetching stuff from the Internet would be a pain; if I sent out an HTTP request to Wikipedia for some information, it would take a long subjective time for the response to come back. Likewise with communicating with friends by IRC and email.
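The back-of-envelope arithmetic above can be checked directly (8 m/s is the slow end of real axon conduction velocities, which run up to roughly 100 m/s in myelinated fibres):

```python
axon_speed = 8                # m/s, the slow figure used above
copper_speed = 200_000_000    # m/s, roughly 2/3 of lightspeed in cable
speedup = copper_speed // axon_speed
print(speedup)                # → 25000000, the hard upper bound

# At that rate, a full subjective year passes in about 1.26 real seconds:
real_seconds_per_subjective_year = 365.25 * 24 * 3600 / speedup
print(round(real_seconds_per_subjective_year, 2))  # → 1.26
```

Which makes the boredom problem vivid: a one-second pause in the outside world would feel like most of a subjective year.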
But even if removing axon delays only made my thoughts happen ten or a hundred times as fast, due to synapse delays being significant, I'd still need to go into a virtual world to live without the slowness of my physical body trapping me. And unless it was only a few times as fast as normal living, I would find myself spending a lot of time waiting for the world outside to react to my latest HTTP request or other action.
So I would probably program my control software to make my synaptic delays infinite - suspending neural firing - until something interesting happened (or a timeout occurred; I'd want to wake up at least once a millisecond just to see what was happening through my real eyes, in case there was an explosion in progress or something else I needed to attend to).
I'd probably want to automate management of my body. Walking by taking note of my inner ear and eyes a hundred times a second and deciding what impulses to send to my muscles would be hard work; I'd need to automate it to the level of choosing a direction of motion and a desired body position and facial expression and letting the computer walk for me, checking up on it ten times a second or so. I'd want to be able to tell my mouth to speak a sentence and leave it to get on with it, and whenever I checked up on my body I'd replay the past few seconds of recorded audio and video so that I could discern speech directed at me.
Driving my physical body might take only a tiny fraction of my time. So why not drive several? I could control heaps of robot bodies at the same time, by just examining the state of each in turn, via radio links. I could be an entire team of robot ninjas infiltrating a building at the same time. That would be awesome.
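Time-slicing attention across a squad of bodies is just round-robin polling. A sketch, with hypothetical classes standing in for radio-linked robots:

```python
import time

class Body:
    """Stand-in for one remote robot body reachable over a radio link."""
    def __init__(self, name):
        self.name = name
        self.goal = None

    def sense(self):
        return {"body": self.name}   # would return camera/IMU state

    def act(self, goal):
        self.goal = goal             # would transmit a high-level goal

def run_bodies(bodies, decide, ticks, period=0.1):
    """Round-robin supervisor: visit each body every tick (~10 Hz in
    the scenario above), read its state, hand it a new goal."""
    for _ in range(ticks):
        for body in bodies:
            state = body.sense()
            body.act(decide(state))
        time.sleep(period)

squad = [Body(f"ninja-{i}") for i in range(4)]
run_bodies(squad, decide=lambda state: "advance", ticks=3, period=0.0)
```

The accelerated mind plays the role of `decide`; everything below that level is delegated to the bodies' own controllers.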
However, interacting with computers would be a pain. As much faster as my brain was, computers would be correspondingly slower. My 3D virtual world would need to be quite basic, even with a massively parallel nanotech computer rendering it and only needing to render my foveal region in full resolution, or it just wouldn't be able to generate frames fast enough for me. Waiting milliseconds for a web browser to actually render a page into an image would be intolerable. I would need to run very simple software on very fast processors if I wanted interactive responses.
But either way, my main limiting factor - time to design things - is now significantly relieved.
But the logical next step is to get rid of those synapses, and entirely replace my brain with an electronic version. This would gain me the rest of the speed improvement available. Also, an electronic synapse would probably be smaller than the real thing, and it wouldn't need the body of the neural cell any more, so I could make my entire brain much smaller, thereby gaining an extra few times speed by simply reducing the distances those two-hundred-million-metres-per-second electrical impulses have to travel.
But being a fully digital simulation would have other benefits. My neural interconnection map and synaptic states would be a string of bits from which a new electronic brain could be built and initialised. This could be used to back me up in case of the physical destruction of my brain and body. It could also be used to work around the annoying consequences of communications delays being so notable when living at twenty five million times normal speed; I could transmit my brain state into deep space and have a brain constructed there in order to get hands-on with some process, then send it back afterwards (or resume the old version still at home if the transferred copy is lost or corrupted somehow). If I built a solar antimatter refinery and made enough antimatter to send a nanoseed probe to Alpha Centauri at nearly light-speed (which might take a decade or so), and had it build an installation there, I could even visit it at the cost of four years of unconsciousness while in transit each way. But that's nothing compared to the costs and risks of sending my physical brain there and back.
In principle I could duplicate myself and run multiple instances of myself in parallel, but I don't think I'd need to - with accelerated consciousness, I don't think that thinking time would be my bottleneck any more. A reason to run clones of myself at great distances in order to have more real-time interaction with events over a large area might develop, but I don't know of any reason why I'd need to do that, offhand.
So, assuming I've managed to not kill myself by tinkering with my brain, and I've not run into competition with other humans and been imprisoned or destroyed by them, I'm now a disembodied intelligence able to simultaneously operate bodies anywhere within a few tens of light-milliseconds of wherever I'm currently sentient from, able to migrate between brains at the speed of light, and fairly immortal due to having backup copies of myself that activate if the "currently live me" stops checking in every millisecond. Arguably, I will have crossed some kind of technological singularity, as tinkering with my own cognition has made me able to out-think any normal human being (or team thereof), purely by being able to research and plan my actions in great detail - in the time it takes a visual signal to travel from the eye to the brain of a normal human being. But the post-singularity me would still be perfectly comprehensible to a normal human and vice versa; it is the quantity of my thought which will improve, not the quality.
Perhaps I will have had to leave the solar system of my birth by now, in order to keep my freedom from other humans, or whatever becomes of governments and corporations in a post-scarcity world, trying to lay claim to resources I need for my plans. But ideally I'll still be in touch with a happy brotherhood of humans rather than striking out alone or with a small circle of like-minded family and friends.
However, this next stage will probably have to happen in other star systems. Even if the rest of the human race isn't particularly hungry for energy and I could have the entire output of the Sun to myself, that might not be sufficient; and if my experiments failed, I might destroy the solar system around me.
Basically, I want to implement time loop logic. There are a number of ways that might allow us to send a single bit of information back in time, and that's all I need. Perhaps I can string a cable (or send a photon) around a rapidly rotating singularity, or around a uranium atom spinning in an intense magnetic field, or through the centre of a ring singularity, in order to create a closed timelike curve. Or some trick involving quantum mechanics. I'll try them all, and any others I or my AI manage to come up with.
Now, being able to build a hypercomputer with time-loop logic, and being able to solve NP problems in polynomial time, would be pretty neat. But that's not the eventual goal. Rather than just implementing pure functions such as prime factorisation in the hypercomputer, I want to perform I/O. With side effects. From inside a time loop.
You see, the consistency principle which underlies time-loop logic can be justified in quantum mechanics; in the presence of a time loop, the wave function of a contradictory state cancels itself out and becomes zero because of the link between its past and future. This is used to ensure that the desired answer arrives out of the negative delay gate in the first place, by ensuring a contradiction if it doesn't.
But what if we have a sensor attached to the computer, and arrange to have a contradiction if the value of the sensor is not equal to a desired value? Situations where the physical system monitored by the sensor would fail to produce that value are contradictory, so the physical system's wave function cancels them out and we can only have the desired states.
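That consistency-based selection can be mimicked classically: enumerate the candidate histories and keep only the self-consistent ones. A toy model of the idea (the `evolve` function below is a contrived stand-in for a physical time loop, not real physics):

```python
def consistent_states(evolve, states):
    """In a time loop, only histories where the state sent into the
    past equals the state that later emerges are non-contradictory."""
    return [s for s in states if evolve(s) == s]

# Toy use: 'guess' a factor of n. evolve() leaves a correct guess
# alone and perturbs a wrong one, so only true factors survive as
# self-consistent states of the loop.
def factor_via_time_loop(n):
    def evolve(guess):
        return guess if n % guess == 0 else (guess % (n - 1)) + 1
    return consistent_states(evolve, range(2, n))

print(factor_via_time_loop(15))   # → [3, 5]
```

In the real thing, of course, nothing enumerates the states; the wave function of every inconsistent history simply cancels to zero.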
That gets interesting if the sensor is measuring the speed of light in a vacuum. What we have built is a "reality editor", and it grants its owner godlike powers.
Of course, the equipment is part of the time loop, so the physical system being measured changing is not the only possible non-contradictory outcome; there's also the possibility that your equipment might just fail. Since the quantum mechanical odds of your equipment failing are probably much higher than those of the speed of light changing, you will almost certainly get an equipment failure rather than destroying the universe by altering its fundamental constants and causing all the matter to collapse to a point.
So let's set our sights a little lower. How about moving on from nanotechnology to femtotechnology? Tinkering with the energy levels inside atomic nuclei is tricky, but if we can build a sensor to tell if we've managed it, we could use a time loop to force the hand of physics. We can work out the chance of quantum tunnelling producing the desired state by pure luck alone, and make sure that the chance of our equipment failing is below that - by duplicating it. Don't forget we have the matter and energy of entire star systems to hand. Make trillions of time loop devices with their own sensors, all observing the same system. Make it more likely for the system to enter the desired state than all the time loop devices failing together.
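The redundancy arithmetic is straightforward: with independent devices, a joint failure requires every circuit breaker to trip at once, so the device count grows only logarithmically with the improbability we want to force. The probabilities below are made up for illustration (real tunnelling odds would push the count far higher, hence the trillions of devices):

```python
import math

p_fail = 1e-3     # illustrative: chance any one device's breaker trips
p_event = 1e-40   # illustrative: chance of the desired tunnelling event

# All n devices must fail together: we need p_fail ** n < p_event,
# i.e. n > log(p_event) / log(p_fail).
n = math.ceil(math.log(p_event) / math.log(p_fail))
print(n)                          # → 14 devices at these made-up odds
print(p_fail ** n < p_event)      # → True: joint failure is now rarer
```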
So the time loop reality editor cannot provide complete omnipotence; it's limited by the probability of a complete system failure, and can only cause events which are more probable than its own failure, so it would be rated up to a certain improbability level (in a manner that sounds slightly familiar...). Indeed, in case of miscalculation of the probabilities or sheer bad luck causing a device failure rather than the desired event, it would be wise for each time loop unit to have a "circuit breaker" that is the most likely part to fail and can be simply reset, rather than risking more permanent, hard-to-diagnose, or violent failure modes of a device containing a significant amount of stored energy in one form or another.
An interesting possibility is using the reality editor not only to make things, but to design them. Rather than building a sensor that checks whether a working femtocomputer processing element has been created, create one that tests whether whatever is standing on the target platform is a fully working computer meeting certain design requirements, and see what appears. As the quantum-mechanical basis of the reality editor will tend to favour the most likely, and therefore generally simplest, solution, some interestingly optimal designs might result.
Perhaps the first thing to try and make is a more compact and powerful reality editor?
Oh, and negative delay gates will enable faster-than-light communications, so I can interact with my ever-expanding interstellar empire in real time now.
Hopefully, that will be enough to keep me occupied until the heat death of the universe starts to loom. At which point, I will hopefully have figured out how to:
Create new universes
Probably by tinkering with black holes or something, if not by turning as much of the mass in the Universe as possible into a giant reality editor. Either way, make a new universe with a new entropy gradient I can use to power my ongoing experiments.