Category: ARGON

AURUM

My recent thoughts about bitcoin reminded me of earlier thoughts I'd had about digital currency.

Cryptographic digital currency is a way of transferring value without trusted third parties being involved in every transaction, but within a closed domain, it's easier to go for a trusted party and cut out all the crypto maths. Which is why we have printer credits managed in a central database when we use a shared printer. We may use a digital currency to buy a credit, but once we have credits, we're happy for the owner of the printer to just store our balance in a database and decrement it whenever we print.

And within a company, complex processes are used to transfer money in and out of the company's actual bank account, but budgets within departments are usually allocated by just asking somebody to update a spreadsheet. Money moves within the company using easier, faster, simple methods than bank transfers, writing cheques and letting them clear, or exchanging cryptographic keys.

It's the same story for "ulimit" mechanisms in computer operating systems, and language-level sandboxes, that allocate budgets of things like CPU time and memory space to software running in a computer.

So, when I set out to design AURUM, the resource limit system for ARGON, I decided to make a unified abstraction across all of the above. A process has a budget, which contains arbitrary amounts of arbitrary resources, and it can subdivide that budget into sub-budgets.

That's just an accounting system, though. It needs to integrate with actual resource managers. For something like CPU time, for efficiency, the scheduler probably wants a nice simple machine word reserved for "jiffies in the budget" attached to a process context in a hardcoded way. So the AURUM system probably needs a handler for "run out of jiffies" that takes more from the actual AURUM budget and "prepays" them into the process context - and when the process' balance is requested, knows to ask what's been prepaid and not yet used, so it can report honestly. If the process is stopped, any remaining balance needs to be returned to the parent process' budget for re-allocation. And so on.
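As a minimal sketch of this accounting layer - the class names, resource names, and chunk-based prepay scheme here are all my own invention, not anything specified for AURUM - budgets hold arbitrary amounts of arbitrary resources, sub-budgets are carved out of parents, and a fast prepaid counter stands in for the hardcoded machine word the scheduler decrements:

```python
class Budget:
    """A budget holding arbitrary amounts of arbitrary resources,
    subdividable into sub-budgets."""

    def __init__(self, resources=None, parent=None):
        self.resources = dict(resources or {})  # e.g. {"jiffies": 1000}
        self.parent = parent

    def subdivide(self, amounts):
        """Carve a sub-budget out of this one; refuses to overdraw."""
        for res, amt in amounts.items():
            if self.resources.get(res, 0) < amt:
                raise ValueError("insufficient " + res)
        for res, amt in amounts.items():
            self.resources[res] -= amt
        return Budget(amounts, parent=self)

    def refund(self):
        """When a process stops, return any unspent balance to the parent."""
        for res, amt in self.resources.items():
            self.parent.resources[res] = self.parent.resources.get(res, 0) + amt
        self.resources.clear()


class PrepaidCounter:
    """The cheap per-process counter the scheduler decrements directly,
    refilled in chunks from the real Budget when it runs out."""

    def __init__(self, budget, resource, chunk):
        self.budget, self.resource, self.chunk = budget, resource, chunk
        self.remaining = 0  # prepaid, not-yet-used units

    def consume(self, n=1):
        while self.remaining < n:
            # pull a chunk from the real budget and "prepay" it here
            self.budget.subdivide({self.resource: self.chunk})
            self.remaining += self.chunk
        self.remaining -= n

    def honest_balance(self):
        # the true balance: what's left in the budget plus prepaid-but-unused
        return self.budget.resources.get(self.resource, 0) + self.remaining
```

The point of `honest_balance` is exactly the "report honestly" requirement above: the raw budget alone would under-report by whatever has been prepaid but not yet spent.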

Similarly, interfaces to actual electronic banking (spend money in a budget by causing an actual bank transfer, or bitcoin transaction, or whatever to happen), and interfaces for incoming budgets from external sources (a bank account interface that fires off a handler when an incoming payment is detected - with that payment as the handler's budget so it can then allocate it appropriately), and so on, can be built.

And a power-constrained mobile device might have joule budgets - and operations such as driving motors, transmitters, and lights might use them up. That would be neat for handheld computers and deep space probes, which can then run less-trusted code in a sandbox with controlled access to expensive resources.

That's all well and good as a way to manage finite resources in a system, but the next level is to take a step back, look at the system as a whole, and see how this facility can be used to do other cool stuff.

This leads naturally to the semi-forgotten discipline of Agoric computing, which seeks to make marketplaces and auctions a core tool for solving resource allocation problems. This has scope within an ARGON cluster, if it's shared between multiple organisational units, which can then use budgets purely within AURUM to manage their shared use of the computer resources and to contribute towards its upkeep accordingly.

But, more excitingly, with mechanisms like Bitcoin allowing for money to be transferred across trust boundaries, it starts to become practical to think about allowing our computers to participate in economies between them. What if my desktop PC and servers rented out their spare disk space, CPU time, and bandwidth to all comers? And with the money they accumulated from doing so, in turn rented offsite disk space for their backups, and when I gave them a particularly tough job to do, hired some extra CPU and bandwidth to do it, dynamically? Without me having to hand-hold it all as the middleman, pulling my debit card out to pay for resources... If I wanted to do lots of resource-intensive work I might put more money in from my own pocket to give it more to hire extra resources with; if I tend to under-use my system and it makes profits from renting out spare capacity, then I could take cash from it from time to time.

I guess the first step would be to create standard protocols in AURUM for things like auctions and commodity markets, to facilitate transferring between different 'currencies' such as CPU time, bitcoin, fiat currencies, printer credits, disk space, and the like. And a standard interface to bank accounts, where balances and transaction histories can be queried, and transfers requested. A bank account in the context of AURUM would be a third party that holds control of some budget on your behalf, so it should look like an ordinary budget in every way possible. That would make it practical for software that needs a given resource to find a way, through a registry of trusted markets, to convert the resources it has into the ones it wants.
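As a sketch of that registry-of-markets idea - all the names here are my own, and a fixed posted rate is a toy stand-in for a real auction or order book - finding a chain of conversions from what you hold to what you need could look something like:

```python
class Market:
    """A trusted third party converting one resource into another at a
    posted rate (a toy stand-in for a real auction or order book)."""

    def __init__(self, sells, buys, rate):
        self.sells = sells  # resource the market hands out
        self.buys = buys    # resource the market accepts
        self.rate = rate    # units of `sells` per unit of `buys`


class Registry:
    """A registry of trusted markets; finds a chain of conversions
    from the resource we hold to the resource we need."""

    def __init__(self, markets):
        self.markets = markets

    def convert_path(self, have, want, seen=frozenset()):
        """Depth-first search for a chain of markets; None if there isn't one."""
        if have == want:
            return []
        for m in self.markets:
            if m.buys == have and m.sells not in seen:
                rest = self.convert_path(m.sells, want, seen | {have})
                if rest is not None:
                    return [m] + rest
        return None

    def convert(self, have, amount, want):
        """Amount of `want` obtained by trading `amount` of `have` along a path."""
        path = self.convert_path(have, want)
        if path is None:
            return None
        for m in path:
            amount *= m.rate
        return amount
```

So a process holding pounds but needing CPU seconds could route through a bitcoin market without knowing in advance that such a chain exists.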

That'd be neat...

User Interfaces for Event Streams

Reading Phil Gyford's post about the reasoning behind his Today's Guardian app reminded me of an old interest of mine - the design of user interfaces that show people streams of events.

I hate the fact that I have several systems that have reason to throw notifications at me:

  1. Incoming email (with multiple accounts)
  2. Twitter (with multiple accounts)
  3. RSS feeds I follow
  4. Voicemails/SMSes
  5. Notification of server failures and other such technical problems
  6. Incoming phone calls, Skype calls, etc
  7. IMs and DMs in IRC, and people mentioning my name in IRC channels
  8. People talking in channels I'm following in IRC
  9. Scheduled alarms (time to stop working and eat!)
  10. Batch processes have finished (I often start a long compilation/test sequence going then browse the Web for five minutes while it runs - then get distracted and come back twenty minutes later)

Many of these event sources are capable of producing events of different levels of urgency, too. It's really quite complex. Some things shout in my face (incoming Skype messages cause a huge window to pop up over what I'm doing, for example) while some need to be manually checked (such as email; I get too much spam for the "you've got mail!" noise to mean much to me), and this has little correlation with their relative importance.

Obviously, the first thing to do is to have some standard mechanism in the user interface system for notifying me of events. Growl is a start, but it's focussed on immediate notifications, rather than handling a large backlog of events. What I want is something like my email inbox, that has a searchable, scrollable history, and notifies me when new events come up. But I also want richer metadata than Growl has; I want all IMs, emails, and whatnot from the same person to be tied to that 'source' of events, so I can filter them into groups. I want to have Personal, Work, and Systems events, and to have Personal deprioritised during working hours and Work deprioritised during personal time. And so on.

The BlackBerry OS goes some way towards this with its integrated Messages system. Any app can register to put messages into the message stream, so when I get emails, BlackBerry IMs, notifications of new versions of software being available, etc. they all appear in the same time-stream and I get a 'new message' notification. I want something similar on my desktop, but with much more advanced filtering and display capabilities. My design for 'user agent' entities in ARGON involves using a standard "send an object to an entity" protocol for all email/IM/notification activities - the same protocol that is used to send print jobs to a printer, files to a backup system or removable storage device, orders to an automated process, and so on; it's roughly the equivalent of "drag and drop" in a desktop GUI. Incoming objects from 'elsewhere' are then combined inside the UA with internal events such as calendar alarms and situations the user agent might poll for, such as things appearing in RSS feeds, into a centralised event stream, by the simple process of translating all internal events into incoming objects like any other; but actually designing a user interface for displaying that is something I look forward to doing...
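A toy sketch of such a centralised event stream - the category names, penalty weights, and class names are all made up for illustration - showing the searchable backlog, source-based grouping, and mode-dependent deprioritisation described above:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str    # e.g. "bob@example.com", "nagios", "irc:#argon"
    category: str  # "personal", "work", or "systems"
    urgency: int   # higher = more urgent
    body: str

class EventStream:
    """A searchable backlog of events with mode-dependent prioritisation."""

    # how much each category is deprioritised in each mode (weights made up)
    MODE_PENALTY = {
        "working":  {"personal": 5},
        "personal": {"work": 5},
    }

    def __init__(self):
        self.backlog = []  # keep everything: a history, not just a popup

    def notify(self, event):
        self.backlog.append(event)

    def view(self, mode):
        """Events sorted by urgency, with out-of-mode categories pushed down."""
        penalty = self.MODE_PENALTY.get(mode, {})
        return sorted(self.backlog,
                      key=lambda e: e.urgency - penalty.get(e.category, 0),
                      reverse=True)

    def from_source(self, source):
        """Group events by origin, like tying all mail/IMs to one person."""
        return [e for e in self.backlog if e.source == source]
```

In "working" mode a systems alert outranks a personal note even when the note arrived with a higher nominal urgency, which is the mode-dependent behaviour argued for above.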

Phil's analysis of the newspapers interests me, because it's a very similar challenge. You have a stream of events, and the user may want to skim over them to see what's relevant then zoom into particular ones. How do you present that, and how do you help the user deal with an inundation of events, by applying heuristics to guess the priority of them and suitably de-emphasising or hiding irrelevant events, or making important events intrude on their concentration with an alarm? Priority is mode-dependent, too; if you're in an idle moment, then activity in your interest/fun RSS feeds should push out work stuff entirely - apart from important interruptions. And some events will demand my attention to respond to them, in which case they should offer me links to the tools I need to do that - a notification of a problem on a server, ideally, should carry a nice button that will open me up a terminal window with an ssh connection to that server. But some things might require my attention, but I can't give it yet - so I need to defer the task, so it doesn't then clutter my inbox, yet in such a way that it reappears when all higher-priority tasks are done. There are elements of workflow, where events need an initial "triage" to be categorised into "read-and-understood, do now, do later today, do whenever" and maybe prioritised, then later, deferred tasks need to be revisited.

Also, some event streams are shared. Perhaps an event should be handled by the first member of a team to be free, such as a shared office phone ringing, or a bug to be fixed or feature added to a software product. There needs to be some system for shared event pools, with support for events to be "claimed" from the pool by a person, or put back. Perhaps personal event systems should be able to contain proxy objects that wrap events stored in a shared pool somewhere, so they can be managed centrally as well as appearing in personal event streams along with events from other sources. Standard protocols would be required to manage this.
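A shared pool with claim/put-back might look like the following - a minimal in-process sketch with invented names; a real version would sit behind the standard network protocols mentioned above:

```python
import threading

class SharedEventPool:
    """Events shared by a team; the first free member claims one,
    and can put it back if they can't deal with it after all."""

    def __init__(self):
        self._lock = threading.Lock()
        self._unclaimed = []
        self._claimed = {}  # event -> claimant

    def publish(self, event):
        with self._lock:
            self._unclaimed.append(event)

    def claim(self, who):
        """Take the oldest unclaimed event, or None if the pool is empty."""
        with self._lock:
            if not self._unclaimed:
                return None
            event = self._unclaimed.pop(0)
            self._claimed[event] = who
            return event

    def put_back(self, event):
        """Return a claimed event to the front of the pool."""
        with self._lock:
            self._claimed.pop(event, None)
            self._unclaimed.insert(0, event)
```

The proxy objects suggested above would then wrap a remote pool like this one, so a claimed event also appears (and disappears) in each member's personal stream.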

Looking at the relatively crude support for this kind of thing in even the supposedly integrated and smart combined email/calendar apps, I think there's a lot of fun research to be done!

Personal Information Management

One of the neat things computers have become able to do as they become more "personal" - i.e., we spend more of our lives operating through them, and they become more portable - is personal information management (PIM).

I remember PIM apps in the early 1990s, running under MS-DOS. Quitting whatever you were doing and starting up a separate app to look at your calendar was a bit unwieldy so they didn't do so well, except perhaps Borland Sidekick, which used some clever tricks to pop itself up over running applications.

But nowadays we have systems like Apple's PIM components in Mac OS X; an address book, todo list manager, crypto keyring and calendar provided with the OS, with nice interfaces for using them yourself and an API for all applications to use the same databases. Mac OS software seamlessly uses the inbuilt PIM infrastructure wherever applicable, for the good of all. It's about as good as it gets, so far.

But it's not perfect.

For a start, the task list is a bit primitive. When I had a Mac, I used the excellent Things.app to manage my tasks, as it supports projects and roles and all that stuff, which helps to keep my hectic life compartmentalised. Since it has a much richer data model than the OS' inbuilt task list, it has to have its own database to keep it in, but it integrates with the inbuilt one as well as it can. My tasks in Things.app get added to the native task list, without their extra information; and new tasks I add to the native task list appear in Things.app, in the 'inbox' area for unclassified tasks, awaiting my attention to move them to a project.

It'd be nice if the underlying PIM database was flexible, allowing arbitrary properties to be added to objects. Then the native task list viewer could share the same actual task list as third party apps, and it would just ignore the extra information about projects.

But even that would suck a bit. Imagine I also had a task manager app that synched to my smartphone, and let me tag each task with the physical locations I can do it from (eg, DIY must be done from home), so an app on the smartphone can show me tasks filtered by location. I'd then need to juggle two task list apps; both would be a superset of the basic task list app (and would therefore duplicate all the basic display-a-task, deal-with-due-dates, etc logic), but each adding their own extra features. Altering all of the properties of a task would involve finding it in two separate apps. Et cetera.

As it happens, I'm also interested in making more use of knowledge representation techniques in software. Knowledge representations such as Resource Description Framework or Horn clauses have the useful property that information from different sources can be merged, as long as the names of things are agreed upon. They work by storing information, not in tables (like the relational model), nor as a graph structure (as the in-memory data model of most programming languages), but as an unstructured list of statements about objects.

The objects can be literal values - strings, numbers, that sort of thing - or symbols of some kind, used to represent objects that don't (or might not) exist directly within the computer, such as people or concepts. RDF uses URIs as its symbols; Horn clauses (being a mathematical notation) are a bit vaguer as to what a symbol is. Either way, they have to be some identifier for the abstract objects.

Each statement links a number of objects, with a relationship (which is itself an object, usually constrained to be a symbol).

So say we have objects called alaric and cheese (the latter representing the general concept of cheese, rather than any particular lump of cheese), and an object called like, representing the concept of liking something. We might write something like:

  (like alaric cheese)

...to mean "alaric is related to cheese by like". As it turns out, just about anything can be represented as such statements. From the relational model, a table can be converted into a symbol used as a relationship (the table name will usually do), and each row into a relationship of the objects that are the values of the fields of that row (likewise, we could have a relational table called LIKES that lists the IDs of objects that like other objects). Relational models usually use arbitrary integers as the "symbols", and automated mapping into knowledge representations is hard because it's not always explicit if an integer column is an identifier, or the identifier of what (as it could be a foreign key), or just a price or some other actual integer quantity. But with a bit of human guidance, it can usually be done.

However, note that in a relational model, that LIKES table would have two foreign keys into some other table that lists all the objects that can take part in liking. It'd be impossible to say that alaric likes like itself (it's nice to like things!), since like is a table rather than a row in whatever table the foreign key pertains to. The relational model has a very static type system, as people who have tried to map class hierarchies into it often find.

Now, the fun thing about knowledge representations is that the objects are implicit. In a relational system there'd be a table of People or whatever, giving each person an arbitrary integer as a primary key then listing details of that person. You can point at rows in this table and say "there are the people".

But a knowledge model just has lots of facts about each person, sort of spread around. There's no place you can point at and say "there's the person". If you want a list of all the people, then you need to look for all statements saying (is-a-person X) ("X is a person"), which is as close as we get to assigning types (note that (is-a-person X) can happily coexist alongside statements such as (is-a-customer X); an object can have lots of types, all overlapping). If you want to delete an object, you need to scan the knowledge base for all statements referring to that object, and delete those.
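A toy knowledge base along these lines can be surprisingly small - here statements are just tuples, variables are strings starting with "?" (my own convention, not RDF's), and deleting an object really does mean deleting every statement that mentions it:

```python
class KnowledgeBase:
    """An unstructured bag of statements; objects exist only implicitly,
    as the things that statements mention."""

    def __init__(self):
        self.statements = set()

    def tell(self, *statement):
        self.statements.add(statement)

    def ask(self, pattern):
        """Yield variable bindings; variables are strings starting with '?'."""
        for stmt in self.statements:
            if len(stmt) != len(pattern):
                continue
            bindings = {}
            for p, s in zip(pattern, stmt):
                if isinstance(p, str) and p.startswith("?"):
                    if bindings.setdefault(p, s) != s:
                        break  # same variable bound to two different values
                elif p != s:
                    break      # literal mismatch
            else:
                yield bindings

    def forget_object(self, obj):
        """Deleting an object = deleting every statement mentioning it
        (top-level positions only, in this toy version)."""
        self.statements = {s for s in self.statements if obj not in s}
```

Asking `("is-a-person", "?x")` is exactly the "look for all statements saying (is-a-person X)" query above; nothing stops the same object from also matching `("is-a-customer", "?x")`.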

If my personal information was stored in a knowledge base, then Things.app could share the same basic relationship objects as the inbuilt task list. (is-a-task foo), (title foo "Feed the cat"), (is-urgent foo), (due-on foo (date 2009 12 16)), but also add its own: (is-a-project bar), (title bar "World domination"), (is-part-of foo bar).

Note the use of generic relationships - title gives any object a title. is-part-of is a generic containment relationship. This means that even without knowing about projects and task lists, software can tell that things have names, and that there's some kind of containment relationship to explore. The names we give objects - foo, bar in my example - are arbitrary; so for objects that don't have a natural name, they could just be random strings, like UUIDs; RDF even lets such objects have no name and be referred to implicitly. Objects we want to use widely (such as relationships) will benefit from nice names, however.

It's easy to merge knowledge bases, too. Just pour all the statements into one file. You have to be careful of the same object having a different name on each side, obviously, and how to fix that can only be handled on a case-by-case basis; RDF has the concept of "inverse functional properties" that are meant to uniquely identify an object (such as email addresses for people) that can be used to detect and automatically merge things, but it's not applicable to all situations.

If I merged my task lists from my laptop and my phone for the first time, for example, I'd now have more objects satisfying the query (is-a-task X); they'd be seamlessly merged. If I had synched my devices so they both had the task called foo above, and on one I added some extra statements about foo, they'd be seamlessly merged in. It gets more fun when there's collisions - if I change an existing statement, for example. If my knowledge base is sophisticated and puts unique IDs and timestamps on each statement then it can spot that it's a change and handle that in the merge; if not, then I might end up with both the old and new statements, and the application has to decide how to resolve that.
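The merge itself really is just set union; a sketch of the laptop/phone task-list example (the statement spellings are my own):

```python
# two devices' knowledge bases, as plain sets of statement tuples
laptop = {("is-a-task", "foo"),
          ("title", "foo", "Feed the cat")}
phone  = {("is-a-task", "foo"),
          ("due-on", "foo", ("date", 2009, 12, 16)),
          ("is-a-task", "baz"),
          ("title", "baz", "Buy milk")}

# set union: duplicate statements collapse, extra statements accumulate
merged = laptop | phone

tasks = {s[1] for s in merged if s[0] == "is-a-task"}  # == {"foo", "baz"}
```

Both devices knew about foo, so foo's statements interleave seamlessly; detecting that an *edited* statement supersedes an old one, as noted above, needs extra machinery such as per-statement IDs and timestamps.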

But all of this doesn't solve the larger problem: if I have several different apps, each of which add more behaviour to tasks, I still need to find the same task in both of them to see all a task holds.

So let's get rid of the apps. Can we make a general "knowledge browser" that's good enough to just edit the knowledge base directly?

This task can be helped a lot by having statements about relationships. I might install a package of pre-written knowledge that states that the like relationship normally relates a person to another person, or concept. It might also state that (like X Y) can be written in English as "X likes Y" (and another rule can state that an object X can be written in English as the title of X, if X has one). All of this can just be more knowledge in the knowledge base. A universal knowledge editor could use these statements to build a user interface, in a very similar vein to a browser using CSS to display some arbitrary XML document.

And yet, I'd still be able to jot down my own made-up relationships. (is-on-my-christmas-list bob). There might be no vocabulary statements defining what this Christmas list thing is about, but a universal editor might well list "is-on-my-christmas-list" when I look up Bob, and would thereafter be able to tab-complete "is-on-my-christmas-list", having seen it already in the knowledge base; it might even notice that existing objects tagged with "is-on-my-christmas-list" are all also tagged "is-a-person" or "is-a-company", and use that to guide editing - but that's a bit more advanced. So I can install pre-written vocabularies that make everything nice, or I can just make my own up as I go along, and maybe write vocabularies for them later.

Knowledge base systems can also be fed rules. Say I have an appointment database, with appointments of the form (is-appointment foo) (on-date foo (date YYYY MM DD)), etc.

I might write a rule of the form:

  (is-appointment X) and (on-date X (date YYYY MM DD)) and (title X (printf "%s's birthday" Z))
    if (birthdate Y (date SOME-YYYY MM DD)) and (title Y Z)

Then anybody (or anything) that has a birthday in the system will cause an appointment to appear called "foo's birthday" on any date that has the same month and day as the birthday.
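A sketch of that rule as a derivation function over tuple statements - the names and encoding are my own, and a real rule engine would do this by unification rather than a hand-written loop:

```python
def birthday_appointments(statements, current_year):
    """Derive appointment statements from birthdate facts: one appointment
    per person, on this year's date with the same month and day."""
    titles = {s[1]: s[2] for s in statements if s[0] == "title"}
    derived = set()
    for s in statements:
        if s[0] == "birthdate":
            who = s[1]
            _tag, _yyyy, mm, dd = s[2]  # ("date", year-or-None, month, day)
            appt = ("appt", who, current_year)  # a fresh derived object
            derived.add(("is-appointment", appt))
            derived.add(("on-date", appt, ("date", current_year, mm, dd)))
            derived.add(("title", appt, f"{titles.get(who, who)}'s birthday"))
    return derived
```

Note that the year component of the birthdate is simply ignored, so the unknown-year case discussed next (a `_` wildcard, here encoded as `None`) works unchanged.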

What if we know the month and day of somebody's birthday, but not their year of birth? We can just write: (birthdate foo (date _ 04 03)) (where _ is a special variable symbol that means 'unknown'). Since searching a knowledge base is a matter of pattern matching, that will match a query for (birthdate Y (date SOME-YYYY MM DD)), and not bind a value to SOME-YYYY.

Finally, they can also read from external data sources. I might tell my knowledge base that (exists X) holds if X is a pathname that refers to a file or directory that exists; that (is-part-of X Y) holds if X and Y are pathnames that refer to files or directories that exist, Y is a directory, and X is directly within it; and that (title X Y) holds if X is a pathname and Y is the last part of it (the filename); etc. The rules can work both ways - if the knowledge manager is directly told (is-a-directory X) and it doesn't yet exist, then the extension rules can tell it how to create it.
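A sketch of the read-only half of that filesystem mapping - statement spellings are my own, following the (is-part-of part whole) containment convention from the task/project example:

```python
import os

def filesystem_statements(root):
    """Translate a directory tree into knowledge statements
    (read-only; the write direction would need creation rules)."""
    stmts = set()
    for dirpath, dirnames, filenames in os.walk(root):
        stmts.add(("exists", dirpath))
        stmts.add(("is-a-directory", dirpath))
        stmts.add(("title", dirpath, os.path.basename(dirpath) or dirpath))
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            stmts.add(("exists", path))
            stmts.add(("is-part-of", path, dirpath))  # part, then whole
            stmts.add(("title", path, name))
    return stmts
```

In practice you'd generate these statements lazily in response to queries rather than eagerly walking the whole tree, but the translation is the same.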

Then, suddenly, my home directory becomes knowledge, too. I can add statements saying that a given directory tree pertains to a given project. Then my knowledge browser UI can tell me about the files I've worked on as part of a project. Perhaps I could teach it how to open up applications to edit different file types. Or teach it how to read the files itself and understand their structure and turn it all into more knowledge.

Now extend that to email messages, browser bookmarks, my phone's call history, my IM history, my Twitter account...

Designing a global knowledge base

Continuing from my previous posts on HYDROGEN and IRON, I suppose the next thing I can talk about is CARBON.

One thing that users expect out of a system is some form of navigational structure. All but the simplest embedded computer systems have some kind of menu structure, while workstations often have several somewhat confusingly overlaid structures: menus in applications; a "My Documents" hierarchy full of personal stuff; a "Start Menu" or "Applications Folder" full of system-wide resources; a "My Computer" hierarchy with things like removable drives and their contents, printers, access to the internals of the system, mounted network drives, and the like; and then, via a Web browser, a hierarchy of bookmarks and whatever navigational structures different Web sites out in the world present to you.

So, I clearly needed some concept of "large-scale organisation of resources" in ARGON. After much deliberation, I came up with CARBON. But, as usual, I like to kill several birds with one stone.


Generic Functions

When most people think of object-oriented programming, they think of the interpretation of it that originated in Simula, that has since bubbled into Java, C++, and so on. Or maybe the prototype-based variant found in Self or JavaScript.


Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales