Who cares about user interfaces? (by alaric)
User Interface development in the 1990s
This was, in my estimation at least, the golden age of user interface research. Window-based GUIs (known then by the acronym WIMP, for Window, Icon, Menu, Pointer) made computers accessible to newcomers: they provided a visual metaphor based loosely on real-world objects, and offered a try-stuff-and-see way to interactively figure out unfamiliar interfaces. The previous command-line and keyboard-driven interfaces had suffered from presenting the user with a prompt to type commands at, or a keyboard covered in buttons, neither of which offered much initial inspiration as to what to try first. With the potential for multiple application windows on the same screen, WIMP interfaces also offered a natural way to use the increasingly widespread capability of computers to run multiple applications at once, and this opened up intriguing possibilities for inter-application communication and cooperation: "the clipboard" that let you copy and paste things between applications, and drag-and-drop to move things between as well as within applications.
This era saw an explosion of new ideas in user interface design, driven by people trying to make computers easier to use: computer and software manufacturers were trying to woo inexperienced people into the world of computing.
I fondly remember RISC OS on the school computers; it had a convention that all menus were contextual. There were no menu bars in the normal sense: you'd click the menu button on things and get a contextual menu for that thing. It also integrated drag-and-drop as a core paradigm for file operations; although you could double-click on a file icon to load it in the default application for that type, you could also drag a file onto a specific application to open it there. But more interestingly (I've not seen this elsewhere), when you used the Save menu option to save something, it just popped up a tiny window containing an icon for the file. You'd drag that icon to a directory window, and the file would be saved there. However, you could also drag it to another application, and that application would then open the file without it ever being "saved" to disk; the system would connect the two applications with something like a UNIX pipe to transfer the data without explicitly going via a file (although I vaguely recall it might have been saving to a temporary file behind the scenes; if so, this was hidden from the user).
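To make that concrete, here's a little Python sketch of the idea - nothing to do with the actual RISC OS Wimp messaging, and the application names are invented - just showing that a "save" can be a direct stream from one application to another, with no named file in between:

```python
# A rough sketch (not the real RISC OS protocol) of drag-to-another-app
# saving: the sending application streams the document straight to the
# receiving one, so no file ever appears on disk.
import os
import threading

def editor_save(fd):
    """The sending app writes the document straight into the pipe."""
    with os.fdopen(fd, "wb") as out:
        out.write(b"Dear diary, today I dragged a save icon onto another app...")

def viewer_open(fd):
    """The receiving app reads the 'file' without it ever touching disk."""
    with os.fdopen(fd, "rb") as src:
        data = src.read()
    print("viewer received", len(data), "bytes:", data[:20], "...")

read_end, write_end = os.pipe()   # the pipe standing in for a saved file
sender = threading.Thread(target=editor_save, args=(write_end,))
receiver = threading.Thread(target=viewer_open, args=(read_end,))
sender.start(); receiver.start()
sender.join(); receiver.join()
```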
UIs based around an underlying object model
Systems like Smalltalk, Self and Oberon built their user interfaces not at the level of ordinary application/OS binary APIs, as RISC OS, Windows, Mac OS, and friends did, but with a much deeper integration with the applications: they were tightly entangled with specific programming language runtime systems, which meant they could integrate interactive development environments and the ability to "view source" on any on-screen interface element to see the plumbing behind it. Oberon took editable text documents as a fundamental primitive, and allowed commands to be written in them using nothing more than a typographical convention, a bit like how links are written in wiki text. This allows for hypertext-like embedding of active components in your documents, but it also means that things like application menus are just documents containing lists of commands - and the user can copy and paste from them to create their own menus.
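Here's a rough Python sketch of that idea - not Oberon's actual mechanism, and the module and command names below are my own inventions - showing how an ordinary text document can double as a menu simply because anything matching the Module.Procedure convention is executable:

```python
# Toy illustration of "commands are just text": a plain document listing
# Module.Procedure names acts as a menu, because any such token can be
# looked up and executed.
import re

COMMANDS = {
    "Files.List":  lambda: print("listing files..."),
    "Editor.Open": lambda: print("opening an editor window..."),
}

MENU_DOCUMENT = """
My handy menu (just an ordinary text file):
  Files.List
  Editor.Open
"""

def execute(token):
    """Run a Module.Procedure token if we know it; Oberon would look the
    module up in the running system and call the procedure."""
    action = COMMANDS.get(token)
    if action:
        action()
    else:
        print("unknown command:", token)

# "Clicking" a token in the document is simulated here by scanning for
# anything matching the Module.Procedure typographical convention.
for token in re.findall(r"\b[A-Z]\w*\.[A-Z]\w*\b", MENU_DOCUMENT):
    execute(token)
```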
Meanwhile, a user interface toolkit called CLIM (Common Lisp Interface Manager) did something similar for Lisp. Interestingly, this blended a GUI with a command-line interface of sorts. The CLIM command line runs in a graphical window, so the output of commands can be fully graphical objects as well as just text; indeed, they are objects in the Common Lisp Object System that can have interactive behaviour of their own. Typically, if a command returns something like a file path, it doesn't just print a string to the output - it prints a file path object, which might look like a string to the user, but the system knows what it is; and if a later command prompts for a file path as an input, that file path object becomes selectable (because it matches the type of object being requested). The user doesn't have to select the start and end of a string, as you do in non-object-aware command lines; you just point and click on the object. CLIM also supports the placement of objects onto a two-dimensional window as well as embedding them in a command-line session, letting you do all the normal WIMP interface patterns - but seamlessly integrated with a command line for more complicated tasks.
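CLIM itself is Common Lisp, but the flavour of an object-aware command line can be sketched in a few lines of Python (a toy illustration, not CLIM's real API): output is remembered as typed objects, and a prompt that wants a file path only offers objects of that type back.

```python
# Sketch of an object-aware command line: what gets "printed" is recorded
# as an object with a type, and later prompts accept objects by pointing
# at them rather than by re-typing strings.
from pathlib import Path

history = []          # every object ever "printed", with its display index

def present(obj):
    """Show an object and remember what it is, not just how it printed."""
    history.append(obj)
    print(f"[{len(history) - 1}] {obj}")

def accept(wanted_type):
    """Prompt for an object of a given type; only matching history entries
    are offered, much as only type-matching output is clickable in CLIM."""
    candidates = [i for i, o in enumerate(history) if isinstance(o, wanted_type)]
    print(f"select a {wanted_type.__name__} from:", candidates)
    return history[candidates[-1]]   # pretend the user clicked the last one

# A command whose result is an object, not a string:
present(Path("/tmp/report.txt"))
present("just some text output")

# A later command that needs a file path gets the object back directly:
chosen = accept(Path)
print("opening", chosen, "of type", type(chosen).__name__)
```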
Innovation in the API: Higher levels of abstraction
Looking at the trends underlying the technology, the old pre-WIMP single-tasking user interfaces presented the application with relatively raw access to the underlying hardware: a screen to draw images on, keypress events from a keyboard, and other devices such as sound output, joysticks, and mice.
The first WIMP interfaces to become popular provided applications with slightly more structure. As the user interface had to mediate between different applications sharing the hardware, applications had to request the creation of windows, and the user interface would then tell the application when part of a window's contents needed to be drawn, or when the mouse was clicked or a key pressed. Very simple WIMP systems like the X Window System did pretty much only that, but more advanced ones like Mac OS, RISC OS, and Windows also provided their own "widgets": an application could say "put a menu here (with these options)" and the user interface system would handle drawing the menu and making it respond to mouse actions; the application would simply be told when the user made a selection. As well as menus, the user interface system would provide buttons, text entry fields, scroll bars, and various other useful things. This had several advantages compared to applications doing it all themselves:
- The applications didn't have to implement all that stuff, and could focus on application logic.
- The standard widgets looked, and worked, the same across all applications, providing consistency for the user.
- Because the user interface was "aware" that the widgets existed, it could provide (or third-party components could extend it to provide) advanced features: accessibility tools could let blind users navigate a menu through other means, or identify which buttons could be pressed in a window. Users' interactions with widgets could be recorded and replayed, allowing simple automation of repetitive tasks.
- As the user-interface handling logic was kept explicitly separate from the application logic, it could be given a higher priority for access to the CPU. This meant that a runaway application consuming loads of CPU and memory could be pre-empted by the user interface when it needed to respond to a user action, keeping the overall interface snappy and responsive even if a particular application became laggy.
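To make the widget model concrete, here's a minimal sketch using Python's built-in tkinter toolkit - obviously not one of the 1990s systems above, but the division of labour is the same: the application declares widgets and supplies callbacks, and the toolkit owns drawing, layout, and event dispatch.

```python
# Minimal widget-and-callback sketch: the application never draws a pixel
# or tracks the mouse; it just declares widgets and reacts to events.
import tkinter as tk

def on_save():
    # Application logic only; the toolkit has already handled the click.
    print("Saving:", entry.get())

root = tk.Tk()
root.title("Widget sketch")

entry = tk.Entry(root, width=30)   # text entry field drawn by the toolkit
entry.pack(padx=8, pady=4)

tk.Button(root, text="Save", command=on_save).pack(pady=4)  # button + callback

root.mainloop()                    # the toolkit's event loop dispatches events
```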
Developers of applications for systems like the X Window System quickly produced shared toolkit libraries to mitigate the first point, but the fact that multiple such libraries emerged meant that different applications still often worked differently, leaving the second point unresolved; and while advanced meta-interface functionality could be built into these libraries, it would then only work within the applications using a given library, so the third point was addressed shoddily at best.
Widget-based systems had a bonus advantage in that setting up widgets to design a window or dialog box didn't require any programming. Graphical "GUI builder" editors let the user drag widgets from a palette and arrange them in a window or dialog box, specifying their titles, setting up hotkeys for keyboard navigation, specifying how the widgets should shift if the window was resized, adding context-sensitive help text, and so on. Not only did this make life easier for programmers building GUI apps on their own, it enabled a division of labour where user interface specialists could lay out the designs and programmers could then fill in the code that reacts to button presses or loads state into widgets. It also meant that simple applications could be built by people with only very basic programming skills. 1990s Visual Basic, despite the flaws of the BASIC dialect it used, was an easy way for people to build GUI applications; it wasn't hard for newcomers to pick it up and start building useful things (the language would stab them in the back if they tried to build more complex applications, sadly, but that's a different issue).
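The GUI-builder idea boils down to keeping the layout as data rather than code. Here's a hedged sketch in Python - the description format is my own invention, standing in for whatever a real builder saved to disk - where a designer could edit the layout and the programmer only supplies the named callbacks:

```python
# Layout-as-data sketch: the window description could come from a designer's
# tool; the programmer writes only the handler functions named in it.
import tkinter as tk

LAYOUT = {
    "title": "Find",
    "widgets": [
        {"type": "Entry",  "name": "pattern"},
        {"type": "Button", "label": "Search", "callback": "do_search"},
    ],
}

def build(root, layout, handlers):
    """Instantiate widgets from the data description, wiring named callbacks."""
    root.title(layout["title"])
    made = {}
    for spec in layout["widgets"]:
        if spec["type"] == "Entry":
            w = tk.Entry(root)
        elif spec["type"] == "Button":
            w = tk.Button(root, text=spec["label"],
                          command=handlers[spec["callback"]])
        w.pack(padx=8, pady=4)
        made[spec.get("name", spec.get("label"))] = w
    return made

def do_search():
    print("searching for", widgets["pattern"].get())

root = tk.Tk()
widgets = build(root, LAYOUT, {"do_search": do_search})
root.mainloop()
```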
But the more experimental systems worked at an even higher level than widgets. With a widget-based system, the user interface knows that the widgets exist, and may even have some information about them, such as a title, but it still can't take into account what the application is using them for. Systems like CLIM required the application to tell them about the problem-domain objects being worked with; things displayed on screen were representations of those objects, and user interactions with them were commands sent to those objects. This meant that all the interesting things widget-based systems could do were still possible, but the user interface could also provide more advanced functions: different representations of the same kind of object, or support for new kinds of object, could be loaded dynamically, allowing applications to be extended by plugins without the application needing explicit support for them. Operations like inter-application transfer of objects via the clipboard, undo/redo, and saving and loading could be provided by the UI itself, without requiring the application to correctly provide all the plumbing for all of these operations, for all its objects, in all contexts. To a certain extent, the application didn't need to know whether it was being presented as a graphical user interface, via a speech-based interface, in virtual reality, or whatever.
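Here's a speculative Python sketch of that higher level of abstraction - inspired by the description of CLIM above, not its real API - in which the application registers its domain object types and how to present them, and a generic, object-valued clipboard then works for every type, including ones a plugin adds later:

```python
# Sketch of a UI layer that knows about domain objects: applications
# register types and presentations; generic services (here, display and a
# clipboard) then work uniformly across all registered types.
presentations = {}     # type -> function that renders an object for display
clipboard = []         # generic, object-valued clipboard owned by the "UI"

def register(obj_type, renderer):
    presentations[obj_type] = renderer

def display(obj):
    print(presentations[type(obj)](obj))

def copy(obj):
    clipboard.append(obj)          # the UI copies the object, not a string

# The application's domain objects:
class Invoice:
    def __init__(self, number, total):
        self.number, self.total = number, total

register(Invoice, lambda inv: f"Invoice #{inv.number}: £{inv.total:.2f}")

# A "plugin" loaded later can add a whole new kind of object without the
# core application changing at all:
class Customer:
    def __init__(self, name):
        self.name = name

register(Customer, lambda c: f"Customer: {c.name}")

for thing in (Invoice(42, 99.5), Customer("Ada")):
    display(thing)
    copy(thing)                    # the clipboard works uniformly for both

print("clipboard holds", [type(o).__name__ for o in clipboard])
```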
Innovation in the visible bits of the interface
As well as all this technical work on different models, there was also a thriving research community investigating things from the human side, drawing on fields such as psychology and ergonomics. Groups like the MIT Media Lab produced experimental designs such as Put That There, an interface involving a wall-sized screen; the user sat in a chair facing it, wearing a position-sensing glove (the kind used in VR environments today), with the only other input device being a speech-recognition microphone. They would point at objects on the screen and give voice commands, such as the titular "Put that (pointing at a thing) there (pointing elsewhere)" to move an object. The Self group, as well as working on infrastructure technology, experimented with the use of animation to indicate dynamic behaviour.
As a software infrastructure nerd, I could design user interface software and platforms; but I wasn't a human factors person, so I soaked up library books on this stuff, because I found it fascinating nonetheless. I can't find the reference now, but I remember reading a report on a system that augmented a conventional WIMP interface with extra cues: less recently used files' icons would yellow to indicate age, larger files' icons would gain a deeper three-dimensional effect to suggest weightiness, and when the user was dragging an icon, the system would play audible feedback as if the icon were dragging across a surface - a less pleasant grinding sound if it was over something that couldn't accept it if dropped, and a smooth noise if it could be dropped there. The potential to enrich an interface with all these extra cues, unobtrusively making extra contextual information available to the user in a variety of forms, so that people with different abilities and preferences were likely to be catered for, fascinated me!
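(For what it's worth, the mapping side of such a system is trivial to compute; here's a guess at the shape of it in Python, with the thresholds and cue names entirely made up.)

```python
# Invented illustration: derive "age" and "weight" cues from file metadata,
# as an interface like the one described might have done.
import os, time

def cues_for(path):
    st = os.stat(path)
    age_days = (time.time() - st.st_mtime) / 86400
    return {
        "yellowing": min(age_days / 365, 1.0),    # older files look more yellowed
        "depth":     min(st.st_size / 1e6, 1.0),  # bigger files look "heavier"
    }

print(cues_for(__file__))
```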
Towards the end of the decade, Jef Raskin wrote The Humane Interface, a book detailing his theories about user interface design. It's a great book; I don't necessarily agree with all of his conclusions, but I definitely agree that people should be thinking about these sorts of things and writing them in books.
Because I think this more or less marked the end of that era.
