Friday, September 18, 2009

The Disappearing Desktop

[This posting was originally published on the LiMo Foundation blog...]

The “desktop” has been getting all the attention for the past score of years or so, and it was a big improvement over the “command line”, which is what we had to deal with before it. The desktop metaphor opened up all kinds of possibilities for people who had never used computers before, and unleashed a wave of new application development the likes of which had never previously been seen.

But the desktop—the notion of the “computer” as a completely general-purpose device, a sort of “Swiss Army knife”, if you will—is itself an artefact of the fact that, at the time the metaphor hit the street, as it were, computers were extremely expensive devices; few people could afford to have more than one of them. Times have changed, however, and are still changing, in dramatic ways.

Computing power is cheaper than ever: if you compare a current cell phone (at around $400) with a desktop system of five years ago (at around $2500), they’re remarkably comparable in terms of their general specifications. In fact, the phone does more, in terms of being able to support GPS, Bluetooth, WiFi and other capabilities right out of the box; it probably has at least as much memory and, more than likely, a larger amount of mass storage. We don’t, however, tend to think of it as a “computer”.

Don Norman has observed that the more skilful we become at using a technology, the less visible the technology itself becomes, as it gets subsumed into the purpose to which it’s being put. When cars were new technology, you had to be an automobile mechanic to own one; the same was true of bicycles before that.

Open source development grew up in an environment where the desktop was the landscape in which one worked. But that’s a landscape that’s becoming less and less relevant. Sales of cell phones and mobile devices of increasing sophistication and capability have far outstripped the number of “desktops” shipped each year. Our understanding of development models and use cases, however, hasn’t really kept pace.

There’s a great potential opportunity, one which we’re at the very beginning of seeing realized, for open source developers in the increasing number of Linux-based phones coming onto the market, and it’s measured in hundreds of millions of potential customers a year. Right now—and unlike the classic desktop market—there’s no entrenched “winner” in the mobile device space. There’s less likely to ever be one, since the investment people make is smaller and they’re more prone to replace a cell phone than a desktop or laptop system. People’s investment in applications for their smart phones also tends to be smaller than for their desktop systems: they tend to have fewer applications, and those applications are cheaper.

Successful development for mobile devices calls for a rather higher standard of quality than we’ve typically been used to delivering in the open source world. In an environment where it was at least tacitly expected that everyone was capable of programming, the assumption developed that, if a problem wasn’t bothering me, then it wasn’t my problem; those whom it did bother could fix it if they liked. That won’t work on the mobile devices your grandfather and your teenagers use. This is an area where partnership between open source community-based efforts and the work of carriers and device manufacturers could be especially fruitful: the folks who make up the membership of organizations like the LiMo Foundation have a lot of experience here.

Another difference is the target audience and understanding the expectations of that audience. Open source development began as, in essence, a hobby: people wrote code for themselves and, eventually, for one another. But they were always writing for people who had a technical skill set and a certain level of ability with it. This made for a very different outlook than the one which is required to develop for end users and for consumers. We’ve learnt a pretty good amount about this in the community, especially over the past five years or so, but there’s still a long way to go. This is an area where collaboration with device manufacturers and carriers, who have long experience (not always good experience, admittedly) in things like usability, can really pay off.

But successful development for the mobile world requires—even more importantly—an entirely different way of thinking about how applications are used and even what applications actually are! As the various activities in our lives leave increasing online “impressions” (e.g. by our writing movie reviews, purchasing books online, or engaging in various “social networking” activities), the ability of applications and web-based activities to interact with, support and reinforce one another will enable new sorts of capabilities on the devices we use the most. I can already be notified (by a web site which tracks airplane flights) when one of my flights is delayed, and I can reschedule myself onto a different flight—all from my phone. I can take a photo of a business card, have it OCR’ed, added to my contacts, and then synchronized to a web-based server, so that it ultimately winds up on my desktop system—all from my phone. (It’s impossible for me to ever lose a contact any more: I have too much redundancy.)
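The business-card example above is essentially a small pipeline: recognize text, structure it into a contact, then copy that contact to every store that should hold it. As a purely illustrative sketch—the OCR step is stubbed out with already-recognized text, and all names here are hypothetical, not any real phone API—the data flow might look like:

```python
# Illustrative sketch of the business-card pipeline described above.
# The OCR stage is stubbed with already-recognized text; a real device
# would run image recognition first. All function names are hypothetical.

def parse_card(ocr_text):
    """Turn OCR'ed card text into a contact record (very naive parsing)."""
    lines = [l.strip() for l in ocr_text.splitlines() if l.strip()]
    contact = {"name": lines[0]}          # assume the name comes first
    for line in lines[1:]:
        if "@" in line:
            contact["email"] = line
        elif any(ch.isdigit() for ch in line):
            contact["phone"] = line
    return contact

def sync(contact, *stores):
    """Copy the contact into every store: phone, web server, desktop."""
    for store in stores:
        store[contact["name"]] = dict(contact)

phone_book, web_server, desktop = {}, {}, {}
card_text = "Jane Doe\n+1 555 0100\njane@example.com"
sync(parse_card(card_text), phone_book, web_server, desktop)
```

After the call, the same record exists in all three stores—the redundancy that makes it hard to ever lose a contact again.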

This is an area that is evolving now, and evolving so rapidly, that no one has really been able to get their head around it yet. People continue to ask whether “the future” is “on the web” or “on the device”. The answer, as usual, is “yes”—and new potential applications like “augmented reality” are underscoring that. How that evolution plays out is the key area in which I expect to see organizations like LiMo working increasingly with the open source community as we discover what “computers” are going to be like as we become less and less directly aware that they are computers.

The “desktop” is increasingly going away, except for fairly specific, usually business-related uses. Outside the US, many more people are already accessing the internet from their phones rather than from a desktop system. As social networking, online shopping and content creation become even more important, the devices which will be most important to us are the ones which support those activities: the devices we spend the most time with, the ones we carry around with us.

We used to talk about “the paperless office”, and—in some ways, anyway—it sort of happened: I don’t get as many paper bills and other documents as I used to; now they’re web pages or PDF files. On the other hand, my desk has vanished...


heng said...

Interesting post, but I think phones aren't going to be general purpose (in the desktop sense) for long either. Why? It's all about the interface.

The problem is that, short of being able to change its physical characteristics, a general purpose device will never be able to present as good an interface as a dedicated device. To me, an iPhone (the poster child of the smartphone concept) is a crap music player and a crap phone *before* it's a good anything. By crap, I mean it's not what would be designed if one had the freedom to ignore the general purpose nature of the device. This is a fundamental limitation on the concept of a smart phone.

You only have to watch someone unfamiliar with technology trying to use a mobile telephone to see the problem. These are people who are perfectly capable of using a normal telephone, because they are so damn simple, but as soon as a modicum of general-purposeness is applied to the device, as in a mobile phone, it all becomes much more difficult.

Where does this leave us? I remember reading an interesting post (no idea where) on what they referred to as semi-convergence: the idea that we will not have convergence (everything moving to one device), but that every device will be able to talk to every other device. I'll take this a stage further and suggest that the idea of a device will be superseded by the idea of an interface. It may well be the case that when you command your widget, the capabilities of other widgets are drawn upon, but ultimately, it's a design problem that needs to be solved. It also may be the case that the same widget can be used for several interfaces, but equally, it might not. I suspect we will have something like a physical manifestation of desktop widgets: devices that are general purpose in principle, but present a single interface all the time. These then sit on your real, physical desktop (or whatever) and do that thing very well.

Of course all this is moot as we will soon acquire direct brain-machine interfaces ;)

Anonymous said...

‘The “desktop” is increasingly going away, except for fairly specific, usually business-related uses.’

I’m not convinced. A lot of people are students or own digital cameras. I don’t see students writing their papers on mobile phones (and not with web applications either), and photo management doesn’t work either because the screens on mobile phones are far too small.

I wouldn’t call those use cases specific, but generic. While your story might be partially true, I think you’re quite exaggerating about the disappearance of the desktop.