The HitchHiker’s Guide to the Macintosh: Part 10

Written by: Adam Christianson

Categories: Editorial

A Brief and Warped History of the Mac, part 10 (Future OS)

With Mac OS X version 10.5, Leopard, in the works and Windows Vista, er, having its name unveiled, my thoughts have turned to the future. Mac OS X is the best operating system ever developed – its unique blend of UNIX power and a stylish, easy-to-use interface means that the new version of Windows will really have to go some to catch up. We all wait with bated breath for Steve Jobs to stand on a stage and say, ‘Tiger is great, but for all the developers in this room it’s old hat, because look what Leopard can do…’ Actually, the most exciting part is what he says after that. New tweaks to the Aqua interface will probably filter in, and great new features to make your life easier will crop up – and if they don’t, we’ll all be very disappointed.

So what is to come in Leopard? Well, as I gaze into my crystal ball, I can only speculate about what might appear. Better optimisation for the Intel Macs that will be commonplace by then is a strong likelihood. Even more extreme graphics seem likely too: support for the PDF 1.6 spec built into Preview, more real-time effects filters in Core Image, and probably some other graphical innovations that none of us can guess.

I’m sure that anyone who regularly watches the reports from Apple Keynotes would have drawn similar conclusions.

There’s only one rumour I’ve heard that seems a likely addition to Leopard – making the interface resolution independent of the screen resolution. Back in 1984, the Mac had a 9″ screen with 512 pixels across it, which worked out at roughly 72 pixels per inch – and 72ppi became the standard definition that interfaces were designed around. The 32×32 pixel icons of the past worked fine at that size, but Apple acknowledged that on higher-resolution displays icons were becoming very small, and in Mac OS X bumped the size of icons up to 128×128 pixels. These days, the LCD technology of screens sets the limit on the maximum resolution. Each pixel generated by the Mac is precisely mapped to a dot in the LCD’s matrix. This is great for precision, and has led to around 100ppi becoming the standard resolution for work.

Now, this works fine because Mac OS X was built to look best at this resolution – its menu is about 1.4 times bigger than the Mac OS 9 menu, so that on a modern screen it looks about the same size as the Mac OS 9 menu did on a contemporary display. The problem comes when people run their screens at different resolutions. My cousin is a case in point. When I visit her and turn on her iMac, I am disturbed to see its 15″ screen displaying 800×600 pixels rather than the standard 1024×768. However, being older than me, with the eyesight issues that come with age, she finds the lower resolution easier to use because it makes everything look bigger – including the buttons on windows and the menu. Her biggest loss, of course, is image quality – iPhoto might have larger buttons, but the pictures suffer and appear pixelated. A resolution-independent interface would help people in her situation by allowing the menus, buttons and text to scale up and appear larger even when the screen runs at its native resolution. By including this technology in Leopard, Apple will be future-proofing Mac OS X against increases in LCD quality and higher ppi resolutions.
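To make the idea concrete, here is a minimal sketch of how resolution independence could work – my own illustration, written in modern Swift for readability, and not Apple’s actual implementation. The interface is laid out in abstract “points” fixed at 72 per inch, and a scale factor derived from the screen’s real pixel density converts points to pixels at draw time, so on-screen elements keep the same physical size on any display:

import Foundation

// A hypothetical illustration of resolution-independent drawing.
// Layout happens in abstract "points" (72 per inch); a per-screen
// scale factor converts points to pixels at draw time.
struct Screen {
    let pixelsPerInch: Double   // the physical density of the display
}

func scaleFactor(for screen: Screen) -> Double {
    return screen.pixelsPerInch / 72.0   // 72 points per inch, by convention
}

func pixels(fromPoints points: Double, on screen: Screen) -> Double {
    return points * scaleFactor(for: screen)
}

// A 22-point menu bar keeps the same physical height everywhere:
let classicDisplay = Screen(pixelsPerInch: 72)    // 1984-era density
let modernDisplay  = Screen(pixelsPerInch: 100)   // a 2005 LCD
print(pixels(fromPoints: 22, on: classicDisplay)) // 22 pixels
print(pixels(fromPoints: 22, on: modernDisplay))  // ~31 pixels

The payoff is that the menu stays a readable physical size on a denser screen instead of shrinking – exactly what my cousin needs, without dropping the whole display to 800×600.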

It’s hard to see what else could be done in terms of simplifying the operating system for beginner users. Features like Exposé, the Finder’s side bar, and spring-loaded folders make the Mac’s interface better than anything else out there. That’s not to say that Apple don’t have some really incredible plans for future operating systems to keep one step ahead of the competition – or several steps ahead of Microsoft.

But if we look into the far future – beyond Mac OS X – what could happen? Well, a few years ago, a friend and I speculated about this and developed some key ideas about how operating systems may change beyond recognition. One possibility is that the way we interact with computers may completely change in a few years’ time. Apple’s Mighty Mouse shows that there’s still plenty of room for innovation in the field of input devices. The keyboard has been largely unaltered since the days of the typewriter, and the mouse since its invention in the 1960s – the basic technology of mice has improved, but not the way the device is used. The stylus is another advance in input methods, but tablet PCs have never really caught on. The Apple Newton and other PDAs have shown that a stylus can be useful, and applications like Painter thrive when coupled with a pressure-sensitive stylus, but it’s not suited to everyday use. Most of us prefer a keyboard and can type faster than we can write.

Those of you who want to experience a taste of the future now might like to head over to http://www.inference.phy.cam.ac.uk/dasher/.
Dasher is a completely new way of typing, developed at Cambridge University. It uses an innovative graphical system in which letters zoom across the screen as you move your mouse towards them, and text prediction helps you find the right letter to follow. Dasher can learn the words you frequently use to make them easier to type. It takes some getting used to, but once you’ve got the knack, Dasher can become surprisingly quick. It was developed partly as an input method for people with disabilities that prevent them using a keyboard – one version can track the user’s eye movements and translate them into text – and this technology may become useful to us all in the future. And, yes, this paragraph is being written with Dasher.
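To give a flavour of the prediction side, here is a toy sketch of the idea – my own illustration, as Dasher’s real engine uses a far more sophisticated adaptive language model. Given what you have typed so far, rank the possible next letters by how often they followed the same context in a body of training text, and give the likelier letters the bigger targets on screen:

import Foundation

// A toy next-letter predictor: count which characters follow each
// occurrence of the current context in some training text.
// (Purely illustrative – not Dasher's actual model.)
func nextLetterRanking(context: String, trainingText: String) -> [(Character, Int)] {
    var counts: [Character: Int] = [:]
    let chars = Array(trainingText)
    let ctx = Array(context)
    guard !ctx.isEmpty, chars.count > ctx.count else { return [] }
    for i in 0...(chars.count - ctx.count - 1) {
        if Array(chars[i..<(i + ctx.count)]) == ctx {
            counts[chars[i + ctx.count], default: 0] += 1
        }
    }
    return counts.sorted { $0.value > $1.value }
                 .map { ($0.key, $0.value) }
}

// Letters likely to follow "th" would get the biggest targets:
let sample = "the quick brown fox jumps over the lazy dog then thanks them"
print(nextLetterRanking(context: "th", trainingText: sample))
// [("e", 4), ("a", 1)]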

There has been much speculation about computers that can respond to your voice and talk back to you. This will probably come one day, and when it does our whole way of living will change, as computers watch and listen to us – and can remind us why we came into the room when we forget! In fact, back in 1988, Apple released a movie demonstrating the way they thought computers might act in the future, and in particular how that could affect the disabled, making life easier for everyone. (The movie’s called ‘Future Shock’; you may be able to find it online.) But this movie also showed some technology that I think will never catch on. The woman in the film uses hand gestures to control her computer – moving her hand up and down to scroll through the menus. Now, I know that computers at this stage will have powerful artificial intelligence, but we all know what computers are like, and you can bet that you’ll leave the room for a minute and come back to find your dog has been sniffing the screen, triggering the motion sensor into typing half a page of gibberish.

A computer that talks to you will also require a lot of software just to understand the syntax of language. While computers can already translate spoken words to text, very powerful software would be needed to understand what words mean and what switching two words over – as in ‘It is’ and ‘Is it?’ – can do to the meaning of a sentence.
Anyone who has listened to their Mac talking will probably agree that this is an area that needs a lot of work – Apple’s speech synthesis is pretty cool, but it is a long way from a computer that can carry on a conversation with the user. And let’s face it, anything less than that would just get annoying after the tenth time you hear “I’m sorry, that option is unavailable, please rephrase your command…”
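If you want to judge for yourself how far along the Mac’s half of the conversation is, you can drive the built-in synthesiser from a few lines of code. This sketch uses Cocoa’s NSSpeechSynthesizer, written in modern Swift for illustration – on a Tiger-era system you would reach the same engine through the Speech Manager or the Terminal’s say command:

import AppKit

// Speak a string through the system's built-in speech synthesis.
let synth = NSSpeechSynthesizer()
_ = synth.startSpeaking("I'm sorry, that option is unavailable. Please rephrase your command.")

// Speech is asynchronous, so keep the process alive until it finishes.
while synth.isSpeaking {
    RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.1))
}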

Display technology may also change in the future. One futuristic form of display being developed is called a fog screen. In a fog screen, images are projected onto a thin curtain of water vapour that forms a virtual wall of screen. It might sound far-fetched, but such devices are already being tested. You’re probably wondering – why? Well, a fog screen can be used to create a virtual partition between two spaces, and can even have different images projected on either side of it. Imagining how this could be used in a couple of scenarios may help to explain the advantages. The first example is a games arcade: the space could be divided up using fog screens, with a different game running on each wall. Once you finished a game, you could walk through the wall and see what was playing on the other side. My other example is an office. We all need to collaborate with the people we work with, but what if the walls between your workspace and your colleagues’ were virtual? To show your work to others, you could simply invite people to walk through the fog barrier between offices and look at your side of the screen.

Another futuristic vision for the evolution of screen technology is building displays into the surfaces of the desks, tables and even kitchen worktops that we deal with everywhere. Until recently, there were clear reasons this could not be done. Traditional CRT displays were far too boxy, and even today’s flat screens have the drawback of needing thick, heat-generating backlighting. The technology to build super-flat, relatively cool-running displays is in its early stages, and relies on thousands of tiny light-emitting diodes. Generating coloured images, as most of you will know, requires red, green and blue light to approximate all the other colours. Blue LEDs were in fact quite hard to produce until recently, because blue light has a much shorter wavelength than the red and green LEDs that were most common until a few years ago; if you’ve bought a stereo or DVD player lately, you’ll know that LEDs now come in a multitude of colours. The next technical barrier to building screens from micro-LEDs was providing a system to control each of the tiny components when they are arranged into a grid. Some of the first devices to use this type of display are digital cameras – they are already available, but such screens are currently very small. Once more companies develop the technology, the screens will become larger and cheaper to produce – much cheaper than current LCD displays – and because each tiny LED emits its own light, there is no need for hot, power-hungry backlighting.

Eventually such displays may become thin enough to fit on any surface, or even be woven into a flexible membrane that can be fixed anywhere – like wrapping paper. By taking the display out of the traditional monitor format, the way people interact with computers can be freed up to allow for more innovative designs and greater integration of technology into our lives. Imagine the kitchen of tomorrow, where a micro-LED display could be integrated into your worktop for quick access to recipes. Maybe there will be a form of electronic wallpaper that you can put up in your living room and then change the colour of the walls – or even define a space on the wall to use as a TV or computer work area. There would be virtually no limit to where you could have a display, and accessing the internet would become as common as checking the time on your wristwatch.
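That grid-control barrier, for what it’s worth, is usually tackled by multiplexing: instead of wiring every LED individually, the driver energises one row at a time with that row’s column pattern and sweeps through the rows faster than the eye can follow, so a steady image appears. Here is a rough sketch of the scheme – a generic illustration of my own, not any particular product’s design:

// Generic row-scanning ("multiplexed") drive for an LED grid:
// one row is lit at a time, cycling faster than the eye can see.
struct LEDMatrix {
    let rows: Int
    let columns: Int
    var frame: [[Bool]]   // true = LED on in the desired image

    // One full refresh pass; `setRow` stands in for the row-driver hardware.
    func refresh(setRow: (Int, [Bool]) -> Void) {
        for r in 0..<rows {
            setRow(r, frame[r])   // drive row r's columns; all other rows dark
        }
    }
}

let display = LEDMatrix(rows: 2, columns: 3,
                        frame: [[true, false, true],
                                [false, true, false]])
display.refresh { row, pattern in
    print("row \(row): \(pattern.map { $0 ? "*" : "." }.joined())")
}
// row 0: *.*
// row 1: .*.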

However, the biggest advance in display technology to come will be three-dimensional. 3D monitors could be powered by laser systems, or by projecting images into a parabolic mirror to create the optical illusion of an object floating in space. However it is achieved, a whole new dimension to working with computers will become possible, and an interface that gets the most benefit out of it will have to be created. Layers in a Photoshop document could have real depth when you look at the screen from the side, and Apple’s trademark cube transition would take on a whole new meaning. If Apple were to design this innovation, they would be in a position to become the industry leaders.

But beyond all this, the operating system as we know it could evolve. Looking at iLife, it is possible to see how integration makes computers easier to use, and if all aspects of working with a document were integrated, it would improve the workflow for the end user. The only way to do this is to break with the traditional application-based operating environment and develop instead a document-based system, in which a document can be edited with a variety of tools. Say, for example, that I am editing a photo in Photoshop and then decide to apply an effect using Painter. Currently this requires me to save my project and then open it in the other application. In a document-oriented operating system, the image being worked on would stay constant while the toolbars of the corresponding applications were loaded in around it. This would lead to an entirely new form of operating system without any applications as such – just a series of tools that could be used to edit the current document.
In place of iPhoto would be a system-wide media viewer built into the OS, which could flick from photos to movies to music and even work as a substitute for the Finder. Documents could be broken down into sub-documents, and because tools would be application-independent, any part of a document could be edited with any tool. The OS would ship with a number of built-in utilities, such as a text editor and a photo editor. If I were writing an email, the tools of the text editor would be available to me, but if I had also bought HTML editing tools – comparable to Dreamweaver in today’s terms – then I could use those on my email as well. In fact, with PDF technology becoming so predominant these days, and the lines between media types blurring, a document would generally start out as a blank canvas, and what was created on it would depend largely on the tools used upon it.

Going back to the example of sending an email: I would simply create a new document in the main OS. At this point, it could become a picture, a document to print, or even the start of a desktop publishing project. Next, I would call up a text editing tool – possibly the one included with the OS or, if I preferred, a custom tool from another developer – providing the basic typing system, font selection and dictionary. Once I’d typed my document, I would be free to do with it what I wanted – I could turn it into a sticky note, embed it into another document as a column of text, or send it as an email, just by choosing a different tool to operate on the text. This would remove all the boundaries between the likes of Word, TextEdit and even Mail, because they would simply edit aspects of a document. A bigger example: I could type a letter, then decide I wanted to add an effect to the heading. Rather than being bound by the tools of one application, I could select the heading and edit it with something else – perhaps loading Photoshop’s tools in around it to add a shadow and colour effects. One last example that may help explain what I am describing is a page with both text and images. I could use it much like a current desktop publishing application, but if I chose to, I could select a picture within the document and call up a different set of tools to alter the colours, or even completely retouch that section – all without leaving the page I was originally working on.
The approach has multimedia advantages too. Since the nature of documents and file types would have to become much more flexible, there would be no fundamental difference between a movie and a document of images and text, and one could readily be transformed into the other simply by applying tools. I could create a picture and then choose to animate it, or build a slideshow from it, just by loading the appropriate tool. Of course, this technology would need a complex system of plug-ins to power it, and every document would need a full-resolution composite embedded in it, to ensure that it would still work on another computer that didn’t have the same range of tools as yours.
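As a thought experiment, here is a speculative sketch of the core of such a document-centred system – entirely hypothetical, with types of my own invention rather than any shipping API, written in Swift for readability. There are no applications: a document is just a collection of typed parts, and independent tools declare which kinds of part they can edit, so any tool can be applied to any compatible part of any document:

// A document is a bag of typed parts; tools are loaded around it.
enum DocumentPart {
    case text(String)
    case image(name: String)   // stands in for real pixel data
}

struct Document {
    var parts: [DocumentPart]
}

protocol Tool {
    var name: String { get }
    func canEdit(_ part: DocumentPart) -> Bool
    func edit(_ part: inout DocumentPart)
}

// A basic text tool the OS might ship with.
struct TextEditorTool: Tool {
    let name = "Text Editor"
    func canEdit(_ part: DocumentPart) -> Bool {
        if case .text = part { return true }
        return false
    }
    func edit(_ part: inout DocumentPart) {
        if case .text(let body) = part { part = .text(body + " [spell-checked]") }
    }
}

// A third-party effects tool loaded around the same document.
struct ShadowEffectTool: Tool {
    let name = "Shadow Effect"
    func canEdit(_ part: DocumentPart) -> Bool {
        if case .image = part { return true }
        return false
    }
    func edit(_ part: inout DocumentPart) {
        if case .image(let name) = part { part = .image(name: name + "+shadow") }
    }
}

// The same letter flows through whichever tools the user calls up:
var letter = Document(parts: [.text("Dear reader,"), .image(name: "heading")])
let tools: [Tool] = [TextEditorTool(), ShadowEffectTool()]
for i in letter.parts.indices {
    for tool in tools where tool.canEdit(letter.parts[i]) {
        print("Applying \(tool.name)")
        tool.edit(&letter.parts[i])
    }
}
print(letter.parts)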
This approach will never make it into Mac OS X but may come to be the trademark feature in operating systems that follow it.

Whatever the future of operating systems in general and Mac OS X in particular holds, we can rest assured that Apple will be where they belong – where they’ve always been – at the cutting edge of computer innovation.

Please address all comments to hhggtm@mac.com


There are 9 comments on The HitchHiker’s Guide to the Macintosh: Part 10:

  1. Matt Hoult | Oct 21 2005 - 03:23

    A great column, very interesting reading, but I have my doubts. I will certainly be taking this to the forums later tonight, if Richard wants to keep an eye on – or even join in – the discussion around these ideas over the coming days and weeks.

  2. tiiim | Oct 21 2005 - 04:53

    I thought that was a good article; one thing it does bring out is how computers will go beyond what we know today. It’s at times like this that it’s good to be a Mac user, because Apple are at the heart of innovation. Thanks for the thought-provoking reading! :)

  3. Jason | Oct 21 2005 - 09:05

    WHY do people keep writing “Apple are…” Apple is a singular collective noun for a company.

    Apple IS. Just like water IS wet, Apple IS a company. (You can’t say ‘water ARE wet’, can you?! This candy bar ARE good, mmm.)

    Apple IS good.
    Apple IS innovative.
    Apple IS at the heart of innovation.
    Apple IS an amazing singular noun.

    Zoiks!

  4. GreenAlien | Oct 21 2005 - 09:28

    Just to clarify that Jason’s rant is in reply to the previous comment and not the article itself.

    So that’s an interesting overview of user interfaces, and a grammar lesson to boot. Bliss.

  5. maccast | Oct 21 2005 - 10:00

    Jason,
    Just a guess… The MacCast has a large International audience and for many listeners English may not be their first language.

  6. Matt Hoult | Oct 21 2005 - 02:29

    Additionally, I think it is easy to think of Apple as a collective rather than a singular collective noun, as you rightly point out. With so much knowledge of the team members, structure and workings of Apple, I think it’s all too easy to think of the Apple engineers and call them Apple for the sake of conversation.

    You are correct in your statement, Jason, and I agree with you; but I am trying to put forth a reason, as did Adam.

  7. rickt42uk | Oct 21 2005 - 04:32

    I like Apple are… because to my mind it sounds right – largely because the company consists of a large group of people. However, Apple is… is probably technically correct. I apologise to all for causing such a grammatical debate.

    In regard to the column itself, I would just like to point out that some of the technologies – particularly the fog screen – are things that are currently in development, not necessarily things that I think Apple will adopt or that I personally think have merit – but I wanted to cover all the innovations that I knew about at time of writing.

  8. Jared Schwager | Oct 21 2005 - 07:54

    Very nice article. Apple is generations ahead of Microsoft when it comes to innovation and new ideas. I’m really surprised at the stuff Apple rolls out at every special event.

  9. Maedi Prichard | Oct 23 2005 - 05:51

    Thank you very much, Richard. I tried Dasher out – what a program. Pure genius.