Modern computing

A short history and a shorter rant

In the beginning, computers were difficult to use. To use one at all, you had to learn programming: there was no other way. Programming was hard, and meant learning a new kind of symbolic language, designed just for giving instructions to machines.

Some people spent a long time making computers that were easier to use, like the Macintosh, but they were still difficult to program. (More difficult than ever, in fact, because the new style of programming used for building graphical applications was much more involved, so programs were much larger than the old command-line tools.) So, though lots of people bought the ‘easy-to-use’ computers, very few learned to program them.

Eventually, the people who made computers concluded that since nobody learned to program, programming was irrelevant to most people. Nobody should have to learn programming to use a computer; only people who wanted to become developers should. They assumed programming would always be hard, and that nobody would ever learn it except professional full-time programmers.

So they built computers which were ‘even easier to use’. So ‘easy-to-use’, in fact, that you couldn’t program them at all. Not only were many of the features you needed to write programs missing; the machines actively stopped you from programming them unless you paid extra money to enable ‘programmability’.

But way back when the first ‘easy-to-use’ computers were just coming out, some other people were working on making programming much easier, so that anyone could do it. Programming is easy, after all: all you need to understand is conditions and repetition. And eventually they worked out simple ways to program that didn’t even require you to understand those.
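To make the claim above concrete, here is a minimal sketch of what ‘conditions and repetition’ mean in practice. The example is hypothetical (not from any system mentioned here); it counts the even numbers in a list using one loop and one test:

```python
# Repetition and conditions: the two ideas the text says are enough.
# Hypothetical example: count the even numbers in a list.
numbers = [3, 8, 5, 12, 7, 2]

count = 0
for n in numbers:       # repetition: do something for each item
    if n % 2 == 0:      # condition: only act when a test passes
        count += 1

print(count)  # → 3
```

Everything else in everyday programming, from spreadsheets to photo filters, is built by combining these two moves with naming and arithmetic.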

These people knew that programming isn’t intrinsically difficult; the craft was simply still in its infancy, and the early tools they knew were clumsy and unsophisticated. They knew they could build much better ones.

They also knew that programming is very powerful and enlightening, because it lets you control the world around you in exactly the way you want (instead of the way someone else, a ‘software designer’, thought you’d want). They knew this because they programmed, all the time. They could make computers do anything for them, from helping their autistic children to selecting the best photo to show to potential dates. They had the power to transform their own lives and those of the people around them, and they wanted to give that power to everyone.


Of course, much of this should be in the present tense. But some of the ‘recent’ history is correctly in the past tense: people have been working on making programming really easy for nearly half a century, and their work has been mostly ignored since personal computing took off.

Instead, programming keeps getting harder on new computers, especially ones made by Apple. This is a shame, because Apple designs things so well, yet seems to view programmability as being in direct conflict with usability and good design. That kind of usability is illusory, because the computer is the same underneath; but in a more boolean sense the computer really is less usable: it can’t be used at all for anything except what certain people (professional programmers) have anticipated it being used for.

As long as the myth that programming is naturally difficult persists, the true power of computers will remain in the hands of the few, and the tools to make it easy will never become available.