BY Henry Stott in Profiles | 04 MAR 00
Featured in Issue 51

Ask Me Another

Programming languages


The computer programmer is a creator of universes for which he alone is the lawgiver... No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority.

Joseph Weizenbaum, creator of ELIZA. 1

We tend to think of a computer as content-neutral, with no perspective on the calculations it dispassionately executes on our behalf. But is this really so? We feed it a world-state or description of a problem in some previously agreed format. The computer then mechanically manipulates this input according to a system of rules, and the results can be translated back to provide us with handy insights into the implications of the original material. In this scenario, the computer has not only prescribed how we should view the world (through the filter of what data it will accept) but also controlled what we can then deduce about it. Computers can think of their environment in terms of maps, sounds, shapes, salaries or texts, simply as a function of the software they are carrying, but the way this software is designed implicitly constrains and transforms the content that passes through it. In this respect, it embodies a kind of private, imaginary universe, with its own ontology and laws of interaction. Suddenly it seems quite plausible that software could house Mankind's most compelling ideologies, casting the engineers who create it as the Karl Marxes of our time.

It is generally accepted that the first programmer was Ada Byron, Countess of Lovelace and daughter of the notorious poet. A mathematician, she met Charles Babbage in 1833 and together they set about designing the Analytical Engine, a device that would be able, for example, to generate the particular series of numbers known as the Bernoulli numbers. Though their ideas outstripped the technology of the time, her programme designs incorporated many techniques - such as looping and branching - that were subsequently adopted. At this stage, though, programming really only meant setting switches on a machine, and the technology represented more of a novelty than a potent adjunct to human cognition. Nevertheless, Babbage clearly envisaged such a future, describing how man's use of increasingly complex tools should result 'in the substitution of machinery, not merely for the skill of the human hand, but for the relief of the human intellect'. 2

Since that time, software development has obviously undergone a radical evolutionary process. Literally thousands of different programming environments have been promulgated, from which just a few survived to form the foundations for the next generation. A good example is seen in the ideas of US Navy Admiral Grace Murray Hopper. She began software programming in the late 40s, just after the flurry of activity that had heralded the onset of computing proper. At that time, most programmes were still written in machine code, binary sequences not unlike Byron's. These were punched into cards and then fed into machines that used switches and relays or vacuum tubes to perform their data manipulation. ENIAC, built at the University of Pennsylvania between 1943 and 1946 by John William Mauchly and J. Presper Eckert Jr, was typical of the period - a ten-foot-tall monster sprawling over 1,000 square feet and weighing around 30 tons.

Hopper's first programme, A-0, was designed in 1952 for use on the UNIVAC I. A-0 was an early compiler that allowed the computer to be controlled using symbolic instructions rather than binary sequences. A typical instruction would tell the computer to move data from one memory location to another or perform some mathematical operation. This simple first step developed into FLOW-MATIC, a business-oriented language for running automated billing and payroll operations. In turn, FLOW-MATIC inspired the design of COBOL (COmmon Business Oriented Language), a proper procedural language, similar to BASIC, and one of the most widely used programming languages ever created.

Having once relied on vacuum tubes and discrete transistors, computers now have all those transistors and switches laid out on a small silicon wafer. Today, a standard chip like the Pentium II Processor contains around 7,500,000 transistors, compared to the Intel 4004 of 1971, which had merely 2,300. Likewise, software has developed from first-generation machine code and second-generation assemblers through to third-generation languages like LISP, Pascal and C++. The period from 1960, when Admiral Hopper was first designing COBOL, to the present has seen something like 2,600 different programming languages - a figure which includes neither the many dialects nor the different applications written in these languages (and with which most of us are probably more familiar).

As you would expect, all this software embodies a host of analytic objectives and perspectives - the programmers' world-views, if you like. There are now endless debates about which software is more appropriate for a particular task, and software engineers become quite heated over the features of different languages. For example, in 1968 Edsger Dijkstra published the letter 'Go To Statement Considered Harmful', a text that was subsequently described as the first salvo in the structured programming wars. 3 This may all sound rather arcane, but, in a modest way, what really lies behind these discussions is an attempt to converge on some kind of manifesto for the science of good and clear thinking. These foundations for a computer-based society were, in fact, the first elements of what we might call a Robotopia.
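To get a sense of what was at stake, consider a minimal sketch - written here in C++ rather than the languages of the 1960s, with function names invented for the purpose - of the same trivial calculation expressed first with GO TO jumps and then in the structured style Dijkstra championed.

    #include <iostream>

    // Unstructured version: the reader must chase labels to reconstruct the loop.
    int sum_with_goto(int n) {
        int i = 0, total = 0;
    loop:
        if (i >= n) goto done;
        total += i;
        ++i;
        goto loop;
    done:
        return total;
    }

    // Structured version: the loop's extent and exit condition are visible at a glance.
    int sum_structured(int n) {
        int total = 0;
        for (int i = 0; i < n; ++i)
            total += i;
        return total;
    }

    int main() {
        std::cout << sum_with_goto(10) << " " << sum_structured(10) << "\n";  // prints 45 45
        return 0;
    }

Both functions produce the same answer; the argument was about which one a human being could read, reason about and prove correct.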

Software engineers now lock horns on a host of narrow dimensions - such as language uniformity, which refers to the consistency of a language's notation. A typical example is FORTRAN's use of parentheses in a wide variety of contexts, which leads to those subtle and difficult-to-find errors caused by having one out of place. Likewise, we could consider a language's compactness: APL is famously compact and therefore famously impenetrable. Then there are issues around control structures and the general linearity of the software, since too much branching and looping becomes confusing. Debates also rage over how to modularise programmes into cohesive components. Finally, people can be divided into those who see the world as data flows and those who see it as full of data objects.

One of the more ambitious attempts to embody a far-reaching world-view in software was LISP. It was designed to tackle head-on the kind of common-sense reasoning problems that would pave the way to computer wisdom. It was built in the late 50s at MIT by John McCarthy (a key figure in the genesis of artificial intelligence) and was primarily intended to process logic. McCarthy published a paper entitled 'Programs with Common Sense' in which he described a hypothetical programme called Advice Taker, which was designed to use general knowledge to search for solutions to problems. In other words, rather than approaching a database with demands for total revenue figures by quarter and by region in order to analyse the trends, McCarthy's idea was that you could simply ask the computer an innocuous question like 'anything to worry about in the new sales figures?' or 'what's the boss so worked up about?'

A crucial feature of McCarthy's perspective is that the software should be able to accept new information in the course of operation. This means the programme would be able to achieve competence in new areas without being reprogrammed. Ponder the implications for mankind of reaching that watershed - having software that can create better software without human intervention. As such, LISP represents an important attempt to lay out the central principles of knowledge representation and reasoning. Of course, the approach has its critics and other avenues are being pursued to achieve the same objectives of programming without programmers. There are elements of the Windows environment, for example, that were built using so-called 'genetic algorithms'. Similarly, in the pursuit of common-sense reasoning, the CYC project has incorporated around ten million everyday facts into a global ontology that describes what kinds of things exist and what properties they have. These descriptions, together with the user's data and queries, can then be combined and manipulated by 20 different types of inference mechanism. One way or another, we strive to achieve a common-sense programming environment.
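The flavour of such systems can be suggested with a toy sketch. What follows is not CYC's machinery or McCarthy's Advice Taker, merely an invented illustration, in C++, of a programme that accepts new facts while it is running and uses a single inference rule to answer questions it was never explicitly told.

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    // A toy knowledge base: 'is-a' facts can be asserted at any time, and one
    // inference rule (the transitivity of 'is-a') lets the programme answer
    // questions that were never stated as facts. Cycles in the hierarchy are
    // not handled - this is an illustration, not an inference engine.
    class KnowledgeBase {
        std::map<std::string, std::set<std::string>> is_a;  // e.g. "penguin" -> {"bird"}
    public:
        void assertFact(const std::string& thing, const std::string& category) {
            is_a[thing].insert(category);
        }
        // Walk up the 'is-a' hierarchy looking for the category.
        bool query(const std::string& thing, const std::string& category) const {
            auto it = is_a.find(thing);
            if (it == is_a.end()) return false;
            for (const auto& parent : it->second)
                if (parent == category || query(parent, category)) return true;
            return false;
        }
    };

    int main() {
        KnowledgeBase kb;
        kb.assertFact("penguin", "bird");
        kb.assertFact("bird", "animal");
        std::cout << std::boolalpha << kb.query("penguin", "animal") << "\n";  // true
        // New knowledge arrives in the course of operation - no reprogramming required.
        kb.assertFact("animal", "living thing");
        std::cout << kb.query("penguin", "living thing") << "\n";              // true
        return 0;
    }

Crude as it is, the sketch captures the principle: assert a new fact and the programme's competence grows, without anyone rewriting it.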

Back in the mainstream world of programming for business applications, the world-views expressed are perhaps less ambitious, but still important. Reflect on how using a spreadsheet or word processor has shaped the way you subsequently think about a problem. Once we understand the way a tool is structured, it becomes inevitable that we start to subtly adopt its perspective when mulling over how to apply it to a situation. The degree to which this influence carries over into other aspects of your life is probably a matter of personal choice. However, we all know people who have made that transition. A typical case, mentioned earlier, is the difference between perceiving a problem in terms of data flows or data objects. This has its roots in the way people conceptually analyse their systems prior to programming. Some will emphasise flow diagrams, with bits of data moving around and being acted upon. Others will draw structural diagrams, describing different classes and their attributes and listing their interactions.

Perhaps the best example of the object-oriented approach is C++, a language built in the early 80s by Bjarne Stroustrup, a Danish programmer working at Bell Labs in New Jersey. The main idea of object orientation is that it facilitates programming by emphasising the handling of object classes, defined as a combination of a data structure and a set of properties and methods to go with it. The simplest examples already existed in the data typing architectures of the original procedural software. For instance, you might be able to enter dates in an xx/xx/xxxx format and not only could the software understand what you were talking about, but it could also support a set of associated operations. In this case, it might provide for the calculation of the day of the week on which a date falls, or be able to subtract two dates to get the number of days in between (allowing for leap years, of course). The object class concept took this data typing a stage further. C++ can support the creation of any kind of data object together with associated methods: we could create the class of three-dimensional maps together with instructions on how to calculate distances and optimal routes, or build a description of an asteroid with algorithms for its kinetic energy and momentum. This may seem narrow, but it is an issue that has semantic meaning and the nature of categorisation right at its heart. As experimental psychologists struggle to understand the way people cope with these issues, so our software engineers are defining how our computers will accomplish the same.
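To make the idea concrete, here is a minimal sketch of such a date class in C++. The class, its member names and its simplified day-count arithmetic are invented for illustration rather than drawn from Stroustrup's libraries or any standard one.

    #include <iostream>
    #include <string>

    class Date {
        int y_, m_, d_;
        // Serial day count since 1 January 1970 in the proleptic Gregorian calendar,
        // so leap years are handled by the arithmetic rather than by special cases.
        long serial() const {
            int y = y_ - (m_ <= 2);
            long era = (y >= 0 ? y : y - 399) / 400;
            unsigned yoe = static_cast<unsigned>(y - era * 400);
            unsigned doy = (153u * (m_ + (m_ > 2 ? -3 : 9)) + 2) / 5 + d_ - 1;
            unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;
            return era * 146097 + static_cast<long>(doe) - 719468;
        }
    public:
        Date(int y, int m, int d) : y_(y), m_(m), d_(d) {}
        // Number of days between two dates, leap years included.
        long operator-(const Date& other) const { return serial() - other.serial(); }
        // The day of the week on which the date falls (1 January 1970 was a Thursday).
        std::string weekday() const {
            static const char* names[] = {"Thursday", "Friday", "Saturday", "Sunday",
                                          "Monday", "Tuesday", "Wednesday"};
            long s = serial();
            return names[((s % 7) + 7) % 7];
        }
    };

    int main() {
        Date moonLanding(1969, 7, 20), millennium(2000, 1, 1);
        std::cout << millennium - moonLanding << " days apart\n";  // 11122
        std::cout << moonLanding.weekday() << "\n";                // Sunday
        return 0;
    }

The point is not the calendar arithmetic but the packaging: the data (year, month, day) travels with the operations that make sense for it, exactly as the asteroid would travel with its kinetic energy and momentum.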

So that is the genesis of computing: a kind of accretion process driven by the powerful economic forces of automation. Significantly, computers were built to automate the same activities - accounting, administration, inventory management and document handling - that first drove peoples such as the Sumerians to devise writing around 5,000 years ago, using whole-word symbols for numbers and lexical concepts such as man, wheat, ox, and so on. Without writing there could be only the most rudimentary storage and dissemination of knowledge; computers are now transforming our information-processing capabilities in a similar way. It is intriguing to speculate on what simplifications and abstractions will now flow from our first faltering robographic representations of the world. Perhaps computers are the natural heirs to the throne of writing. In them, as within literature, we find the tools to create and explore universes of our own making.

Aristotle wrote that 'The search for the truth is in one way hard and in another easy - for it is evident that no one of us can master it fully, nor miss it wholly. Each one of us adds a little to our knowledge of nature, and from all the facts assembled arises a certain grandeur.' 4 The attraction of computers has historically been the speed and accuracy with which they can execute relatively simple tasks, rather than their wisdom and the 'grandeur' of their perspective. But they represent the means by which Mankind will assemble and pool its observations in future. As their world-views become more integrated and complete, our relationship with them will inevitably start to change. Perhaps, at this moment, we are progressing towards a Robotopia in which the envelope of humanity's best practices and insights will exist as a theoretical construct within our machines, an anchor to our worldly activities.

It seems clear that computers will increasingly migrate from providing highly focussed information processing leverage towards undertaking the management of whole, discrete aspects of our affairs. In the short term it will be our basic shopping and financial planning, then our transportation and foreign language communication needs, and finally our overall education and household management. Each area will evolve from a purely manual process, through various stages of computer assistance, until eventually it will become almost entirely hands-free. At this point, within these specific domains, individual human intelligence will have been superseded. It is inevitable that, in certain contexts, computers will increasingly manage the world, and it should not surprise us. As Aristotle noted, an individual human was never going to be the final word on good thinking, so better thinking should come as no surprise. Nor should it surprise us when these systems' opinions start to diverge from our own. After all, that is the point. Consider a very specific example - with a fiduciary responsibility to both you and the future you in almost equal measure, the robotic financial advisor that comes with your credit card continually bemoans your clothing expenditure. Why, it whinges, do you insist on irrationally running up your borrowings and effectively foisting a larger problem onto yourself for later? Well, we're only human, you reply. The immediate gratification we feel when spending money outweighs the dim dread of paying back some usuriously larger amount tomorrow. I agree there's nothing necessarily wrong with wanting things immediately, the card concedes; in fact, when you were a hunter-gatherer who had to maintain a constant rate of intake, and may not have survived to foot the bill anyway, it was a pretty sound strategy. However, in other contexts, such as clothes shopping in the 21st century, it is probably not the way to go.

With computer cognition scaling new levels of coherence and optimality in select domains, when thinking does diverge, the bulk of humans would be wise to follow their computer's advice. Learn from your software, because, in the long run, those who do will outmanoeuvre those who do not. Perhaps the best way to receive the bad news that your behaviour is sub-optimal is to have it broken to you gently by a patriotic interface between your own bizarre belief system and the dizzyingly complex and alienating world in which you live. Set against the literary canon that flowed from the invention of writing, not only is the computer canon an order of magnitude more extensive, but it will come to you in an entirely personalised form. No more difficult-to-digest parables and principles - the future is about succinct and actionable in-the-field advice.

Yet Robotopia is likely to be as partial and identifiably human in origin as all our other artefacts. It will always contain some of the particular idiosyncrasies of the intelligences that created it, in some areas perhaps just the foibles of one or two instrumental individuals such as McCarthy or Stroustrup. The seeds of these future world-views are being sown today: whilst contemporary software engineers may appear to be engaged in discussions of unbridled pedantry, they may actually be in the process of laying the foundations of something far more important and enduring that may subtly influence every subsequent generation. In fact, these may be the only surviving artefacts of our current civilisation in the far distant future - immortalised by their integral relationship to the platform on which a self-perpetuating sentience is built: the decision to code everything in binary, for example, or perhaps the adoption of discrete rather than analogue technologies in the first place. Whilst the rocks erode and species come and go, there is a slim chance that it will remain too expensive and unrewarding to change some of the humblest original assumptions.

1. Joseph Weizenbaum, Computer Power and Human Reason: From Judgement to Calculation, W. H. Freeman, San Francisco, 1976

2. Charles Babbage, as quoted in Ray Kurzweil, The Age of Intelligent Machines, MIT Press, Cambridge Mass., 1990

3. Edsger Dijkstra, 'Go To Statement Considered Harmful', Communications of the ACM (Association for Computing Machinery), vol. 11, no. 3, March 1968

4. Aristotle, Metaphysics, book II, chapter 1, trans. H. C. Lawson-Tancred, Penguin Classics, London, 1998.
