The Origins of The Term "Software"

Within the Context of Computing




The first published use of the term software in a computing context is often credited to American statistician John W. Tukey, who used the term in "The Teaching of Concrete Mathematics," published in the January 1958 issue of The American Mathematical Monthly. Tukey wrote:

"Today the 'software' comprising the carefully planned interpretive routines, compilers, and other aspects of automative programming are at least as important to the modern electronic calculator as its 'hardware' of tubes, transistors, wires, tapes accessed 02-02-2010).

Note that Tukey referred to computers as "calculators." Up to this time the word "computer" typically referred to people, and the use of the word for a machine was only just coming into popular use. An earlier appearance of the word "software" has been attributed to Richard R. Carhart, in a RAND Corporation research memorandum on reliability published in August 1953. In August 2014 William J. Rapaport of the Department of Computer Science at SUNY Buffalo emailed me the text of the paragraph in which Carhart used the word software. Carhart wrote:

"In short, we need a total systems approach to reliability. There are four aspects of such an approach which have an important bearing on how a reliablity program is shaped. First, the scope of the program should include the entire system. As an example a missile system includes the vehicle and warhead, the auxiliary ground or airborne equipment, the support and test equipment, and the operating personnel.

In addition, the interactions between these various elements, hardware and software (people), must be recognized and included as the glue that holds the system together." From this it is clear that Carhart did not use the term "software" within the specific context of programming, and priority for the term in that context may rest with Tukey. It is, of course, possible – even likely – that others used the word in spoken, rather than printed, contexts before either Carhart or Tukey. Paul Niquette has stated that he used the term as early as 1953.



The two key technologies in computing, hardware and software, exist side by side. Improvements in one drive improvements in the other. Both are full of major advances and technological dead ends, and both are replete with colourful characters and dynamic start-up companies.

But there are key differences between the hardware and the software industries. Hardware design and manufacture is a comparatively costly exercise, with a consequently high cost of entry. Nowadays only large, or largish, companies can do hardware. But many of the major software advances have been the results of individual effort.

Anybody can start a software company, and many of the largest and most successful of them have come from nowhere, the result of one or a few individuals’ genius and determination. There are many different types of software. There is applications software, such as financial programs, word processors and spreadsheets, that lets us do the sort of work we buy computers for.

There is systems software, such as operating systems and utilities, that sits behind the scenes and makes computers work. There are applications development tools, such as programming languages and query tools, that help us develop applications.

Some types of software are mixtures of these – database management systems (DBMSs), for example, are a combination of applications, systems, and applications development software.

The software industry has made thousands of millionaires and not a few billionaires. Its glamour, its rate of change, its low cost of entry, and the speed at which a good idea can breed commercial success have attracted many of the brightest technical minds and sharpest business brains of two generations.

Hardware is important, but in a very real sense the history of information technology is the history of software.



Software Before Computers


The first computer, in the modern sense of the term, is generally agreed to be the ENIAC, developed in the USA in the final years of World War II (see below). But the concept of software was developed well over 100 years earlier, in 19th-century England. Charles Babbage (1791-1871) was the son of a wealthy London banker.

He was a brilliant mathematician and one of the most original thinkers of his day. His privileged background gave him the means to pursue his obsession: mechanical devices to take the drudgery out of mathematical computation. His last and most magnificent obsession, the Analytical Engine, can lay claim to being the world’s first computer, if only in concept (Augarten, 1985: 44).

By the time of Babbage’s birth, mechanical calculators were in common use throughout the world, but they were calculators, not computers – they could not be programmed. Nor could Babbage’s first conception, which he called the Difference Engine.

This remarkable device was designed to produce mathematical tables. It was based on the principle that any differential equation can be reduced to a set of differences between certain numbers, which could in turn be reproduced by mechanical means.
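
The principle can be illustrated with a small sketch. The following Python fragment (a modern illustration only; the polynomial and its coefficients are invented for the example) tabulates a second-degree polynomial the way a difference engine would: once the leading differences are known, every further value is produced by additions alone.

```python
# Illustrative sketch: tabulating f(x) = 2x^2 + 3x + 1 by the method of
# finite differences. Once the leading differences are known, every further
# value is produced by additions alone, which is what the Difference Engine
# mechanised. The polynomial here is an arbitrary example.

def leading_differences(values, order):
    """Return [f(0), first difference, ..., order-th difference]."""
    leading, col = [values[0]], values
    for _ in range(order):
        col = [b - a for a, b in zip(col, col[1:])]
        leading.append(col[0])
    return leading

def tabulate(leading, count):
    """Generate `count` values of the polynomial using additions only."""
    diffs, out = list(leading), []
    for _ in range(count):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):   # roll each column forward by one addition
            diffs[i] += diffs[i + 1]
    return out

f = lambda x: 2 * x * x + 3 * x + 1
seed = [f(x) for x in range(3)]                       # three seed values suffice for degree 2
print(tabulate(leading_differences(seed, 2), 10))     # matches [f(x) for x in range(10)]
```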

The Difference Engine was a far more complex machine than anything previously conceived. It was partially funded by the British government, and partially by Babbage’s sizeable inheritance. He laboured on it for nearly twenty years, constantly coming up against technical problems. But the device was too complex to be made by the machine tools of the day.

He persevered, and was eventually able to construct a small piece of it that worked perfectly and could solve second-level differential equations.

The whole machine, had it been completed, would have weighed two tonnes and been able to solve differential equations to the sixth level. After battling with money problems, a major dispute with his grasping chief engineer, the death of his wife and two sons, and arguments with the government, the whole project collapsed (Augarten, 1985: 48).

Part of the problem was Babbage’s perfectionism – he revised the design again and again in a quest to get it absolutely right.

By the time he had nearly done so he had lost interest. He had a far grander idea – the Analytical Engine – which never came close to being built. This remarkable device, which lives on mainly in thousands of pages of sketches and notes that Babbage made in his later years, was designed to solve any mathematical problem, not just differential equations.

The Analytical Engine was a complex device containing dozens of rods and hundreds of wheels. It contained a mill and a barrel, and an ingress axle and egress axle. Each of these components bears some relationship to the parts of a modern computer.

And, most importantly, it could be programmed, by the use of punched cards, an idea Babbage got from the Jacquard loom. The first programmer was Ada, Countess of Lovelace, daughter of the famously dissolute English poet Lord Byron. Augusta Ada Byron was born in 1815 and brought up by her mother, who threw Byron out, disgusted by his philandering (O’Connor and Ro bertson, 2002). She was named after Byron’s half sister, who had also been his mistress.



Her mother was terrified that she would become a poet like her father, so Ada was schooled in mathematics, which was very unusual for a woman in that era. She met Charles Babbage in 1833 and became fascinated with the man and his work (Augarten, 1985: 64). In 1843 she translated from the French a summary of Babbage’s ideas which had been written by Luigi Federico Menabrea, an Italian mathematician.

At Babbage’s request she wrote some “notes” that ended up being three times longer than Menabrea’s original.

Ada’s notes make fascinating reading. “The distinctive characteristic of the Analytical Engine ... is the introduction into it of the principle which Jacquard devised for regulating, by means of punched cards, the most complicated patterns in the fabrication of brocaded stuffs ... we may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.” (quoted in O’Connor and Robertson, 2002) Her notes included a way for the Analytical Engine to calculate Bernoulli numbers. That description is now regarded as the world’s first computer program.
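
The Bernoulli numbers themselves can be generated quite compactly on a modern machine. The sketch below (Python, purely for illustration; it uses the standard recurrence rather than the particular formulation Lovelace worked from) shows the kind of calculation her program was designed to carry out.

```python
# Illustrative sketch: Bernoulli numbers from the standard recurrence
# sum_{j=0}^{m} C(m+1, j) * B_j = 0. This is not a transcription of
# Lovelace's Note G procedure, which used a different formulation.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(8))   # [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```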

Ada Lovelace’s life was beset by scandalous love affairs, gambling and heavy drinking. Despite her mother’s best efforts, she was very much her father’s daughter. She considered writing a treatise on the effects of wine and opium, based on her own experiences.

She died of cancer in 1852, aged only 37. Ada’s name lives on in the Ada programming language, devised by the US Department of Defense.


Alan Turing and the Turing Machine


Alan Turing was a brilliant English mathematician, a homosexual misfit who committed suicide when outed, and one of the fathers of modern computing. He was one of the driving forces behind Britain’s remark able efforts during World War II to break the codes of the German Enigma machines. He is best known for two concepts that bear his name, the Turing Machine and the Turing Test.

He conceived the idea of the Turing Machine (he did not call it that – others adopted the name later) in 1935 while pondering German mathematician David Hilbert’s Entscheidungsproblem, or Decision Problem, which involved the relationship between mathematical symbols and the quantities they represented (Hodges, 1985: 80).

At the time Turing was a young Cambridge graduate, already recognised as one of the brightest of his generation. The Turing machine, as described in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem”, was a theoretical construct, not a physical device.



At its heart is an infinitely long piece of paper, comprising an infinite number of boxes, within which mathematical symbols and numbers can be written, read and erased. Any mathematical calculation, no matter how complex, can be performed by a series of actions based on the symbols (Hodges, 1985: 100).

The concept is a difficult one, involving number theory and pure mathematics, but it was extremely influential in early thinking on the nature of computation. When the first electronic computers were built they owed an enormous amount to the idea of the Turing Machine.

Turing’s “symbols” were in essence computer functions (add, subtract, multiply, etc.), and his concept that any complex operation can be reduced to a series of simple sequential operations is the essence of computer programming.
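
A small simulation makes the idea concrete. The sketch below (Python, a modern toy, not Turing's own construction) implements the essentials: a tape of cells, a read/write head, and a finite table of rules; the example machine simply flips every binary digit it reads and then halts.

```python
# Illustrative sketch of a Turing machine: the transition table maps
# (state, symbol) to (symbol to write, head move, next state). The example
# machine below is a toy that inverts every bit on the tape, then halts.
def run(tape, transitions, state="start", halt="halt", max_steps=1000):
    cells = {i: s for i, s in enumerate(tape)}   # the (conceptually infinite) tape
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")            # "_" stands for a blank cell
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", flip_bits))   # -> "01001_"
```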

Turing’s other major contribution to the theory of computing is the Turing Test, used in artificial intelligence. Briefly, it states that if it is impossible for an observer to tell whether questions she asks are being answered by a computer or a human, and they are in fact being answered by a computer, then for all practical purposes that computer may be assumed to have reached a human level of intelligence.

The Birth of Electronic Computing


The first true electronic computer was the ENIAC (Electronic Numerical Integrator and Computer). In 1942 a 35-year-old engineer named John W. Mauchly wrote a memo to the US government outlining his ideas for an “electronic computor” (McCartney, 1999: 49).

His ideas were ignored at first, but they were soon taken up with alacrity, for they promised to solve one of the military’s most pressing problems. That was the calculation of ballistics tables, which were needed in enormous quantities to help the artillery fire their weapons at the correct angles. The US government’s Ballistics Research Laboratory commissioned a project based on Mauchly’s proposal in June 1943.

Mauchly led a team of engineers, including a young graduate student called J. Presper Eckert, in the construction of a general purpose computer that could solve any ballistics problem and provide the reams of tables demanded by the military.

The machine used vacuum tubes, a development inspired by Mauchly’s contacts with John Atanasoff, who used them as switches instead of mechanical relays in a device he had built in the early 1940s (Augarten, 1985: 114).

Atanasoff’s machine, the ABC, was the first fully electronic calculator. ENIAC differed significantly from all devices that went before it. It was programmable. Its use of stored memory and electronic components, and the decision to make it a general purpose device, mark it as the first true electronic computer.

But despite Mauchly and Eckert’s best efforts ENIAC, with 17,000 vacuum tubes and weighing over 30 tonnes, was not completed before the end of the war. It ran its first program in November 1945, and proved its worth almost immediately in running some of the first calculations in the development of the H-Bomb (a later version, appropriately named MANIAC, was used exclusively for that purpose).



By modern day standards, programming ENIAC was a nightmare. The task was performed by setting switches and knobs, which told different parts of the machine (known as “accumulators”) which mathematical function to perform. ENIAC operators had to plug accumulators together in the proper order, and preparing a program to run could take a month or more (McCartney, 1999: 90-94).

ENIAC led to EDVAC (Electronic Discrete Variable Automatic Computer), incorporating many of the ideas of John von Neumann, a well-known and respected mathematician who lent a significant amount of credibility to the project (Campbell-Kelly and Aspray, 1996: 92). Von Neumann also brought significant intellectual rigour to the team, and his famous “First Draft of a Report on the EDVAC” properly outlined for the first time exactly what an electronic computer was and how it should work.

Von Neumann’s report defined five key components to a computer – input and output, memory, and a control unit and arithmetical unit. We still refer to the “Von Neumann architecture” of today’s computers.
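
Those five parts are still visible in even the smallest stored-program interpreter. The sketch below (Python; the tiny instruction set is invented for the example, not taken from the EDVAC report) keeps program and data in one memory and drives everything from a fetch-decode-execute loop, which is the essence of the von Neumann design.

```python
# Illustrative sketch of a von Neumann-style machine: one memory holds both
# program and data, a control loop fetches and decodes instructions, an
# arithmetic unit (the ADD/SUB cases) operates on an accumulator, and PRINT
# stands in for output. The instruction set is invented for this example.
def run(memory):
    acc, pc = 0, 0                                # accumulator and program counter
    while True:
        op, addr = memory[pc]                     # fetch
        pc += 1
        if op == "LOAD":    acc = memory[addr]    # decode and execute
        elif op == "ADD":   acc += memory[addr]
        elif op == "SUB":   acc -= memory[addr]
        elif op == "STORE": memory[addr] = acc
        elif op == "PRINT": print(acc)            # output
        elif op == "HALT":  return

program = [
    ("LOAD", 6), ("ADD", 7), ("STORE", 8), ("PRINT", 0), ("HALT", 0),
    None,         # unused cell
    40, 2, 0,     # data: operands at addresses 6 and 7, result stored at 8
]
run(program)      # prints 42
```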

When the war was over, Mauchly and Eckert decided to commercialise their invention. They developed a machine called the UNIVAC (Universal Automatic Computer), designed for general purpose business use.

But they were better engineers than they were businessmen, and after many false starts their small company was bought by office machine giant Remington Rand in 1950. The first commercial machine was installed in the US Census Bureau.

UNIVAC leapt to the forefront of public consciousness in the 1952 US presidential election, where it correctly predicted the results of the election based on just one hour’s counting. It was not a particularly impressive machine by today’s standards (it still used decimal arithmetic, for a start), but nearly 50 of the original model were sold.

The 1950s was a decade of significant improvements in computing technology. The efforts of Alan Turing and his Bletchley Park codebreakers during World War II led to a burgeoning British computer industry. Before his death, after studying von Neumann’s EDVAC paper, Turing designed the ACE (Automatic Computing Engine), which led to the Manchester Mark I, technically a far superior machine to ENIAC or EDVAC (Augarten, 1985: 148). It was commercialised by Ferranti, one of the companies that was later to merge to form ICL, the flag bearer of the British computer industry.

The most significant US developments of the 1950s were the Whirlwind and SAGE projects. MIT’s Whirlwind was smaller than ENIAC, but it introduced the concepts of real-time computing and magnetic core memory.

It was built by a team led by Ken Olsen, who later founded Digital Equipment Corporation, the company that led the minicomputer revolution of the 1970s (Ceruzzi, 1999: 140). SAGE (Semi-Automatic Ground Environment) was a real-time air defence system built for the US government in the Cold War.

The project was accorded top priority, with a virtually unlimited budget. In a momentous decision, the government awarded the contract to a company that had only just decided to enter the computer industry. That company’s name was IBM.

SAGE broke new ground on a number of fronts. The first was its sheer size. There were 26 data centres, each with a 250 tonne SAGE mainframe. It was built from a number of modules that could be swapped in and out. It was the world’s first computer network, using the world’s first fault-tolerant computers and the world’s first graphical displays.

And it gave IBM a head start in the computer industry that it has retained ever since (Augarten, 1985: 204). By the end of the 1950s there were dozens of players in the computer industry.



Remington Rand had become Sperry Rand, and others like RCA, Honeywell, General Electric, Control Data and Burroughs had entered the field. The UK saw the likes of Ferranti, International Computers and Singer, while continental Europe had Bull, Siemens and Olivetti. In Japan, a 40-year-old company called Fujitsu moved into computers.

All these machines, of course, ran software, but there was no software industry as we understand it today. Early commercial machines were programmed mechanically, or by the use of machine language. In the early days there was little understanding of the distinction between hardware and software.

That was to change with the development of the first programming languages.


Programming Languages and Operating Systems


The term “software” did not come into use until 1958. It is probable that it was coined by Princeton University professor John W. Tukey in an article in The American Mathematical Monthly in January of that year (Peterson, 2000). The word “computer” was originally applied to humans who worked out mathematical problems. ENIAC was designed to take over the work of hundreds of human “computers” who were working on ballistics tables.

Most of them were women, recruited from the best and brightest college graduates when the men went off to war. Thus, the first computer programmers were, like Ada Lovelace, women. The most famous and influential of them was Grace Murray Hopper, a mathematician who joined the US naval reserve during the war and who rose to become an Admiral. She died in 1992.

In 1951 Hopper joined Eckert and Mauchly’s fledgling UNIVAC company to develop an instruction code for the machine. She devised the term “automatic programming” to describe her work (Campbell-Kelly and Aspray, 1996: 187). She also used the word “compiler” to describe “a program-making routine, which produces a specific program for a particular problem” (quoted in Ceruzzi, 1999: 85).

Today the term “compiler” means a program that translates English-like instructions into binary code, but Hopper used the term to describe a way of handling predefined subroutines, such as those hard-wired into the ENIAC.

The first compiler in the modern sense of the word was devised for the MIT Whirlwind project in 1954 by J.H. Laning and N. Zierler (Ceruzzi, 1999: 86). Hopper became a tireless proselytiser for the concept of automatic programming.

Her work led directly to the development of FORTRAN (“FORmula TRANslator”), the world’s first true computer language. FORTRAN was developed in the mid-50s by an IBM development team led by a young researcher named John Backus.

The first version of FORTRAN was released in 1954. There were many skeptics who believed that it would be impossible to develop a high-level language with anything like the efficiency of machine language or assembler, but Backus argued for FORTRAN on economic grounds.

He estimated that half the cost of running a computer centre was the programming staff (Campbell-Kelly and Aspray, 1996: 188), and he saw in FORTRAN a way of vastly improving programming productivity.

He was right. FORTRAN enabled people to program computers using simple English-like instructions and mathematical formulas. It led to a number of other languages, the most successful of which was COBOL (Common Business-Oriented Language), initially developed by the US government-sponsored Conference on Data Systems Languages (CODASYL) and strongly promoted by Grace Hopper.

COBOL and FORTRAN dominated programming until the late 1970s. Other languages, such as ALGOL (Algorithmic Language), PL/1 (Programming Language 1), RPG (Report Program Generator) and BASIC (Beginner’s All-purpose Symbolic Instruction Code) also became popular, inspired by the success of FORTRAN and COBOL.

These languages became known as 3GLs (third generation languages), so called because they were an evolution from the first and second generations of computer language – machine code and assembler. Some people wrote bits of applications in those more efficient but much more cryptic languages, but the English-like syntax of the 3GLs made them easier to learn and much more popular.

3GLs brought a discipline and a standardisation to program design. They enabled programs to be structured, or ordered in a hierarchical fashion, usually comprising modules with a limited number of entry and exit points.
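
As a small, hypothetical illustration (shown here in Python rather than a 3GL, and with invented pay rules), the structured style amounts to composing a program from short modules, each with a single entry point at the top and a single exit at the bottom:

```python
# Illustrative sketch of structured, hierarchical design: each module does one
# job, is entered at the top and exits at the bottom. The payroll rules here
# are invented purely for the example.
def gross_pay(hours, rate):
    return hours * rate

def tax(gross):
    return gross * 0.2                 # a flat 20% rate, for illustration only

def net_pay(hours, rate):
    gross = gross_pay(hours, rate)
    return gross - tax(gross)

def payroll(employees):
    for name, hours, rate in employees:
        print(name, net_pay(hours, rate))

payroll([("Ada", 38, 50.0), ("Grace", 40, 55.0)])
```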



Structured programming led to structured design, comparatively simple methodologies which set out ways in which these modules could be strung together. Soon, the term “systems analysis” came to be used to describe the process of collecting information about what a computer system was intended to do, and the codification of that information into a form from which a computer program could be written.

Early computers lacked not only programming languages but also operating systems. Every function had to be separately programmed, and in the early days there was no distinction between systems and applications software.

Programming languages such as FORTRAN and COBOL greatly improved general programming functions, but the task of handling machine-specific functions such as control of peripherals was still left up to individual programmers.

Most of the innovative early work on what we now call operating systems was done by individual users (Ceruzzi, 1999: 96). One such system, designed at General Motors in 1956, evolved into IBM’s well-known JCL (Job Control Language), a basic operating system designed for punch card systems that would tell the computer which cards were data and which were instructions.

The first true operating system is generally agreed to be MAD (Michigan Algorithmic Decoder), developed at the University of Michigan in 1959 (Ceruzzi, 1999: 98). MAD was based on the ALGOL 3GL, and was designed to handle the various details of running a computer that were so tedious to code separately.

But the concept of the operating system was still largely unknown, until the momentous development of IBM’s S/360.


The IBM System/360 and OS/360


April 1964 marks the beginning of the modern computer industry, and by extension the software industry.

In that month IBM released the System/360, its revolutionary mainframe architecture. The 19 models in the S/360 range comprised the first-ever family of computers, the first with a consistent architecture across computers of different sizes.

They could use the same peripherals and software, making it very easy to move to a larger computer within the range, and to move on to new models as they were released. The idea of a family of computers seems quite normal now, but back in the early 1960s it was revolutionary.

Previous IBM machines, such as the 1401, were incompatible with other machines in IBM's range. Every time IBM, or anybody else, brought out a new computer, users had to rewrite their entire applications suite and replace most of their peripherals.

IBM bet the company on the S/360, investing over $US5 billion and 350,000 man-years (Watson, 1990: 340). It was the largest R&D project ever undertaken by a commercial organisation. It was an enormous gamble, but it paid off.

The S/360 (the numbers were meant to indicate the points of the compass) was more than twice as successful as IBM had hoped, and it became virtually the standard operating environment for large corporations and government agencies.

The S/360 evolved into the S/370, and then into the S/390, and then into today’s zSeries, with many of its features intact. A piece of software written for the first S/360 will still run on today’s zSeries machines. Predictions of the death of the mainframe, common 10 to 15 years ago, have proved wildly inaccurate, and the S/360’s successors still power most large transaction processing systems today.

But the S/360’s success was not pre-ordained. Many within the company argued against it. The machine was rushed into production, and IBM could not handle the demand, its internal accounting and inventory control systems buckling under the strain.

IBM’s chairman at the time, Thomas J. Watson Jr, recounts the story (Watson, 1990: 349): “By some miracle hundreds of medium-sized 360s were delivered on time in 1965. But ... behind the scenes I could see we were losing ground.

The quality and performance ... were below the standards we’d set, and we’d actually been skipping some of the most rigorous tests ... everything looked black, black, black. Everybody was pessimistic about the program ... we were delivering the new machines without the crucial software; customers were forced to use temporary programs much more rudimentary than what we’d promised ... with billions of dollars of machines already in our backlog, we were telling people they’d have to wait two or three years for computers they needed ... I panicked.”

But IBM recovered and the S/360 became the most successful computer in history. It introduced a number of innovations, such as the first transaction processing system and the first use of solid logic technology (SLT), but it was no technical miracle. Its peripherals performed poorly, its processor was slow, its communications capabilities were virtually non-existent.



Most importantly, the S/360 introduced the world’s first sophisticated operating system, OS/360. This was both the architecture’s biggest advance, and also its biggest problem. OS/360 was by far the largest software project ever undertaken, involving hundreds of programmers and more than a million lines of code (Campbell-Kelly and Aspray, 1996: 197).

In charge of the project was Fred Brooks, who was to become the father of the discipline known as software engineering. Brooks’s book on the development of OS/360, The Mythical Man-Month, remains one of the all-time classic descriptions of software development. The task facing Brooks and his team was massive. They had to design an operating system that would work on all S/360 models, and which would enable the machines to run multiple jobs simultaneously, a function known as “multitasking”.

At its peak the OS/360 project had more than 1000 programmers working on it, which led Brooks to his famous conclusion that software cannot be designed by committee, and that there are no necessary economies of scale: “the bearing of a child takes nine months, no matter how many women are assigned” (Brooks, 1995: 17).

OS/360 was eventually delivered, years late and millions of dollars over budget. It was never completely error-free, and bugs kept surfacing throughout its life. IBM eventually spent over half a billion dollars on the project, the biggest single cost of the entire S/360 project (Campbell-Kelly and Aspray, 1996: 200).

But it was the archetype of the operating system, the precursor of all that have followed it. With all its problems the S/360 was an architecture, the first the computer industry had ever seen. It excelled nowhere, except as a concept, but it could fit just about everywhere. For the first time upward and downward compatibility was possible.

The phrase “upgrade path” entered the language. With the S/360, IBM also promised compatibility into the future, protecting customers’ investment in their applications and peripherals. With the S/360 and OS/360, the operating system became the key distinguishing factor of a computer system.

Soon other companies began making “plug-compatible computers” that would run S/360 software. The best known of these was Amdahl, started by IBM engineer Gene Amdahl, who had led the design team for the S/360. Amdahl released its first IBM-compatible mainframe in 1975....

