The History of Computers

Computers represent man’s search for a tool to simplify the task of quantifying the world around him. As this need became more widespread and pronounced, the technology of each era produced machines to meet it. However, the inventors of the various calculating machines often developed designs so advanced that they were not applicable to the needs of their time. In fact, the theoretical outline of the modern electronic digital computer was in place long before the ability to manufacture it was.

The first calculating machine used was the abacus, invented in China over 2,000 years ago. It was made up of a wooden rack holding parallel rods of beads that were used to add, subtract, multiply and divide. Abacuses are still widely used in China, and I remember reading a news article not long ago about a competition there between an abacus and a calculator. It was found that one could add and subtract faster on an abacus but multiply and divide faster with a calculator.

The next notable calculating invention came from Blaise Pascal in 1642. He invented a mechanical calculating machine that could add and subtract large numbers. This was followed by an invention by Gottfried von Leibniz, the co-inventor of calculus, that could add, subtract, multiply and divide.

In 1822, Charles Babbage designed a machine called the Difference Engine, and later a more ambitious Analytical Engine. The latter was notable in that it included something called “conditional control transfer,” meaning it could change its sequence of operations depending on the outcomes of previous operations. This is the concept behind the IF...THEN construct in modern computer programming, and it gives calculating machines much greater flexibility and power. Babbage’s inventions were all ahead of their time. So far ahead, in fact, that there were no practical uses for them and they could not be efficiently manufactured due to the lack of precision machining methods.
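To make the idea concrete, here is a minimal sketch in modern Python (purely illustrative, and obviously nothing Babbage himself could have run) of a program whose next operation depends on the outcome of a previous one:

    # Conditional control transfer: which operation runs next
    # depends on the result of an earlier calculation.
    balance = 120
    withdrawal = 150

    if withdrawal <= balance:            # IF the funds cover the withdrawal...
        balance = balance - withdrawal   # ...THEN subtract it
    else:                                # ...otherwise take a different path
        print("Insufficient funds")

    print("Remaining balance:", balance)

The same mechanism, scaled up, is what lets a single machine follow entirely different courses of action from one run to the next.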

The next advance in computing came about through Herman Hollerith’s use of punched cards in his work for the U.S. Census Bureau in 1890. Punched cards had been used earlier in weaving looms to create intricate, repeatable patterns in fabric, but this was their first use in organizing information. Punched cards reduced input errors, improved workflow and provided a cheap, plentiful memory storage system. They remained in use in computing until quite recently. I remember as a first-year Information Systems student at San Diego State having to write assembly language programs on punched cards that would run on an IBM 360 with a one-day turnaround. A significant portion of my disposable income went to the purchase of punched cards.

Computing Generations
The balance of the history of computing has been divided into generations. The First Generation (1951-1959) grew out of the military’s need during World War II to calculate the trajectories of new weapons systems. J. Presper Eckert and John Mauchly invented the ENIAC (Electronic Numerical Integrator And Computer). It required about 18,000 vacuum tubes and occupied 1,800 square feet. Its main drawback was that reprogramming it for a different task required rewiring its hardware. John von Neumann, a mathematician at Princeton, described a method by which new programs could be stored electronically rather than wired into the hardware. This brought about the concept of software as something distinct from hardware and greatly enhanced the efficiency and commercial viability of computers.

The Second Generation (1959-1963) was brought about by the use of transistors instead of vacuum tubes as the basic switching devices in computers. Transistors were faster, more reliable and more efficient than vacuum tubes and made computers practical for business use.

The Third Generation (1963-1975) was characterized by the use of integrated circuits, which packed many transistors onto one small chip. These made complicated computers easier and more practical to produce and brought costs down to the point where computers were beginning to be affordable to the average person.

The Fourth Generation (1975-present) is typified by the use of microprocessors: devices in which the entire arithmetic-logic and control units of a computer are contained on one small chip. The power of these devices has been described by “Moore’s Law,” the popular version of which holds that processing power doubles roughly every eighteen months (Moore’s original observation was that the number of transistors on a chip doubles about every two years).
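As a back-of-the-envelope illustration of how quickly that doubling compounds (the time spans and factors below are just the arithmetic of the popular eighteen-month figure, not measured data), here is a short Python sketch:

    # Growth implied by a doubling every eighteen months (1.5 years).
    # factor = 2 ** (years / 1.5); purely illustrative arithmetic.
    for years in (1.5, 3, 7.5, 15):
        factor = 2 ** (years / 1.5)
        print(f"After {years:>4} years: roughly {factor:,.0f}x the original performance")

At that rate, performance grows about a thousandfold every fifteen years.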

The Fifth Generation is yet to be fully defined. One aspect of it is “artificial intelligence,” in which computers demonstrate more and more of the higher-level functions of human intelligence. The extent to which computers can simulate the capabilities of the human mind is yet to be determined and will be a fascinating area to observe in the coming years.

Peter Honan