Introduction
The word "computer" was first
recorded as being used in 1613 and originally was used to describe a human who
performed calculations or computations. The definition of a computer remained
the same until the end of the 19th century, when the industrial revolution gave
rise to machines whose primary purpose was calculating.
In the 20th century breakthroughs in technology
allowed for the ever-evolving computing machines we see today. But even prior
to the advent of microprocessors and supercomputers, there were certain notable
scientists and inventors who helped lay the groundwork for a technology that
has since drastically reshaped our lives.
Pre-20th century
The Ishango bone
Devices have been used to aid computation for
thousands of years, mostly using one-to-one
correspondence with fingers. The earliest counting
device was probably a form of tally stick. Later record keeping aids
throughout the Fertile Crescent included calculi (clay
spheres, cones, etc.) which represented counts of items, probably livestock or
grains, sealed in hollow unbaked clay containers.[3][4] Counting rods are another example of such early aids.
The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of
reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a
table, and markers moved around on it according to certain rules, as an aid to
calculating sums of money.
The Antikythera mechanism is believed to be
the earliest mechanical analog "computer", according to Derek J. de Solla Price.[5] It was
designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has
been dated to circa 100 BC. Devices of a level of complexity comparable
to that of the Antikythera mechanism would not reappear until a thousand years
later.
Many mechanical aids to calculation and
measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented
by Abū Rayhān al-Bīrūnī in the early 11th century.[6] The astrolabe was
invented in the Hellenistic world in either the 1st or 2nd centuries BC
and is often attributed to Hipparchus. A
combination of the planisphere and dioptra, the
astrolabe was effectively an analog computer capable of working out several
different kinds of problems in spherical astronomy. An astrolabe incorporating a
mechanical calendar computer[7][8] and gear-wheels
was invented by Abi Bakr of Isfahan, Persia in 1235.[9] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[10] an early
fixed-wired knowledge
processing machine[11] with a gear train and
gear-wheels,[12] circa
1000 AD.
The sector, a calculating instrument used for solving
problems in proportion, trigonometry, multiplication and division, and for
various functions, such as squares and cube roots, was developed in the late
16th century and found application in gunnery, surveying and navigation.
The planimeter was a
manual instrument to calculate the area of a closed figure by tracing over it
with a mechanical linkage.
The slide rule
The slide rule was
invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a
hand-operated analog computer for doing multiplication and division. As slide
rule development progressed, added scales provided reciprocals, squares and
square roots, cubes and cube roots, as well as transcendental functions such as logarithms
and exponentials, circular and hyperbolic trigonometry and other
functions. Aviation is one of the few fields where
slide rules are still in widespread use, particularly for solving time–distance
problems in light aircraft. To save space and for ease of reading, these are
typically circular devices rather than the classic linear slide rule shape. A
popular example is the E6B.
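Because log(a × b) = log a + log b, a slide rule multiplies by adding lengths proportional to logarithms. The short sketch below is only an illustration of that principle in code, not a model of any particular rule.

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply two numbers the way a slide rule does:
    add their logarithms, then read off the antilogarithm."""
    # Sliding one scale against the other adds the two log-lengths.
    total_length = math.log10(a) + math.log10(b)
    # Reading the result from the scale is taking the antilog.
    return 10 ** total_length

print(slide_rule_multiply(2.0, 3.0))   # ~6.0
print(slide_rule_multiply(3.5, 12.0))  # ~42.0
```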
In the 1770s Pierre Jaquet-Droz, a Swiss watchmaker, built a
mechanical doll (automaton) that
could write holding a quill pen. By switching the number and order of its
internal wheels different letters, and hence different messages, could be
produced. In effect, it could be mechanically "programmed" to read
instructions. Along with two other complex machines, the doll is at the Musée
d'Art et d'Histoire of Neuchâtel, Switzerland, and
still operates.[13]
The tide-predicting machine invented by Sir William Thomson in 1872
was of great utility to navigation in shallow waters. It used a system of
pulleys and wires to automatically calculate predicted tide levels for a set
period at a particular location.
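Thomson's machine mechanically summed a set of sinusoidal tidal constituents for a given port. The sketch below is a hedged numerical analogue of that idea only; the amplitudes, periods, and phases are hypothetical values chosen purely for illustration.

```python
import math

# Hypothetical harmonic constituents: (amplitude in metres, period in hours, phase in radians).
constituents = [
    (1.20, 12.42, 0.3),   # e.g. a principal lunar semidiurnal term
    (0.55, 12.00, 1.1),   # e.g. a principal solar semidiurnal term
    (0.30, 25.82, 2.0),   # e.g. a diurnal term
]

def tide_height(t_hours: float, mean_level: float = 2.0) -> float:
    """Predicted tide level: mean sea level plus a sum of sinusoids,
    the quantity the pulleys and wires computed mechanically."""
    return mean_level + sum(
        amp * math.cos(2 * math.pi * t_hours / period + phase)
        for amp, period, phase in constituents
    )

# Tabulate one day's predictions at hourly intervals.
for hour in range(24):
    print(f"{hour:02d}:00  {tide_height(hour):.2f} m")
```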
The differential analyser, a mechanical analog
computer designed to solve differential equations by integration, used
wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible
construction of such calculators, but he had been stymied by the limited output
torque of the ball-and-disk integrators.[14] In a
differential analyzer, the output of one integrator drove the input of the next
integrator, or a graphing output. The torque amplifier was the
advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential
analyzers.
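In a differential analyser each integrator accumulates its input, and chaining integrators solves a differential equation. As a rough digital analogue of that arrangement, the hedged sketch below chains two numerical integrators (simple Euler steps) to solve y'' = -y, the kind of second-order equation such machines handled; the step size and equation are chosen only for illustration.

```python
# A rough digital analogue of a two-integrator differential analyser
# solving y'' = -y with y(0) = 1, y'(0) = 0 (exact solution: cos t).
dt = 0.001          # step size; a real analyser integrated continuously
y, y_prime = 1.0, 0.0

t = 0.0
while t < 6.28:     # roughly one full period
    y_double_prime = -y              # the interconnection feeding the first integrator
    y_prime += y_double_prime * dt   # first integrator: y'' -> y'
    y += y_prime * dt                # second integrator: y' -> y
    t += dt

print(y)  # close to cos(6.28) ≈ 1.0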
First mechanical computer or automatic computing engine concept
In 1822, Charles Babbage
conceptualized and began developing the Difference Engine,
considered to be the first automatic computing machine. The Difference Engine
was capable of computing several sets of numbers and making hard copies of the
results. Babbage was assisted by Ada Lovelace,
considered by many to be the first computer programmer for her work and notes
on the Analytical Engine. Unfortunately, because of funding problems, Babbage was never
able to complete a full-scale functional version of this machine. In June of 1991, the
London Science Museum completed the Difference Engine No 2 for the bicentennial
year of Babbage's birth and later completed the printing mechanism in 2000.
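The Difference Engine tabulated values using only repeated addition (the method of finite differences), which is what made it suited to mechanical implementation. The sketch below is a minimal illustration of that method, not a model of Babbage's hardware; the polynomial is chosen arbitrarily.

```python
# Tabulate p(x) = x**2 + x + 41 by the method of finite differences,
# using only additions once the initial values are set up, as the
# Difference Engine did mechanically.
def p(x):
    return x * x + x + 41

value = p(0)                  # p(0)
first_diff = p(1) - p(0)      # first difference
second_diff = 2               # constant second difference for a quadratic

for x in range(10):
    print(x, value)
    value += first_diff        # next table entry by addition only
    first_diff += second_diff  # next first difference by addition only
```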
In 1837, Charles Babbage
proposed the first general mechanical computer, the Analytical Engine. The
Analytical Engine contained an Arithmetic Logic Unit (ALU), basic flow control, punch cards (inspired
by the Jacquard Loom), and
integrated memory. It was the first general-purpose
computer concept. Unfortunately, because of funding issues, this computer was
also never built while Charles Babbage was alive. In 1910, Henry Babbage,
Charles Babbage's youngest son, completed a portion of this machine
that was able to perform basic calculations.
The machine was about a century ahead of its
time. All the parts for his machine had to be made by hand — this was a
major problem for a device with thousands of parts. Eventually, the project was
dissolved with the decision of the British Government to cease funding. Babbage's failure
to complete the analytical engine can be chiefly attributed to difficulties not
only of politics and financing, but also to his desire to develop an
increasingly sophisticated computer and to move ahead faster than anyone else
could follow. Nevertheless, his son, Henry Babbage, completed a simplified
version of the analytical engine's computing unit (the mill) in 1888. He
gave a successful demonstration of its use in computing tables in 1906.
Analog computers
During the first half of the 20th century,
many scientific computing needs
were met by increasingly sophisticated analog computers, which
used a direct mechanical or electrical model of the problem as a basis for computation. However,
these were not programmable and generally lacked the versatility and accuracy
of modern digital computers.[18] The first
modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872.
The differential analyser, a mechanical analog
computer designed to solve differential equations by integration using
wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[14]
The art of mechanical analog computing
reached its zenith with the differential analyzer, built by H. L.
Hazen and Vannevar Bush at MIT starting
in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W.
Nieman. A dozen of these devices were built before their obsolescence became
obvious. By the 1950s the success of digital electronic computers had spelled
the end for most analog computing machines, but analog computers remained in
use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).
Digital computers
Electromechanical
By 1938 the United States Navy had developed an electromechanical
analog computer small enough to use aboard a submarine. This was
the Torpedo Data Computer, which used
trigonometry to solve the problem of firing a torpedo at a moving target.
During World War II similar
devices were developed in other countries as well.
Early digital computers were
electromechanical; electric switches drove mechanical relays to perform the
calculation. These devices had a low operating speed and were eventually
superseded by much faster all-electric computers, originally using vacuum
tubes. The Z2, created
by German engineer Konrad Zuse in 1939,
was one of the earliest examples of an electromechanical relay computer.[19]
In 1941, Zuse followed his earlier machine up
with the Z3, the
world's first working electromechanical programmable, fully automatic digital computer. The Z3
was built with 2,000 relays, implementing a 22-bit word length, and operated at a clock frequency of about
5–10 Hz. Program
code was supplied on punched film while
data could be stored in 64 words of memory or supplied from the keyboard. It
was quite similar to modern machines in some respects, pioneering numerous
advances such as floating point numbers. Rather than the
harder-to-implement decimal system (used in Charles Babbage's earlier
design), Zuse used a binary system, which made his machines
easier to build and potentially more reliable, given the technologies available
at that time.[23] The Z3
was Turing complete.[24][25]
Vacuum tubes and digital electronic circuits
Purely electronic circuit elements soon replaced their
mechanical and electromechanical equivalents, at the same time that digital
calculation replaced analog. The engineer Tommy Flowers, working
at the Post Office Research Station in London in the
1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built
in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data
processing system, using thousands of vacuum tubes.[18] In the
US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed
and tested the Atanasoff–Berry Computer (ABC) in 1942,[26] the first
"automatic electronic digital computer".[27] This
design was also all-electronic and used about 300 vacuum tubes, with capacitors
fixed in a mechanically rotating drum for memory.
During World War II, the British at Bletchley Park achieved a number of successes at breaking
encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the
electro-mechanical bombes. To crack the more
sophisticated German Lorenz SZ 40/42 machine,
used for high-level Army communications, Max Newman and his
colleagues commissioned Flowers to build the Colossus.[28] He spent
eleven months from early February 1943 designing and building the first
Colossus.[29] After a
functional test in December 1943, Colossus was shipped to Bletchley Park, where
it was delivered on 18 January 1944[30] and
attacked its first message on 5 February.[28]
Colossus was the world's first electronic digital programmable computer.[18] It used a
large number of valves (vacuum tubes). It had paper-tape input and was capable
of being configured to perform a variety of boolean logical
operations on its data, but it was not Turing-complete. Nine Mk
II Colossi were built (the Mk I was converted to a Mk II, making ten machines in
total). Colossus Mark I contained 1,500 thermionic valves (tubes); Mark II,
with 2,400 valves, was both five times faster and simpler to operate than Mark I,
greatly speeding the decoding process.
The U.S.-built ENIAC[33]
(Electronic Numerical Integrator and Computer) was the first electronic
programmable computer built in the US. Although the ENIAC was similar to the
Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on
the ENIAC was defined by the states of its patch cables and switches, a far cry
from the stored program
electronic machines that came later. Once a program was written, it had to be
mechanically set into the machine with manual resetting of plugs and switches.
It combined the high speed of electronics
with the ability to be programmed for many complex problems. It could add or
subtract 5000 times a second, a thousand times faster than any other machine.
It also had modules to multiply, divide, and square root. High speed memory was
limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the
University of Pennsylvania, ENIAC's development and construction lasted from
1943 to full operation at the end of 1945. The machine was huge, weighing 30
tons, using 200 kilowatts of electric power and contained over 18,000 vacuum
tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and
inductors.[34]
Modern computers
Concept of modern computer
The principle of the modern computer was
proposed by Alan Turing in his
seminal 1936 paper,[35] On Computable
Numbers. Turing proposed a simple device that he called "Universal
Computing machine" and that is now known as a universal Turing machine. He proved that such
a machine is capable of computing anything that is computable by executing
instructions (program) stored on tape, allowing the machine to be programmable.
The fundamental concept of Turing's design is the stored program, where
all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the
modern computer was due to this paper.[36] Turing
machines are to this day a central object of study in theory of computation. Except for the
limitations imposed by their finite memory stores, modern computers are said to
be Turing-complete, which is
to say, they have algorithm execution
capability equivalent to a universal Turing machine.
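A Turing machine reads and writes symbols on a tape according to a finite table of rules. The sketch below is a minimal, hedged illustration of that idea — a tiny simulator running a made-up rule table that writes three 1s and halts — rather than a rendering of the construction in the 1936 paper.

```python
# A minimal Turing machine simulator. The rule table maps
# (state, symbol read) -> (symbol to write, head move, next state).
# This made-up program writes three 1s on a blank tape and halts.
rules = {
    ("A", 0): (1, +1, "B"),
    ("B", 0): (1, +1, "C"),
    ("C", 0): (1, +1, "HALT"),
}

tape = {}             # blank tape: unwritten cells read as 0
head, state = 0, "A"

while state != "HALT":
    symbol = tape.get(head, 0)
    write, move, state = rules[(state, symbol)]
    tape[head] = write
    head += move

print([tape.get(i, 0) for i in range(5)])  # [1, 1, 1, 0, 0]
```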
Stored programs
A section of the Manchester Small-Scale Experimental Machine, the first stored-program computer.
Early computing machines had fixed programs.
Changing a machine's function required re-wiring and re-structuring it.[28] With the
proposal of the stored-program computer this changed. A stored-program computer
includes by design an instruction set and can
store in memory a set of instructions (a program) that
details the computation. The
theoretical basis for the stored-program computer was laid by Alan
Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began
work on developing an electronic stored-program digital computer. His 1945
report "Proposed Electronic Calculator" was the first specification
for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
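In a stored-program computer the instructions sit in the same memory as the data, and the processor repeatedly fetches, decodes, and executes them. The hedged sketch below illustrates that loop for an invented three-instruction machine; the instruction set is hypothetical and chosen only to make the idea concrete.

```python
# A toy stored-program machine: the program lives in memory alongside data.
# Invented instructions: ("LOAD", addr), ("ADD", addr), ("STORE", addr), ("HALT",).
memory = {
    0: ("LOAD", 10),    # put memory[10] into the accumulator
    1: ("ADD", 11),     # add memory[11] to the accumulator
    2: ("STORE", 12),   # write the accumulator to memory[12]
    3: ("HALT",),
    10: 2, 11: 3,       # data stored in the same memory as the program
}

accumulator, pc = 0, 0
while True:
    instruction = memory[pc]        # fetch
    op, *operand = instruction      # decode
    pc += 1
    if op == "LOAD":
        accumulator = memory[operand[0]]
    elif op == "ADD":
        accumulator += memory[operand[0]]
    elif op == "STORE":
        memory[operand[0]] = accumulator
    elif op == "HALT":
        break

print(memory[12])  # 5
```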
The Manchester Small-Scale Experimental
Machine, nicknamed Baby, was the world's first stored-program computer. It was built at the
Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[37] It was
designed as a testbed for the Williams tube, the first random-access digital storage device.[38] Although
the computer was considered "small and primitive" by the standards of
its time, it was the first working machine to contain all of the elements
essential to a modern electronic computer.[39] As soon
as the SSEM had demonstrated the feasibility of its design, a project was
initiated at the university to develop it into a more usable computer, the Manchester Mark 1.
The Mark 1 in turn quickly became the
prototype for the Ferranti Mark 1, the
world's first commercially available general-purpose computer.[40] Built by Ferranti, it was
delivered to the University of Manchester in February 1951. At
least seven of these later machines were delivered between 1953 and 1957, one
of them to Shell labs in Amsterdam.[41] In
October 1947, the directors of British catering company J. Lyons & Company decided to take an
active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[42] and ran
the world's first regular routine office computer job.
The bipolar transistor was
invented in 1947. From 1955 onwards transistors replaced vacuum
tubes in computer designs, giving rise to the "second
generation" of computers. Compared to vacuum tubes, transistors have many
advantages: they are smaller, and require less power than vacuum tubes, so give
off less heat. Silicon junction transistors were much more reliable than vacuum
tubes and had longer, indefinite, service life. Transistorized computers could
contain tens of thousands of binary logic circuits in a relatively compact
space.
At the University of Manchester, a team under the
leadership of Tom Kilburn designed
and built a machine using the newly developed transistors instead
of valves.[43] Their
first transistorised computer, and the first in the
world, was operational by 1953, and a second version was completed there
in April 1955. However, the machine did make use of valves to generate its
125 kHz clock waveforms and in the circuitry to read and write on its
magnetic drum memory, so it was
not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[44] built by
the electronics division of the Atomic Energy Research Establishment at Harwell.[44][45]
Integrated circuits
The next great advance in computing power
came with the advent of the integrated circuit. The idea of the integrated circuit
was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public
description of an integrated circuit at the Symposium on Progress in Quality
Electronic Components in Washington, D.C. on 7 May
1952.[46]
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[47] Kilby
recorded his initial ideas concerning the integrated circuit in July 1958,
successfully demonstrating the first working integrated example on 12 September
1958.[48] In his
patent application of 6 February 1959, Kilby described his new device as
"a body of semiconductor material ... wherein all the components of
the electronic circuit are completely integrated".[49][50] Noyce
also came up with his own idea of an integrated circuit half a year later than
Kilby.[51] His chip
solved many practical problems that Kilby's had not. Produced at Fairchild
Semiconductor, it was made of silicon, whereas
Kilby's chip was made of germanium.
This new development heralded an explosion in
the commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly which device
was the first microprocessor is contentious, partly due to lack of agreement on
the exact definition of the term "microprocessor", it is largely
undisputed that the first single-chip microprocessor was the Intel 4004,[52] designed
and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel.[53]
Mobile computers become dominant
With the continued miniaturization of
computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s.[54] The same
developments that spurred the growth of laptop computers and other portable
computers allowed manufacturers to integrate computing resources into cellular
phones. These so-called smartphones and tablets run on a variety of operating systems and
have become the dominant computing device on the market, with manufacturers
reporting having shipped an estimated 237 million devices in 2Q 2013.
The Language Before the Hardware
The universal language that computers use to
carry out processor instructions originated in the 17th century in the
form of the binary numeral system. Developed by German philosopher and
mathematician Gottfried Wilhelm Leibniz, the system came about as a way to
represent decimal numbers using only two digits, zero and
one. His system was partly inspired by philosophical explanations in the
classical Chinese text the “I Ching,” which understood the universe in terms of
dualities such as light and darkness and male and female. While there was no
practical use for his newly codified system at the time, Leibniz believed that
it was possible for a machine to someday make use of these long strings of
binary numbers.
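Leibniz's insight was that any whole number can be written with just the digits 0 and 1 by repeatedly splitting off the remainder on division by two. The short sketch below illustrates that conversion; it is only an illustration of the idea, not anything Leibniz wrote.

```python
def to_binary(n: int) -> str:
    """Write a non-negative decimal integer using only the digits 0 and 1."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder on division by two is the next bit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(13))   # 1101
print(to_binary(100))  # 1100100
```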
In 1847, English mathematician George
Boole introduced a newly devised algebraic language built on Leibniz's work.
His “Boolean algebra” was actually a system of logic, with mathematical
equations used to represent statements in logic.
Just as important was that it employed a
binary approach in which the relationship between different mathematical
quantities would be either true or false, 0 or 1. And though there was no
obvious application for Boole's algebra at the time, another mathematician, Charles
Sanders Peirce, spent decades expanding the system and eventually found in
1886 that the calculations could be carried out with electrical switching
circuits.
And in time, Boolean logic would become
instrumental in the design of electronic computers.
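As a minimal illustration of the correspondence Peirce noticed, the sketch below evaluates Boolean expressions over the values 0 and 1, much as switches in series realize AND and switches in parallel realize OR; it is a hedged example, not a circuit design.

```python
# Boolean algebra over {0, 1}: AND behaves like switches in series,
# OR like switches in parallel, NOT like an inverting contact.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Truth table for (a AND b) OR (NOT a): every statement evaluates to 0 or 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, OR(AND(a, b), NOT(a)))
```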
The Earliest Processors
English mathematician Charles Babbage is credited with having
assembled the first mechanical computers – at least technically speaking.
His early 19th century machines featured a way to input numbers,
memory, a processor and a way to output the results. The initial attempt to
build the world’s first computer, which he called the “difference engine,”
was a costly endeavor that was all but abandoned after over 17,000 pounds
sterling were spent on its development. The design called for a machine that
calculated values and printed the results automatically onto a table. It was to
be hand cranked and would have weighed four tons. The project was eventually
axed after the British government cut off Babbage’s funding in 1842.
This forced the inventor to move on to
another idea of his called the analytical engine, a more ambitious machine for
general purpose computing rather than just arithmetic. And though he wasn’t
able to follow through and build a working device, Babbage’s design featured
essentially the same logical structure as electronic computers that would come
into use in the 20th century.
The analytical engine had, for instance,
integrated memory, a form of information storage found in all computers. It
also allowed for branching, the ability of a computer to execute a set of
instructions that deviates from the default sequence order, as well as loops,
which are sequences of instructions carried out repeatedly in succession; a trivial modern example follows below.
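The snippet below, included only as an illustration of those two terms, shows a loop repeating a sequence of instructions and a branch deviating from the default order.

```python
# A loop carries out a sequence of instructions repeatedly...
total = 0
for n in range(1, 6):
    # ...and a branch deviates from the default order depending on a condition.
    if n % 2 == 0:
        total += n      # taken only for even n
    else:
        total -= n      # taken otherwise
print(total)  # -3
```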
Despite his failure to produce a fully
functional computing machine, Babbage remained steadfastly undeterred in
pursuing his ideas. Between 1847 and 1849, he drew up designs for a new and
improved second version of his difference engine. This time it calculated
decimal numbers up to thirty digits long, performed calculations more quickly, and
was meant to be simpler, as it required fewer parts. Still, the British
government did not find it worth the investment.
In the end, the most progress Babbage ever
made on a prototype was completing one-seventh of his first difference engine.
During this early era of computing, there
were a few notable achievements. A tide-predicting machine, invented by
Scotch-Irish mathematician, physicist and engineer Sir William Thomson in
1872, was considered the first modern analog computer. Four years later,
his older brother James Thomson came up with a concept for a computer
that solved math problems known as differential equations. He called his device
an “integrating machine” and in later years it would serve as the
foundation for systems known as differential analyzers. In 1927, American
scientist Vannevar Bush started development on the first machine to be
named as such and published a description of his new invention in a scientific
journal in 1931.