Numerical Life at the Dawn of the Digital Computer Era



Built by a team of researchers at the University of Pennsylvania's Moore School of Electrical Engineering between July 1943 and February 1946, the ENIAC---the Electronic Numerical Integrator and Computer---was put to use immediately for the calculation of firing tables for U.S. Army artillery weapons. In Philadelphia on February 14, 1996, for the celebration of the 50th anniversary of the ENIAC, Vice President Al Gore (shown here with Judith Rodin, president of the University of Pennsylvania) credited government funding for providing the spark that led a team headed by Penn researchers J. Presper Eckert, an engineer, and John W. Mauchly, a physicist (and the fourth SIAM president), to do the groundbreaking work that produced the world's first electronic programmable computer. (Photograph from the Philadelphia Daily News.)

This article appeared in SIAM News, Volume 29, No. 3, April 1996

Philip J. Davis

As part of a celebration of the 50th anniversary of the ENIAC, I've been asked by the editor of SIAM News to contribute some personal reminiscences of the period. I'm glad I was asked, and if some movie maker wants to put together a documentary about progress in numerical analysis, much as they've documented the Civil War, Baseball, Winston Churchill, and other significant things, then there is one episode in what I am about to relate that's worth a five-second in-depth treatment.

My first research job was at NACA (predecessor of NASA) in Langley Field, Virginia, in the spring of 1944. For a while I lived in a trailer park in Newport News, erected to house workers in the vast nearby shipyards. Newport News contained a frothy infusion of soldiers, sailors, airmen, scientists, and construction workers, their wives, husbands, children, and sweethearts, from all over the country. My trailer was adjacent to a camp that housed Italian POWs.

The NACA laboratory was staffed by old-timers in the business plus many young draftees, hotshots in mathematics and physics who were assigned to this work as their military service. After the war, many of them left Langley Field and went on to become some of the leading scientists and technologists in the country. I worked in the Aircraft Loads Division, which carried out experiments and analysis of the aerodynamic loading of wings and tails of fighter aircraft. We were particularly interested in stresses under maneuvers: dives, pullouts, fishtailing, evasive actions. We obtained our data from tubes inserted in small holes that had been drilled along a cross section of a wing or the tail; the tubes were connected to air pressure recorders. On the basis of these records, we could reconstruct the profile of air pressure across the section as the aircraft performed a variety of maneuvers. From the pressure profile we went on to compute two very important aerodynamic quantities related to wing strength and aircraft stability: the lift coefficient and the coefficient of pitching moment.

Today a microcomputer would provide these numbers instantaneously as the experiment proceeded. The hope of theoretical aerodynamicists is that theory plus large computers can replace experiment entirely (the so-called numerical wind tunnel), but we haven't reached that stage yet. At any rate, 50 years ago the process of computing these numbers was a laborious one, and thousands of man- and woman-hours were devoted to it.

Before describing how we did this computation, let me say a few inadequate words about the state of numerical analysis in those predigital days. The very term "numerical analysis" would not exist until its invention by George Forsythe in the early 1950s. We worked with pencil and paper, tables of squared paper, slide rules, and Marchant and Friden calculators (if you were lucky, you were able to commandeer one that flipped from the units place to the tens place to . . . automatically). Some special computations were done by nomography---dead as a doornail now, but the masterful turn-of-the-century treatises on the subject by Maurice d'Ocagne are worth a look---and other computations, as we shall see, were done by planimetry.

Numerical analysis was taught at only a few colleges---and then in the departments of astronomy or experimental psychology. If you needed it, you picked it up "behind the garage." Mathematicians despised it, finding it either too boring or too Mickey Mouse. And no theorems. No theorems? Little did they suspect. Some mathematicians thought it was obscene to display a specific real number. If you had to go public with a number, put a fig leaf over it.

A considerable body of theory was available. Whittaker and Robinson (1924) was a bible. Frazer, Duncan, and Collar (1938) introduced matrices to the computing fraternity. The fast Fourier transform (harmonic analysis) was present in the form of stencils prepared at the turn of the century by Runge for n = 8, 16, 24, 32, 64. Numerical instability was experienced but was yet to be developed as a theory. Eigenanalysis was difficult beyond n = 3, "impossible" beyond n = 5.

To return to the computation of lift and pitching coefficients, we did it in the following way: The pressures at the various stations across the wing or tail were plotted up. The isolated values were then faired in with a French curve to give a complete pressure profile. The area under the pressure curve was essentially the lift coefficient, and the horizontal moment of the area was essentially the pitching coefficient.
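
In modern notation (my sketch, not the article's: c is the chord length, \Delta C_p(x) the difference in pressure coefficient between the lower and upper surfaces at chordwise position x, and x_ref a chosen moment reference point), the two quantities are essentially

    \[
      c_l \;\approx\; \frac{1}{c}\int_0^{c} \Delta C_p(x)\,dx,
      \qquad
      c_m \;\approx\; -\frac{1}{c^{2}}\int_0^{c} \Delta C_p(x)\,\bigl(x - x_{\mathrm{ref}}\bigr)\,dx,
    \]

the first being the area under the pressure curve, the second its horizontal moment.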

Finally, as a last step, a special instrument called a planimeter was used to obtain the graphical areas. After tracing the stylus of this instrument around the boundary of the area, we could read the area off on a dial. If the planimeter was a real fancy superduper one, and ours was, it would have two dials, the first giving area and the second giving the horizontal moment.

The superduper planimeter of NACA's Aircraft Loads Section was in the permanent care of a character I shall call Swindells Royce---Dell for short. Dell treated the shining precision instrument with the care of a mother hen or, better still, with the care of a jeweler in charge of a fancy Swiss watch. He was always polishing it and oiling it carefully, and he would always lock it up when it was not in use. And it was well that he did so, for it was a sensitive, temperamental, and rare thing. The prototype was German, the Germans in those days being the finest instrument makers. But German instruments being unavailable during the war, the machinists at NACA took apart the one they had and made ten copies at a cost, I was told, of $5000 apiece. This was at a time when a Chevrolet, if you could buy one, would have cost $600 to $800.

Now it occurred to me, a fledgling mathematician, that the whole expenditure was unnecessary, that a little bit of arithmetic carefully laid out (numerical integration) could have replaced both the drawing of the diagrams and the subsequent planimetry and would have yielded answers that were just as accurate. But planimetry was the way it had been done, and it was the way it was going to be done, and I sensed that I had better shut up about it. And so I pushed the stylus around many a pressure diagram before the war had ended. This experience left me with an intense interest in the theoretical aspects of approximate numerical integration.
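
In modern terms, the arithmetic amounts to something like the following sketch (the station positions and pressure-difference readings here are invented for illustration, not NACA data):

    # Sketch: numerical integration in place of plotting, fairing, and planimetry.
    # Chordwise station positions (fraction of chord) and pressure-difference
    # readings; the values below are made up purely for illustration.
    x  = [0.0, 0.05, 0.10, 0.20, 0.30, 0.50, 0.70, 0.90, 1.00]
    dp = [0.0, 1.80, 1.50, 1.10, 0.90, 0.60, 0.35, 0.10, 0.00]

    def trapezoid(xs, ys):
        """Trapezoidal rule on unequally spaced stations."""
        return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
                   for i in range(len(xs) - 1))

    area   = trapezoid(x, dp)                                # ~ lift coefficient
    moment = trapezoid(x, [xi * d for xi, d in zip(x, dp)])  # ~ pitching moment about x = 0
    print(area, moment)

A trapezoidal (or Simpson) rule applied directly to the recorded station values gives the area and its moment to all the accuracy the data deserved, with no French curve and no stylus.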

Ten years later, in 1954, with a PhD in pure mathematics in my pocket, I was working at the National Bureau of Standards in Washington. I was employed as a numerical analyst and functioned to some extent as an acolyte of the SEAC, one of the earliest of the digital computers built in this country. As part of an extensive project, I was confronted with the necessity of doing some integrations in the complex plane very accurately.

I thought a good strategy would be to use a very subtle and accurate scheme derived in the early 1800s by the great Carl Friedrich Gauss. Prior to 1954, the Gaussian integration rules were available only up to 16 points. The values had been calculated on desk calculators---an extremely laborious task---by Lowan, Davids, and Levenson. It was also the case that the Gaussian rules were out of favor in the days of paper-and-pencil scientific computation, as the numbers involved were helter-skelter irrational decimals, impossible to remember and difficult to enter on a keyboard without error.

It was my plan to carry the computation beyond 16. At that time I was working with Phil Rabinowitz (who later became one of the first computer scientists in Israel and a professor at the Weizmann Institute). I suggested to him that we attempt the Gaussian computation on the SEAC. He was game. I anticipated that it would be desirable to work in double-precision arithmetic to about 30 decimal places, and Phil, who was much more skillful at SEAC coding than I, agreed to write the code that would effectuate the double precision.

But first I had to devise a numerical strategy. The n abscissas of the n-point Gaussian integration rule are the roots of the Legendre polynomial of degree n. The weights corresponding to the abscissas can be obtained from the abscissas by a number of relatively simple formulas. I proposed to get the Legendre polynomials pointwise by means of the known three-term recursion relation. I would get their roots by using Newton's iterative method, starting from good approximate values. These starting values would be provided by a beautiful asymptotic formula that had been worked out in the 1930s by the Hungarian-American mathematician Gabor Szegő.
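
A minimal modern sketch of that strategy follows (this is not the SEAC code; the starting guess is a common simplified form of the asymptotic approximation, and the double precision that Phil supplied has no analogue here, ordinary floating point standing in for it):

    # Sketch: Gauss-Legendre abscissas and weights by the strategy described above.
    # P_n and P_{n-1} come from the three-term recursion, the roots from Newton's
    # method started at an asymptotic-style guess, and the weights from the
    # standard formula w = 2 / ((1 - x^2) * P_n'(x)^2).
    import math

    def legendre_pair(n, x):
        """Return P_n(x) and P_n'(x) via the three-term recursion."""
        p_prev, p = 1.0, x                                   # P_0, P_1
        for k in range(2, n + 1):
            p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
        dp = n * (x * p - p_prev) / (x * x - 1.0)            # P_n' from P_n and P_{n-1}
        return p, dp

    def gauss_legendre(n, tol=1e-15):
        nodes, weights = [], []
        for k in range(1, n // 2 + 1):                       # roots come in symmetric pairs
            x = math.cos(math.pi * (k - 0.25) / (n + 0.5))   # approximate k-th positive root
            for _ in range(100):                             # Newton's method
                p, dp = legendre_pair(n, x)
                dx = -p / dp
                x += dx
                if abs(dx) < tol:
                    break
            _, dp = legendre_pair(n, x)
            w = 2.0 / ((1.0 - x * x) * dp * dp)
            nodes += [-x, x]
            weights += [w, w]
        if n % 2 == 1:                                       # x = 0 is a root when n is odd
            _, dp = legendre_pair(n, 0.0)
            nodes.append(0.0)
            weights.append(2.0 / (dp * dp))
        pairs = sorted(zip(nodes, weights))
        return [x for x, _ in pairs], [w for _, w in pairs]

    abscissas, weights = gauss_legendre(20)                  # the case we first ran on the SEAC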

I didn't know whether this strategy would work. It might fail for three or four different reasons. I was willing to try, and if it worked, good; if it didn't---well, something is always learned by failure. We could publish the failures, and other mathematicians would avoid the pitfalls and might then be able to suggest more successful strategies.

I wrote the code (with the exception of the double-precision part). In those days SEAC coding was done in the four-address system (take the number in cell 9, combine it with the number in cell 17, store the answer in cell 23, and go to cell 45 for further instructions). The computation was also in fixed-point arithmetic, so that scalings had to be introduced to keep the numbers in bounds. I reread my code and checked it for bugs.

I (or Phil Rabinowitz) punched up the code on teletype tape and checked that out. The tape was converted automatically to a wire, and the wire cartridge was inserted in the SEAC. We manually set n equal to 20, crossed our fingers, held our breath, and pushed the button to run the program.

"Heroes of the SEAC": Working at the National Bureau of Standards in the 1950s, Phil Davis (left) provided the numerical strategy and Phil Rabinowitz the double-precision SEAC code that led to the first computation of the Gaussian integration rules beyond 16. This was the first electronic digital computation of the Gaussian rules.

The SEAC computed and computed. It computed and computed; computed and computed. Our tension mounted. Finally, the computer started to output the Gaussian abscissas and weights. Numbers purporting to be such started to spew out at the teletype printer. The numbers had the right look and smell about them. We punched in n = 24 and again pushed the run button. Again, success.

The staff of the NBS computing lab declared us "Heroes of the SEAC," a title awarded in those days to programmers whose programs ran on the first try, and for some while we had to go around wearing our "medals," which were drawn freehand in crayon on the back of used teletype paper.

This was the first electronic digital computation of the Gaussian integration rules. In the years since, alternative strategies have been proposed, simplified, and sharpened (by Gautschi, Golub, and others). And though all the theoretical questions that kept us guessing in 1955 have been decided positively, there are many problems as yet unsolved surrounding the Gauss idea. For Phil Rabinowitz and me, our success and our continued interest in approximate integration led to a book on the topic. The book (Methods of Numerical Integration, Academic Press) has gone through three editions, and a fourth, now being contemplated, will see the light of day if our designated co-author can spare the time.
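
One of those later strategies, the Golub-Welsch approach, recasts the problem: the abscissas are the eigenvalues of a symmetric tridiagonal Jacobi matrix assembled from the recursion coefficients, and the weights come from the corresponding eigenvectors. A sketch for the Legendre case (an illustration of the idea in standard NumPy, not any of the published codes):

    # Sketch of the Golub-Welsch idea for the Legendre weight function: the
    # abscissas are eigenvalues of the Jacobi matrix; the weights are the total
    # mass (2) times the squared first components of the normalized eigenvectors.
    import numpy as np

    def gauss_legendre_gw(n):
        k = np.arange(1, n)
        beta = k / np.sqrt(4.0 * k * k - 1.0)       # off-diagonal recursion coefficients
        J = np.diag(beta, 1) + np.diag(beta, -1)    # the diagonal is zero for Legendre
        x, V = np.linalg.eigh(J)                    # eigenvalues = abscissas
        w = 2.0 * V[0, :] ** 2                      # weights
        return x, w

    x20, w20 = gauss_legendre_gw(20)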

Well, here it is some 50 years after I pushed the planimeter around an airfoil pressure diagram. I am working on a word processor, a machine whose pedigree lies in part in the difficulties humans have experienced in doing vast quantities of simple arithmetic. Mathematics has changed the world in absolutely surprising ways. It has, I am sure, many more surprises for us, for mathematics cannot predict its own future.

Philip J. Davis, professor emeritus of applied mathematics at Brown University, was at the National Bureau of Standards from 1951 to 1963, serving as head of numerical analysis from 1958 to 1963. Currently an independent writer, scholar, and lecturer, he lives in Providence, Rhode Island.

