Numerical Analysis Turns Sixty: Birthday Celebration Held at K.U.Leuven

January 6, 2008


Symposium speakers, standing, from left: Herbert Keller, Claude Brezinski, Gerhard Wanner, Brian Ford, Alistair Watson, Kendall Atkinson, Rolf Jeltsch, Robert Plemmons, Michael Powell, James Lyness, and Jack Dongarra. Kneeling: Ronald Cools and Adhemar Bultheel. Photo © Adhemar Bultheel.

By Adhemar Bultheel and Ronald Cools

The first appearance of a careful and systematic analysis of rounding errors in numerical computations on a digital computer came in John von Neumann and Herman Goldstine's paper "Numerical Inverting of Matrices of High Order" (Bulletin of the AMS, November 1947). This publication date is sometimes identified as the birthday of modern numerical analysis---see, for example, the SIAM Web site on the history of numerical analysis. Of course, numerical analysis has evolved dramatically since 1947, and it is debatable whether today's activity in this area should still be called numerical analysis. But whatever it is called, we could not agree more with a statement made by Volker Mehrmann during a round table discussion at ICIAM '07: "The golden age of numerical analysis is yet to come" (see SIAM News, October 2007). Numerical analysis may have turned sixty in November 2007, but it is still vibrant and more challenging than ever.

The Scientific Research Network on "Advanced Numerical Methods for Mathematical Modeling" is a consortium of all numerical analysis research groups in Flanders, Belgium, along with some collaborating external groups; it is sponsored by the Research Foundation Flanders. Members of this community took the initiative to invite a number of "big shots" in numerical analysis who were active in the middle of the 20th century and shortly afterward. The idea was to have a two-day symposium to celebrate the sixtieth birthday of numerical analysis, personified by some of the early leaders, all but one of whom, of course, are well over sixty. The symposium was held October 29–30, with 11 one-hour lectures providing surveys of different subdomains of numerical analysis, laced with historical anecdotes and personal reminiscences of the early days.

Not all the speakers were willing to recognize 1947 as the year their areas of numerical analysis came into existence. Gerhard Wanner, who treated the evolution of several methods for non-stiff differential equations, and Rolf Jeltsch, who discussed various stability notions that apply to solvers of stiff equations, would place the origins of numerical methods for the solution of differential equations much earlier. On the other hand, Kendall Atkinson traced the birth of his discipline to Kantorovich's 1948 paper "Functional Analysis and Applied Mathematics" (Uspehi Mat. Nauk), which introduced functional analysis into numerical analysis. Atkinson illustrated how the framework of functional analysis and operator theory, which incorporates the analysis of integral equations, has influenced research on this topic. The boost in computational aspects came near the end of the 1950s, stimulated by the intensive use of digital computers. But even if it is not the birth certificate for all aspects of numerical analysis, the 1947 paper of von Neumann and Goldstine can be said to have triggered modern numerical linear algebra.

Gerhard Wanner had devised a color code for his lecture: Light blue referred to pure mathematics and a somewhat dark brown to the numerical aspects. (Whether this was a reference to the heavens and earth is an interpretation left unspoken.) Jack Dongarra opened his lecture by pointing out that on Wanner's color scale his topic was probably pitch black, since it involved "digging into" the software, which is basically always trying to catch up with hardware development. To some extent, it is certainly true that hardware availability has greatly influenced numerical analysis. Indeed, the 1947 "birthday paper" mentioned earlier was written because of the emergence of digital computers. But developing numerical algorithms and designing a numerical software library are two different things, as Brian Ford explained in a lecture on the development of the NAG library.

In recounting the ongoing story of the evolution of numerical linear algebra software, Dongarra started with Algol and Fortran routines, without denying the influence of hardware. Classic slow single-processor machines were succeeded by machines with vector processors and then by machines with many processors, with shared or distributed memory. Hardware developments certainly influenced the design of numerical algorithms, forcing constant updating and redesign of the software.

Milestones cited from the 1960s include the first Gatlinburg meeting, Wilkinson's work on rounding errors and eigenvalue problems, and the Forsythe–Moler book with numerical algorithms in Algol and Fortran. By that time, Dongarra pointed out, operations per second (op/s) counts had already increased by a factor of a million over those of 1947 computers. With another factor of 100, machines near the end of the 1970s approached the gigaop/s neighborhood. Among the numerical landmarks of that decade were the founding of NAG and IMSL, and the first releases of EISPACK, LINPACK, and the BLAS routines.

In the next decade, evolution switched to a faster track. IBM introduced the PC, and we got vector machines, hypercubes, Sun's RISC machines, and other supercomputers, now operating well into the gigaop/s domain. On the software side, Netlib, the MathWorks, the level-2 BLAS, LAPACK, and the IEEE floating-point standard were created. It was during that decade, however, that a growing discrepancy became evident between peak op/s counts and the performance actually attainable in numerical computations. According to Moore's law, the power of processors doubles approximately every 18 months, which is the good news. The bad news is that the speed of dynamic random access memory doubles only every 10 years! Thus, the speed at which an algorithm can run does not increase as fast as the power of the processor; to a large extent it is determined by how fast the data can be moved to the processor.
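To put those two doubling times side by side (a back-of-the-envelope illustration in Python, not a figure from the talk), compounding them over a ten-year span makes the widening gap concrete:

    # Compound the two doubling times quoted above over a ten-year span.
    years = 10
    cpu_growth = 2 ** (years * 12 / 18)   # processor power doubles every 18 months
    dram_growth = 2 ** (years / 10)       # DRAM speed doubles every 10 years
    print(f"processor: ~{cpu_growth:.0f}x faster, memory: ~{dram_growth:.0f}x faster")
    # roughly: processor ~100x, memory ~2x

In other words, every decade the memory system falls roughly another factor of 50 behind the arithmetic units, which is why data movement rather than raw op/s increasingly dictates achievable performance.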

With the 1990s came the Internet and increasing numbers of processors on a chip. Message passing (MPI) became an essential element of software design. In addition to the level-3 BLAS and LAPACK, which combined the LINPACK and EISPACK routines, came ScaLAPACK, the first portable software for distributed-memory machines, built on the PBLAS routines.

In the 21st century, we have quickly moved from 100 to 100,000 processors. All have to be kept busy all the time, in the most efficient way possible, if we want to solve our numerical problems faster. This requires new ways of thinking and will change the evolution in the speedup of numerical software that we have seen so far. Indeed, we can no longer expect major increases in processor clock speeds, but we have seen a drastic increase in the number of gates per processor. It can be more efficient to work on several slower devices simultaneously than on one superfast device. Multicore processors are already in the shops, but the number of cores per processor will increase significantly in the coming years. Some offer a mixture of very fast 32-bit arithmetic and slower 64-bit arithmetic. These constellations will force us once again to rethink our numerical software completely---if, that is, we want to take advantage of the increase in computing power of these architectures for our numerical computations. That we are living at the time of "a rebirth of numerical linear algebra software" was the main conclusion of Dongarra's talk.
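One way to exploit such a mix of precisions, not spelled out in the article but widely associated with this line of work, is mixed-precision iterative refinement: do the expensive factorization and solves in fast 32-bit arithmetic, then recover 64-bit accuracy from double-precision residuals. A minimal NumPy sketch, with an illustrative function name, test matrix, and iteration count:

    import numpy as np

    def solve_mixed_precision(A, b, iters=5):
        # Factor/solve in float32; refine with float64 residuals.
        # (A production code would factor A once in single precision and
        # reuse the factors instead of calling solve repeatedly.)
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                         # residual in 64-bit arithmetic
            d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
            x += d                                # correct the low-precision solution
        return x

    A = np.random.rand(500, 500) + 500 * np.eye(500)   # well-conditioned test matrix
    b = np.random.rand(500)
    x = solve_mixed_precision(A, b)
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # near double-precision level

The attraction is that the O(n^3) factorization work runs at the speed of the fast 32-bit units, while only the cheap O(n^2) residual computations need 64-bit arithmetic.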

This message was perhaps most provocative for the younger researchers in the audience. But Dongarra was one of several speakers who did not restrict their stories to the purely historical context. Others, too, identified new trends and developments that should be pursued. Claude Brezinski, after discussing the early history of extrapolation techniques for speeding up the convergence of sequences, drew connections with current applications. His story linked up with the extrapolation techniques for higher-dimensional cubature evoked in the lecture of James Lyness. Herbert Keller pointed out that neglecting a singularity inherent in the nature of a problem is an improper approach, one that has given rise to difficulties yet to be resolved. The use of nonnegativity constraints in scientific computing, the main topic of the lecture of Robert Plemmons, continues to be very important in many practical applications today. Michael Powell gave a structured survey of most of the classic quasi-Newton and related methods in unconstrained and constrained nonlinear optimization. Optimization was and is an active research domain, and its importance will continue for some time. Concluding the two days of lectures was an entertaining talk by Alistair Watson, who recounted the history of numerical analysis in Scotland, colored by his own experiences.

Speakers and audience members alike enjoyed the two-day birthday celebration. This Buena Vista Social Club of numerical analysis (the average age of the speakers was 68; the oldest was 82) is obviously still very much concerned with the well-being of its beloved discipline. The speakers' sparkling enthusiasm has clearly been passed on to the younger generation.

For more information see http://www.cs.kuleuven.be/~ade/WWW/WOG/history.

Adhemar Bultheel and Ronald Cools are professors in the department of computer science at K.U.Leuven.

