Snapshots from Puerto Rico

December 21, 2000


Reading the papers submitted for the SIAM Outstanding Paper Prize, "you realize the incredible breadth of the papers that appear in SIAM journals," Christopher K.R.T. Jones said in July during the award ceremony at the SIAM Annual Meeting. Shown here (left) with prize recipient Arnd Scheel of the University of Berlin, Jones characterized the three winning papers as "giving new perspectives to their fields." Scheel's paper, "Bifurcation to Spiral Waves in Reaction-Diffusion Systems," appeared in SIAM Journal on Mathematical Analysis (Vol. 29, 1998). No complete theory of spiral waves is available, Jones said, but Scheel, using methods of dynamical systems to study waves and patterns, has taken a big step in that direction. The other two prize recipients, not present at the meeting to accept their awards, were Ralf Hiptmair of the University of Tübingen, for "Multigrid Method for Maxwell's Equations" (SIAM Journal on Numerical Analysis, Vol. 36, 1998), and David Karger of the Massachusetts Institute of Technology, for "A Randomized Fully Polynomial Time Approximation Scheme for the All-Terminal Network Reliability Problem" (SIAM Journal on Computing, Vol. 29, 1999). SIAM president Gilbert Strang chaired the prize committee, whose members, in addition to Jones, were Howard Elman and Margaret Wright.

Paul Davis

Turn any two people loose at a SIAM meeting, especially an annual meeting, and they'll return with widely varying views as to the most memorable sessions. Here, Paul Davis gives readers a glimpse of a few personal highlights from the 2000 SIAM Annual Meeting.

Java in Scientific Computing
Cups of java can keep programmers working late at night, but Java is probably not what they're writing if they're scientific programmers. The Java programming language is well regarded for making things that go blink on a Web site, but it is seen as too limited and too slow for serious number crunching. At the 2000 SIAM Annual Meeting, Ron Boisvert and Roldan Pozo, both of the National Institute of Standards and Technology, José Moreira of IBM, and Sid Chatterjee of the University of North Carolina at Chapel Hill described several projects and benchmarking studies that may make the idea of slow Java as outdated as the 25-cent cup of coffee.

Java was designed to be a platform-independent, object-oriented language, not to support multidimensional arrays, complex arithmetic, numerical libraries, and the other necessities of scientific computing. But its portability, its access to a rich library of graphics and GUIs, and its network savvy account for its widespread acceptance and, hence, a natural wish to see it used for scientific applications.
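
To see the complex-arithmetic gap concretely, consider the sketch below; the class is hypothetical, not part of any standard Java library. With no operator overloading or lightweight value types, every complex operation becomes a method call that allocates a fresh object, an overhead that matters in a tight inner loop.

    // Hypothetical illustration: complex arithmetic in plain Java.
    // Each operation allocates a new object, which is costly inside
    // the inner loops of a scientific code.
    final class Complex {
        final double re, im;
        Complex(double re, double im) { this.re = re; this.im = im; }
        Complex plus(Complex z)  { return new Complex(re + z.re, im + z.im); }
        Complex times(Complex z) { return new Complex(re * z.re - im * z.im,
                                                      re * z.im + im * z.re); }
    }

    class ComplexDemo {
        public static void main(String[] args) {
            // (1 + 2i)(3 - i) + (0.5 + 0.5i) = 5.5 + 5.5i
            Complex z = new Complex(1, 2).times(new Complex(3, -1))
                                         .plus(new Complex(0.5, 0.5));
            System.out.println(z.re + " + " + z.im + "i");
        }
    }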

Boisvert reported on work of the Numerics Working Group (which he and Pozo chair) within the Java Grande Forum (www.javagrande.org), an open group of researchers and developers interested in using Java for applications with demanding numerical or I/O requirements. Issues include supporting alternative models for floating-point arithmetic and providing complex arithmetic and multidimensional arrays as efficient objects in Java, the challenges of "religious wars" with Java purists notwithstanding.

The theological conflicts arise, for example, from Java's requirement for strict array bound checking. While appealing to language purists, this rigid requirement frustrates the numerical analyst who wants to dispense with it to optimize the handling of sparse matrices.

"The use of Java in scientific computing is still in its infancy, and so few numerical libraries are yet available," Boisvert explained. "Most existing libraries focus on linear algebra. Java will reach a new level of usability when a good ODE solver written in pure Java is widely available." Current information is available at http://math.nist.gov/javanumerics/.

Pozo displayed benchmarking studies that showed the speed gap beginning to close. (The benchmarks were complex FFT, SOR, Monte Carlo integration, sparse matrix multiplication, and dense LU.) Roughly, Java can't compete on high-end workstations, because it hasn't been optimized for such machines. However, it can be competitive with Fortran and C on what Pozo called "commodity platforms," e.g., cheaper PCs with Pentium III chips. And, he said, "new environments and new compilers are coming." Chatterjee described another case study and benchmarking effort, a Java version of NASA's fluid dynamics code LAURA.
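
For a flavor of the kernels such benchmarks time, here is a minimal Monte Carlo example in pure Java, estimating pi by sampling the unit square; it is an illustrative sketch only, not the benchmark code Pozo presented.

    import java.util.Random;

    // Illustrative Monte Carlo kernel: estimate pi by counting how many
    // random points in the unit square fall inside the quarter circle.
    class MonteCarloPi {
        static double estimate(int samples, long seed) {
            Random rng = new Random(seed);
            int inside = 0;
            for (int i = 0; i < samples; i++) {
                double x = rng.nextDouble();
                double y = rng.nextDouble();
                if (x * x + y * y <= 1.0) inside++;
            }
            return 4.0 * inside / samples;
        }

        public static void main(String[] args) {
            System.out.println(estimate(1000000, 42L));
        }
    }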

As an example of the developments needed to make Java suitable for serious numerics, Moreira provided details on the effort to overcome Java's strict array bound checking, with the goal of optimizing loop nests. In Fortran, he explained, a compiler's first step toward speedup would be to apply code transformations like loop interchange and outer loop unrolling. His approach in Java, by contrast, is to first create "safe regions" of loop nests in which no array bound exceptions are possible, and then to apply the code transformations to those safe regions. That approach can compete with Fortran 90 on a 200-MHz POWER3, and it scales well on multiprocessor platforms.
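
A deliberately simplified picture of the idea: check the index ranges once, before entering the loop nest, so that inside the resulting "safe region" no array bound exception is possible and transformations such as loop interchange or unrolling become legal. The explicit tests below stand in for what the compiler proves automatically; this is a sketch of the concept, not Moreira's implementation.

    // Simplified sketch of the "safe region" idea for y = A*x.
    class SafeRegionSketch {
        static void matVec(double[][] a, double[] x, double[] y, int n) {
            // Up-front checks: a stand-in for the compiler's analysis.
            if (a.length < n || x.length < n || y.length < n)
                throw new IllegalArgumentException("arrays too small");
            for (int i = 0; i < n; i++)
                if (a[i].length < n)
                    throw new IllegalArgumentException("row " + i + " too short");
            // Safe region: no bound exception can occur below, so the
            // loops may be interchanged, unrolled, or blocked freely.
            for (int i = 0; i < n; i++) {
                double sum = 0.0;
                for (int j = 0; j < n; j++)
                    sum += a[i][j] * x[j];
                y[i] = sum;
            }
        }
    }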

Unfortunately, important developments like true multidimensional arrays and optimizing compilers for numerics still have uncertain commercial prospects, because those making the business decisions have not yet been persuaded of the market economics.

Ron Boisvert, Roldan Pozo, and José Moreira will be giving a tutorial on Java for High Performance Computing at the SIAM Parallel Processing Meeting in Portsmouth, Virginia, this spring. Vladimir Getov of the University of Westminster and George K. Thiruvathukal of Hostway Corporation will also participate in the full-day tutorial.

"Middleware" for Very Large Data Sets
Communications between computers involved in the simulation of complex phenomena like chemical reactions in groundwater flow look a lot like a conversation between two friends, one of whom has just returned from a long trip. The traveler has much to say, but the listener cares about only one small section of the trip. The National Partnership for Advanced Computational Infrastructure (NPACI) is developing such "selective listening tools" for the sets of multiscale, multiresolution, multiapplication data that arise from distributed multi-architecture simulation platforms: computing that is distributed over different codes running at different times on different CPUs in different parts of the world.

At the 2000 SIAM Annual Meeting, Joel Saltz of the University of Maryland, College Park, reported on one aspect of NPACI's work, the development of "middleware" to support visualization of disk- and tape-based data sets: tools that will let users query, filter, and find subsets of large data sets. Compilers under development will also permit easy specification of user-defined data transforms in, for example, a dialect of Java.

The Active Data Repository (ADR), one of the specific tools developed by Saltz's team, recognizes that large, irregular data sets are seldom used as-is. (Such data sets arise, for example, in groundwater flow simulation, the application provided by Mary Wheeler of the University of Texas, Austin, the team's collaborator in this work.) The data may require preprocessing to reduce their volume, or the next stage of work may need only a subset representing a particular region of interest. ADR provides run-time support for memory and processor management so that appropriate chunks of data can be scattered across various disks but kept together for the CPUs performing the next stage of analysis.
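
The flavor of that interaction can be suggested with a small sketch; the types and method names below are purely illustrative and are not the Active Data Repository's actual interface. A client describes a spatial region of interest, and only the chunks that overlap it are handed to the next stage.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the query/filter pattern such middleware
    // supports; this is NOT the ADR API.
    class Chunk {
        double xMin, xMax, yMin, yMax;   // spatial extent of this chunk
        double[] values;                 // the data held by the chunk
        Chunk(double xMin, double xMax, double yMin, double yMax, double[] values) {
            this.xMin = xMin; this.xMax = xMax;
            this.yMin = yMin; this.yMax = yMax;
            this.values = values;
        }
    }

    class RegionQuery {
        // Return only the chunks overlapping the region of interest, so
        // later stages never touch the rest of the data set.
        static List<Chunk> select(List<Chunk> all,
                                  double x0, double x1, double y0, double y1) {
            List<Chunk> hits = new ArrayList<Chunk>();
            for (Chunk c : all)
                if (c.xMax >= x0 && c.xMin <= x1 && c.yMax >= y0 && c.yMin <= y1)
                    hits.add(c);
            return hits;
        }
    }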

ADR could, for example, facilitate a detailed simulation of pollutant chemistry in groundwater in a limited geographic area whose flow in a larger region was described by the original data set. The flow calculations might have been performed on, say, 50 CPUs with an irregular spatial mesh, while simulation of the pollutant chemistry might have been coded for a regular spatial mesh and 62 CPUs. ADR manages the projection from the data structure for the flow to that for the chemistry and from the first set of processors to the second.
"ADR can also be used as a kind of database," Saltz suggested. A single groundwater flow calculation, for example, could be stored in ADR for repeated use in studies of oil releases at different locations.

Large-scale PDE-constrained Optimization
Since partial differential equations describe the stress distributions in an airplane wing and the flow of air across a wing, a designer who seeks either a wing of minimum weight or a wing profile with low drag (let alone both simultaneously) must solve an optimization problem whose constraints are the governing differential equations. There are sophisticated codes for optimization and sophisticated codes for solving PDEs, but few that talk to each other efficiently. In Puerto Rico, Paul Boggs of Sandia National Laboratories described joint work done with his colleague Kevin Long to address that problem.
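
In generic form (the symbols here are chosen for illustration, not taken from the talk), the designer's problem is

    \min_{u,\,d} \; J(u,d) \quad \text{subject to} \quad c(u,d) = 0,

where d collects the design variables (say, the wing profile), u the state variables (the stresses or the flow field), J the objective (weight or drag), and c(u,d) = 0 the discretized governing equations.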

Typically, the PDE code solves the constraint equations only for a given value of the control or design variables. It does not provide the gradient information required for use of more sophisticated optimization methods, e.g., sequential quadratic programming (SQP) algorithms. Computing these gradients by finite differences or by automatic differentiation is usually not practical. Thus, a designer is often limited to search-based optimization. Such an approach is effective only if the number of design parameters is small and the expense of solving the PDE is not too great.

The two researchers have asked how PDE software could be written to help PDE-constrained optimization surmount these obstacles. Typically, Boggs explained, a linearized SQP algorithm computes updates of the control variables and Lagrange multipliers by using the action of the gradient of the PDE constraints and the Hessian of the Lagrangian of the original problem. Such information could come from the PDE solver if that capability were built into it and if the files involved were small enough to keep communication time insignificant compared with PDE solution time.
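
In the notation introduced above, form the Lagrangian by attaching multipliers to the constraints, L(u,d,lambda) = J(u,d) + lambda^T c(u,d). One step of a linearized SQP method, written here in generic textbook form rather than as the specific scheme of Boggs and Long, then solves a linear system of the type

    \begin{pmatrix}
      \nabla^2_{(u,d)} L & \nabla c^{\,T} \\
      \nabla c           & 0
    \end{pmatrix}
    \begin{pmatrix} \Delta(u,d) \\ \Delta\lambda \end{pmatrix}
    = -\begin{pmatrix} \nabla_{(u,d)} L \\ c \end{pmatrix},

whose blocks are exactly the actions of the constraint gradient and of the Hessian of the Lagrangian that a cooperative PDE solver would have to supply.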

Boggs and Long are well down the road toward the development of a flexible XML-based problem-specification format and communication protocol to link SQP optimizers and parallel PDE solvers. The ultimate goal of their work: PDE solvers and optimizers that are more compliant with the needs of large-scale PDE-constrained optimization.

Climate Modeling: Challenges for Computational Mathematics
Applied mathematicians can't do much about the weather, but they can do plenty to help those who model it. Improved computational speed, resolution, and accuracy can be as helpful to environmental policymakers as to someone deciding whether to pack an umbrella for work tomorrow. After outlining some of the contributions of the Parallel Climate Model (PCM), invited speaker Warren Washington of the Climate Change Research Section of the U.S. National Center for Atmospheric Research gave his audience at the 2000 SIAM Annual Meeting a list of specific challenges for computational mathematicians.

Saharan dust storm. Future models of such events will need to incorporate more detailed interactions between the atmosphere and the land.

PCM was developed with support from both the National Science Foundation and the Department of Energy. The basic framework for the model is the Navier-Stokes equations, with the ideal gas law and a hydrostatic assumption. The model sweeps up a range of phenomena, including some atmospheric chemistry, aerosol physics, land and ocean effects, river transport, and sea ice action.

The spatial computational grid of PCM has its pole over Hudson Bay, a relatively new feature resulting from the introduction of generalized coordinates to improve resolution near the equator and to better track El Niño, which is an equatorial feature. The atmospheric model uses spherical harmonics, FFTs, and semi-implicit time stepping. The ocean model solves a Helmholtz-type equation with finite differences. The sea ice computations are similar, but with the elastic viscoplastic behavior of the material also incorporated.

A part of the ocean model is basically "solving the Helmholtz equation on distributed computers," Washington said. "It is slow because there is lots of communication and little computation. It still needs a speed-up on the order of 5-15 to accommodate higher-resolution models."
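
The arithmetic-to-communication imbalance is easy to see in a toy version of the problem. One Jacobi-type sweep for the model equation u - alpha * Laplacian(u) = f on a uniform grid (a sketch for illustration, not the PCM ocean code) reads only four neighbors per point, so on a distributed machine each sweep must exchange subdomain boundary values while doing only a handful of flops per point.

    // Toy sketch (not the PCM code): one Jacobi sweep for the model
    // Helmholtz-type equation u - alpha * Laplacian(u) = f on a uniform
    // grid with spacing h. Per point: four neighbor reads, a few flops.
    // In a distributed setting, boundary ("halo") exchange between
    // subdomains dominates this small amount of computation.
    class HelmholtzJacobi {
        static void sweep(double[][] u, double[][] uNew, double[][] f,
                          double alpha, double h) {
            int n = u.length;
            double r = alpha / (h * h);
            for (int i = 1; i < n - 1; i++)
                for (int j = 1; j < n - 1; j++)
                    uNew[i][j] = (f[i][j]
                                  + r * (u[i - 1][j] + u[i + 1][j]
                                       + u[i][j - 1] + u[i][j + 1]))
                                 / (1.0 + 4.0 * r);
        }
    }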

Resolution is a challenge if regional climates are to be modeled correctly. Accurate modeling of evaporation over lakes, for example, requires incorporating the Great Salt Lake and, for evaporation over land, more than the twenty types of vegetation now included.

Another challenge discussed by Washington arises from his long list of needed computational climate change experiments: investigations of the effects of greenhouse gases, sulfate aerosols over cities (the haze you see from airplanes), stratospheric ozone, and biomass burning; historical simulations; and tests of various energy-use and emission-control strategies, among others. Pointing to the complexity of the systems involved, Washington made clear the need for an ensemble of computational experiments that can correctly replicate natural variability of the climate system, for example, temperature anomalies. Future models must also incorporate more detailed interactions between the atmosphere and the land, as in dust storms in the Sahara desert.

Next-generation climate modeling will be performed on large clusters of parallel computers via distant collaborations. Specific contributions that can be made by computational mathematicians include remapping from various grids, efficiently solving the Laplace equation on clustered machines, performing global FFTs, recoding algorithms designed for vector machines for use on distributed clusters, designing algorithms that make better use of caching, and improving numerical methods for all the constituent model components.

Paul Davis is dean of interdisciplinary and global studies at Worcester Polytechnic Institute.

