On the Threshold of a New Era for Parallel Computing

June 25, 2004

Tamara G. Kolda

The skies may have been rainy, but spirits were sunny at the SIAM Conference on Parallel Processing for Scientific Computing, held at Fisherman's Wharf in San Francisco, February 25-27, 2004. One thing that made the meeting special was the large number of people who chose to attend---386, in fact, which is an 85% increase over the previous (2002) meeting in the series.

Parallel computing is on the threshold of a new era, Horst Simon observed in his opening remarks. (Simon, Mike Heroux, and Padma Raghavan co-chaired the organizing committee for the meeting.) In 1996, Simon pointed out, Sandia National Laboratories' ASCI Red parallel supercomputer, with 8000 processors, was the fastest computer in the world. Eight years later, the fastest computers still have fewer than 10,000 processors. For almost a decade, then, the parallel processing community has worked in the same regime of parallelism.

This is expected to change, by an order of magnitude, in 2005, when the 64,000-processor IBM Blue Gene/L is installed at the Lawrence Livermore National Lab. In the coming years, as systems with 100,000 or more processors become available, the parallel algorithms and tools community will face exciting new challenges.

Despite the beat of a heavy downpour on the windows above the audience, Charbel Farhat (University of Colorado, Boulder) got the meeting off to a great start with a plenary talk on the impact of large-scale simulation on the design and operation of supersonic aircraft.

There is space here to highlight only three of the other plenary talks. Chris Johnson (University of Utah) discussed the challenges of making codes high-performance and modular, and stressed the importance of software engineering. He described the BioPSE framework, a component-based problem-solving environment for biomedicine, and its application to cardiology and neuroscience. Johnson identified computing across the scales---from genome to cell to tissue to organ---as the top research challenge for biomedical computing.

In a talk on performance modeling, one of the conference themes, Adolfy Hoisie (Los Alamos National Lab) said that performance is, in some sense, bottlenecked by the quest for a single number---e.g., for the best peak performance. Instead, he pointed out, performance depends (in some cases nonlinearly) on many factors. For example, bandwidth and latency alone do not predict the performance of the interconnect network. Using more sophisticated and realistic performance modeling, researchers at LANL have successfully diagnosed and corrected performance problems on DOE supercomputers.
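Hoisie's point can be illustrated with a toy model. The sketch below (illustrative only, not LANL's actual methodology) uses the classic latency-bandwidth estimate of message time and adds a hypothetical contention factor, showing how a third variable can swamp predictions based on latency and bandwidth alone:

```python
# Illustrative sketch (not LANL's model): a simple latency-bandwidth
# ("alpha-beta") estimate of message transfer time, plus a contention
# term showing why latency and bandwidth alone can mispredict
# interconnect performance.

def message_time(nbytes, latency_s, bandwidth_bps, contention=1.0):
    """Estimated transfer time: T = alpha + n * contention / beta.

    contention > 1 models shared links reducing effective bandwidth,
    a factor the basic alpha-beta model ignores.
    """
    return latency_s + nbytes * contention / bandwidth_bps

# A 1 MB message on a hypothetical 5-microsecond, 1 GB/s link:
t_ideal = message_time(1_000_000, 5e-6, 1e9)                  # no contention
t_loaded = message_time(1_000_000, 5e-6, 1e9, contention=4.0)  # 4-way sharing
print(f"ideal: {t_ideal*1e3:.3f} ms, contended: {t_loaded*1e3:.3f} ms")
```

With four-way link sharing, the predicted time roughly quadruples even though the nominal latency and bandwidth are unchanged, which is the kind of nonlinearity Hoisie described.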

Mike Heath (University of Illinois at Urbana-Champaign) stressed that the parallelization of individual numerical modules is no longer the main theme of parallel computing. The new push is to be able to use numerical modules together for a full simulation. Thus, interfaces for parallel numerical simulations are a major issue.

Additional plenary talks rounded out the program.

Completing the program were more than 50 minisymposia, 59 contributed talks, and 27 posters. The sheer number of presentations led to a shortage of rooms; consequently, attendees sometimes had to pass through one session room to reach another. Attendees and speakers alike handled the situation graciously.

Fault tolerance was the focus of several minisymposia. As parallel machines grow ever larger, the need for fault-tolerant algorithms and MPI implementations becomes increasingly pressing. We learned that several fault-tolerant versions of MPI are available; furthermore, algorithm developers are making major strides in finding alternatives to full-disk checkpointing.
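To make the idea concrete, here is a generic sketch of application-level checkpoint/restart for an iterative computation, the baseline pattern that fault-tolerant algorithms improve on. It is illustrative only: real HPC codes checkpoint in parallel, and the alternatives mentioned above avoid writing full state to disk at all. The file name and solver iteration are hypothetical.

```python
# Generic checkpoint/restart sketch (illustrative, not any specific
# MPI implementation). State is saved periodically; after a crash,
# the run resumes from the last checkpoint instead of from scratch.
import os
import pickle

CKPT = "state.ckpt"  # hypothetical checkpoint file name

def run(n_steps, ckpt_every=10):
    # Resume from the last checkpoint if one exists.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            step, x = pickle.load(f)
    else:
        step, x = 0, 1.0
    while step < n_steps:
        x = 0.5 * (x + 2.0 / x)  # stand-in for one real solver iteration
        step += 1
        if step % ckpt_every == 0:
            with open(CKPT, "wb") as f:
                pickle.dump((step, x), f)
    return x

print(run(50))  # Newton iteration converging toward sqrt(2)
```

The cost of this approach, writing the full state to stable storage at every checkpoint interval, is exactly what motivates the diskless and algorithm-based alternatives discussed at the meeting.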

Another often-echoed theme was the importance of good software engineering practices. In addition to being fast and scalable, numerical libraries need to be easy to incorporate into large-scale application codes. Such libraries facilitate the quick deployment of new algorithms. Sessions on component technologies, data structures, and interfaces addressed the design and usability of numerical libraries.

The ability to model, predict, and improve performance was another common theme, considered in sessions on performance analysis, automatic algorithm tuning, and grid technologies.

Applications of parallel computing were not restricted to machines with 1000+ processors. Many users reported scientific advances on Beowulf clusters with 10-100 nodes. Applications discussed in minisymposia included structural dynamics, circuit simulation, nanoscience, the oil and gas industry, particle simulation, computational biology, and accelerator physics.

Of course, no SIAM meeting would be complete without minisymposia on algorithms, and this meeting---with sessions on linear solvers, multigrid and multilevel methods, meshing, load balancing, optimization, combinatorial algorithms, and eigenvalues---was no exception.

The meeting closed with the presentation of prizes for the three best posters and three best student talks. Student presentation awards of $500 went to Clemens Kadow, Carnegie Mellon University; Chuang Li, Pennsylvania State University; and Ivana Veljkovic, Pennsylvania State University. Poster awards (for which eligibility was not limited to students) of $200 went to Charles Peck, Joshua Hursey, and Joshua McCoy, Earlham College; John Gunnels and Fred Gustavson, IBM T.J. Watson Research Center; and Ricardo Oliva and Juan Meza, Lawrence Berkeley National Laboratory. A subset of the organizing committee served as judges for these awards.

John Gilbert, Bruce Hendrickson, Alex Pothen, Horst Simon, and Sivan Toledo had organized a workshop on combinatorial scientific computing, which was held immediately following the conference. The 82 registered attendees heard 21 talks and took part in the spirited discussions that punctuated the workshop.

Several short courses were offered in conjunction with the meeting: Parallel Computing with STAR-P, by Alan Edelman; High Performance Programming in the Partitioned Global Address Space, by Kathy Yelick; The ACTS Collection, by Tony Drummond; and Component Software for High-Performance Computing: Using the Common Component Architecture, by David Bernholdt.

The National Science Foundation provided support for the conference, including funding for 20 travel awards as well as the student presentation and poster awards. The Department of Energy provided funding for both the conference and the Combinatorial Scientific Computing workshop.

Kudos to the SIAM staff, program committee, session chairs, presenters, and the many attendees, all of whom contributed to making this a great meeting.

Tamara Kolda is a senior member of the technical staff at Sandia National Laboratories.
