Parallel Processing '08: A Global Perspective on Research in Parallel Processing for Scientific Computing

June 11, 2008


Yoshio Oyanagi of Kogakuin University, Tokyo: It was application developers, rather than the computer science community, who initiated practical parallel processing for scientific computing in Japan.

Padma Raghavan

A highlight of SIAM's 2008 conference on parallel processing was a panel convened to discuss a top priority for the field: building an inclusive global community in parallel processing for scientific computing. Horst Simon of Lawrence Berkeley National Laboratory chaired the session; the panelists, experts from seven countries spanning five continents, brought to the discussion unique perspectives and insights on some key questions: How can we combine our global resources in computing to enable breakthroughs in such areas as energy and the environment? What are the major barriers to international collaboration? What specific areas are ripe for more collaboration? Do we need a one-teraflop/s-per-scientist program? What role can SIAM play?

With 32 of the world's 50 top supercomputers, the U.S. is a world leader, said Rick Stevens of Argonne National Laboratory. He went on to highlight new investments aimed toward petascale computing and beyond by major funding agencies, including the National Science Foundation and the Department of Energy's Office of Science. Through their INCITE and OCI initiatives, Stevens said, DOE and NSF are encouraging global collaboration. Together, the two agencies allocate about half a billion supercomputer hours to eligible scientists; recipients of the purely merit-based awards do not have to be U.S. citizens or at U.S. institutions. The awards are expected to quadruple in the next two years, Stevens said, but the community is not yet ready to exploit this capacity fully: New algorithms and software are needed.

He encouraged researchers to seek novel designs for computers capable of sustained performance of 1 teraflop/s per watt, along with ideas for expanding, by a factor of ten or more, a research community capable of developing parallel applications that can be scaled to run on 100,000 processors.

New methods are needed for exploring complex domains in an emerging class of problems in energy and in the environment, among other areas, Stevens continued. There are also opportunities for researchers in HPC to focus on quality-of-life issues--big social goals, including the economy and health. Finally, he said, there is a pressing need to speed up the adoption of methods and algorithms into application codes.

Considering the quest over the last three decades to build the world's fastest supercomputer, Yoshio Oyanagi of Kogakuin University in Tokyo mentioned key differences between Japan and the U.S. In particular, he contrasted the U.S. strategy of multiprocessors made up of scalar/superscalar processors with the Japanese emphasis on vector architectures. In Japan, he pointed out, practical parallel processing for scientific computing was initiated by application developers, rather than by the computer science community. Oyanagi alluded to one consequence of the Japanese architectural approach: Application scientists, "spoiled" by vectorization, have found it difficult to adapt to massively parallel machines.

Looking to the future, he said that Japan's next-generation (10 petaflop/s) machine is scheduled for completion in 2012, with a portion of it to be dedicated to international collaboration.

Thomas Lippert of the Gauss Centre for Supercomputing in Germany presented the European Union's vision for scientific computing (see page 3 for an article on Lippert's invited talk at the conference). The strategy in Europe, he observed, calls for the evolution of the existing Europe-wide partnership for advanced computing into a single legal entity, similar to the European Space Agency and CERN, by about 2010. Lippert envisions a three-tier pyramid, with regional centers at the base, national facilities in the middle tier, and a small number of EU-scale facilities at the apex.

Thomas Lippert of the Gauss Centre for Supercomputing, in Germany, envisions a European partnership for high-performance computing, with a structure much like that of the European Space Agency, in place within the next few years.

Narendra Karmarkar, principal scientific adviser to the government of India, portrayed high-performance computing in India as fueled by unique public and private partnerships. Companies like Wipro and Tata, he said, have focused on hardware and facilities, while the research community is concerned with algorithmic innovations targeting specific domains, such as computational chemistry at the Indian Institute of Science and computational mathematics at the Tata Institute of Fundamental Research.

The panelists from China, Brazil, and South Africa discussed efforts in their countries to harness the power of high-performance computing for solving grand challenge problems arising in science and society. China has a strong presence in grid computing, said Yuefan Deng of Nankai University. Researchers in China, via the China National Grid, have sequenced the rice genome and improved weather forecasts.

Alvaro Coutinho of the Center for Parallel Computing at the Federal University of Rio de Janeiro pointed out that Brazil, with a national laboratory for scientific computing that dates back to 1977, was one of the first Latin American countries to have supercomputing capability. Parallel computing in Brazil tends to be multidisciplinary, he said, with application domains including climate, high-energy physics, and bioinformatics.

Alvaro Coutinho of the Federal University of Rio de Janeiro: In Brazil, application domains for parallel computing are mainly interdisciplinary---including climate modeling, high-energy physics, and bioinformatics.

The business of buying and selling energy---a major industry in Brazil---also relies on high-end computing using parallel codes; almost every energy company has a cluster, Coutinho said.

Representing South Africa, Happy Sithole of the Centre for High Performance Computing in Cape Town spoke of the center's acquisition of a Blue Gene/P, the largest machine installed in Africa to date. Scientists at the center, he said, focus on the interfaces between software and applications, and between applications and hardware. Computing power, Sithole said, should be put to use not only for scientific discovery but also for the benefit of society and for outreach to high school students.

Happy Sithole of the Centre for High Performance Computing in Cape Town: Along with scientific research, computing power should be used to benefit society.

The hour was late (the panel got started at 8 PM, after a busy day at the conference), but the large and engaged audience kept the discussion going for well over two hours. There was a consensus that parallelism, although the key to sustained "Moore's law" performance scaling, is now grossly underutilized. The need is greater than ever before for transformative ideas---in research, education, and outreach---that will enable a "quantum jump" in software productivity, dramatically improve the adoption rate of new software and algorithms into existing computational science communities, and support the growth of communities in new areas of broad impact.

Padma Raghavan of Pennsylvania State University gave an invited talk, "When Sparse Applications Meet Architectures," at SIAM's 2008 parallel processing conference.

