VA Tech: New Player in Supercomputing

December 1, 2003

Virginia Polytechnic Institute and State University made a bold entry into the supercomputer arena this fall, building a high-performance, low-cost cluster of 1100 Apple G5s with a fast interconnect. The story, appealing on several counts, caught the attention of New York Times business reporter John Markoff, whose article describing the "home-brew supercomputer" put together by student volunteers (repaid for their efforts with pizza) appeared on October 22; USA Today followed with an article on the feat on November 12. Neither publication is known for its close monitoring of developments in computing hardware.

Two features make the Virginia Tech computer especially newsworthy: its high measured performance (10.28 teraflops on the Linpack benchmark) and its low cost (approximately $5.2 million). Readers of SIAM News will be equally impressed by a third feature: the communication/networking expertise of the Virginia Tech researchers who designed the computer.
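A quick back-of-the-envelope calculation, using only the figures quoted above, puts the price/performance in perspective; the sketch below (Python, purely illustrative) works out the cost per Linpack teraflop:

```python
# Back-of-the-envelope price/performance for the Virginia Tech cluster,
# using the figures quoted in the article (not independently verified).
cost_dollars = 5.2e6          # approximate total cost
linpack_tflops = 10.28        # Linpack benchmark result, in teraflops

dollars_per_tflop = cost_dollars / linpack_tflops
print(f"~${dollars_per_tflop:,.0f} per Linpack teraflop")
# ~$505,837 per Linpack teraflop
```

By this crude measure the machine comes in at roughly half a million dollars per Linpack teraflop, the price/performance that drew much of the media attention discussed below.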

Benchmarking was carried out about three weeks before the planned mid-November announcement of the latest Top500 list of computers at Supercomputing 2003. The Virginia Tech computer was expected to place in the top three---not bad for a university just putting its toe into the supercomputer waters! Only the Japanese Earth Simulator and the ASCI Q at Los Alamos National Lab were expected to rank higher. (As this issue went to press, SIAM News learned that Virginia Tech's computer had indeed placed third on the list.)

A conversation with Jack Dongarra, who maintains the Top500 list, puts the Virginia Tech accomplishment into perspective. The list is based on a single benchmark, he says; a parallel machine, of course, has to be considered in the context of the applications to be run on it. "There has to be a balance: floating point, software, communications, and I/O." The Virginia Tech computer "pushes on the floating-point side," Dongarra says, with the emphasis on applications that don't need regular accesses to memory. Offering the iterative solution of a large sparse matrix as an example of an application that may not be suited to this architecture, Dongarra looks forward to seeing "how well it will do on a wider set of applications."
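To make Dongarra's concern concrete, the sketch below (a generic Python/NumPy illustration, not code from Virginia Tech or Dongarra) shows a compressed-sparse-row matrix-vector product, the kernel at the heart of iterative sparse solvers; the indirect indexing into x is exactly the kind of irregular memory access that distinguishes such applications from dense Linpack:

```python
import numpy as np

# Minimal CSR (compressed sparse row) matrix-vector product, y = A @ x.
# The gather x[col_idx[...]] is the irregular, indirect memory access
# pattern on which iterative sparse solvers depend.
def csr_matvec(row_ptr, col_idx, values, x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        # Entries of x are fetched in the (arbitrary) order given by col_idx.
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# Tiny example: a 3x3 sparse matrix with 4 nonzeros.
row_ptr = np.array([0, 2, 3, 4])
col_idx = np.array([0, 2, 1, 0])
values  = np.array([1.0, 2.0, 3.0, 4.0])
x       = np.array([1.0, 1.0, 1.0])
print(csr_matvec(row_ptr, col_idx, values, x))  # [3. 3. 4.]
```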

These reservations aside, future advances in very-high-end computing may very well come from unexpected directions, and novel ideas are being developed to advance capabilities. Institutions with the highest computing requirements, like the national laboratories, will continue to push architectures in new directions in pursuit of petascale computing.

Many large clusters have been built, and innovative approaches are needed to achieve good performance as the number of processors increases. The designers of IBM's Blue Gene/L, for example, set out to solve this scalability problem. Their motivation, according to David Klepacki of IBM, a speaker at the recent SIAM Conference on Mathematics for Industry (see article in this issue), is the limited success achieved to date once the number of processors grows much beyond a thousand. "It's hard to run MPI when there are more than a thousand nodes," he says, "and it's especially tough when there's a bug."
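For readers who have not used MPI, the sketch below (a generic mpi4py example, not drawn from Blue Gene/L's software stack) shows the kind of collective operation, a global sum across all ranks, whose coordination cost grows with the number of participating nodes:

```python
# Generic MPI sketch (mpi4py): a global sum via Allreduce, a collective
# whose latency grows with the number of participating ranks and which
# becomes harder to run and debug as node counts climb past ~1000.
# Run with, e.g.:  mpirun -n 4 python <this_script>.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local = np.array([float(rank)])           # each rank contributes its own value
total = np.zeros(1)
comm.Allreduce(local, total, op=MPI.SUM)  # every rank receives the global sum

if rank == 0:
    print(f"{size} ranks, global sum = {total[0]}")
```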

Blue Gene/L (only a small fraction of which is now running) is a homogeneous collection of simple independent processing units. Its developers did not set out to achieve the highest possible density of transistors on a chip; rather, they were looking for tradeoffs that would make it possible to address memory and connectivity needs on the chip while keeping power and thermal requirements low. Their goal: sustained performance of one petaflops (a quadrillion floating-point operations per second) within six years.

Blue Gene/L has been developed to address a collection of grand challenge problems, beginning with protein folding. Other problem areas that will be considered when one of the machines is installed at Lawrence Livermore National Lab (alongside ASCI Purple) include molecular dynamics, shock turbulence, and hydrodynamic instability.

Virginia Tech's computer may not hold the answers to all the grand challenge problems, but it is a very attractive way to jump-start a university program and support new programs in computational science and engineering. Virginia Tech is in the process of developing a graduate program in CSE, according to Terry Herdman, director of research computing at Virginia Tech, where he is also director of the Interdisciplinary Center for Applied Mathematics. Twelve new hires have been designated as CSE positions, he says, and CSE has been identified as a high-priority research area at the university. Appropriate computing resources are an important factor in attracting students and faculty, says Herdman, who is a former SIAM vice president for education.

The main reason for the media interest in Virginia Tech's computer, one suspects, is the cost aspect, and the reported investment is surely modest for a "top 3" computer. But from the perspective of the SIAM community, Virginia Tech's achievement may be to have found a novel approach for developing a program in computational science and engineering, one that could well serve as a model for other universities.---JMC

