10:30 AM-12:30 PM
Center City 1
MPI has achieved acceptance as a preferred approach for implementing distributed memory parallel (DMP) applications. OpenMP is gaining similar status for shared memory parallel (SMP) applications. At the same time, there is still tremendous opportunity for improvement. In this minisymposium, we present four efforts to provide better parallel computing capabilities beyond the traditional techniques. Specifically, we present languages with parallel expressions, communication libraries that are aware of memory architecture, and algorithms that take advantage of emerging architectures such as SMP clusters. We believe that such efforts are important to making qualitative advances in parallel computing.
Organizer: Michael A. Heroux