
Friday, November 19, 2010

#ALGORITHMS: "Sandia upgrades supercomputer benchmarks"

Supercomputers used to be honking big central processing units (CPUs) with massively parallel vector processors alongside. Today, however, supercomputer makers have all but given up on massive multi-chip CPUs, instead opting for massively parallel interconnects to manage large numbers of single-chip microprocessors. As a result, the applications they run must cope with the multiple simultaneous operations that the new Graph 500 benchmark measures. Look for supercomputer makers within five years to embrace the new Graph 500 benchmark, which uses graph theory to divide and conquer the analysis of output streams from massive simulations. R. Colin Johnson, Kyoto Prize Fellow @NextGenLog

This synthetic graph was generated by a method called Kronecker multiplication. Larger versions of this generator, modeling real-world graphs, are used in the Graph 500 benchmark. (Courtesy of Jeremiah Willcock, Indiana University)
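For a concrete sense of how such graphs are built, here is a minimal Kronecker-style edge generator in the spirit of the Graph 500 reference code; the initiator probabilities, function name, and parameters below are illustrative assumptions, not the official specification.

    import random

    # Sketch of a Kronecker-style edge generator (assumed parameters, not the
    # official Graph 500 spec). Each edge is placed by recursively choosing one
    # quadrant of the adjacency matrix per bit of the vertex index.
    def kronecker_edges(scale, edge_factor, a=0.57, b=0.19, c=0.19):
        n = 2 ** scale                    # number of vertices
        for _ in range(edge_factor * n):  # number of edges to generate
            row = col = 0
            for bit in range(scale):
                r = random.random()
                if r < a:                 # top-left quadrant: both bits stay 0
                    pass
                elif r < a + b:           # top-right quadrant
                    col |= 1 << bit
                elif r < a + b + c:       # bottom-left quadrant
                    row |= 1 << bit
                else:                     # bottom-right quadrant (d = 1-a-b-c)
                    row |= 1 << bit
                    col |= 1 << bit
            yield row, col

Called as list(kronecker_edges(scale=10, edge_factor=16)), this yields 16 edges per vertex over 2^10 vertices, and the repeated quadrant biasing produces the skewed, clustered structure seen in the figure above.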

Here is what my story in EETimes says about Graph 500: Exascale supercomputers running a thousand times faster than today's petaflop machines will require newer performance measures, according to Sandia National Laboratories, which announced a 30-member committee effort to define a new standard with Intel, IBM, AMD, NVIDIA, and Oracle...Graph 500 differs from the traditional Linpack benchmark by testing a supercomputer's skill at using graph theory to analyze the output streams from simulations of biological, security, social and similar large-scale problems. Graph 500 not only measures the traditional number-crunching ability of supercomputers, but also their ability to shuttle around the very large data sets handled by future supercomputers, like those being addressed by the U.S. Department of Energy's exascale supercomputer initiative...
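The benchmark's timed kernel is a breadth-first search over that synthetic graph, scored in traversed edges per second (TEPS). As a rough single-machine illustration (real entries run tuned, distributed searches over billions of edges, and the function name here is my own), a BFS that builds the parent tree the benchmark validates looks like this:

    from collections import defaultdict, deque

    # Single-node sketch of the BFS kernel that Graph 500 times; the parent
    # tree it returns is what the benchmark checks for correctness.
    def bfs_parents(edges, root):
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)              # the benchmark graph is undirected
        parent = {root: root}
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:       # first visit assigns the parent
                    parent[v] = u
                    queue.append(v)
        return parent

Feeding the generator's edge list into bfs_parents(edges, root=0) exercises exactly the irregular, memory-bound access pattern that Linpack's dense matrix arithmetic never touches.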
Full Text: http://bit.ly/NextGenLog-cc67