
HPE and Intel® Omni-Path Fabric enable groundbreaking HPC.

Data-intensive high-performance computing is almost its own brand of HPC. For instance, making all of a large-scale cluster’s storage nodes visible to all of its compute nodes — when the storage is in petabytes and the number of cores is in the tens of thousands — is a challenge unto itself. But then performing world-class supercomputing on such a cutting-edge system is a challenge exponentially greater still.

Intel’s Omni-Path Architecture (OPA), as deployed in Hewlett Packard Enterprise clusters, meets both challenges skillfully.

The Pittsburgh Supercomputing Center’s flagship HPC system, Bridges, is an HPE cluster built from Integrity Superdome X, ProLiant DL580 and Apollo 2000 servers. Central to Bridges’ operation are Intel® Omni-Path Host Fabric Interface adapters and 48-port Omni-Path Edge Switches.

Nick Nystrom, the center’s Director of Strategic Planning, noted that “The Omni-Path fabric… [is] essential to delivering very high performance for both new jobs and traditional HPC jobs. Omni-Path has many features for maintaining very high-bandwidth, very low-latency, and very high quality of service. Omni-Path from Intel® and its integration with HPE servers will allow us to provide scalable solutions ranging from small clusters up through national-scale resources.”

 

From cutting-edge genomics to Texas Hold’em

According to PSC’s benchmarks, the Intel® OPA fabric lets Bridges deliver 12.37 GB/s of bandwidth at 930-nanosecond latency. Tightly coupled applications can run on more than a thousand cores: the 48-port switches interconnect islands of 42 nodes, each containing 28 Intel® Xeon processor cores, giving 1,176 cores per island at full bisection bandwidth across the 100 Gbps fabric. (By contrast, 36-port switches would complicate such a tight configuration at full bandwidth, requiring some switches to be doubled up while others sat partly idle, and the extra hops or added rack space of the more complex topology could slow things down further.)
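For a rough sense of how those island numbers fit together, here is a back-of-the-envelope sketch in Python. The node, core, and link-rate figures are the ones quoted above; the aggregate-bandwidth line is simple derived arithmetic, not a published PSC benchmark.

```python
# Back-of-the-envelope island arithmetic (illustrative only, not an official
# PSC/HPE sizing tool). Figures quoted in the article: 42 nodes per 48-port
# Omni-Path edge-switch island, 28 Intel Xeon cores per node, 100 Gbps links.

NODES_PER_ISLAND = 42     # per the article
CORES_PER_NODE = 28       # per the article
LINK_GBPS = 100           # Omni-Path link rate, per the article

cores_per_island = NODES_PER_ISLAND * CORES_PER_NODE        # 1,176 cores
aggregate_injection_gbps = NODES_PER_ISLAND * LINK_GBPS     # derived, not benchmarked

print(f"Cores per full-bisection island: {cores_per_island}")
print(f"Aggregate injection bandwidth:   {aggregate_injection_gbps} Gbps")
```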

So, what do all these high-performance specs mean in real-world applications? In January 2017, at the Rivers Casino in Pittsburgh, Bridges handily defeated all four of its opponents, some of the world’s best Texas Hold’em poker players. The Carnegie Mellon University School of Computer Science’s Libratus AI system ran 19 million core hours on Bridges in preparation for the poker tournament.

As CMU’s Tuomas Sandholm explains, Libratus was working within a space of 10^161 possible situations. That number is greater than the number of atoms in the entire universe, so brute-force calculation across such a vast parameter space was out of the question. Instead, Libratus’ machine-learning algorithms, made possible by Bridges’ OPA-enabled, high-bandwidth, low-latency operation, incrementally refined its winning strategy against the four championship-level players after each day’s play.
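To see why brute force is hopeless at that scale, consider a quick order-of-magnitude check in Python. The 10^161 figure is the one quoted above; the evaluation rate and age-of-universe numbers are illustrative assumptions, not measurements from the article.

```python
# Rough sanity check on why brute force is out of reach. The 10**161 figure is
# from the article; the evaluation rate and age-of-universe figures below are
# illustrative assumptions, not benchmarks.
SITUATIONS = 10 ** 161            # possible situations, per the article
EVALS_PER_SECOND = 10 ** 18       # assume an exascale-class machine, one evaluation per flop
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.38e10   # commonly cited estimate

years_needed = SITUATIONS / EVALS_PER_SECOND / SECONDS_PER_YEAR
print(f"Years to enumerate every situation: {years_needed:.1e}")
print(f"Multiples of the universe's age:    {years_needed / AGE_OF_UNIVERSE_YEARS:.1e}")
```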

“We were like, we’ve figured this out. [Bridges] has these holes. We’re going to take it on,” says player Jason Les. “And then it improved.”

Libratus ultimately took home a stunning $1,766,250 in theoretical money over the tournament’s 120,000 total hands.

OPA enables Bridges to apply a heterogeneous approach to the diverse array of problems its users need to address. One case involved large memory requirements: University of Georgia geneticists used Bridges compute nodes with 3 to 12 TB of RAM, a memory footprint broad enough to allow comparison of full microbiomes in the human gut. That capability, in turn, opens new insights into the complex workings of Type 2 diabetes within realistic simulations of human gut flora.

Another group of researchers needed Bridges to broaden the span of variables open to their simulations. Rather than the hundreds of variables available to typical HPC deployments today, Bridges’ Superdome X and ProLiant DL580 servers integrated with OPA fabric open up a regime of hundreds of thousands of variables for simulation and analysis. Medical researchers from the University of Pittsburgh, Carnegie Mellon University and PSC integrated broad ranges of data sets from genetic assays to fMRI and other imaging data. Their analyses of the interrelationships between cancer, chronic lung disease and brain disorders are pioneering new discoveries in integrative medicine.

Finally, researchers from Harvard University and the Allen Institute for Brain Science leveraged Bridges’ high-performance data analytics capability to analyze 35 TB of data on the visual cortex of a mouse. HPCwire recognized this research with its Editors’ Choice Award at the SC16 conference. Bridges’ phenomenal capacity for data analysis enabled the brain researchers to take a “major step,” the award said, “in reconstructing brain connections in a way that helps scientists understand how the millions of nerve cells in the brain communicate and work together.”

Whether pioneering new frontiers in deep learning or opening up broad-spectrum analytics on memory-intensive, storage-intensive or compute-intensive problems, Bridges has blazed many new trails. Its OPA fabric interconnects the entire system and brings truly scalable, high-performance, and low-latency computing to the HPE ecosystem of Superdome, Apollo, and ProLiant servers.
To discover how OPA can change the game for your HPC system, contact us today.
