At the 2021 International Supercomputing Conference (ISC), Intel showcased how the company is extending its lead in high-performance computing (HPC) with a range of technology disclosures, partnerships, and customer adoptions.
Intel processors are currently the most widely deployed compute architecture in the world’s supercomputers, enabling global medical discoveries and scientific breakthroughs. At the ISC event, the company announced advances in its Xeon processor for HPC and AI, as well as innovations in memory, software, exascale-class storage, and networking technologies for a range of HPC use cases.
“To maximize HPC performance, we must leverage all the compute resources and technology advancements available to us,” said Trish Damkroger, VP and GM of High-Performance Computing at Intel. “Intel is the driving force behind the industry’s move toward exascale computing, and the advancements we’re delivering with our CPUs, XPUs, oneAPI Toolkits, exascale-class DAOS storage, and high-speed networking are pushing us closer toward that realization.”
Advancing HPC performance
Earlier this year, Intel extended its leadership position in HPC with the launch of its 3rd Gen Intel Xeon Scalable processors. Compared with the previous generation, the latest processors deliver up to 53 percent higher performance across a range of HPC workloads, including life sciences, financial services, and manufacturing.
Compared to its closest x86 competitor, the 3rd Gen Intel Xeon Scalable processor delivers better performance across a range of popular HPC workloads. For example, when comparing an Intel Xeon Platinum 8358 processor to an AMD EPYC 7543 processor, NAMD performs 62 percent better, LAMMPS 57 percent better, RELION 68 percent better, and Binomial Options 37 percent better.
In addition, Monte Carlo simulations run more than two times faster, allowing financial firms to achieve pricing results in half the time. Intel Xeon Platinum 8380 processors also outperform AMD EPYC 7763 processors on key AI workloads, with 50 percent better performance across 20 common benchmarks.
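To give a sense of what this workload class involves (as an illustration only, not Intel’s benchmark code), a minimal serial Monte Carlo pricer for a European call option in C might look like the sketch below. All contract and market parameters here are hypothetical, and production HPC codes parallelize and vectorize the sampling loop, which is where wide-vector CPUs earn their speedups.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  /* Hypothetical contract and market parameters, for illustration only. */
  const double S0    = 100.0;  /* spot price         */
  const double K     = 105.0;  /* strike             */
  const double r     = 0.02;   /* risk-free rate     */
  const double sigma = 0.2;    /* volatility         */
  const double T     = 1.0;    /* maturity in years  */
  const double PI    = 3.14159265358979323846;
  const long   N     = 1000000; /* number of simulated paths */

  double payoff_sum = 0.0;
  srand(42);
  /* Production HPC versions parallelize this loop (e.g., with OpenMP)
     and use vectorized random number generators. */
  for (long i = 0; i < N; i++) {
    /* Box-Muller transform: two uniform draws -> one standard normal. */
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double z  = sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
    /* Terminal price under geometric Brownian motion. */
    double ST = S0 * exp((r - 0.5 * sigma * sigma) * T + sigma * sqrt(T) * z);
    payoff_sum += (ST > K) ? ST - K : 0.0;
  }
  /* The discounted average payoff estimates the option price. */
  printf("estimated call price: %.4f\n", exp(-r * T) * payoff_sum / N);
  return 0;
}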
HPC labs, supercomputing centers, universities, and original equipment manufacturers that have adopted Intel’s latest compute platform include Dell Technologies, HPE, the Korea Meteorological Administration, Lenovo, the Max Planck Computing and Data Facility, Oracle, Osaka University, and the University of Tokyo.
Integration of high-bandwidth memory
Workloads such as modeling and simulation (e.g., computational fluid dynamics, climate and weather forecasting, quantum chromodynamics), artificial intelligence (e.g., deep learning training and inference), big data analytics, in-memory databases, and storage power humanity’s scientific breakthroughs.
The next generation of Intel Xeon Scalable processors (code-named “Sapphire Rapids”) will offer integrated high-bandwidth memory (HBM), providing a dramatic boost in memory bandwidth and a significant performance improvement for HPC applications whose workloads are bound by memory bandwidth. Users will be able to run workloads using HBM alone or in combination with DDR5 memory.
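How software will address the two memory tiers was not detailed at ISC. As one plausible model, the sketch below assumes HBM is exposed the way Intel’s earlier HBM-equipped Xeon Phi processors exposed MCDRAM, via the open-source memkind library’s hbw_malloc interface; whether Sapphire Rapids adopts the same interface is an assumption here, not something Intel has confirmed.

#include <hbwmalloc.h>  /* memkind's high-bandwidth-memory API; link with -lmemkind */
#include <stdio.h>

int main(void) {
  const size_t n = 1u << 20;

  /* hbw_check_available() returns 0 when a high-bandwidth pool exists. */
  if (hbw_check_available() != 0)
    fprintf(stderr, "no HBM pool detected; the allocation policy decides fallback\n");

  /* Place only the bandwidth-critical array in HBM, leaving DDR for the rest. */
  double *a = hbw_malloc(n * sizeof *a);
  if (!a) { perror("hbw_malloc"); return 1; }

  for (size_t i = 0; i < n; i++)
    a[i] = (double)i;  /* a bandwidth-bound kernel would stream over this */

  hbw_free(a);
  return 0;
}

The design point this illustrates is selective placement: rather than moving an entire application into HBM, the hot, bandwidth-bound arrays go into the fast tier while capacity-hungry data stays in DDR.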
Customer momentum for Sapphire Rapids processors with integrated HBM is strong, with early wins including the U.S. Department of Energy’s Aurora supercomputer at Argonne National Laboratory and the Crossroads supercomputer at Los Alamos National Laboratory.
“Achieving results at exascale requires the rapid access and processing of massive amounts of data,” said Rick Stevens, associate laboratory director of Computing, Environment, and Life Sciences at Argonne National Laboratory. “Integrating high-bandwidth memory into Intel Xeon Scalable processors will significantly boost Aurora’s memory bandwidth and enable us to leverage the power of artificial intelligence and data analytics to perform advanced simulations and 3D modeling.”
The Sapphire Rapids-based platform will provide unique capabilities to accelerate HPC, including increased I/O bandwidth with PCI Express 5.0 (compared to PCI Express 4.0) and support for Compute Express Link (CXL) 1.1, enabling advanced use cases across compute, networking, and storage.
In addition to memory and I/O advancements, Sapphire Rapids is optimized for HPC and artificial intelligence (AI) workloads, with a new built-in AI acceleration engine called Intel Advanced Matrix Extensions (AMX). Intel AMX is designed to deliver a significant performance increase for deep learning inference and training. Customers already working with Sapphire Rapids include CINECA, the Leibniz Supercomputing Centre (LRZ), and Argonne National Laboratory, as well as the Crossroads system teams at Los Alamos National Laboratory and Sandia National Laboratories.
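Intel has already documented the AMX instructions and compiler intrinsics in its instruction set extensions reference, although no Sapphire Rapids code was shown at ISC. Purely as a sketch of the programming model (configure tiles, load int8 tiles, issue a tile dot-product accumulate), the example below uses the documented intrinsics; it requires a recent GCC or Clang with -mamx-tile -mamx-int8, a Linux kernel that grants AMX tile-state permission, and AMX-capable hardware to run.

#include <immintrin.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ARCH_REQ_XCOMP_PERM 0x1023  /* Linux arch_prctl request code        */
#define XFEATURE_XTILEDATA  18      /* AMX tile data state component number */

/* 64-byte tile configuration block, as documented for _tile_loadconfig. */
typedef struct {
  uint8_t  palette_id;
  uint8_t  start_row;
  uint8_t  reserved[14];
  uint16_t colsb[16];  /* bytes per row for each tile */
  uint8_t  rows[16];   /* rows for each tile          */
} tilecfg;

int main(void) {
  /* Linux requires a one-time opt-in before a process may use AMX state. */
  if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA))
    return 1;

  static int8_t  A[16][64], B[16][64];  /* B is VNNI-packed: 16 rows x 64 bytes */
  static int32_t C[16][16];
  memset(A, 1, sizeof A);
  memset(B, 2, sizeof B);

  tilecfg cfg = {0};
  cfg.palette_id = 1;
  cfg.rows[0] = 16; cfg.colsb[0] = 64;  /* tmm0: C, 16 rows x 16 int32 */
  cfg.rows[1] = 16; cfg.colsb[1] = 64;  /* tmm1: A, 16 rows x 64 int8  */
  cfg.rows[2] = 16; cfg.colsb[2] = 64;  /* tmm2: B, VNNI-packed int8   */
  _tile_loadconfig(&cfg);

  _tile_zero(0);
  _tile_loadd(1, A, 64);   /* load A with a 64-byte row stride            */
  _tile_loadd(2, B, 64);
  _tile_dpbssd(0, 1, 2);   /* C += A x B: int8 dot products, int32 accum  */
  _tile_stored(0, C, 64);
  _tile_release();
  return (int)C[0][0];     /* each element is 1*2 summed over 64 = 128    */
}

In practice, most developers would reach AMX through optimized libraries and frameworks (for example, via the oneAPI tools mentioned above) rather than raw intrinsics.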
“The Crossroads supercomputer at Los Alamos National Laboratory is designed to advance the study of complex physical systems for science and national security,” said Charlie Nakhleh, associate laboratory director for Weapons Physics at Los Alamos National Laboratory. “Intel’s next-generation Xeon processor Sapphire Rapids, coupled with HBM, will significantly improve the performance of memory-intensive workloads in our Crossroads system…enabling us to complete major research and development responsibilities in global security, energy technologies, and economic competitiveness.”