Xilinx, Inc., a provider in the adaptive-computing sector, recently introduced the Alveo U55C data-center accelerator card and a new standards-based, API-driven clustering solution for deploying FPGAs at massive scale. The Alveo U55C accelerator brings superior performance-per-watt to high-performance computing (HPC) and database workloads and easily scales through the Xilinx HPC clustering solution.
Purpose-built for HPC and big data workloads, the new Alveo U55C card is the company’s most powerful Alveo accelerator card, offering the highest compute density and HBM capacity in the Alveo accelerator portfolio.
Together with the new Xilinx RoCE v2-based clustering solution, a broad spectrum of customers with large-scale compute workloads can now implement powerful FPGA-based HPC clustering on their existing data-center infrastructure and network.
“Scaling out Alveo compute capabilities to target HPC workloads is now easier, more efficient and more powerful than ever,” said Salil Raje, executive VP and GM, Data Center Group at Xilinx. “Architecturally, FPGA-based accelerators like Alveo cards provide the highest performance at the lowest cost for many compute-intensive workloads. By introducing a standards-based methodology that enables the creation of Alveo HPC clusters using a customer’s existing infrastructure and network, we’re delivering those key advantages at massive scale to any data center.”
Built for HPC and big data applications
The Alveo U55C card combines many key features that today’s HPC workloads require. It delivers more parallelism of data pipelines, superior memory management, optimized data movement throughout the pipeline, and the highest performance-per-watt in the Alveo portfolio.
The Alveo U55C is a single-slot card in a full-height, half-length (FHHL) form factor with a 150W maximum power draw. It offers superior compute density and doubles the HBM2 to 16GB compared with its predecessor, the dual-slot Alveo U280 card. The U55C packs more compute into a smaller form factor for building dense Alveo accelerator-based clusters. It's built for high-density streaming data, high-I/O math, and big-compute problems that require scale-out, such as big data analytics and AI applications.
Leveraging RoCE v2 and data-center bridging, coupled with 200Gbps bandwidth, the API-driven clustering solution enables an Alveo network that competes with InfiniBand in performance and latency, with no vendor lock-in. MPI integration allows HPC developers to scale out Alveo data pipelining from the Xilinx Vitis unified software platform.
By using existing open standards and frameworks, it's now possible to scale out across hundreds of Alveo cards, regardless of server platform or network infrastructure, with shared workloads and memory. Software developers and data scientists can unlock the benefits of Alveo and adaptive computing through high-level programmability of the application and cluster using the Vitis platform.
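At its core, the clustering model described above comes down to partitioning one shared workload across many cards over a standard network. The sketch below is purely illustrative and uses no Xilinx API; it shows the kind of even, rank-based block decomposition an MPI program would perform when scaling a data pipeline across a cluster. The function name `shard_for_rank` and the parameters are hypothetical.

```python
# Illustrative rank-based sharding of a workload across accelerator cards.
# This mirrors a generic MPI block decomposition; it is not a Xilinx API.

def shard_for_rank(n_items, num_cards, rank):
    """Return the (start, end) slice of n_items owned by one card (rank).

    Items are split as evenly as possible: the first (n_items % num_cards)
    ranks each take one extra item, so no card is more than one item
    ahead of any other.
    """
    base, extra = divmod(n_items, num_cards)
    start = rank * base + min(rank, extra)
    end = start + base + (1 if rank < extra else 0)
    return start, end

# Example: 10 work items spread over 4 cards.
shards = [shard_for_rank(10, 4, r) for r in range(4)]
# shards == [(0, 3), (3, 6), (6, 8), (8, 10)]
```

In a real MPI program, `rank` would come from the communicator (e.g. each process asking for its rank in `MPI_COMM_WORLD`) and each card would then stream only its own slice of the pipeline.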
Xilinx has invested heavily in the Vitis development platform and tools flow to make adaptive computing more accessible to software developers and data scientists without hardware expertise. The major AI frameworks, including PyTorch and TensorFlow, are supported, as are high-level programming languages like C, C++, and Python. Developers can build domain solutions using specific APIs and libraries, or use Xilinx software development kits to accelerate key HPC workloads within an existing data center.
HPC customer use cases
CSIRO, Australia’s national science agency, is using Alveo U55C cards for signal processing in the Square Kilometre Array (SKA) radio telescope, the world’s largest radio astronomy antenna array. Deploying the Alveo cards as network-attached accelerators with HBM allows for massive throughput at scale across the HPC signal processing cluster.
The Alveo accelerator-based cluster allows CSIRO to tackle the massive compute task of aggregating, filtering, preparing, and processing data from 131,000 antennas in real time. The 460GB/s of HBM2 bandwidth across the signal processing cluster is served by 420 Alveo U55C cards, fully networked across P4-enabled 100Gbps switches. The Alveo U55C cluster delivers an overall processing throughput of 15Tb/s in a compact, power- and cost-efficient footprint.
CSIRO is now completing an Alveo reference design to help other radio astronomy projects and adjacent industries achieve similar results.
Ansys LS-DYNA crash simulation software is used by nearly every automotive company in the world. The design of safety and structural systems hinges on model performance, as finite element method (FEM) simulations mitigate the cost of physical crash testing.
FEM solvers are the primary algorithms driving simulations with hundreds of millions of degrees of freedom. These enormous computations can be broken down into more rudimentary solvers such as PCG (preconditioned conjugate gradient), sparse-matrix kernels, and ICCG (incomplete Cholesky conjugate gradient). By scaling out across many Alveo cards with hyperparallel data pipelining, LS-DYNA can accelerate performance by more than 5X compared with x86 CPUs. The result is more work per clock cycle in an Alveo pipeline, with LS-DYNA customers benefiting from game-changing simulation times.
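To make the PCG building block concrete, here is a minimal pure-Python sketch of a preconditioned conjugate gradient solver. It uses a simple Jacobi (diagonal) preconditioner as a stand-in for the more elaborate preconditioners (such as incomplete Cholesky, as in ICCG) that an FEM package would use; the function name and the tiny test system are hypothetical, not taken from LS-DYNA.

```python
# Illustrative preconditioned conjugate gradient (PCG) solver in pure Python.
# Solves A x = b for a symmetric positive-definite matrix A (list of lists),
# using a Jacobi (diagonal) preconditioner.

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A x, with x = 0
    M_inv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner M^-1
    z = [M_inv[i] * r[i] for i in range(n)]    # preconditioned residual
    p = z[:]                                   # initial search direction
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:   # converged on small residual
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# Tiny SPD test system with known solution x = [1/11, 7/11].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)
```

The dominant costs here, the matrix-vector product and the inner products, are exactly the kinds of dense, regular arithmetic that map well onto a hardware data pipeline, which is why solvers like this are natural candidates for FPGA acceleration.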
“In the spirit of relentless innovation, we’re excited about collaborating with Xilinx to significantly accelerate the finite-element solvers, which can represent 90 percent of the compute workload for implicit mechanics, in our LS-DYNA simulation application,” said Wim Slagter, strategic partnerships director at Ansys. “We look forward to Xilinx acceleration helping us in our mission to support innovators in engineering what’s ahead.”