Intel announced the 2023 release of the Intel oneAPI tools, available in the Intel Developer Cloud and rolling out through regular distribution channels. The new oneAPI 2023 tools support the upcoming 4th Gen Intel Xeon Scalable processors, Intel Xeon CPU Max Series and Intel Data Center GPUs, including Flex Series and the new Max Series.
The tools deliver performance and productivity enhancements, and add support for new Codeplay plug-ins that make it easier than ever for developers to write SYCL code for non-Intel GPU architectures. These standards-based tools deliver choice in hardware and ease in developing high-performance applications that run on multi-architecture systems.
“We’re seeing encouraging early application performance results on our development systems using Intel Max Series GPU accelerators — applications built with Intel’s oneAPI compilers and libraries,” said Timothy Williams, deputy director, Argonne Computational Science Division. “For leadership-class computational science, we value the benefits of code portability from multivendor, multi-architecture programming standards such as SYCL and Python AI frameworks such as PyTorch, accelerated by Intel libraries.”
What oneAPI tools deliver
Intel’s 2023 developer tools include a comprehensive set of the latest compilers and libraries, analysis and porting tools, and optimized artificial intelligence (AI) and machine learning frameworks to build high-performance, multi-architecture applications for CPUs, GPUs and FPGAs, powered by oneAPI.
The tools enable developers to meet performance objectives from a single codebase, freeing time for innovation.
This new oneAPI tools release helps developers take advantage of the advanced capabilities of Intel hardware:
- 4th Gen Intel Xeon Scalable and Xeon CPU Max Series processors with Intel Advanced Matrix Extensions (Intel AMX), Intel QuickAssist Technology (Intel QAT), Intel AVX-512, bfloat16 and more.
- Intel Data Center GPUs, including Flex Series with hardware-based AV1 encoder, and Max Series GPUs with data type flexibility, Intel Xe Matrix Extensions (Intel XMX), vector engine, Intel Xe Link and other features.
- MLPerf DeepCAM deep learning inference and training performance with the Xeon Max CPU showed a 3.6x gain using Intel AMX, enabled by the Intel oneAPI Deep Neural Network Library (oneDNN), versus Nvidia at 2.4x with AMD as the 1.0 baseline.
“We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year,” added Williams.