Accelerating Customer Results with Accelerated Computing

Intel ramps accelerated computing and data center graphics business with streamlined product lineup and software ecosystem enablement.

Opinion

By Jeff McVeigh

In January, we launched our strongest-ever offerings for high performance computing (HPC) and AI with the 4th Gen Intel® Xeon® Scalable processors, Intel® Xeon® CPU Max Series and Intel® Data Center GPU Max Series. We also introduced the Intel® Data Center GPU Flex Series last year – a flagship product for media streaming, cloud gaming and AI inference – and the Habana® Gaudi®2 deep learning processor for training.

Co-designed with leading cloud service providers, enterprise and supercomputing customers, these products showcase key technical innovations, including the integration of high-bandwidth memory with x86 CPUs and advanced chiplet architectures. Intel’s full data center and AI hardware portfolio, including our Xeon and Habana products, has been developed to help our customers solve the world’s most difficult problems and train the largest AI models.

Accelerated computing and GPUs are among the fastest-growing segments of the computing market and central to Intel’s long-term success. We are seeing great customer support and we continue to demonstrate tremendous performance improvements in real-world HPC and AI workloads on these recently deployed products.

Building on this momentum, and in close engagement with customers on their requirements, we are simplifying and streamlining our data center GPU roadmap. This enables our customers and the ecosystem to maximize their investments in currently available Max Series and Flex Series GPUs, while ensuring next-generation products deliver significant leaps in performance and developer productivity.

Let me share details related to customer adoption, real-world application performance improvements and roadmap updates.

Early Customer Adoption

Our early efforts to ramp Intel® Xeon® processors and the Max Series and Flex Series GPUs into the data center market have been met with a positive reception from customers.

You have probably heard about Argonne National Laboratory, which will be deploying more than 60,000 Max Series GPUs and 20,000 Max Series CPUs to power the Aurora supercomputer this year. Aurora is expected to become the world’s first supercomputer with 2 exaflops of peak performance. Deployment is going well, with Intel collaborating closely on testing and development. Argonne expects the system to be accessible to early researchers by the third quarter of 2023.

Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories are installing thousands of nodes of 4th Gen Intel Xeon processors in their CTS-2 systems – the supercomputing workhorses of the Department of Energy (DOE). LLNL’s Intel Xeon-powered predecessor, Jade, recently contributed to the breakthrough in fusion energy, helping to design the optimal package for laser induction.

Los Alamos National Laboratory (LANL), another DOE research center, is installing more than 10,000 Max Series CPUs for its Crossroads supercomputer, which will power national security and wildfire research.  

The impact of these technologies on science, engineering and industry cannot be overstated.

Performance

GPUs have seen explosive growth in the HPC and AI space, with the flops contributed by GPUs on the TOP500 list of the world’s fastest supercomputers growing at three times the pace of those from CPUs. With the Max Series GPU, Intel introduced its most sophisticated processor ever, using the most advanced packaging and manufacturing processes, with rich features such as hardware-accelerated ray tracing, RAMBO cache, deep systolic arrays for AI ... the list goes on and on.

But how does it perform? At this week’s Intel Extreme Performance User Group (IXPUG) meeting, Tim Williams, deputy director of Argonne's Computational Science Division, presented performance data for real-world applications on production Max Series GPUs. For materials science, nuclear engineering, cosmology and plasma physics codes, researchers measured 30% to 260% speedups over leading alternative GPUs.

The Flex Series GPU is also showing leadership in media stream density and visual quality, and initial units are now shipping to cloud service providers and multinational companies to enable large-scale cloud gaming and media delivery deployments.

These early results give us tremendous confidence that our investments are already paying dividends for our customers and the developer ecosystem – and that our GPU products have the capabilities and scalability needed to help solve the world’s most challenging problems today and tomorrow.

Roadmap

With a goal of maximizing return on investment for customers, we will move to a two-year cadence for data center GPUs. This matches customer expectations for new product introductions and allows time for the ecosystem around each generation to develop.

Building on the momentum of the Max Series GPU, our next product in the Max Series family will be the GPU architecture code-named Falcon Shores. Targeted for introduction in 2025, Falcon Shores’ flexible chiplet-based architecture will address the exponential growth of computing needs for HPC and AI. We are working on variants for this architecture supporting AI, HPC and the convergence of these markets. This foundational architecture will have the flexibility to integrate new IP (including CPU cores and other chiplets) from Intel and customers over time, manufactured using our IDM 2.0 model. Rialto Bridge, which was intended to provide incremental improvements over our current architecture, will be discontinued.

The Flex Series product family will also move to a two-year cadence. We will discontinue the development of Lancaster Sound, which was intended to be an incremental improvement over our current generation. This allows us to accelerate development on Melville Sound, which will be a significant architectural leap from the current generation in terms of performance, features and the workloads it will enable.

In addition to streamlining our roadmap, we are increasing our focus on the software ecosystem. We will provide continuous updates for our Max Series and Flex Series products, with performance improvements, new features, expanded operating system support and new use cases to broaden the benefits of these products.
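For developers, that software work centers on the oneAPI open programming model, where a single SYCL source file can target both CPUs and GPUs. As a rough, illustrative sketch only – not Intel sample code and not tied to any of the products or performance results above – the example below shows the kind of SYCL kernel a oneAPI compiler such as icpx can offload to a data center GPU; the device selection, array size and values are assumptions chosen for brevity.

// Minimal SYCL sketch (illustrative only): offload an element-wise vector add
// to the default accelerator through the oneAPI programming model.
// Build with a SYCL-enabled compiler, e.g.: icpx -fsycl vector_add.cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;                       // Assumed problem size.
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q{sycl::default_selector_v};         // Picks a GPU when one is available.
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers manage host<->device data movement automatically.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];                  // One work-item per element.
            });
        });
    }  // Buffer destruction waits for the kernel and copies results back to c.

    std::cout << "c[0] = " << c[0] << " (expected 3)\n";
    return 0;
}

The same source can be recompiled for CPUs or other accelerators, which is the portability argument behind the oneAPI ecosystem described below.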

Accelerating Our Customers’ Work

Our accelerated computing products are in the market and ramping. The oneAPI open software ecosystem is maturing by the day. We have simplified our roadmap with the goal of doing fewer things better and are rapidly rolling out products to our customers. Stay tuned for frequent updates on deployments, workloads and performance. I look forward to sharing more at upcoming events and hope to see you at the International Supercomputing Conference (ISC) in May.

Jeff McVeigh is corporate vice president and interim general manager of the Accelerated Computing Systems and Graphics Group at Intel Corporation.