Intel’s Software-Defined Vehicle Delivers Efficiency and Performance

Intel offers the auto industry silicon-enforced virtualization features to create software-defined vehicles done right. 

News

  • March 12, 2024

What’s New: Using Intel’s market-leading silicon-enforced virtualization capabilities, Intel Automotive offers the industry the most performant and efficient approach to architecting a software-defined vehicle (SDV) – one that delivers 99% efficiency and zero latency. Consumer expectations for high-quality, personalized experiences demand a compute platform with this level of performance and headroom for multiple software workloads.

“We have the most power- and performance-efficient implementation of virtualization solutions on the market. Without this, automakers won’t be able to deliver the next-gen experiences they envision, giving consumers a poor-performing and slow-responding in-vehicle experience.”

– Jack Weast, Intel Fellow, vice president and general manager of Intel Automotive

Why It’s Important: The auto industry has been trying to deliver on the software-defined promise by using a hypervisor for software virtualization, an approach that creates a bottleneck and cannot scale with the performance demands of today’s workloads. Intel’s silicon-enforced separation enables a direct path that bypasses the hypervisor and frees up additional performance within the software for the higher-quality and new workloads that will unlock the next-gen features and services consumers crave.

How It Works: Think for a moment of the compute needed to power an SDV as if it were an electric vehicle (EV) with a fully charged battery. If the EV leaves home (Point A) and drives directly to its destination (Point B), it optimizes performance – in this case, the vehicle’s range. That’s how Intel’s silicon-enforced virtualization works – it makes an efficient trip to the hardware. But if the EV is forced to detour to an alternative location (Point C), it uses up vital energy and the trip takes longer. This forced “detour” is similar to the experience delivered by other silicon providers: too much of the virtualization functionality is implemented in software – that trip to Point C – before the workload reaches the underlying hardware. The detour ultimately leads to significant performance degradation.

Car journey analogies aside, a closer technical look at that journey shows the benefits that Intel’s market-leading virtualization capabilities deliver, in this case through the graphics processing unit (GPU).

A graphic shows GPU software virtualization capabilities that use a hypervisor compared with Intel’s plan for an SDV with hardware-enabled physical separation.

The graphic above shows the different journeys taken when virtualization must be done in software versus physically separated at the silicon level. On the left, to run multiple GPU-based workloads via a hypervisor, the virtual machines must go through the hypervisor and then the service operating system (OS) – a path that requires hundreds of extra lines of code and uses valuable bandwidth – before they can access the GPU. Conversely, when using Intel SDV system-on-chips (SoCs) with single-root I/O virtualization (SR-IOV), each workload is separated directly at the GPU silicon level, freeing the software layers to enable additional performance and functionality with zero latency.
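To make the SR-IOV path more concrete, below is a minimal sketch of how SR-IOV virtual functions are typically created on a Linux host through the standard sysfs interface, so that each guest can be handed its own hardware-isolated slice of a device such as the GPU. The PCI address and the number of virtual functions are hypothetical, and the sketch illustrates the generic Linux SR-IOV mechanism rather than Intel Automotive’s specific software stack.

```python
# Sketch: enabling SR-IOV virtual functions (VFs) via the standard Linux sysfs
# interface, so each guest VM gets its own hardware-isolated device slice
# instead of routing every call through a hypervisor/service-OS software path.
# The PCI address below is hypothetical.

from pathlib import Path

GPU_PCI_ADDR = "0000:00:02.0"  # hypothetical PCI address of the GPU physical function
DEVICE_DIR = Path("/sys/bus/pci/devices") / GPU_PCI_ADDR


def enable_virtual_functions(num_vfs: int) -> None:
    """Ask the kernel driver to expose `num_vfs` SR-IOV virtual functions."""
    total_vfs = int((DEVICE_DIR / "sriov_totalvfs").read_text())
    if num_vfs > total_vfs:
        raise ValueError(f"device supports at most {total_vfs} VFs")

    # Writing the count to sriov_numvfs creates the virtual functions.
    (DEVICE_DIR / "sriov_numvfs").write_text(str(num_vfs))


def list_virtual_functions() -> list[str]:
    """Return the PCI addresses of the VFs; each can be passed through to a
    separate guest (e.g., IVI, instrument cluster, rear-seat entertainment)."""
    return sorted(link.resolve().name for link in DEVICE_DIR.glob("virtfn*"))


if __name__ == "__main__":
    enable_virtual_functions(4)  # e.g., one VF per in-vehicle domain
    print(list_virtual_functions())
```

Each virtual function appears to the guest as its own PCI device, which is what lets the workload reach the GPU directly rather than through a software emulation layer.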

A graphic compares performance benchmarks of SR-IOV and virtual-only separation (VirtIO) using GFX Manhattan 3.0 (offscreen).

The second illustration shows the efficiency benefits of Intel SDV SoCs compared with virtual-only separation (VirtIO). Using the industry-standard graphics benchmark GFX Manhattan 3.0 (offscreen), results show that when running a single workload, the Intel-based approach can operate at 99% efficiency, compared with approximately 43% efficiency for the competition. In real terms, this means that if you run a workload that needs 100 frames per second (FPS), the Intel solution delivers 99 FPS with zero latency, while the alternative solution delivers 43 FPS plus additional workload-dependent latency. This example merely scratches the surface of the advantages of Intel’s market-leading virtualization capabilities, and the benefits extend similarly to AI-based workloads, and even to headless workloads that don’t use a GPU or AI accelerator.
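As a rough sketch of that arithmetic, the snippet below applies the efficiency figures quoted above to a 100 FPS rendering target; the function name and values are illustrative and are not output from an actual benchmark run.

```python
# Back-of-the-envelope view of the efficiency comparison described above.
# Efficiency figures (99% and ~43%) and the 100 FPS target come from the
# article's example, not from an independent measurement.

def effective_fps(native_fps: float, virtualization_efficiency: float) -> float:
    """Frames per second actually delivered after virtualization overhead."""
    return native_fps * virtualization_efficiency


native_target = 100  # workload that needs 100 FPS on bare metal

print(f"SR-IOV  (99% efficient): {effective_fps(native_target, 0.99):.0f} FPS")
print(f"VirtIO (~43% efficient): {effective_fps(native_target, 0.43):.0f} FPS")
```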

What It Means for In-Vehicle Experiences: Virtualization is the key to unlocking the next-gen experiences that consumers crave. With it, drivers and passengers will experience a much more responsive vehicle. Think higher frame-rate performance during game play, the beauty of 3D map applications instead of 2D, real-time 3D visualizations across multiple displays within the vehicle, or enhanced safety with real-time AI inferencing.

All of this is coupled with the reassurance and convenience of over-the-air updates capable of delivering the next era of services and features across the vehicle's entire lifetime.

More Context: The Software-Defined Vehicle is Here (Jack Weast Editorial) | Intel Automotive News from CES 2024