What’s New in Xen 4.13

I am pleased to announce the release of Xen Project Hypervisor 4.13. This latest release improves security and hardware support, adds new options for embedded use cases, and reflects a wide array of contributions from the community and ecosystem. It also represents a fundamental shift in the long-term direction of Xen: it solidifies Xen's resilience against security threats from side-channel attacks and hardware issues, helps integrators and operators simplify system maintenance and reduce downtime using the new live-patching and run-time microcode-loading features, and brings new features that ease adoption for embedded and safety-critical use cases, specifically ISO 26262 and ASIL-B.

SECURITY

Core Scheduling (contributed by SUSE)

Core scheduling is a newly introduced experimental technology that allows Xen to group virtual CPUs into virtual cores and schedule these on physical cores. Furthermore, since switching between virtual cores on a physical core is synchronized, virtual CPUs belonging to different virtual cores never run at the same time on a single physical core. Core scheduling on Xen has been implemented in a scheduler-agnostic fashion, which means that the algorithm works with all Xen schedulers.

Before the introduction of core scheduling, virtual CPUs could be scheduled on any of the threads and cores of a CPU, making workloads vulnerable to side-channel attacks that leverage information leaks from shared core resources such as caches and micro-architectural buffers, and for which the only existing mitigation is to disable hyper-threading.

Core scheduling is a necessary, but not sufficient, milestone towards enabling users to re-enable hyper-threading and reclaim its performance benefits while reducing or eliminating the risk of hardware security issues. In conjunction with the secret-free Xen Project Hypervisor currently being worked on by Amazon, and the synchronized scheduling currently being investigated by SUSE and Citrix, it will become possible in the next release(s) to provide better trade-offs between security and performance.

Initial benchmarks have shown that for many workloads, core scheduling makes it possible to reclaim lost performance even when used on its own. We encourage our users to test core scheduling so that we can tune it for future releases.
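
Core scheduling is selected at boot time via the scheduling granularity option on the Xen command line. The snippet below is a minimal sketch of a GRUB entry, assuming the sched-gran parameter documented for 4.13 (values cpu, core or socket) and example file paths; adapt it to your own boot configuration.

    # Example GRUB2 menu entry enabling core-granularity scheduling (sketch;
    # file paths are placeholders; sched-gran=cpu is the default behaviour).
    menuentry 'Xen 4.13 (core scheduling)' {
        multiboot2 /boot/xen.gz sched-gran=core smt=on
        module2    /boot/vmlinuz root=/dev/sda1 ro console=hvc0
        module2    /boot/initrd.img
    }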

Branch hardening to mitigate against Spectre v1 (contributed by Amazon and Citrix)

In Xen 4.13 we have made the hypervisor more resilient to Spectre v1 gadgets through branch hardening. This removes a number of potential gadgets, reducing the attack surface exploitable via Spectre v1.
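
To illustrate what branch hardening means in practice, the sketch below shows the classic Spectre v1 pattern of clamping a guest-controlled index so that a mis-predicted bounds check cannot be abused. It is a simplified, hypothetical example built around Xen's array_index_nospec() helper; the table and function are invented for illustration, not copied from hypervisor code.

    /* Hypothetical example; the table and function are made up for illustration. */
    #include <xen/lib.h>      /* ARRAY_SIZE() */
    #include <xen/nospec.h>   /* array_index_nospec() */
    #include <xen/errno.h>

    static uint32_t table[16];

    static int get_entry(unsigned int idx, uint32_t *out)
    {
        if ( idx >= ARRAY_SIZE(table) )
            return -EINVAL;

        /*
         * A Spectre v1 gadget arises when the CPU mis-predicts the bounds
         * check above and speculatively reads table[idx] out of bounds,
         * leaking data through a cache side channel. array_index_nospec()
         * clamps idx to a safe value on speculative paths as well.
         */
        *out = table[array_index_nospec(idx, ARRAY_SIZE(table))];
        return 0;
    }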

SERVICEABILITY

Late uCode loading (contributed by Intel)

uCode updates typically contain mitigations for HW vulnerabilities and are typically updated during system initialization or kernel boot, which requires a reboot and implies a long down-time. Xen 4.13 introduces late uCode loading in which the Xen Hypervisor deploys a uCode update with no need to reboot the system.
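
In practice, a late update from dom0 looks roughly like the following, assuming the xen-ucode utility shipped with the Xen tools; the microcode file path is only an example and depends on your CPU and vendor package.

    # Apply a vendor microcode blob to all CPUs without rebooting (sketch).
    xen-ucode /lib/firmware/intel-ucode/06-55-04

    # Confirm the new microcode revision in the hypervisor log.
    xl dmesg | grep -i microcode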

Improved live-patching build tools (contributed by AWS)

Numerous improvements to the live-patch build tools have been added, such as the capability of patching inline assembly, improvements to stacked modules, support for module parameters, additional hooks and replicable apply/revert actions, extended Python bindings for automation, and the concept of expectations for additional validation of live patches.
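
For context, deploying a patch produced by the build tools follows roughly the workflow below, using the xen-livepatch management tool; the patch name and file path are placeholders.

    # Upload a previously built live patch to the running hypervisor,
    # apply it, and check its state (names and paths are placeholders).
    xen-livepatch upload xsa999 /root/xsa999.livepatch
    xen-livepatch apply xsa999
    xen-livepatch list

    # A patch can later be reverted and removed again if necessary.
    xen-livepatch revert xsa999
    xen-livepatch unload xsa999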

EMBEDDED AND SAFETY-CRITICAL APPLICATIONS

OP-TEE support (contributed by EPAM)

While Xen on x86 has supported guest Trusted Execution Environments via TXT and TPM access for quite some time, Xen on Arm did not allow TrustZone access for unprivileged guests. Xen 4.13 adds support so that all guests can concurrently run Trusted Applications in Arm's TrustZone without interfering with one another. The required changes were also released in Linux kernel 5.2 and OP-TEE 3.6. This feature was tested with Android P running as a DomU Xen guest with experimental Android HALs – Keymaster and Gatekeeper – on a Renesas R-Car H3 SoC.

Xen OP-TEE support is fully functional in Xen 4.13 (some improvements will still be upstreamed), but there is still work to be done in OP-TEE. The most notable missing feature is the sharing of hardware (like crypto accelerators or RPMB) between VM contexts in OP-TEE.

This feature was developed in cooperation with the Linaro Secure Working Group, which maintains OP-TEE. To use this feature you need to build and install OP-TEE with virtualization support as described at https://optee.readthedocs.io/en/latest/architecture/virtualization.html. You also need to build Xen with OP-TEE mediator support (this feature is in “Technology Preview” state and is not enabled by default).
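
In outline, the setup involves three pieces, sketched below. The CFG_VIRTUALIZATION flag comes from the OP-TEE documentation linked above; the Xen Kconfig symbols and the xl option name mentioned in the comments should be double-checked against your tree, since the mediator is a Technology Preview feature.

    # 1. Build OP-TEE with virtualization support (platform name is an example):
    make PLATFORM=rcar CFG_VIRTUALIZATION=y

    # 2. Build Xen with the OP-TEE mediator enabled in Kconfig
    #    (e.g. CONFIG_TEE / CONFIG_OPTEE, gated behind the expert options).

    # 3. Expose OP-TEE to a guest via its xl configuration file:
    #    tee = "optee"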

Renesas R-CAR IPMMU-VMSA driver (contributed by EPAM)

Modern automotive computing systems use hypervisors to centralize vehicle functions or to isolate them in mixed-criticality systems. In both cases, peripheral access from guests (e.g. for a shared GPU) must be protected with IOMMUs, improving overall system performance and security. Xen 4.13 extends its automotive processor support by adding a driver for the VMSA-compatible IOMMU of Renesas Electronics' Arm-based 3rd generation R-Car system-on-chips. This is the first IOMMU in Xen that supports functional safety, which is an important milestone towards making Xen compliant with ASIL-B requirements.

The IOMMU sub-system on Arm was updated to support the generic IOMMU device tree bindings (https://www.kernel.org/doc/Documentation/devicetree/bindings/iommu/iommu.txt). A generic way was added to register a DT device (one sitting behind an IOMMU) using the generic IOMMU DT bindings before assigning that device to a domain. While the newly added IPMMU driver supports the generic IOMMU DT bindings, Arm's SMMU driver does not yet – this is still to be done.
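
As a rough illustration of the bindings involved, the device tree fragment below shows a device mastered through an IPMMU instance via the generic iommus property, and additionally marked for guest assignment with Xen's xen,passthrough property. Node names, addresses and the IPMMU cell value are invented for the example and not taken from a real R-Car device tree.

    /* Illustrative host device tree fragment (values are examples only). */
    ipmmu_vc0: iommu@fe6b0000 {
            compatible = "renesas,ipmmu-r8a7795";
            reg = <0 0xfe6b0000 0 0x1000>;
            #iommu-cells = <1>;
    };

    example_device@e6800000 {
            /* Generic IOMMU binding: this master sits behind ipmmu_vc0. */
            iommus = <&ipmmu_vc0 16>;
            /* Xen-specific marker: assign this device to a guest. */
            xen,passthrough;
    };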

Renesas IPMMU-VMSA support is considered a Technology Preview feature for now and is only expected to work with the newest R-Car Gen3 SoC revisions (H3 ES3.0, M3-W+, etc.).

Dom0-less passthrough and ImageBuilder (contributed by Xilinx)

Dom0-less support in Xen 4.13 has been extended to include device assignment. It is now possible to assign devices to dom0-less VMs, which is essential because dom0-less VMs don't have access to any PV devices. With dom0-less device assignment, a user can set up a purely static partitioning system where each VM has access to a portion of the devices on the board.
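
Dom0-less VMs are described directly in the host device tree under /chosen, with the assigned device's partial device tree supplied as an additional boot module. The fragment below is a hedged sketch based on the dom0-less booting documentation in the Xen tree; addresses, sizes and especially the compatible string of the passthrough module are assumptions to be checked against that documentation.

    /* Sketch of a dom0-less guest definition in the host device tree. */
    chosen {
            domU1 {
                    compatible = "xen,domain";
                    #address-cells = <1>;
                    #size-cells = <1>;
                    memory = <0 0x40000>;   /* in KB */
                    cpus = <1>;

                    module@42000000 {
                            compatible = "multiboot,kernel", "multiboot,module";
                            reg = <0x42000000 0x1000000>;
                            bootargs = "console=ttyAMA0";
                    };

                    /* Partial device tree describing the assigned device
                     * (compatible string assumed from the dom0-less docs). */
                    module@43000000 {
                            compatible = "multiboot,device-tree", "multiboot,module";
                            reg = <0x43000000 0x10000>;
                    };
            };
    };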

In addition, a new tool called ImageBuilder (see https://wiki.xenproject.org/wiki/ImageBuilder and https://gitlab.com/ViryaOS/imagebuilder) has been added, which can be used to automate building Xen dom0-less configurations for U-Boot. The tool takes care of generating all the loading addresses and editing the device tree, making dom0-less Xen much easier to use.
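
As an example of what ImageBuilder automates, a configuration file along the following lines is passed to its U-Boot script generator, which then computes load addresses and produces a boot script. Variable names and the invocation in the comment follow the ImageBuilder wiki page linked above; all file names are placeholders.

    # Sketch of an ImageBuilder config file for one dom0 and one dom0-less domU.
    MEMORY_START="0x0"
    MEMORY_END="0x80000000"

    DEVICE_TREE="board.dtb"
    XEN="xen"
    DOM0_KERNEL="Image-dom0"
    DOM0_RAMDISK="dom0-rootfs.cpio.gz"

    NUM_DOMUS=1
    DOMU_KERNEL[0]="Image-domU"
    DOMU_RAMDISK[0]="domU-rootfs.cpio.gz"

    UBOOT_SOURCE="boot.source"
    UBOOT_SCRIPT="boot.scr"

    # Generate the U-Boot script from the config:
    #   bash ./scripts/uboot-script-gen -c ./config -d . -t tftp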

Also see https://xenproject.org/2019/12/16/true-static-partitioning-with-xen-dom0-less/

SUPPORT FOR NEW HARDWARE

Xen 4.13 brings support for a variety of hardware platforms. Most notably, Xen 4.13 introduces support for 2nd Generation AMD EPYC™ processors with exceptional performance-per-dollar, connectivity options, and security features. In addition, Xen 4.13 also supports the Hygon Dhyana 18h processor family, the Raspberry Pi 4 and Intel AVX-512.

OTHER NOTABLE CHANGES

  • Many bug fixes and quality improvements to the Xen on Arm port
  • Xen 4.13 is now fully Python 3 compatible (3.3+). The minimum supported Python 2 version has been raised to 2.6
  • Alongside this release, a new set of Windows PV Drivers has been released. These are available for download from the 9.0.0 drivers download pages

SUMMARY

This release contains 1640 commits from 64 developers. A significant number of contributions for this release of the Xen Project came from Citrix, SUSE, Arm, EPAM, Amazon, Xilinx, Intel, Invisible Things Lab, Bitdefender, Hygon and other vendors, as well as a number of universities and individuals.

As with Xen 4.12, we spent a lot of energy on improving code quality and hardening security. On behalf of the Xen Project Hypervisor team, I would like to thank everyone for their contributions (either in the form of patches, code reviews, bug reports or packaging efforts) to the Xen Project.

We are currently establishing the future direction for the project with the following focus areas, some of which are already reflected in Xen 4.13:

  • More resilience to hardware security issues
  • Reducing downtime when applying uCode updates and security patches, and ultimately upgrading Xen without any downtime
  • Refactoring Xen on Arm to become the best open source virtualization platform for safety-relevant use-cases: this means filling some functional gaps, while changing the codebase to make it possible for vendors to consume Xen Project software in a fashion that is compliant with ISO 26262 ASIL B or IEC 61508 SIL 1 requirements, while delivering security benefits and minimizing the impact for established Xen Project users. During this release cycle, the project created a functional safety working group (FuSa SIG), which is staffed and supported by representatives from the Xen Project community and Safety Assessors. The initial main focus of the FuSa SIG is to establish a credible plan to achieve safety-certification and to help guide its implementation.

Please check our acknowledgement page, which recognises all those who helped make this release happen. The source can be located in the tree (tag RELEASE-4.13.0) or can be downloaded as a tarball from our website. For detailed download and build instructions check out the guide on building Xen 4.13. More information can be found at

A BIG THANK YOU

I want to thank all our contributors who made this release happen, and in particular Jürgen Groß (SUSE), who has been serving as release manager since Xen 4.11. Paul Durrant (Amazon) will be release manager for Xen 4.14. Paul has been an active Xen developer for many years, making significant code contributions to advance Windows support in the Xen Project. He is also project lead for the Windows PV Drivers subproject, which has just released the 9.0.0 drivers.


Intel and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

AMD, the AMD logo, EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.
