In the past, the world was simple: an operating system took all the memory the hardware provided. With the virtualization of multiple “computers” on one machine, things became more complex. The individual virtual machines (VMs) must not be able to peek into each other’s memory. And even the host system that controls, manages and runs the VMs is no longer trusted unconditionally. System extensions such as AMD’s “Secure Encrypted Virtualization with Secure Nested Paging” (SEV-SNP) and Intel’s Trust Domain Extensions (TDX) seal the VMs off from each other and from the host system, for example by encrypting each VM’s memory and managing it with additional structures (reverse map tables, secure nested paging).
All these security measures cost performance: encryption and the additional bookkeeping take computing time. When an operating system boots, it initializes the available memory; in the case of a VM, this is the memory the host system currently allocates to it. Initializing at this point means spinning up the entire encryption machinery, which takes time and lengthens the boot.
If the operating system had less memory to prepare up front, it would be up and running faster. This is where the Unified Extensible Firmware Interface (UEFI) comes in with version 2.9, which introduces the concept of “unaccepted memory”: a system starts with its allocated memory in an unaccepted state and cannot use that memory until it explicitly accepts it from the host system.
To be able to boot at all, the boot loader of such a system pre-accepts just as much memory as the kernel requires for booting. The kernel then accepts all further memory piece by piece, as it is needed. This spreads the initialization of memory out over time, performing it on demand rather than on supply, and the shorter boot time gets the system up and running faster.
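The idea of on-demand acceptance can be modeled in a few lines of C. This is a hypothetical userspace sketch, not kernel code; the names and the 2 MiB granularity are assumptions for illustration:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CHUNK_SIZE (2UL << 20)  /* assumed accept granularity: 2 MiB */
#define NUM_CHUNKS 64

static bool accepted[NUM_CHUNKS]; /* bitmap: which chunks are accepted */
static unsigned accept_calls;     /* counts the expensive accept operations */

/* Stand-in for the firmware/hypervisor interface that accepts a chunk,
   i.e. the part that would set up encryption for it. */
static void accept_chunk(size_t idx)
{
    accepted[idx] = true;
    accept_calls++;
}

/* On-demand path: a chunk is accepted only when an address in it
   is used for the first time, not wholesale at boot. */
static void touch(uint64_t addr)
{
    size_t idx = (size_t)(addr / CHUNK_SIZE);
    if (!accepted[idx])
        accept_chunk(idx);
}
```

Touching many addresses in the same chunk costs only a single accept operation, and chunks that are never used are never accepted at all; that is exactly where the boot-time saving comes from.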
Linux 6.5 can handle the concept of “unaccepted memory.” Since the procedure for accepting memory differs between AMD SEV-SNP and Intel TDX, the two come in separate patches. And while on Intel TDX everything is smooth sailing, AMD SEV-SNP has a “problem”: SEV-SNP already has an existing user base on Linux; TDX does not.
Whereas TDX can practically start from scratch, an additional patch has to ensure compatibility for SEV-SNP, which Linux has supported since kernel 5.19. Until now, however, these kernels have not known the procedure for accepting memory. On a new host system with “unaccepted memory”, such guests would be allocated memory but would be unable to use it: any attempt by the guest to access an unaccepted memory area would be rejected with a fault. Old guest systems would be practically unable to run on a new host.
One option would be to teach the old kernels “unaccepted memory” by means of backports. But in the eyes of some kernel developers, doing that across the board is an illusory undertaking. In practice, this “interim solution” will remain in place for a long time, perhaps forever. It resembles x86 processors, which to this day still wake up in real mode; only by switching to protected mode do they leave the 1980s behind and become good for more than MS-DOS or CP/M-86.
Rust: a permanent construction site
As is typical of kernel development, the team settled on a fixed supported Rust version when Rust was introduced with Linux 6.1; the choice fell on the then already somewhat dated 1.62.0. Linux 6.5 brings the first Rust update in the mainline kernel: going forward, Rust 1.68.2, released on March 28 of this year, is the version of choice.
The choice is again conservative: the current release is Rust 1.72.0, which appeared just last Thursday. The version bump became necessary because the kernel needs Rust features that the old version did not offer.
The changes to the Rust integration are mainly limited to adjustments required by the toolchain change from Rust 1.62.0 to 1.68.2. For now, the current kernel sets the course for further expansion of the Rust integration rather than advancing the actual programming of kernel modules in Rust: it restructures the foundation.
Preventing memory leaks
Memory leaks are a perennial problem in the C programming language. Previously allocated memory areas remain behind as unusable “black holes” unless code explicitly releases them. To address the problem outside the C language standard, the dominant open-source compilers gcc and Clang have long offered a suitable extension.
This extension lets a variable be given a “cleanup” function via `__attribute__((cleanup(...)))`. As soon as the variable leaves its scope, the compiler inserts a call to this function, which tidily releases the resource. This “scope-based” resource management frees the programmer from having to release previously allocated areas at every possible exit point.
The kernel team pragmatically picked up a patch set from Peter Zijlstra that makes this compiler extension usable in the kernel. Zijlstra’s original approach targeted only locks; Linus Torvalds intervened and encouraged him to make the solution generally usable throughout the kernel.
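Zijlstra’s helpers wrap the raw attribute in macros such as `DEFINE_FREE()` and `__free()`. The following is a simplified userspace re-creation of that pattern to convey the idea, not the kernel’s actual header:

```c
#include <stdlib.h>

/* Define a named cleanup policy: a function that receives a pointer to
   the annotated variable and runs the given free expression on it. */
#define DEFINE_FREE(name, type, free_expr) \
    static void __free_##name(void *p) { type _T = *(type *)p; free_expr; }

/* Attach a previously defined policy to a variable. */
#define __free(name) __attribute__((cleanup(__free_##name)))

/* A policy for heap buffers: free if non-NULL (modeled on kfree()). */
DEFINE_FREE(kfree_like, void *, if (_T) free(_T))

static int demo(void)
{
    void *buf __free(kfree_like) = malloc(64);
    if (!buf)
        return -1;
    /* ... use buf ...; it is freed automatically on every return path */
    return 0;
}
```

Declaring the policy once and attaching it with a single annotation keeps call sites short, which matters in a code base with thousands of allocation-plus-error-path patterns.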
This recourse to compiler extensions will sooner or later make the kernel more resistant to memory leaks. Such leaks are more serious in the kernel than in an application: an application can be terminated to release leaked memory, but the kernel runs permanently, so there this only works by rebooting.
It is safe to assume that kernel developers will gratefully take up the new feature. After all, it elegantly frees them from having to think about, and program, the release of resources at every possible exit point.
On x86 systems, the Linux kernel can now bring CPUs online largely in parallel; previously, this was only possible CPU by CPU, in sequence. The parallel approach reduces the time needed to activate all processors in the system, in the best case to a tenth of the previous time. This is particularly useful when booting systems with many processors, and the new approach also shines when reactivating CPUs.
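The principle can be sketched in userspace with threads. This is an illustrative model, not the kernel’s actual x86 bringup code: instead of completing each CPU’s initialization before starting the next, all CPUs run their slow, independent init phase concurrently and the caller only waits once.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

#define NCPUS 8

static atomic_int cpus_online;

/* Per-CPU init: the slow, independent part (modeled here as a sleep,
   standing in for microcode load, timer calibration, etc.). */
static void *cpu_bringup(void *arg)
{
    (void)arg;
    usleep(10000);
    atomic_fetch_add(&cpus_online, 1);
    return NULL;
}

static int bring_all_online(void)
{
    pthread_t t[NCPUS];
    for (int i = 0; i < NCPUS; i++)   /* kick off all CPUs in parallel */
        pthread_create(&t[i], NULL, cpu_bringup, NULL);
    for (int i = 0; i < NCPUS; i++)   /* then wait for each to report in */
        pthread_join(t[i], NULL);
    return atomic_load(&cpus_online);
}
```

With the sequential scheme the total wait grows with the CPU count; with the parallel scheme it stays roughly one init time, which is where the order-of-magnitude saving on large machines comes from.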
Optimized code for the PCIe bus ensures that the kernel wastes less time waiting for PCIe devices. According to the PCIe specification, only devices with a transfer rate above 5 GT/s must actively signal their link-up to the system; slower devices, up to 5 GT/s, are not required to announce their presence and readiness. The kernel used to treat both cases identically and waited for active feedback even from the slow devices, which did not necessarily come. During system resume, this could mean a delay of about a minute until the system was awake and usable again. The PCIe specification, however, considers a wait of one second sufficient to recognize such a device as present. Linux 6.5 now implements this for the slow devices, which can significantly reduce the wait time, especially during resume.
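The decision logic boils down to a speed check. The following is an illustrative model of the change, with invented names and a simplified readiness flag, not the kernel’s actual PCI code:

```c
#include <stdbool.h>

#define PCIE_SLOW_LINK_MAX_GTS 5      /* up to 5 GT/s: no feedback required */
#define SLOW_LINK_WAIT_MS      1000   /* spec: one second suffices */
#define OLD_TIMEOUT_MS         60000  /* old worst case: about a minute */

struct pcie_dev {
    int  speed_gts;   /* link speed in GT/s */
    bool ready;       /* device has actively signaled link-up */
};

/* Returns how long (ms) the kernel waits before using the device. */
static int wait_for_device_ms(const struct pcie_dev *dev)
{
    if (dev->speed_gts > PCIE_SLOW_LINK_MAX_GTS)
        /* fast link: active feedback is mandatory, so waiting for it
           (up to the long timeout) is correct */
        return dev->ready ? 0 : OLD_TIMEOUT_MS;
    /* slow link: feedback may never come; a fixed one-second wait
       is enough to treat the device as present */
    return SLOW_LINK_WAIT_MS;
}
```

The old behavior corresponds to taking the fast-link branch for every device, which is why a silent slow device could stall resume for the full timeout.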
Hardware support beefed up
With kernel 6.5, Linux gets ready for the latest generation of hardware. It gets into position for USB4 v2, which allows transfer rates of up to 80 Gbps over USB-C, and even 120 Gbps in one direction, with 40 Gbps remaining possible in the opposite direction.
As a further innovation, the expansion of WiFi 7 support in Linux progresses. With “Multi-Link Operation” (MLO), WiFi 7 allows simultaneous transmission and reception of data across several frequency bands and channels. Provided the appropriate hardware is available, MLO allows the 2.4 GHz, 5 GHz and 6 GHz bands to be used simultaneously, with the goal of increasing bandwidth through this bundling.
The latest generation of the “Musical Instrument Digital Interface” is also on board with MIDI 2.0. Even the PS/2 driver code for mice and keyboards has been given a retread.
Linux 6.5 expands hardware support and pulls out a few stops for optimization and improved stability. The crucial work happens under the surface: the release may look like a maintenance release, but internally it sets many a course for the future.
A first prominent user of the new kernel has already been determined. Canonical has announced that its upcoming distribution Ubuntu 23.10 “Mantic Minotaur” will include the new Linux kernel 6.5.