SiFive’s latest flagship RISC-V CPU will be revealed today – and we’re told it will sport proper virtualization support in hardware.
The P650 is offered as an application core you can license to drop into your system-on-chip, and run Linux and other OSes on it.
It’s aimed at an ambitious range of markets, from networking equipment and 5G infrastructure, to mobile devices, automotive, and aerospace. SiFive’s most important customers will be offered the P650 in Q1 2022, and it will be generally available to license by the middle of the year.
Chris Jones, VP of product marketing at SiFive, told us the P650 was designed with the expectation it will be used in system-on-chips built on a 5nm process node, and will clock at 2.7GHz or more at that node. How much energy it will need to run, and how much oomph it will deliver, will depend on how clients use the cores in their components, he said.
“Target TDP and performance-per-watt will vary by implementation depending on customer choice of cache and buffer sizes, and will be well-positioned against competitor products,” Jones told us.
The P650 is a 64-bit (RV64GBC) out-of-order core with a quad-issue 13-stage pipeline and three execution units; the P550 was triple issue. It can be configured with up to 128KB of L1 instruction and data caches, and up to four 256-bit memory ports. You can put up to 16 of the CPU cores into one coherent cluster at a time, with 1MB or more of shared L3 cache per core within that complex. SiFive said the design has a “large” instruction window and “advanced branch prediction,” plus other bits and pieces you’d expect in an application core today.
The wider pipeline gives the P650 a 40 per cent performance lift over the P550, according to SiFive, and with other optimizations in the design, the total boost is said to be 50 per cent.
SiFive claims its latest CPU core can achieve a benchmark score of approximately 11 SPECInt2006/GHz, and said this “compares favorably to the Arm Cortex-A77 across a variety of workloads.” The A77 launched in 2019. While the P650 represents SiFive closing in on the Cortex family, this core is not at the level of top-end Arm and x64 offerings.
Below is the obligatory diagram of the P650’s pipeline:
Left column, the load-store pipeline’s 13 stages. Center, a diagram showing the interconnections between the CPU core’s pipeline blocks. Right column, the 10 stages needed in integer operations. Source: SiFive
Interestingly enough, SiFive has gone ahead and implemented RISC-V’s hypervisor extension in the P650 – a first for the Silicon Valley outfit, at least.
The definition of this extension has been in the draft stages of development for a long while, and at time of writing it hasn’t been formally ratified by RISC-V International. That said, it is all but ratified: version 1.0 of the extension, marking its official release, is awaiting final sign-off, and can be inspected here in the latest draft RISC-V privileged specification – skip to chapter 8.
One assumes the extension will be ratified in time for next week’s RISC-V Summit; seeing as it has been so close to getting the green light for so long, SiFive simply went ahead and implemented it. It’s highly unlikely the spec will change between the time SiFive’s engineers worked on the feature and its formal ratification. SiFive’s Chris Jones told us the P650 will follow the 1.0 release candidate.
So, what does this mean? The RISC-V architecture already allows for basic hardware virtualization. If the Physical Memory Protection (PMP) feature is implemented in a core, a hypervisor running in what’s called machine mode can split the available RAM into partitions, and run a guest OS kernel in supervisor mode within each of those partitions.
Each supervisor-mode guest can’t step outside its own hardware-isolated partition without triggering a fault. Within its allocated physical RAM, the guest kernel can set up its own virtual memory using page tables, and run multitasking user-level applications. The hypervisor can then schedule guest threads onto physical cores, trap special instructions as needed, and everything just works.
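To make that concrete, here’s a minimal Python sketch of the PMP approach described above – the region bases, sizes, and guest names are made up for illustration, and this models the access check conceptually rather than programming any real PMP control registers:

```python
# Toy model of PMP-style partitioning: a machine-mode hypervisor carves
# physical RAM into a few contiguous regions, one per guest, and any
# access a guest makes outside its own region triggers a fault.

# Hypothetical partitions: (base address, size) per guest, contiguous in RAM.
regions = {
    "guest0": (0x0000_0000, 256 << 20),  # 256MB starting at 0
    "guest1": (0x1000_0000, 256 << 20),  # 256MB starting at 256MB
}

def check_access(guest, addr):
    """Return True if addr falls inside the guest's own PMP region."""
    base, size = regions[guest]
    if base <= addr < base + size:
        return True
    raise MemoryError(f"{guest}: access fault at {addr:#x}")

check_access("guest0", 0x0080_0000)      # inside guest0's partition: OK
try:
    check_access("guest0", 0x1800_0000)  # inside guest1's partition: fault
except MemoryError as e:
    print(e)
```

The key constraint visible here is the one the article goes on to note: each guest’s partition is a single contiguous slab of physical RAM, and there are only a handful of regions to hand out.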
But it’s rather clunky. You can only have so many PMP partitions, known as regions – there are typically 4 or 8 – which limits the number of active guests; each region must be contiguous in RAM; and there are other fiddly complications.
Enter the hypervisor extension, which handles guests more like you see on Intel, AMD, and Arm systems: primarily using page tables. Hypervisors aware of this extension can create virtualized environments for guests by using a second stage of page tables to map blocks of physical RAM into a virtual machine’s physical address space, which the guest OS can in turn break up into virtual memory spaces in which user-level programs run.
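The two-stage lookup described above can be sketched in a few lines of Python – page numbers and table contents here are invented for illustration, and a real MMU walks multi-level tables in hardware rather than dictionaries:

```python
# Toy model of two-stage address translation under the hypervisor
# extension. Stage 1 is owned by the guest OS (guest-virtual page ->
# guest-physical page); stage 2 is owned by the hypervisor
# (guest-physical page -> host-physical page). Note the guest's "RAM"
# no longer has to be contiguous in real memory.

PAGE = 4096

# Stage 1: the guest kernel's page table for one of its processes.
guest_page_table = {0x10: 0x2, 0x11: 0x7}   # guest VPN -> guest PPN

# Stage 2: the hypervisor scatters the guest's physical pages anywhere.
stage2_table = {0x2: 0x9A, 0x7: 0x41}       # guest PPN -> host PPN

def translate(gva):
    """Walk both stages: guest-virtual address -> host-physical address."""
    vpn, offset = divmod(gva, PAGE)
    gpn = guest_page_table[vpn]   # stage 1: guest-controlled
    hpn = stage2_table[gpn]       # stage 2: hypervisor-controlled
    return hpn * PAGE + offset

print(hex(translate(0x10_123)))   # -> 0x9a123
```

Because the hypervisor controls only stage 2, it can resize, move, or page out a guest’s memory without the guest’s page tables ever knowing.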
This is far more flexible: the hypervisor can juggle many more guests and manage virtual machines’ RAM more efficiently, and it arguably makes it easier to port hypervisors from other architectures to RISC-V. By virtualizing supervisor-mode CPU control registers, it also enables kernel-level hypervisors as well as supporting bare-metal ones.
According to SiFive’s representatives, this hypervisor support is coupled with an IOMMU first shown at the Linley Fall Processor Conference by the chip biz’s veep of architecture Shubu Mukherjee.
In all, it’s good news for people who want to eventually run modern, efficient type 1 and 2 hypervisors on RISC-V. KVM in Linux 5.16 is gaining support for the hypervisor extension on RISC-V chips, and Xen is said to be working on it, too. And now SiFive has a CPU core in its library that supports the extension.
Virtualization isn’t just useful in data-center machines and network infrastructure, it’s also used in automotive and other industrial sectors where, for example, user interface and control applications ought to be hardware isolated. ®