RISC-V is officially a decade old. Here’s a look at what the organization has accomplished and how it’s moving forward with extensions targeting specific computing environments and the industry at large.
What you’ll learn:
- RISC-V International is developing a number of optional extensions intended to support specific industries, as well as the overall computing industry, in new and unprecedented ways.
- Three examples of these extensions are provided: cryptography, vector processing, and reduced code size.
RISC-V recently celebrated its 10-year anniversary. Our community is now in a unique position to take advantage of the history of all that came before us in open-source software and hardware.
RISC-V is an open-source architecture built from the ground up to embody the principles of RISC computing. It’s a flexible platform appropriate for solutions targeting industry needs ranging from the Internet of Things (IoT) to supercomputers and everything in between.
We initially developed a compact instruction set architecture (ISA) with the ability to include common, optional, and custom extensions. Not surprisingly, the bar is much higher now than when the first commercial RISC chips showed up in products in the 1980s. This means that there are more requirements for ISA features as well as the need for a growing ecosystem to produce a deployable product.
Of course, the ISA is only the tip of the iceberg. By itself, it’s not useful, so we have developed, and continue to develop, a rich software ecosystem, ensuring that tools and features (e.g., simulators, verification tools, operating systems, hypervisors, debuggers, and compilers) are in place. In turn, RISC-V members can benefit from sharing common efforts with the community and accelerate innovation.
Our organization is developing a number of optional extensions intended to support specific industries as well as the overall computing industry in new and unprecedented ways. Three examples I’ll discuss here are cryptography, vector processing, and reduced code size.
Cryptography
No matter the industry or implementation, companies must decide on the level of security they need for their application. This is one of those areas where things have changed dramatically since the early days of RISC, from the need for improved encryption standards like AES to guarding against malicious attacks such as Spectre and Meltdown.
The RISC-V cryptography task group, for example, has designed a number of instructions that are shared with other extensions (e.g., RISC-V’s Bit Manipulation Extension) and is devising cryptography-specific extensions. For instance, to help the finance industry secure every transaction, we define extensions that reduce the instruction count for AES-128 encryption by more than an order of magnitude (from 1,145 instructions down to 78 on a 64-bit RISC-V design). With RISC-V’s vector extension, high-performance implementations can further reduce that to less than one instruction per block while simultaneously improving resistance to side-channel attacks. RISC-V International is frugal about what we allow into our ISA, and the task groups must demonstrate the value of extensions and instructions to the community.
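To see why sharing instructions with the Bit Manipulation Extension matters, consider bitwise rotation, which ciphers and hash functions (SHA-2, ChaCha, and others) use heavily. Without a dedicated rotate instruction, a compiler must emit a shift/shift/OR sequence. The following is a minimal Python sketch of that multi-operation fallback (the function name and example values are illustrative, not from the specification):

```python
def rotl32(x, n):
    """Rotate a 32-bit value left by n bits using only shifts and OR.

    This models the three-operation sequence (shift left, shift right,
    OR) that a compiler emits when no single rotate instruction exists;
    a bit-manipulation rotate collapses it into one instruction.
    """
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

# The top bit wraps around to the bottom:
print(hex(rotl32(0x80000001, 1)))  # 0x3
```

A hardware rotate not only shortens such hot loops but also executes in constant time, which matters for side-channel resistance.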
Vector Processing
Vector processing has been around for a long time, dating back to the ILLIAC IV, the CDC STAR-100, and, of course, the CRAY-1. It has always been valuable in market segments tackling workloads like weather prediction and sonar. Now, though, with the renaissance in artificial intelligence (AI) and the proliferation of machine learning (ML) across all types of applications and solutions, vector-processing requirements have become mainstream.
RISC-V has the benefit of hindsight: we witnessed all of the implementations that came before us, as well as the needs of modern applications and workloads. As a result, a set of architects within the community is designing the vector extension to handle the most demanding uses, such as sparse matrices.
Furthermore, because we’re creating the vector extension without the burden of history and with a holistic view, we paid close attention to exactly what’s needed. For example, to reduce the memory-system impact of very sparse matrix operations, we addressed virtual memory (page tables and TLBs) and memory access, giving implementations ways to efficiently reduce the cache impact of traversal operations.
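To illustrate why sparse matrices stress the memory system, here is a scalar Python sketch of sparse matrix-vector multiplication in compressed sparse row (CSR) form. The indexed read of `x` in the inner loop is a gather: the addresses depend on data, which is exactly the access pattern that vector indexed-load support is designed to handle efficiently. The names and layout here are illustrative only, not part of any RISC-V specification:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x.

    values:  nonzero entries, row by row
    col_idx: column index of each nonzero entry
    row_ptr: start offset of each row in values (len = rows + 1)
    """
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            # x[col_idx[k]] is a gather: a data-dependent memory
            # access that scatters reads across cache lines.
            y[row] += values[k] * x[col_idx[k]]
    return y

# The 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form:
values  = [1.0, 2.0, 3.0]
col_idx = [0, 2, 1]
row_ptr = [0, 2, 3]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

Because each gathered element may land on a different cache line, traversing a large sparse matrix can evict most of a cache for data that’s touched only once, which is why the extension’s architects paid attention to cache behavior for such traversals.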
Reduced Code Size
Reduced code-size requirements often emanate from embedded applications, such as IoT and similarly constrained devices. We created the C extension, which provides 16-bit encodings for the most common instructions in our standard 32-bit-wide instruction set. As you might imagine, this reduces memory requirements and improves cache locality.
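The base RISC-V encoding makes mixing 16- and 32-bit instructions cheap to decode: per the specification, a standard 32-bit instruction has both of its lowest two bits set to 0b11, while a compressed (C-extension) instruction does not, so a decoder can determine the length from the first halfword alone. A minimal sketch:

```python
def insn_length(first_halfword):
    """Return a RISC-V instruction's length in bytes from its
    first 16 bits: low two bits != 0b11 means a compressed
    (C-extension) 16-bit instruction; 0b11 means a standard
    32-bit encoding."""
    if first_halfword & 0b11 != 0b11:
        return 2  # compressed
    return 4      # standard 32-bit

# c.addi sp, -16 encodes as 0x1141 (low bits 01 -> compressed):
print(insn_length(0x1141))  # 2
# addi sp, sp, -16 encodes as 0xFF010113; its low halfword is
# 0x0113 (low bits 11 -> standard 32-bit):
print(insn_length(0x0113))  # 4
```

The two example instructions do the same stack adjustment, but the compressed form takes half the space, which is where the code-size and cache-locality savings come from.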
However, once we designed the C extension, it became clear that all other segments could use it to enhance their cache locality. As a result, the C extension became a base extension that most implementers include in their designs. But we didn’t stop there. We looked at the needs of embedded markets holistically and kicked off other extensions, such as Zfinx, which allows implementers to share registers between integer and floating-point operations, thereby reducing the area needed for registers.
With these examples, you can see that RISC-V is looking at the needs of specific industries and translating them into the appropriate RISC-V features. Such a holistic view plus RISC-V’s flexibility has attracted members from a broad set of industries. And it’s our intention to continue with this paradigm. As a result, you can one day expect to see RISC-V designs in your toaster as well as the largest supercomputers on earth and every computing platform in between.
For more information, check out https://riscv.org/technical/.