


NVIDIA on RVA23:
“We Wouldn’t Have Considered Porting CUDA to RISC-V Without It”

August 7, 2025

By setting a clear, stable standard, the RVA23 profile’s ratification is spurring top vendors to align on a common RISC-V hardware goal. All we need now is that hardware.

By James De Vile, Editor, RISC-V International

CUDA is coming to RISC-V, in yet another vote of confidence for an ecosystem entering a new phase of maturity. NVIDIA becomes the latest in a growing list of vendors, including Red Hat and Canonical, to have taken the ratification of the RISC-V RVA23 profile as a go signal.

The announcement, made at RISC-V Summit China 2025, was a ‘strategic technology disclosure’, rather than a product launch. Afterwards, I spoke with the bearer of this good news – NVIDIA’s VP Multimedia Architecture Frans Sijstermans – about what needs to happen next.

Before we get to that, here’s some background for the uninitiated: CUDA is NVIDIA’s proprietary software framework for AI and machine learning using its GPUs – which offer speedups measured not in percentages, but in orders of magnitude. Since its launch in 2007, it has become a foundational tool in areas such as AI training, HPC, advanced simulations, genomics, autonomous mobility and massive inference pipelines.

“We know we can bring CUDA to RISC-V and it’ll work with the new wave of RVA23 hardware”

— Frans Sijstermans

With a developer base of over 4 million, NVIDIA’s 90% market share in GPU-based neural network training is largely driven by the ease with which developers can build and train AI models using CUDA, alongside deep learning frameworks like PyTorch and TensorFlow.

The team had considered porting CUDA to RISC-V back in 2022. But, Sijstermans tells me, bringing CUDA to RISC-V isn’t like cross-compiling a simple toolchain. It requires the ISA and surrounding platform to have reached a level of maturity, predictability, and performance that, until very recently, didn’t exist outside commercial ISAs.

“When you compare where things were then to where they are now, though, it’s another story entirely”, he recalls. “There are still plenty of things to be finished, but it’s getting very, very close. We know we can bring CUDA to RISC-V and it’ll work with the new wave of RVA23 hardware.”

RVA23: The Stable Hardware Target the Ecosystem Needs

The ratification of the RVA23 profile was the catalyst for NVIDIA’s work. “We wouldn’t have considered this without RVA23”, says Sijstermans. “It didn’t just give us a stable hardware target – it gave us the reassurance we’d been waiting for.”

CUDA needs real, server-class RISC-V hardware, and RVA23 finally makes that possible. With the first RVA23-compliant development boards now in active development, a growing number of industry heavyweights are throwing their hats in the ring. Canonical, for example, has pivoted its development of Ubuntu for RISC-V entirely to RVA23. These major moves will hopefully encourage others to open their stacks to RISC‑V too.

Critically, RVA23’s mandated extensions and architectural requirements provide the stability and connection to the hardware needed for CUDA, which relies heavily on architectural support. But CUDA also requires features that extend beyond the base ISA, such as Unified Virtual Memory (UVM), driver-level integration, and kernel execution contexts that span CPU and GPU. UVM, for example, refers to a system where both the CPU and GPU can access a single, unified memory space, simplifying memory management for developers.
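To make the UVM idea concrete, here is a minimal, generic sketch using NVIDIA’s public CUDA runtime API (this is an illustration of unified memory in general, not code from the announcement, and error handling is omitted for brevity). A single managed allocation is written by the CPU, scaled by a GPU kernel, and read back by the CPU with no explicit copies:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel: scale each element of a shared (managed) buffer.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    float *data = nullptr;

    // One allocation, addressable by both CPU and GPU (error checks omitted).
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU reads and writes
    cudaDeviceSynchronize();                          // wait for the GPU

    printf("data[0] = %.1f\n", data[0]);              // CPU reads the result
    cudaFree(data);
    return 0;
}
```

Without unified memory, the same program would need separate host and device buffers plus explicit `cudaMemcpy` calls in each direction – exactly the bookkeeping UVM removes.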

“Please Develop These Systems”

Frans Sijstermans, speaking at RISC-V Summit China 2025


“Accelerated computing is our business, CUDA is our core product, and we want to support it on any CPU”, says Sijstermans. “If a server vendor chooses RISC-V, we want to support that too. Whether it’s a GPU or DPU, we want to accelerate any CPU – not just ones tied to a specific instruction set architecture.”

But without hardware, development work relies on virtualization. And virtualizing everything adds a lot of extra work, Sijstermans tells me. CUDA isn’t tied to a specific chip – NVIDIA releases it for platforms that meet a defined profile. But to test that properly, they need real CPUs, not just specifications or virtual hardware.

This is a rallying cry, then, for hardware makers to build RISC-V systems capable of running CUDA workloads. The thing the NVIDIA team needs now, says Sijstermans, is a capable RVA23 CPU – not any RVA23 CPU, but one with server-class capabilities such as high-speed interconnects, virtual memory, large page support, and hardware-level scheduling at a level the base ISA can’t offer.

“We need CPU host boards with PCIe slots to plug in GPUs”, he explains. No surprise, coming from a company that, for many, remains synonymous with PCIe gaming graphics cards. “These boards must meet all the requirements of RVA23, the Server SoC Specification and the RISC-V Server Platform spec too.” Such standards define the expected memory layout, boot flow, interrupt behavior, and peripheral interfaces for RISC-V systems intended to run high-performance workloads.

Put this all together and you end up with the perfect open compute platform for CUDA: the RISC‑V CPU runs the OS (Linux) and drivers and schedules GPU kernels, the GPU does the heavy lifting, and a DPU handles networking. It’s a full heterogeneous AI compute stack tailored for edge, data center and everything in between.

Custom Chips Created for CUDA

Of course, announcing CUDA at RISC-V’s China Summit was always likely to spark discussion within the community. Yet Sijstermans is quick to assure me that he simply saw his talk as an opportunity to deliver a rallying cry to hardware makers before the gap grew any wider.

Still, he notes, as export controls and market fragmentation reshape the compute landscape, it’s increasingly important for foundational technologies like CUDA to be able to operate independently of any one architecture or licensing regime. RISC-V’s greatest strength lies in its openness, which allows chip designers to customize the architecture for highly specific workloads. This is where Sijstermans believes RISC-V will truly set itself apart from commercial ISAs.

“It’s an opportunity to bring CUDA to entirely new use cases”, he says. Instead of running CUDA on a general-purpose CPU, vendors can build lightweight, application-specific chips tuned for a particular workload or emerging vertical market. This flexibility enables faster development cycles and more deliberate design trade-offs across performance, power, and silicon area.

“This announcement is our way of saying: please develop these systems”, says Sijstermans. “And if you already are, come talk to us. We’re happy to work with early versions. We just need them… as soon as possible.”


Want to discover how RISC-V can enable powerful AI compute for your unique use case?

Join us at RISC-V Summit North America 2025 to learn more about RISC-V hardware, software, systems, development tools, security and much more. Registration is open now!

