Ecosystem News

Developers Use RISC-V Stack Without Worrying About Local SRAMs, DMAs

June 25, 2024

Semidynamics has published Tensor Unit efficiency data for its “All-In-One” AI IP, measured while running a LLaMA-2 7-billion-parameter Large Language Model (LLM).

Roger Espasa, Semidynamics’ CEO, explained, “The traditional AI design uses three separate computing elements: a CPU, a GPU (Graphics Processing Unit) and an NPU (Neural Processing Unit) connected through a bus. This traditional architecture requires DMA-intensive programming, which is error-prone, slow, and energy-hungry, plus the challenge of having to integrate three different software stacks and architectures. In addition, NPUs are fixed-function hardware that cannot adapt to future AI algorithms yet to be invented.”
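To illustrate the contrast Espasa describes, the sketch below compares a DMA-staged offload flow with a unified-memory flow in plain C. All function and buffer names here are hypothetical stand-ins, not Semidynamics or any vendor API; the point is only that staging tensors into accelerator-local SRAM adds explicit copy-in/copy-out steps that a shared-memory model avoids.

```c
/* Hypothetical sketch: DMA-staged offload vs. unified-memory offload.
 * All names are illustrative, not a real driver or Semidynamics API. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SRAM_SIZE 1024
static int8_t npu_local_sram[SRAM_SIZE];   /* stand-in for accelerator-local SRAM */

/* Traditional flow: every tensor is copied into local SRAM before the
 * kernel runs, and results would have to be copied back out afterwards. */
static void infer_with_dma(const int8_t *weights, size_t n)
{
    memcpy(npu_local_sram, weights, n);    /* models the DMA-in transfer */
    printf("ran kernel on %zu bytes staged in local SRAM\n", n);
    /* ...followed by a DMA-out transfer for the results... */
}

/* Unified-memory flow: the accelerator reads the same memory the CPU
 * uses, so the pointer is passed directly with no staging copies. */
static void infer_unified(const int8_t *weights, size_t n)
{
    (void)weights;
    printf("ran kernel directly on %zu bytes in shared memory\n", n);
}

int main(void)
{
    static int8_t weights[SRAM_SIZE];
    infer_with_dma(weights, sizeof weights);
    infer_unified(weights, sizeof weights);
    return 0;
}
```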

Read the article here.