
As AI models grow in complexity and capability, the gap between their computational demands and the hardware available to run them becomes more pronounced. But, says ElectroPages' Robin Mitchell, researchers at University College Dublin may have found an answer: using RISC-V to speed up deep learning models. By running the open-source NVIDIA Deep Learning Accelerator (NVDLA) directly on a RISC-V chip, without a traditional operating system, they've achieved impressive performance and energy efficiency. In tests, LeNet-5 inference completed in under five milliseconds and ResNet-50 in about one second, with the system clocked at just 100 MHz. The project highlights how RISC-V's openness and flexibility make it ideal for innovation in AI and edge computing, especially where resources are limited. It's an exciting validation of the ecosystem's growing impact.
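
To give a flavour of what "without a traditional operating system" means in practice, here is a minimal, hypothetical C sketch of bare-metal firmware driving a memory-mapped accelerator from a RISC-V core. The base address, register offsets, and names are invented for illustration only; they are not the real NVDLA register map or the UCD team's code.

    /*
     * Hypothetical sketch of bare-metal accelerator control on RISC-V.
     * All addresses and register offsets are illustrative assumptions,
     * not the actual NVDLA register map.
     */
    #include <stdint.h>

    #define ACCEL_BASE        0x40000000UL   /* assumed MMIO base for the accelerator */
    #define ACCEL_REG(off)    (*(volatile uint32_t *)(ACCEL_BASE + (off)))

    #define REG_INPUT_ADDR    0x00   /* physical address of input feature map */
    #define REG_OUTPUT_ADDR   0x04   /* physical address of output buffer     */
    #define REG_CONTROL       0x08   /* bit 0: start inference                */
    #define REG_STATUS        0x0C   /* bit 0: done flag                      */

    /* Kick off one inference pass and busy-wait for completion.
     * With no OS in the way, the core talks to the accelerator
     * directly through memory-mapped registers. */
    static void run_inference(uint32_t input_pa, uint32_t output_pa)
    {
        ACCEL_REG(REG_INPUT_ADDR)  = input_pa;
        ACCEL_REG(REG_OUTPUT_ADDR) = output_pa;
        ACCEL_REG(REG_CONTROL)     = 0x1;   /* start */

        while ((ACCEL_REG(REG_STATUS) & 0x1) == 0)
            ;                               /* poll until done */
    }

In a bare-metal setup like this there is no driver stack or scheduler between the CPU and the accelerator, which is one reason such systems can deliver predictable, low-latency inference even at modest clock speeds.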
What hardware limitations are slowing AI down? Why do even powerful GPUs struggle to keep up? And could open-source architectures like RISC-V hold the key to making AI deployment more efficient, especially at the edge?
Head to ElectroPages for answers.


