
More than Moore with Domain-Specific Processors

[Image: Gordon Moore]

Last month marked the 55th anniversary of Gordon Moore’s famous paper, “Cramming more components onto integrated circuits”. In it, he took a long-term view of the trend toward implementing integrated circuits with successively smaller feature sizes in silicon. Since that paper, integrated circuit developers have relied on three predictions:

  1. The number of transistors per chip increasing exponentially over time.
  2. The clock speed increasing exponentially.
  3. The cost per transistor decreasing exponentially.

These predictions largely held true for almost half a century, enabling successive generations of processors to achieve higher computational performance through greater processor complexity and higher clock speeds. These improvements were delivered mainly by general-purpose processors implemented in new technology nodes.

From about 2005, improvements in clock frequency began to level off, and single-thread performance levelled off with them. Since then, putting multiple cores on a single die has become commonplace, but these cores have remained mainly general-purpose ones, whether application processors or MCUs.

If you are designing a chip and facing performance challenges, do you simply follow Moore’s law, move to a smaller silicon geometry, and use general-purpose processor cores? That can be a costly approach, since mask-making costs rise at smaller geometries, and it may not deliver your required performance in the most efficient way.

Many embedded applications involve cryptography, DSP, encoding/decoding or machine learning, and each of these typically runs inefficiently on a general-purpose core. For example, Galois-field arithmetic is widely used in cryptography, yet a single Galois-field multiplication takes many clock cycles on a general-purpose processor.
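To see why, consider a minimal sketch in C of multiplication in GF(2⁸), the field used in AES. The shift-and-XOR algorithm and the reduction polynomial 0x11B are standard; the function name is ours:

```c
#include <stdint.h>

/* Multiply two elements of GF(2^8) modulo the AES reduction
   polynomial x^8 + x^4 + x^3 + x + 1 (0x11B). Each of the eight
   iterations costs a bit test, a conditional XOR, shifts and a
   conditional reduction, so one multiply expands into dozens of
   instructions on a general-purpose core. */
static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    for (int i = 0; i < 8; i++) {
        if (b & 1)
            p ^= a;               /* add (XOR) the current partial product */
        uint8_t carry = a & 0x80; /* will the x^8 term appear after shifting? */
        a <<= 1;
        if (carry)
            a ^= 0x1B;            /* reduce modulo the AES polynomial */
        b >>= 1;
    }
    return p;
}
```

Dedicated hardware, by contrast, can compute the same product in a single cycle, which is exactly the kind of gap that motivates what follows.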

So, what can be done about computational bottlenecks? In extreme cases, such as processing real-time video data, it may make sense to create dedicated computational units that handle a narrow range of computationally intensive operations. However, this may not be the best trade-off: a fixed-function block is inflexible and sits idle whenever its one task is not running.

It will often be desirable to run both the computationally intensive operations and the rest of the embedded functionality on the same processor core. Ideally, the most silicon-efficient implementation is a processor tuned for your particular application.

This can be achieved by creating a processor core with custom instructions that target the bottlenecks. Adding custom instructions need not be expensive in silicon resources. For example, Microsemi created custom DSP instructions for its RISC-V-based audio processor products: its custom Bk RISC-V processor delivered 4.24× the performance of the original RV32IMC core while requiring only a 48% increase in silicon area. Furthermore, the code size shrank to 43% of the original¹.
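As a hypothetical sketch of what invoking such an instruction can look like from software, the RISC-V toolchain’s `.insn` directive can emit an instruction in the reserved custom-0 opcode space (opcode 0x0B). The encoding below, and the idea of a single-cycle Galois-field multiply instruction, are our assumptions for illustration, not Microsemi’s actual instruction set:

```c
#include <stdint.h>

/* Hypothetical wrapper around a custom R-type instruction in the
   RISC-V custom-0 opcode space (opcode 0x0B). The funct3/funct7
   values are invented for illustration; the core designer chooses
   the encoding and implements the matching hardware. */
static inline uint8_t gf256_mul_custom(uint8_t a, uint8_t b)
{
    uint32_t rd;
    uint32_t rs1 = a, rs2 = b;
    __asm__ (".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
             : "=r"(rd)
             : "r"(rs1), "r"(rs2));
    return (uint8_t)rd;
}
```

The eight-iteration software loop collapses into one instruction, while the rest of the application continues to run on the standard RV32IMC instruction set of the same core.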

So how can you implement custom instructions efficiently? And, just as importantly, how do you verify that they are implemented correctly? We will be visiting these topics in future posts.

Roddy Urquhart

1. Dan Ganousis and Vijay Subramaniam, “Implementing RISC-V for IoT Applications”, Design Automation Conference, 2017.
