Nature-Inspired Hardware for Efficient AI
Neuromorphic Computing
Neuromorphic Computing strives to build hardware that processes Artificial Intelligence (AI) tasks efficiently in terms of energy, latency, and cost. In Neuromorphic Computing, we use principles that have proven efficient in nature – namely in biological brains. We adopt four main principles from these biological models:
- using neural networks,
- massively parallel computing in or near memories,
- calculation by analog quantities, and
- communication through spikes, i.e. impulses that are either zero or suddenly jump to their peak value and then back to zero again.
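The fourth principle, spike-based communication, can be illustrated with a leaky integrate-and-fire neuron, the simplest common spiking neuron model. The sketch below is a minimal illustration, not code from our accelerators; the parameter values are arbitrary assumptions chosen for demonstration.

```python
def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    each step, integrates the input, and emits a binary spike (0 or 1)
    when it crosses the threshold - an impulse that jumps to its peak
    value and immediately back to zero, as described above."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak, then integrate the input
        if v >= v_thresh:
            spikes.append(1)      # spike: output jumps to peak value
            v = v_reset           # membrane potential resets to zero
        else:
            spikes.append(0)      # no spike: output stays at zero
    return spikes

# A constant drive periodically pushes the neuron over threshold
out = lif_neuron([0.5] * 10)
# out is a sparse train of 0s and 1s - information is carried by
# spike timing, which is what makes SNN hardware so energy-efficient
```

Because the output is zero most of the time, hardware only needs to act when a spike occurs; this event-driven sparsity is the main source of energy savings in SNN accelerators.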
Conventional digital accelerators for deep neural networks (DNNs) apply the first two principles; mixed-signal DNN accelerators apply the first three. Digital accelerators for spiking neural networks (SNNs) likewise use three of the four principles, while mixed-signal SNN accelerators use all four.
Our Focus
In our Labs, Neuromorphic Computing focuses on electronic circuits – especially microelectronics, that is computer chips. We design both digital and analog circuits, and we can integrate both into mixed-signal chips or chiplets. Accordingly, we can build – and we already have built – all four aforementioned types of accelerators.
Our lightweight microelectronic circuits are suitable for integration into sensors for sensor-near signal pre-processing. Moreover, we have built digital accelerators for both DNNs and SNNs on commercial Field Programmable Gate Arrays (FPGAs). Every piece of hardware also needs software: in our Labs, we produce the tools that are necessary to run algorithms on an accelerator, and we also develop the algorithms themselves. Finally, we offer to integrate our hardware into complete systems and our software tools into complete tool chains.
Hardware-software co-design
The joint design of the accelerator hardware and the accompanying tools – both tailored to the task or algorithm at hand – is the topic of hardware-software co-design. What is the optimum chip architecture? How can this chip be programmed? These questions are addressed and resolved during the design process. Developing a software toolchain to complement the hardware accelerator is part of our challenge.


AI platforms
Many companies, small and large, are currently pushing new technologies in the area of Neuromorphic Computing – they offer commercial chips or the required tools. Our Labs use not only our self-developed chips and tools but also these commercial ones, as the best-suited chip-and-tool combination differs from application to application. We are experienced with several commercial AI platforms and are thus able to integrate the hardware and the software tools into the overall system and get it up and running.
AI algorithms (DNN/SNN)
More and more AI applications appear in our world – tasks that you never associated with AI are emerging, like automatic detection of leakage in water pipelines. All these tasks need to be processed somewhere, and if you want the advantages of edge (or embedded) computing – namely low latency, privacy, reliability, (ultra-)low energy, and lower cost – then AI processing has to move to the (extreme) edge. This calls for dedicated Neuromorphic Computing hardware.
Because our Labs know how to build the hardware and the tools, we also know how to optimize AI algorithms for maximum efficiency – on our own hardware as well as on third-party platforms. This expertise won us first prize in a German national competition for the most energy-efficient AI system in 2021.
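One widely used optimization of this kind (shown here purely as an illustration, not as our specific method) is post-training weight quantization: mapping floating-point weights to small integers so that edge hardware can use cheap integer arithmetic. A minimal sketch of symmetric int8 quantization:

```python
def quantize_int8(weights):
    """Symmetric post-training quantization: map float weights to
    int8 values in [-127, 127] plus a single scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [round(w / scale) for w in weights]  # integers for the chip
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [x * scale for x in q]

w = [0.12, -0.5, 0.33, 1.0]
q, s = quantize_int8(w)      # small integers, one shared scale
w_hat = dequantize(q, s)     # close to w, within the rounding error
```

The integers and a single scale factor replace the full-precision weights, cutting memory traffic by roughly 4x versus 32-bit floats – exactly the kind of trade-off that hardware-aware algorithm design exploits.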
