AI is increasingly making its way from the cloud back to the edge. There, together with sensors and neuromorphic in-memory accelerators, it forms autonomous systems with unprecedented intelligence and energy efficiency.
Whether surveillance drones in the air or self-driving vehicles on the ground, autonomous systems have long been a reality. To operate independently, they need to constantly monitor their own status, assess their surroundings and cope with unforeseen situations. To manage this smoothly in real time, dedicated high-performance computers continuously fuse and process data from a large number of different sensor sources.
The problem, however, is that as autonomy and task complexity increase, so too does the number of sensors and, with it, the computational effort needed to link and process their data. Experts predict that in less than ten years, the computing capacity in the sensor periphery will equal that of a supercomputer today. This would lead to a significant increase in energy consumption: mobile systems would lose operating time and range, while at the same time consuming a considerable share of global energy production.
To counteract this development, the Fraunhofer Institutes IPMS, ISIT, IMS, IWU and IAIS involved in the NeurOSmart project are working on neuromorphic networks that are modelled on biological neural structures and accelerated using in-memory computing.
Building on this, the project partners are developing particularly small and efficient models for detecting and classifying objects. These models are specifically adapted to the sensor, the directly integrated electronics and the particular application. The result is fast reaction times, higher data throughput and significant energy savings compared with current remote or cloud-based solutions, which rely on increasingly large and energy-intensive models.
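To give a sense of scale, the following Python sketch shows what such a sensor-near classifier might look like in principle. The architecture, input size and weights are purely illustrative assumptions, not the models developed in the project; the point is that the entire parameter budget fits into a few kilobytes.

```python
# Illustrative sketch only: a tiny convolutional classifier small enough
# to run next to a sensor. Architecture and weights are assumed, not
# taken from the NeurOSmart project.
import numpy as np

rng = np.random.default_rng(0)

# Tiny net for 32x32 single-channel sensor patches and 4 object classes.
conv_w  = rng.standard_normal((8, 3, 3)) * 0.1          # 8 filters, 3x3
dense_w = rng.standard_normal((8 * 15 * 15, 4)) * 0.01  # classifier head

def classify(patch):
    """Forward pass: 3x3 convolution, ReLU, 2x2 max pooling, dense layer."""
    fmaps = np.zeros((8, 30, 30))
    for k in range(8):
        for i in range(30):
            for j in range(30):
                fmaps[k, i, j] = np.sum(patch[i:i+3, j:j+3] * conv_w[k])
    fmaps = np.maximum(fmaps, 0.0)                       # ReLU
    pooled = fmaps.reshape(8, 15, 2, 15, 2).max(axis=(2, 4))  # 2x2 max pool
    return pooled.reshape(-1) @ dense_w                  # class scores

params = conv_w.size + dense_w.size
print(f"{params} parameters (~{params} bytes if stored as int8 weights)")
print("scores:", classify(rng.standard_normal((32, 32))))
```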
The blueprint for these neuromorphic chips is the human brain. A master of multitasking, it uses neuronal networks to process immense quantities of data simultaneously, and it does so with extreme resource and energy efficiency. Traditional computers, by comparison, carry out computations in sequence and store data in a central memory. Their computing power therefore ultimately depends on the rate of data transfer between processor and memory.
Crossbar architectures are ideally suited to emulating this brain structure. They are based on non-volatile memories such as ferroelectric field-effect transistors (FeFETs) made of hafnium dioxide (HfO2), which change their polarization when an electric field is applied. This polarization state is retained even after the voltage is switched off.
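The following minimal Python sketch illustrates the principle of such a cell; the threshold and conductance values are assumptions chosen for illustration, not device data.

```python
# Illustrative model (not device physics) of a FeFET-style memory cell:
# a write pulse above a coercive threshold flips the stored polarization,
# the state is retained without any supply voltage, and a low-voltage read
# maps the state to a channel conductance. All numbers are assumed.
class FeFETCell:
    V_COERCIVE = 2.0                    # assumed switching threshold in volts

    def __init__(self):
        self.polarization = -1          # two non-volatile states: -1 / +1

    def write(self, pulse_voltage):
        """Flip the polarization only if the pulse exceeds the coercive voltage."""
        if abs(pulse_voltage) >= self.V_COERCIVE:
            self.polarization = 1 if pulse_voltage > 0 else -1

    def read_conductance(self):
        """The polarization shifts the transistor threshold, so the two states
        show up as clearly distinct channel conductances (arbitrary units)."""
        return 40.0 if self.polarization > 0 else 2.0

cell = FeFETCell()
cell.write(+3.0)                 # program the cell
print(cell.read_conductance())   # state persists with the voltage off: 40.0
```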
With these FeFETs, the weight values required by deep learning algorithms can not only be stored directly in the chip but also used for computation right where they are stored (in-memory computing). The energy- and time-consuming data transfer between processor and memory is therefore no longer necessary.
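How such a crossbar performs a multiply-accumulate operation directly in memory can be sketched in a few lines. The sketch below is a numerical illustration under assumed conductance and voltage values, not a model of the actual chip: the weights sit in the cells as conductances, the input is applied as row voltages, and each column current is the analog dot product.

```python
# Illustrative in-memory multiply-accumulate on a crossbar: Ohm's law gives
# the per-cell current, Kirchhoff's current law sums it along each column.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x4 crossbar: programmed cell conductances in microsiemens
# play the role of the stored weight matrix.
G = rng.uniform(1.0, 50.0, size=(8, 4))

# Input activations encoded as row voltages (volts).
v_in = rng.uniform(0.0, 0.2, size=8)

# Each column wire collects the currents of its cells: I_j = sum_i V_i * G_ij.
i_out = v_in @ G                       # microamps; the analog MAC result

print("column currents (uA):", np.round(i_out, 2))
```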
FeFETs are also the only non-volatile memory concept that operates purely electrostatically. This saves energy, as the charge-reversal currents of the capacitances are sufficient to write the data. And in contrast to the perovskite materials used to date, hafnium oxide-based memories are CMOS-compatible, lead-free and scalable down to very small technology nodes.
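A rough order-of-magnitude estimate makes the electrostatic advantage tangible. All device parameters below are assumptions chosen for illustration, not measured values from the project:

```python
# Back-of-envelope estimate of the energy to write one FeFET cell purely
# electrostatically: the switched polarization charge plus the energy to
# charge the gate capacitance. All numbers are assumed for illustration.
EPS0 = 8.85e-12          # vacuum permittivity, F/m

area   = (20e-9) ** 2    # assumed gate area: 20 nm x 20 nm
p_rem  = 20e-6 * 1e4     # assumed remanent polarization: 20 uC/cm^2 -> C/m^2
v_prog = 3.0             # assumed programming voltage, V
eps_r  = 30.0            # assumed relative permittivity of the HfO2 layer
t_ox   = 10e-9           # assumed ferroelectric layer thickness, m

q_switch = 2 * p_rem * area            # charge moved when the polarization flips
c_gate   = EPS0 * eps_r * area / t_ox  # simple parallel-plate gate capacitance

e_write = q_switch * v_prog + 0.5 * c_gate * v_prog ** 2
print(f"energy per write: {e_write * 1e15:.2f} fJ")   # lands in the femtojoule range
```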
Over the next four years, the new approach is to be tested in an application-oriented setting for the first time with a complex LiDAR (Light Detection And Ranging) system developed by Fraunhofer. LiDAR sensors are considered the gold standard for autonomous systems, as they capture their surroundings together with distance information even in bad weather and over long distances. To this end, the highly scalable, analogue neuromorphic HPC (high-performance computing) chip is coupled with an AI-supported pre-processing pipeline that interprets the data directly at the sensor.
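What such sensor-near pre-processing can look like in principle is sketched below. The steps and thresholds are assumptions for illustration and not the Fraunhofer pipeline: the raw point cloud is cropped to a region of interest, weak returns are discarded, and the remainder is down-sampled before any neural network sees it.

```python
# Illustrative LiDAR pre-processing at the sensor: crop, filter and
# voxel-downsample a point cloud so only a fraction of the raw data
# reaches the downstream classifier. All thresholds are assumed.
import numpy as np

rng = np.random.default_rng(1)

# Fake scan: N points with x, y, z in metres and a reflectance value in [0, 1].
points = rng.uniform([-50, -50, -2, 0], [50, 50, 4, 1], size=(100_000, 4))

# 1) Region of interest: keep points up to 30 m ahead and 10 m to each side.
roi = (points[:, 0] > 0) & (points[:, 0] < 30) & (np.abs(points[:, 1]) < 10)
# 2) Drop very weak returns (assumed reflectance threshold).
strong = points[:, 3] > 0.1
cloud = points[roi & strong]

# 3) Voxel down-sampling: keep one point per 0.5 m grid cell.
voxel = np.floor(cloud[:, :3] / 0.5).astype(int)
_, keep = np.unique(voxel, axis=0, return_index=True)
reduced = cloud[keep]

print(f"{len(points)} raw points -> {len(reduced)} points for the classifier")
```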
The new sensor system will have to pass its first test in cobots (collaborative robots), which help their human colleagues, for example, to move heavy loads in production environments.