Article by: Sally Ward-Foxton
Rain’s published research demonstrates that deep learning networks can be trained on analog chips.
Rain Neuromorphics trained a deep learning network on an analog chip (a crossbar network of memristors) using the company’s analog-compatible training algorithms.
The process required several orders of magnitude less power than current GPU systems. Although Rain’s initial work demonstrated that AI can be trained effectively on analog chips, commercial realization of the technology could still be a few years away.
In a paper co-authored with memristor pioneer Stanley Williams, Rain describes training one- and two-layer neural networks to recognize words written in Braille. The setup combines two 64 × 64 memristor crossbar arrays (in this case, not the 3D ReRAM-based chip the company previously showed off) with training algorithms that use a technique called activity difference, which builds on Rain’s earlier work on equilibrium propagation. Rain calls this hardware-algorithm combination memristor activity-difference energy minimization (MADEM).
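For readers unfamiliar with how a crossbar computes, the sketch below is a minimal NumPy simulation (not Rain’s hardware or code) of the core operation: a 64 × 64 array of conductances performs a full matrix-vector multiply in a single analog step via Ohm’s law and Kirchhoff’s current law. All values, ranges, and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical simulation of a 64 x 64 memristor crossbar performing a
# matrix-vector multiply. Each memristor's conductance G[i, j] encodes a
# weight; applying voltages V to the rows yields column currents
# I = G^T V (Ohm's law per device, Kirchhoff's law per column wire).

rng = np.random.default_rng(0)

n_rows, n_cols = 64, 64
# Conductances in siemens, drawn from an arbitrary range for illustration.
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))

# Input activations encoded as row voltages (arbitrary scale).
V = rng.uniform(0.0, 0.2, size=n_rows)

# Every column sums the currents of its 64 memristors simultaneously,
# so the whole multiply-accumulate happens in one analog step.
I = G.T @ V

print(I.shape)  # (64,) column currents = one layer's pre-activations
```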
Backpropagation, the training algorithm used almost exclusively in AI systems today, is incompatible with analog hardware because it is sensitive to small variabilities and mismatches in on-chip analog devices. Although compensation techniques have been used to fabricate analog inference chips, these techniques have yet to be proven for backpropagation-based training. Rain’s approach, based on activity-difference techniques, computes local gradients rather than the global gradients that backpropagation repeatedly relies on. The technique builds on previous work on equilibrium propagation training algorithms and is mathematically equivalent to backpropagation; in other words, it can be used to train mainstream deep learning networks.
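As a rough illustration of the activity-difference idea, here is a minimal, textbook-style sketch of equilibrium propagation in NumPy. It is not Rain’s MADEM implementation; the network sizes, dynamics, and hyperparameters are all illustrative assumptions. The key point is that the weight update uses only the difference between locally measured activity products at a “free” and a weakly “nudged” equilibrium, rather than a globally backpropagated gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

def rho(s):
    # Hard-sigmoid activation commonly used in equilibrium propagation.
    return np.clip(s, 0.0, 1.0)

n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.1, (n_in, n_hid))   # symmetric (bidirectional) weights
W2 = rng.normal(0.0, 0.1, (n_hid, n_out))

def relax(x, y=None, beta=0.0, steps=50, dt=0.1):
    # Settle hidden/output states toward a fixed point of the dynamics.
    h = np.zeros(n_hid)
    o = np.zeros(n_out)
    for _ in range(steps):
        dh = -h + rho(x) @ W1 + rho(o) @ W2.T
        do = -o + rho(h) @ W2
        if y is not None:
            do += beta * (y - o)      # weak "nudge" toward the target
        h += dt * dh
        o += dt * do
    return h, o

x = rng.random(n_in)
y = np.array([1.0, 0.0])
beta, lr = 0.5, 0.1

h0, o0 = relax(x)                     # free phase
hb, ob = relax(x, y, beta)            # nudged phase

# Local, contrastive weight updates: the difference of activity products
# between the two equilibria approximates the backprop gradient.
W1 += (lr / beta) * (np.outer(rho(x), rho(hb)) - np.outer(rho(x), rho(h0)))
W2 += (lr / beta) * (np.outer(rho(hb), rho(ob)) - np.outer(rho(h0), rho(o0)))
```

Because each update depends only on the activity of the two devices a weight connects, it can in principle be computed in place on analog hardware, which is what makes this family of algorithms attractive for memristor crossbars.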
Compared with backpropagation-based training on a GPU, training time was reduced by two orders of magnitude (to tens of microseconds) and energy consumption by five orders of magnitude (to hundreds of nanojoules). Large-scale versions of MADEM should still boast a four-order-of-magnitude energy advantage, according to Rain’s projections.

“Over the next 10 years, we intend to close the gap between what is being done today and the 100,000x we know is possible,” Rain Neuromorphics CEO Gordon Wilson told EE Times. “The caveat is that this is not a product today. But this is a rigorous experiment that made hardware-based measurements, working with the system’s noise… running with the analog instead of fighting against it.”
Static device non-idealities are accounted for in the learning process, while dynamic non-idealities, such as temporal stochasticity, can actually be exploited to improve performance: they act as a form of regularization, reducing the effective complexity of the neural network during training to avoid overfitting.
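The sketch below shows, in generic terms, how temporal stochasticity can act as a regularizer: every read of the weights is perturbed by fresh noise, a relative of dropout and weight-noise regularization. It is an assumption-laden illustration, not Rain’s device model; the noise level and shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_forward(W, x, read_noise=0.05):
    # Each "read" of the analog weights sees fresh temporal stochasticity,
    # modeled here as multiplicative Gaussian noise on the conductances.
    W_eff = W * (1.0 + read_noise * rng.normal(size=W.shape))
    return W_eff @ x

W = rng.normal(0.0, 0.1, size=(2, 8))   # illustrative weight matrix
x = rng.random(8)

# Two reads of the same weights give slightly different outputs; training
# through this noise discourages reliance on any single precise weight,
# which is the regularizing effect described above.
print(noisy_forward(W, x))
print(noisy_forward(W, x))
```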
“Our goal is to have the same accuracy as backpropagation, to take all the wins that have been demonstrated in the digital world with deep learning; we want to be able to move all of that to an ultra-efficient platform,” Wilson said. “To do that, you need an algorithm as smart as backpropagation, and you need a hardware substrate as scalable as the GPU, but with orders of magnitude less power consumption.”
Rain’s recent work has been made possible by co-designing hardware and algorithms, a trend Wilson sees as essential for next-generation AI systems.

“The more components you examine in parallel, the more you take a full-stack approach, the more comprehensively you can reimagine the whole system,” he said.
Rain’s vision is an all-analog, asynchronous, ultra-low-power, tileable, and scalable chip with 100-billion-parameter capacity that can mimic the human brain. Although this work used a memristor crossbar array, Rain’s hardware roadmap still includes migrating to randomly connected ReRAM cells as the technology matures.
“This allows us to have near-term production opportunities that are still very valuable, but eliminates the risk of ReRAM for our first time to market, while allowing us to maintain that long-term goal of a continuous asynchronous fully analog learning system,” Wilson said.
Training and inference on the same platform is a big part of Rain’s plan to enable robust intelligence at the edge. Future applications will demand personalization, adaptability, and the ability to generalize from past experience; today’s robots and autonomous vehicles can’t be trained on every possible scenario, so they’ll need the ability to learn as they go, Wilson said. Along with cost and energy efficiency, this is one of the key conditions for true autonomy.
Additionally, Rain is working with Argonne National Laboratory to explore how its hardware could be used in Argonne’s particle accelerator. Experiments in the accelerator are monitored by X-ray sensors, and the large volumes of sensor data are typically transferred to a GPU cluster, where AI identifies images of interest to the experiment.
Rain’s hardware could instead be installed next to the sensors to perform inference on the data without transferring it to GPU servers. Continuous fine-tuning of the model is required to mitigate sensor drift and maintain performance over time; in the future, Rain’s on-chip training capability could make that possible.
“We need this kind of fine-tuning and continuous learning in more places than we originally thought – electron microscopy is another potential application, for example – because there is so much equipment with massive data throughput where the ability to learn and fine-tune is a necessary ingredient,” Wilson said.
Rain’s paper, “Activity-Difference Training of Deep Neural Networks using Memristor Crossbars,” is here. Rain will also present a paper at IEDM this week on how its 3D ReRAM hardware design will exploit the inherent sparsity of brain structures.
This article was originally published on EE Times.
Sally Ward-Foxton covers AI technology and related issues for EETimes.com and all aspects of European industry for EETimes Europe magazine. Sally has spent over 15 years writing about the electronics industry from London, UK. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics, and many others. She holds a master’s degree in Electrical and Electronic Engineering from the University of Cambridge.

