Is Neuromorphic Computing the Future of AI? A Look at Intel’s New Loihi 2 Chip

Loihi 2 builds upon three years of neuromorphic research and offers a taste of next-gen artificial intelligence.

The Intel Loihi 2 neuromorphic research chip. (Source: Intel.)

Semiconductor maker Intel recently announced its next neuromorphic research chip, Loihi 2. The chip was built on Intel’s forthcoming Intel 4 node and includes substantial improvements over the original Loihi chip from 2017.

Loihi 2 is an exciting development in the field of artificial intelligence (AI), but neuromorphic computing remains a niche aspect of this field. In this article, we’ll take a closer look at neuromorphic computing, Loihi, and what this all might mean for the future of AI.

What Is Neuromorphic Computing?

From the Greek νεῦρον (nevron, meaning nerve, and from which we get neuron, neurology, and neurotic), and μορφή (morfí, meaning shape or form, à la animorphs), comes neuromorphic, the shape of the brain.

Despite recent successes in AI technology, our organic brains remain much more flexible and efficient than anything computer scientists have yet created. Thus, neuromorphic computing is an effort to mimic the human brain in hardware.

Now, you may be thinking, doesn’t AI already mimic the human brain? Isn’t that the whole idea behind artificial neural networks? (ANNs, you add in parentheses, to make it easier on the reader of your thoughts.)

Yes and no. While modern ANNs certainly take inspiration from the network of neurons in the human brain, the similarities don't go much deeper than that. An ANN is a mathematical model composed of some number of layers, each containing some number of units called neurons. Each neuron can receive input from some or all of the neurons in the previous layer and sends output to one or more neurons in the next layer. Each connection between neurons carries a weight, and the input to a neuron is the weighted sum of the outputs of the neurons connected to it. That input is passed to the neuron's so-called activation function to determine its output. It looks like this:

Simple model of a three-layer ANN. The input value at neuron n, n_i, is a weighted sum of the output values of neurons a, b, and c (a_o, b_o, and c_o). The term B_n is a constant bias. The output value n_o is determined by the activation function α(n_i).

This type of neural network is trained (taught) by giving it input, comparing its output to what it should be, and subsequently updating the weights of the network via an algorithm called backpropagation, which uses gradient descent to find a local minimum of error. It can get more complicated than that, but ultimately ANNs represent a lot of matrix multiplication.
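For readers who want to see the arithmetic, here is a minimal sketch in Python of the forward pass just described, using made-up weights and input values (real frameworks do the same thing with large matrices, and backpropagation then nudges the weights to reduce the error):

```python
import numpy as np

def activation(x):
    # One common choice of activation function: the sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

# Outputs of the three neurons a, b, and c in the previous layer (made-up values).
prev_outputs = np.array([0.2, 0.7, 0.1])   # a_o, b_o, c_o

# Weights of the connections from a, b, c into neuron n, plus a constant bias.
weights = np.array([0.5, -0.3, 0.8])
bias = 0.1                                  # B_n

# The input to neuron n is the weighted sum of its inputs plus the bias ...
n_in = np.dot(weights, prev_outputs) + bias
# ... and its output is the activation function applied to that input.
n_out = activation(n_in)

print(n_in, n_out)   # training would adjust `weights` and `bias` to reduce error
```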

The real neurons in our brains are slightly more sophisticated than this model. Instead of passing a single number from one neuron to another (the numbers a_o, b_o, and c_o in the above image), actual neurons fire off an action potential, or spike, whenever they're activated. A neuron is activated whenever the cumulative effect of incoming spikes pushes its membrane potential over a specific threshold. In this way, neurons can fire off entire spike trains, the timing of which plays an important role in the network.

This behavior is captured in a mathematical model called a spiking neural network, or SNN. An SNN looks something like this:

In this current-based SNN model, each neuron a, b, and c produces a spike train S_x(t) that feeds into neuron n (and others) over a connection called a synapse. These spike trains determine neuron n's synaptic response current u_n(t), which feeds into a differential equation for neuron n's membrane potential (not shown), which in turn governs its own spike train. α_n is a synaptic filter kernel and B_n is a constant bias current.
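To make that concrete, here is a rough discrete-time sketch of how the synaptic response current in the caption could be computed, with invented spike trains, weights, and decay constant (the membrane-potential dynamics that turn this current into output spikes are covered in the next section):

```python
import numpy as np

# Made-up spike trains S_a(t), S_b(t), S_c(t): 1 where the neuron spikes, 0 elsewhere.
spikes = {
    "a": np.array([0, 1, 0, 0, 1, 0, 0, 0]),
    "b": np.array([1, 0, 0, 1, 0, 0, 1, 0]),
    "c": np.array([0, 0, 1, 0, 0, 0, 0, 1]),
}
weights = {"a": 0.6, "b": -0.2, "c": 0.9}   # synaptic weights into neuron n
bias_current = 0.05                          # B_n
decay = 0.7                                  # stands in for the filter kernel alpha_n

# Each incoming spike injects current that then decays away over time.
filtered = 0.0
u_n = np.zeros(8)
for t in range(8):
    injected = sum(weights[x] * spikes[x][t] for x in spikes)
    filtered = decay * filtered + injected   # filtered, weighted spike input
    u_n[t] = filtered + bias_current         # synaptic response current u_n(t)

print(u_n)   # this current drives neuron n's membrane potential
```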

While SNNs are still a simplified version of real biological brains, they come a step closer to emulating how we actually learn. Implementing SNNs efficiently requires special-purpose hardware, and that takes us back to neuromorphic computing and Intel’s Loihi processor.

A Look at Loihi

The original Intel Loihi chip. (Source: Intel.)

Intel’s original Loihi chip, pronounced lo-EE-hee and named after an underwater volcano off the coast of Hawaii, was unveiled in 2017 as Intel’s most advanced neuromorphic processor to date. Loihi was fabricated on Intel’s 14nm process and contained 131,072 spiking neurons distributed across 128 neuromorphic cores. The chip also had off-chip communication interfaces and three embedded x86 microprocessor cores for data coding, management, and housekeeping.

At a high level, there are two notable features of the Loihi architecture: asynchronism and parallelism. Let's start with the latter. Human brains are full of neurons acting only with regard to their local environment—they don't have to fetch instructions from a central pool of memory near the medulla oblongata to determine what they should do next. Likewise, neuromorphic processors like Loihi deviate from conventional CPU architecture and employ a mesh of parallel units, each with local compute and memory capability. Each of Loihi's neuromorphic cores contains 1,024 spiking neurons and 208 kB of fixed-allocation memory.

Now for the asynchronism. Conventional processors are built to execute a sequence of instructions, one after another, until none remain. How quickly the instructions are executed depends on the speed of the processor's clock, usually denoted in GHz (a 2.5 GHz processor ticks through 2.5 billion clock cycles per second).

The human brain has no such clock, with thoughts occurring as and when neurons are triggered by electrochemical impulses. Neuromorphic hardware follows this model. Everything that happens in a neuromorphic chip happens because something else (namely, a neural spike) happened first. Thus, neuromorphic computation is event-based (asynchronous) rather than clock-based (synchronous).

(Source: Intel.)

Loihi’s neurons are based on a popular spiking neuron model called leaky integrate-and-fire, or LIF, in which incoming spikes ramp up a neuron’s membrane potential until it crosses a threshold, at which point it fires off a spike and the potential resets to some baseline. This model is leaky in the sense that the membrane potential constantly decays back toward its baseline, so the effect of a given spike diminishes over time. The particulars of Loihi’s mathematics are given in Intel’s 2018 paper describing the neuromorphic chip.

Simple model of an LIF spiking neuron. The input spike train adds to the membrane potential, which diminishes exponentially. Once the firing threshold is crossed, the neuron fires a spike and resets to the baseline potential.
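Here is a toy discrete-time version of those LIF dynamics in Python, with invented input currents and constants; it is only an illustration of the model, not Loihi's actual fixed-point implementation (which is documented in Intel's 2018 paper):

```python
import numpy as np

# Made-up input current over time, e.g., the synaptic response current from upstream spikes.
input_current = np.array([0.0, 0.4, 0.4, 0.4, 0.0, 0.4, 0.4, 0.4, 0.4, 0.0])

leak = 0.8          # fraction of membrane potential retained each step (the "leaky" part)
threshold = 1.0     # firing threshold
baseline = 0.0      # reset potential after a spike

potential = baseline
output_spikes = []
for current in input_current:
    potential = leak * potential + current   # integrate input; let the rest leak away
    if potential >= threshold:               # threshold crossed: fire ...
        output_spikes.append(1)
        potential = baseline                 # ... and reset to baseline
    else:
        output_spikes.append(0)

print(output_spikes)   # the neuron's output spike train
```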

What’s New with Loihi 2

Intel introduced the successor to Loihi in September 2021. Loihi 2 is faster and more efficient than the original chip, and supports new neuromorphic algorithms and applications. Like Loihi, Loihi 2 contains 128 neuromorphic cores, but doubles the count of x86 microprocessor cores from three to six to eliminate bottlenecks found in the original system. The new chip has also ramped up off-chip interfaces to improve scalability and simplify integration with conventional architectures, including a new Ethernet interface and general-purpose input/output (GPIO).

Loihi 2 was fabricated on Intel’s pre-production 7nm process (confusingly named Intel 4), while the original Loihi was built on a 14nm process. Accordingly, the Loihi 2 architecture provides much higher resource density. Loihi 2’s cores hold 8,192 neurons each and measure 0.21 mm²; the original Loihi’s cores were 0.41 mm² with 1,024 neurons each. That works out to over 15 times more neurons per square millimeter (roughly 39,000 per mm² versus 2,500 per mm²). According to Intel, the Intel 4 process also simplified the layout design rules for Loihi 2, speeding up its development.

Additionally, Loihi 2 is now faster. Its circuit speeds are between two and ten times faster than the original chip, and Loihi 2 supports minimum chip-wide time steps below 200ns. As Intel describes it, Loihi 2 can process neuromorphic networks up to 5000 times faster than our biological brains.

The Loihi 2 chip architecture. (Source: Intel.)

Most new features of Loihi 2 are designed to promote greater flexibility for neuromorphic researchers. These improvements are based on the past three years of Loihi research, with the Intel team modifying Loihi 2 in the areas that strained the original hardware.

“There were some surprises in there, and some confirmation of what we expected going into it,” explained Mike Davies, director of Intel’s Neuromorphic Computing Lab and one of the principal architects of Loihi.

One of the biggest surprises to the Loihi team was that the LIF spiking neuron model proved to be insufficient on its own. “There was a prevailing view that you don’t need really complex neuron models,” Davies recalled of the original Loihi chip. “That that’s a level of going deep into biological realism that is not essential; you get the computational capabilities and power through the emergent network dynamics that come from wiring all those neurons up.”

Not true, as it turns out. A single neuron model isn’t enough to capture the richness of our own brains, which have many different types of specialized neurons—thousands of them, according to Davies. That biological fact is one that Loihi 2 now emulates, hoping to capture deeper neuromorphic behavior.

“We flipped direction and actually put in more programmability, a configurable instruction set to define a whole wide class of neuron models,” Davies said.

Mike Davies, director of Intel’s Neuromorphic Computing Lab and one of the principal architects of Loihi. (Source: Mike Davies via LinkedIn.)

Each of Loihi 2’s neurons can now be customized with a user-programmable neuron model without any sacrifices to performance or efficiency. There’s a practical limit to this flexibility—the shared memory in each neuromorphic core must hold both state and model information, and this memory is finite. Regardless, the ability to code different neural models is expected to blast open the range of applications available to Loihi 2.

That’s not the only change that brings Loihi 2 closer to biology. The new processor also has a more realistic learning engine than its predecessor. SNNs don’t naturally support the backpropagation algorithm fundamental to training conventional ANNs, and so must adopt a different learning mechanism to adjust synaptic weights. The most straightforward is called spike-timing-dependent plasticity, or STDP. Put simply: neurons that fire together, wire together. If one neuron’s spikes consistently occur before the next neuron’s spikes, the synaptic weight between them increases. If the downstream neuron consistently fires before the earlier neuron, the weight between them decreases.
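Here is a toy sketch of a pairwise STDP rule of the kind just described; the exponential shape and the parameter values are illustrative assumptions, not Loihi's built-in rule (Loihi's learning engine lets researchers program rules like this themselves):

```python
import math

def stdp_weight_change(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    # Toy pairwise STDP: strengthen the synapse when the upstream (pre-synaptic)
    # neuron fires shortly before the downstream (post-synaptic) neuron, and
    # weaken it in the opposite case. Closer spike timing means a bigger change.
    dt = t_post - t_pre            # time (e.g., in ms) from pre spike to post spike
    if dt > 0:                     # pre fired first: potentiate
        return a_plus * math.exp(-dt / tau)
    else:                          # post fired first: depress
        return -a_minus * math.exp(dt / tau)

weight = 0.5
weight += stdp_weight_change(t_pre=10.0, t_post=15.0)   # pre before post -> weight goes up
weight += stdp_weight_change(t_pre=30.0, t_post=22.0)   # post before pre -> weight goes down
print(weight)
```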

The original Loihi chip included a programmable learning engine that allowed researchers to customize STDP learning rules. “Loihi was the leading chip from the standpoint of the integration of learning capability at that time,” said Davies. “But there has been progress in the field since then, and an emerging awareness of three-factor learning rules.”

The simple STDP described above is a two-factor learning rule, with the two factors being the spike information from each of the two neurons connected by the synapse. “It’s commonly now believed that you need a third factor—you need another term that modulates the learning in some way,” Davies explained.

While the original Loihi had some limited support for this additional factor, Loihi 2 now enables researchers to precisely map a third factor to individual synapses. Ultimately, this provides more flexibility for researchers to develop unique learning rules. In particular, Loihi 2’s expanded learning engine will allow researchers to implement cutting-edge approximations of backpropagation algorithms for SNNs.
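As an illustration of what a third factor can look like, here is a hedged sketch of one common pattern, a reward-modulated rule with an eligibility trace; this is a generic example rather than Loihi 2's actual programming interface, and the names and constants are made up:

```python
def update_synapse(weight, eligibility, stdp_term, reward,
                   trace_decay=0.9, learning_rate=0.05):
    # Three-factor learning, roughly: the pairwise STDP term is not applied
    # directly but accumulated in an "eligibility trace"; a third, modulatory
    # signal (here, a reward) decides when that trace becomes a weight change.
    eligibility = trace_decay * eligibility + stdp_term   # remember recent spike-timing correlations
    weight += learning_rate * reward * eligibility        # third factor gates the actual learning
    return weight, eligibility

weight, trace = 0.5, 0.0
weight, trace = update_synapse(weight, trace, stdp_term=0.08, reward=0.0)   # no reward: no change yet
weight, trace = update_synapse(weight, trace, stdp_term=0.02, reward=1.0)   # reward arrives: weight updates
print(weight, trace)
```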

Curiously enough, not every change in Loihi 2 brings it into closer harmony with the human brain. Spikes in Loihi 2 can now behave in a way that biological spikes never would.

“In nature, all the communication happens with binary signaling,” Davies said. “Designing chips with CMOS transistors, it’s very cheap comparatively for us to add some extra payload to each of these spikes.”

Neurons in Loihi 2 can now generate graded spikes, unlike Loihi neurons, which only sent binary spikes (i.e., spike or no spike). Loihi 2 spikes can carry a programmable payload up to 32 bits, which can, for instance, multiply the weights of downstream synapses. To see how this might work, consider an LIF neuron model. When the neuron’s membrane potential crosses its firing threshold, it fires a spike. However, most of the time the potential will exceed the firing threshold by some amount. This amount can be valuable to keep track of, and so the excess could be sent as the spike’s payload. Of course, it’s up to the Loihi 2 researcher to determine exactly how to compute spike payloads and how they’re used.
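Following the example in the text, here is one way a graded payload could be computed, shown as a small variation on the earlier LIF sketch; how a payload is actually computed and consumed on Loihi 2 is up to the researcher, so treat this as an assumption-laden illustration:

```python
def step(potential, current, leak=0.8, threshold=1.0, baseline=0.0):
    # Toy graded-spike LIF step: instead of a bare 0/1 spike, the neuron sends
    # the amount by which its potential overshot the threshold as the payload.
    potential = leak * potential + current
    if potential >= threshold:
        payload = potential - threshold      # extra information riding on the spike
        return baseline, payload             # reset, and emit a graded spike
    return potential, None                   # no spike this step

potential, payloads = 0.0, []
for current in [0.3, 0.5, 0.6, 0.2, 0.9]:
    potential, payload = step(potential, current)
    payloads.append(payload)

print(payloads)   # approximately [None, None, 0.192, None, 0.06]
```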

“There we deviated from biology in a very measured way,” Davies acknowledged. This raises a fascinating point about neuromorphic hardware: while it’s inspired by the brain, it need not be limited by it.

Oheo Gulch, a single-chip Loihi 2 system for early evaluation of the new neuromorphic processor. (Source: Intel.)

Loihi 2 is currently available to neuromorphic researchers in two hardware systems: Oheo Gulch, a single chip system, and Kapoho Point, a stackable eight-chip system with an Ethernet interface. The biggest system featuring the original Loihi chip was called Pohoiki Springs and included 768 chips in a 5U server chassis, so it seems likely that larger Loihi 2 hardware is on the horizon.

The Loihi 2 hardware isn’t the end of the story—Intel also made a major announcement concerning neuromorphic software, releasing alongside Loihi 2 a new neuromorphic software framework called Lava.

“In the neuromorphic field, there’s very little convergence at the software level. Each group has their own direction and they tend to reinvent a lot of software themselves. There’s not enough sharing of results and building on what other teams do,” Davies lamented.

Lava aims to change that. It’s a re-architected version of Intel’s own software, one which was never planned to be released to the public. But, recognizing the need for common ground in the neuromorphic field, Intel has released Lava as an open-source project that’s portable to other neuromorphic hardware beyond Loihi.

“We want to put it out there with permissive licensing to genuinely encourage contribution, adoption, and convergence where people aren’t worried about this just being one company’s proprietary technology,” Davies elaborated.

The Future of Neuromorphic Computing

We’ve spent a lot of time talking about neuromorphic hardware, but one question we haven’t yet answered is: why bother?

There are two good answers: power and latency. Neuromorphic chips are far, far more power-efficient than conventional processors running ANNs. Conventional CPUs and GPUs can consume hundreds of watts of power, while demonstrations of the original Loihi chip often used less than a single watt. The efficiency comes from the fact that spikes are sparse in time—most of the neurons in an SNN are sitting idle most of the time, whereas every neuron in an ANN is continually active. The factor of time also means each spike carries more information than the output of a neuron in an ANN, which further increases efficiency. Loihi’s parallel and event-driven architecture also allows it to respond to input extremely quickly, making it well suited for low-latency, real-time operation.

However, these promises can only be delivered with novel approaches and algorithms made for neuromorphic hardware. Simply trying to recreate ANNs in neuromorphic hardware would be a mistake, as the Loihi team pointed out in a recent paper surveying results of the original Loihi hardware.

“[I]t would be naïve to ignore ANN literature as we search for effective SNN architectures, but it would be equally naïve to expect SNNs to outperform ANN accelerators on the very task that they have been optimized for,” reads the paper. “Deep learning offers a fine starting point in a journey of SNN algorithm discovery, but it represents only one niche of the algorithmic universe available to neuromorphic hardware.”

Davies, the lead author of the paper, expanded on this point in his interview with engineering.com.

“There’s room [for ANNs and SNNs] to exist side-by-side. I think what we’ve understood from the research so far is that today there’s a niche where this kind of [neuromorphic] model works really well. Over time we expect that to grow from a niche to a really pervasive class of computing architecture, but ANNs are very good and so are the architectures that have developed alongside them,” Davies said.

Just what is the current niche for SNNs and neuromorphic hardware? Davies looks to our own brains (what else?) for guidance, posing a different question: what is the computing equivalent of the intelligence we see in ourselves?

“Brains evolved to control bodies and process real-time data streams efficiently,” he said. “That’s really the niche where neuromorphic and this spike-based approach is suited. For power and latency constrained applications that have to adapt in real time, not after processing gigabytes of data… that’s where this kind of technology is going to thrive.”

To sum it up in a word: robotics. That’s the general class of applications that SNNs are most immediately poised to disrupt, and the original Loihi chip has borne out this promise in several early demonstrations. This video from Intel shows off one such demo, a Loihi-equipped humanoid robot that can learn how to classify different objects (and, like all humanoid robots, looks as creepy as robotically possible). Another team of researchers incorporated Loihi into a robotic arm that combines tactile and visual data to grip and identify objects. In one experiment, Loihi processed the sensory input 21 percent faster than a GPU while using 45 times less power.

A humanoid robot equipped with the original Loihi chip. (Source: Intel.)

Compared to conventional approaches to machine learning, neuromorphic computing is clearly in its infancy. But three years of Loihi have given invaluable insight. “For the first time in this field we have a quantitative view of where this architecture provides value,” Davies said, pointing back to his team’s recent survey of Loihi research.

Loihi 2 hopes to expand this view even further, and from there—who knows? For his part, Davies expects neuromorphic computing to go the distance.

“The principles that we’re exploring are so basic that it’s hard to imagine that they won’t make their way into future computing systems,” Davies predicted. “Whether it’s a standalone chip that looks like Loihi, or it’s just some portion of a chip, or maybe so much progress has been made algorithmically that it looks very different from where we’re at today. We’ll see what the final form of that is.”

Written by

Michael Alba

Michael is a senior editor at engineering.com. He covers computer hardware, design software, electronics, and more. Michael holds a degree in Engineering Physics from the University of Alberta.