Computer Science, Electronics and Biology coming together. Will Neuromorphic circuits power the next wave of AI?
Summit, or OLCF-4, is a supercomputer developed by IBM which, as of this writing, is the fastest supercomputer in the world, clocking 148.6 petaflops. It has 27,648 NVIDIA Volta V100 GPUs and 10 petabytes of total memory. That sounds fast! Computer speeds are generally measured in FLOPS, which stands for floating-point operations per second: the number of floating-point calculations a machine can perform each second. 1 petaflop is 10^15 FLOPS, a 1 followed by 15 zeros (1,000,000,000,000,000). So that is 148.6 quadrillion floating-point calculations per second and 10 quadrillion bytes of memory. Too many zeros to fathom, ha!
Let's see how the human brain fares in comparison. Think fast! What is 43877669.65774398 divided by 03345333857644 to 15 digits of precision? Are you done yet? How about now? Well, ideally you can't measure a brain's performance in FLOPS, and we would need better methods to do that, but people have tried to estimate the brain's performance in FLOPS anyway, and the estimates range from a trillion (1,000,000,000,000) to a septillion (1 followed by 24 zeros). At the high end of that range, the brain would be millions of times faster than the fastest supercomputer there is. And that is not all: the Summit supercomputer performs those operations while consuming up to 13 megawatts of power. One hour of Summit's power consumption could run an average U.S. household for a whole year, or an average Indian household for 10 years. A human brain, meanwhile, can do its exascale operations while consuming a mere 20 watts.
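To make the efficiency gap concrete, here is a quick back-of-the-envelope calculation using the numbers above. The brain's FLOPS figure is only an order-of-magnitude estimate (1e18 is one point inside the trillion-to-septillion range quoted above), so treat the result as illustrative, not exact.

```python
# Rough FLOPS-per-watt comparison using the figures from the text.
# The brain's FLOPS value is a loose estimate (range spans 1e12 to 1e24).
summit_flops = 148.6e15   # 148.6 petaflops
summit_watts = 13e6       # up to 13 MW

brain_flops = 1e18        # one mid-range estimate (assumed)
brain_watts = 20          # ~20 W

summit_efficiency = summit_flops / summit_watts   # FLOPS per watt
brain_efficiency = brain_flops / brain_watts

print(f"Summit: {summit_efficiency:.2e} FLOPS/W")
print(f"Brain : {brain_efficiency:.2e} FLOPS/W")
print(f"Brain is roughly {brain_efficiency / summit_efficiency:.0f}x more efficient")
```

Even with this conservative estimate for the brain, it comes out millions of times more energy-efficient per operation than Summit.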
What if we could make a computer that performs like a human brain? Wouldn't it be great if we could squeeze the Summit supercomputer from a 5,600-square-foot room into a box the size of a human skull, with just a few torch batteries powering it? Scientists have been on this pursuit, trying to mimic the brain to perform computations, for a while now. The Human Brain Project, or HBP, is a 10-year mega-project which aims to learn everything about the brain. One of HBP's sub-projects is about building Neuromorphic Computing platforms. Neuromorphic computing has been around for a long time: the idea and the term were first proposed in the 1980s by Carver Mead, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.
But what do we mean by mimicking neuro-biological architectures? Artificial neural networks also claim to mimic the neural connections in the brain, so what is different about neuromorphics? Well, neuromorphics is a very broad field which explores different memory architectures, computation models, analog computing and more to build systems inspired by the brain. Since the field is so broad, for now we will just try to understand how neurons in our brain work and how they can inspire us to build brain-like systems.
Neuroscience 101 — The Neuron
The brain is an extremely complex machine, and even with all the latest scientific and technological advances we still do not understand a lot about it. Research is continuously going on to uncover the mysteries of the human brain. Our brain contains up to 100 billion neurons and about 700 trillion (700,000,000,000,000) connections. Let's just try to broadly understand how a single neuron works. Unlike a traditional computer, our brain does not use binary digits to compute and store information. All the feats the brain performs are done using spikes. A basic understanding of neuronal structure will help us understand spikes better. Bear with me: neuron dynamics are complex, but I'll try my best to simplify them.
The diagram shows two neurons connected to each other and how a signal travels through them. Here our signal is a spike which is generated in the cell body (at the axon hillock) and travels through the axon to the synapse, which in turn excites the following neuron. A synapse is a junction between two neurons, and a spike (as it sounds) is a sharp jump of electric potential in the cell, which looks like —
The voltage shown on the Y axis is the membrane potential (or transmembrane potential), which is the difference in electric potential between the interior and the exterior of a biological cell. Hold this diagram in your head for a moment while we take a deep dive inside a neuron; we will revisit it after that.
The Ionic Dance
The neurons in the picture above do not hang around in air inside your brain; they are swimming in extracellular fluid, which is separated from the cell internals by a lipid bilayer membrane. There are ions on both sides of the membrane in different concentrations. K+ (potassium) ions have a higher concentration inside the neuron cell than outside, and Na+ (sodium) ions have a higher concentration outside. Both Na+ and K+ ions carry a single positive charge. These ion concentration gradients, i.e. the differences in concentration of ions inside versus outside, provide the potential energy that drives the formation of the membrane potential.
In the case depicted here, pathways in the cell membrane selectively allow potassium (K+) ions to move across, leaving negatively charged ions behind and creating an imbalance that produces a potential difference. In the steady, or resting, state these potassium and sodium ions keep wandering in and out of the cell through selective gates in the membrane called ion channels, keeping the membrane potential at -70 mV, which means the inside of the cell is 70 mV less positive than the outside. Scroll up to the graph above: the Resting State (region 5) is when the neuron is in this steady state, or equilibrium. This dynamic equilibrium is maintained by a mechanism called the sodium-potassium pump. These pumps use energy supplied by your food to push sodium and potassium ions from regions of low concentration to regions of high concentration (which is why they are called pumps), thus maintaining a resting potential of -70 mV.
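The equilibrium potential that a single ion gradient would produce can be computed with the Nernst equation, E = (RT/zF) ln([out]/[in]). The sketch below uses typical textbook concentrations (my assumption, not values from this article) to show why the gradients described above push the membrane toward negative voltages:

```python
import math

# Nernst equation: equilibrium potential for a single ion species.
# Ion concentrations below are typical textbook values (assumed), in mM.
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

def nernst(z, conc_out, conc_in):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

print(f"E_K  = {nernst(+1, 5.0, 140.0):.0f} mV")   # potassium, roughly -89 mV
print(f"E_Na = {nernst(+1, 145.0, 12.0):.0f} mV")  # sodium, roughly +67 mV
```

The resting potential of -70 mV sits between these two values, closer to potassium's, because at rest the membrane is far more permeable to K+ than to Na+.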
Before we go ahead, a small but important point: ion channels come in different types, such as voltage-gated, mechanically gated, and ligand-gated channels. In short, a voltage-gated ion channel opens or closes based on the voltage across the cell membrane.
External Stimulation and Spikes
Now suppose some external stimulation, which may come from the previous neuron (the presynaptic cell in the picture above), raises the membrane potential beyond a point called the threshold (-55 mV). The voltage-gated sodium channels open, sodium ions start flowing into the cell, and the cell starts to depolarize (region 2 in the graph): positively charged sodium ions flowing in make the inside of the cell more positive than the exterior. At about +30 mV the voltage-gated sodium channels become inactivated. The change in voltage also causes the voltage-gated potassium channels to open, though they open and close more slowly than the sodium channels. Potassium ions now flow from inside the cell to outside, starting the process of repolarization (region 3 in the graph): positive charge leaving the cell makes it more negative again. But because these channels open and close slowly, the potential overshoots the resting potential and the cell goes into hyperpolarization (region 4). At this point both types of ion channels are closed, and the sodium-potassium pumps restore the cell to the resting state. The process is a bit complex to grasp in one go, but it is explained very well, with animation aids, in this video. This generated spike, or action potential, then travels down the axon to the synapse.
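The full ion-channel dynamics above are usually modeled with the Hodgkin-Huxley equations, but the essential threshold-and-reset behavior can be captured with a much simpler leaky integrate-and-fire (LIF) neuron. The sketch below is a minimal illustration, not a biophysically faithful model; all constants are illustrative choices echoing the voltages mentioned in the text:

```python
# Leaky integrate-and-fire (LIF) neuron: a drastic simplification of the
# ion-channel story above, but it reproduces the key behavior: the membrane
# potential integrates input current, and when it crosses the threshold the
# neuron emits a spike and resets. All parameter values are illustrative.
dt      = 0.1       # simulation time step, ms
tau_m   = 10.0      # membrane time constant, ms
v_rest  = -70.0     # resting potential, mV
v_th    = -55.0     # spike threshold, mV
v_reset = -75.0     # post-spike (hyperpolarized) reset potential, mV
r_m     = 10.0      # membrane resistance, MOhm

def simulate(i_input, steps=1000):
    """Return spike times (ms) for a constant input current (nA)."""
    v = v_rest
    spikes = []
    for step in range(steps):
        # leak pulls v back toward v_rest; input current pushes it up
        dv = (-(v - v_rest) + r_m * i_input) / tau_m
        v += dv * dt
        if v >= v_th:               # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(len(simulate(2.0)))  # suprathreshold current: regular spiking
print(len(simulate(1.0)))  # subthreshold current: no spikes at all
```

Note how a current that cannot push the membrane past -55 mV produces no output whatsoever: spiking is all-or-nothing, exactly as in the biological description above.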
When this potential reaches the synapse, the junction between two neurons, it is converted into a current which flows across the synaptic gap and excites the post-synaptic neuron. We will skip the actual mechanics of how the action potential of a pre-synaptic cell gets converted into current in the synaptic cleft; the mechanisms at the synaptic junction are explained well in this video.
Now we understand how a spike is generated and travels between neurons. If this is the atomic operation of our brain, can we mimic it by replicating the neuronal dynamics in circuits? And can we connect these silicon neurons together to form neural networks that work like our brain? Indeed, such circuits have been built by many researchers and companies.
Neuromorphic Circuits and Chips
Mimicking neuron and synapse behavior in circuits has been a research interest of scientists and industry around the world for a while now. Research groups and companies are continuously putting money, time and effort into building chips with spiking circuits and connections inspired by the brain. A few examples of such chips are IBM's TrueNorth, Intel's Loihi and Qualcomm's Zeroth.
IBM TrueNorth, Intel Loihi and Qualcomm Zeroth chips
These chips are tiny compared to the brain in terms of number of neurons and connections, but they are already showing promising results for low-power computation. They are optimized for running Spiking Neural Network (SNN) algorithms. Going into the details of SNNs would need an article of its own; we will dive into them in a future article.
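While SNNs deserve their own article, the basic idea follows directly from the neuron dynamics we just covered: spiking neurons wired together by weighted synapses, where a spike in one neuron injects current into the next. The sketch below connects two of the simplified leaky integrate-and-fire neurons described earlier through a single synapse; it is an illustrative toy, not how any of the chips above actually implement SNNs, and all parameters are made up for the demonstration:

```python
# A minimal spiking "network": a presynaptic LIF neuron driven by a constant
# current; each of its spikes injects a brief current pulse into the
# postsynaptic neuron through a weighted synapse. Parameters are illustrative.
dt, tau, v_rest, v_th, v_reset, r_m = 0.1, 10.0, -70.0, -55.0, -75.0, 10.0

def lif_step(v, i_in):
    """Advance one LIF neuron by dt; return (new_v, spiked)."""
    v += ((-(v - v_rest) + r_m * i_in) / tau) * dt
    if v >= v_th:
        return v_reset, True
    return v, False

w = 6.0                       # synaptic weight: current injected per pre spike, nA
pulse_steps = 30              # duration of one synaptic current pulse, in steps
v_pre, v_post = v_rest, v_rest
syn_timer = 0
pre_spikes = post_spikes = 0

for _ in range(5000):         # 500 ms of simulated time
    v_pre, spiked = lif_step(v_pre, 2.0)   # constant drive to the pre neuron
    if spiked:
        pre_spikes += 1
        syn_timer = pulse_steps            # (re)start the synaptic pulse
    i_syn = w if syn_timer > 0 else 0.0
    syn_timer = max(0, syn_timer - 1)
    v_post, spiked = lif_step(v_post, i_syn)
    post_spikes += spiked

print(pre_spikes, post_spikes)
```

The postsynaptic neuron fires only when the synaptic current pulses drive it over threshold, so information propagates through the network as discrete spikes rather than continuous activations, which is exactly what makes these chips so frugal with power.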
The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for instance. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame.
The coming generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the current limitations of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to handle novel situations and abstraction in order to automate ordinary human activities. The key challenge in neuromorphic research is matching a human's flexibility and ability to learn from unstructured stimuli, with the energy efficiency of the human brain. The computational building blocks within neuromorphic computing systems are logically analogous to neurons, and Spiking Neural Networks (SNNs) are a novel model for arranging those elements to emulate the natural neural networks in biological brains. Neuromorphics is still a neonatal child in the AI world, but it is showing a lot of promise and potential for the future of intelligent machines.