DARPA Is Building a Noninvasive Neural Interface for Soldiers

DARPA is helping warfighters leverage AI and autonomous weapons to defend against future threats

DARPA Needs a Brain-Computer Interface for Human Soldiers to Control AI with Their Neurons

In the future, will soldiers be able to control drone swarms and other new military machines with an AI-driven brain-computer interface? And could a soldier control those machines through a non-invasive, bidirectional neural interface? Current state-of-the-art neural interfaces require invasive surgery to implant silicon-based or metal electrodes into brain tissue, where they record from and stimulate a single neuron or a neural ensemble. But according to a recent solicitation posted by the Defense Advanced Research Projects Agency (DARPA), these interfaces are not as advanced as they need to be for military use.

(Image courtesy of DARPA.)

DARPA is looking for proposals to “revolutionize the nonsurgical bidirectional neural interface” for the American warfighter. The agency sees an opportunity to create an external device and system featuring nanotransducers that act as intermediaries, transducing signals between neurons and an “external recording and stimulating device.”

DARPA is looking for a leapfrog technology and isn’t welcoming incremental advances in electroencephalography (EEG) or magnetic resonance imaging (MRI) to the program, which is known as Next-Generation Nonsurgical Neurotechnology (N3).

DARPA’s N3 Program Sounds Like Science Fiction

DARPA’s attempt to create a non-invasive, bidirectional neural interface that lets warfighters of the future fuse their minds with lethal autonomous and semi-autonomous weapons systems may sound more like science fiction than reality, but it isn’t. It does echo fictional works involving brain implants, including William Gibson’s classic cyberpunk novel Neuromancer, in which a hacker is chased by mercenaries whose neural implants enhance their strength, vision, and mental acuity. Gibson also popularized the term “matrix” and the concept of “jacking in,” in which a brain interfaces directly with powerful computing systems.

Basically, DARPA is looking for a way to wirelessly “jack in” to a soldier’s brain and use AI software systems to control autonomous and semi-autonomous weapons. DARPA has been pursuing and funding research on brain-machine interfaces since the 1970s, when it began awarding contracts to researchers at UCLA whose early work had been supported by a grant from the National Science Foundation. According to the solicitation, DARPA believes the biggest bottleneck standing in the way of a warfighter-ready bidirectional neural interface is how to achieve an unprecedented level of spatial and temporal resolution between the proposed interface and neural tissue.

A Brief History of the Brain-Machine Interface (BMI)

Hans Berger began studying the electrical activity of the human brain in the early 20th century, and by 1924 he had recorded the world’s first human electroencephalogram (EEG). He did it by inserting silver wires underneath the scalp, later switching to a less invasive method of attaching silver foil patches with rubber bandages. These patches were first connected to a Lippmann capillary electrometer, which proved too weak to measure the tiny voltages of the human brain. After switching to a Siemens double-coil recording galvanometer, Berger could measure signals as small as one thousandth of a volt, and the EEG began to take off. Berger went on to identify alpha waves and beta waves, opening up new possibilities for studying brain diseases and dysfunctions. A whole new field of research emerged, and out of it came the first brain-computer interface.

At UCLA, Professor Jacques Vidal is widely recognized as having coined the term “brain-computer interface” (BCI) and as the inventor of the BCI, having published the first peer-reviewed articles in the field. In 1973, he wrote a paper that gave the burgeoning BCI community a challenge: control objects using nothing more than EEG signals.

N3 Program Goals: Create Telepathic AI Cyborg Soldiers

DARPA is trying to create a warfighter who has the option to become a cyborg (or transhuman) by attaching a wireless neural interface to their head. Perhaps the interface will be embedded in a future helmet, giving the wearer the ability to choose when to integrate with the AI system, and, hopefully, to always know whether it is really on or off.

There’s a lot of research still to be done, but DARPA has a remarkable track record of innovation, including Reduced Instruction Set Computing (RISC), micro-electro-mechanical systems (MEMS), the internet, the Global Positioning System (GPS), and unmanned aerial vehicles (UAVs). Each DARPA program lasts about four years on average and is run by one of roughly one hundred technical program managers, who partner with researchers at universities under contract with the agency. There are currently about 200 programs in place, with an annual budget of about USD 3 billion, courtesy of US taxpayers.

Problem One: The Human Skull 

The highest-resolution neural interfaces require that a patient undergo a craniotomy so that microelectrodes can be placed directly into brain tissue. While research into effective neural interfaces has been performed on animals and on disabled individuals whose conditions were caused by brain or spinal cord damage, the risks associated with surgery make these procedures less than ideal for DARPA’s able-bodied warfighters.

To match the performance of current microelectrode technology, the proposed nonsurgical neural interface must offer high spatiotemporal resolution and low latency. It must also be bidirectional, meaning it can both record neural activity and deliver neural stimulation.
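
To make the bidirectional requirement concrete, here is a minimal sketch, assuming a hypothetical Python abstraction (the class and method names below are illustrative, not drawn from the solicitation), of the two operations such a device must support: reading out neural activity and writing in stimulation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NeuralFrame:
    """One window of recorded neural activity."""
    timestamp_s: float
    samples: np.ndarray      # shape: (n_channels, n_samples)
    sample_rate_hz: float

class BidirectionalNeuralInterface:
    """Hypothetical abstraction of the N3 'read out / write in' requirement.

    A real N3 device would sit outside the skull; these names are illustrative only.
    """

    def record(self, duration_s: float) -> NeuralFrame:
        """Read out: capture neural activity for duration_s seconds."""
        raise NotImplementedError

    def stimulate(self, channel: int, amplitude_ua: float, duration_s: float) -> None:
        """Write in: deliver a stimulation pulse to the targeted site."""
        raise NotImplementedError
```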

Problem Two: How Exactly Can You Increase Temporal and Spatial Resolution?

To achieve high levels of temporal and spatial resolution, DARPA is interested in two approaches: “minutely” invasive neural interfaces and noninvasive neural interfaces. 

Here’s how they are described by DARPA:

“Noninvasive interfaces will include the development of sensors and stimulators that do not breach the skin and will achieve neural ensemble resolution (<1mm3). Minutely invasive approaches will permit nonsurgical delivery of a nanotransducer: this could include a self-assembly approach, viral vectors, molecular, chemical and/or biomolecular technology delivered to neurons of interest to reach single neuron resolution (<50µm3). In this application, the developed technology will serve as an interface between targeted neurons and the sensor/stimulator. They should be sufficiently small to not cause tissue damage or impede the natural neuronal circuit. The sensors and stimulators developed under the minutely invasive approach will be external to the skull and will interact with the nanotransducers to enable high resolution neural recording and stimulation.”

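The gap between those two resolution targets is enormous. A quick back-of-the-envelope unit conversion (simple arithmetic, not part of the solicitation) shows how much finer the single-neuron target is:

```python
# Resolution targets quoted in the N3 solicitation.
noninvasive_um3 = 1_000 ** 3      # <1 mm^3 expressed in µm^3 (1 mm = 1,000 µm)
minutely_invasive_um3 = 50        # <50 µm^3, roughly single-neuron scale

print(f"1 mm^3 = {noninvasive_um3:,} µm^3")   # 1,000,000,000 µm^3
print(f"The single-neuron target is {noninvasive_um3 // minutely_invasive_um3:,} times smaller")  # 20,000,000 times
```
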
Bottom Line

To be clear, there are a lot of bottlenecks in the way of accomplishing DARPA’s objectives with either approach. Both noninvasive and minutely invasive interfaces will have to overcome the signal scattering, attenuation, and poor signal-to-noise ratios typically seen with state-of-the-art neural interfaces.
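
To see why signal-to-noise ratio is such a hurdle for through-the-skull recording, consider the standard definition, which compares signal power to noise power on a logarithmic scale. The numbers in the sketch below are invented for illustration, not measurements from any N3 device.

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    return 10.0 * np.log10(p_signal / p_noise)

# Illustrative only: a 10 µV neural rhythm buried in 30 µV of background noise,
# as might happen after the signal is attenuated by skin and skull.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / 1000.0)                  # 1 s at 1 kHz
signal = 10e-6 * np.sin(2 * np.pi * 10 * t)        # 10 Hz rhythm, 10 µV amplitude
noise = 30e-6 * rng.standard_normal(t.size)        # ~30 µV RMS noise
print(f"SNR ≈ {snr_db(signal, noise):.1f} dB")     # well below 0 dB
```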

“Minutely invasive” approaches to the problem will likely include the development of nanotransducers to provide the read-out and write-in capabilities of the neural interface. The external devices and integrated subcomponents that interact with the nanotransducers inside the subject’s brain tissue must also be developed over the next four years.

The technologies needed to meet the goals of the N3 program would have to move past current voltage-recording capabilities and take in a variety of atypical signals, such as magnetic fields, electric fields, radio-frequency signals, and per-neuron ion concentrations. These signals and the underlying neural activity would likely have to be translated or decoded by as-yet-unwritten algorithms. And the bidirectional neural interface must work in the context of an application relevant to the Department of Defense.
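
What might one of those decoding algorithms look like? A common starting point in today’s BCI research is a linear decoder that maps a window of multichannel features to an intended command. The sketch below trains a ridge-regression decoder on synthetic data purely to illustrate the idea; it is not DARPA’s approach, and every variable here is hypothetical.

```python
import numpy as np

# Hypothetical setup: 64 recording channels, one band-power feature per channel,
# decoded into a 2-D control command (e.g., steering a drone left/right, up/down).
rng = np.random.default_rng(1)
n_trials, n_channels = 500, 64

X = rng.standard_normal((n_trials, n_channels))             # synthetic features
W_true = rng.standard_normal((n_channels, 2))               # hidden "true" mapping (demo only)
Y = X @ W_true + 0.1 * rng.standard_normal((n_trials, 2))   # synthetic intended commands

# Ridge-regression decoder: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

new_window = rng.standard_normal((1, n_channels))   # one new window of features
command = new_window @ W_hat                         # decoded 2-D control output
print("decoded command:", command.ravel())
```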