Brain-computer interfaces still have a touch of sci-fi about them. But they have long since proven their practical feasibility in the areas of communication and neuroprosthesis control.
With spectacular successes, brain-computer interfaces (BCIs) are increasingly conquering TV and print. The scientific journal Nature, for example, recently reported on a paralyzed woman who, with the help of a brain chip and an AI avatar developed by a research group in San Francisco, was able to speak at over 70 words per minute; a healthy person manages about twice that. And last year the ORF presented an implant from the Lausanne Polytechnic (EPFL) that enabled a largely paralyzed patient to climb stairs again.
For experts, these advances prove that, in clinical settings, this neurotechnology already offers people with functional impairments a tangible, if not yet fully mature, way to communicate and move. At the same time, ambitious start-ups are exploring the economic potential of BCIs outside of hospitals.
Common BCI implementations can be roughly classified by how deep into the body the electrodes penetrate:
Invasive: Microelectrode arrays implanted directly into the brain or on its surface detect the action potentials of individual neurons in a narrowly defined brain area at high resolution.
Non-invasive: Electrodes on the scalp record the overall electrical activity of a brain region via electroencephalography (EEG). This ease of handling, however, comes at the cost of lower-quality signals, averaged over the activity of large numbers of neurons.
Minimally invasive: Electrodes are inserted in a kind of stent via the jugular vein into a blood vessel in the brain, where they record the signals of nearby nerve cells.
Each method has its limits in terms of the temporal, spatial, or frequency resolution of the recorded signals. Decoding those signals, meanwhile, is increasingly the job of artificial intelligence in all implementations: modern machine-learning algorithms extract information from brain waves to an extent that was unimaginable until recently.
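To give a feel for what such decoding involves at its simplest, the following sketch is a toy illustration, not any research group's actual method: it extracts a classic EEG feature (spectral power in the 8–12 Hz alpha band) from a simulated one-second recording and applies a crude threshold classifier. All function names and parameter values here are assumptions chosen for the example; real decoders use far richer features and learned models.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` between f_lo and f_hi (in Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

def detect_alpha(window, fs=250.0, threshold=3.0):
    """Toy 'decoder': flag windows whose alpha-band (8-12 Hz) power
    clearly dominates a neighboring reference band (16-24 Hz)."""
    alpha = band_power(window, fs, 8.0, 12.0)
    reference = band_power(window, fs, 16.0, 24.0)
    return alpha / (reference + 1e-12) > threshold

# Simulated one-second EEG window at 250 Hz sampling rate:
# a 10 Hz alpha rhythm buried in noise, versus pure noise.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / 250.0)
alpha_window = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
noise_window = rng.standard_normal(t.size)

# The first value is True; the second is usually False for pure noise.
print(detect_alpha(alpha_window), detect_alpha(noise_window))
```

Real systems face exactly the problem this toy glosses over: the informative structure in scalp EEG is faint, overlapping, and different in every brain, which is why the trained models mentioned above are needed.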
However, patients are still wired up to a bank of computers and surrounded by scientists when training and using BCIs. Recent success stories are therefore mostly based on individual case studies with training phases lasting months. In addition, every brain works differently, so a universal, market-ready BCI is still some way off. But it seems to be just a matter of time.
In November 2021, for example, the US Food and Drug Administration (FDA) accepted products from Blackrock Neurotech into an accelerated review process. The centerpiece is a microelectrode array implanted in the brain region that controls movement impulses. Peter Thiel, who co-founded PayPal with Elon Musk, is on board as an investor.
The latter is developing a similar device with his start-up Neuralink, which received approval for clinical trials in humans in May of this year. Thousands of people have reportedly already registered as study participants on the company's website.
However, this is also possible without open brain surgery, for example with the help of the stent from Stanford University, similar to those used to treat vascular constrictions. Equipped with electrodes, it reaches the brain via a jugular vein.
The US start-up Synchron, backed by Bill Gates and Jeff Bezos, is researching a similar method: its "stentrode" is guided through the jugular vein into the blood vessel above the motor cortex, where its set of 16 electrodes records signals.
In contrast, test subjects at the University of Technology Sydney (UTS) remain almost entirely unscathed. Researchers there have developed an EEG cap that uses the specially developed AI model “DeWave” to capture and decode thoughts and convert them into text.
Previous technologies for translating brain signals into speech required either surgically implanted electrodes or scanning in a magnetic resonance imaging (MRI) machine, which is not suitable for everyday use.
Although the cap only provides "noisy" EEG signals, its translation accuracy of around 40 percent exceeded the previous benchmark for thought translation via EEG by three percentage points. The next goal is the level of conventional language-translation programs, at just under 90 percent.
Non-invasive methods are also being researched by the Cottbus-based start-up Zander Labs, which received €30 million in funding from Germany's federal cyber agency at the end of 2023. The aim is unrestricted human-computer interaction via EEG caps, or a kind of bracket worn behind the ears, that personalizes the user experience and improves the effectiveness of autonomous systems.