At AAAS 2011: Taking Brain-Computer Interfaces to the Next Phase

Using BCI to control a telepresence robot

Brain-machine interfaces make gains by learning about their users, letting them rest, and allowing for multitasking.

You may have heard of virtual keyboards controlled by thought, brain-powered wheelchairs, and neuro-prosthetic limbs. But even once the mind is trained to send the right kind of signals, operating the interface can be downright tiring, a fact that prevents the technology from being of much use to people with disabilities, among others. Professor José del R. Millán and his team at EPFL have a solution: engineer the system so that it learns about its user, allows for periods of rest, and even permits multitasking.

In a typical brain-computer interface (BCI) set-up, users can send one of three commands – left, right, or no-command. No-command is necessary, for example, for a brain-powered wheelchair to continue going straight or to stay put in front of a specific target. Paradoxically, for the wheelchair or a small robot to continue on its way, it needs constant input, and this ‘no-command’ is very taxing to maintain, requiring extreme concentration. After about an hour, most users are spent. Not much help if you need to maneuver that wheelchair through an airport.
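To make the paradigm concrete, here is a minimal Python sketch of such a three-command control loop; the names and the simple heading update are illustrative assumptions, not the EPFL software:

```python
from enum import Enum

class MentalCommand(Enum):
    LEFT = "left"
    RIGHT = "right"
    NO_COMMAND = "no_command"  # the deliberate "keep doing what you're doing" state

def drive_step(command: MentalCommand, heading: float, turn_step: float = 5.0) -> float:
    """Update the wheelchair heading (in degrees) for one control cycle.

    Even travelling straight consumes a decoded command every cycle: the user
    must keep producing NO_COMMAND, which is what makes the paradigm so tiring
    over long stretches.
    """
    if command is MentalCommand.LEFT:
        return heading - turn_step
    if command is MentalCommand.RIGHT:
        return heading + turn_step
    return heading  # NO_COMMAND: hold the current course
```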

In an ongoing study demonstrated by Millán and doctoral student Michele Tavella at the AAAS 2011 Annual Meeting in Washington, D.C., the scientists hook volunteers up to a BCI and ask them to read, speak, or read aloud while delivering as many left and right commands as possible, or while delivering a no-command. Using statistical analysis programmed by the scientists, Millán’s BCI can distinguish between left and right commands and learn when each subject is sending one of these versus a no-command. In other words, the machine learns to read the subject’s mental intention. The result is that users can mentally relax, and even carry out secondary tasks, while controlling the BCI.
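The team’s actual statistical model isn’t described in the article, but the idea of treating low-confidence samples as a no-command can be sketched with any probabilistic classifier; the features, threshold, and Gaussian naive Bayes model below are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Placeholder training data: EEG feature vectors (e.g. band power per channel)
# recorded while the subject deliberately imagined LEFT or RIGHT movements.
X_train = np.random.randn(200, 16)
y_train = np.random.choice(["left", "right"], size=200)

clf = GaussianNB().fit(X_train, y_train)

def decode(features: np.ndarray, confidence: float = 0.8) -> str:
    """Return 'left' or 'right' only when the classifier is confident enough;
    anything below the threshold is treated as 'no_command', letting the user
    relax or attend to a secondary task without steering by accident."""
    posterior = clf.predict_proba(features.reshape(1, -1))[0]
    if posterior.max() < confidence:
        return "no_command"
    return str(clf.classes_[np.argmax(posterior)])
```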

The so-called Shared Control approach to facilitating human-robot interaction uses image sensors and image processing to avoid obstacles. According to Millán, however, Shared Control alone isn’t enough to let an operator rest or concentrate on more than one task at once, which limits long-term use.
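The article doesn’t detail EPFL’s controller, but Shared Control is commonly realized as a weighted blend of the user’s decoded command and an avoidance term computed from the robot’s sensors; the following is a minimal sketch under that assumption, with hypothetical names and gains:

```python
from typing import Optional

def shared_control(user_turn: float, obstacle_bearing: Optional[float],
                   user_weight: float = 0.6) -> float:
    """Blend the decoded user command with an autonomous avoidance term.

    user_turn: turn rate requested via the BCI (negative = left, positive = right)
    obstacle_bearing: bearing of the nearest obstacle seen by the image sensors,
        in degrees relative to straight ahead, or None if the path is clear
    Returns the turn rate actually sent to the motors.
    """
    if obstacle_bearing is None:
        return user_turn                 # nothing to avoid: obey the user
    avoidance = -0.1 * obstacle_bearing  # steer away from the obstacle
    return user_weight * user_turn + (1.0 - user_weight) * avoidance
```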

Millán’s new work complements research on Shared Control, making multitasking a reality while allowing users to catch a break. His trick is in decoding the signals coming from EEG readings on the scalp—readings that represent the activity of millions of neurons and have notoriously low resolution. By incorporating statistical analysis, or probability theory, his BCI allows for both targeted control—maneuvering around an obstacle—and more precise tasks, such as staying on a target. It also makes it easier to give simple commands like “go straight” that need to be executed over longer periods of time (think back to that airport) without having to focus on giving the same command over and over again.
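One way to get that behavior (an illustration of the general idea, not a description of the lab’s decoder) is to accumulate evidence over successive EEG samples, so that the current command stays latched until the classifier sees sustained evidence for a different one:

```python
def accumulate(belief: dict, sample: dict, decay: float = 0.9) -> dict:
    """Blend a new (noisy) classifier output into the running belief so that a
    command, once established, persists until sustained evidence for another
    command builds up. Returns a normalized probability distribution."""
    updated = {cmd: decay * belief[cmd] + (1 - decay) * sample[cmd] for cmd in belief}
    total = sum(updated.values())
    return {cmd: p / total for cmd, p in updated.items()}

# Usage: an established "forward" intention survives several uncommitted,
# noisy samples without the user having to re-issue the command.
belief = {"left": 0.05, "right": 0.05, "forward": 0.90}
for sample in [{"left": 0.3, "right": 0.3, "forward": 0.4}] * 5:
    belief = accumulate(belief, sample)
print(max(belief, key=belief.get))  # still "forward"
```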

It will be a while before this cutting-edge technology makes the move from lab to production line, but Millán’s prototypes are the first working models of their kind to use probability theory to make BCIs easier to use over time. His next step is to combine this new level of sophistication with Shared Control in an ongoing effort to take BCI to the next level, a prerequisite for widespread use. Further advances, such as finer-grained interpretation of cognitive information, are being developed in collaboration with the European TOBI project (Tools for Brain-Computer Interaction, www.tobi-project.org). The multinational project is headed by Professor Millán and has moved into the clinical testing phase for several BCIs.


Author: Michael David Mitchell

Source: EPFL