Editor’s Note: Ana Matran-Fernandez is a PhD researcher, University of Essex. The views expressed in this commentary are solely those of the writer. CNN is showcasing the work of The Conversation, a collaboration between journalists and academics to provide news analysis and commentary. The content is produced solely by The Conversation.
Following the Olympic Games and Paralympic Games, this year will see the arrival of the Cybathlon, the world’s first competition for parathletes and people with severe disabilities who compete with the aid of bionic implants, prosthetics and other assistive technology.
The Cybathlon will include six disciplines, each tailored to a particular type of physical need. Agility courses test those with bionic arms and legs, while races for powered wheelchairs and powered wearable exoskeletons include tackling obstacles such as flights of stairs. There is also a bike race for paralyzed competitors who pedal using electrical muscle stimulation, and a competition for those who have lost the ability to move their bodies but are put back in control by means of a brain-computer interface.
It’s true that the Cybathlon is unlikely to feature the sort of athletic prowess found at the Olympics or Paralympics. But it will demonstrate what the technology is capable of, rather than leaving it hidden in research labs, and it will focus effort and enthusiasm on improving it in order to revolutionize the lives of those with severe disabilities and life-changing injuries. Organizer ETH Zurich, the Swiss Federal Institute of Technology, will bring together 80 teams of users, researchers and technology manufacturers to think about what is really needed to make technology that solves the everyday problems of those living with disabilities.
It’s this focus on practical problems that has informed the design of the challenges. For example, the prosthetic arm race includes a station where the parathletes must slice a loaf of bread or pour a cup of coffee, and another where they must walk through a door while carrying a tray of objects. These are everyday activities most of us take for granted, but for the roughly 15% of the world’s population the World Health Organization estimates live with a disability, they may be difficult or impossible.
While examples of technology such as bionic arms may be familiar, the brain-computer interface competition will be a surprise to most. A brain-computer interface is a system that translates a person’s brain activity into one of several possible commands for equipment fitted to the competitor. This allows severely paralyzed people, whose cognitive and sensory abilities are nevertheless intact, to control equipment that can help them move or communicate.
It’s rare such interface systems leave a research lab, and many exist only in theory on the pages of research journals. They may seem like science fiction, yet they have existed in one form or another for decades.
Brain as machine controller
There are several components to a brain-computer interface. The first is, of course, the person’s brain. Electrical impulses in the brain are detected through electroencephalogram (EEG) sensors attached non-invasively to the scalp, much as they are in a hospital setting. These signals often include interference from muscle activity, such as eye movements and blinks, so the first step is to isolate the useful signal from the noise.
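As a rough illustration of that cleaning step, the sketch below band-pass filters a synthetic single-channel EEG signal, keeping roughly the 8-30 Hz range often used in motor-imagery research while suppressing slow drifts and 50 Hz mains interference. The sampling rate, band edges and filter order here are illustrative assumptions, not a prescription:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(signal, fs, low=8.0, high=30.0, order=4):
    """Zero-phase band-pass filter for one EEG channel.

    Keeps roughly the 8-30 Hz band used in motor-imagery studies,
    attenuating slow drifts (e.g. eye movements) and 50 Hz mains noise.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

# Synthetic demo: a 10 Hz "brain rhythm" buried under 50 Hz interference.
fs = 250                       # Hz, a common EEG sampling rate
t = np.arange(0, 2.0, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)
clean = bandpass_eeg(raw, fs)  # 10 Hz survives, 50 Hz is suppressed
```

Real systems work with many channels at once and use more careful artifact removal, but the principle is the same: pass only the frequency band that carries the signal of interest.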
The signals are then processed in a step known as feature extraction. Approaches vary, but a common technique is for the user to imagine he or she is performing a movement, such as clasping and opening a hand. This mental imagery generates a particular pattern in the brain’s motor cortex which appears as an EEG signal that is easily recognizable and distinct from the background EEG activity.
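One simple way to turn that motor-cortex pattern into a number is to measure the epoch's power in the mu band (around 8-12 Hz): imagined movement typically suppresses this rhythm over the motor cortex. The sketch below is a minimal version of that idea, with illustrative band edges and epoch length:

```python
import numpy as np

def log_bandpower(epoch, fs, low=8.0, high=12.0):
    """Log of the mean spectral power of one EEG epoch in [low, high] Hz.

    Motor imagery typically *reduces* mu-band (8-12 Hz) power over the
    motor cortex, so this single number is a crude but usable feature.
    """
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    in_band = (freqs >= low) & (freqs <= high)
    return float(np.log(spectrum[in_band].mean()))

# Demo: a "resting" epoch with a strong 10 Hz rhythm versus an
# "imagining movement" epoch in which that rhythm is suppressed.
fs = 250
t = np.arange(0, 1.0, 1 / fs)
rest = np.sin(2 * np.pi * 10 * t)
imagery = 0.2 * np.sin(2 * np.pi * 10 * t)
```

In practice this feature would be computed per channel and per trial, giving a small feature vector rather than a single number.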
The EEG signals are processed during feature extraction to make them more easily understood by the next component, the classifier, which identifies the intention of the user. A classifier identifies how the signal patterns differ when the user thinks of moving their left or their right hand, for example, or how these differ from signals generated as the user makes mental calculations. A good classifier learns these differences through pattern-matching and machine-learning algorithms, and identifies the user’s most likely intention.
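A minimal sketch of such a classifier, assuming each trial has already been reduced to a small feature vector, is a nearest-centroid rule: learn the average feature vector for each intention from training trials, then label a new trial with the closest average. Real systems use stronger learners (linear discriminant analysis is a common choice), but the idea is the same:

```python
import numpy as np

class NearestCentroidBCI:
    """Toy intention classifier: one mean feature vector per class."""

    def fit(self, features, labels):
        features = np.asarray(features, dtype=float)
        labels = np.asarray(labels)
        self.classes_ = sorted(set(labels.tolist()))
        self.means_ = {c: features[labels == c].mean(axis=0)
                       for c in self.classes_}
        return self

    def predict(self, feature_vector):
        x = np.asarray(feature_vector, dtype=float)
        # Pick the intention whose training-set mean is nearest.
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.means_[c]))

# Hypothetical training data: two-dimensional features (for instance,
# mu-band power over each hemisphere) for left- vs right-hand imagery.
train_X = [[0.2, 1.0], [0.3, 0.9], [1.0, 0.2], [0.9, 0.3]]
train_y = ["left", "left", "right", "right"]
clf = NearestCentroidBCI().fit(train_X, train_y)
```

With more classes (the Cybathlon game expects up to four commands), the same rule applies unchanged: one centroid per intention.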
The Cybathlon’s brain-computer interface race will test competitors by means of a video game, in which each participant must produce up to four distinct brain signals that the system’s classifier can recognize. Competitors must send the correct command at the right time in order to race their avatars against each other in the game. The best system will be the one that most accurately and quickly recognizes its user’s brain activity, selects the right command and so allows him or her to win the race.
The appearance of brain-computer interfaces at the Cybathlon is a rare outing beyond the lab, one that requires developers to improve their systems considerably over those that need only function in lab experiments, for example by making them more reliable and better able to cope with the user getting distracted.
Current systems aren’t yet ready for those whose lives they could so radically change. But the new developments of the last few years, which Cybathlon is encouraging further, will not only improve this technology but make it more suited to use by people living outside the lab – finally closing the loop on a technology that has been in the making for over 20 years.
Republished under a Creative Commons license from The Conversation.