Thursday, April 10, 2008

The First Brain-Computer Interface for Mainstream Consumers [Neural Interface]

Have you ever wanted to control your computer with your mind? I have. And come next December, maybe we will. I’ve been following Emotiv Systems for a long time. Last February 19th they came out of the cave at the GDC’08 conference with a brand new bone: the latest version of their consumer-oriented brain-computer interface, and it is quite frankly geeking me out. The headset will be marketed to the game industry and is expected to go for $299. Read on for what to expect. The features are, well, pretty unbelievable.

I’ll start off by noting that there were not one, but two companies demoing brain-computer interfaces at GDC this year: Emotiv Systems with the EPOC neuroheadset, which we’ll be focusing on here, and NeuroSky. The latter plans to sell its sensors and technology to partners and will not be developing a headset of its own (they had a demo unit at GDC just to show off the functionality of their systems).

What Are Brain-Computer Interfaces and the EPOC Neuroheadset?

If you’ve read Think Artificial before, you’re probably somewhat familiar with brain-computer interfaces (BCIs): devices that allow us to control machines using only our minds.
The key technology is called electroencephalography (EEG): a device monitors your brain’s electrical activity via sensors on your scalp. It has been used for medical purposes for years, and the futuristic image on the side here depicts the setup for a musical brainwave performance at the Deconism Gallery in 2003, for example, where a concert audience was hooked up to EEG devices to affect the music and lighting.

However, monitoring the waves is different from detecting their patterns and using them as reliable “triggers”, which is what Emotiv Systems’ EPOC device and software do. For this to work, two things are essentially required: first, the user has to practice producing a repeatable, recognizable pattern. But there’s always noise (it takes practice to be able to visualize the same image or sequence consistently), so the second requirement is that the software deciphering the electrical activity must learn to recognize the trigger waves.
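To make the two requirements concrete, here’s a minimal sketch of one way trigger recognition could work in principle (this is an illustration, not Emotiv’s actual algorithm): the user’s practice attempts are averaged into a template, and a new EEG window counts as a trigger when it lands close enough to that template.

```python
# Hypothetical template-based trigger detection. Each EEG "window" is
# reduced to a small feature vector; training averages the user's
# practice attempts, and detection fires when a new window is close
# enough to the learned template despite noise.

def train_template(practice_windows):
    """Average the user's practice attempts into a single template."""
    n = len(practice_windows)
    length = len(practice_windows[0])
    return [sum(w[i] for w in practice_windows) / n for i in range(length)]

def is_trigger(window, template, threshold=1.0):
    """Fire when the window's distance to the template is below threshold."""
    dist = sum((a - b) ** 2 for a, b in zip(window, template)) ** 0.5
    return dist < threshold

# Toy example: three noisy practice runs of the same mental pattern.
practice = [[1.0, 0.5, -0.5], [1.1, 0.4, -0.6], [0.9, 0.6, -0.4]]
template = train_template(practice)

print(is_trigger([1.0, 0.5, -0.5], template))  # close to the pattern: True
print(is_trigger([-1.0, 2.0, 3.0], template))  # unrelated noise: False
```

The two halves map directly onto the two requirements above: `train_template` is the user practicing a repeatable pattern, and the distance threshold is the software learning to separate that pattern from noise.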

Naturally, EPOC is not an invasive kind of device: you won’t need a drill and pliers to use it. But you’ll still have to shave your head (just kidding).

The Emotiv EPOC Neuroheadset uses a set of sensors to tune into electric signals naturally produced by the brain to detect player thoughts, feelings and expression. It connects wirelessly with all game platforms from consoles to PCs. The Emotiv neuroheadset now makes it possible for games to be controlled and influenced by the player’s mind. [link]

Emotiv Systems has spent four years on R&D and has come up with a commercially viable BCI, at a remarkably low price considering its capabilities and the fact that this is the first time such technology has hit the market for general consumers. Which brings us to its features.

What Emotiv’s Epoc Neuroheadset Can Do

Let’s start off with an easily digestible list of features expected to be bundled with the first release of EPOC:

Wireless headset - 12-hour battery life (playing time)
Demo Game - Makes use of, and demonstrates, the headset’s features
Emortal - Access to an online hub that allows users to interact with photos and music using EPOC

The EPOC system comprises three main software components, each of which detects a different kind of brainwave activity.

The Affectiv suite can reportedly measure the emotional states of the user: anger, fear, frustration. Emotiv puts forth the example that this could be used to have games increase or decrease the difficulty level depending on the player’s state of mind. The Cognitiv suite is the control mechanism that allows players to control objects, and the Expressiv suite measures and interprets the user’s facial expressions. The descriptions and demos are vivid; for example: you smile, and thus your avatar smiles.
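Emotiv’s difficulty-adjustment example is easy to picture in code. Here’s a hypothetical sketch (the score names and ranges are my assumptions, not Emotiv’s actual API): nudge difficulty down when the Affectiv suite reports frustration, up when the player seems bored.

```python
# Hypothetical use of Affectiv-style emotional scores to tune a game.
# "frustration" and "boredom" are assumed to be scores in [0, 1];
# these names and thresholds are illustrative, not Emotiv's API.

def adjust_difficulty(current, frustration, boredom):
    """Return a new difficulty level (1-10) from emotional-state scores."""
    if frustration > 0.7:    # player is struggling: ease off
        current -= 1
    elif boredom > 0.7:      # player is coasting: ramp up
        current += 1
    return max(1, min(10, current))

print(adjust_difficulty(5, frustration=0.9, boredom=0.1))  # -> 4
print(adjust_difficulty(5, frustration=0.2, boredom=0.8))  # -> 6
```

The interesting design question is hidden in the thresholds: raw emotional readings are noisy, so a real game would presumably smooth them over time before reacting.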

One of my earliest questions regarding EPOC was: can the system discern many patterns at the same time, without any knowledge beforehand of what you’re trying to accomplish?

Most of what I’ve seen from their demos is task-and-turn based, where the player is moved between “phases”, each of which requires him to use one and only one specific action at a time. The Stonehenge Demo, for example, moves the player from stone to stone — but the player only applies one action to each stone (e.g. “rotate” or “lift”; not both).

Let’s elaborate. A user is inside Second Life and has created a plain box. My question is: Can the system handle rotating the box while the user is smiling/making the avatar smile? Or rotating the box, moving it a bit forward, then up — perhaps even rotating and lifting the box at the same time? Is all of this possible? Because if this were possible I’d be geeked out.

Keymap Your Brainwaves
I got mail yesterday. And I geeked out. The letter was from Emotiv reporting, amongst other things, more information on EmoKey: their software for mapping mental intentions to keystrokes (yes, meaning the EPOC headset will be connectable to virtually any application).
The descriptions almost sound surreal:

EmoKey Software - Use the Emotiv EPOC with your existing software

In our efforts to enable our users, Emotiv has developed the EmoKey software application in conjunction with the Emotiv EPOC. EmoKey allows you to associate any of the Emotiv EPOC detections with keystrokes on your PC. EmoKey enables all of your existing PC software to be Emotiv EPOC compatible right out of the box! In practice, this means that you can link a “smile” detection to type the “smiley emoticon” in your chat application or link a thought, such as “rotate clockwise” to a series of keystrokes such as “a-w-d-s-a-w-d-s” to rotate your magic wand!

This appears to indicate that you can do basically any action, at any time, anywhere. Right? Well, almost. It’s not clear whether you can only “press” one button at a time (“a, then w, then d, then s…”), or whether you can press many buttons at the same time. There could even be a third case where you can press three buttons at a time, one from each detection suite (though that seems unlikely).

However, I’d venture a guess that one feature of EmoKey is the ability to define a “virtual button” (if not, please spread the idea to Emotiv!). This would allow you to compose a series of virtual buttons, so a specific thought could be assigned to a sequence of them, “ctrl+a, ctrl+w, …”, instead of single physical buttons, thereby enabling you to press two buttons at the same time, like rotating something while smiling. Which brings up the question: how many mappings can there be?
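The virtual-button idea can be sketched in a few lines. This is purely speculative on my part (the mapping names and the `"ctrl+a"` chord notation are my own inventions, not EmoKey’s format): a detection maps to a sequence of steps, where each step is either a single key or a chord of keys pressed together.

```python
# Speculative sketch of EmoKey-style mappings with "virtual buttons":
# each detection maps to a sequence of steps, and each step may be a
# chord ("ctrl+a" = ctrl and a pressed together). Names and notation
# are illustrative, not Emotiv's actual format.

mappings = {
    "smile": [":)"],                           # type a smiley in chat
    "rotate_clockwise": ["a", "w", "d", "s"],  # one key at a time
    "rotate_and_lift": ["ctrl+a", "ctrl+w"],   # chords: two keys at once
}

def expand(detection):
    """Turn a detection into the list of key sets to press per step."""
    steps = []
    for step in mappings.get(detection, []):
        steps.append(set(step.split("+")))  # "ctrl+a" -> {"ctrl", "a"}
    return steps

print(expand("rotate_and_lift"))  # each step presses two keys together
```

Under this scheme, “rotate while smiling” is just a chord step, and the number of possible mappings is bounded only by how many distinct detections the headset can reliably tell apart.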