Thursday, March 27, 2008

Neckband Detects User Thoughts And Translates to Speech [Neural Interface]

I recently came across news of a device that geeked me out. It’s a neckband that can detect and analyze the neural firings that occur when we think about saying something, and translate them into audible words via a speech synthesizer. Beyond the obvious use of bettering the lives of people who’ve lost their ability to speak, it could let us make phone calls without having to actually talk (as demonstrated in a video in this article). The creators of the device say they’ll have a product out by the end of the year for people with ALS (a.k.a. Lou Gehrig’s Disease).


In my aforementioned geek-out craze I told my girlfriend about the device, called the Audeo, and she immediately identified a problem: the device might say a thought you don’t actually want the other person to hear. You’re on the phone with your boss when you suddenly hear the device blurt out “Are you never going to shut up about those damn TPS Reports!?”



Good point. But the creators say the device can differentiate between things that you’re thinking and things that you actually want to say. You have to think about using your voice for the device to pick up on it.



I’m sure this ability is a beneficial byproduct of making the device a “collar” around your neck that monitors the nerves controlling the muscles of the larynx.



Our Head’s Too Messy, Go for the Neck


The device is not a brain interface worn on the head, so it stands to reason that (a) it monitors neural activity to the muscles that control speech (the larynx, or voice box), and (b) doing so makes it easier to detect things that you actually want to say, as opposed to what you’re casually thinking.

The larynx is innervated by branches of the vagus nerve on each side. Sensory innervation to the glottis and supraglottis is by the internal branch of the superior laryngeal nerve. The external branch of the superior laryngeal nerve innervates the cricothyroid muscle. Motor innervation to all other muscles of the larynx and sensory innervation to the subglottis is by the recurrent laryngeal nerve.
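
As a thought experiment, here is a minimal sketch of how such an “intent gate” might work, assuming the collar exposes a stream of nerve-signal samples; the threshold, window size, and classify_word helper are my inventions for illustration, not Ambient’s actual design:

```python
# Hypothetical sketch: gate "intended speech" from background nerve activity
# by signal amplitude. The sample stream, threshold and classifier are
# assumptions for illustration, not Ambient's actual design.
import math

INTENT_THRESHOLD = 0.7   # assumed: tuned so casual thinking stays below it
WINDOW = 64              # samples per analysis window

def rms(window):
    """Root-mean-square amplitude of one window of nerve-signal samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def gate_intended_speech(samples, classify_word):
    """Yield a word only for windows that look like deliberate attempts to speak."""
    for i in range(0, len(samples) - WINDOW + 1, WINDOW):
        window = samples[i:i + WINDOW]
        if rms(window) >= INTENT_THRESHOLD:   # user is "using their voice"
            yield classify_word(window)       # e.g. a 150-word classifier
        # quieter activity (merely thinking) is ignored
```

In reality the distinction is presumably learned from richer features than raw amplitude, but the principle of only reacting to deliberate laryngeal activity would be the same.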


However, I’m sure we’ve all been in situations where we are on the verge of saying something, perhaps in an emotionally colored debate, but think twice and eventually say something less aggressive. In such a situation the device could well be triggered accidentally. So the user must make sure to be perfectly balanced, one with himself and the universe, before using it for important conversations. At least for now.


Writing this, I get the idea that this problem could be overcome with AI: natural language processing could detect potentially insulting sentences or harsh language, and the user could then be prompted to verify whether he meant to say a particular sentence (whether this would introduce too much lag is another question).
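
A toy sketch of that safeguard, using a simple keyword check in place of real natural language processing; the word list and the confirmation callback are purely illustrative:

```python
# Toy safeguard: hold back sentences that look harsh until the user confirms.
# The harsh-word list and confirmation step are illustrative assumptions.
HARSH_WORDS = {"damn", "shut up", "stupid"}   # assumed keyword list

def looks_harsh(sentence):
    lowered = sentence.lower()
    return any(word in lowered for word in HARSH_WORDS)

def filter_outgoing(sentence, confirm):
    """Send the sentence only if it is harmless or the user confirms it."""
    if not looks_harsh(sentence):
        return sentence            # speak immediately
    if confirm(sentence):          # e.g. a prompt, or a deliberate "yes" thought
        return sentence
    return None                    # swallowed, never reaches the other end

# Example: the TPS-report outburst would be held for confirmation.
held = filter_outgoing("Are you never going to shut up about those damn TPS Reports!?",
                       confirm=lambda s: False)
assert held is None
```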


Voiceless Phone Calls
The device, currently able to recognize 150 words, is under development by Ambient Corporation, co-founded by Michael Callahan, who demonstrates it in the following video from the TI Developer Conference ’08 by placing a “voiceless phone call”.


For the past few decades, humans have increasingly been extending their intellectual capacity with the use of machines. An example is using mobile devices to retrieve knowledge on the fly, making each device-wielding human more intellectually capable than one 20 years ago. But this is a matter of perspective, and many only see future invasive devices (e.g. a neural-interfaced memory storage device) as “extensions of intelligence” and everything else as mere tools.
Modern technology is starting to blur this line between intellectual extensions and tools. The “Smartest Person in the Room” project is one example: using the Audeo, a person thinks of a question, the question is sent to a web knowledge application, and the answer is found and tunneled back out through the speakers. The question is never audibly asked, yet it gets answered. Quite brilliant.
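
As I picture it, the pipeline looks roughly like the following; every component name here is a stand-in I have assumed (the Audeo word stream, the knowledge service, the synthesizer), not the project’s actual code:

```python
# Sketch of the "Smartest Person in the Room" pipeline as I understand it.
# Each callable below stands in for a component I am assuming exists:
# the Audeo's word stream, a web knowledge service, and a speech synthesizer.

def answer_silent_question(read_silent_words, ask_knowledge_service, speak):
    """Thought question in, spoken answer out, never audibly asked."""
    question = " ".join(read_silent_words())   # words gated from the collar
    answer = ask_knowledge_service(question)   # e.g. an HTTP query to a Q&A site
    speak(answer)                              # synthesizer plays it on the speakers
    return answer

# Example wiring with stand-ins:
answer_silent_question(
    read_silent_words=lambda: ["what", "year", "did", "apollo", "11", "land"],
    ask_knowledge_service=lambda q: "1969",
    speak=print,
)
```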


I’m looking forward to following the development of this project; it feeds my interest in machine interfaces, right alongside Emotiv’s EPOC and NeuroSky’s non-invasive neural interfaces.


Links & References
New Scientist on the Audeo

Black carbon contributes more to global warming: study

New York (PTI): Black carbon, emitted from biomass burning, diesel engine exhaust and the cooking fires widely used in India and China, has a warming effect in the atmosphere three to four times greater than prevailing estimates, according to scientists.
In an upcoming article in the journal Nature Geoscience, Scripps Climate and Atmospheric Science Professor V Ramanathan and University of Iowa researcher Greg Carmichael present their findings on the global warming effect that soot and other forms of black carbon could have.
Between 25 and 35 per cent of black carbon comes from India and China, emitted from the burning of wood and cow dung in household cooking and through the use of coal to heat homes, the paper says.

Soot and other forms of black carbon could have as much as 60 per cent of the current global warming effect of carbon dioxide, the leading greenhouse gas, the researchers noted.

Per capita emissions of black carbon from the United States and some European countries are still comparable to those from south and east Asia, the paper says.

In the paper, Ramanathan and Carmichael integrated observed data from satellites, aircraft and surface instruments about the warming effect of black carbon.

They found that its warming effect in the atmosphere is about 0.9 watts per square metre (W/m²), compared with the consensus estimate of between 0.2 W/m² and 0.4 W/m² agreed upon in a report released last year by the Intergovernmental Panel on Climate Change (IPCC).
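
A quick back-of-the-envelope check of the “as much as 60 per cent” figure, assuming the IPCC’s estimate of roughly 1.66 W/m² for carbon dioxide’s current forcing (a number not given in the article itself):

```python
# Back-of-the-envelope check of the "as much as 60 per cent of CO2" claim.
# The CO2 forcing value is my assumption (IPCC AR4 puts it around 1.66 W/m^2);
# the article only reports the black-carbon figure.
black_carbon_forcing = 0.9   # W/m^2, as reported above
co2_forcing = 1.66           # W/m^2, assumed IPCC AR4 value

print(f"black carbon / CO2 = {black_carbon_forcing / co2_forcing:.0%}")
# roughly 54%, in the same ballpark as the "up to 60 per cent" quoted above
```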