A new wearable device from MIT can "read" your thoughts with over 90 percent accuracy.
The "AlterEgo" device is worn suspended from the ear and attached to the chin of the user. It has sensors along the bar that pick up neuromuscular signals that occur in your jaw and face when you think of words in your mind, which are undetectable to the human eye.
The researchers collected data from computational tasks, such as arithmetic and a chess application, each with a limited vocabulary of about 20 words. Signals from "sub-vocalized" (said in your head) words picked up by the device were fed into a machine-learning system trained to associate particular words with particular signals. The device can then interpret new signals as words and commands.
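To make that concrete, here is a minimal sketch of the kind of pipeline described: featurize each window of electrode readings and train a small classifier over a fixed vocabulary. Everything here (channel count, features, model choice, and the vocabulary itself) is an illustrative assumption, not the AlterEgo team's actual implementation.

```python
# Illustrative sketch only -- not the AlterEgo team's code. Assumes each
# example is a fixed-length window of multi-channel electrode readings,
# labeled with one word from a small (~20-word) vocabulary.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical 20-word vocabulary (digits plus a few commands).
VOCAB = [str(d) for d in range(10)] + ["add", "subtract", "multiply",
         "divide", "up", "down", "left", "right", "select", "reply"]

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse a (channels, samples) window into simple per-channel
    statistics; a real system would likely learn features instead."""
    return np.concatenate([
        window.mean(axis=1),                           # mean activation
        window.std(axis=1),                            # signal variability
        np.abs(np.diff(window, axis=1)).mean(axis=1),  # mean slope magnitude
    ])

# Placeholder recordings: 200 windows, 7 channels, 250 samples each.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 7, 250))
y = rng.integers(0, len(VOCAB), 200)  # placeholder word labels

X = np.stack([featurize(w) for w in X_raw])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out word accuracy: {clf.score(X_test, y_test):.0%}")
```

With real recordings (rather than the random placeholder data above), the classifier's predictions would map incoming signal windows back to vocabulary words, which is the word-and-command interpretation step the article describes.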
Essentially, just by thinking about words, you can talk to this device and it will understand you.
The idea is that the device, which is still quite preliminary, may eventually let users answer queries (by searching Google, for example) or control a computer.
The device can already talk back to you through bone-conduction earphones, without anyone else hearing any part of the conversation. The team at MIT wants to expand on this to the point that you can have seamless conversations with the machine (or the Internet, applications, AI assistants, etc.) in your own head, without ever picking up another device.
“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” Arnav Kapur, one of the creators of the device, told MIT News.
So far, the device has been tested on 10 subjects in a usability study. Researchers spent around 15 minutes customizing the device to each user, then asked the subjects to use it to issue commands to a computer. The device was 92 percent accurate at transcribing the words the users sub-vocalized.
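For context, that figure presumably reflects word-level accuracy, i.e. the fraction of sub-vocalized words the system decoded correctly. A toy illustration with invented data:

```python
# Toy illustration of word-level accuracy -- invented data, not the study's.
predicted = ["add", "3", "7", "reply", "up"]
actual    = ["add", "3", "1", "reply", "up"]
accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(f"{accuracy:.0%}")  # 80% on this toy data; the MIT study reported 92%
```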
However, the team has not yet tested the system in a real-world, ambulatory setting. "Our existing study was conducted in a stationary setup. In the future, we would like to conduct longitudinal usability tests in daily scenarios," they write.
The researchers believe that by collecting more data through everyday use, the way Siri and Alexa do, the system could become far more accurate very quickly while expanding its vocabulary at the same time.
“We’re in the middle of collecting data, and the results look nice," said Kapur. "I think we’ll achieve full conversation some day.”
This article was originally published on IFLScience.