A novel MIT headset allows humans and computers to communicate silently
Subvocalization research is advancing rapidly: a team of MIT scientists led by Arnav Kapur is building a headset that lets users control a computer without speaking aloud. They have managed to almost entirely eliminate the acoustic component when transmitting words from person to machine and back. This is not an incremental improvement over existing systems but a fundamentally new interface called AlterEgo.
Unlike a laryngophone, the headset responds not to the mechanical vibration of the skin during speech but to the electrical activity in the muscles and nerves of the larynx, jaw, and tongue. Users do not need to open their mouths, draw breath, or actually say words; it is enough to silently imagine pronouncing a command. An array of electrodes picks up these signals, and a neural network matches their pattern against a database and decodes what the person "said". In the course of the research, Kapur's group reduced the number of sensors from about fifteen to just four, identified the best locations for them, and developed algorithms for recognizing the details of subvocalization. The computer's response is also delivered silently, through vibration of the bones of the skull. As a result, the experimental headset looks "stripped down": it has no conventional microphone or earpiece, and a conversation with it resembles mind reading that cannot be overheard from outside.
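To make the decoding idea concrete, here is a minimal, purely illustrative sketch of mapping a four-channel electrode reading to the closest known pattern in a small vocabulary. The vocabulary, signal values, and nearest-neighbor matching are all invented for this example; the actual AlterEgo system uses a trained neural network on real neuromuscular signals.

```python
import math

# Hypothetical reference patterns: one 4-channel electrode signature
# per subvocalized command. Values are made up for illustration.
VOCAB = {
    "up":     [0.9, 0.1, 0.2, 0.7],
    "down":   [0.1, 0.8, 0.6, 0.2],
    "select": [0.5, 0.5, 0.9, 0.1],
}

def decode(reading):
    """Return the command whose reference pattern lies closest
    (Euclidean distance) to the given 4-channel reading."""
    return min(VOCAB, key=lambda word: math.dist(VOCAB[word], reading))

# A noisy reading near the "up" pattern still decodes as "up".
print(decode([0.85, 0.15, 0.25, 0.65]))  # up
```

The nearest-neighbor rule stands in for the pattern-matching step the article describes; in practice a classifier would also need per-user calibration, which is why volunteers spent time personalizing the system before use.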
The system's vocabulary is still modest, but it is entirely sufficient for transmitting short commands, for instance, dictating chess moves. After 15 minutes of personal calibration, volunteers made only 8% errors over 90 minutes of silent communication. This result is so encouraging that Kapur's group is already envisioning a new human-machine interface. Why open a user's skull and implant a transmitter to decode thoughts if there is a simpler, painless way to achieve the same control?
Thad Starner, a professor at Georgia Tech's College of Computing, said he sees a wide range of uses for the technology, including loud environments such as airport runways and factory floors. He believes that people with voice impairments, as well as special-operations personnel, could benefit from the device in the future.