Without making a sound, you can convey what you want to say through nothing more than the subtle movement of the skin on your throat and jaw. This isn't lip reading, and it isn't a spy movie.
According to foreign media reports, such a silent speech AI system was recently developed by the University of Tokyo and Sony Computer Science Laboratories.
The device was reportedly inspired by the tactile lip-reading technique used by people with combined hearing and vision loss, in which the listener places a hand on the speaker's jaw and throat to feel speech. The researchers used deep learning to automate this process, enabling silent speech interaction.
Accelerometers and angular velocity sensors are attached at two points on the skin of the jaw. Combined with machine learning, the device can decode silently mouthed speech from the skin deformation between the jaw and the throat produced by the jaw and tongue muscle movements that accompany articulation.
At present, the researchers obtain 12-dimensional skin-motion signals from the sensors and can recognize 35 voice-command phrases with an accuracy of over 94%.
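To make the pipeline concrete, here is a minimal sketch of how a classifier might map 12-dimensional skin-motion features to command phrases. The sensor layout (two sites, each with a 3-axis accelerometer and a 3-axis gyroscope), the synthetic data, and the choice of logistic regression are illustrative assumptions; the article does not describe the researchers' actual feature extraction or model, which likely operates on time-series data with a deep network.

```python
# Hypothetical sketch, NOT the researchers' pipeline: classify silently
# mouthed command phrases from 12-D skin-motion feature vectors
# (assumed: 2 sensor sites x (3-axis accelerometer + 3-axis gyroscope)).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_DIMS = 12      # assumed 2 sites x (3-axis accel + 3-axis gyro)
N_PHRASES = 35   # number of command phrases reported in the article
SAMPLES = 40     # synthetic utterances per phrase

# Synthetic stand-in data: each phrase gets its own mean motion signature,
# and utterances scatter around it with small noise.
centers = rng.normal(0.0, 3.0, size=(N_PHRASES, N_DIMS))
X = np.vstack([c + rng.normal(0.0, 0.5, size=(SAMPLES, N_DIMS))
               for c in centers])
y = np.repeat(np.arange(N_PHRASES), SAMPLES)

# Fit a simple linear classifier over the 12-D features.
clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

On well-separated synthetic signatures like these, even a linear model scores highly; the real difficulty lies in extracting stable features from noisy, time-varying sensor streams.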
The device is small, lightweight, and low-power, and is largely unaffected by environmental factors such as lighting conditions. It does not interfere with everyday life: users can eat and converse normally while wearing it, which makes it highly practical.
In the future, this device could benefit a large number of people with disabilities.