Soli sensors/chips: evolution from the original version (left) to the current version (right)
As early as July 29 this year, Google announced the Motion Sense feature of the Pixel 4 on its blog. Thanks to the Soli chip, which combines custom software algorithms with an advanced hardware sensor, the Pixel 4 can sense even small finger movements.
Google later highlighted the feature at its launch event, where a spokesperson said the Pixel 4 has a built-in motion-sensing radar that can track sub-millimeter gesture movements with high speed and accuracy. With it, users can control the phone simply by waving a hand: skipping songs, snoozing alarms, silencing calls, and so on.
Motion Sense works even in the dark. Moreover, users can turn it on or off at any time; all processing is done locally on the phone, and the data is never saved or shared with other Google services.
Regarding the feature's prospects, Google said that Motion Sense is just getting started and will gradually improve over time. The Pixel 4 is the world's first smartphone with a Soli chip.
In theory, integrating Soli radar chips into smartphones is revolutionary; in the eyes of most people, however, the technology may still be a gimmick.
Google doesn't think so. Brandon Barbello, Pixel's product manager, said that Motion Sense offers three ways to interact; once you understand all three thoroughly, you won't regard the technology as a mere gimmick.
The first is sensing presence.
When the phone is placed on a table, the Soli radar chip in the Pixel 4 generates a small sensing field around the device, which can be pictured as an invisible hemisphere with a radius of 1 to 2 feet. Sensing is active only when the phone is facing up or outward; and if the user moves beyond the sensing range, the display turns off.
The second is sensing reach.
With this capability, when the user deliberately reaches for the phone, the screen wakes and the face-unlock sensors activate; if an alarm or ringtone is sounding, it automatically quiets as the user's hand approaches.
The third is sensing gestures.
Gestures fall into two types: up-and-down (wave) gestures turn off alarms or mute ringtones, while swiping gestures control music playback. Users could do more specific things with gestures, but for now Google is not opening additional gestures to third-party developers.
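The three interaction tiers can be pictured as a small event dispatcher. The sketch below is purely illustrative: the class, method names, and thresholds are assumptions for the sake of the example, not Google's actual Motion Sense API.

```python
from enum import Enum


class Interaction(Enum):
    PRESENCE = "presence"   # user is inside the ~1-2 ft sensing bubble
    REACH = "reach"         # user deliberately reaches for the phone
    GESTURE = "gesture"     # explicit wave or swipe


class MotionSenseSketch:
    """Hypothetical model of the three Motion Sense interaction tiers."""

    SENSING_RADIUS_M = 0.5  # roughly 1 to 2 feet, as described in the text

    def __init__(self):
        self.screen_on = False
        self.ringing = False
        self.face_unlock_armed = False

    def on_presence(self, distance_m: float) -> None:
        # Presence: keep the display awake only while the user is nearby.
        self.screen_on = distance_m <= self.SENSING_RADIUS_M

    def on_reach(self) -> None:
        # Reach: wake the screen, arm face unlock, quiet any alarm/ring.
        self.screen_on = True
        self.face_unlock_armed = True
        self.ringing = False

    def on_gesture(self, kind: str) -> str:
        # Gesture: a wave dismisses alarms; a swipe skips music tracks.
        if kind == "wave":
            self.ringing = False
            return "alarm dismissed"
        if kind in ("swipe_left", "swipe_right"):
            return "track skipped"
        return "ignored"
```

The point of the tiered design is that each layer requires more intent than the last: presence merely keeps the screen alive, reach prepares the phone for use, and only an explicit gesture triggers an action.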
Similar functions have traditionally been handled by other sensors. For example, a camera can detect a user waving in front of the phone, and an accelerometer can sense the user picking the phone up to activate face recognition. So people naturally wonder: does using Soli radar for these functions really make the user experience better?
Technically, radar chips are preferable to cameras, says Ivan Poupyrev, director of engineering at Google ATAP. First, radar consumes far less power than a camera; second, because it is not a camera, it does not need to collect identifying information about the user.
However, putting Soli to work for Motion Sense in the Pixel 4 also raised technical challenges; for example, how do you make radar work at such a small scale?
Ivan Poupyrev said:
It's not as simple as following a textbook; we had to start from scratch... In my opinion, applying this new technology to mobile phones is a technological miracle. Although in theory Soli can detect the wings of a butterfly, or even a person standing 7 meters behind a wall, in practice we need more time to reach that level.
Although Motion Sense can do only a few things today, that is by design: Google is creating a new interaction language, and that language needs to be simple; otherwise it would be confusing.
Take the "swipe" gesture, for example. There is in fact no universal standard for it: different people think differently and make different swiping motions. Ivan Poupyrev said his team spent weeks on this problem, even collecting dozens of "swipe" gestures from different people.
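One common way to tolerate that kind of variation is to classify a swipe by its net displacement rather than its exact path, so wobbly, fast, or slow swipes from different users map to the same intent. The function below is a minimal sketch of that idea under assumed 2D trajectory input; it is an illustration of the design problem, not how Soli's radar pipeline actually works.

```python
def classify_swipe(points):
    """Classify a gesture trajectory as 'left', 'right', or 'none'.

    points: list of (x, y) samples along the hand's path.
    Only the net horizontal displacement decides the label, which makes
    the classifier tolerant of each user's personal swipe shape.
    """
    if len(points) < 2:
        return "none"
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    # Require clearly dominant horizontal motion, so taps and vertical
    # waves do not accidentally register as swipes.
    if abs(dx) < 2 * abs(dy) or abs(dx) < 0.1:
        return "none"
    return "right" if dx > 0 else "left"
```

A curved rightward swipe like `[(0, 0), (0.5, 0.05), (1.0, 0.1)]` and a straight one both yield "right", while a mostly vertical motion yields "none".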
In addition, Google shows users a tutorial on the gesture; and given that some users may not like the default "swipe" gesture, Google also provides a "preferences" option for them to choose from.
In other words, Google is not trying to replace manual operation with automatic sensing; it just wants to give users a better, more seamless experience. After all, it turns out that more people are willing to use an auto-reply suggestion than to manually type a "yes".
In the final analysis, the technologies that win people over are always the ones that are easy to use. The crux of the matter is not what new technologies or features can do for us, but whether they connect with us emotionally, or are simply fun to use.