Google research lets sign language switch active speaker in video calls
An aspect of video calls that many of us take for granted is the way they can switch between feeds to highlight whoever's speaking.
Silent speech like sign language doesn't trigger those algorithms, unfortunately, but this research from Google might change that.
It's a real-time sign language detection engine that can tell when someone is signing and when they're done.
Of course it's trivial for humans to tell this sort of thing, but it's harder for a video call system that's used to just pushing pixels.
A new paper from Google researchers, presented at ECCV, shows how it can be done efficiently and with very little latency. It would defeat the point if the sign language detection worked but resulted in delayed or degraded video, so the researchers' goal was to make sure the model was both lightweight and reliable.
The system first runs the video through a model called PoseNet, which estimates the positions of the body and limbs in each frame. This simplified visual information is sent to a model trained on pose data from video of people using German Sign Language, which compares the live poses with what it has learned signing looks like.
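To give a sense of how lightweight that pipeline can be, here is a minimal sketch in Python (not the researchers' code) of the core idea: take pose keypoints per frame, turn them into a simple motion signal, and run a tiny classifier on top. The keypoint indices, the shoulder-width normalization, and the classifier weights below are illustrative assumptions; the actual model is learned from the German Sign Language footage rather than hand-tuned like this.

```python
# Rough sketch of signing detection from pose keypoints (illustrative only).
import numpy as np

# Hypothetical keypoint indices in a PoseNet-style (x, y) array per frame.
LEFT_SHOULDER, RIGHT_SHOULDER = 5, 6

def motion_feature(prev_pose: np.ndarray, pose: np.ndarray) -> float:
    """Mean keypoint displacement between two frames, measured in
    shoulder-widths so it doesn't depend on how far the signer sits
    from the camera (an assumed normalization)."""
    shoulder_width = np.linalg.norm(pose[LEFT_SHOULDER] - pose[RIGHT_SHOULDER])
    displacement = np.linalg.norm(pose - prev_pose, axis=1).mean()
    return displacement / max(shoulder_width, 1e-6)

def signing_probability(features: list[float], weight: float = 8.0,
                        bias: float = -2.0) -> float:
    """Toy classifier: logistic regression on the average motion signal.
    The weights here are made up; the real detector is trained on labeled
    signing video."""
    x = float(np.mean(features))
    return 1.0 / (1.0 + np.exp(-(weight * x + bias)))

# Usage with synthetic poses: 30 frames, 17 keypoints each, (x, y) in pixels.
rng = np.random.default_rng(0)
frames = rng.uniform(0, 480, size=(30, 17, 2))
feats = [motion_feature(frames[i - 1], frames[i]) for i in range(1, len(frames))]
print(f"signing probability: {signing_probability(feats):.2f}")
```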
This simple process already produces 80 percent accuracy in predicting whether a person is signing or not, and with some additional optimization it reaches 91.5 percent accuracy.
Right now it's just a demo, which you can try here, but there doesn't seem to be any reason why it couldn't be built right into existing video call systems, or even as an app that piggybacks on them.