Facebook machine learning aims to modify faces, hands and outfits
The latest research out of Facebook sets machine learning models to tasks that, to us, seem rather ordinary but for a computer are still monstrously difficult.
These projects aim to anonymize faces, improvise hand movements and perhaps hardest of all give credible fashion advice.
Deepfakes use a carefully cultivated understanding of a face's features and landmarks to map one person's expressions and movements onto a completely different face. The Facebook team used the same features and landmarks, but instead of swapping identities, used them to tweak the face just enough that it's no longer recognizable to facial recognition engines.
This could allow someone who, for whatever reason, wants to appear on video but not be recognized publicly to do so without something as clunky as a mask or completely fabricated face.
Instead, they'd look a bit like themselves, but with slightly wider-set eyes, a thinner mouth, a higher forehead and so on.
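Facebook hasn't released code for this, and the real system learns its modifications rather than hard-coding them, but the basic idea of nudging facial landmarks can be sketched in a few lines. Everything here is illustrative: the landmark names, coordinates, and shift amounts are made up, and a production detector would supply dozens of points rather than four.

```python
import numpy as np

# Hypothetical 2D facial landmarks in pixel coordinates. A real
# detector (e.g. a 68-point model) would provide many more points;
# these four are just for illustration.
landmarks = {
    "left_eye":    np.array([120.0, 150.0]),
    "right_eye":   np.array([180.0, 150.0]),
    "mouth_left":  np.array([135.0, 220.0]),
    "mouth_right": np.array([165.0, 220.0]),
}

def anonymize(lm, eye_shift=4.0, mouth_shrink=0.8):
    """Nudge landmarks just enough to change the identity signature:
    push the eyes apart and narrow the mouth, as described above.
    The magnitudes here are arbitrary; the actual system learns them."""
    out = {k: v.copy() for k, v in lm.items()}
    out["left_eye"][0]  -= eye_shift   # slightly wider-set eyes
    out["right_eye"][0] += eye_shift
    # Pull the mouth corners toward their midpoint for a thinner mouth.
    center = (lm["mouth_left"] + lm["mouth_right"]) / 2.0
    out["mouth_left"]  = center + (lm["mouth_left"]  - center) * mouth_shrink
    out["mouth_right"] = center + (lm["mouth_right"] - center) * mouth_shrink
    return out

tweaked = anonymize(landmarks)
```

In practice these shifted landmarks would then drive an image-warping or generative step to render the modified face; the toy above only shows the geometric side of the idea.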
The system they created appears to work well, though it would of course require some optimization before it could be deployed as a product. But one can imagine how useful such a thing might be, whether for those at risk of retribution from political oppressors or for more garden-variety privacy preferences.
It's a little funny to think about, but really there's not a lot of data on how exactly people move their hands when they talk. It's always nice to detect faces in a photo faster or more accurately, or infer location from the objects in a room, but clearly there are many more obscure or surprising aspects of digital life that could be improved with a little visual intelligence.