Soon, Facebook may be able to recognize anyone's face almost as well as we can. The social network's researchers have created DeepFace, a facial recognition system that identifies faces nearly as accurately as a human being.
Built by Yaniv Taigman and colleagues at Facebook's artificial intelligence lab, DeepFace takes a weakness shared by most facial recognition systems and turns it into a strength. When a face in a photo is off-center or not facing the camera head-on, most systems fail to recognize it; DeepFace can still find a match.
Taigman and his team build a 3-D model of the face from a photo, then rotate it into a standard frontal position so the algorithm always works with the same view. A simulated neural network then computes a numerical description of the repositioned face. If DeepFace finds enough similarity between the descriptions of two photos, it declares a match.
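The final comparison step can be sketched in a few lines. This is a minimal illustration, not Facebook's actual method: the descriptor values and the fixed similarity threshold here are hypothetical, and DeepFace's real verification stage uses a learned comparison rather than a hard cosine cutoff.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two numerical face descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(desc1: np.ndarray, desc2: np.ndarray,
                threshold: float = 0.8) -> bool:
    # Hypothetical threshold for illustration only: descriptors that
    # point in nearly the same direction are treated as the same face.
    return cosine_similarity(desc1, desc2) >= threshold
```

In practice each photo would be frontalized first, then run through the network to produce its descriptor; only those descriptors are compared.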
"This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers," researchers explained. "Thus, we trained it on the largest facial dataset to date, an identity-labeled dataset of four million facial images belonging to more than 4,000 identities, where each identity has an average of over a thousand samples."
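The distinction the researchers draw between locally connected layers and standard convolutional layers can be illustrated with a toy 1-D sketch. This is not DeepFace's architecture, just the core idea: a convolution reuses one small filter at every position, while a locally connected layer learns a separate filter per output position, which is why the parameter count balloons into the millions.

```python
import numpy as np

def locally_connected_1d(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # x: input of length L; weights: shape (L - k + 1, k), i.e. a
    # DIFFERENT k-tap filter for every output position. A standard
    # convolution would instead share a single (k,) filter everywhere.
    out_len, k = weights.shape
    return np.array([np.dot(x[i:i + k], weights[i]) for i in range(out_len)])
```

With an input of length 5 and filter size 3, a shared-weight convolution needs 3 parameters, while the locally connected version needs 3 × 3 = 9; scaled up to 2-D faces across many layers, this is how the model reaches 120 million parameters.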
Regardless of lighting or angle, DeepFace is said to identify faces with 97.25 percent accuracy; the average human scores 97.53 percent on the same task.
The DeepFace algorithm has also been tested successfully for facial verification in YouTube videos, although the lower image quality there made matching more difficult.
Facebook will probably use the technology to make tagging people in photos easier and to improve other photo-based features.
For now, DeepFace remains solely a research project. The team will present its work at the IEEE Conference on Computer Vision and Pattern Recognition in June.