MIT researchers have developed an impressive, albeit terrifying, artificial intelligence application that can figure out what you look like just by listening to your voice.
In a recent paper titled 'Speech2Face: Learning the Face Behind a Voice,' they detailed how the AI software can reconstruct faces after being supplied with various sound bites.
To achieve this, the neural network was fed millions of educational clips from YouTube that featured more than 100,000 people.
'Our goal in this work is to study to what extent we can infer how a person looks from the way they talk,' the researchers explained in the study, which was published on arXiv, a repository for preprint papers that have not been peer reviewed.
'Obviously, there is no one-to-one matching between faces and voices. Thus, our goal is not to predict a recognizable image of the exact face, but rather to capture dominant facial traits of the person that are correlated with the input speech.'
The AI was able to study the provided YouTube footage and form correlations between the speaker's voice and face, as well as make judgments on factors like age, gender and ethnicity.
It was able to do this without any need for human intervention, according to researchers.
Researchers said the AI could have 'useful applications' in the future, like 'attaching a representative face to phone/video calls based on the speaker's voice.'
However, they caution that the neural network isn't meant to generate exact depictions of what a person looks like; instead, it only generates a rough approximation.