Artificial Intelligence is a useful tool for taking the burden of trivial and mundane tasks off humans. Top companies such as Google are using AI and machine learning to emulate people for simple tasks such as booking a salon appointment, with products like Google Duplex.
AI algorithms are also used to animate 2D objects such as talking heads, but this typically requires a huge dataset of samples to train the neural network to adapt to the subject's, in this case a human's, behavior and mannerisms. That process just got a lot simpler thanks to the Samsung AI Center in Moscow, which has published a paper showing that a highly realistic talking head model can be obtained by feeding the algorithm as little as a single sample photo.
What does the research paper say?
The research paper, titled "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" and from the Skolkovo Institute of Science and Technology in Moscow, found its way to a subreddit along with a demo video that shows the AI in action. The algorithm relies on facial landmarks, a skeletal framework of a person's face learned by crawling through a large dataset of videos, which serves as the backbone for the synthesized face. The realism of the synthesized face depends on how many source images you feed the algorithm.
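The few-shot idea can be sketched at a very high level: an embedder distills one or more source photos of a person into an identity vector, and a generator combines that vector with a target landmark skeleton to produce a new frame. The minimal NumPy sketch below illustrates only that structure; all names, shapes, and the random linear maps standing in for trained networks are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 16          # size of the identity vector (assumed)
LANDMARKS = 68 * 2      # x,y coords of a 68-point face skeleton (assumed)
IMG_PIXELS = 64 * 64    # flattened output frame (assumed)

# Random matrices standing in for the trained embedder/generator weights.
W_embed = rng.normal(size=(IMG_PIXELS, EMBED_DIM)) * 0.01
W_gen = rng.normal(size=(EMBED_DIM + LANDMARKS, IMG_PIXELS)) * 0.01

def embed_identity(source_frames):
    """Average per-frame embeddings into one identity vector.

    Few-shot means this list can be as short as a single photo."""
    return np.mean([frame @ W_embed for frame in source_frames], axis=0)

def generate_frame(identity, target_landmarks):
    """Condition the generator on identity plus a landmark skeleton."""
    return np.concatenate([identity, target_landmarks]) @ W_gen

# One-shot case: a single source photo is enough to form an identity vector,
# which can then be animated by feeding in different landmark skeletons.
one_photo = [rng.normal(size=IMG_PIXELS)]
identity = embed_identity(one_photo)
frame = generate_frame(identity, rng.normal(size=LANDMARKS))
print(frame.shape)  # one synthesized 64x64 frame, flattened
```

Feeding more source frames to `embed_identity` refines the identity vector, which mirrors the paper's observation that realism improves with more source images.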
The research paper shows the difference between models trained with 32 samples, eight samples, and a single sample. It also shows the algorithm successfully creating synthesized faces of celebrities and even paintings, such as the Mona Lisa.
The technique could be used to give people more realistic-looking Memojis, Apple's version of an Animoji modeled on yourself. The creative potential of this is endless. However, such technology could also be misused if it falls into the wrong hands.