Survey: Interactive Synthesis of an Animation Model Based on Tracking the User's Utterance and Facial Expression
Rana Ali Salim

Abstract
Synthesizing expressive facial animation is a challenging topic in the graphics community. Two-dimensional techniques have been used in the past, but they lose a person's expressions, identity, and embodiment in the image. This produces unrealistic results: a face with no expression may appear sad when simulated from a distance, and the lip and facial movements of an unreal character are generated by simulating a real one. The purpose of this paper is to review several methods for generating facial animation and lip movements by simulating a person's real face, in order to determine which of these methods is best suited for adoption and further development. Combining facial Action Units (AUs) with deep learning based on a convolutional neural network (CNN) gives excellent results, as indicated by the 2019 research reviewed here.