Identity Recognition using 4D Facial Dynamics



The human face has been the subject of tremendous scrutiny in the fields of human cognition, computer vision, image processing and computer graphics, and is perhaps the most extensively studied part of the human body.

The study of the face from a computational perspective has resulted in increasingly sophisticated tools for non-rigid object modelling and tracking, object parameterisation and recognition, occlusion handling, and the extraction of invariant features in real-world settings.

Much of this analysis has been conducted on static 2D or 3D images or short 2D image sequences; there has been very little work investigating facial dynamics in video-rate 3D data. The advantages of 3D over 2D data in pattern recognition tasks have largely been viewed as a means of overcoming variations in pose and illumination. However, 3D information captured over time also provides a complete description of how an object deforms in 4D spatiotemporal space, without the loss of information incurred by the 2D image projection process.

With respect to the human face, one important use for such data is to analyse the ways in which individuals deform their faces while producing expressions or speech. This analysis can reveal the similarities and idiosyncrasies of facial motion across individuals, which has important applications in physiological and clinical studies, and can also drive individualised dynamic facial models for highly realistic animation. Another direction of research stems from the question of whether it is possible to characterise an individual by their facial motion. For example, how can we quantify the similarity between two people’s smiles? Which expressions best discriminate individuals? Is it possible to build a prototypical 4D model of how people smile? How are individual differences reflected as deviations from this model?
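
The project description does not specify how such a similarity would be computed. As a minimal illustrative sketch only, assuming each smile is available as a sequence of tracked 3D landmark positions and comparing trajectories with a dynamic-time-warping alignment (a representation and alignment choice not stated in the original), a distance between two smiles could look as follows:

    import numpy as np

    def dtw_distance(seq_a, seq_b):
        # Each sequence has shape (frames, landmarks, 3): per-frame 3D
        # landmark positions. Frames are compared by the mean Euclidean
        # distance over landmarks, and DTW aligns the sequences in time
        # so that smiles performed at different speeds remain comparable.
        n, m = len(seq_a), len(seq_b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1], axis=1).mean()
                cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of A
                                     cost[i, j - 1],      # skip a frame of B
                                     cost[i - 1, j - 1])  # match both frames
        # Normalise by path length so longer recordings are not penalised.
        return cost[n, m] / (n + m)

    # Hypothetical usage: two synthetic "smiles" of different lengths,
    # each tracking 68 landmarks over time.
    rng = np.random.default_rng(0)
    smile_a = rng.normal(size=(40, 68, 3))
    smile_b = smile_a[::2] + rng.normal(scale=0.01, size=(20, 68, 3))
    print(dtw_distance(smile_a, smile_b))

Dynamic time warping is used here only because it tolerates differences in the speed at which an expression is performed; the project itself may rely on an entirely different representation of facial motion.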



Work carried out at the Institute of Systems and Robotics (ISR), Department of Electrical and Computer Engineering (DEEC), Faculty of Sciences and Technology (FCTUC), University of Coimbra.

This project is supported by the Portuguese Fundação para a Ciência e a Tecnologia (FCT) under grant PTDC/EIA-CCO/108791/2008.

Project time span: 1 April 2010 - 30 August 2012




DEMOS

Demonstration videos