3D Facial Data Representation

In this third task, high-level information characterizing a unique facial motion signature is extracted from the Facial Dynamics Database produced in Task 2.

T3.1 - Muscle activation based models

The first approach is based on AU characteristics (from the FACS system) or on the facial animation parameters (FAPs) of MPEG-4. Both are related to the activation of localized muscle groups, although the FAPs model defines a magnitude for a particular facial action with a higher degree of discretization. Another advantage of the FAPs model is its closer affinity between feature extraction with a deformable model (such as the AAM) and its animation parameters. This approach reduces the high dimensionality of the data through a compression based on a model related to facial muscle activation. Each 3D temporal instance is encoded with these models, and the encoding parameters become the input features for classification (Task 4). The discriminative power of both AU and FAP features will be evaluated.
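The encoding step above can be sketched as follows. This is a minimal illustration, not the project's implementation: the AU subset, the dict-based frame format and the intensity range are all assumptions made for the example.

```python
import numpy as np

# Hypothetical subset of FACS Action Units tracked per frame.
AU_IDS = [1, 2, 4, 6, 12, 15]

def encode_sequence(frames):
    """Encode a 3D temporal instance as a fixed-length feature matrix.

    `frames` is a list of dicts mapping AU id -> activation intensity
    in [0, 1]; absent AUs are treated as inactive (0.0). The returned
    (n_frames, n_aus) array holds the encoding parameters that would
    later serve as classifier inputs.
    """
    return np.array([[frame.get(au, 0.0) for au in AU_IDS]
                     for frame in frames])

# Example: a short smile sequence activating AU6 + AU12.
sequence = [{6: 0.2, 12: 0.3}, {6: 0.6, 12: 0.8}, {6: 0.9, 12: 1.0}]
features = encode_sequence(sequence)
print(features.shape)  # (3, 6)
```

The same vectorization applies to FAPs, with AU ids replaced by FAP indices and intensities by the FAP magnitudes.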

T3.2 - Manifold subspace analysis

In this second approach, spatiotemporal features will be learned directly from the deformed facial models using subspace decomposition. The goal is to derive a subspace embedded in a hyperspace of high dimensionality. A cloud of points containing N samples can be regarded as a single point in an N-dimensional space. For a particular class of data, such as the human face, the variability of these points has a much smaller dimensional cardinality than the total dimensionality of the data, so a low-dimensional manifold of facial variability is embedded in the high N-dimensional space.

Data of high dimensionality therefore often represent phenomena that are inherently low-dimensional. As the face warps smoothly during the formation of an expression, there is a high degree of dependency between the trajectories of points in the facial region. Some studies have shown that facial expressions describe a smooth path in a distribution embedded in a high-dimensional space, with the neutral expression lying at the center of that distribution.

Finding the low-dimensional manifold using subspace analysis tools

There is a variety of unsupervised learning algorithms that will be exploited to find the subspace, such as Laplacian Eigenmaps (LE), Independent Component Analysis (ICA), Locally Linear Embedding (LLE), Isomap, Lipschitz Embedding and Locality Preserving Projections (LPP).
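To make the idea concrete, here is a minimal numpy sketch of one of these algorithms, Laplacian Eigenmaps: build a k-nearest-neighbour graph, form the graph Laplacian, and take the bottom non-trivial eigenvectors as the embedding. The toy data and parameter choices are illustrative assumptions, not the project's settings.

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2):
    """Embed X (n_samples, n_features) into n_components dimensions
    via the spectrum of the normalized graph Laplacian."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Symmetric k-nearest-neighbour adjacency with binary weights.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]  # skip self
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Normalized eigenproblem: D^{-1/2} L D^{-1/2} v = lambda v.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
    vals, vecs = np.linalg.eigh(D_inv_sqrt @ L @ D_inv_sqrt)
    # eigh sorts ascending; skip the trivial constant eigenvector.
    return D_inv_sqrt @ vecs[:, 1:n_components + 1]

# Toy data: points along a noisy 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 40)
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.standard_normal((40, 3))
Y = laplacian_eigenmaps(X, n_neighbors=5, n_components=2)
print(Y.shape)  # (40, 2)
```

The same interface applies to the deformed facial models: each row of X would be one flattened 3D face instance.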

Distance metrics to measure dissimilarity in the embedded space

Several problems arise with this approach: the huge dimensionality of the data involved, and the fact that a distance metric applied in the original space may not correspond to dissimilarity in the embedded space (the distance between neighbours may be dominated by irrelevant attributes).

Establishing the dimensionality of the embedded space

Another problem that will be addressed is how to determine the optimal dimensionality of the embedded space. In discriminatory decompositions, such as PCA, it is common to retain the eigenvectors that explain a user-defined fraction of the variance (related to the eigenvalues).
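The PCA variance-retention criterion can be sketched in a few lines: sort the eigenvalues of the covariance matrix, accumulate the explained-variance ratio, and keep components until the user-defined threshold is reached. The 10-D toy data and the 95% threshold are assumptions for illustration.

```python
import numpy as np

def n_components_for_variance(X, target=0.95):
    """Number of principal components needed to explain `target`
    fraction of the total variance (i.e. of the eigenvalue sum)."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    explained = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(explained, target) + 1)

# Toy data in 10-D: two real directions of variation plus tiny noise.
rng = np.random.default_rng(1)
X = 0.01 * rng.standard_normal((200, 10))
X[:, 0] += 2.0 * rng.standard_normal(200)   # strong mode
X[:, 1] += 1.0 * rng.standard_normal(200)   # weaker mode
print(n_components_for_variance(X, 0.95))   # 2
```

Note that this criterion measures variability over samples, which is exactly the property the next paragraph contrasts with intrinsic-dimensionality-based methods.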

However, solutions like LE, LLE and Isomap learn the embedding from the intrinsic dimensionality of the data instance, rather than from its variability over multiple samples. Establishing the optimal number of dimensions into which the input features are projected will be the subject of further research.

The goals of this second approach are to show that the extra 3D information provides better discrimination than current 2D state-of-the-art methods, and to validate that features learned directly from the data can outperform features derived from a set of heuristics (i.e. facial Action Units).

