
    Facial Expression Recognition with Independent Subspace Analysis Based Feature Learning

    Hand-designed features (such as Gabor and LBP) have been widely employed in facial expression recognition. Independent Subspace Analysis (ISA) is an unsupervised feature learning method that can learn phase-invariant visual features from images. In real-world applications of facial expression recognition, it is very difficult to obtain precisely aligned face image sequences because of complex backgrounds and the limitations of face alignment approaches. This work investigates ISA-based facial expression recognition under imprecise face alignment. By analyzing recognition performance across different subspace sizes, it is shown that choosing an appropriate subspace size improves the robustness of the learned features for facial expression recognition under imprecise alignment. Supported by the Natural Science Foundation of Fujian Province (2014J01246); the Open Fund of the State Key Laboratory of Virtual Reality Technology and Systems (BUAA-VR-14KF-01); and a 2014 major science and technology project of the Anhui Provincial Department of Science and Technology (1301021018).
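    As a rough illustration of the phase invariance the abstract refers to, the sketch below computes ISA-style subspace responses with NumPy: linear filter outputs are squared and pooled within each subspace, so the pooled response is insensitive to the phase of the individual filters. The function name, the filter matrix W and the patch preprocessing are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def isa_responses(patches, W, subspace_size):
            # patches: (n_patches, patch_dim) whitened image patches
            # W: (n_filters, patch_dim) learned ISA filter matrix
            # subspace_size: number of filters pooled into one subspace
            s = patches @ W.T                        # linear filter responses
            n_subspaces = s.shape[1] // subspace_size
            s = s[:, :n_subspaces * subspace_size]
            s = s.reshape(len(patches), n_subspaces, subspace_size)
            # Square-and-sum pooling within each subspace: the energy is
            # unchanged when phase shifts redistribute response across the
            # filters of a subspace, which is the source of the invariance.
            return np.sqrt((s ** 2).sum(axis=2))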

    Automated and Real Time Subtle Facial Feature Tracker for Automatic Emotion Elicitation

    This thesis proposes a system for real-time detection of facial expressions that are subtle and exhibited in spontaneous, real-world settings. The underlying framework of the system is an open-source implementation of the Active Appearance Model (AAM). The algorithm operates by grouping the points provided by the AAM into higher-level regions, constructing and updating a background statistical model of movement in each region, and testing whether current movement in a given region substantially exceeds the expected value of movement in that region (computed from the statistical model). Movements that exceed the expected value by some threshold and do not appear to be false alarms due to artifacts (e.g., lighting changes) are considered valid changes in facial expression. These changes are expected to be rough indicators of facial activity that can be complemented by context-driven predictors of emotion derived from spontaneous settings.
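    A minimal sketch of the per-region test the abstract describes, assuming exponential moving averages for the background statistical model and a standard-deviation multiple for the threshold (both are assumptions; the thesis does not specify them here):

        import numpy as np

        class RegionMovementDetector:
            # Running background model of per-region movement (hypothetical
            # sketch). A region is flagged when its current movement exceeds
            # the expected value by k standard deviations.
            def __init__(self, n_regions, alpha=0.05, k=3.0):
                self.mean = np.zeros(n_regions)   # expected movement per region
                self.var = np.ones(n_regions)     # movement variance per region
                self.alpha = alpha                # background model update rate
                self.k = k                        # threshold in std deviations

            def update(self, movement):
                # movement: per-region displacement magnitudes for this frame
                flagged = movement > self.mean + self.k * np.sqrt(self.var)
                # Update the background statistics as exponential moving
                # averages so the model tracks slow, non-expression drift.
                diff = movement - self.mean
                self.mean += self.alpha * diff
                self.var += self.alpha * (diff ** 2 - self.var)
                return flagged  # regions with candidate expression changes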

    Real-time facial expression recognition using STAAM and layered GDA classifier

    This paper proposes real-time, person-independent facial expression recognition in two parts: a model-fitting part using a proposed stereo active appearance model (STAAM), and person-independent facial expression recognition using a layered generalized discriminant analysis (GDA) classifier. The STAAM fitting algorithm uses multiple calibrated perspective cameras to compute the 3D shape and rigid motion parameters. The use of calibration information reduces the number of model parameters, restricts the degrees of freedom in the model parameters, and increases the accuracy and speed of fitting. The STAAM uses a modified simultaneous-update fitting method that greatly reduces the fitting computation. In addition, the layered GDA classifier combines 3D shape and 2D appearance to improve recognition of person-independent facial expressions. Experimental results show that (1) the STAAM is more stable in fitting than the existing multiple-view AAM (MVAAM), (2) the modified simultaneous-update algorithm accelerates AAM fitting, and (3) combining 3D shape and 2D appearance features with a layered GDA classifier greatly improves facial expression recognition performance. (C) 2008 Elsevier B.V. All rights reserved.
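    The abstract does not detail how the layers are arranged, so the sketch below shows one plausible reading of a layered classifier: a first discriminant classifier on the 3D shape parameters routes each sample to a second, appearance-based classifier. Scikit-learn's LinearDiscriminantAnalysis stands in for the kernelized GDA, and the synthetic data and routing scheme are assumptions for illustration only.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        n, n_classes = 300, 4
        y_train = rng.integers(0, n_classes, n)
        # Stand-ins for STAAM outputs: 3D shape and 2D appearance parameters.
        X_shape = rng.normal(size=(n, 10)) + y_train[:, None]
        X_app = rng.normal(size=(n, 20)) + y_train[:, None]

        # First layer: discriminant classifier on the 3D shape parameters.
        shape_clf = LinearDiscriminantAnalysis().fit(X_shape, y_train)

        # Second layer: per-branch appearance classifiers, each trained on
        # the samples the first layer routes to that branch.
        routes = shape_clf.predict(X_shape)
        app_clfs = {}
        for label in np.unique(routes):
            idx = routes == label
            if len(np.unique(y_train[idx])) > 1:  # need >= 2 classes to fit
                app_clfs[label] = LinearDiscriminantAnalysis().fit(
                    X_app[idx], y_train[idx])

        def predict(x_shape, x_app):
            coarse = shape_clf.predict(x_shape.reshape(1, -1))[0]
            fine = app_clfs.get(coarse)
            # Fall back to the shape-layer decision when no appearance
            # classifier exists for this branch.
            return coarse if fine is None else fine.predict(
                x_app.reshape(1, -1))[0]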

    3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities

    Moving from a set of independent virtual worlds to an integrated network of 3D virtual worlds, or Metaverse, rests on progress in four areas: immersive realism, ubiquity of access and identity, interoperability, and scalability. For each area, the current status and the developments needed to achieve a functional Metaverse are described. Factors that support the formation of a viable Metaverse, such as institutional and popular interest and ongoing improvements in hardware performance, and factors that constrain the achievement of this goal, including limits in computational methods and unrealized collaboration among virtual world stakeholders and developers, are also considered.

    Micro-facial movement detection using spatio-temporal features

    Micro-facial expressions are fast, subtle movements of facial muscles that occur when someone is attempting to conceal their true emotion. Detecting these movements is difficult for a human, as a movement can appear and disappear within half a second. Recently, research into detecting micro-facial movements using computer vision and other techniques has emerged with the aim of outperforming a human. Much of this research is motivated by potential applications in security, healthcare and emotion-based training. The research has also raised ethical concerns about whether it is acceptable to detect micro-movements when people do not know they are showing them. The main aim of this thesis is to investigate and develop novel ways of detecting micro-facial movements using features based in the spatial and temporal domains. The contributions towards this aim are: an extended feature descriptor for micro-facial movement, namely Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) combined with Gaussian Derivatives (GD); a dataset of spontaneously induced micro-facial movements, namely Spontaneous Activity of Micro-Movements (SAMM); an individualised baseline method for micro-movement detection that forms an Adaptive Baseline Threshold (ABT); and Facial Action Coding System (FACS)-based regions that focus on the local movement of relevant facial areas. The LBP-TOP with GD feature was developed to improve on an established feature, using the GD to enhance the facial features. Using machine learning, the method performs well, achieving an accuracy of 92.6%. Next, a new dataset, SAMM, was introduced that addresses the limitations of previous sets, offering a wider demographic, increased resolution and comprehensive FACS coding. An individualised baseline method was then introduced and tested on the new dataset. Using feature difference instead of machine learning, performance increased to a recall of 0.8429 with maximum thresholding, rising further to 0.9125 when using the ABT. To increase the relevance of what is being processed on the face, FACS-based regions were created. By focusing on local regions and individualised baselines, this method outperformed similar state-of-the-art approaches with an Area Under Curve (AUC) of 0.7513. Research into detecting micro-movements is still in its infancy, and much more can be done to advance this field. While machine learning can find patterns in normal facial expressions, feature-difference methods perform best when detecting the subtle changes of the face. By using these and comparing the movement against a person's baseline, micro-movements can finally be accurately detected.
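    A minimal sketch of the feature-difference idea with an individualised, adaptive baseline threshold, assuming per-frame descriptor vectors and a mean-plus-k-standard-deviations threshold (the multiplier k and the use of Euclidean distance are assumptions; the thesis's exact ABT formulation may differ):

        import numpy as np

        def detect_micro_movements(features, baseline, k=3.0):
            # features: (n_frames, dim) per-frame descriptors, e.g.
            #           LBP-TOP histograms of the test sequence
            # baseline: (n_baseline_frames, dim) descriptors from neutral
            #           footage of the same person
            # k: threshold multiplier (an assumption for this sketch)
            ref = baseline.mean(axis=0)
            # Distance of each baseline frame from the person's own neutral
            # reference; the threshold adapts to that person's variability.
            base_d = np.linalg.norm(baseline - ref, axis=1)
            threshold = base_d.mean() + k * base_d.std()
            frame_d = np.linalg.norm(features - ref, axis=1)
            return frame_d > threshold  # mask of candidate micro-movements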