
    An Experimental Investigation about the Integration of Facial Dynamics in Video-Based Face Recognition

    Recent psychological and neural studies indicate that when people talk, their changing facial expressions and head movements provide a dynamic cue for recognition. Both fixed facial features and dynamic personal characteristics are therefore used by the human visual system (HVS) to recognize faces. However, most automatic recognition systems use only the static information, as it is unclear how the dynamic cue can be integrated and exploited. The few works attempting to combine facial structure and its dynamics do not consider the relative importance of these two cues; they rather combine the two cues in an ad hoc manner. But what is the relative importance of each of these two cues on its own? Does combining them systematically enhance recognition performance? To date, no work has studied these issues extensively. In this article, we investigate them by analyzing the effects of incorporating dynamic information in video-based automatic face recognition. We consider two factors (face sequence length and image quality) and study their effects on the performance of video-based systems that use a spatio-temporal representation instead of one based on a still image. We experiment with two different databases and consider the temporal hidden Markov model (HMM) and the auto-regressive moving average model (ARMA) as baseline methods for the spatio-temporal representation, and PCA and LDA for the image-based one. The extensive experimental results show that motion information also enhances automatic recognition, but not in the systematic way observed in the HVS.
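The still-image baseline mentioned above (PCA, popularly known as eigenfaces) is compact enough to sketch. The following is a minimal illustrative NumPy version, not the authors' implementation; the function names and toy dimensions are assumptions:

```python
import numpy as np

def eigenfaces(train, k):
    """Compute a k-dimensional PCA (eigenface) basis from flattened face
    images. train: (n_images, n_pixels) array; returns (mean, basis),
    where basis has shape (k, n_pixels)."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(mean, basis, face):
    """Project a flattened face image onto the eigenface subspace."""
    return basis @ (face - mean)

def nearest_neighbor(gallery, probe):
    """Index of the gallery projection closest to the probe projection."""
    dists = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(dists))
```

A spatio-temporal method such as HMM or ARMA would instead model the sequence of such projections over the whole video, rather than classifying a single frame.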

    An integrated access control and lighting configuration system for smart buildings

    This article presents an integrated access control and lighting configuration system for smart buildings. The system uses two-factor authentication, one factor based on face recognition and the other on an RFID tag; it identifies the user inside a room and performs an automatic lighting configuration based on the user's behavior. The communication among the devices is performed by radio frequency using the low to medium frequency spectrum (LMRF), without providing direct Internet access, hence avoiding known Internet security issues. This system can be easily deployed in meeting rooms or offices in business or government buildings. Our evaluations show acceptable processing and communication times and demonstrate the robustness of the system.
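The two-factor decision logic described above can be sketched in a few lines. This is a hypothetical illustration; the function names, threshold, and fallback profile are assumptions, not details from the article:

```python
def authenticate(face_score, tag_id, authorized_tags, face_threshold=0.8):
    """Grant access only when both factors pass: the face-recognition
    match score clears a threshold AND the presented RFID tag is known."""
    return face_score >= face_threshold and tag_id in authorized_tags

def lighting_for(user, preferences, default="standard"):
    """Look up the lighting configuration learned from this user's
    behavior, falling back to a default profile for unknown users."""
    return preferences.get(user, default)
```

Requiring both factors means a stolen tag alone, or a face match alone, is not enough to open the room.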

    A real-time facial expression recognition system for online games

    Multiplayer online games (MOGs) have become increasingly popular because of the opportunity they provide for collaboration, communication, and interaction. However, compared with ordinary human communication, MOGs still have several limitations, especially in communication using facial expressions. Although detailed facial animation has already been achieved in a number of MOGs, players have to use text commands to control the expressions of avatars. In this paper, we propose an automatic expression recognition system that can be integrated into an MOG to control the facial expressions of avatars. To meet the specific requirements of such a system, a number of algorithms are studied, improved, and extended. In particular, the Viola and Jones face-detection method is extended to detect small-scale key facial components, and fixed facial landmarks are used to reduce the computational load with little degradation in recognition accuracy.
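The Viola and Jones detector owes its real-time speed to the integral image, which makes the sum over any rectangular Haar-like feature a constant-time computation. A minimal sketch of that core trick (illustrative only; the paper's extension to small-scale facial components is not reproduced here):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum table: after this, the sum of any axis-aligned
    rectangle in img costs at most four table lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via the integral image ii
    (exclusive upper bounds, as in NumPy slicing)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

A Haar-like feature is then just the difference of two or three such rectangle sums, which is why thousands of candidate windows per frame remain affordable.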

    Recognizing Faces -- An Approach Based on Gabor Wavelets

    As a hot research topic over the last 25 years, face recognition still seems to be a difficult and largely unsolved problem. Distortions caused by variations in illumination, expression, and pose are the main challenges to be dealt with by researchers in this field. Efficient recognition algorithms, robust against such distortions, are the main motivation of this research. Based on a detailed review of the background and wide applications of the Gabor wavelet, this powerful and biologically motivated mathematical tool is adopted to extract features for face recognition. The features contain important local frequency information and have been proven to be robust against commonly encountered distortions. To reduce the computation and memory cost caused by the large feature dimension, a novel boosting-based algorithm is proposed and successfully applied to eliminate redundant features. The selected features are further enhanced by kernel subspace methods to handle nonlinear face variations. The efficiency and robustness of the proposed algorithm are extensively tested using the ORL, FERET, and BANCA databases. To normalize the scale and orientation of face images, a generalized symmetry measure based algorithm is proposed for automatic eye location. Without requiring a training process, the method is simple, fast, and fully tested using thousands of images from the BioID and BANCA databases. An automatic user identification system, consisting of detection, recognition, and user management modules, has been developed. The system can effectively detect faces in real video streams, identify them, and retrieve the corresponding user information from the application database. Different detection and recognition algorithms can also be easily integrated into the framework.
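As an illustration of the feature-extraction stage described above, the real part of a 2-D Gabor kernel can be generated directly. This is a generic textbook formulation (a sinusoidal carrier modulated by a Gaussian envelope), not the thesis's exact parameterization:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor filter: a cosine carrier of the given
    wavelength, at orientation theta, under a Gaussian envelope of
    width sigma. size must be odd so the kernel has a center pixel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier oscillates along theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier
```

A Gabor feature vector for a face is typically built by convolving the image with a bank of such kernels at several scales and orientations, which is exactly why the resulting dimensionality is large and feature selection (here, boosting) pays off.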

    Automatic emotional state detection using facial expression dynamic in videos

    In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games, and smart surveillance systems.
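As a toy illustration of extracting dynamic motion features from an expression clip, one could pool frame-to-frame intensity changes over the sequence. The paper's actual features and learners are more elaborate; this sketch only conveys the idea that the feature summarizes change over time rather than a single frame:

```python
import numpy as np

def motion_features(frames):
    """Summarize facial-expression dynamics in a clip: mean and max
    absolute frame-to-frame intensity change, pooled over the whole
    sequence. frames: (n_frames, h, w) array -> 2-D feature vector."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return np.array([diffs.mean(), diffs.max()])
```

A perfectly still clip maps to the zero vector, while a clip with expression movement yields positive values, so even this crude feature separates "static" from "dynamic" faces before any classifier is trained.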

    Integrated process of images and acceleration measurements for damage detection

    The use of mobile robots and UAVs to capture otherwise unobtainable images, together with on-site automated acceleration measurements easily achievable by wireless sensors capable of remote data transfer, has strongly enhanced the capability for defect and damage evaluation in bridges. A sequential procedure is proposed here for damage monitoring and bridge condition assessment based on both digital image processing, for survey and defect evaluation, and structural identification based on acceleration measurements. A steel bridge has been simultaneously inspected by UAV, acquiring images using visible light or infrared radiation, and monitored through a wireless sensor network (WSN) measuring structural vibrations. First, image processing was used to construct a geometrical model and to quantify the extent of corrosion. Then, the structural model was updated based on the modal quantities identified from the acceleration measurements acquired by the deployed WSN. © 2017 The Authors. Published by Elsevier Ltd.
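A crude sketch of the corrosion-quantification idea, assuming corroded regions appear darker than sound steel in a greyscale inspection image. The paper's actual image-processing pipeline is not specified in the abstract, so this is purely illustrative:

```python
import numpy as np

def corrosion_extent(image, threshold):
    """Fraction of pixels whose intensity falls below the threshold:
    a crude proxy for the corroded share of the imaged surface.
    image: 2-D greyscale array; returns a value in [0, 1]."""
    mask = image < threshold
    return float(mask.mean())
```

In practice the threshold would be calibrated (or replaced by a trained segmentation model), and the pixel fraction converted to physical area using the reconstructed geometrical model.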

    Multimedia information technology and the annotation of video

    The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we make an attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.