18,843 research outputs found

    3-D Face Analysis and Identification Based on Statistical Shape Modelling

    This paper presents an effective method of statistical shape representation for automatic face analysis and identification in 3-D. The method combines statistical shape modelling techniques with a non-rigid deformation matching scheme. The work makes three key contributions. The first is a new 3-D shape registration method using hierarchical landmark detection and a multilevel B-spline warping technique, which allows accurate dense correspondence search for statistical model construction. The second is a shape representation approach based on the Laplacian Eigenmap, which provides a nonlinear submanifold linking the underlying structure of the facial data. The third is a hybrid method for matching the statistical model to a test dataset, which controls the level of the model's deformation at different matching stages and so increases the chance of successful matching. The proposed method is tested on the public BU-3DFE database. Results indicate that it can achieve extremely high verification rates in a series of tests, thus demonstrating real-world practicality.
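The Laplacian Eigenmap representation mentioned in this abstract can be illustrated with a generic sketch: build a k-NN graph over the data points, form the unnormalised graph Laplacian, and embed with its smallest nontrivial eigenvectors. The uniform edge weights and the choice of `k` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def laplacian_eigenmap(X, k=5, dim=2):
    """Embed points X (n x d) into `dim` dimensions via a k-NN graph Laplacian."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        # connect each point to its k nearest neighbours (excluding itself),
        # with uniform weights (an assumption for this sketch)
        for j in np.argsort(dist[i])[1:k + 1]:
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W          # unnormalised graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    # skip the trivial constant eigenvector (eigenvalue ~0)
    return eigvecs[:, 1:dim + 1]
```

The embedding places graph-connected points close together, which is what gives the nonlinear submanifold described in the abstract.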

    A hybrid technique for face detection in color images

    In this paper, a hybrid technique for face detection in color images is presented. The proposed technique combines three analysis models, namely skin detection, automatic eye localization, and appearance-based face/non-face classification. Using a robust histogram-based skin detection model, skin-like pixels are first identified in the RGB color space. Based on this, face bounding-boxes are extracted from the image. On detecting a face bounding-box, approximate positions of the candidate mouth feature points are identified using the redness property of image pixels. A region-based eye localization step, based on the detected mouth feature points, is then applied to the face bounding-boxes to locate possible eye feature points in the image. Based on the distance between the detected eye feature points, face/non-face classification is performed over a normalized search area using the Bayesian discriminating feature (BDF) analysis method. Subjective evaluation results are presented on images taken with digital cameras and a webcam, representing both indoor and outdoor scenes.
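The histogram-based skin detection step can be sketched as a likelihood-ratio test over quantised RGB histograms of skin and non-skin training pixels (a common construction for histogram skin models; the bin count and threshold `theta` here are assumptions, not the paper's exact parameters):

```python
import numpy as np

BINS = 32
STEP = 256 // BINS

def rgb_histogram(pixels):
    """Normalised 3-D RGB histogram from an (n, 3) array of uint8 pixels."""
    idx = pixels // STEP                      # quantise each channel into BINS cells
    h = np.zeros((BINS, BINS, BINS))
    np.add.at(h, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return h / max(h.sum(), 1.0)

def is_skin(pixel, skin_hist, nonskin_hist, theta=1.0):
    """Likelihood-ratio test: classify as skin if P(rgb|skin) > theta * P(rgb|non-skin)."""
    i = tuple(int(c) // STEP for c in pixel)
    return skin_hist[i] > theta * nonskin_hist[i]
```

A full detector would apply `is_skin` per pixel and then extract connected skin regions as candidate face bounding-boxes.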

    Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network

    Drone systems have been deployed by various law enforcement agencies to monitor hostiles, spy on foreign drug cartels, conduct border control operations, etc. This paper introduces a real-time drone surveillance system to identify violent individuals in public areas. The system first uses a Feature Pyramid Network to detect humans in aerial images. The image region containing the human is then passed to the proposed ScatterNet Hybrid Deep Learning (SHDL) network for human pose estimation. The orientations between the limbs of the estimated pose are next used to identify violent individuals. The proposed deep network can learn meaningful representations quickly using ScatterNet and structural priors with relatively few labeled examples. The system detects violent individuals in real time by processing the drone images in the cloud. This research also introduces the aerial violent individual dataset used for training the deep network, which may encourage researchers interested in applying deep learning to aerial surveillance. The pose estimation and violent-individual identification performance is compared with state-of-the-art techniques.
    Comment: To appear in the Efficient Deep Learning for Computer Vision (ECV) workshop at IEEE Computer Vision and Pattern Recognition (CVPR) 2018. YouTube demo at: https://www.youtube.com/watch?v=zYypJPJipY

    CGAMES'2009


    Facial emotion recognition using min-max similarity classifier

    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent-systems applications. However, automatic recognition of facial expressions using image template matching suffers from the natural variability of facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms existing template matching methods.
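The pipeline described here, pixel normalization followed by Min-Max nearest-neighbor classification, can be sketched as follows. The sum-of-minima over sum-of-maxima (Ruzicka-style) ratio is one common Min-Max similarity, and min-max rescaling is one common normalization; the paper's exact choices may differ, so treat both as assumptions.

```python
import numpy as np

def normalize(img):
    """Pixel normalization: rescale intensities to [0, 1] to remove offsets
    (one common choice; the paper's exact normalization may differ)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def min_max_similarity(a, b):
    """Min-Max similarity: sum of elementwise minima over sum of elementwise
    maxima; large outlier features inflate the denominator and are suppressed."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def nearest_neighbor(probe, gallery, labels):
    """1-NN classification under the Min-Max similarity (higher = more similar)."""
    p = normalize(probe)
    sims = [min_max_similarity(p, normalize(g)) for g in gallery]
    return labels[int(np.argmax(sims))]
```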

    Fast Landmark Localization with 3D Component Reconstruction and CNN for Cross-Pose Recognition

    Two approaches are proposed for cross-pose face recognition: one based on the 3D reconstruction of facial components and the other based on a deep Convolutional Neural Network (CNN). Unlike most 3D approaches, which consider holistic faces, the proposed approach considers 3D facial components. It segments a 2D gallery face into components, reconstructs the 3D surface for each component, and recognizes a probe face by component features. The segmentation is based on landmarks located by a hierarchical algorithm that combines the Faster R-CNN for face detection with the Reduced Tree Structured Model for landmark localization. The core of the CNN-based approach is a revised VGG network. We study performance with different training-set settings: synthesized data from 3D reconstruction, real-life data from an in-the-wild database, and both types of data combined. We also investigate the network's performance when it is employed as a classifier or used as a feature extractor. The two recognition approaches and the fast landmark localization are evaluated in extensive experiments and compared to state-of-the-art methods to demonstrate their efficacy.
    Comment: 14 pages, 12 figures, 4 tables

    FMX (EEPIS FACIAL EXPRESSION MECHANISM EXPERIMENT): FACIAL EXPRESSION RECOGNITION USING A BACKPROPAGATION NEURAL NETWORK

    In the near future, robots are expected to interact with humans. Communication takes many forms: not only words, but also body language, including facial expressions, which humans constantly use to convey emotions such as happiness, sadness, anger, surprise, disappointment, or calm. This final project focuses on building a robot consisting only of a head that can produce a variety of human-like facial expressions. The Face Humanoid Robot is divided into several subsystems: image processing, hardware, and the controller. In the image-processing subsystem, a webcam acquires image data that is processed by a computer; the program is built with the Microsoft Visual C compiler together with the Open Source Computer Vision Library (OpenCV). This subsystem recognizes human facial expressions: image processing reveals the pattern of an object, and a Backpropagation Neural Network is used to recognize that pattern. The hardware subsystem is the humanoid robot face itself. The controller subsystem consists of a single ATmega128 microcontroller and a camera that can capture images at a distance of 50 to 120 cm. The robot operates as follows: images are captured by the webcam; the computer's image processing determines the human facial expression; the result is sent to the controller subsystem via serial communication; and the microcontroller then commands the hardware to form that facial expression. The result of this final project is that all subsystems can be integrated into a robot that responds to human expressions. The method used is simple but proves quite capable of recognizing human facial expressions. Keywords: OpenCV, Backpropagation Neural Network, Humanoid Robot
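The Backpropagation Neural Network at the heart of the recognition step can be illustrated with a minimal one-hidden-layer sketch. It is trained here on XOR as a stand-in for the expression-feature vectors the project uses; the layer sizes, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
    """Train a one-hidden-layer sigmoid network with plain backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, y.shape[1])); b2 = np.zeros(y.shape[1])
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # output-layer error gradient
        d_h = (d_out @ W2.T) * h * (1 - h)      # error backpropagated to hidden layer
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

In the project's setting, `X` would hold expression features extracted via OpenCV and `y` the one-hot expression labels sent on to the controller subsystem.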