11,656 research outputs found

    Facial Expression Recognition

    FMX (EEPIS FACIAL EXPRESSION MECHANISM EXPERIMENT): FACIAL EXPRESSION RECOGNITION USING A BACKPROPAGATION NEURAL NETWORK

    Get PDF
    In the near future, robots are expected to interact with humans. Communication takes many forms: not only spoken words, but also body language, including facial expressions. In human communication, facial expressions convey emotions such as happiness, sadness, anger, surprise, disappointment, or calm. This final project focuses on building a robot consisting only of a head that can produce a variety of human-like facial expressions. The Face Humanoid Robot is divided into several subsystems: an image processing subsystem, a hardware subsystem, and a controller subsystem. In the image processing subsystem, a webcam acquires images that are processed by a computer; the software is built with the Microsoft Visual C compiler using the Open Source Computer Vision Library (OpenCV). The image processing subsystem recognizes human facial expressions: image processing extracts the pattern of an object, and a backpropagation neural network classifies that pattern. The hardware subsystem is the humanoid robot face itself. The controller subsystem is a single ATmega128 microcontroller together with a camera that can capture images at a distance of 50 to 120 cm. The robot operates as follows: the webcam captures images; the computer processes them to determine the human facial expression; the result is sent to the controller subsystem over a serial link; and the microcontroller then commands the hardware subsystem to produce the corresponding facial expression. The result of this final project is that all of the subsystems can be integrated into a robot that responds to human expressions. The method is simple but proves quite capable of recognizing human facial expressions. Keywords: OpenCV, Backpropagation Neural Network, Humanoid Robot
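    As a rough illustration of the pipeline this abstract describes, the sketch below chains OpenCV face detection, a backpropagation-trained multilayer perceptron (OpenCV's cv::ml::ANN_MLP), and a serial hand-off to the microcontroller. The cascade file, model file, and send_expression() helper are assumptions for illustration, not artifacts of the original project.

```cpp
// Minimal sketch of the recognition-to-robot pipeline, assuming a
// pre-trained OpenCV ANN_MLP model and a standard Haar face cascade.
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <vector>

// Hypothetical stand-in for the serial link to the ATmega128 controller.
void send_expression(int label) { /* write one byte over UART */ }

int main() {
    cv::VideoCapture cam(0);  // webcam image acquisition
    cv::CascadeClassifier face("haarcascade_frontalface_default.xml");
    cv::Ptr<cv::ml::ANN_MLP> mlp =
        cv::ml::ANN_MLP::load("expression_mlp.yml");  // backprop-trained MLP

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces);
        if (faces.empty()) continue;

        // Crop, normalise, and flatten the face region into a feature row.
        cv::Mat roi;
        cv::resize(gray(faces[0]), roi, cv::Size(32, 32));
        roi.convertTo(roi, CV_32F, 1.0 / 255.0);
        cv::Mat input = roi.reshape(1, 1);  // 1 x 1024 row vector

        cv::Mat output;
        mlp->predict(input, output);        // one score per expression class
        cv::Point maxLoc;
        cv::minMaxLoc(output, nullptr, nullptr, nullptr, &maxLoc);
        send_expression(maxLoc.x);          // forward class index to the robot
    }
}
```

    A real deployment would also include the training step (OpenCV's ANN_MLP supports backpropagation via setTrainMethod(cv::ml::ANN_MLP::BACKPROP)) and a platform-specific UART implementation on the PC side.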

    Unsupervised learning of object landmarks by factorized spatial embeddings

    Full text link
    Automatically learning the structure of object categories remains an important open problem in computer vision. In this paper, we propose a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure. Our approach is based on factorizing image deformations, as induced by a viewpoint change or an object deformation, by learning a deep neural network that detects landmarks consistently with such visual effects. Furthermore, we show that the learned landmarks establish meaningful correspondences between different object instances in a category without this requirement being imposed explicitly. We assess the method qualitatively on a variety of object types, natural and man-made. We also show that our unsupervised landmarks are highly predictive of manually annotated landmarks in face benchmark datasets, and can be used to regress these with a high degree of accuracy. (Comment: to be published in ICCV 2017.)
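    The factorization this abstract refers to can be stated as an equivariance constraint. The following is a schematic rendering of that idea, in our own notation rather than the paper's exact objective: a detector Phi returning K landmark locations should commute with any known image deformation g.

```latex
% Schematic equivariance constraint (our notation, not the paper's exact loss):
% \Phi maps an image x to K landmark locations; g is a known deformation
% (a viewpoint change or warp) that must commute with detection.
\[
  \Phi_k\big(g(\mathbf{x})\big) \;\approx\; g\big(\Phi_k(\mathbf{x})\big),
  \qquad k = 1,\dots,K,
\]
\[
  \mathcal{L}(\mathbf{x}, g) \;=\; \sum_{k=1}^{K}
  \big\| \Phi_k\big(g(\mathbf{x})\big) - g\big(\Phi_k(\mathbf{x})\big) \big\|^{2}.
\]
```

    Minimising such a loss over many images and sampled deformations pushes the detector toward stable, repeatable points, which is consistent with the cross-instance correspondences the abstract reports.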

    From 3D Point Clouds to Pose-Normalised Depth Maps

    Get PDF
    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, and (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface, which is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
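    The rotational alignment in stage (iii) reduces to 1D correlation of periodic curvature signals. The sketch below shows one plausible brute-force form of that step; the signal contents, sampling, and function name are illustrative assumptions, not the authors' implementation.

```cpp
// Hedged sketch: recover a rotation offset by circularly cross-correlating
// two equal-length, periodically sampled isoradius contour curvature signals.
#include <vector>
#include <cstddef>
#include <limits>

// Returns the cyclic shift (in samples) that best aligns probe to model.
std::size_t best_circular_shift(const std::vector<double>& model,
                                const std::vector<double>& probe) {
    const std::size_t n = model.size();  // assumes probe.size() == n
    std::size_t bestShift = 0;
    double bestScore = std::numeric_limits<double>::lowest();
    for (std::size_t s = 0; s < n; ++s) {      // try every cyclic offset
        double score = 0.0;
        for (std::size_t i = 0; i < n; ++i)    // dot product at this offset
            score += model[i] * probe[(i + s) % n];
        if (score > bestScore) { bestScore = score; bestShift = s; }
    }
    return bestShift;  // a shift of s samples maps to an angle of 2*pi*s/n
}
```

    A shift of s samples on a contour sampled at n points corresponds to a rotation of 2*pi*s/n about the contour's axis; an FFT-based correlation would reduce the O(n^2) scan above to O(n log n).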