
    FACE DETECTION SYSTEM

    A face detection system and method are disclosed that identify the existence and position of human faces in input images. The system uses a full-face detector and a part-face detector in a convolutional neural network. Each input image is processed by a classifier and an algorithm that detect full-face and part-face regions while differentiating non-face areas using the convolutional network. The results are then combined to identify face regions. This combination of techniques makes the neural network user-friendly and leads to quick processing of images for face detection.
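The combination step described above can be sketched as a simple box-merging rule: accept all full-face detections, and add a part-face detection only when it does not overlap a region already accepted. The function names and the IoU threshold below are illustrative assumptions, not the disclosed system's actual procedure.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def combine_detections(full_boxes, part_boxes, thresh=0.3):
    """Merge full-face and part-face results into one set of face regions."""
    merged = list(full_boxes)
    for p in part_boxes:
        # a part-face hit inside an already-accepted full face is redundant
        if all(iou(p, f) < thresh for f in merged):
            merged.append(p)
    return merged
```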

    A 3D Face Modelling Approach for Pose-Invariant Face Recognition in a Human-Robot Environment

    Face analysis techniques have become a crucial component of human-machine interaction in the fields of assistive and humanoid robotics. However, the variations in head pose that arise naturally in these environments are still a great challenge. In this paper, we present a real-time capable 3D face modelling framework for 2D in-the-wild images that is applicable to robotics. The fitting of the 3D Morphable Model is based exclusively on automatically detected landmarks. After fitting, the face can be corrected in pose and transformed back to a frontal 2D representation that is more suitable for face recognition. We conduct face recognition experiments with non-frontal images from the MUCT database and uncontrolled, in-the-wild images from the PaSC database, the most challenging face recognition database to date, showing an improved performance. Finally, we present our SCITOS G5 robot system, which incorporates our framework as a means of image pre-processing for face analysis.
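The pose-correction step can be illustrated in a few lines: once model fitting has produced a rotation estimate R, the fitted vertices are de-rotated and re-projected to give a frontal 2D representation. The orthographic projection and single synthetic vertex below are hypothetical stand-ins; the paper's actual pipeline fits a full 3D Morphable Model with a proper camera model.

```python
import numpy as np

def rotation_y(yaw):
    """Rotation matrix for a head turn (yaw) about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def frontalize(vertices, R):
    """Undo the estimated head pose (R^-1 = R^T for a rotation),
    then project orthographically to get a frontal 2D view."""
    frontal = vertices @ R   # for row vectors, v @ R applies R^T (the inverse)
    return frontal[:, :2]    # drop depth -> frontal 2D representation

R = rotation_y(np.deg2rad(30.0))              # pretend fitting estimated 30 deg yaw
observed = np.array([[0.0, 0.0, 1.0]]) @ R.T  # a model vertex seen under that pose
frontal2d = frontalize(observed, R)
```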

    Facial Emotion Recognition Using Machine Learning

    Face detection has been around for ages. Taking a step forward, the human emotion displayed by the face and felt by the brain, captured in video, electrical signal (EEG), or image form, can be approximated. Human emotion detection is the need of the hour so that modern artificial intelligence systems can emulate and gauge reactions from the face. This can help in making informed decisions, whether regarding identification of intent, promotion of offers, or security-related threats. Recognizing emotions from images or video is a trivial task for the human eye, but proves very challenging for machines and requires many image processing techniques for feature extraction. Several machine learning algorithms are suitable for this job. Any detection or recognition by machine learning requires training an algorithm and then testing it on a suitable dataset. This paper explores a couple of machine learning algorithms as well as feature extraction techniques that help in accurate identification of human emotion.
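The generic pipeline the abstract describes, extracting features from face images, training an algorithm, then testing it, can be sketched with toy components. The coarse intensity histogram and nearest-centroid rule below are hypothetical stand-ins for the paper's actual feature extraction techniques and machine learning algorithms.

```python
import numpy as np

def extract_features(image, bins=8):
    """Toy feature vector: a coarse, normalised grayscale histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

class NearestCentroid:
    """Minimal classifier: predict the label whose mean feature is closest."""
    def fit(self, X, y):
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in sorted(set(y))}
        return self

    def predict(self, X):
        return [min(self.centroids_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```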

    Image Forensics for Forgery Detection using Contrast Enhancement and 3D Lighting

    Nowadays the digital image plays an important role in human life. With the large growth of image processing techniques and the availability of image modification tools, any modification can be made to an image, and such modifications cannot be recognized by human eyes. Identification of image integrity is therefore very important in today's life. The contrast and brightness of digital images can be adjusted by contrast enhancement. Copy-and-paste forgeries are created by a malicious person, in which the contrast of one source image is enhanced to match the other source image. In this work, a contrast enhancement technique aimed at detecting image tampering is used; such detection has grown in application areas such as law enforcement and surveillance. Along with contrast enhancement, we propose an improved 3D lighting environment estimation method based on a more general surface reflection model. The 3D lighting environment is an important clue in an image that can be used for image forgery detection. We intend to employ fully automatic face morphing and alignment algorithms. We also intend to use a face detection method to detect the existence of faces and 3D lighting environment estimation to check the originality of human faces in the image.
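One classic fingerprint of contrast enhancement, in the spirit of the detection described above, is that globally remapping pixel values leaves peaks and empty bins ("gaps") in the intensity histogram. Counting empty interior bins, as sketched below, is a crude illustrative detector; the function name and score are assumptions, and real forensic methods use more robust (e.g. frequency-domain) statistics.

```python
import numpy as np

def histogram_gap_score(image):
    """Fraction of empty bins between the darkest and brightest used values."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    used = np.nonzero(hist)[0]
    interior = hist[used[0]:used[-1] + 1]
    return np.mean(interior == 0)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64))
# a linear contrast boost: many output levels become unreachable (gaps)
stretched = np.clip(original.astype(float) * 1.8, 0, 255).astype(int)
```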

    IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK IN NANO SCALE ENVIRONMENT

    Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. Facial recognition has gained increasing interest in the recent decade. Over the years, several techniques have been developed to achieve a high accuracy rate in the identification and verification of individuals for authentication in security systems. This project explores the concept of a neural network for facial recognition that can differentiate and recognize face images. The face recognition system begins with image pre-processing, and the output image is then trained using the Fuzzy c-means clustering (FCM) algorithm. The FCM network learns by training on the inputs, calculating the error between the actual output and the target output, and propagating the error back through the network to modify the weights until the desired output is obtained. After training the network, the recognition system is tested to ensure that it can recognize the pattern of each face image. The purpose of this project is to recognize face images for recognition analysis using a neural network and to capture the brainwaves of emotion recognition. This project is mainly concerned with facial recognition systems using purely image processing techniques.
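The Fuzzy c-means algorithm named above alternates between soft membership assignment and weighted centroid recomputation. This is a compact sketch of standard FCM only, not the project's specific network or training setup:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Standard Fuzzy c-means: returns cluster centers and the membership
    matrix U (rows sum to 1; U[i, j] = degree sample i belongs to cluster j)."""
    rng = np.random.default_rng(seed)
    # initialise centers from c distinct data points
    centers = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1))           # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)   # normalise each row to sum to 1
        W = U ** m                          # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U
```

As the fuzzifier m approaches 1 the memberships harden toward k-means assignments; larger m yields a fuzzier partition.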

    Automatic Face and Hijab Segmentation Using Convolutional Network

    Taking pictures and selfies is now very common and frequent among people. People are also interested in enhancing pictures using different image processing techniques and sharing them on social media. Accurate image segmentation plays an important role in portrait editing, face beautification, human identification, hairstyle identification, airport surveillance systems, and many other computer vision problems. One specific functionality of interest is automatic face and veil segmentation, as this allows processing each separately. Manual segmentation can be difficult and annoying, especially on a smartphone's small screen. In this paper, the proposed model uses a fully convolutional network (FCN) to perform semantic segmentation into skin, veil, and background. The proposed model achieved an outperforming result on a dataset of 250 images, with a global accuracy of 92% and a mean accuracy of 92.69%.
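The abstract quotes both a global accuracy and a mean accuracy; the two differ in how classes are weighted, which matters when background pixels dominate. A minimal sketch of each metric, assuming integer label maps for the skin/veil/background classes:

```python
import numpy as np

def global_accuracy(pred, target):
    """Fraction of all pixels labelled correctly (weighted by class frequency)."""
    return np.mean(pred == target)

def mean_accuracy(pred, target):
    """Per-class pixel accuracy, averaged equally over the classes present."""
    classes = np.unique(target)
    return np.mean([np.mean(pred[target == c] == c) for c in classes])
```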

    Face pose estimation in monocular images

    People use the orientation of their faces to convey rich, inter-personal information. For example, a person will direct his face to indicate who the intended target of the conversation is. Similarly, in a conversation, face orientation is a non-verbal cue to the listener about when to switch roles and start speaking, and a nod indicates that a person has understood, or agrees with, what is being said. Furthermore, face pose estimation plays an important role in human-computer interaction, virtual reality applications, human behaviour analysis, pose-independent face recognition, driver's vigilance assessment, gaze estimation, etc. Robust face recognition has been a focus of research in the computer vision community for more than two decades. Although substantial research has been done and numerous methods have been proposed for face recognition, challenges remain in this field. One of these is face recognition under varying poses, and that is why face pose estimation is still an important research area. In computer vision, face pose estimation is the process of inferring the face orientation from digital imagery. It requires a series of image processing steps to transform a pixel-based representation of a human face into a high-level concept of direction. An ideal face pose estimator should be invariant to a variety of image-changing factors such as camera distortion, lighting conditions, skin colour, projective geometry, facial hair, facial expressions, presence of accessories like glasses and hats, etc. Face pose estimation has been a focus of research for about two decades and numerous research contributions have been presented in this field.
Face pose estimation techniques in the literature still have some shortcomings and limitations in terms of accuracy, applicability to monocular images, autonomy, identity and lighting variations, image resolution variations, range of face motion, computational expense, presence of facial hair, presence of accessories like glasses and hats, etc. These shortcomings of existing face pose estimation techniques motivated the research work presented in this thesis. The main focus of this research is to design and develop novel face pose estimation algorithms that improve automatic face pose estimation in terms of processing time, computational expense, and invariance to different conditions.
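As a toy illustration of inferring direction from a pixel-based representation in a single monocular image: under a weak-perspective assumption, turning the head makes the nose drift toward one eye, so the ratio of nose-to-eye horizontal distances encodes yaw. The three-landmark rule below is purely geometric and hypothetical; the thesis's methods are far more general.

```python
def yaw_from_landmarks(left_eye, right_eye, nose):
    """Signed yaw proxy in [-1, 1] from three (x, y) landmarks:
    0 = frontal, positive = nose shifted toward the right eye."""
    dl = nose[0] - left_eye[0]   # nose-to-left-eye horizontal gap
    dr = right_eye[0] - nose[0]  # nose-to-right-eye horizontal gap
    return (dl - dr) / (dl + dr)
```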

    EgoFace: Egocentric Face Performance Capture and Videorealistic Reenactment

    Face performance capture and reenactment techniques use multiple cameras and sensors, positioned at a distance from the face or mounted on heavy wearable devices. This limits their applications in mobile and outdoor environments. We present EgoFace, a radically new lightweight setup for face performance capture and front-view videorealistic reenactment using a single egocentric RGB camera. Our lightweight setup allows operation in uncontrolled environments and lends itself to telepresence applications such as video-conferencing from dynamic environments. The input image is projected into a low-dimensional latent space of facial expression parameters. Through careful adversarial training of the parameter-space synthetic rendering, a videorealistic animation is produced. Our problem is challenging, as the human visual system is sensitive to the smallest face irregularities that could occur in the final results; this sensitivity is even stronger for video results. Our solution is trained in a pre-processing stage in a supervised manner without manual annotations. EgoFace captures a wide variety of facial expressions, including mouth movements and asymmetrical expressions. It works under varying illumination, backgrounds, and movements, handles people of different ethnicities, and can operate in real time.