A hybrid technique for face detection in color images
In this paper, a hybrid technique for face detection in color images is presented. The proposed technique combines three analysis models, namely skin detection, automatic eye localization, and appearance-based face/non-face classification. Using a robust histogram-based skin detection model, skin-like pixels are first identified in the RGB color space. Based on this, face bounding boxes are extracted from the image. On detecting a face bounding box, approximate positions of the candidate mouth feature points are identified using the redness property of image pixels. A region-based eye localization step, based on the detected mouth feature points, is then applied to the face bounding boxes to locate possible eye feature points in the image. Based on the distance between the detected eye feature points, face/non-face classification is performed over a normalized search area using the Bayesian discriminating feature (BDF) analysis method. Subjective evaluation results are presented for images taken with digital cameras and a webcam, covering both indoor and outdoor scenes.
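The histogram-based skin detection step described above can be sketched as a likelihood-ratio test over quantized RGB bins. This is an illustrative sketch, not the paper's implementation: the bin count, the training data, and the decision threshold `theta` are all assumptions.

```python
import numpy as np

BINS = 32  # assumption: quantize each RGB channel into 32 bins

def build_histograms(pixels, labels, bins=BINS):
    """Build normalized skin / non-skin color histograms.

    pixels: (N, 3) uint8 RGB values; labels: (N,) 1 = skin, 0 = non-skin.
    """
    idx = (pixels // (256 // bins)).astype(int)            # per-channel bin index
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    skin = np.bincount(flat[labels == 1], minlength=bins ** 3).astype(float)
    nonskin = np.bincount(flat[labels == 0], minlength=bins ** 3).astype(float)
    return skin / max(skin.sum(), 1), nonskin / max(nonskin.sum(), 1)

def classify(pixels, skin_h, nonskin_h, theta=1.0, bins=BINS):
    """Label a pixel skin when P(rgb | skin) / P(rgb | non-skin) > theta."""
    idx = (pixels // (256 // bins)).astype(int)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    ratio = skin_h[flat] / (nonskin_h[flat] + 1e-9)        # likelihood ratio
    return ratio > theta
```

Raising `theta` trades recall for precision, which is the usual knob when a skin mask feeds a downstream detector.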
Real time hand gesture recognition including hand segmentation and tracking
In this paper we present a system that performs automatic gesture recognition. The system consists of two main components: (i) a unified technique for segmentation and tracking of the face and hands, using a skin detection algorithm together with occlusion handling between skin objects to keep track of the status of the occluded parts; this is realized by combining three features, namely color, motion, and position; and (ii) a static and dynamic gesture recognition system. Static gesture recognition is achieved using a robust hand shape classification, based on PCA subspaces, that is invariant to scale as well as to small translation and rotation transformations. Combining hand shape classification with position information and using DHMMs allows us to accomplish dynamic gesture recognition.
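The PCA-subspace shape classification mentioned for static gestures can be sketched as: project each hand image onto a few principal axes and assign the nearest class mean in that subspace. The class names, image size, component count, and nearest-mean decision rule below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

class PCAShapeClassifier:
    """Toy PCA-subspace classifier for hand-shape images."""

    def __init__(self, n_components=10):
        self.n_components = n_components

    def fit(self, images, labels):
        X = images.reshape(len(images), -1).astype(float)
        self.mean_ = X.mean(axis=0)
        # principal axes via SVD of the centered data matrix
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        Z = (X - self.mean_) @ self.components_.T          # project to subspace
        self.classes_ = sorted(set(labels))
        self.means_ = {c: Z[np.array(labels) == c].mean(axis=0)
                       for c in self.classes_}

    def predict(self, images):
        X = images.reshape(len(images), -1).astype(float)
        Z = (X - self.mean_) @ self.components_.T
        # nearest class mean in the PCA subspace
        return [min(self.classes_,
                    key=lambda c: np.linalg.norm(z - self.means_[c]))
                for z in Z]
```

Working in the subspace rather than raw pixel space is what gives the method some tolerance to small translations and noise.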
Automatic skin segmentation for gesture recognition combining region and support vector machine active learning
Skin segmentation is the cornerstone of many applications such as gesture recognition, face detection, and objectionable image filtering. In this paper, we address the skin segmentation problem for gesture recognition. Initially, given a gesture video sequence, a generic skin model is applied to the first few frames to automatically collect training data. Then, an SVM classifier based on active learning is used to identify the skin pixels. Finally, the results are improved by incorporating region segmentation. The proposed algorithm is fully automatic and adaptive to different signers. We have tested our approach on the ECHO database; compared with other existing algorithms, our method achieves better performance.
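The SVM-with-active-learning idea can be sketched as a pool-based loop: train on a small seed set (standing in for labels harvested by the generic skin model), then repeatedly query the pixels the current SVM is least certain about. The raw-RGB features, query size, and round count below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

def active_learn_svm(X_pool, y_pool, seed=20, query=10, rounds=3):
    """Pool-based uncertainty sampling with an RBF SVM (illustrative sketch)."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=seed, replace=False))
    if len(set(y_pool[labeled])) < 2:                      # guarantee both classes in the seed
        labeled.append(int(np.argmax(y_pool != y_pool[labeled[0]])))
    for _ in range(rounds):
        clf = SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y_pool[labeled])
        margin = np.abs(clf.decision_function(X_pool))     # distance to decision boundary
        margin[labeled] = np.inf                           # do not re-query labeled pixels
        labeled += list(np.argsort(margin)[:query])        # most uncertain first
    return SVC(kernel="rbf", gamma="scale").fit(X_pool[labeled], y_pool[labeled])
```

Querying near the boundary concentrates labeling effort where the skin/non-skin decision is genuinely ambiguous, which is why the approach adapts cheaply to a new signer.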
Region-based Skin Color Detection.
Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, the mainstream technology operates on individual pixels. This paper presents a new region-based technique for skin color detection which outperforms the current state-of-the-art pixel-based skin color detection method on the popular Compaq dataset (Jones and Rehg, 2002). A clustering technique based on color and spatial distance is used to extract regions, also known as superpixels, from the images. In the first step, our technique uses the state-of-the-art non-parametric pixel-based skin color classifier (Jones and Rehg, 2002), which we call the basic skin color classifier. The pixel-based skin color evidence is then aggregated to classify the superpixels. Finally, a Conditional Random Field (CRF) is applied to further improve the results. As the CRF operates over superpixels, the computational overhead is minimal. Our technique achieves a 91.17% true positive rate with a 13.12% false positive rate on the Compaq dataset, tested over approximately 14,000 web images.
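The aggregation step, moving from per-pixel evidence to region decisions, can be sketched as averaging the basic classifier's skin probabilities within each superpixel and labeling the whole region at once. The 0.5 threshold and the toy label map are assumptions; the paper additionally refines the result with a CRF, which is omitted here.

```python
import numpy as np

def classify_superpixels(prob_map, sp_labels, threshold=0.5):
    """Aggregate pixel-level skin probabilities into per-superpixel labels.

    prob_map, sp_labels: HxW arrays (probabilities in [0, 1], integer region ids).
    Returns {superpixel_id: is_skin}.
    """
    out = {}
    for sp in np.unique(sp_labels):
        # mean skin probability over all pixels belonging to this superpixel
        out[int(sp)] = float(prob_map[sp_labels == sp].mean()) > threshold
    return out
```

Because every pixel in a region receives the same label, this step alone already smooths away the isolated false positives that purely pixel-based classifiers produce.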
Fair comparison of skin detection approaches on publicly available datasets
Skin detection is the process of discriminating skin and non-skin regions in a digital image, and it is widely used in several applications ranging from hand gesture analysis and body-part tracking to face detection. Skin detection is a challenging problem which has drawn extensive attention from the research community; nevertheless, a fair comparison among approaches is very difficult due to the lack of a common benchmark and a unified testing protocol. In this work, we survey the most recent research in this field and propose a fair comparison among approaches using several different datasets. The major contributions of this work are: an exhaustive literature review of skin color detection approaches; a framework to evaluate and combine different skin detection approaches, whose source code is made freely available for future research; and an extensive experimental comparison among several recent methods, which have also been used to define an ensemble that works well in many different problems. Experiments are carried out on 10 different datasets including more than 10,000 labelled images: the experimental results confirm that the best method proposed here obtains very good performance with respect to other stand-alone approaches, without requiring ad hoc parameter tuning. A MATLAB version of the framework for testing, and of the methods proposed in this paper, will be freely available from https://github.com/LorisNann
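A minimal sketch of the ensemble idea, combining several skin detectors into one decision, is to average the per-pixel scores of the individual detectors and threshold the mean. The detector callables and the 0.5 threshold are placeholders; the paper's actual combination rules may differ.

```python
import numpy as np

def ensemble_skin_mask(image, detectors, threshold=0.5):
    """Combine skin detectors by score averaging (illustrative sketch).

    detectors: callables mapping an HxWx3 image to an HxW score map in [0, 1].
    Returns a boolean HxW skin mask.
    """
    scores = np.mean([d(image) for d in detectors], axis=0)  # fuse evidence
    return scores > threshold
```

Averaging makes the ensemble robust to any single detector's failure mode, which is consistent with the observation above that a combined method works well across many datasets without per-dataset tuning.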
A simple and efficient face detection algorithm for video database applications
The objective of this work is to provide a simple and yet efficient tool to detect human faces in video sequences. This information can be very useful for many applications such as video indexing and video browsing. In particular, the paper focuses on the significant improvements made to our face detection algorithm presented by Albiol, Bouman and Delp (see IEEE Int. Conference on Image Processing, Kobe, Japan, 1999). Specifically, a novel approach to retrieve skin-like homogeneous regions is presented, which is later used to retrieve face images. Good results have been obtained for a large variety of video sequences.
Classification of Humans into Ayurvedic Prakruti Types using Computer Vision
Ayurveda, a 5000-year-old Indian medical science, holds that the universe, and hence humans, are made up of five elements, namely ether, fire, water, earth, and air. The three Doshas (Tridosha) Vata, Pitta, and Kapha originate from combinations of these elements. Every person has a unique combination of Tridosha elements contributing to that person's ‘Prakruti’. Prakruti governs the physiological and psychological tendencies in all living beings as well as the way they interact with the environment. This balance influences physiological features like the texture and colour of skin, hair, and eyes, the length of the fingers, the shape of the palm, body frame, and strength of digestion, among many others, as well as psychological features like one's nature (introverted, extroverted, calm, excitable, intense, laid-back) and one's reaction to stress and disease. All these features are coded in a person's constituents at the time of their creation and do not change throughout their lifetime. Ayurvedic doctors analyze the Prakruti of a person either by assessing physical features manually and/or by examining the nature of the heartbeat (pulse). Based on this analysis, they diagnose, prevent, and cure diseases in patients by prescribing precision medicine.
This project focuses on identifying the Prakruti of a person by analysing their facial features, such as hair, eyes, nose, lips, and skin colour, using facial recognition techniques from computer vision. This is the first research of its kind in this problem area that attempts to bring image processing into the domain of Ayurveda.