136,142 research outputs found

    Face Detection with Effective Feature Extraction

    Full text link
    There is an abundant literature on face detection due to its important role in many vision applications. Since Viola and Jones proposed the first real-time AdaBoost-based face detector, Haar-like features have been adopted as the method of choice for frontal face detection. In this work, we show that simple features other than Haar-like features can also be applied for training an effective face detector. Since a single feature is not discriminative enough to separate faces from difficult non-faces, we further improve the generalization performance of our simple features by introducing feature co-occurrences. We demonstrate that our proposed features yield a performance improvement compared to Haar-like features. In addition, our findings indicate that features play a crucial role in the ability of the system to generalize. Comment: 7 pages. Conference version published in Asian Conf. Comp. Vision 201
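    A minimal sketch of the general idea of pairing simple binary features into co-occurrence features before boosting. The pixel-comparison features, the pairing scheme, and the AdaBoost training below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier  # default base learner is a decision stump

def simple_binary_features(patches, pairs):
    """Each feature compares the intensities of two pixel locations inside a patch."""
    return np.stack(
        [(patches[:, r1, c1] > patches[:, r2, c2]).astype(np.uint8)
         for (r1, c1), (r2, c2) in pairs], axis=1)

def cooccurrence_features(binary_feats, index_pairs):
    """A co-occurrence feature fires only when both constituent features fire."""
    return np.stack([binary_feats[:, i] & binary_feats[:, j]
                     for i, j in index_pairs], axis=1)

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(200, 24, 24))   # toy 24x24 grayscale patches
labels = rng.integers(0, 2, size=200)                 # toy face / non-face labels
pairs = [((rng.integers(24), rng.integers(24)),
          (rng.integers(24), rng.integers(24))) for _ in range(64)]

base = simple_binary_features(patches, pairs)
cooc = cooccurrence_features(base, [(i, j) for i in range(8) for j in range(8, 16)])

X = np.hstack([base, cooc])                           # single features + co-occurrences
clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)
```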

    Facial analysis in video : detection and recognition

    Get PDF
    Biometric authentication systems automatically identify or verify individuals using physiological (e.g., face, fingerprint, hand geometry, retina scan) or behavioral (e.g., speaking pattern, signature, keystroke dynamics) characteristics. Among these biometrics, facial patterns have the major advantage of being the least intrusive. Automatic face recognition systems thus have great potential in a wide spectrum of application areas. Focusing on facial analysis, this dissertation presents a face detection method and numerous feature extraction methods for face recognition. Concerning face detection, a video-based frontal face detection method has been developed using motion analysis and color information to derive fields of interest, and distribution-based distance (DBD) and support vector machine (SVM) for classification. When applied to 92 still images (containing 282 faces), this method achieves a 98.2% face detection rate with two false detections, a performance comparable to state-of-the-art face detection methods; when applied to video streams, this method detects faces reliably and efficiently. Regarding face recognition, extensive assessments of face recognition performance in twelve color spaces have been performed, and a color feature extraction method defined by color component images across different color spaces is shown to help improve the baseline performance of the Face Recognition Grand Challenge (FRGC) problems. The experimental results show that some color configurations, such as YV in the YUV color space and YJ in the YIQ color space, help improve face recognition performance. Based on these improved results, a novel feature extraction method implementing genetic algorithms (GAs) and the Fisher linear discriminant (FLD) is designed to derive the optimal discriminating features that lead to an effective image representation for face recognition. This method noticeably improves the FRGC ver1.0 Experiment 4 baseline recognition rate from 37% to 73%, and significantly elevates the FRGC xxxx Experiment 4 baseline verification rate from 12% to 69%. Finally, four two-dimensional (2D) convolution filters are derived for feature extraction, and a 2D+3D face recognition system implementing both 2D and 3D imaging modalities is designed to address the FRGC problems. This method improves the FRGC ver2.0 Experiment 3 baseline performance from 54% to 72%.
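    A minimal sketch of one plausible reading of the color-component feature idea: convert an aligned face crop into another color space and concatenate selected component images into a feature vector. The conversion matrices are the standard RGB-to-YUV and RGB-to-YIQ transforms; the component selection and downstream use are assumptions, not the dissertation's exact pipeline.

```python
import numpy as np

RGB_TO_YUV = np.array([[ 0.299,  0.587,  0.114],
                       [-0.147, -0.289,  0.436],
                       [ 0.615, -0.515, -0.100]])
RGB_TO_YIQ = np.array([[ 0.299,  0.587,  0.114],
                       [ 0.596, -0.274, -0.322],
                       [ 0.211, -0.523,  0.312]])

def to_components(rgb_image, matrix):
    """Apply a 3x3 linear color-space conversion to an HxWx3 RGB image."""
    return np.einsum('ij,hwj->hwi', matrix, rgb_image.astype(np.float64))

def color_feature_vector(rgb_image):
    """Concatenate the Y and V component images, mirroring a 'YV' configuration."""
    yuv = to_components(rgb_image, RGB_TO_YUV)
    return np.concatenate([yuv[..., 0].ravel(), yuv[..., 2].ravel()])

face = np.random.randint(0, 256, size=(64, 64, 3))   # toy aligned face crop
features = color_feature_vector(face)                 # would feed GA/FLD selection downstream
```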

    Motion Detection and Face Recognition for CCTV Surveillance System

    Get PDF
    Closed Circuit Television (CCTV) is currently used in daily life for a variety of purposes. The use of CCTV has evolved from simple passive surveillance into an integrated intelligent control system. In this research, motion detection and facial recognition in CCTV video are performed as a basis for decision making, in order to produce an automated, effective and efficient integrated system. This CCTV video processing provides three outputs: motion detection information, face detection information and face identification information. Accumulative Difference Images (ADI) are used for motion detection, and Haar cascade classifiers are used for facial segmentation. Feature extraction is done with Speeded-Up Robust Features (SURF) and Principal Component Analysis (PCA). The features were trained with a Counter-Propagation Network (CPN). Offline tests were performed on 45 CCTV videos. The tests yielded a motion detection success rate of 92.655%, a face detection success rate of 76%, and a face identification success rate of 60%. The results indicate that face identification from CCTV video with natural backgrounds has not yet achieved optimal results. The motion detection process is suitable for real-time conditions, but in combination with the face recognition process there is a significant delay.
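    A minimal sketch of the two front-end steps described above: an accumulative difference image (ADI) for motion detection and an OpenCV Haar cascade for face detection. The input file name, thresholds, and cascade choice are illustrative assumptions, not the paper's configuration.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("cctv_clip.mp4")                     # hypothetical input clip
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
adi = np.zeros_like(prev_gray, dtype=np.float32)            # accumulative difference image

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    adi += (diff > 25).astype(np.float32)                   # accumulate changed pixels
    prev_gray = gray

    if adi.max() > 30:                                       # motion has persisted long enough
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            crop = gray[y:y + h, x:x + w]                    # would be passed to SURF/PCA + CPN
```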

    ROBUST DETECTION AND RECOGNITION SYSTEM BASED ON FACIAL EXTRACTION AND DECISION TREE

    Get PDF
    An automatic face recognition system is suggested in this work on the basis of appearance-based features focusing on the whole image, as well as local features focusing on critical face points like the eyes, mouth, and nose for generating further details. Face detection is the major phase in face recognition systems; the Viola-Jones face detection method can process images efficiently and achieve high detection rates in real-time systems. Dimension reduction and feature extraction approaches are then applied to the cropped image produced by detection. One simple yet effective way of extracting image features is the Local Binary Pattern Histogram (LBPH), while the Principal Component Analysis (PCA) technique has been widely utilized in pattern recognition. The Linear Discriminant Analysis (LDA) technique, used to overcome PCA's limitations, has also been applied effectively in face recognition. Classification then follows feature extraction; the machine learning algorithms utilized are PART and J48. The suggested system shows high detection accuracy with Viola-Jones (98.75), whereas the features extracted by means of LDA combined with J48 provided the best results in terms of F-measure, recall, and precision.
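    A minimal sketch of the feature-extraction and classification chain described above: PCA for dimension reduction, LDA on top of it, and a decision tree standing in for J48 (scikit-learn has no J48 implementation, so the tree here is an assumption; the toy data is likewise illustrative).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64 * 64))      # toy vectors of cropped, detected face images
y = rng.integers(0, 10, size=300)        # toy subject labels

model = make_pipeline(
    PCA(n_components=50),                # reduce dimensionality first
    LinearDiscriminantAnalysis(),        # LDA to add class-discriminative projection
    DecisionTreeClassifier(),            # C4.5-style tree in place of J48
)
model.fit(X, y)
print(model.score(X, y))
```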

    Emotion Recognition Using Real Time Face Recognition

    Get PDF
    Facial expressions are the fastest means of communication for conveying any type of information. They not only expose the sensitivity or feelings of a person but can also be used to judge his or her mental state. Face detection in images, along with face localization, is the foremost step towards facial recognition and expression recognition. There is a high degree of variability in the face images that can be obtained, due to varying conditions of lighting, exposure, color and expression. Using machine learning tools and algorithms such as OpenCV 3.4.0 and the Haar cascade classifier, this paper details our approach towards creating a semi-automated program, with a slight degree of human involvement, which can be used to simultaneously detect multiple users and provide an effective solution to facial recognition using a minimal amount of resources. Keywords: Face detection, Machine Learning, Feature Extraction, Image Processing, Neural Networks, OpenCV
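    A minimal sketch of detecting multiple faces simultaneously with OpenCV's Haar cascade classifier, close to the tooling the abstract names; the camera index, cascade file and drawing loop are illustrative choices, not the authors' program.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)                          # default webcam

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                     # one box per detected user
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cam.release()
cv2.destroyAllWindows()
```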

    Dynamic attention-controlled cascaded shape regression exploiting training data augmentation and fuzzy-set sample weighting

    Get PDF
    We present a new Cascaded Shape Regression (CSR) architecture, namely Dynamic Attention-Controlled CSR (DAC-CSR), for robust facial landmark detection on unconstrained faces. Our DAC-CSR divides facial landmark detection into three cascaded sub-tasks: face bounding box refinement, general CSR and attention-controlled CSR. The first two stages refine initial face bounding boxes and output intermediate facial landmarks. Then, an online dynamic model selection method is used to choose appropriate domain-specific CSRs for further landmark refinement. The key innovation of our DAC-CSR is the fault-tolerant mechanism, using fuzzy set sample weighting, for attention-controlled domain-specific model training. Moreover, we advocate data augmentation with a simple but effective 2D profile face generator, and context-aware feature extraction for better facial feature representation. Experimental results obtained on challenging datasets demonstrate the merits of our DAC-CSR over the state-of-the-art methods.
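    A minimal sketch of the generic cascaded shape regression update that DAC-CSR builds on: each stage regresses a shape increment from features indexed by the current landmark estimate. The intensity-sampling features and linear per-stage regressors below are simplifications, not the paper's DAC-CSR stages or attention mechanism.

```python
import numpy as np

def shape_indexed_features(image, shape):
    """Sample image intensities at the current landmark estimates."""
    coords = np.clip(shape.astype(int), 0, np.array(image.shape)[::-1] - 1)
    return image[coords[:, 1], coords[:, 0]].astype(np.float64)

def run_cascade(image, init_shape, regressors):
    """Apply the cascade: S_{t+1} = S_t + R_t(phi(I, S_t))."""
    shape = init_shape.copy()
    for W, b in regressors:                         # one linear stage (W, b) per level
        phi = shape_indexed_features(image, shape)
        shape = shape + (W @ phi + b).reshape(-1, 2)
    return shape

rng = np.random.default_rng(0)
image = rng.random((128, 128))                                   # toy grayscale image
init = rng.uniform(20, 100, size=(68, 2))                        # 68 initial landmarks
stages = [(rng.normal(scale=0.01, size=(136, 68)), np.zeros(136)) for _ in range(3)]
landmarks = run_cascade(image, init, stages)
```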

    Facial emotion recognition using min-max similarity classifier

    Full text link
    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic recognition of facial expressions using image template matching techniques suffers from the natural variability in facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition is still an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm to reduce the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement in recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms the existing template matching methods.
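    A minimal sketch of a nearest-neighbour classifier using a Min-Max similarity, one plausible reading of the metric described above (sum of element-wise minima over sum of element-wise maxima). The normalization step, toy templates and labels are assumptions for illustration, not the paper's exact pixel normalization or protocol.

```python
import numpy as np

def normalize(img):
    """Crude intensity normalization so templates are comparable (assumed step)."""
    img = img.astype(np.float64).ravel()
    return img / (img.mean() + 1e-8)

def min_max_similarity(a, b):
    """Similarity = sum of element-wise minima / sum of element-wise maxima."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def classify(query, templates, labels):
    """Assign the label of the template with the highest Min-Max similarity."""
    q = normalize(query)
    sims = [min_max_similarity(q, normalize(t)) for t in templates]
    return labels[int(np.argmax(sims))]

rng = np.random.default_rng(0)
templates = [rng.integers(1, 256, size=(64, 64)) for _ in range(7)]   # toy expression templates
labels = ["happy", "sad", "angry", "fear", "disgust", "surprise", "neutral"]
print(classify(templates[3] + 10, templates, labels))  # offset query; illustrative usage only
```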