
    Face recognition using selected topographical features

    This paper presents a new feature selection method to improve an existing feature type. Topographical (TGH) features provide a large set of features by assigning each image pixel to a related feature based on the image gradient and Hessian matrix. These features are handled by the proposed selection method: a face recognition feature selector (FRFS) is presented to inspect TGH features. FRFS is based on linear discriminant analysis (LDA), which is used to evaluate feature efficiency. FRFS studies the behavior of each feature over a dataset of images to determine its performance level, so that every feature is ultimately assigned a performance level, with levels varying across the image. Depending on a chosen threshold, the highest-performing set of features is selected and classified with an SVM classifier.
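A minimal sketch of the LDA-style scoring idea behind FRFS (not the authors' code; function names and the per-feature Fisher-ratio criterion are illustrative assumptions): each feature is scored by its between-class variance relative to its within-class variance, and features above a chosen threshold are retained for the downstream classifier.

```python
import numpy as np

def fisher_scores(X, y):
    """Score each feature column of X by its Fisher discriminant ratio
    (between-class variance over within-class variance), the core
    criterion behind LDA-based feature evaluation."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        between += len(Xc) * (mc - overall_mean) ** 2
        within += ((Xc - mc) ** 2).sum(axis=0)
    return between / (within + 1e-12)  # small epsilon avoids division by zero

def select_features(X, y, threshold):
    """Return indices of features whose score meets the threshold,
    mirroring the threshold-based selection step described above."""
    return np.where(fisher_scores(X, y) >= threshold)[0]
```

The selected columns of X would then be fed to an SVM (e.g. via a standard library implementation); only the scoring and thresholding are sketched here.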

    Enhanced Emotion Recognition in Videos: A Convolutional Neural Network Strategy for Human Facial Expression Detection and Classification

    The human face is essential in conveying emotions, as facial expressions serve as effective, natural, and universal indicators of emotional states. Automated emotion recognition has garnered increasing interest due to its potential applications in various fields, such as human-computer interaction, machine learning, robotic control, and driver emotional state monitoring. With advancements in artificial intelligence and computational power, visual emotion recognition has become a prominent research area. Despite extensive research employing machine learning algorithms such as convolutional neural networks (CNNs), challenges remain concerning input data processing, emotion classification scope, data size, optimal CNN configurations, and performance evaluation. To address these issues, we propose a comprehensive CNN-based model for real-time detection and classification of five primary emotions: anger, happiness, neutrality, sadness, and surprise. We employ the Amsterdam Dynamic Facial Expression Set – Bath Intensity Variations (ADFES-BIV) video dataset, extracting image frames from the video samples. Image processing techniques such as histogram equalization, color conversion, cropping, and resizing are applied to the frames before labeling. The Viola-Jones algorithm is then used for face detection on the processed grayscale images. We develop and train a CNN on the processed image data, implementing dropout, batch normalization, and L2 regularization to reduce overfitting. The ideal hyperparameters are determined through trial and error, and the model's performance is evaluated. The proposed model achieves a recognition accuracy of 99.38%, with the confusion matrix, recall, precision, F1 score, and processing time further quantifying its performance characteristics. The model's generalization performance is assessed using images from the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and Extended Cohn-Kanade Database (CK+) datasets. The results demonstrate the efficiency and usability of our proposed approach, contributing valuable insights into real-time visual emotion recognition.
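One of the preprocessing steps named above, histogram equalization, can be sketched in plain NumPy (an illustrative implementation, not the authors' pipeline; the function name is assumed): pixel intensities of a grayscale frame are remapped through the normalized cumulative histogram so that the image uses the full 0-255 range before face detection.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image: build the intensity
    histogram, take its cumulative sum, and use the normalized CDF as a
    lookup table that spreads intensities over 0..255."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Normalized CDF as a lookup table; clip guards unused low bins.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[gray]
```

Color conversion, cropping, resizing, and Viola-Jones face detection would typically be handled by an image library such as OpenCV; only the equalization step is shown here.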

    Low-resolution facial expression recognition: A filter learning perspective

    Automatic facial expression recognition has attracted increasing attention for a variety of applications. However, low resolution generally degrades the performance of facial expression recognition methods in real-life environments. In this paper, we propose to perform low-resolution facial expression recognition from a filter learning perspective. More specifically, a novel image filter based subspace learning (IFSL) method is developed to derive an effective facial image representation. The proposed IFSL method comprises three steps. First, we embed image filter learning into the optimization process of linear discriminant analysis (LDA); by optimizing the LDA cost function, a set of discriminative image filters (DIFs) corresponding to different facial expressions is learned. Second, the images filtered by the learned DIFs are added together to generate combined images. Finally, a regression learning technique is leveraged for subspace learning, where an expression-aware transformation matrix is obtained from the combined images. Based on this transformation matrix, IFSL effectively removes irrelevant information while preserving useful information in the facial images. Experimental results on several facial expression datasets, including CK+, MMI, JAFFE, SFEW and RAF-DB, show the superior performance of the proposed IFSL method for low-resolution facial expression recognition compared with several state-of-the-art methods.
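The second IFSL step, filtering an image with each learned DIF and summing the responses into a combined image, can be sketched as follows (an illustrative NumPy sketch, not the authors' implementation; the filters here stand in for learned DIFs, and function names are assumptions):

```python
import numpy as np

def apply_filter(img, kernel):
    """Valid-mode 2D correlation of a grayscale image with one filter."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def combine_filtered(img, filters):
    """Filter the image with each discriminative filter and sum the
    responses into a single combined image (step two of IFSL)."""
    return sum(apply_filter(img, k) for k in filters)
```

The combined images would then feed the regression-based subspace learning step; learning the filters themselves requires the LDA optimization described in the abstract and is not sketched here.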