
    Going Deeper into Action Recognition: A Survey

    Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation, and semantic segmentation. Over the last decade, human action analysis has evolved from early schemes, often limited to controlled environments, to advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications, from video surveillance to human-computer interaction, scientific milestones in action recognition are achieved rapidly, quickly rendering once-effective techniques obsolete. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then navigate into the realm of deep-learning-based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable setbacks, in the hope of raising fresh questions and motivating new research directions for the reader.

    Real time ear recognition using deep learning

    Automatic identity recognition from ear images is an active area of interest within the biometric community. The human ear is a rich source of data for passive person identification. Ear images can be captured from a distance and in a covert manner, which makes ear recognition technology an attractive choice for security and surveillance applications, among other domains. Unlike other biometric modalities, the human ear is neither affected by expressions, as faces are, nor does it require close contact, as fingerprints do. In this paper, a deep learning object detector called Faster Region-based Convolutional Neural Network (Faster R-CNN) is used for ear detection. A convolutional neural network (CNN) is used for feature extraction, principal component analysis (PCA) and a genetic algorithm are used for feature reduction and selection respectively, and a fully connected artificial neural network serves as the matcher. Testing achieved an accuracy of 97.8% with acceptable speed, confirming the accuracy and robustness of the proposed system.
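    The PCA-plus-genetic-algorithm feature reduction stage described in this abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the embedding size, population size, mutation rate, and the class-separation fitness function are all illustrative assumptions, and random vectors stand in for real CNN ear embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for CNN ear embeddings: 100 samples x 256 features,
# labelled with 5 toy identities (real embeddings would come from the CNN).
X = rng.normal(size=(100, 256))
y = rng.integers(0, 5, size=100)

def pca_reduce(X, n_components=32):
    """Project features onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def fitness(mask, X, y):
    """Score a feature subset by a crude between/within class variance ratio."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask]
    classes = np.unique(y)
    centroids = np.array([Xs[y == c].mean(axis=0) for c in classes])
    between = np.var(centroids, axis=0).sum()
    within = np.mean([Xs[y == c].var(axis=0).sum() for c in classes])
    return between / (within + 1e-9)

def ga_select(X, y, pop=20, gens=15):
    """Tiny genetic algorithm: evolve boolean masks over the PCA features."""
    d = X.shape[1]
    population = rng.random((pop, d)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep the fittest half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(d) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(d) < 0.02                # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    scores = [fitness(m, X, y) for m in population]
    return population[int(np.argmax(scores))]

X_red = pca_reduce(X)        # PCA feature reduction
mask = ga_select(X_red, y)   # GA feature selection
X_sel = X_red[:, mask]       # features passed on to the ANN matcher
```

    The selected feature matrix `X_sel` would then feed the fully connected matcher; any off-the-shelf classifier could play that role in this sketch.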

    Convolution Neural Network Based Method for Biometric Recognition

    In recent years, there has been increasing interest within the biometric community in precisely identifying individuals through ear images, owing to the distinctive characteristics of the human ear. This paper introduces a deep neural network architecture for ear recognition. The suggested method incorporates a preprocessing stage that enhances significant features in ear images through contrast-limited adaptive histogram equalization (CLAHE). Subsequently, a deep convolutional neural network classifier is employed to recognize the preprocessed ear images. Experimental results demonstrate a testing accuracy of 97.92% for the proposed recognition system.
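    The contrast-limited equalization idea behind the preprocessing stage can be sketched as follows. Note this is a simplified global variant (no tiling) standing in for true CLAHE, and the clip limit, bin count, and synthetic low-contrast patch are illustrative assumptions.

```python
import numpy as np

def clipped_hist_equalize(img, clip_limit=0.01, n_bins=256):
    """Contrast-limited histogram equalization, global (no tiling):
    a simplified stand-in for the CLAHE preprocessing step."""
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, 255))
    hist = hist.astype(float) / hist.sum()
    # Clip the histogram and redistribute the excess mass uniformly;
    # this is what limits contrast amplification in near-flat regions.
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / n_bins
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]
    # Map each pixel through the equalized CDF.
    idx = np.clip((img.astype(int) * n_bins) // 256, 0, n_bins - 1)
    return (cdf[idx] * 255).astype(np.uint8)

rng = np.random.default_rng(0)
# Toy low-contrast "ear" patch: intensities squeezed into [90, 130).
ear = rng.integers(90, 130, size=(64, 64)).astype(np.uint8)
enhanced = clipped_hist_equalize(ear)
```

    Real CLAHE additionally equalizes per tile and bilinearly interpolates between tile mappings; OpenCV's `cv2.createCLAHE` provides a production implementation.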

    Robust statistical frontalization of human and animal faces

    The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose, illumination variations, and occlusions, is proposed for joint face frontalization and landmark localization. Unlike state-of-the-art methods for landmark localization and pose correction, which require large amounts of manually annotated images or 3D facial models, the proposed method relies on a small set of frontal images only. By observing that the frontal facial image of both humans and animals is the one having the minimum rank among all poses, a model is devised which jointly recovers the frontalized version of the face as well as the facial landmarks. To this end, a suitable optimization problem is solved, minimizing the nuclear norm (a convex surrogate of the rank function) and the matrix ℓ1 norm, which accounts for occlusions. The proposed method is assessed in frontal view reconstruction of human and animal faces, landmark localization, pose-invariant face recognition, face verification in unconstrained conditions, and video inpainting, through experiments on nine databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to state-of-the-art methods for the target problems.
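    The nuclear-norm-plus-ℓ1 decomposition at the heart of this abstract is the robust PCA / principal component pursuit problem. The sketch below solves it with a basic augmented-Lagrangian loop on synthetic data; it is a generic RPCA solver under assumed parameters (λ = 1/√max(m,n), a standard μ schedule), not the paper's joint frontalization-and-landmark model.

```python
import numpy as np

def shrink(M, tau):
    """Soft-threshold each entry (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def svd_shrink(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, n_iter=100, rho=1.05):
    """Split D into low-rank L (clean structure) + sparse S (occlusions)
    by alternating proximal steps on the augmented Lagrangian."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4 * np.abs(D).sum() + 1e-9)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1 / mu)   # nuclear-norm step
        S = shrink(D - L + Y / mu, lam / mu)     # l1 (occlusion) step
        Y += mu * (D - L - S)                    # dual update
        mu *= rho                                # tighten the penalty
    return L, S

rng = np.random.default_rng(0)
# Synthetic data: rank-2 structure plus 5% sparse "occlusions".
low_rank = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
sparse = np.zeros((40, 30))
sparse[rng.random((40, 30)) < 0.05] = 5.0
D = low_rank + sparse
L, S = rpca(D)
```

    In the paper's setting, the columns of `D` would hold differently posed face images, so the recovered low-rank component corresponds to frontalized faces and the sparse component absorbs occlusions.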

    Ear Biometrics: A Comprehensive Study of Taxonomy, Detection, and Recognition Methods

    Due to the recent challenges in access control, surveillance, and security, there is an increased need for efficient human authentication solutions. Ear recognition is an appealing choice for identifying individuals in controlled or challenging environments. The outer part of the ear carries highly discriminative information across individuals and has been shown to be robust for recognition. In addition, the data acquisition procedure is contactless, non-intrusive, and covert. This work focuses on using ear images for human authentication in the visible and thermal spectrums. We perform a systematic study of ear features and propose a taxonomy for them. We also investigate which parts of the head's side view provide distinctive identity cues. We then study the different modules of the ear recognition system. First, we propose an ear detection system that uses deep learning models. Second, we compare machine learning methods to establish the baseline ear recognition performance of traditional systems. Third, we explore convolutional neural networks for ear recognition and the optimal learning-process settings. Fourth, we systematically evaluate performance in the presence of pose variation and various image artifacts, which commonly occur in real-life recognition applications, to assess the robustness of the proposed ear recognition models. Additionally, we design an efficient ear image quality assessment tool to guide the ear recognition system. Finally, we extend our work to ear recognition in the long-wave infrared domain.

    Vision-based human action recognition using machine learning techniques

    The focus of this thesis is the automatic recognition of human actions in videos. Human action recognition is defined as the automatic understanding of what actions a human performs in a video. This is a difficult problem due to many challenges including, but not limited to, variations in human shape and motion, occlusion, cluttered backgrounds, moving cameras, illumination conditions, and viewpoint variations. To start with, the most popular and prominent state-of-the-art techniques are reviewed, evaluated, compared, and presented. Based on the literature review, these techniques are categorized into handcrafted feature-based and deep learning-based approaches. The proposed action recognition framework builds on both families of techniques, embedding novel algorithms for action recognition in the handcrafted and deep learning domains. First, a new handcrafted method is presented. This method addresses one of the major challenges, viewpoint variation, by presenting a novel feature descriptor for multiview human action recognition. This descriptor employs region-based features extracted from the human silhouette. The proposed approach is quite simple and achieves state-of-the-art results without compromising the efficiency of the recognition process, which shows its suitability for real-time applications. Second, two innovative deep learning methods are presented to go beyond the limitations of the handcrafted approach. The first method uses transfer learning with a pre-trained deep learning model as the source architecture for human action recognition. It is experimentally confirmed that a deep convolutional neural network model already trained on a large-scale annotated dataset transfers to the action recognition task with a limited training dataset. The comparative analysis also confirms its superior accuracy over handcrafted feature-based methods on the same datasets. The second method is an unsupervised deep learning approach. It employs Deep Belief Networks (DBNs) with restricted Boltzmann machines for action recognition in unconstrained videos. The proposed method automatically extracts a suitable feature representation without any prior knowledge, using an unsupervised deep learning model. The effectiveness of the proposed method is confirmed by high recognition results on the challenging UCF Sports dataset. Finally, the thesis concludes with important discussions and research directions in the area of human action recognition.
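    The restricted Boltzmann machine building block of the DBN approach mentioned above can be sketched as a Bernoulli RBM trained with one step of contrastive divergence (CD-1). This is a toy illustration: the layer sizes, learning rate, and the synthetic binary "frame descriptors" are assumptions, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli restricted Boltzmann machine trained with CD-1,
    the building block that DBNs stack layer by layer."""
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.normal(size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.1):
        """One contrastive-divergence update on a batch of visible vectors."""
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)  # sample hiddens
        v1 = self.visible_probs(h_sample)                     # reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))                 # recon. error

# Toy binary "frame descriptors": two repeating patterns.
data = np.array([[1, 1, 0, 0, 1, 0], [0, 0, 1, 1, 0, 1]] * 50, dtype=float)
rbm = RBM(n_visible=6, n_hidden=4)
errors = [rbm.cd1_step(data) for _ in range(200)]
features = rbm.hidden_probs(data)  # unsupervised feature representation
```

    In a full DBN, the hidden activations of one trained RBM become the visible input of the next, building up the unsupervised feature hierarchy the thesis relies on.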