
    4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

    Robust unconstrained real-time face recognition remains a challenge today. The recent arrival of lightweight commodity depth sensors on the market brings new possibilities for human-machine interaction, and therefore for face recognition. This article takes the reader through a succinct survey of the current literature on face recognition in general and on 3D face recognition using depth sensors in particular. An assessment of experiments performed with implementations of the most established algorithms shows that the majority are biased towards recognition quality and lack speed. A novel method is proposed that uses noisy data from such a commodity sensor to build dynamic internal representations of faces. Distances to a surface normal to the face are measured in real time and used as input to a specific type of recurrent neural network, namely a long short-term memory network. This enables the prediction of facial structure in linear time and also increases robustness towards partial occlusions.
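    A minimal sketch of the core idea — per-frame distance measurements fed to an LSTM that predicts facial structure — might look like the following. The layer sizes, number of sampled points, and use of PyTorch are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch: per-frame distances from the face surface to a reference
# normal are fed to an LSTM that predicts facial structure. Dimensions are
# assumptions for illustration only.
import torch
import torch.nn as nn

class FaceLSTM(nn.Module):
    def __init__(self, n_points=64, hidden=128):
        super().__init__()
        # input: one distance value per sampled facial point, per depth frame
        self.lstm = nn.LSTM(input_size=n_points, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_points)  # predicted facial structure

    def forward(self, seq):              # seq: (batch, frames, n_points)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])     # predict from the last time step

# Example: a batch of 2 sequences, 30 depth frames, 64 sampled distances each.
model = FaceLSTM()
pred = model(torch.randn(2, 30, 64))
print(pred.shape)                        # torch.Size([2, 64])
```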

    Neighborhood Defined Feature Selection Strategy for Improved Face Recognition in Different Sensor Modalities

    A novel feature selection strategy for improved face recognition in images with variations due to illumination conditions, facial expressions, and partial occlusions is presented in this dissertation. A hybrid face recognition system that uses feature maps of phase congruency and modular kernel spaces is developed. Phase congruency provides a measure that is independent of the overall magnitude of a signal, making it invariant to variations in image illumination and contrast. A novel modular kernel spaces approach is developed and implemented on the phase congruency feature maps. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The unique modularization procedure developed in this research takes into consideration that the facial variations in a real world scenario are confined to local regions. The additional pixel dependencies that are considered based on their importance help in providing additional information for classification. This procedure also helps in robust localization of the variations, further improving classification accuracy. The effectiveness of the new feature selection strategy has been demonstrated by employing it in two specific applications: face authentication with low-resolution cameras and face recognition using multiple sensors (visible and infrared). The face authentication system uses low quality images captured by a web camera. The optical sensor of the web camera is very sensitive to environmental illumination variations. It is observed that the feature selection policy overcomes the facial and environmental variations. A methodology based on multiple training images and clustering is also incorporated to overcome the additional challenges of computational efficiency and the subject's non-involvement. A multi-sensor image fusion based face recognition methodology that uses the proposed feature selection technique is presented in this dissertation. Research studies have indicated that complementary information from different sensors helps in improving the recognition accuracy compared to individual modalities. A decision level fusion methodology is also developed which provides better performance compared to individual as well as data level fusion modalities. The new decision level fusion technique is also robust to registration discrepancies, which is a very important factor in operational scenarios. Research work is progressing to use the new face recognition technique in multi-view images by employing independent systems for separate views and integrating the results with an appropriate voting procedure.
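    A hedged sketch of the modular kernel-space step might look like the following, assuming the phase congruency map is precomputed by an external implementation and using scikit-learn's KernelPCA as a stand-in for the kernel projection. Block size and kernel parameters are illustrative, not the dissertation's settings.

```python
# Illustrative sketch: split a (precomputed) phase congruency map into
# neighborhood sub-regions, merge them into one feature vector, then project
# via a kernel method. All parameters here are assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA

def modular_features(pc_map, block=8):
    """Split a phase congruency map into blocks and flatten each block."""
    h, w = pc_map.shape
    blocks = [pc_map[r:r + block, c:c + block].ravel()
              for r in range(0, h - block + 1, block)
              for c in range(0, w - block + 1, block)]
    return np.concatenate(blocks)  # merged neighborhood features

# Toy "training set": 10 phase congruency maps of size 32x32.
X = np.stack([modular_features(np.random.rand(32, 32)) for _ in range(10)])

# Project the merged features through an RBF kernel space.
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3)
X_proj = kpca.fit_transform(X)
print(X_proj.shape)  # (10, 5)
```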

    An Interactive Robust Artificial Intelligence-based Defense Electro Robot (RAIDER) using a Pan-Tilt-Zoom Camera

    The Vision Lab’s Robust Artificial Intelligence-based Defense Electro Robot (RAIDER) is an integrated electro-mechanical system equipped with an onboard processor and numerous imaging sensors. The RAIDER is built upon the Clearpath Husky A200 mobile base. In a multidisciplinary effort, the newly constructed robotic body houses the onboard laptop, GPU processor, LAN, IP cameras, and Kinect sensors. In our previous experiments and efforts, we have shown the capability of computing a 3D model of the surrounding scene from motion imagery. We have tested autonomous navigation algorithms in which the RAIDER was to follow a particular person in a crowded environment. Algorithmic enhancements have integrated the 3D depth information into the person-tracking technique to allow for following a person around sharp corners. These navigation and control algorithms call for an accurate face detection and recognition system as well as a human body detection and recognition system. Additionally, we have integrated a PlayStation 2 wireless controller to remotely maneuver the RAIDER and activate various autonomy modes. In this poster, we present our latest effort in integrating face detection with the Pan-Tilt-Zoom (PTZ) base of an Axis camera. Positioned on top of the RAIDER, the PTZ base will allow the RAIDER to mimic a human’s ability to “look around” or “follow a person with only the eyes,” specifically without physically turning the robotic body. The face detection algorithm provides the location of a face within the image; the PTZ then continuously tracks the face, adjusting to keep it in the center of the frame. Additional RAIDER projects work on integrating a speaker system that would vocalize pre-defined phrases triggered by the recognition of specific persons. This would allow the RAIDER to vocalize “Hello” to people trained into its recognition system. These new artificial-intelligence RAIDER innovations create a more interactive, human-like robotic system.
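    A sketch of the PTZ centering loop described above could be as simple as the following, using OpenCV's Haar cascade as a stand-in detector. The move_ptz() call is a hypothetical placeholder for the Axis camera's control interface (e.g., its VAPIX HTTP API), and the gain is an assumption.

```python
# Hedged sketch: detect a face, measure its offset from the image center, and
# nudge pan/tilt proportionally so the face stays centered.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def move_ptz(pan_delta, tilt_delta):
    """Placeholder for the actual Axis PTZ command (hypothetical)."""
    print(f"pan {pan_delta:+.3f}, tilt {tilt_delta:+.3f}")

K_P = 0.05  # proportional gain (assumed)

def track(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return
    x, y, w, h = faces[0]
    face_cx, face_cy = x + w / 2, y + h / 2
    img_cx, img_cy = frame.shape[1] / 2, frame.shape[0] / 2
    # The face's offset from image center drives the pan/tilt adjustment.
    move_ptz(K_P * (face_cx - img_cx), K_P * (face_cy - img_cy))
```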

    Face recognition in the wild.

    Research in face recognition deals with problems related to Age, Pose, Illumination and Expression (A-PIE), and seeks approaches that are invariant to these factors. Video images add a temporal aspect to the image acquisition process. Another degree of complexity, above and beyond A-PIE recognition, occurs when multiple pieces of information are known about people, which may be distorted, partially occluded, or disguised, and when the imaging conditions are totally unorthodox. A-PIE recognition in these circumstances becomes really “wild,” and therefore Face Recognition in the Wild has emerged as a field of research in the past few years. Its main purpose is to challenge constrained approaches of automatic face recognition, emulating some of the virtues of the Human Visual System (HVS), which is very tolerant to age, occlusion, and distortions in the imaging process. The HVS also integrates information about individuals with context to recognize people within an activity or behavior. Machine vision still has a very long road ahead in emulating the HVS, and computational face recognition in the wild is a step along that path. In this thesis, Face Recognition in the Wild is defined as unconstrained face recognition under A-PIE+; the (+) connotes any alterations to the design scenario of the face recognition system. This thesis evaluates the Biometric Optical Surveillance System (BOSS) developed at the CVIP Lab, using low resolution imaging sensors. Specifically, the thesis tests the BOSS using cell phone cameras, and examines the potential of facial biometrics on smart portable devices such as iPhones, iPads, and tablets. For quantitative evaluation, the thesis focuses on a specific testing scenario of the BOSS software using iPhone 4 cell phones and a laptop. Testing was carried out indoors, at the CVIP Lab, using 21 subjects at distances of 5, 10, and 15 feet, with three poses, two expressions, and two illumination levels. The three steps (detection, representation, and matching) of the BOSS system were tested in this imaging scenario. False positives in facial detection increased with distance and with pose angles above ±15°. The overall identification rate (face detection at confidence levels above 80%) also degraded with distance, pose, and expression. The indoor lighting also added challenges by inducing shadows that affected image quality and the overall performance of the system. While this limited number of subjects and somewhat constrained imaging environment does not fully support a “wild” imaging scenario, it did provide deep insight into the issues with automatic face recognition. The recognition rate curves demonstrate the limits of low-resolution cameras for face recognition at a distance (FRAD), yet they also provide a plausible defense for possible A-PIE face recognition on portable devices.
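    The BOSS software itself is not public; the following is a generic stand-in illustrating the three-step pipeline the evaluation exercised (detect, represent, match) with the 80% detection-confidence threshold mentioned above. The detector, embedding function, and gallery structure here are all illustrative assumptions.

```python
# Generic sketch of a detect -> represent -> match pipeline with a
# confidence threshold; not BOSS's actual implementation.
import numpy as np

CONF_THRESHOLD = 0.80  # identifications counted only above this confidence

def recognize(image, detect, embed, gallery):
    """gallery: dict mapping subject id -> enrolled embedding vector."""
    for face, confidence in detect(image):          # step 1: detection
        if confidence < CONF_THRESHOLD:
            continue                                # reject low-confidence faces
        query = embed(face)                         # step 2: representation
        # step 3: matching by nearest enrolled embedding (cosine similarity)
        scores = {sid: np.dot(query, g) /
                       (np.linalg.norm(query) * np.linalg.norm(g))
                  for sid, g in gallery.items()}
        yield max(scores, key=scores.get), confidence
```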

    Obtaining a ROS-Based Face Recognition and Object Detection: Hardware and Software Issues

    This paper presents solutions for methodological issues that can occur when implementing face recognition and object detection on a ROS-based (Robot Operating System) open-source platform. Ubuntu 18.04, ROS Melodic, and Google TensorFlow 1.14 are used in programming the software environment. A TurtleBot2 (Kobuki) mobile robot with additional onboard sensors is used to conduct the experiments. The entire system configuration and the specific hardware modifications that proved necessary to make the system functional are also clarified. Coding (e.g., Python) and sensor installation are detailed for both the onboard and remote laptop computers. In the experiments, TensorFlow face recognition and object detection are examined using the TurtleBot2 robot. Results show how objects and faces were detected while the robot navigated a previously 2D-mapped indoor environment.
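    A minimal sketch of the kind of ROS node the paper describes follows: subscribe to the robot's camera topic, convert the image, and hand it to a TensorFlow detector. The topic name and run_detection() are assumptions for illustration; the paper's actual node layout may differ.

```python
# Hedged sketch of a rospy node that feeds camera frames to a detector.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def run_detection(frame):
    """Placeholder for the TensorFlow 1.14 face/object detection graph."""
    return []  # list of (label, score, box) tuples

def image_callback(msg):
    # Convert the ROS image message to an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    for label, score, box in run_detection(frame):
        rospy.loginfo("%s (%.2f) at %s", label, score, box)

rospy.init_node("face_object_detector")
rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback)
rospy.spin()
```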

    Affective Medicine: a review of Affective Computing efforts in Medical Informatics

    Background: Affective computing (AC) is concerned with emotional interactions performed with and through computers. It is defined as “computing that relates to, arises from, or deliberately influences emotions”. AC enables investigation and understanding of the relation between human emotions and health, as well as the application of assistive and useful technologies in the medical domain. Objectives: 1) To review the general state of the art in AC and its applications in medicine, and 2) to establish synergies between the research communities of AC and medical informatics. Methods: Aspects related to the human affective state as a determinant of human health are discussed, coupled with an illustration of significant AC research and related literature output. Moreover, affective communication channels are described and their range of application fields is explored through illustrative examples. Results: The presented conferences, European research projects, and research publications illustrate the recent increase of interest in the AC area by the medical community. Tele-home healthcare, AmI, ubiquitous monitoring, e-learning, and virtual communities with emotionally expressive characters for elderly or impaired people are a few areas where the potential of AC has been realized and applications have emerged. Conclusions: A number of gaps can potentially be overcome through the synergy of AC and medical informatics. The application of AC technologies parallels the advancement of the existing state of the art and the introduction of new methods. The body of work and the projects reviewed in this paper attest to an ambitious and optimistic synergetic future for the field of affective medicine.