
    Effectiveness of Multi-View Face Images and Anthropometric Data in Real-Time Networked Biometrics

    Over the years, biometric systems have evolved into a reliable mechanism for establishing the identity of individuals in applications such as access control, personnel screening, and criminal identification. However, recent terror attacks, security threats, and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans in unconstrained environments, in real time. Specifically, the following three critical transitions are needed and form the focus of this thesis: (1) In contrast to offline operation on previously acquired photographs and videos obtained under controlled conditions, identification must be performed in a real-time, dynamic mode on images that stream in continuously, each from a potentially different view (front, profile, partial profile) and of different quality (pose and resolution). (2) While various multi-modal fusion techniques have been developed to improve system accuracy, they have mainly focused on combining face biometrics with modalities such as iris and fingerprints, which are more reliable but require user cooperation for acquisition. In contrast, the challenge in a real-time networked biometric system is to combine opportunistically captured multi-view facial images with soft biometric traits such as height, gait, attire, and color that do not require user cooperation. (3) Typical operation is expected to be in an open-set mode, where the number of subjects enrolled in the system is much smaller than the number of probe subjects; yet the system is required to deliver high accuracy.

    To address these challenges and make a successful transition to real-time human identification systems, this thesis makes the following contributions: (1) A score-based multi-modal, multi-sample fusion technique is designed to combine face images acquired by a multi-camera network, and the effectiveness of opportunistically acquired multi-view face images in improving identification performance is characterized. (2) The multi-view face acquisition system is complemented by a network of Microsoft Kinects for extracting human anthropometric features (specifically height, shoulder width, and arm length). The score-fusion technique is augmented to utilize this anthropometric data, and its effectiveness is characterized. (3) The performance of the system is demonstrated on a database of 51 subjects collected with the networked biometric data acquisition system.

    Our results show improved recognition accuracy when face information from multiple views is utilized, and indicate that a given level of accuracy can be attained with fewer probe images (less time) than with a uni-modal biometric system.
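    The score-level fusion described here can be illustrated with a short sketch: normalize the matcher scores from each camera view, average them across views (multi-sample fusion), and blend in the anthropometric scores. This is a minimal Python sketch; the min-max normalization, the weights, and the array shapes are illustrative assumptions, not the thesis's exact method.

```python
# Minimal sketch of multi-view, multi-modal score fusion (illustrative only).
import numpy as np

def minmax_normalize(scores):
    """Map raw matcher scores to [0, 1] so modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse_scores(view_scores, anthro_scores, w_face=0.7, w_anthro=0.3):
    """Average per-subject scores over face views (multi-sample fusion),
    then blend with anthropometric scores. Weights are assumptions."""
    face = np.mean([minmax_normalize(s) for s in view_scores], axis=0)
    anthro = minmax_normalize(anthro_scores)
    return w_face * face + w_anthro * anthro

# Example: 3 camera views, 51 gallery subjects (matching the paper's database size).
rng = np.random.default_rng(0)
views = [rng.random(51) for _ in range(3)]
anthro = rng.random(51)
fused = fuse_scores(views, anthro)
print("best match: subject", fused.argmax())
```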

    Non-invasive multi-modal human identification system combining ECG, GSR, and airflow biosignals

    A huge amount of data can be collected through a wide variety of sensor technologies, and data mining techniques are often useful for analyzing the gathered data. This paper studies the use of three wearable sensors that monitor the electrocardiogram, airflow, and galvanic skin response of a subject, with the purpose of designing an efficient multi-modal human identification system. The proposed system, based on the rotation forest ensemble algorithm, offers high accuracy (a 99.6% true acceptance rate and just a 0.1% false positive rate). For its evaluation, the proposed system was tested against the characteristics commonly demanded of a biometric system, including universality, uniqueness, permanence, and acceptance. Finally, a proof-of-concept implementation of the system is demonstrated on a smartphone and its performance is evaluated in terms of processing speed and power consumption. The identification of a sample is extremely efficient, taking around 200 ms and consuming just a few millijoules. It is thus feasible to use the proposed system on a regular smartphone for user identification.

    This work was supported by MINECO grant TIN2013-46469-R (SPINY: Security and Privacy in the Internet of You) and CAM grant S2013/ICE-3095 (CIBERDINE: Cybersecurity, Data, and Risks).
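    Rotation forest, the ensemble this system is built on, trains each tree on a randomly rotated feature space: features are split into disjoint subsets, each subset is rotated by a PCA fitted on a bootstrap sample, and a decision tree is grown on the rotated data. Below is a minimal sketch built from scikit-learn parts; the tree count, subset count, and bootstrap fraction are illustrative assumptions, not the paper's configuration.

```python
# Minimal rotation forest sketch (illustrative; not the paper's exact setup).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

class RotationForest:
    def __init__(self, n_trees=10, n_subsets=3, seed=0):
        self.n_trees, self.n_subsets = n_trees, n_subsets
        self.rng = np.random.default_rng(seed)
        self.models = []  # (rotation_matrix, tree) pairs

    def fit(self, X, y):
        n, d = X.shape
        for _ in range(self.n_trees):
            # Randomly partition features; rotate each subset with PCA
            # fitted on a bootstrap sample of the instances.
            perm = self.rng.permutation(d)
            R = np.zeros((d, d))
            for subset in np.array_split(perm, self.n_subsets):
                boot = self.rng.choice(n, size=int(0.75 * n), replace=True)
                pca = PCA().fit(X[np.ix_(boot, subset)])
                R[np.ix_(subset, subset)] = pca.components_.T
            tree = DecisionTreeClassifier().fit(X @ R, y)
            self.models.append((R, tree))
        return self

    def predict(self, X):
        # Average class probabilities over all rotated trees.
        probs = sum(tree.predict_proba(X @ R) for R, tree in self.models)
        return self.models[0][1].classes_[probs.argmax(axis=1)]

# Usage: X holds concatenated ECG/GSR/airflow feature vectors, y the identities.
# rf = RotationForest().fit(X_train, y_train); pred = rf.predict(X_test)
```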

    Physical Context Detection using Wearable Wireless Sensor Networks

    This paper presents the architecture of a wearable sensor network and a Hidden Markov Model (HMM) processing framework for stochastic identification of body postures and physical contexts. The key idea is to collect multi-modal sensor data from wireless sensors strategically placed over a human subject's body segments, and to process the data with an HMM in order to identify the subject's instantaneous physical context. The key contribution of the proposed multi-modal approach is a significant extension of traditional uni-modal accelerometry, in which only individual body segment movements, without their relative proximity and orientation modalities, are used for physical context identification. Through real-life experiments with body-mounted sensors, it is demonstrated that while uni-modal accelerometry can differentiate activity-intensive postures such as walking and running, it is not effective for identifying and differentiating between low-activity postures such as sitting, standing, and lying down. In the proposed system, three sensor modalities, namely acceleration, relative proximity, and orientation, are used for context identification through HMM-based stochastic processing. Controlled experiments with human subjects evaluate the accuracy of the HMM-identified postures against a naïve threshold-based mechanism.
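    The HMM decoding step at the heart of such a framework can be made concrete with a small example: given a sequence of discretized sensor symbols (quantized acceleration, proximity, and orientation readings), the Viterbi algorithm recovers the most likely posture sequence. The sketch below is pure NumPy; the three postures and the transition and emission matrices are illustrative assumptions, not the paper's trained model.

```python
# Minimal Viterbi decoder for HMM-based posture identification (illustrative).
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path, computed in the log domain.
    obs: observation symbol indices; pi: initial probs;
    A: state transition matrix; B: emission matrix."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)   # (prev state, next state) scores
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):          # backtrack to recover the path
        path.append(back[t, path[-1]])
    return path[::-1]

states = ["sitting", "standing", "walking"]          # assumed postures
pi = np.array([0.4, 0.4, 0.2])
A = np.array([[0.80, 0.15, 0.05],                    # postures tend to persist
              [0.15, 0.80, 0.05],
              [0.10, 0.10, 0.80]])
B = np.array([[0.7, 0.2, 0.1],                       # P(sensor symbol | posture)
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
obs = [0, 0, 1, 2, 2, 2, 1]                          # quantized sensor readings
print([states[s] for s in viterbi(obs, pi, A, B)])
```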

    Design and implementation of a multi-modal biometric system for company access control

    This paper is about the design, implementation, and deployment of a multi-modal biometric system that grants access to a company structure and to internal zones within the company itself. Face and iris have been chosen as biometric traits. Face is suitable for non-intrusive checking with minimal cooperation from the subject, while iris supports very accurate recognition at a higher degree of invasiveness. Recognition of the face trait is based on Local Binary Patterns histograms, and Daugman's method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user's face only or the serial acquisition of both the user's face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated formally in the Service Level Agreement at a negotiation phase. The quality of the decision depends on setting appropriate thresholds in the decision modules for the two biometric traits. Whenever the quality of the decision is not good enough, the system activates rules that ask for new acquisitions (and decisions), possibly with different threshold values, resulting in a system whose behaviour is not fixed and predefined, but which adapts to the actual acquisition context. Rules are formalized as deduction rules and grouped to represent "response behaviors" according to the previous analysis. There are therefore several possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results of testing based on the AT&T Face Database and the UBIRIS database.
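    The serial (face-then-iris) decision flow can be sketched compactly: compute an LBP histogram for the probe face, compare it against the gallery, and escalate to iris acquisition only when the face match is not confident enough. The sketch below uses scikit-image's LBP; the chi-square distance, both thresholds, and the cascade logic are illustrative assumptions standing in for the richer SLA-driven rules of the deployed system.

```python
# Minimal sketch of LBP face matching with serial escalation to iris
# (illustrative thresholds, not the deployed system's rules).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP codes over the face image, pooled into a normalized histogram."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square(h1, h2):
    """Common histogram distance for comparing LBP descriptors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))

def serial_decision(face_dist, iris_dist_fn, t_face=0.05, t_iris=0.32):
    """face_dist: chi-square distance between probe and gallery LBP histograms.
    iris_dist_fn: callable that acquires the iris and returns a Daugman-style
    Hamming distance, invoked only when the face alone is not confident."""
    if face_dist <= t_face:
        return "accept (face only)"
    return "accept (face+iris)" if iris_dist_fn() <= t_iris else "reject"

# Example: the face match is borderline, so the iris is acquired and checked.
print(serial_decision(0.12, lambda: 0.28))  # -> "accept (face+iris)"
```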

    Recovering from External Disturbances in Online Manipulation through State-Dependent Revertive Recovery Policies

    Robots are increasingly entering uncertain and unstructured environments, where they are bound to face unexpected external disturbances such as accidental human or tool collisions. Robots must develop the capacity to respond to unexpected events: not only identifying the sudden anomaly, but also deciding how to handle it. In this work, we contribute a recovery policy that allows a robot to recover from various anomalous scenarios across different tasks and conditions in a consistent and robust fashion. The system organizes tasks as a sequence of nodes composed of internal modules such as motion generation and introspection. When an introspection module flags an anomaly, the recovery strategy is triggered and reverts the task execution by selecting a target node as a function of a state dependency chart. The new skill allows the robot to overcome the effects of the external disturbance and conclude the task. Our system recovers from accidental human and tool collisions in a number of tasks. Of particular importance, we test the robustness of the recovery system by triggering anomalies at each node in the task graph, showing robust recovery everywhere in the task. We also trigger multiple and repeated anomalies at each node, showing that the recovery system consistently recovers in the presence of strong and pervasive anomalous conditions. Robust recovery systems will be key enablers for long-term autonomy in robot systems. Supplemental information, including code, data, graphs, and result analysis, can be found at [1].

    Comment: 8 pages, 8 figures, 1 table
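    A revertive recovery policy of this kind can be sketched as a lookup from the node where the anomaly was flagged to the node execution should revert to. Below is a minimal Python sketch; the node names, the dependency chart, and the retry cap are illustrative assumptions, not the authors' actual task graph.

```python
# Minimal sketch of a state-dependent revertive recovery policy (illustrative).
TASK_SEQUENCE = ["approach", "pre_grasp", "grasp", "transport", "place"]

# State dependency chart: which node is safe to revert to when an anomaly
# strikes at a given node (all entries here are assumed for illustration).
REVERT_CHART = {
    "approach": "approach",    # restart the motion from scratch
    "pre_grasp": "approach",
    "grasp": "pre_grasp",
    "transport": "grasp",      # the object may have been knocked loose
    "place": "transport",
}

def execute(run_node, detect_anomaly, max_reverts=10):
    """Run the task node by node; when introspection flags an anomaly,
    revert to the chart's target node and resume from there."""
    i, reverts = 0, 0
    while i < len(TASK_SEQUENCE):
        node = TASK_SEQUENCE[i]
        run_node(node)
        if detect_anomaly(node) and reverts < max_reverts:
            reverts += 1
            i = TASK_SEQUENCE.index(REVERT_CHART[node])  # state-dependent target
            continue
        i += 1

# Example run with stub callbacks: no anomalies, so the sequence runs straight.
execute(print, lambda node: False)
```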

    A computational framework to emulate the human perspective in flow cytometric data analysis

    Background: In recent years, intense research efforts have focused on developing methods for automated flow cytometric data analysis. However, while designing such applications, little or no attention has been paid to the human perspective that is central to the manual gating process of identifying and characterizing cell populations. In particular, the assumption of many common techniques that cell populations can be modeled reliably with pre-specified distributions may not hold in real-life samples, which can have populations of arbitrary shapes and considerable inter-sample variation.

    Results: To address this, we developed flowScape, a new framework for emulating certain key aspects of the human perspective in analyzing flow data, implemented in multiple steps. First, flowScape creates a mathematically rigorous map of the high-dimensional flow data landscape based on dense and sparse regions defined by relative concentrations of events around modes. Second, these modal clusters are connected by a global hierarchical structure. This representation allows flowScape to perform ridgeline analysis both for traversing the landscape and for isolating cell populations at different levels of resolution. Finally, we extended manual gating with a new capacity for constructing templates that identify target populations in terms of their relative parameters, as opposed to the more commonly used absolute or physical parameters. This allows flowScape to apply such templates in batch mode for detecting the corresponding populations in a flexible, sample-specific manner. We also demonstrated different applications of our framework to flow data analysis and showed its superiority over other analytical methods.

    Conclusions: The human perspective, built on intuition and experience, is a very important component of flow cytometric data analysis. By emulating some of its approaches and extending them with automation and rigor, flowScape provides a flexible and robust framework for computational cytomics.
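    The first two steps, locating modes (dense regions) of the event landscape and connecting the modal clusters into a global hierarchy, can be approximated with off-the-shelf pieces. In the sketch below, mean shift and Ward linkage are generic stand-ins for the paper's specific landscape and ridgeline machinery, run on synthetic two-channel data.

```python
# Minimal mode-seeking plus hierarchy sketch (generic stand-ins, not flowScape).
import numpy as np
from sklearn.cluster import MeanShift
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Synthetic 2-channel "flow" events: three cell populations of different shapes.
events = np.vstack([
    rng.normal([2, 2], [0.3, 0.8], (500, 2)),
    rng.normal([6, 3], [0.9, 0.3], (400, 2)),
    rng.normal([4, 7], [0.5, 0.5], (300, 2)),
])

# Mode-seeking: each cluster center is a mode of the event density.
ms = MeanShift(bandwidth=1.0).fit(events)
modes = ms.cluster_centers_

# Connect the modes into a global hierarchy; cutting it at different heights
# isolates populations at different levels of resolution.
tree = linkage(modes, method="ward")
print("modes found:", len(modes))
print("2-population cut:", fcluster(tree, t=2, criterion="maxclust"))
```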

    Look, Listen and Learn - A Multimodal LSTM for Speaker Identification

    Speaker identification refers to the task of localizing the face of the person whose identity matches the ongoing voice in a video. This task not only requires collective perception over both visual and auditory signals; robustness to severe quality degradations and unconstrained content variations is also indispensable. In this paper, we describe a novel multimodal Long Short-Term Memory (LSTM) architecture that seamlessly unifies both visual and auditory modalities from the beginning of each sequence input. The key idea is to extend the conventional LSTM by sharing weights not only across time steps, but also across modalities. We show that modeling the temporal dependency across face and voice can significantly improve robustness to content quality degradations and variations. We also found that our multimodal LSTM is robust to distractors, namely the non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory dataset and showed that our system outperforms state-of-the-art systems in speaker identification with a lower false alarm rate and higher recognition accuracy.

    Comment: The 30th AAAI Conference on Artificial Intelligence (AAAI-16)
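    The central idea, sharing LSTM weights across modalities as well as across time steps, can be sketched in a few lines of PyTorch: project each modality into a common space, then run both streams through the same LSTM instance so its weights are tied by reuse. The projection sizes, sum-based fusion, and classifier head below are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of an LSTM with weights shared across modalities (illustrative).
import torch
import torch.nn as nn

class SharedLSTM(nn.Module):
    def __init__(self, face_dim=512, voice_dim=128, hidden=256, n_ids=10):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, hidden)    # per-modality input maps
        self.voice_proj = nn.Linear(voice_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)  # one shared LSTM
        self.head = nn.Linear(hidden, n_ids)            # identity classifier

    def forward(self, face_seq, voice_seq):
        # The same LSTM processes both streams, so its weights are shared
        # across modalities as well as across time steps.
        _, (h_face, _) = self.lstm(self.face_proj(face_seq))
        _, (h_voice, _) = self.lstm(self.voice_proj(voice_seq))
        return self.head(h_face[-1] + h_voice[-1])      # fuse, then classify

# Example: batch of 4 clips, 20 time steps of face and voice features each.
model = SharedLSTM()
logits = model(torch.randn(4, 20, 512), torch.randn(4, 20, 128))
print(logits.shape)  # torch.Size([4, 10])
```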