    WoX+: A Meta-Model-Driven Approach to Mine User Habits and Provide Continuous Authentication in the Smart City

    The literature is rich in techniques and methods to perform Continuous Authentication (CA) using biometric data, both physiological and behavioral. As a recent trend, less invasive methods such as those based on context-aware recognition allow the continuous identification of the user by retrieving device and app usage patterns. However, a still uncovered research topic is extending the concepts of behavioral and context-aware biometrics to take into account all the sensing data provided by the Internet of Things (IoT) and the smart city, in the shape of user habits. In this paper, we propose a meta-model-driven approach to mine user habits by combining IoT data from several sources such as smart mobility, smart metering, smart home, and wearables. We then use those habits to seamlessly authenticate users in real time throughout the smart city whenever the same behavior occurs in different contexts and with different sensing technologies. Our model, which we call WoX+, allows the automatic extraction of user habits using a novel Artificial Intelligence (AI) technique focused on high-level concepts. The aim is to continuously authenticate users using their habits as a behavioral biometric, independently of the involved sensing hardware. To prove the effectiveness of WoX+, we organized a quantitative and qualitative evaluation in which 10 participants described a spending habit of theirs involving the use of IoT. We chose the financial domain because it is ubiquitous, inherently multi-device, rich in time patterns, and, most of all, requires secure authentication. With the aim of extracting the requirements of such a system, we also asked the cohort how they expect WoX+ to use such habits to securely automate payments and identify them in the smart city. We found that WoX+ satisfies most of the expected requirements, particularly in terms of unobtrusiveness, in contrast with the limitations observed in existing studies. Finally, we used the cohort's responses to generate synthetic data and train our novel AI block. Results show that the error in reconstructing the habits is acceptable: a Mean Squared Error Percentage (MSEP) of 0.04%.
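    The exact normalization behind the reported MSEP is not spelled out in the abstract; below is a minimal sketch of one plausible reading (mean squared error over habit features normalized to [0, 1], expressed as a percentage), with all data synthetic:

```python
import numpy as np

def msep(true_habits: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean Squared Error Percentage: MSE scaled to a percentage.

    Assumes habit features are normalized to [0, 1], so multiplying
    the MSE by 100 yields a percentage of the full value range.
    """
    return 100.0 * np.mean((true_habits - reconstructed) ** 2)

# Synthetic example: 10 users, 8 habit features, small reconstruction noise
rng = np.random.default_rng(0)
true = rng.random((10, 8))
recon = true + rng.normal(0.0, 0.02, true.shape)
print(f"MSEP: {msep(true, recon):.2f}%")   # ~0.04%, comparable to the reported figure
```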

    Biomove: Biometric user identification from human kinesiological movements for virtual reality systems

    Virtual reality (VR) has advanced rapidly and is used for many entertainment and business purposes. The need for secure, transparent and non-intrusive identification mechanisms is important to facilitate users' safe participation and secure experience. People are kinesiologically unique, having individual behavioral and movement characteristics, which can be leveraged in security-sensitive VR applications to compensate for users' inability to detect potential observational attackers in the physical world. Additionally, such a method of identification using a user's kinesiological data is valuable in common scenarios where multiple users simultaneously participate in a VR environment. In this paper, we present a user study (n = 15) in which our participants performed a series of controlled tasks requiring physical movements (such as grabbing, rotating and dropping) that could be decomposed into unique kinesiological patterns, while we monitored and captured their hand, head and eye-gaze data within the VR environment. We present an analysis of the data and show that they can be used as a biometric discriminant of high confidence using machine learning classification methods such as kNN or SVM, thereby adding a layer of security in terms of identification, or dynamically adapting the VR environment to the users' preferences. We also performed whitebox penetration testing with 12 attackers, some of whom were physically similar to the participants. We obtained an average identification confidence value of 0.98 from the actual participants' test data after the initial study and a trained-model classification accuracy of 98.6%. Penetration testing indicated that all attackers resulted in confidence values of less than 50%, although physically similar attackers had higher confidence values. These findings can help the design and development of secure VR systems.
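    As a rough sketch of the identification step described above (the features, window counts, and acceptance threshold are illustrative assumptions, not the authors' exact setup):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-window features derived from VR telemetry, e.g. hand/head
# velocity statistics and gaze angles captured during the controlled tasks.
rng = np.random.default_rng(0)
X = rng.normal(size=(15 * 40, 12))   # 15 users x 40 task windows, 12 features
y = np.repeat(np.arange(15), 40)     # user identity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

pred = clf.predict(X_te)
confidence = clf.predict_proba(X_te).max(axis=1)
# Samples whose confidence falls below a threshold (cf. the <50% attacker
# confidences above) would be rejected as potential impostors.
print("identification accuracy:", (pred == y_te).mean())
print("mean confidence:", confidence.mean())
```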

    HUMAN GENDER CLASSIFICATION USING KINECT SENSOR: A REVIEW

    Human gender classification using the Kinect sensor aims to classify people's gender based on their outward appearance. Application areas of Kinect sensor technology include security, marketing, healthcare, and gaming. However, because of changes in pose, attire, and illumination, gender determination with the Kinect sensor is not a trivial task. It is based on a variety of characteristics, including biological, social network, face, and body aspects. In recent years, gender classification utilizing the Kinect sensor has become a popular and essential way to achieve accurate gender classification. A variety of methods and approaches, such as machine learning, convolutional neural networks, and support vector machines (SVM), have been used for gender classification with a Kinect sensor. This paper presents the state of the art for gender classification, with a focus on the features, databases, procedures, and algorithms used. A review of recent studies on this subject using the Kinect sensor and other technologies is provided, together with information on the variables that affect classification accuracy. In addition, several publicly accessible databases or datasets used by researchers to classify people by gender are covered. Finally, this overview offers insightful information about potential future avenues for research on Kinect-based human gender classification.
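    As an illustration of the typical pipeline such studies follow, a minimal sketch of gender classification from Kinect-style skeleton features with a support vector machine (the feature set and data here are synthetic placeholders):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical anthropometric features computed from Kinect skeleton joints,
# e.g. shoulder width, hip width, torso length, and limb-length ratios.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)     # synthetic binary gender labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f}")   # ~chance on random data
```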

    A PUF- and biometric-based lightweight hardware solution to increase security at sensor nodes

    Security is essential in sensor nodes that acquire and transmit sensitive data, yet these nodes face tight constraints on processing, memory, and power consumption. Cryptographic algorithms based on symmetric keys are well suited to them; the drawback is that secure storage of the secret keys is required. In this work, a low-cost solution is presented to obfuscate secret keys with Physically Unclonable Functions (PUFs), which exploit the hardware identity of the node. In addition, a lightweight fingerprint recognition solution is proposed, which can be implemented in low-cost sensor nodes. Since the biometric data of individuals are sensitive, they are also obfuscated with PUFs. Both solutions allow authenticating the origin of the sensed data with a proposed dual-factor authentication protocol: one factor is the unique physical identity of the trusted sensor node that measures the data; the other is the physical presence of the legitimate individual in charge of authorizing their transmission. Experimental results are included to prove how the proposed PUF-based solution can be implemented with the SRAMs of commercial Bluetooth Low Energy (BLE) chips that belong to the communication module of the sensor node. Implementation results show how the proposed fingerprint recognition, based on the novel texture-based feature named QFingerMap16 (QFM), can be implemented fully inside a low-cost sensor node. Robustness, security and privacy issues at the proposed sensor nodes are discussed and analyzed with experimental results from PUFs and fingerprints taken from public and standard databases. Funding: Ministerio de Economía, Industria y Competitividad TEC2014-57971-R, TEC2017-83557-
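    A minimal sketch of the key-obfuscation idea (XOR-masking a secret key with an SRAM start-up response so that only a public mask is stored; the helper-data error correction a real noisy PUF needs is elided):

```python
import os

def read_sram_puf_response(n_bytes: int) -> bytes:
    """Placeholder for dumping the SRAM start-up pattern of the BLE chip.

    On real hardware this reads uninitialized SRAM right after power-up;
    os.urandom is only a stand-in for the device-unique bit pattern.
    """
    return os.urandom(n_bytes)

# Enrollment: derive a public mask; the key itself is never stored.
key = os.urandom(16)                                  # 128-bit symmetric key
response = read_sram_puf_response(16)
mask = bytes(k ^ r for k, r in zip(key, response))    # safe to store publicly

# Reconstruction on the node: re-read the PUF and unmask. On-device the
# start-up pattern repeats (up to noise, handled by a fuzzy extractor).
response_again = read_sram_puf_response(16)
recovered = bytes(m ^ r for m, r in zip(mask, response_again))
```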

    A fully automated pipeline for a robust conjunctival hyperemia estimation

    Purpose: Many semi-automated and fully-automated approaches have been proposed in the literature to improve the objectivity of the estimation of conjunctival hyperemia, based on image-processing analysis of photographs of the eye. The purpose is to improve this evaluation with faster, fully-automated systems that are independent of human subjectivity. Methods: In this work, we introduce a fully-automated analysis of redness grading scales able to completely automate the clinical procedure, from the acquired image to the redness estimation. In particular, we introduce a neural network model for conjunctival segmentation, followed by an image-processing pipeline for segmentation of the vessel network. From these steps, we extract features already known in the literature whose correlation with conjunctival redness has already been proven. Lastly, we implemented a predictive model for conjunctival hyperemia using these features. Results: In this work, we used a dataset of images acquired during clinical practice. We trained a neural network model for conjunctival segmentation, obtaining an average accuracy of 0.94 and a corresponding IoU score of 0.88 on a test set of images. The set of features extracted on these ROIs correctly predicts the Efron scale values with a Spearman's correlation coefficient of 0.701 on a set of previously unused samples. Conclusions: The robustness of our pipeline confirms its possible use in clinical practice as a viable decision-support system for ophthalmologists.
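    A condensed sketch of the final prediction and evaluation step (the regressor, feature names, and data are placeholders; the segmentation stages are abstracted away):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

# Hypothetical per-image features from the segmented conjunctiva, e.g.
# vessel area fraction, mean redness, and vessel density within the ROI.
rng = np.random.default_rng(1)
X = rng.random((60, 4))
efron = X @ np.array([2.0, 1.0, 0.5, 0.2]) + rng.normal(0.0, 0.3, 60)

model = Ridge().fit(X[:40], efron[:40])      # train on 40 graded images
pred = model.predict(X[40:])                 # predict on held-out images

rho, _ = spearmanr(pred, efron[40:])         # agreement with clinical grading
print(f"Spearman correlation: {rho:.3f}")
```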

    Chimerical dataset creation protocol based on Doddington Zoo: a biometric application with face, eye, and ECG

    Multimodal systems are a workaround to enhance the robustness and effectiveness of biometric systems, and a proper multimodal dataset is of the utmost importance to build them. The literature presents some multimodal datasets although, to the best of our knowledge, there are no previous studies combining face, iris/eye, and vital signals such as the electrocardiogram (ECG). Moreover, there is no methodology to guide the construction and evaluation of a chimeric dataset. Taking that into account, in this work we propose to create a chimeric dataset from three modalities: ECG, eye, and face. Based on the Doddington Zoo criteria, we also propose a generic and systematic protocol imposing constraints for the creation of homogeneous chimeric individuals, which allows us to perform a fair and reproducible benchmark. Moreover, we propose a multimodal approach for these modalities based on state-of-the-art deep representations built by convolutional neural networks. We conduct the experiments in the open-world verification mode and on two different scenarios (intra-session and inter-session), using three modalities from two datasets: CYBHi (ECG) and FRGC (eye and face). Our multimodal approach achieves an impressive decidability of 7.20 ± 0.18, yielding an almost perfect verification system (i.e., an Equal Error Rate (EER) of 0.20% ± 0.06) on the intra-session scenario with unknown data. On the inter-session scenario, we achieve a decidability of 7.78 ± 0.78 and an EER of 0.06% ± 0.06. In summary, these figures represent a gain of over 28% in decidability and a reduction of over 11% in the EER on the intra-session scenario for unknown data compared to the best-known unimodal approach, and an improvement greater than 22% in decidability and an EER reduction of over 6% in the inter-session scenario.
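    For reference, the decidability and EER figures above are standard verification metrics computed from the genuine and impostor score distributions; a minimal sketch of both (standard formulas, not code from the paper):

```python
import numpy as np

def decidability(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2)."""
    return abs(genuine.mean() - impostor.mean()) / np.sqrt(
        (genuine.var() + impostor.var()) / 2.0
    )

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: operating point where false accept rate equals false reject rate.

    Assumes higher scores mean a better match, so genuine scores sit
    above impostor scores for a good system.
    """
    best_far, best_frr = 1.0, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float((impostor >= t).mean())   # impostors wrongly accepted
        frr = float((genuine < t).mean())     # genuine users wrongly rejected
        if abs(far - frr) < abs(best_far - best_frr):
            best_far, best_frr = far, frr
    return (best_far + best_frr) / 2.0
```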

    Postmortem Ocular Findings in the Optical Coherence Tomography Era: A Proof of Concept Study Based on Six Forensic Cases

    Postmortem analysis of the ocular globe is an important topic for forensic pathology and transplantology. Although crucial elements may be gathered from examining cadaveric eyes, the latter do not routinely undergo in-depth analysis. The paucity of quantitative and objective data obtainable using current, invasive necroscopic techniques is the main reason for the limited interest in this highly specialized procedure. The aim of the current study is to describe and objectify, for the first time, postmortem ocular changes by means of portable optical coherence tomography. The design involved the postmortem analysis (in situ, without enucleation) of 12 eyes by portable spectral-domain Optical Coherence Tomography. The scans were performed in corneal, retinal and angle modality at different intervals: <6 h, at the 6th, 12th and 24th hour, and after autopsy (25th–72nd hour). The morphological changes in the cornea, sclera, vitreous humor and aqueous humor were easy to explore and objectify in these tissues in the first 72 h postmortem. On the other hand, in situ observation of the retina was difficult due to the opacification of the lenses in the first 24 h after death.

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Review of three-dimensional human-computer interaction with focus on the Leap Motion Controller

    Modern hardware and software development has led to an evolution of user interfaces from the command line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their application areas, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.
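    As an illustration of the gesture-recognition methods such a survey covers, a minimal nearest-template sketch over 3D palm trajectories using dynamic time warping (the trajectory format is an assumption; the actual LMC SDK calls are omitted):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic time warping distance between two (T, 3) palm trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def classify_gesture(trajectory: np.ndarray, templates: dict) -> str:
    """Return the name of the stored template trajectory closest to the input."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))
```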