10,847 research outputs found

    Radar and RGB-depth sensors for fall detection: a review

    Get PDF
    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years, in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors stems from their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and for users' acceptance and compliance, compared with other sensor technologies such as video cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper provides.
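
    As a rough illustration of the sensor fusion the review points to, the sketch below combines per-sensor fall probabilities with a weighted average at the decision level; the scores, weights, and threshold are hypothetical placeholders, not a method taken from the paper.

        # Minimal sketch of decision-level (late) fusion of two fall detectors.
        # The per-sensor scores and the weighting are illustrative assumptions.

        def fuse_fall_scores(radar_score: float, rgbd_score: float,
                             w_radar: float = 0.5, threshold: float = 0.5) -> bool:
            """Combine per-sensor fall probabilities with a weighted average."""
            fused = w_radar * radar_score + (1.0 - w_radar) * rgbd_score
            return fused >= threshold

        # Example: radar is fairly confident, RGB-D less so.
        if fuse_fall_scores(radar_score=0.8, rgbd_score=0.4, w_radar=0.6):
            print("Fall detected: alert carers")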

    Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues

    Full text link
    We explore the task of recognizing people's identities in photo albums in an unconstrained setting. To facilitate this, we introduce the new People In Photo Albums (PIPA) dataset, consisting of over 60,000 instances of 2,000 individuals collected from public Flickr photo albums. With only about half of the person images containing a frontal face, the recognition task is very challenging due to large variations in pose, clothing, camera viewpoint, image resolution, and illumination. We propose the Pose Invariant PErson Recognition (PIPER) method, which accumulates the cues of poselet-level person recognizers trained by deep convolutional networks to discount pose variations, combined with a face recognizer and a global recognizer. Experiments on three different settings confirm that in our unconstrained setup PIPER significantly improves on the performance of DeepFace, one of the best face recognizers as measured on the LFW dataset.
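
    To make the cue-accumulation idea concrete, here is a minimal sketch of combining identity scores from several recognizers (face, global, and one poselet-level cue); the weights and scores below are invented for illustration, whereas PIPER itself trains its poselet-level recognizers with deep convolutional networks.

        import numpy as np

        def accumulate_identity_scores(cue_scores, cue_weights):
            """Weighted sum of per-cue identity score vectors; returns the
            best identity index and the accumulated scores."""
            total = np.zeros_like(cue_scores[0])
            for scores, weight in zip(cue_scores, cue_weights):
                total += weight * scores
            return int(np.argmax(total)), total

        # Three hypothetical cues over 4 identities.
        face    = np.array([0.1, 0.6, 0.2, 0.1])
        global_ = np.array([0.3, 0.3, 0.3, 0.1])
        poselet = np.array([0.2, 0.5, 0.2, 0.1])

        best, scores = accumulate_identity_scores(
            [face, global_, poselet], cue_weights=[0.5, 0.2, 0.3])
        print(best, scores)  # identity 1 wins under these made-up numbers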

    Higher order feature extraction and selection for robust human gesture recognition using CSI of COTS Wi-Fi devices

    Get PDF
    Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi devices has gained attention with recent advances in wireless technology. HGR recognizes the human activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction and transformation to pre-process the raw CSI traces. However, these methods fail to capture the non-Gaussian information in the raw CSI data, since they are limited to linear signal representations alone. The proposed higher order statistics-based recognition (HOS-Re) model extracts higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the recognition task. HOS-Re addresses the limitations of existing methods by extracting third-order cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods derived from information theory construct a robust and highly informative feature subset, which is fed as input to a multilevel support vector machine (SVM) classifier to measure the performance. The proposed methodology is validated using the public SignFi database, consisting of 276 gestures with 8280 gesture instances, of which 5520 are from the laboratory and 2760 from the home environment, using 10 × 5 cross-validation. HOS-Re achieved an average recognition accuracy of 97.84%, 98.26% and 96.34% for the lab, home and lab + home environments respectively. The average recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was 96.23% in the laboratory environment.
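
    A minimal sketch of the core feature the abstract names: the sample third-order cumulant of a zero-mean signal at a pair of lags. The lag choice and the synthetic CSI trace are assumptions for illustration; the actual HOS-Re pipeline extracts many such features and selects among them with information-theoretic criteria.

        import numpy as np

        def third_order_cumulant(x: np.ndarray, tau1: int, tau2: int) -> float:
            """Sample estimate of E[x(n) x(n+tau1) x(n+tau2)]; for a
            zero-mean signal this equals the third-order cumulant."""
            x = x - x.mean()                      # enforce zero mean
            n = len(x) - max(tau1, tau2, 0)
            return float(np.mean(x[:n] * x[tau1:tau1 + n] * x[tau2:tau2 + n]))

        # Hypothetical CSI amplitude trace; a Gaussian signal would give ~0,
        # which is why such features capture non-Gaussian structure.
        rng = np.random.default_rng(0)
        csi = rng.gamma(shape=2.0, scale=1.0, size=4096)  # skewed, non-Gaussian
        print(third_order_cumulant(csi, tau1=1, tau2=2))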

    CANU-ReID: A Conditional Adversarial Network for Unsupervised person Re-IDentification

    Get PDF
    Unsupervised person re-ID is the task of identifying people in a target data set for which ID labels are unavailable during training. In this paper, we propose to unify two trends in unsupervised person re-ID: clustering & fine-tuning, and adversarial learning. On one side, clustering groups training images into pseudo-ID labels, which are used to fine-tune the feature extractor. On the other side, adversarial learning, inspired by domain adaptation, is used to match distributions from different domains. Since target data is distributed across different camera viewpoints, we propose to model each camera as an independent domain and aim to learn domain-independent features. Straightforward adversarial learning yields negative transfer; we therefore introduce a conditioning vector to mitigate this undesirable effect. In our framework, the centroid of the cluster to which a visual sample belongs is used as the conditioning vector of our conditional adversarial network; this vector is permutation invariant (the ordering of clusters does not matter) and its size is independent of the number of clusters. To our knowledge, we are the first to propose the use of conditional adversarial networks for unsupervised person re-ID. We evaluate the proposed architecture on top of two state-of-the-art clustering-based unsupervised person re-identification (re-ID) methods in four different experimental settings with three different data sets, and set a new state-of-the-art performance on all four of them. Our code and model will be made publicly available at https://team.inria.fr/perception/canu-reid/.
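
    A hedged sketch of the conditioning idea, assuming PyTorch: a camera discriminator receives the feature concatenated with its cluster centroid, and a gradient-reversal layer pushes the feature extractor toward camera-independent features. The layer sizes and the concatenation are illustrative choices, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            """Gradient reversal: identity forward, negated gradient backward."""
            @staticmethod
            def forward(ctx, x):
                return x.view_as(x)
            @staticmethod
            def backward(ctx, grad):
                return -grad

        class ConditionalCameraDiscriminator(nn.Module):
            """Predicts the camera ID from a person feature concatenated with
            the centroid of its pseudo-ID cluster (the conditioning vector);
            the centroid's size does not depend on the number of clusters."""
            def __init__(self, feat_dim: int, num_cameras: int):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(feat_dim * 2, 256), nn.ReLU(),
                    nn.Linear(256, num_cameras),
                )
            def forward(self, feat, centroid):
                feat = GradReverse.apply(feat)  # adversarial signal to extractor
                return self.net(torch.cat([feat, centroid], dim=1))

        # Toy usage: 8 samples, 128-D features, 4 cameras.
        feats = torch.randn(8, 128, requires_grad=True)
        centroids = torch.randn(8, 128)     # centroid of each sample's cluster
        cam_labels = torch.randint(0, 4, (8,))
        disc = ConditionalCameraDiscriminator(128, 4)
        loss = nn.CrossEntropyLoss()(disc(feats, centroids), cam_labels)
        loss.backward()                     # reversed gradients reach `feats`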

    Multi-View Face Recognition From Single RGBD Models of the Faces

    Get PDF
    This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
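
    The weighted voting of step (3) admits a very small sketch: each test image casts a vote for an identity with a weight, and the identity with the highest accumulated weight wins. The identities and weights below are invented for illustration, not taken from the paper.

        from collections import defaultdict

        def weighted_vote(per_image_results):
            """Each entry is (predicted_identity, weight); weights might
            reflect how reliable the estimated viewpoint is. Returns the
            identity with the highest accumulated weight."""
            tally = defaultdict(float)
            for identity, weight in per_image_results:
                tally[identity] += weight
            return max(tally, key=tally.get)

        # Five views of one person: frontal views trusted more (higher weight).
        results = [("alice", 0.9), ("alice", 0.8), ("bob", 0.4),
                   ("alice", 0.6), ("carol", 0.3)]
        print(weighted_vote(results))  # -> "alice"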

    Detection of postural transitions using machine learning

    Get PDF
    The purpose of this project is to study human activity recognition and to prepare a dataset, collected from volunteers performing various activities, that can be used to construct a machine learning model which identifies each volunteer's postural transitions accurately. This report presents the problem definition, the equipment used, previous work in human activity recognition, and the resolution of the problem along with results. The report also describes the steps taken in this endeavour: building a dataset; pre-processing the data by applying filters and various windowing-length techniques; splitting the data into training and testing sets; performing feature selection and feature extraction; and finally selecting the model for training and testing that provides maximum accuracy and the lowest misclassification rates (a sketch of the windowing step follows this abstract). The tools used include a laptop running MATLAB, used for data processing, model training and feature selection, together with Excel and Media Player Classic, used for labelling. The data was collected using an Inertial Measurement Unit containing three tri-axial accelerometers, a gyroscope, a magnetometer and a pressure sensor; for this project only the accelerometers, the gyroscope and the pressure sensor are used. The sensor was made by members of the Technical Research Centre for Dependency Care and Autonomous Living (CETpD) at the UPC-ETSEIB campus. The results obtained have been satisfactory, and the objectives set have been fulfilled. There is room for improvement by expanding the scope of the project, such as detecting chronic disorders, providing posture-based statistics to the end user, or achieving a higher sensitivity to postural transitions by using better features and increasing the dataset size through more volunteers.
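
    A minimal sketch, under assumed parameters, of the windowing and feature-extraction steps the report describes: split an accelerometer channel into overlapping fixed-length windows and compute a few simple statistics per window. The sampling rate, window length, and features are illustrative, not the project's actual choices.

        import numpy as np

        def window_features(signal: np.ndarray, fs: int,
                            win_s: float, overlap: float) -> np.ndarray:
            """Split a 1-D channel into overlapping windows and compute
            simple per-window features (mean, std, range)."""
            size = int(fs * win_s)
            step = int(fs * win_s * (1.0 - overlap))
            feats = []
            for start in range(0, len(signal) - size + 1, step):
                w = signal[start:start + size]
                feats.append([w.mean(), w.std(), w.max() - w.min()])
            return np.array(feats)

        # Hypothetical 100 Hz vertical-axis accelerometer recording (10 s).
        rng = np.random.default_rng(1)
        acc_z = rng.normal(9.81, 0.5, size=1000)
        print(window_features(acc_z, fs=100, win_s=2.0, overlap=0.5).shape)  # (9, 3)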

    Gait recognition and fall detection with inertial sensors

    Get PDF
    In contrast to visual information, which is recorded by cameras installed in the environment, inertial information can be obtained from the mobile phones commonly used in daily life. In this talk we present a general deep learning approach for gait and soft-biometrics (age and gender) recognition. Moreover, we also study the use of gait information to detect actions during walking, specifically fall detection. We perform a thorough experimental evaluation of the proposed approach on different datasets: OU-ISIR Biometric Database, DFNAPAS, SisFall, UniMiB-SHAR and ASLH. The experimental results show that inertial information can be used for gait recognition and fall detection with state-of-the-art results.
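
    For flavour, a toy version of a deep model on inertial windows, assuming PyTorch: a small 1-D CNN over tri-axial accelerometer samples. The architecture is invented for illustration and is not the network presented in the talk.

        import torch
        import torch.nn as nn

        class InertialNet(nn.Module):
            """Tiny 1-D CNN over a window of tri-axial accelerometer samples.
            Layer sizes are illustrative placeholders."""
            def __init__(self, num_classes: int = 2):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.head = nn.Linear(32, num_classes)
            def forward(self, x):           # x: (batch, 3, window)
                return self.head(self.body(x).squeeze(-1))

        # Toy batch: 4 windows of 128 samples, fall vs. no-fall.
        logits = InertialNet()(torch.randn(4, 3, 128))
        print(logits.shape)  # torch.Size([4, 2])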
    • …