10 research outputs found

    Making the Invisible Visible: Action Recognition Through Walls and Occlusions

    Understanding people's actions and interactions typically depends on seeing them. Automating action recognition from visual data has been the topic of much research in the computer vision community. But what if it is too dark, or if the person is occluded or behind a wall? In this paper, we introduce a neural network model that can detect human actions through walls and occlusions, and in poor lighting conditions. Our model takes radio frequency (RF) signals as input, generates 3D human skeletons as an intermediate representation, and recognizes actions and interactions of multiple people over time. By translating the input to an intermediate skeleton-based representation, our model can learn from both vision-based and RF-based datasets, allowing the two tasks to help each other. We show that our model achieves accuracy comparable to vision-based action recognition systems in visible scenarios, yet continues to work accurately when people are not visible, hence addressing scenarios beyond the limits of today's vision-based action recognition.
    Comment: ICCV 2019. The first two authors contributed equally to this paper.
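
    The approach above reduces to a two-stage pipeline: RF signal → 3D skeleton → action label. Below is a minimal PyTorch sketch of that design; the module names, layer sizes, joint count, and the two-channel RF heatmap input are illustrative assumptions, not the authors' released code.

        import torch
        import torch.nn as nn

        class RFToSkeleton(nn.Module):
            """Encodes a sequence of RF heatmaps into per-frame 3D skeletons."""
            def __init__(self, num_joints=14):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(64, num_joints * 3)  # x, y, z per joint

            def forward(self, rf):                 # rf: (batch, time, 2, H, W)
                b, t = rf.shape[:2]
                feats = self.encoder(rf.flatten(0, 1)).flatten(1)
                return self.head(feats).view(b, t, -1, 3)

        class SkeletonActionHead(nn.Module):
            """Classifies actions from skeleton sequences; because the input is
            skeletons, it can also be trained on vision-derived skeleton data."""
            def __init__(self, num_joints=14, num_actions=10):
                super().__init__()
                self.rnn = nn.GRU(num_joints * 3, 128, batch_first=True)
                self.cls = nn.Linear(128, num_actions)

            def forward(self, skel):               # skel: (batch, time, joints, 3)
                out, _ = self.rnn(skel.flatten(2))
                return self.cls(out[:, -1])        # logits over action classes

    The skeleton intermediate is the key design choice here: it gives the action head a modality-agnostic input, which is what allows vision-based and RF-based datasets to train the same classifier.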

    In-Home Daily-Life Captioning Using Radio Signals

    This paper aims to caption daily life, i.e., to create a textual description of people's activities and interactions with objects in their homes. Addressing this problem requires novel methods beyond traditional video captioning, as most people would have privacy concerns about deploying cameras throughout their homes. We introduce RF-Diary, a new model for captioning daily life by analyzing the privacy-preserving radio signal in the home together with the home's floormap. RF-Diary can further observe and caption people's life through walls and occlusions and in dark settings. In designing RF-Diary, we exploit the ability of radio signals to capture people's 3D dynamics, and use the floormap to help the model learn people's interactions with objects. We also use a multi-modal feature alignment training scheme that leverages existing video-based captioning datasets to improve the performance of our radio-based captioning model. Extensive experimental results demonstrate that RF-Diary generates accurate captions under visible conditions. It also sustains its good performance in dark or occluded settings, where video-based captioning approaches fail to generate meaningful captions. For more information, please visit our project webpage: http://rf-diary.csail.mit.edu
    Comment: ECCV 2020. The first two authors contributed equally to this paper.
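
    The multi-modal feature alignment scheme can be pictured as pushing RF-derived features toward the feature space of a pretrained video-captioning encoder, so that one caption decoder serves both modalities. The sketch below is an assumption about how such an objective could look (the abstract does not specify the loss); the L2 alignment term and all names are hypothetical.

        import torch.nn.functional as F

        def alignment_loss(rf_feats, video_feats):
            """Pull RF features toward paired video-captioning features."""
            return F.mse_loss(rf_feats, video_feats)

        def training_step(rf_feats, video_feats, caption_logits, caption_targets,
                          align_weight=1.0):
            # caption_logits: (batch, steps, vocab); caption_targets: (batch, steps)
            caption_loss = F.cross_entropy(
                caption_logits.flatten(0, 1), caption_targets.flatten()
            )
            return caption_loss + align_weight * alignment_loss(rf_feats, video_feats)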

    Learning Longterm Representations for Person Re-Identification Using Radio Signals

    Person Re-Identification (ReID) aims to recognize a person-of-interest across different places and times. Existing ReID methods rely on images or videos collected using RGB cameras. They extract appearance features like clothes, shoes, and hair. Such features, however, can change drastically from one day to the next, leading to an inability to identify people over extended time periods. In this paper, we introduce RF-ReID, a novel approach that harnesses radio frequency (RF) signals for long-term person ReID. RF signals traverse clothes and reflect off the human body; thus they can be used to extract more persistent human-identifying features like body size and shape. We evaluate the performance of RF-ReID on longitudinal datasets that span days and weeks, where the person may wear different clothes across days. Our experiments demonstrate that RF-ReID outperforms state-of-the-art RGB-based ReID approaches for long-term person ReID. Our results also reveal two interesting features: First, since RF signals work in the presence of occlusions and poor lighting, RF-ReID allows for person ReID in such scenarios. Second, unlike photos and videos, which reveal personal and private information, RF signals are more privacy-preserving, and hence can help extend person ReID to privacy-concerned domains like healthcare.
    Comment: CVPR 2020. The first three authors contributed equally to this paper.
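
    The abstract does not state RF-ReID's training objective, so the sketch below shows the generic recipe most ReID systems use, applied here to RF tracklet embeddings: a triplet loss for training and cosine-similarity ranking for retrieval. Treat both as illustrative assumptions, not the paper's method.

        import torch
        import torch.nn.functional as F

        def triplet_loss(anchor, positive, negative, margin=0.3):
            """Pull same-identity embeddings together, push others apart."""
            d_pos = F.pairwise_distance(anchor, positive)
            d_neg = F.pairwise_distance(anchor, negative)
            return F.relu(d_pos - d_neg + margin).mean()

        def reid_rank(query_emb, gallery_embs):
            """Rank gallery tracklets by cosine similarity to the query."""
            sims = F.cosine_similarity(query_emb.unsqueeze(0), gallery_embs)
            return torch.argsort(sims, descending=True)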

    mmFall: Fall Detection Using 4D mmWave Radar and a Hybrid Variational RNN AutoEncoder

    In this paper we propose mmFall, a novel fall detection system, which comprises (i) an emerging millimeter-wave (mmWave) radar sensor to collect the human body's point cloud along with the body centroid, and (ii) a variational recurrent autoencoder (VRAE) to compute the anomaly level of the body motion based on the acquired point cloud. A fall is claimed to have occurred when a spike in the anomaly level and a drop in centroid height occur simultaneously. The mmWave radar sensor provides several advantages, such as privacy compliance and high sensitivity to motion, over traditional sensing modalities. However, (i) randomness in radar point cloud data and (ii) difficulties in fall collection/labeling in traditional supervised fall detection approaches are the two main challenges. To overcome the randomness in radar data, the proposed VRAE uses variational inference, a probabilistic approach rather than the traditional deterministic approach, to infer the posterior probability of the body's latent motion state at each frame, followed by a recurrent neural network (RNN) to learn the temporal features of the motion over multiple frames. Moreover, to circumvent the difficulties in fall data collection/labeling, the VRAE is built upon an autoencoder architecture in a semi-supervised approach and trained on only normal activities of daily living (ADL), such that in the inference stage the VRAE will generate a spike in the anomaly level once an abnormal motion, such as a fall, occurs. During the experiments, we implemented the VRAE along with two other baselines and tested them on a dataset collected in an apartment. The receiver operating characteristic (ROC) curve indicates that our proposed model outperforms the two baselines, detecting 98% of 50 falls at the expense of just two false alarms.
    Comment: Preprint version.
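
    The fall decision rule itself, a simultaneous anomaly spike and centroid-height drop, is simple to state in code. Below is a minimal sketch of that rule; the thresholds and window length are assumed values, not the paper's.

        def detect_fall(anomaly, centroid_z,
                        anomaly_thresh=3.0, drop_thresh=0.4, window=10):
            """anomaly and centroid_z are time-aligned per-frame series
            (anomaly from the VRAE, centroid height in metres)."""
            for t in range(window, len(anomaly)):
                spike = anomaly[t] > anomaly_thresh
                drop = (centroid_z[t - window] - centroid_z[t]) > drop_thresh
                if spike and drop:      # both conditions must hold together
                    return True
            return False

    Requiring both conditions is presumably what suppresses false alarms: sitting down quickly drops the centroid without a large anomaly spike, while an unusual gesture may spike the anomaly without any height drop.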

    A deep learning approach towards railway safety risk assessment

    Railway stations are essential components of railway systems, and they play a vital role in public daily life. Various types of AI technology have been utilised in many fields to ensure the safety of people and their assets. In this paper, we propose a novel framework that uses computer vision and pattern recognition to perform risk management in railway systems, in which a convolutional neural network (CNN) is applied as a supervised machine learning model to identify risks. Risk management in railway stations is challenging, however, because stations feature dynamic and complex conditions, and despite extensive efforts by industry associations and researchers to reduce the number of accidents and injuries in this field, such incidents still occur. The proposed model offers a beneficial method for obtaining more accurate motion data, and it detects adverse conditions as soon as possible by capturing fall, slip, and trip (FST) events, which represent high-risk outcomes, in stations. The framework of the presented method is generalisable to a wide range of locations and to additional types of risks.
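
    As a concrete illustration of the supervised CNN component, the sketch below shows a minimal image classifier over fall/slip/trip/normal classes; the architecture and class set are assumptions, since the paper's exact network is not given in the abstract.

        import torch.nn as nn

        class FSTClassifier(nn.Module):
            """Toy CNN for flagging fall, slip, and trip (FST) events in frames."""
            def __init__(self, num_classes=4):   # fall, slip, trip, normal
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(32, num_classes)

            def forward(self, frame):            # frame: (batch, 3, H, W)
                return self.classifier(self.features(frame).flatten(1))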

    Tag-free indoor fall detection using transformer network encoder and data fusion

    This work presents a radio frequency identification (RFID)-based technique to detect falls in the elderly. The proposed RFID-based approach offers a practical and efficient alternative to wearables, which can be uncomfortable to wear and may negatively impact user experience. The system utilises a strategically positioned passive ultra-high frequency (UHF) tag array, enabling unobtrusive monitoring of elderly individuals. This contactless solution queries battery-less tags and processes the received signal strength indicator (RSSI) and phase data. By leveraging the powerful data-fitting capabilities of a transformer model, which takes raw RSSI and phase data as input with minimal preprocessing, combined with data fusion, the system significantly improves activity recognition and fall detection accuracy, achieving an average rate exceeding 96.5%. This performance surpasses existing methods such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks, demonstrating its reliability and potential for practical implementation. Additionally, the system maintains good accuracy beyond a 3-m range using minimal battery-less UHF tags and a single antenna, enhancing its practicality and cost-effectiveness.
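
    A minimal sketch of the described encoder-only pipeline follows: per-frame RSSI and phase readings from the tag array are fused by concatenation and fed to a transformer encoder, with the pooled output classified into activities. The tag count, model width, and class count are illustrative assumptions.

        import torch
        import torch.nn as nn

        class TagArrayTransformer(nn.Module):
            def __init__(self, num_tags=8, d_model=64, num_classes=6):
                super().__init__()
                # Early fusion: concatenate RSSI and phase for every tag per frame.
                self.embed = nn.Linear(num_tags * 2, d_model)
                layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.cls = nn.Linear(d_model, num_classes)

            def forward(self, rssi, phase):      # each: (batch, time, num_tags)
                x = self.embed(torch.cat([rssi, phase], dim=-1))
                return self.cls(self.encoder(x).mean(dim=1))  # pool over time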

    Latest research trends in gait analysis using wearable sensors and machine learning: a systematic review

    Gait is locomotion attained through the movement of the limbs, and gait analysis examines movement patterns (normal or abnormal) over the gait cycle. It contributes to the development of various applications in the medical, security, sports, and fitness domains to improve overall outcomes. Among the many available technologies, two emerging technologies play a central role in modern-day gait analysis: A) wearable sensors, which provide a convenient, efficient, and inexpensive way to collect data, and B) Machine Learning Methods (MLMs), which enable high-accuracy gait feature extraction for analysis. Given their prominent roles, this paper presents a review of the latest trends in gait analysis using wearable sensors and Machine Learning (ML). It explores recent papers along with their publication details and key parameters such as sampling rates, MLMs, wearable sensors, the number of sensors, and their locations. Furthermore, the paper provides recommendations for selecting an MLM, a wearable sensor, and its location for a specific application. Finally, it suggests some future directions for gait analysis and its applications.

    Cybersecurity and the Digital Health: An Investigation on the State of the Art and the Position of the Actors

    Cybercrime is exposing the health domain to growing risk. The push towards strongly connecting citizens to health services through digitalization has undisputed advantages. Digital health allows remote care, the use of medical devices with high mechatronic and IT content and strong automation, and large-scale interconnection of hospital networks with an increasingly effective exchange of data. However, all this requires a great cybersecurity commitment, one that must start with scholars in research and then reach the stakeholders. New devices and technological solutions are increasingly breaking into healthcare and are able to change the processes of interaction in the health domain. This requires cybersecurity to become a vital part of patient safety, through changes in human behaviour, technology, and processes, as part of a complete solution. All professionals involved in cybersecurity in the health domain were invited to contribute their experiences. This book contains contributions from various experts across different fields. It addresses aspects of cybersecurity in healthcare relating to technological advances and emerging risks, along with the new boundaries of this field and the impact of COVID-19 on sectors such as mHealth. We dedicate the book to all those, in their different roles, involved in cybersecurity in the health domain.

    TriSense: RFID, radar, and USRP-based hybrid sensing system for enhanced sensing and monitoring

    This thesis presents a comprehensive approach to contactless human activity recognition (HAR) using the capabilities of three distinct technologies: radio frequency identification (RFID), radar, and the universal software-defined radio peripheral (USRP) for capturing and processing Wi-Fi-based signals. These technologies are then fused to enhance smart healthcare systems. The study initially utilises USRP devices to analyse Wi-Fi channel state information (CSI), choosing this over received signal strength for more accurate activity recognition. It employs a combination of machine learning and hybrid deep learning algorithms, such as the super learner and LSTM-CNN, for precise activity localisation. Subsequently, the study incorporates a transparent RFID tag wall (TRT-Wall) that employs a passive ultra-high frequency (UHF) RFID tag array. This RFID system has proven highly accurate in distinguishing between various activities, including sitting, standing, leaning, falling, and walking in two directions. Its effectiveness and non-intrusiveness make it particularly suited for elderly care, achieved using a modified version of the Transformer model without a decoder. Furthermore, a significant advancement within this study is the creation of a novel fusion system (RFiDARFusion), which combines RFID and radar technologies. This system employs a long short-term memory variational autoencoder (LSTM-VAE) fusion model, utilising RFID amplitude and radar RSSI data. This fusion approach significantly improves accuracy in challenging scenarios, such as those involving long-range and non-line-of-sight conditions. The RFiDARFusion system notably improves the detection of complex activities, highlighting its potential to reduce healthcare costs and enhance the quality of life for elderly patients in assisted living facilities. Overall, this thesis highlights the significant potential of radio frequency technologies with artificial intelligence, along with their combined application, to develop robust, privacy-conscious, and cost-effective solutions for healthcare and assisted living monitoring systems.
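
    As a sketch of the RFiDARFusion idea, the snippet below fuses RFID and radar feature streams by concatenation and encodes them with an LSTM into a variational latent state (an LSTM-VAE encoder). All dimensions and names are assumptions for exposition, not the thesis implementation.

        import torch
        import torch.nn as nn

        class LSTMVAEFusion(nn.Module):
            def __init__(self, rfid_dim=8, radar_dim=8, hidden=64, latent=16):
                super().__init__()
                self.lstm = nn.LSTM(rfid_dim + radar_dim, hidden, batch_first=True)
                self.mu = nn.Linear(hidden, latent)
                self.logvar = nn.Linear(hidden, latent)

            def forward(self, rfid, radar):      # each: (batch, time, dim)
                fused = torch.cat([rfid, radar], dim=-1)
                _, (h, _) = self.lstm(fused)
                mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
                # Reparameterisation trick: sample the latent motion state.
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
                return z, mu, logvar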