44 research outputs found

    Multioccupant Activity Recognition in Pervasive Smart Home Environments

    Get PDF
    Human activity recognition in smart home environments has been the center of a lot of research for many years now. The aim is to recognize the sequence of actions performed by a specific person using sensor readings. Most of the research has been devoted to activity recognition of single occupants in the environment. However, living environments are usually inhabited by more than one person, and possibly pets as well. Hence, human activity recognition in the context of multi-occupancy is more general, but also more challenging. The difficulty comes mainly from two aspects: resident identification, known as data association, and the diversity of human activities. The present survey paper provides an overview of existing approaches and current practices for activity recognition in multi-occupant smart homes. It presents the latest developments and highlights the open issues in this field.

    Achieving multi-user capabilities through an indoor positioning system based on BLE beacons

    Get PDF
    The multi-user challenge is one of the issues that need to be addressed in order to facilitate the adoption of intelligent environments in everyday activities. The development of multi-user capabilities in smart homes is closely related to the creation of effective indoor positioning systems. This research work reports on the development and evaluation of an indoor positioning system that allows multi-user management in a smart home environment. The design of the BLE-based system is presented, as well as its implementation and evaluation in the Smart Spaces Lab at Middlesex University. The system is validated through a case study in which it is used to develop multi-user capabilities in two context-aware systems of the laboratory. Video demonstrations are provided to illustrate the multi-user capabilities that were developed in the validation.
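
    The abstract does not detail the positioning algorithm, but a common way to build a BLE-beacon positioning layer is to convert per-beacon RSSI into range estimates via a log-distance path-loss model and then multilaterate. The sketch below illustrates that general approach only; the constants (TX_POWER, PATH_LOSS_EXPONENT) and beacon coordinates are illustrative assumptions, not values from the paper.

```python
# Hypothetical BLE positioning sketch: log-distance path loss + least squares.
import numpy as np
from scipy.optimize import least_squares

TX_POWER = -59.0          # assumed RSSI at 1 m, in dBm (device-specific)
PATH_LOSS_EXPONENT = 2.0  # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Invert the log-distance path-loss model to estimate range in metres."""
    return 10 ** ((TX_POWER - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def locate(beacons, rssi_readings):
    """Estimate a 2-D position from per-beacon RSSI via nonlinear least squares."""
    ranges = np.array([rssi_to_distance(r) for r in rssi_readings])
    def residuals(p):
        return np.linalg.norm(beacons - p, axis=1) - ranges
    return least_squares(residuals, x0=beacons.mean(axis=0)).x

# Example: three beacons at known positions, one user's RSSI readings.
beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
print(locate(beacons, [-65.0, -70.0, -72.0]))
```

    Multi-user support then amounts to running this estimator once per tracked device, since each smartphone or wearable reports its own RSSI vector.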

    Context-Aware Data Association for Multi-Inhabitant Sensor-Based Activity Recognition

    Get PDF
    Recognizing the activities of daily living (ADLs) in multi-inhabitant settings is a challenging task. One of the major challenges is the so-called data association problem: how to assign to each user the environmental sensor events that he/she actually triggered? In this paper, we tackle this problem with a context-aware approach. Each user in the home wears a smartwatch, which is used to gather several kinds of high-level context information, like the location in the home (thanks to a micro-localization infrastructure) and the posture (e.g., sitting or standing). Context data is used to associate sensor events with the users who most likely triggered them. We show the impact of context reasoning in our framework on a dataset where up to 4 subjects perform ADLs at the same time (collaboratively or individually). We also report our experience and the lessons learned in deploying a running prototype of our method.
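
    As a minimal sketch of the data-association idea described above (assign each sensor event to the user whose current context best explains it), the snippet below scores candidate users by co-location and posture compatibility. The scoring weights, field names, and event format are illustrative assumptions; the paper's actual reasoning model is richer.

```python
# Hedged sketch of context-aware data association with smartwatch context.
from dataclasses import dataclass

@dataclass
class UserContext:
    user_id: str
    location: str   # e.g. "kitchen", from the micro-localization infrastructure
    posture: str    # e.g. "standing", inferred from the smartwatch

def associate(sensor_event, contexts):
    """Assign the event to the user whose context best explains it."""
    def score(ctx):
        s = 0
        if ctx.location == sensor_event["location"]:
            s += 2                                  # co-location is strong evidence
        if ctx.posture in sensor_event["compatible_postures"]:
            s += 1                                  # e.g. a fridge door implies standing
        return s
    return max(contexts, key=score).user_id

event = {"sensor": "fridge_door", "location": "kitchen",
         "compatible_postures": {"standing"}}
users = [UserContext("alice", "kitchen", "standing"),
         UserContext("bob", "living_room", "sitting")]
print(associate(event, users))  # -> "alice"
```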

    Towards Active Learning Interfaces for Multi-Inhabitant Activity Recognition

    Get PDF
    Semi-supervised approaches for activity recognition are a promising way to address the labeled-data scarcity problem. These methods only require a small training set in order to be initialized, and the model is continuously updated and improved over time. Among the several solutions existing in the literature, active learning is emerging as an effective technique to significantly boost the recognition rate: when the model is uncertain about the current activity performed by the user, the system asks her to provide the ground truth. This feedback is then used to update the recognition model. While active learning has mostly been proposed in single-inhabitant settings, several questions arise when such a system has to be implemented in a realistic environment with multiple users. Whom should the system ask for feedback when it is uncertain about a collaborative activity? In this paper, we investigate this and other questions on this topic, proposing a preliminary study of the requirements of an active learning interface for multi-inhabitant settings. In particular, we formalize the problem and describe the solutions adopted in our system prototype.
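
    The core loop the abstract describes (query the user only when the model is uncertain, then update on the answer) can be sketched as below. The classifier choice, confidence threshold, and feature shapes are assumptions for illustration, not the paper's design; the data here is random and only demonstrates the control flow.

```python
# Illustrative uncertainty-driven active-learning loop.
import numpy as np
from sklearn.linear_model import SGDClassifier

THRESHOLD = 0.6  # assumed confidence level below which the system asks for a label

rng = np.random.default_rng(0)
X_seed = rng.normal(size=(20, 4))          # small labeled seed set (toy data)
y_seed = rng.integers(0, 2, 20)
model = SGDClassifier(loss="log_loss").fit(X_seed, y_seed)

def process(window, ask_user):
    """Classify a feature window; query a resident if the model is uncertain."""
    proba = model.predict_proba(window.reshape(1, -1))[0]
    if proba.max() < THRESHOLD:
        label = ask_user()                              # e.g. prompt on a smartwatch
        model.partial_fit(window.reshape(1, -1), [label])
        return label
    return int(proba.argmax())

print(process(rng.normal(size=4), ask_user=lambda: 1))
```

    The open question raised by the paper is precisely what ask_user should do in a multi-inhabitant home: which resident to interrupt, and how, when the uncertain activity is collaborative.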

    A Tree-structure Convolutional Neural Network for Temporal Feature Extraction on Sensor-based Multi-resident Activity Recognition

    Full text link
    With the proliferation of sensor devices in smart homes, activity recognition has attracted huge interest, and most existing works assume that there is only one inhabitant. In reality, however, there are generally multiple residents at home, which makes activity recognition considerably more challenging. In addition, many conventional approaches rely on manual time-series data segmentation, ignoring the inherent characteristics of events, and their heuristic hand-crafted feature generation algorithms struggle to extract the distinctive features needed to accurately classify different activities. To address these issues, we propose an end-to-end Tree-Structure Convolutional neural network based framework for Multi-Resident Activity Recognition (TSC-MRAR). First, we treat each sample as an event and obtain the current event embedding from the previous sensor readings in the sliding window, without splitting the time-series data. Then, in order to automatically generate temporal features, a tree-structure network is designed to derive the temporal dependence of nearby readings. The extracted features are fed into the fully connected layer, which jointly learns the resident labels and the activity labels. Finally, experiments on CASAS datasets demonstrate the high performance of our model in multi-resident activity recognition compared to state-of-the-art techniques.

    Comment: 12 pages, 4 figures
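
    One plausible reading of the tree-structure idea is a binary-tree stack of stride-2 convolutions that merges neighbouring event embeddings level by level until a single root vector remains, which then feeds two output heads (resident and activity). The PyTorch sketch below follows that reading; all layer sizes and the exact merge scheme are assumptions, so consult the paper for the real TSC-MRAR architecture.

```python
# Rough sketch of a tree-structured CNN with joint resident/activity heads.
import torch
import torch.nn as nn

class TreeCNN(nn.Module):
    def __init__(self, feat_dim=16, window=8, n_residents=4, n_activities=10):
        super().__init__()
        depth = window.bit_length() - 1            # log2(window) merge levels
        self.levels = nn.ModuleList(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=2, stride=2)
            for _ in range(depth))
        self.resident_head = nn.Linear(feat_dim, n_residents)
        self.activity_head = nn.Linear(feat_dim, n_activities)

    def forward(self, x):                          # x: (batch, feat_dim, window)
        for conv in self.levels:                   # halve the sequence each level
            x = torch.relu(conv(x))
        x = x.squeeze(-1)                          # (batch, feat_dim) at the root
        return self.resident_head(x), self.activity_head(x)

model = TreeCNN()
events = torch.randn(2, 16, 8)                     # 2 windows of 8 event embeddings
res_logits, act_logits = model(events)
print(res_logits.shape, act_logits.shape)          # (2, 4) (2, 10)
```

    Training both heads with a summed cross-entropy loss would give the joint resident/activity learning the abstract describes.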

    Using an Indoor Localization System for Activity Recognition

    Get PDF
    Recognizing the activity performed by users is important in many application domains, from e-health to home automation. This paper explores the use of a fine-grained indoor localization system, based on ultra-wideband, for activity recognition. The user is supposed to wear a number of active tags. The position of active tags is first determined with respect to the space where the user is moving, then some position-independent metrics are estimated and given as input to a previously trained system. Experimental results show that accuracy values as high as ~95% can be obtained when using a personalized model.
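
    The abstract does not say which position-independent metrics are used; one natural choice, sketched below purely for illustration, is the set of pairwise distances between body-worn tags, which does not change with where the user stands in the room. The classifier, tag count, and the random toy data are assumptions, not the paper's setup.

```python
# Hedged sketch: UWB tag positions -> position-independent features -> classifier.
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier

def tag_features(tag_positions):
    """Pairwise distances between body-worn tags: invariant to the user's
    absolute position in the room, so they generalize across locations."""
    return np.array([np.linalg.norm(a - b)
                     for a, b in combinations(tag_positions, 2)])

rng = np.random.default_rng(1)
# Toy training data: 50 frames, 4 tags in 3-D -> 6 pairwise distances each.
frames = rng.normal(size=(50, 4, 3))
X = np.array([tag_features(f) for f in frames])
y = rng.integers(0, 3, size=50)                    # 3 dummy activity classes

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([tag_features(frames[0])]))
```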