11 research outputs found

    A Tree-structure Convolutional Neural Network for Temporal Features Exaction on Sensor-based Multi-resident Activity Recognition

    With the proliferation of sensor devices deployed in smart homes, activity recognition has attracted huge interest, yet most existing works assume that there is only one inhabitant. In reality there are generally multiple residents at home, which makes recognising activities considerably more challenging. In addition, many conventional approaches rely on manual time-series segmentation that ignores the inherent characteristics of events, and their heuristic, hand-crafted feature generation algorithms struggle to extract the distinctive features needed to accurately classify different activities. To address these issues, we propose an end-to-end Tree-Structure Convolutional neural network based framework for Multi-Resident Activity Recognition (TSC-MRAR). First, we treat each sample as an event and obtain the current event embedding from the previous sensor readings in the sliding window, without splitting the time-series data. Then, to generate temporal features automatically, a tree-structure network is designed to derive the temporal dependence of nearby readings. The extracted features are fed into a fully connected layer, which jointly learns the resident labels and the activity labels. Finally, experiments on CASAS datasets demonstrate the high performance of our model in multi-resident activity recognition compared to state-of-the-art techniques. Comment: 12 pages, 4 figures.
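
    The abstract above outlines the pipeline only at a high level (event embeddings from a sliding window, a tree-structure network for temporal features, and joint resident/activity prediction) and does not give the exact architecture. The following is a minimal, hypothetical PyTorch sketch of that general idea; the layer sizes, window length, sensor vocabulary and label counts are invented for illustration and are not the paper's configuration.

```python
# Hypothetical sketch of a tree-structured temporal feature extractor with
# joint resident/activity heads, loosely inspired by the TSC-MRAR abstract.
# All sizes (sensor vocabulary, embedding dim, window, label counts) are assumptions.
import torch
import torch.nn as nn

class TreeTemporalCNN(nn.Module):
    def __init__(self, n_sensors=64, emb_dim=32, window=8,
                 n_residents=2, n_activities=15):
        super().__init__()
        assert window & (window - 1) == 0, "window must be a power of two"
        self.embed = nn.Embedding(n_sensors, emb_dim)        # sensor event -> embedding
        # one conv per tree level; each level merges adjacent nodes pairwise
        n_levels = int(torch.log2(torch.tensor(float(window))))
        self.levels = nn.ModuleList([
            nn.Conv1d(emb_dim, emb_dim, kernel_size=2, stride=2)
            for _ in range(n_levels)
        ])
        self.resident_head = nn.Linear(emb_dim, n_residents)
        self.activity_head = nn.Linear(emb_dim, n_activities)

    def forward(self, events):                   # events: (batch, window) sensor ids
        x = self.embed(events).transpose(1, 2)   # (batch, emb_dim, window)
        for conv in self.levels:                 # pairwise merge up the tree
            x = torch.relu(conv(x))
        x = x.squeeze(-1)                        # (batch, emb_dim) root feature
        return self.resident_head(x), self.activity_head(x)

model = TreeTemporalCNN()
dummy = torch.randint(0, 64, (4, 8))             # 4 windows of 8 sensor events
res_logits, act_logits = model(dummy)
loss = (nn.functional.cross_entropy(res_logits, torch.zeros(4, dtype=torch.long))
        + nn.functional.cross_entropy(act_logits, torch.zeros(4, dtype=torch.long)))
```

    Each tree level halves the window by merging adjacent event representations with a shared stride-2 convolution, so the root vector summarizes the whole window before the two classification heads; this mirrors the joint resident/activity objective described in the abstract, not the authors' exact design.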

    Detection of hypoglycemic events through wearable sensors

    Diabetic patients depend on external substances to balance their blood glucose level. To control this level, they historically needed to sample a drop of blood from their hand and have it analyzed. Recently, other directions have emerged that offer alternative ways to estimate glucose levels. In this paper, we present our ongoing work on a framework for inferring semantically annotated glycemic events for the patient, which leverages mobile wearable sensors on a sport-belt.

    Activities of daily life recognition using process representation modelling to support intention analysis

    Purpose – This paper aims to apply a range of traditional classification- and semantic reasoning-based techniques to recognise activities of daily life (ADLs). ADL recognition plays an important role in tracking functional decline among elderly people who suffer from Alzheimer's disease. Accurate recognition enables smart environments to support and assist the elderly to lead an independent life for as long as possible. However, representing the complex structure of an ADL in a flexible manner remains a challenge.
    Design/methodology/approach – This paper presents an ADL recognition approach that uses a hierarchical structure for the representation and modelling of activities, their associated tasks and their relationships. The study describes an approach to constructing ADLs based on Asbru, a task-specific and intention-oriented plan representation language. The proposed method is particularly flexible and adaptable, allowing caregivers to model daily schedules for Alzheimer's patients.
    Findings – A proof-of-concept prototype evaluation has been conducted to validate the proposed ADL recognition engine, which achieves recognition results comparable with existing ADL recognition approaches.
    Originality/value – The developed ADL recognition approach is novel in that it takes into account all relationships and dependencies within the modelled ADLs. This is very useful when conducting activity recognition with very limited features.
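
    Since the abstract centres on a hierarchical, task-oriented representation of ADLs, a tiny sketch of such a hierarchy may help. This is a generic Python data structure, not Asbru, and the activity, task and ordering names are invented for illustration.

```python
# Minimal, hypothetical sketch of a hierarchical ADL model: an activity is a
# named node whose children are sub-tasks with a simple ordering constraint.
# This only illustrates the general idea of a task hierarchy; it is not Asbru.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    ordering: str = "sequential"          # "sequential" or "any-order" (assumed vocabulary)
    children: List["Task"] = field(default_factory=list)

    def leaves(self) -> List[str]:
        """Flatten the hierarchy into its atomic, sensor-level tasks."""
        if not self.children:
            return [self.name]
        out = []
        for child in self.children:
            out.extend(child.leaves())
        return out

make_tea = Task("make_tea", "sequential", [
    Task("boil_water", "sequential",
         [Task("fill_kettle"), Task("switch_on_kettle")]),
    Task("prepare_cup", "any-order",
         [Task("get_cup"), Task("add_teabag")]),
    Task("pour_water"),
])
print(make_tea.leaves())   # atomic tasks a recogniser would try to observe
```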

    A hybrid approach to recognising activities of daily living from object use in the home environment

    Accurate recognition of Activities of Daily Living (ADL) plays an important role in providing assistance and support to the elderly and cognitively impaired. Current knowledge-driven and ontology-based techniques model object concepts from assumptions and everyday common knowledge of object use for routine activities. Modelling activities from such information can lead to incorrect recognition of particular routine activities, and hence to failure to detect abnormal activity trends. In cases where such prior knowledge is not available, these techniques become virtually unusable. A significant step in the recognition of activities is therefore the accurate discovery of object usage for specific routine activities. This paper presents a hybrid framework that automatically consumes sensor data and associates object usage with routine activities using Latent Dirichlet Allocation (LDA) topic modelling. This process enables the recognition of simple activities of daily living from object usage and interactions in the home environment. The evaluation of the proposed framework on the Kasteren and Ordonez datasets shows that it yields better results than existing techniques.
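
    As the framework associates object usage with activities via LDA topic modelling, a minimal sketch of that general step is shown below using scikit-learn. The object-use "documents", object names and topic count are invented, and this is not the paper's actual pipeline or its datasets.

```python
# Illustrative sketch: discover activity-like topics from bag-of-object-use
# "documents" with LDA. Everything below is example data, not the paper's.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# each "document" = object-use events observed in one time slice
slices = [
    "kettle cup fridge teabag kettle",
    "toothbrush tap toothbrush",
    "pan stove fridge plate fork",
    "cup kettle teabag",
    "tap soap towel",
]
X = CountVectorizer().fit_transform(slices)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
print(lda.transform(X))   # per-slice topic mixture; topics act as candidate activities
```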

    Combining ontological and temporal formalisms for composite activity modelling and recognition in smart homes

    Activity recognition is essential in providing activity assistance for users in smart homes. While significant progress has been made on single-user, single-activity recognition, real-time, progressive recognition of composite activities remains a challenge. This paper introduces a hybrid ontological and temporal approach to composite activity modelling and recognition that extends the existing ontology-based, knowledge-driven approach. The compelling feature of the approach is that it combines ontological and temporal knowledge representation formalisms to provide powerful representation capabilities for activity modelling. The paper describes in detail ontological activity modelling, which establishes relationships between activities and their involved entities, and temporal activity modelling, which defines relationships between the constituent activities of a composite activity. As an essential part of the model, the paper also presents methods for developing temporal entailment rules to support the interpretation and inference of composite activities. In addition, the paper outlines an integrated architecture for composite activity recognition and elaborates a unified activity recognition algorithm that supports the recognition of both simple and composite activities. The approach has been implemented in a feature-rich prototype system upon which testing and evaluation have been conducted. Initial experimental results have shown average recognition accuracy of 100% and 88.26% for simple and composite activities, respectively.
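
    To make the idea of temporal entailment rules concrete, here is a small, hypothetical sketch in plain Python: a composite activity is inferred when the time intervals of its recognised constituents satisfy an Allen-style temporal relation. The rule, activity names and intervals are invented, and the paper's ontology-based machinery is not reproduced.

```python
# Hypothetical sketch of a temporal entailment check over recognised simple
# activities; intervals, relations and the rule table are example assumptions.
from collections import namedtuple

Interval = namedtuple("Interval", "start end")

def before(a, b):   return a.end < b.start
def during(a, b):   return b.start < a.start and a.end < b.end
def overlaps(a, b): return a.start < b.start < a.end < b.end

RULES = {
    # composite activity: (constituent_1, constituent_2, required relation)
    "prepare_breakfast": ("make_tea", "make_toast", overlaps),
}

def infer_composites(observed):
    """observed: dict activity_name -> Interval of a recognised simple activity."""
    found = []
    for composite, (a, b, relation) in RULES.items():
        if a in observed and b in observed and relation(observed[a], observed[b]):
            found.append(composite)
    return found

print(infer_composites({"make_tea": Interval(0, 300),
                        "make_toast": Interval(120, 420)}))
```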

    A Hybrid Ontological and Temporal Approach for Composite Activity Modelling

    Activity modelling is required to support activity recognition and, further, to provide activity assistance for users in smart homes. Current research in knowledge-driven activity modelling has mainly focused on single activities, with little attention paid to the modelling of composite activities such as interleaved and concurrent activities. This paper presents a hybrid approach to composite activity modelling that combines ontological and temporal knowledge modelling formalisms. Ontological modelling constructors, i.e. concepts and properties for describing composite activities, have been developed, and temporal modelling operators have been introduced. As such, the resulting approach is able to model both the static and dynamic characteristics of activities. Several composite activity models have been created based on the proposed approach. In addition, a set of inference rules has been provided for use in composite activity recognition. A concurrent meal-preparation scenario is used to illustrate both the proposed approach and the associated reasoning mechanisms for composite activity recognition.
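
    A hedged illustration of what such a hybrid model might hold follows: each composite activity carries a static, ontological part (constituent activities and their involved entities) and a dynamic, temporal part (an operator such as "concurrent"). The data structures and the meal-preparation entities are invented for the example and do not reproduce the paper's modelling constructors.

```python
# Hedged illustration (not the paper's ontology): a composite activity model
# combining static entity knowledge with a temporal composition operator.
from dataclasses import dataclass
from typing import List

@dataclass
class SimpleActivity:
    name: str
    objects: List[str]        # static, ontological part: involved entities
    location: str

@dataclass
class CompositeActivity:
    name: str
    operator: str             # dynamic, temporal part: "sequence" | "concurrent" | "interleaved"
    parts: List[SimpleActivity]

make_tea  = SimpleActivity("make_tea",  ["kettle", "cup", "teabag"], "kitchen")
make_soup = SimpleActivity("make_soup", ["pan", "stove", "spoon"],   "kitchen")

prepare_meal = CompositeActivity("prepare_meal", "concurrent", [make_tea, make_soup])
# all objects a recogniser would need to observe for this composite activity
print({obj for part in prepare_meal.parts for obj in part.objects})
```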

    A Tool-Based Methodology For Long-Term Activity Monitoring

    In recent years, remarkable progress has been reported in the field of activity monitoring. However, despite significant breakthroughs in recognizing activities from sensor data, there is still a great deal more to accomplish, especially compared to fields pursuing related goals, such as computer vision and speech recognition. A key factor in moving activity monitoring forward is to enable researchers to build on each other's work more systematically via reproducible research. Besides providing sensor data, reproducibility in activity monitoring requires all aspects of a result to be available to the research community, including the collection, processing and interpretation of measurements. This paper presents a tool-based methodology, dedicated to monitoring the activities of daily living of older adults, that supports reproducible research. The methodology covers the key steps in defining a monitoring process for these activities, from sensor measurements to actionable activity information. These steps are uniformly described with concise, high-level rules. Additionally, to allow caregivers to monitor older adults' functional decline and determine what assisting support is needed, our methodology includes a visualization tool dedicated to handling user activities longitudinally. The proposed approach is validated by a set of rules dedicated to monitoring activities of community-dwelling older adults in their sensor-equipped homes. A preliminary study has been conducted to evaluate the intra- and inter-participant consistency of the results produced by our methodology, using longitudinal datasets collected over several months. Using Signal Detection Theory, this study has shown that our monitoring rules mostly produced the same interpretations as an expert in activity analysis who manually analyzed the sensor datasets.
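
    As the evaluation compares rule-produced interpretations against an expert using Signal Detection Theory, the sketch below shows one common way such a comparison can be scored (hit rate, false-alarm rate and d-prime). The label vectors are invented, and this only illustrates the metric, not the study's actual analysis.

```python
# Illustrative Signal Detection Theory scoring: treat the expert's labels as
# ground truth and score the rule-based interpretations. Example data only.
from scipy.stats import norm

expert = [1, 1, 0, 1, 0, 0, 1, 0]   # expert says activity occurred (1) or not (0)
rules  = [1, 0, 0, 1, 1, 0, 1, 0]   # what the monitoring rules produced

hits         = sum(e == 1 and r == 1 for e, r in zip(expert, rules))
misses       = sum(e == 1 and r == 0 for e, r in zip(expert, rules))
false_alarms = sum(e == 0 and r == 1 for e, r in zip(expert, rules))
correct_rej  = sum(e == 0 and r == 0 for e, r in zip(expert, rules))

hit_rate = hits / (hits + misses)
fa_rate  = false_alarms / (false_alarms + correct_rej)
d_prime  = norm.ppf(hit_rate) - norm.ppf(fa_rate)   # sensitivity of the rules
print(hit_rate, fa_rate, d_prime)
```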

    Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

    The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, substantial challenges can influence the performance of a recognition system in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, plenty of deep methods have been investigated to address these challenges in activity recognition. In this study, we present a survey of state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy that structures the deep methods by the challenges they address. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress. At the end of this work, we discuss open issues and provide some insights for future directions.

    Dynamic Sensor Data Segmentation for Real time Activity Recognition

    Approaches and algorithms for activity recognition have recently made substantial progress due to advancements in pervasive and mobile computing, smart environments and ambient assisted living. Nevertheless, real-time continuous activity recognition remains difficult to achieve, as sensor data segmentation is still a challenge. This paper presents a novel approach to real-time sensor data segmentation for continuous activity recognition. Central to the approach is a dynamic segmentation model, based on the notion of varied time windows, which can shrink and expand the segmentation window size by using temporal information of sensor data and activities as well as the state of activity recognition. The paper first analyzes the characteristics of activities of daily living, from which a segmentation model applicable to a wide range of activity recognition scenarios is motivated and developed. It then describes the working mechanism and relevant algorithms of the model in the context of knowledge-driven activity recognition based on ontologies. The presented approach has been implemented in a prototype system and evaluated in a number of experiments. Results have shown average recognition accuracy above 83% in all experiments for real-time activity recognition, which validates the approach and the underlying model.
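
    The dynamic segmentation model is described only at a high level, so the following is a hypothetical sketch of the general mechanism: a window that grows while sensor events arrive close together and is closed when a temporal gap, a size cap, or a confident recognition occurs. The thresholds, the recogniser stub and the event stream are assumptions, not the paper's algorithm.

```python
# Hedged sketch of a dynamic segmentation window in the spirit of the abstract.
# gap_s, max_len, the confident() stub and the event stream are example values.
def segment(events, gap_s=30.0, max_len=50, confident=lambda seg: False):
    """events: iterable of (timestamp_seconds, sensor_id); yields segments."""
    window = []
    for ts, sensor in events:
        if window and (ts - window[-1][0] > gap_s or len(window) >= max_len):
            yield window          # temporal gap or size cap: close the segment
            window = []
        window.append((ts, sensor))
        if confident(window):     # recogniser already sure: close early (shrink)
            yield window
            window = []
    if window:
        yield window

stream = [(0, "kettle"), (5, "cup"), (12, "teabag"), (90, "tap"), (95, "soap")]
for seg in segment(stream):
    print(seg)
```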