
    An Ontology-Based Hybrid Approach to Activity Modeling for Smart Homes

    The file attached to this record is the author's final peer reviewed version. The Publisher's final version can be found by following the DOI link.
    Activity models play a critical role in activity recognition and assistance in ambient assisted living. Existing approaches to activity modeling suffer from a number of problems, e.g., cold-start, model reusability, and incompleteness. In an effort to address these problems, we introduce an ontology-based hybrid approach to activity modeling that combines domain-knowledge-based model specification and data-driven model learning. Central to the approach is an iterative process that begins with "seed" activity models created by ontological engineering. The "seed" models are deployed and subsequently evolved through incremental activity discovery and model update. While our previous work has detailed ontological activity modeling and activity recognition, this paper focuses on the systematic hybrid approach and the associated methods and inference rules for learning new activities and user activity profiles. The approach has been implemented in a feature-rich assistive living system. Analysis of the experiments conducted has been undertaken in an effort to test and evaluate the activity learning algorithms and associated mechanisms.
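    The iterative "seed and evolve" loop described above can be illustrated with a deliberately minimal sketch. All names here (`recognise`, `update_models`, the activity labels, and the object-set representation of a model) are invented for illustration; the paper's actual models are ontological, not flat sets.

    ```python
    # Hypothetical sketch: a "seed" activity model maps an activity label to the
    # set of sensed objects it involves. Event traces that no seed model explains
    # are collected, and recurring unexplained traces become new activity models.

    def recognise(seed_models, events):
        """Return the first activity whose required objects all appear in the trace."""
        observed = set(events)
        for activity, required in seed_models.items():
            if required <= observed:
                return activity
        return None

    def update_models(seed_models, trace, history, min_support=2):
        """Record unexplained traces; promote a trace to a model once it recurs."""
        if recognise(seed_models, trace) is None:
            candidate = frozenset(trace)
            history.append(candidate)
            if history.count(candidate) >= min_support:
                seed_models[f"discovered_{len(seed_models)}"] = set(candidate)
        return seed_models, history

    seeds = {"make_tea": {"kettle", "cup", "teabag"}}
    history = []
    # An unexplained trace seen twice is learned as a new activity model.
    seeds, history = update_models(seeds, ["toaster", "bread"], history)
    seeds, history = update_models(seeds, ["toaster", "bread"], history)
    ```

    The sketch only captures the control flow of the hybrid approach (knowledge-driven seeds, data-driven discovery), not the ontological reasoning the paper actually performs.
    
    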

    Combining ontological and temporal formalisms for composite activity modelling and recognition in smart homes

    Activity recognition is essential in providing activity assistance for users in smart homes. While significant progress has been made in single-user, single-activity recognition, real-time progressive composite activity recognition remains a challenge. This paper introduces a hybrid ontological and temporal approach to composite activity modelling and recognition that extends an existing ontology-based, knowledge-driven approach. The compelling feature of the approach is that it combines ontological and temporal knowledge representation formalisms to provide powerful representation capabilities for activity modelling. The paper describes in detail ontological activity modelling, which establishes relationships between activities and their involved entities, and temporal activity modelling, which defines relationships between the constituent activities of a composite activity. As an essential part of the model, the paper also presents methods for developing temporal entailment rules to support the interpretation and inference of composite activities. In addition, the paper outlines an integrated architecture for composite activity recognition and elaborates a unified activity recognition algorithm that supports the recognition of both simple and composite activities. The approach has been implemented in a feature-rich prototype system upon which testing and evaluation have been conducted. Initial experimental results show average recognition accuracies of 100% and 88.26% for simple and composite activities, respectively.
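    Temporal entailment rules of the kind described above are often expressed over Allen-style interval relations. The following sketch uses an assumed, simplified subset of those relations and made-up activity names and rules; the paper's actual rules operate inside an ontological formalism.

    ```python
    # Hypothetical sketch: classify the temporal relation between two activity
    # intervals, then check entailment rules that define a composite activity
    # in terms of relations between its constituent activities.

    def relation(a, b):
        """Return a simplified Allen-style relation between intervals (start, end)."""
        if a[1] < b[0]:
            return "before"
        if a[0] < b[0] < a[1] < b[1]:
            return "overlaps"
        if b[0] <= a[0] and a[1] <= b[1]:
            return "during"
        return "other"

    def is_composite(constituents, rules):
        """True if every (activity_i, activity_j, expected_relation) rule holds."""
        return all(relation(constituents[i], constituents[j]) == rel
                   for i, j, rel in rules)

    # Invented rule: "prepare breakfast" requires boiling water before making toast.
    intervals = {"boil_water": (0, 5), "make_toast": (6, 9)}
    rules = [("boil_water", "make_toast", "before")]
    ```

    A real implementation would cover all thirteen Allen relations and derive the rules from the ontology rather than hard-coding them.
    
    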

    Context-Aware Personalized Activity Modeling in Concurrent Environment

    Activity recognition, which has a pervasive impact on smart homes, faces one of its biggest challenges in learning a complete personalized activity model from a generic model, especially for parallel and interleaved activities. Furthermore, an inhabitant's mistaken object interaction may result in a spurious activity in the smart home. Identifying and removing such spurious activities is another challenging task. Knowledge-driven techniques for recognizing activity models are static in nature, lack contextual representation, and may not handle spurious actions in parallel/interleaved activities. In this paper, a novel approach is presented for completing the personalized model specific to each inhabitant of a smart home from an (incomplete) generic model; it recognizes sequential, parallel, and interleaved activities dynamically while removing spurious activities semantically. A comprehensive set of experiments and results, based on the number of correct (true positive) and incorrect (false negative) recognitions of activities, demonstrates the effectiveness of the presented approach within a smart home.
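    The semantic removal of spurious interactions can be pictured with a small sketch. The activity model and object names below are invented; the paper's check is performed against a richer semantic model, not a flat set.

    ```python
    # Hypothetical sketch: given the activity currently being performed, object
    # interactions that the activity's model does not mention are flagged as
    # spurious (e.g. a mistaken interaction) and filtered out.

    GENERIC_MODELS = {
        "prepare_coffee": {"kettle", "mug", "coffee_jar", "spoon"},
    }

    def filter_spurious(activity, interactions):
        """Split interactions into model-consistent and spurious lists."""
        allowed = GENERIC_MODELS[activity]
        valid = [obj for obj in interactions if obj in allowed]
        spurious = [obj for obj in interactions if obj not in allowed]
        return valid, spurious

    valid, spurious = filter_spurious("prepare_coffee",
                                      ["kettle", "tv_remote", "mug"])
    ```
    
    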

    A knowledge-based approach towards human activity recognition in smart environments

    It has long been known that the population of older persons is rising. A recent report estimates that, globally, the share of the population aged 65 years or over is expected to increase from 9.3 percent in 2020 to around 16.0 percent in 2050 [1]. This has been one of the main sources of motivation for active research in the domain of human activity recognition (HAR) in smart homes. The ability to perform activities of daily living (ADL) without assistance from other people can be considered a reference for estimating the independent-living level of an older person. Conventionally, this has been assessed by health-care domain experts via a qualitative evaluation of the ADL. Since this evaluation is qualitative, it can vary based on the person being monitored and the caregiver's experience. A significant amount of research work is implicitly or explicitly aimed at augmenting the health-care domain expert's qualitative evaluation with quantitative data or knowledge obtained from HAR. From a medical perspective, there is a lack of evidence about the technology readiness level of smart-home architectures supporting older persons by recognizing ADL [2]. We hypothesize that this may be due to a lack of effective collaboration between smart-home researchers/developers and health-care domain experts, especially when considering HAR. We foresee an increase in HAR systems being developed in close collaboration with caregivers and geriatricians to support their qualitative evaluation of ADL with explainable quantitative outcomes of the HAR systems. This has been a motivation for the work in this thesis. The recognition of human activities, in particular ADL, need not be limited to supporting the health and well-being of older people. It can be relevant to home users in general. For instance, HAR could support digital assistants or companion robots in providing contextually relevant and proactive support to home users, whether young adults or old.
This has also been a motivation for the work in this thesis. Given our motivations, namely (i) facilitating iterative development and easing collaboration between HAR system researchers/developers and health-care domain experts in ADL, and (ii) robust HAR that can support digital assistants or companion robots, there is a need for a HAR framework that at its core is modular and flexible, to facilitate an iterative development process [3], which is an integral part of collaborative work involving develop-test-improve phases. At the same time, the framework should be intelligible, for the sake of enriched collaboration with health-care domain experts. Furthermore, it should be scalable, online, and accurate, to provide robust HAR that can enable many smart-home applications. The goal of this thesis is to design and evaluate such a framework. This thesis contributes to the domain of HAR in smart homes. In particular, the contribution can be divided into three parts. The first contribution is Arianna+, a framework for developing networks of ontologies, for knowledge representation and reasoning, that enable smart homes to perform human activity recognition online. The second contribution is OWLOOP, an API that supports the development of HAR system architectures based on Arianna+. It enables the use of the Web Ontology Language (OWL) by means of Object-Oriented Programming (OOP). The third contribution is the evaluation and exploitation of Arianna+ using the OWLOOP API. The exploitation of Arianna+ using the OWLOOP API has resulted in four HAR system implementations. The evaluations and results of these HAR systems emphasize the novelty of Arianna+.

    Dynamically Reconfigurable Online Self-organising Fuzzy Neural Network with Variable Number of Inputs for Smart Home Application

    A self-organising fuzzy-neural network (SOFNN) adapts its structure based on variations of the input data. Conventionally in such self-organising networks, the number of inputs providing the data is fixed. In this paper, we consider the situation where the number of inputs to a network changes dynamically during its online operation. We extend our existing work on a SOFNN such that the SOFNN can self-organise its structure based not only on its input data, but also according to changes in the number of its inputs. We apply the approach to a smart home application, where in certain situations some existing events may be removed or new events may emerge, and illustrate that our approach enhances cognitive reasoning in a dynamic smart home environment. In this case, the network identifies the removed and/or added events from the information received over time, and reconfigures its structure dynamically. We present results for different combinations of training and testing phases of the dynamically reconfigurable SOFNN using a set of realistic synthesized data. The results show the potential of the proposed method.

    Real-Time Sensor Observation Segmentation For Complex Activity Recognition Within Smart Environments

    Activity Recognition (AR) is at the heart of any type of assistive living system. One of the key challenges in AR is the segmentation of sensor events when an inhabitant performs simple or composite activities of daily living (ADLs). In addition, each inhabitant may follow a particular ritual or tradition in performing different ADLs, and these patterns may change over time. Many recent studies apply methods to segment and recognise generic ADLs performed in a composite manner. However, little has been explored in semantically distinguishing individual sensor events and directly passing them to the relevant ongoing/new atomic activities. This paper proposes an ontological model to capture generic knowledge of ADLs, together with methods that also take inhabitant-specific preferences into consideration when segmenting sensor events. A system implementation was developed, deployed, and evaluated against 84 use case scenarios. The results suggest that all sensor events were adequately segmented with 98% accuracy, and average classification times of 3971 ms and 62183 ms were recorded for single and composite ADL scenarios, respectively.
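    The routing of individual sensor events to the relevant activity can be pictured with a toy sketch. The activity models and object names are invented, and the flat-set matching stands in for the paper's ontological reasoning over ADL knowledge and inhabitant preferences.

    ```python
    # Hypothetical sketch: each incoming sensor event is assigned to the segment
    # of an activity whose model mentions that object; unmatched events are kept
    # aside (e.g. as candidates for a new/unknown activity).

    MODELS = {
        "make_tea": {"kettle", "cup", "teabag"},
        "wash_dishes": {"tap", "sponge", "plate"},
    }

    def segment(events):
        """Partition a stream of object events into per-activity segments."""
        segments = {}          # activity -> ordered list of its events
        unassigned = []
        for obj in events:
            for activity, objects in MODELS.items():
                if obj in objects:
                    segments.setdefault(activity, []).append(obj)
                    break
            else:
                unassigned.append(obj)
        return segments, unassigned

    segs, rest = segment(["kettle", "tap", "cup", "sponge", "teabag"])
    ```

    Note that an interleaved trace is separated into per-activity segments without requiring the activities to be contiguous in time.
    
    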

    ProCAVIAR: Hybrid Data-Driven and Probabilistic Knowledge-Based Activity Recognition

    The recognition of physical activities using sensors on mobile devices has mainly been addressed with supervised and semi-supervised learning. State-of-the-art methods are mainly based on the analysis of the user's movement patterns that emerge from inertial sensor data. While the literature on this topic is quite mature, existing approaches are still not adequate to discriminate activities characterized by similar physical movements. The context that surrounds the user (e.g., semantic location) could be used as additional information to significantly extend the set of recognizable activities. Since collecting a comprehensive training set with activities performed under every possible context condition is too costly, if possible at all, existing works proposed knowledge-based reasoning over ontological representations of context data to refine the predictions obtained from machine learning. A problem with this approach is the rigidity of the underlying logic formalism, which cannot capture the intrinsic uncertainty of the relationships between activities and context. In this work, we propose a novel activity recognition method that combines semi-supervised learning and probabilistic ontological reasoning. We model the relationships between activities and context as a combination of soft and hard ontological axioms. For each activity, we use a probabilistic ontology to compute its compatibility with the current context conditions. The output of probabilistic semantic reasoning is combined with the output of a machine learning classifier based on inertial sensor data to obtain the most likely activity performed by the user. The evaluation of our system on a dataset with 13 types of activities performed by 26 subjects shows that our probabilistic framework outperforms both a pure machine learning approach and previous hybrid approaches based on classic ontological reasoning.
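    The fusion step at the heart of this idea can be sketched simply. The multiplicative weighting and renormalisation below, and all scores and labels, are assumptions for illustration; in the paper the compatibility scores come from probabilistic ontological reasoning, not a hand-written table.

    ```python
    # Hypothetical sketch: weight each activity's classifier probability by an
    # ontology-derived context-compatibility score, then renormalise so the
    # result is again a probability distribution over activities.

    def fuse(classifier_probs, context_compat):
        """Combine inertial-classifier probabilities with context compatibility."""
        scores = {a: p * context_compat.get(a, 0.0)
                  for a, p in classifier_probs.items()}
        total = sum(scores.values())
        if total == 0:
            return dict(classifier_probs)  # no context support: classifier alone
        return {a: s / total for a, s in scores.items()}

    # The classifier slightly prefers "running", but the context (say, the user
    # is on a bike lane) is far more compatible with "cycling".
    probs = {"running": 0.55, "cycling": 0.45}
    compat = {"running": 0.2, "cycling": 0.9}
    fused = fuse(probs, compat)
    ```

    The example shows how context can overturn a marginal classifier preference between physically similar activities, which is exactly the failure mode motivating the hybrid approach.
    
    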

    A semantics-based approach to sensor data segmentation in real-time Activity Recognition

    Department of Information Engineering, Dalian University, China
    Activity Recognition (AR) is key in context-aware assistive living systems. One challenge in AR is the segmentation of observed sensor events when interleaved or concurrent activities of daily living (ADLs) are performed. Several studies have proposed methods for separating and organising sensor observations and recognising generic ADLs performed in a simple or composite manner. However, little has been explored in semantically distinguishing individual sensor events directly and passing them to the relevant ongoing/new atomic activities. This paper proposes a semiotic-theory-inspired ontological model that captures generic knowledge and inhabitant-specific preferences for conducting ADLs to support the segmentation process. A multithreaded decision algorithm and system prototype were developed and evaluated against 30 use case scenarios, where each event was simulated at 10-second intervals on a machine with an i7 2.60 GHz CPU (2 cores) and 8 GB RAM. The results suggest that all sensor events were adequately segmented, with 100% accuracy for single ADL scenarios and 97.8% accuracy for composite ADL scenarios. However, segmentation performance suffered in classification time, with averages of 3971 ms and 62183 ms for single and composite ADL scenarios, respectively.