13 research outputs found

    Ambient Assisted Living and Ageing: Preliminary Results of RITA Project

    Population ageing is a social phenomenon that most countries worldwide are facing. They are, and will increasingly be, seeking solutions to improve the quality of life of their elderly citizens. The RITA project aims to demonstrate that updating current socio-medical services with an Ambient Assisted Living (AAL) approach can improve service efficiency and the quality of life of both the elderly and their caregivers. This paper presents the preliminary results obtained in RITA.

    The OCarePlatform : a context-aware system to support independent living

    Background: Currently, healthcare services, such as institutional care facilities, are burdened with an increasing number of elderly people and individuals with chronic illnesses and a decreasing number of competent caregivers. Objectives: To relieve the burden on healthcare services, independent living at home could be facilitated by offering individuals and their (in)formal caregivers support in their daily care and needs. With the rise of pervasive healthcare, new information technology solutions can assist elderly people ("residents") and their caregivers to allow residents to live independently for as long as possible. Methods: To this end, the OCarePlatform system was designed. This semantic, data-driven and cloud-based back-end system facilitates independent living by offering information and knowledge-based services to the resident and his/her (in)formal caregivers. Data and context information are gathered to realize context-aware and personalized services and to support residents in meeting their daily needs. This body of data, originating from heterogeneous data and information sources, is sent to personalized services, where it is fused, thus creating an overview of the resident's current situation. Results: The architecture of the OCarePlatform is proposed, which is based on a service-oriented approach, together with its different components and their interactions. The implementation details are presented, together with a running example. A scalability and performance study of the OCarePlatform was performed. The results indicate that the OCarePlatform is able to support a realistic working environment and respond to a trigger in less than 5 seconds. The system is highly dependent on the allocated memory. Conclusion: The data-driven character of the OCarePlatform facilitates easy plug-in of new functionality, enabling the design of personalized, context-aware services. The OCarePlatform leads to better support for elderly people and individuals with chronic illnesses who live independently. (C) 2016 Elsevier Ireland Ltd. All rights reserved.

    Multi-view stacking for activity recognition with sound and accelerometer data

    Many Ambient Intelligence (AmI) systems rely on automatic human activity recognition for getting crucial context information, so that they can provide personalized services based on the user's current state. Activity recognition provides core functionality to many types of systems, including Ambient Assisted Living, fitness trackers, behavior monitoring, security, and so on. The advent of wearable devices along with their diverse set of embedded sensors opens new opportunities for ubiquitous context sensing. Recently, wearable devices such as smartphones and smartwatches have been used for activity recognition and monitoring. Most of the previous works use inertial sensors (accelerometers, gyroscopes) for activity recognition and combine them using an aggregation approach, i.e., extract features from each sensor and aggregate them to build the final classification model. This is not optimal, since each sensor data source has its own statistical properties. In this work, we propose the use of a multi-view stacking method to fuse the data from heterogeneous types of sensors for activity recognition. Specifically, we used sound and accelerometer data collected with a smartphone and a wrist-band while performing home task activities. The proposed method is based on multi-view learning and stacked generalization, and consists of training a model for each of the sensor views and combining them with stacking. Our experimental results showed that the multi-view stacking method outperformed the aggregation approach in terms of accuracy, recall and specificity.
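    The two-level scheme the abstract describes (one base model per sensor view, a stacker on top of their out-of-fold predictions) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data is synthetic, the 50/50 split into "sound" and "accelerometer" views is hypothetical, and the choice of random forests plus a logistic-regression stacker is an assumption.

```python
# Minimal multi-view stacking sketch: one base classifier per sensor view,
# a meta-classifier trained on their out-of-fold probability estimates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, train_test_split

# Synthetic stand-in for the real recordings; split columns into two
# hypothetical views ("sound" features and "accelerometer" features).
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=10, random_state=0)
X_sound, X_accel = X[:, :10], X[:, 10:]

Xs_tr, Xs_te, Xa_tr, Xa_te, y_tr, y_te = train_test_split(
    X_sound, X_accel, y, test_size=0.25, random_state=0)
views_train, views_test = (Xs_tr, Xa_tr), (Xs_te, Xa_te)
base = [RandomForestClassifier(random_state=0) for _ in views_train]

# Level 1: out-of-fold predictions become the meta-features, so the
# stacker never sees predictions made on data the base model trained on.
meta_train = np.column_stack([
    cross_val_predict(clf, Xv, y_tr, cv=5, method="predict_proba")[:, 1]
    for clf, Xv in zip(base, views_train)])
for clf, Xv in zip(base, views_train):
    clf.fit(Xv, y_tr)
meta_test = np.column_stack([
    clf.predict_proba(Xv)[:, 1] for clf, Xv in zip(base, views_test)])

# Level 2: the stacker fuses the per-view predictions.
stacker = LogisticRegression().fit(meta_train, y_tr)
acc = accuracy_score(y_te, stacker.predict(meta_test))
print(f"stacked accuracy: {acc:.2f}")
```

    The key design point is using cross-validated (out-of-fold) predictions to build the meta-features, which avoids the stacker overfitting to base-model training error.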

    Ontology-driven monitoring of patient's vital signs enabling personalized medical detection and alert

    A major challenge related to caring for patients with chronic conditions is the early detection of exacerbations of the disease. Medical personnel should be contacted immediately in order to intervene in time, before an acute state is reached, ensuring patient safety. This paper proposes an approach to an ambient intelligence (AmI) framework supporting real-time remote monitoring of patients diagnosed with congestive heart failure (CHF). Its novelty is the integration of: (i) personalized monitoring of the patient's health status and risk stage; (ii) intelligent alerting of the dedicated physician through the construction of medical workflows on-the-fly; and (iii) dynamic adaptation of the vital signs' monitoring environment on any available device or smartphone located in close proximity to the physician, depending on new medical measurements, additional disease specifications or the failure of the infrastructure. The intelligence lies in the adoption of semantics providing for a personalized and automated emergency alerting that smoothly interacts with the physician, regardless of their location, ensuring timely intervention during an emergency. It is evaluated on a medical emergency scenario where, in the case of exceeded patient thresholds, medical personnel are localized and contacted, presenting ad hoc information on the patient's condition on the most suited device within the physician's reach.
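    The threshold-exceedance trigger at the core of the scenario can be sketched as below. This is only an illustrative hard-coded check: the vital-sign names, safe ranges, and alert format are all hypothetical, and the paper itself derives such rules from an ontology rather than a static table.

```python
# Illustrative vital-sign threshold check (not the paper's ontology-driven
# mechanism): a reading outside its assumed safe range yields an alert.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    patient_id: str
    sign: str        # e.g. "heart_rate", "spo2"
    value: float

# Hypothetical per-sign safe ranges for a monitored CHF patient.
THRESHOLDS = {
    "heart_rate": (50.0, 110.0),   # beats per minute
    "spo2": (92.0, 100.0),         # percent oxygen saturation
}

def check(reading: Reading) -> Optional[str]:
    """Return an alert message if the reading breaches its safe range."""
    lo, hi = THRESHOLDS[reading.sign]
    if not (lo <= reading.value <= hi):
        return (f"ALERT {reading.patient_id}: "
                f"{reading.sign}={reading.value} outside [{lo}, {hi}]")
    return None

print(check(Reading("p1", "heart_rate", 128.0)))  # breached: alert string
print(check(Reading("p1", "spo2", 97.0)))         # in range: None
```

    In the actual framework the equivalent of `THRESHOLDS` would be personalized per patient and stage of disease, and the alert would be routed to the nearest suitable device rather than printed.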

    Towards a cascading reasoning framework to support responsive ambient-intelligent healthcare interventions

    In hospitals and smart nursing homes, ambient-intelligent care rooms are equipped with many sensors. They can monitor environmental and body parameters, and detect wearable devices of patients and nurses. Hence, they continuously produce data streams. This offers the opportunity to collect, integrate and interpret this data in a context-aware manner, with a focus on reactivity and autonomy. However, doing this in real time on huge data streams is a challenging task. In this context, cascading reasoning is an emerging research approach that exploits the trade-off between reasoning complexity and data velocity by constructing a processing hierarchy of reasoners. Therefore, a cascading reasoning framework is proposed in this paper. A generic architecture is presented, allowing the creation of a pipeline of reasoning components hosted locally, at the edge of the network, and in the cloud. The architecture is implemented on a pervasive health use case, where medically diagnosed patients are constantly monitored, and alarming situations can be detected and reacted upon in a context-aware manner. A performance evaluation shows that the total system latency is mostly below 5 s, allowing for responsive intervention by a nurse in alarming situations. Using the evaluation results, the benefits of cascading reasoning for healthcare are analyzed.
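    The complexity-versus-velocity trade-off can be sketched as a two-stage cascade: a cheap edge-side filter sees every observation and forwards only suspicious ones to a slower, more expressive reasoner. The stage rules, field names and alarm text below are illustrative assumptions, not the framework's actual components.

```python
# Two-stage cascading-reasoning sketch: a fast edge filter prunes the
# stream; only candidates it passes reach the expensive second stage.
def edge_filter(obs):
    """Stage 1 (edge): cheap check that keeps anything possibly alarming."""
    return obs["heart_rate"] > 100 or obs["on_floor"]

def cloud_reasoner(obs):
    """Stage 2 (cloud): slower, context-aware rule run on filtered data."""
    if obs["on_floor"] and obs["heart_rate"] > 100:
        return "possible fall with tachycardia - notify nurse"
    return None

def cascade(stream):
    """Yield (patient id, alarm) pairs for observations that pass both stages."""
    for obs in stream:
        if edge_filter(obs):
            alarm = cloud_reasoner(obs)
            if alarm:
                yield obs["id"], alarm

stream = [
    {"id": 1, "heart_rate": 72,  "on_floor": False},  # dropped at the edge
    {"id": 2, "heart_rate": 115, "on_floor": True},   # escalated to stage 2
]
alarms = list(cascade(stream))
print(alarms)
```

    The point of the hierarchy is that the expensive reasoner only ever sees the small fraction of the stream the edge filter could not dismiss, which is what keeps end-to-end latency low at high data velocity.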

    Multi-sensor fusion based on multiple classifier systems for human activity identification

    Multimodal sensors in healthcare applications have been increasingly researched because they facilitate automatic and comprehensive monitoring of human behaviors, high-intensity sports management, energy expenditure estimation, and postural detection. Recent studies have shown the importance of multi-sensor fusion to achieve robustness and high-performance generalization, provide diversity and tackle challenging issues that may be difficult to address with single-sensor values. The aim of this study is to propose an innovative multi-sensor fusion framework to improve human activity detection performance and reduce the misrecognition rate. The study proposes a multi-view ensemble algorithm to integrate the predicted values of different motion sensors. To this end, computationally efficient classification algorithms such as decision tree, logistic regression and k-Nearest Neighbors were used to implement diverse, flexible and dynamic human activity detection systems. To provide a compact feature vector representation, we studied a hybrid bio-inspired evolutionary search algorithm and a correlation-based feature selection method and evaluated their impact on the feature vectors extracted from each individual sensor modality. Furthermore, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the impact of class imbalance and improve performance results. With the above methods, this paper provides a unified framework to resolve major challenges in human activity identification. The performance results obtained using two publicly available datasets showed significant improvement over baseline methods in the detection of specific activity details and a reduced error rate. The performance results of our evaluation showed 3% to 24% improvement in accuracy, recall, precision, F-measure and detection ability (AUC) compared to single sensors and feature-level fusion. 
    The benefit of the proposed multi-sensor fusion is the ability to utilize the distinct feature characteristics of individual sensors and multiple classifier systems to improve recognition accuracy. In addition, the study suggests the promising potential of the hybrid feature selection approach and diversity-based multiple classifier systems to improve mobile and wearable sensor-based human activity detection and health monitoring systems. (C) 2019, The Author(s). This research is supported by University of Malaya BKP Special Grant BKS006-2018.
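    Decision-level fusion of the three classifiers the abstract names can be sketched with a hard majority vote, as below. Synthetic data replaces the two public datasets, and the SMOTE and hybrid feature-selection steps are omitted for brevity, so this is only a sketch of the voting component.

```python
# Decision-level fusion sketch: decision tree, logistic regression and
# k-NN combined by majority vote over their predicted labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for sensor feature vectors.
X, y = make_classification(n_samples=500, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

fusion = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=1)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier())],
    voting="hard")  # majority vote over the three predicted labels
fusion.fit(X_tr, y_tr)
acc = accuracy_score(y_te, fusion.predict(X_te))
print(f"fused accuracy: {acc:.2f}")
```

    Choosing deliberately different model families (tree, linear, instance-based) is what supplies the diversity that makes the vote more robust than any single member.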

    Sensor Data Fusion for Activity Monitoring in the PERSONA Ambient Assisted Living Project

    User activity monitoring is a major problem in ambient assisted living, since it requires inferring new knowledge from collected and fused sensor data while dealing with highly dynamic environments, where devices continuously change their availability and/or physical location. In the context of the European project PERSONA, we have developed an activity monitoring sub-system characterized by high modularity, low invasiveness of the environment and good responsiveness. In this paper we first illustrate the functional architecture of the proposed solution from a general point of view, discussing the motivations behind the design. Then we describe in detail the software components (sensor abstraction and integration layer, human posture classification, activity monitor) and the resulting activity monitoring application, also presenting a performance evaluation.
