    Towards Smart Homes Using Low Level Sensory Data

    Ubiquitous Life Care (u-Life care) is receiving attention because it provides high-quality, low-cost care services. To provide spontaneous and robust healthcare services, knowledge of a patient's real-time daily life activities is required. Context information combined with real-time daily life activities can help provide better services and improve healthcare delivery. The performance and accuracy of existing life care systems are not reliable, even with a limited number of services. This paper presents a Human Activity Recognition Engine (HARE) that monitors human health as well as activities using heterogeneous sensor technology, and processes these activities intelligently on a Cloud platform to provide improved care at low cost. We focus on activity recognition using video-based, wearable sensor-based, and location-based activity recognition engines, and then use intelligent processing to analyze the context of the activities performed. The experimental results of all components showed good accuracy against existing techniques. The system is deployed on the Cloud for Alzheimer's disease patients (as a case study) with four activity recognition engines that identify low-level activities from the raw data captured by sensors. These are then manipulated using an ontology to infer higher-level activities and to make decisions about a patient's activity using patient profile information and customized rules.

    Smart Diary - A Guide to Man's Daily Planning

    In this paper we present a smartphone app that senses and analyzes mobile data to understand, predict, and summarize a person's daily activities, such as their daily routine. These activities are used to represent knowledge, which helps in generating digital personal diaries automatically. We make use of different sensors for the purpose of sensing. Smart Diary is able to make predictions based on a wide range of information sources, such as the phone's sensor readings, locations, and interaction history with the users, by integrating such information into a sustainable mining model. This Android app is specifically developed to handle heterogeneous and noisy data, and it is designed to be extensible so that people can define their own logic rules expressing short-term, mid-term, and long-term predictions about events and patterns in their daily routine. The app's evaluation results are based on the platform provided by Android.

    Building an Understanding of Human Activities in First Person Video using Fuzzy Inference

    Activities of Daily Living (ADLs) are the activities that people perform every day in their home as part of their typical routine. The in-home, automated monitoring of ADLs has broad utility for intelligent systems that enable independent living for the elderly and for mentally or physically disabled individuals. With rising interest in electronic health (e-Health) and mobile health (m-Health) technology, opportunities abound for the integration of activity monitoring systems into these newer forms of healthcare. In this dissertation we propose a novel system for describing ADLs based on video collected from a wearable camera. Most in-home activities are naturally defined by interaction with objects. We leverage these object-centric activity definitions to develop a set of rules for a Fuzzy Inference System (FIS) that uses video features and the identification of objects to identify and classify activities. Further, we demonstrate that the use of an FIS enhances the reliability of the system and, due to the linguistic nature of fuzzy systems, provides greater explainability and interpretability of results than popular machine-learning classifiers.
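The object-centric rule idea in this abstract can be illustrated with a minimal Mamdani-style fuzzy inference step. The membership ranges, object names, and activity labels below are hypothetical stand-ins, not the dissertation's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(cup_seconds, stove_seconds):
    """Classify an activity from object-interaction durations (hypothetical rules)."""
    # Rule 1: long cup interaction AND stove use -> "making coffee"
    long_cup = tri(cup_seconds, 10, 60, 120)
    stove_on = tri(stove_seconds, 5, 30, 90)
    coffee = min(long_cup, stove_on)          # fuzzy AND = min (Mamdani)
    # Rule 2: brief cup interaction alone -> "drinking water"
    short_cup = tri(cup_seconds, 0, 5, 15)
    water = short_cup
    scores = {"making coffee": coffee, "drinking water": water}
    return max(scores, key=scores.get), scores

label, scores = classify(60, 30)   # -> ("making coffee", ...)
```

Because the rules are linguistic ("long cup interaction AND stove use implies making coffee"), a decision can be read off directly from the fired rule strengths in `scores`, which is the interpretability advantage the abstract claims over black-box classifiers.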

    PocketCare: Tracking the Flu with Mobile Phones using Partial Observations of Proximity and Symptoms

    Mobile phones provide a powerful sensing platform that researchers may adopt to understand proximity interactions among people and the diffusion, through these interactions, of diseases, behaviors, and opinions. However, it remains a challenge to track the proximity-based interactions of a whole community and then model the social diffusion of diseases and behaviors starting from the observations of a small fraction of the volunteer population. In this paper, we propose a novel approach that connects these sparse observations using a model of how individuals interact with each other and of how social interactions unfold as sequences of proximity interactions. We apply our approach to track the spread of flu in the spatial-proximity network of a 3,000-person university campus by mobilizing 300 volunteers from this population to monitor nearby mobile phones through Bluetooth scanning and to report flu symptoms daily for themselves and those around them. Our aim is to predict the likelihood that an individual will get the flu based on how often her/his daily routine intersects with those of the volunteers. Thus, we use the daily routines of the volunteers to build a model of both the volunteers and the non-volunteers. Our results show that we can predict flu infection two weeks ahead of time with an average precision from 0.24 to 0.35, depending on the amount of information; this precision is six to nine times higher than that of a random-guess model. At the population level, we can predict the infectious population in a two-week window with an r-squared value of 0.95 (a random-guess model obtains an r-squared value of 0.2). These results point to an innovative approach for tracking individuals who have interacted with people showing symptoms, allowing us to warn those in danger of infection and to inform health researchers about the progression of contact-induced diseases.
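The core idea of exposure ranking — scoring non-volunteers by how often their routines intersect with symptomatic volunteers — can be sketched as follows. The contact log, volunteer set, and scoring rule are toy assumptions for illustration, not the paper's actual diffusion model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy proximity log: (day, person_a, person_b) Bluetooth co-location events.
n_people, n_days = 50, 14
contacts = [(d, int(rng.integers(n_people)), int(rng.integers(n_people)))
            for d in range(n_days) for _ in range(100)]

# A small fraction of the population volunteers to report daily symptoms.
volunteers = set(range(10))
symptomatic = {3, 7}               # hypothetical volunteer IDs reporting flu

def exposure_score(person):
    """Count proximity events with symptomatic volunteers over the window."""
    return sum(1 for d, a, b in contacts
               if (a == person and b in symptomatic)
               or (b == person and a in symptomatic))

# Rank non-volunteers by exposure; the top of the list would be warned first.
ranked = sorted((p for p in range(n_people) if p not in volunteers),
                key=exposure_score, reverse=True)
```

The paper's approach goes further by modeling daily routines to infer unobserved contacts among non-volunteers; this sketch only shows the directly observed exposure counts that such a model would start from.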

    Robust Activity Recognition for Adaptive Worker-Robot Interaction using Transfer Learning

    Human activity recognition (HAR) using machine learning has shown tremendous promise in detecting construction workers' activities. HAR has many applications in human-robot interaction research to enable robots' understanding of their human counterparts' activities. However, many existing HAR approaches lack robustness, generalizability, and adaptability. This paper proposes a transfer learning methodology for activity recognition of construction workers that requires orders of magnitude less data and compute time for comparable or better classification accuracy. The developed algorithm transfers features from a model pre-trained by the original authors and fine-tunes them for the downstream task of activity recognition in construction. The model was pre-trained on Kinetics-400, a large-scale video-based human activity recognition dataset with 400 distinct classes. The model was fine-tuned and tested using videos of manual material handling (MMH) activities found on YouTube. Results indicate that the fine-tuned model can recognize distinct MMH tasks in a robust and adaptive manner, which is crucial for the widespread deployment of collaborative robots in construction.
    Comment: 2023 ASCE International Conference on Computing in Civil Engineering (I3CE).
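The fine-tuning recipe the abstract describes — reuse features learned on a large source dataset and retrain only a small classification head on scarce target data — can be sketched generically. The "backbone" here is a frozen random projection standing in for a pre-trained network, and the data is synthetic; this is not the paper's Kinetics-400 model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained backbone": a fixed projection standing in for
# features learned on a large source dataset (e.g. Kinetics-400).
W_backbone = rng.normal(size=(32, 8))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)   # frozen ReLU features

# Small downstream dataset (standing in for labeled MMH video clips).
X = rng.normal(size=(200, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tune ONLY the head (a logistic regression) on the frozen features;
# the backbone weights are never updated, so little data/compute is needed.
F = features(X)
w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - y) / len(y)         # gradient step on the head only

acc = np.mean((1 / (1 + np.exp(-F @ w)) > 0.5) == y)
```

Training 8 head weights instead of the full 32x8 backbone is what buys the "orders of magnitude less data and compute" the abstract claims; in practice the head would be a new classification layer appended to the pre-trained video network.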

    A Knowledge-driven Distributed Architecture for Context-Aware Systems

    As the number of devices increases, it becomes a challenge for users to use them effectively. This is even more challenging when the majority of these devices are mobile. Users and their devices enter and leave different environments where different settings and computing needs may be required. To use these devices effectively in such environments means constantly being aware of their whereabouts, functionalities, and desirable working conditions. This is impractical, and hence it is imperative to increase seamless interactions between users and devices, and to make these devices less intrusive. To address these problems, various responsive computing systems, called context-aware systems, have been developed. These systems rely on architectures to perceive their physical environments in order to respond appropriately and effortlessly. Currently, the majority of existing architectures focus on acquiring data from sensors, interpreting it, and sharing it with these systems.