    Big Data Based Architecture of the ADL Recognition System

    In the age of the Internet of Things (IoT), every motion in our daily life can be captured by smart portable devices and converted into digital data. These data are valuable for describing a person’s living status and analyzing their health condition, and on that basis professionals can give targeted suggestions to improve quality of life. An emerging issue is that the resulting volume of data is large and the velocity of the data flow is high; the original architecture strains to handle such a large amount of such detailed data. There is an urgent need for an architecture that supports big data at every stage: grabbing, filtering, analysis, and presentation. The ADL Recognition System collects information from elderly people, analyzes their behaviors, and presents them in a visualized way to specific users. It aims to provide a better nursing service for elderly people and to make it easier for nurses and doctors to assess health condition or nursing level. My work is to design and implement a big data based architecture for the ADL Recognition System so that it can accept more users and a larger inflow of data. In addition, the architecture will be expandable in computation and storage and adaptable to the scale of the real application environment. The necessity and methodology of adopting a big data based architecture are justified at each stage of data processing from a technological perspective.
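    The four stages the abstract names (grabbing, filtering, analysis, presentation) can be sketched as a simple pipeline. All function names, value ranges, and thresholds below are illustrative assumptions, not details taken from the thesis:

```python
# Hypothetical sketch of the four pipeline stages: grabbing ->
# filtering -> analysis -> presentation. Names and thresholds are
# illustrative assumptions only.
from statistics import mean

def grab(raw_readings):
    """Grabbing: ingest raw sensor readings as (timestamp, value) pairs."""
    return list(raw_readings)

def filter_stage(readings, lo=0.0, hi=100.0):
    """Filtering: drop readings outside a plausible sensor range."""
    return [(t, v) for t, v in readings if lo <= v <= hi]

def analyze(readings, window=3):
    """Analysis: smooth the stream with a sliding-window moving average."""
    values = [v for _, v in readings]
    return [mean(values[i:i + window])
            for i in range(len(values) - window + 1)]

def present(summary):
    """Presentation: reduce the analysis to a human-readable status line."""
    return f"avg activity level: {mean(summary):.1f}"

raw = [(0, 12.0), (1, 14.0), (2, 999.0), (3, 10.0), (4, 16.0)]
report = present(analyze(filter_stage(grab(raw))))
```

    In a real big-data deployment each stage would be a distributed component rather than a local function, but the staged flow is the same.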

    Activity recognition and animation of activities of daily living

    Activities of Daily Living (ADL) can give us information about an individual’s health, both physical and mental. They are captured using sensors and then processed and recognized as different activities. Activity recognition is the process of understanding a person’s movements and actions. In this work, we develop a language with a simple grammar that describes an activity and use it to recognize the activity. We call this language the Activities of Daily Living Description Language, or A(DL)2 for short. Even after an activity has been recognized, the data representing it is still raw digital data, and it would take some expertise and time to understand it. To overcome this problem, a system can be built that visualizes and animates an individual’s activity in real time without violating privacy. This will not only help in understanding the individual’s current state but will also help those who monitor them remotely, such as nurses, doctors, and family members, thereby enabling better care and support, especially for elderly people. We propose a real-time activity recognition and animation system that recognizes and animates an individual’s activity. We experimented with one of the basic ADLs, walking, and found the results satisfactory. An individual’s location is tracked using sensors and sent to the recognition system, which decides the type of activity in real time by describing it in the language; the data is then sent to a visualization system that animates the activity. When fully developed, this system is intended to provide better health care and immediate support to people in need.
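    To make the grammar-based idea concrete, here is a toy recognizer in the spirit of A(DL)2. The production rules and event names are invented for illustration; the actual language is defined in the work itself:

```python
# Toy grammar-driven activity recognizer. Each activity is a
# production over primitive motion events; the rules below are
# illustrative assumptions, not the real A(DL)2 grammar.
import re

GRAMMAR = {
    "walking": re.compile(r"(step ){2,}"),   # two or more step events
    "resting": re.compile(r"(still ){3,}"),  # prolonged stillness
}

def recognize(events):
    """Match a space-joined event sequence against each production."""
    sentence = " ".join(events) + " "
    for activity, rule in GRAMMAR.items():
        if rule.fullmatch(sentence):
            return activity
    return "unknown"
```

    A sequence of step events parses as "walking"; anything no production derives is reported as unknown, which is roughly how a grammar separates recognized from unrecognized activity streams.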

    Recognition of activities of daily living

    Activities of daily living (ADL) are the things we normally do in daily life, including any routine activity such as feeding ourselves, bathing, dressing, grooming, work, homemaking, and leisure. The ability or inability to perform ADLs is a very practical measure of human capability across many types of disorder and disability. In a health care facility, professional staff often collect ADL data manually, relying on observations by nurses and self-reporting by residents, and then enter the data into the system. Smart home technologies can provide solutions for detecting and monitoring a resident’s ADL. Typically, multiple sensors are deployed, such as surveillance cameras in the smart home environment and contact sensors affixed to the resident’s body. These traditional technologies incur costly and laborious sensor deployment, and the contact sensors are uncomfortable and inconvenient to wear. This work presents a novel system that uses mobile devices to collect and analyze mobile data pertaining to users’ ADL. By employing only one smartphone, this system, named the ADL recognition system, significantly reduces set-up costs and saves manpower. Under the hood it encapsulates rather sophisticated technologies, such as an agent-based information management platform integrating the mobile end with the cloud, observer patterns, and a time-series based motion analysis mechanism over sensory data. As a single-point deployment system, the ADL recognition system provides the further benefit of replaying users’ daily ADL routines, in addition to timely assessment of their life habits.
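    The combination of an observer pattern with time-series motion analysis can be sketched as follows. Class names, the variance threshold, and the window size are assumptions for illustration, not the system’s actual API:

```python
# Minimal sketch: a sensor feed (subject) pushes accelerometer
# magnitudes to a motion analyzer (observer), which classifies the
# recent time-series window. All names/thresholds are assumptions.
from statistics import pstdev

class SensorFeed:
    """Subject: pushes each new accelerometer magnitude to observers."""
    def __init__(self):
        self._observers = []
    def subscribe(self, obs):
        self._observers.append(obs)
    def publish(self, sample):
        for obs in self._observers:
            obs.update(sample)

class MotionAnalyzer:
    """Observer: flags 'active' when recent samples vary enough."""
    def __init__(self, window=5, threshold=0.5):
        self.window, self.threshold = window, threshold
        self.samples = []
    def update(self, sample):
        self.samples.append(sample)
        self.samples = self.samples[-self.window:]
    @property
    def state(self):
        if len(self.samples) < self.window:
            return "warming up"
        return "active" if pstdev(self.samples) > self.threshold else "idle"

feed = SensorFeed()
analyzer = MotionAnalyzer()
feed.subscribe(analyzer)
for s in [9.8, 9.8, 9.8, 9.8, 9.8]:    # near-constant gravity: idle
    feed.publish(s)
idle_state = analyzer.state
for s in [8.0, 12.0, 9.0, 11.5, 7.5]:  # large swings: active
    feed.publish(s)
active_state = analyzer.state
```

    On the phone, the subject would wrap the accelerometer API and multiple analyzers (one per activity type) could subscribe to the same feed, which is the main payoff of the observer pattern here.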

    A situation-driven framework for relearning of activities of daily living in smart home environments

    Activities of Daily Living (ADLs) are a sine qua non for self-care and improved quality of life. Self-efficacy is a major challenge for seniors with early-stage dementia (ED) when performing daily living activities. ED causes deterioration of cognitive functions and thus impairs aging adults’ initiative in, and performance of, instrumental activities of daily living (IADLs). IADLs generally require certain skills in both planning and execution and may involve a sequence of steps for aging adults to accomplish their goals. These intricate procedures potentially predispose older adults to safety-critical situations with life-threatening consequences. A safety-critical situation is a state or event that poses a risk of life-threatening injury or accident. To address this problem, a situation-driven framework for relearning daily living activities in smart home environments is proposed. The framework is composed of three major units: a) a goal inference unit, which leverages a deep learning model to infer a human goal in a smart home; b) a situation-context generator, responsible for risk mitigation in IADLs; and c) a recommendation unit, which supports the decision making of aging adults in safety-critical situations. The proposed framework was validated against an IADLs dataset collected from a smart home research prototype, and the results obtained are promising.
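    The flow through the three units can be sketched schematically. The rule table, event names, and thresholds below are illustrative assumptions; in particular, the real goal inference unit uses a deep learning model, for which a simple lookup stands in here:

```python
# Schematic sketch of the three units: goal inference ->
# situation-context generation -> recommendation. All rules and
# names are illustrative assumptions, not the actual framework.

# Goal inference: map recent sensor events to a likely IADL goal
# (a lookup stand-in for the deep learning model).
GOAL_RULES = {
    ("kitchen", "stove_on"): "cooking",
    ("bathroom", "tap_on"): "bathing",
}

def infer_goal(events):
    return GOAL_RULES.get(tuple(events), "unknown")

# Situation-context generator: decide whether the current state is
# safety-critical for the inferred goal.
def assess_situation(goal, context):
    if goal == "cooking" and context.get("stove_unattended_min", 0) > 10:
        return "safety-critical"
    return "normal"

# Recommendation unit: support the resident's decision making when a
# safety-critical situation is detected.
def recommend(situation, goal):
    if situation == "safety-critical":
        return f"alert: check the {goal} task and notify a caregiver"
    return "no action needed"

goal = infer_goal(["kitchen", "stove_on"])
situation = assess_situation(goal, {"stove_unattended_min": 15})
advice = recommend(situation, goal)
```

    The staged design means each unit can be improved independently, e.g. swapping the lookup for a trained model without touching risk assessment or recommendation logic.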