11,526 research outputs found

    An intelligent information forwarder for healthcare big data systems with distributed wearable sensors

    Get PDF
    © 2016 IEEE. An increasing number of the elderly population wish to live an independent lifestyle rather than rely on intrusive care programmes. A big data solution is presented using wearable sensors capable of carrying out continuous monitoring of the elderly, alerting the relevant caregivers when necessary and forwarding pertinent information to a big data system for analysis. A challenge for such a solution is the development of context-awareness from multidimensional, dynamic and nonlinear sensor readings that have a weak correlation with observable human behaviours and health conditions. To address this challenge, a wearable sensor system with an intelligent data forwarder is discussed in this paper. The forwarder adopts a Hidden Markov Model for human behaviour recognition. Locality-sensitive hashing is proposed as an efficient mechanism to learn sensor patterns. A prototype solution is implemented to monitor health conditions of dispersed users. It is shown that the intelligent forwarders can provide the remote sensors with context-awareness: they transmit only important information to the big data server for analytics when certain behaviours occur, avoiding excessive communication and data storage. The system functions unobtrusively, whilst giving the users peace of mind in the knowledge that their safety is being monitored and analysed.
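    The abstract gives no implementation details for the hashing step; as a minimal illustrative sketch of how locality-sensitive hashing can bucket similar sensor readings so that a forwarder transmits only novel or behaviour-relevant windows, consider the following random-hyperplane (SimHash) example. The dimensions, hash length, and bucketing policy are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    # Minimal random-hyperplane LSH sketch (illustrative assumption, not the
    # paper's code). Similar sensor feature vectors tend to land in the same
    # bucket, so a forwarder could transmit a window only when its bucket is
    # novel or associated with a behaviour of interest.

    rng = np.random.default_rng(seed=0)
    N_FEATURES = 16   # assumed dimensionality of one windowed sensor reading
    N_BITS = 8        # hash length; more bits give finer-grained buckets

    hyperplanes = rng.standard_normal((N_BITS, N_FEATURES))

    def lsh_signature(x: np.ndarray) -> int:
        """Hash a feature vector to an N_BITS-bit bucket via hyperplane signs."""
        bits = (hyperplanes @ x) > 0
        return int(np.packbits(bits)[0])

    # Two nearby readings usually collide; an unrelated one usually does not.
    a = rng.standard_normal(N_FEATURES)
    b = a + 0.01 * rng.standard_normal(N_FEATURES)  # small perturbation of a
    c = rng.standard_normal(N_FEATURES)             # unrelated reading
    print(lsh_signature(a), lsh_signature(b), lsh_signature(c))
    ```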

    Activity Recognition and Prediction in Real Homes

    Full text link
    In this paper, we present work in progress on activity recognition and prediction in real homes using either binary sensor data or depth video data. We present our field trial and set-up for collecting and storing the data, our methods, and our current results. We compare the accuracy of predicting the next binary sensor event using probabilistic methods and Long Short-Term Memory (LSTM) networks, incorporate time information to improve prediction accuracy, and predict both the next sensor event and its mean time of occurrence using a single LSTM model. We investigate transfer learning between apartments and show that it is possible to pre-train the model with data from other apartments and achieve good accuracy in a new apartment straight away. In addition, we present preliminary results from activity recognition using low-resolution depth video data from seven apartments, and classify four activities - no movement, standing up, sitting down, and TV interaction - using a relatively simple processing method in which we apply an Infinite Impulse Response (IIR) filter to extract movements from the frames before feeding them to a convolutional LSTM network for classification. Comment: 12 pages, Symposium of the Norwegian AI Society NAIS 201
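    As a rough sketch of the joint-prediction idea described above (one LSTM predicting both the next binary sensor event and its time of occurrence), the following PyTorch snippet may help; the embedding, layer sizes, and loss terms are assumptions for illustration, not the authors' architecture.

    ```python
    import torch
    import torch.nn as nn

    # Illustrative stand-in for a joint next-event/next-time LSTM predictor.
    # All sizes and design choices here are assumptions, not the paper's model.

    N_SENSORS = 20   # assumed number of distinct binary sensor events
    EMB, HID = 32, 64

    class NextEventLSTM(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(N_SENSORS, EMB)
            self.lstm = nn.LSTM(EMB + 1, HID, batch_first=True)  # +1: elapsed time
            self.event_head = nn.Linear(HID, N_SENSORS)  # which sensor fires next
            self.time_head = nn.Linear(HID, 1)           # when it fires

        def forward(self, event_ids, delta_t):
            # event_ids: (batch, seq) sensor IDs; delta_t: (batch, seq, 1)
            x = torch.cat([self.emb(event_ids), delta_t], dim=-1)
            h, _ = self.lstm(x)
            last = h[:, -1]                  # hidden state after the last event
            return self.event_head(last), self.time_head(last)

    model = NextEventLSTM()
    ids = torch.randint(0, N_SENSORS, (4, 10))   # dummy batch of event sequences
    dts = torch.rand(4, 10, 1)                   # dummy inter-event times
    logits, t_hat = model(ids, dts)
    # Joint loss: classify the next event, regress its time of occurrence.
    loss = nn.functional.cross_entropy(logits, torch.randint(0, N_SENSORS, (4,))) \
           + nn.functional.mse_loss(t_hat.squeeze(-1), torch.rand(4))
    ```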

    Using health mind maps to capture patients' explanatory models of illness

    Get PDF
    BACKGROUND: Management of chronic diseases has become one of the major challenges for the health care community. Most disease management relies on patients' self-management, influenced in part by their illness perspectives or explanatory models of illness (EMI). Unfortunately, assessing patients' EMI and using this information to engage patients in chronic illness self-management continues to be a challenge, owing to time constraints, ambiguity in the design of EMI assessments, lack of motivation, and low health literacy. This study used 'mind mapping', a graphic representation of ideas, to develop a process that captures EMI. We will refer to this process as "Health Mind Mapping" (HMM). We explored patients' experiences using HMM and potential uses of this tool during their care. METHODS: Twenty adult (>18 years old) English- and Spanish-speaking patients with uncontrolled (HbA1c >7%) type 2 diabetes were recruited from a primary care clinic. Participants developed their health mind maps with the guidance of a facilitator. Each participant also completed a semi-structured interview in which they were asked about their experience with HMM. The HMM process and qualitative interviews were video and audio recorded. Transcriptions were analyzed using grounded thematic analysis to identify how patients perceived and were impacted by the process. RESULTS: Two domains regarding the HMM process were identified: patients' perceptions of the process itself and patients' reports of potential uses of HMM. Three main themes related to the process itself emerged: 1) Helps to develop insight about self and illness; 2) Catalyst for taking actions to improve their illness; 3) Opportunity to actively share their illness. Four main themes related to potential uses of HMM were identified: 1) Communicating their illness to others in their social network; 2) Communicating with their providers; 3) Sharing to help others with diabetes; 4) Encouraging ongoing engagement in diabetes self-care. CONCLUSIONS: HMM helped patients to develop new insight about their illness and represented a catalyst for taking control of their illness. Additional research is needed to determine how to use HMM to facilitate patient communication and better engage patients in collaborative goal setting to improve self-care in chronic illness.

    A random forest approach to segmenting and classifying gestures

    Full text link
    This thesis investigates a gesture segmentation and recognition scheme that employs a random forest classification model. A complete gesture recognition system should localize and classify each gesture from a given gesture vocabulary, within a continuous video stream. Thus, the system must determine the start and end points of each gesture in time, as well as accurately recognize the class label of each gesture. We propose a unified approach that performs the tasks of temporal segmentation and classification simultaneously. Our method trains a random forest classification model to recognize gestures from a given vocabulary, as presented in a training dataset of video plus 3D body joint locations, as well as out-of-vocabulary (non-gesture) instances. Given an input video stream, our trained model is applied to candidate gestures using sliding windows at multiple temporal scales. The class label with the highest classifier confidence is selected, and its corresponding scale is used to determine the segmentation boundaries in time. We evaluated our formulation in segmenting and recognizing gestures from two different benchmark datasets: the NATOPS dataset of 9,600 gesture instances from a vocabulary of 24 aircraft handling signals, and the CHALEARN dataset of 7,754 gesture instances from a vocabulary of 20 Italian communication gestures. The performance of our method compares favorably with state-of-the-art methods that employ Hidden Markov Models or Hidden Conditional Random Fields on the NATOPS dataset. We conclude with a discussion of the advantages of using our model.
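    As a minimal sketch of the multi-scale sliding-window scheme described above, the following scikit-learn example trains a random forest on fixed-length windows (with an extra non-gesture class) and keeps the most confident prediction across candidate scales; the window lengths, feature sizes, and resampling step are assumptions for illustration, not the thesis's pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative sketch of multi-scale sliding-window gesture classification
    # (all sizes, features, and data are assumptions, not the thesis's code).

    FRAME_DIM = 12           # assumed per-frame feature size (e.g., joint coords)
    WINDOW = 30              # canonical window length in frames
    SCALES = [20, 30, 45]    # candidate gesture durations to test

    def featurize(window: np.ndarray) -> np.ndarray:
        """Resample a (T, FRAME_DIM) window to WINDOW frames and flatten."""
        idx = np.linspace(0, len(window) - 1, WINDOW).astype(int)
        return window[idx].ravel()

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, WINDOW * FRAME_DIM))  # dummy training windows
    y = rng.integers(0, 25, 200)    # 24 gesture classes + class 24 = non-gesture
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    stream = rng.standard_normal((500, FRAME_DIM))      # dummy stream features
    t = 100                                             # candidate start frame
    candidates = [featurize(stream[t:t + s]) for s in SCALES]
    probs = clf.predict_proba(np.stack(candidates))     # (n_scales, n_classes)
    best = np.unravel_index(np.argmax(probs), probs.shape)
    print(f"scale={SCALES[best[0]]} frames, class={clf.classes_[best[1]]}, "
          f"confidence={probs[best]:.2f}")
    ```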

    Symbol Emergence in Robotics: A Survey

    Full text link
    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing a robot that can smoothly communicate with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. Comment: submitted to Advanced Robotics
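    Of the topics listed above, multimodal categorization is the easiest to caricature in code: a robot forms object categories from several sensory modalities without supervision. The SER literature typically uses richer models (e.g., multimodal latent Dirichlet allocation); the following KMeans stand-in, with invented dimensions and synthetic data, only illustrates the idea of clustering jointly over normalized modality blocks.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Crude stand-in for unsupervised multimodal categorization (not the
    # survey's methods; all dimensions and data below are invented).

    rng = np.random.default_rng(0)
    N_OBJECTS = 60
    VIS_DIM, HAPTIC_DIM, AUDIO_DIM = 8, 4, 6  # assumed per-modality feature sizes

    # Synthetic observations: three latent object categories plus noise.
    centers = rng.standard_normal((3, VIS_DIM + HAPTIC_DIM + AUDIO_DIM))
    labels_true = rng.integers(0, 3, N_OBJECTS)
    X = centers[labels_true] + 0.3 * rng.standard_normal((N_OBJECTS, centers.shape[1]))

    # Normalize each modality block so no single modality dominates distances.
    blocks = np.split(X, [VIS_DIM, VIS_DIM + HAPTIC_DIM], axis=1)
    X_norm = np.hstack([(b - b.mean(0)) / (b.std(0) + 1e-8) for b in blocks])

    categories = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_norm)
    print(categories[:10], labels_true[:10])  # clusters should track latent categories
    ```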