
    Fall Prediction and Prevention Systems: Recent Trends, Challenges, and Future Research Directions.

    Fall prediction is a multifaceted problem that involves complex interactions between physiological, behavioral, and environmental factors. Existing fall detection and prediction systems mainly focus on physiological factors such as gait, vision, and cognition, and do not address the multifactorial nature of falls. In addition, these systems lack efficient user interfaces and feedback for preventing future falls. Recent advances in the Internet of Things (IoT) and mobile technologies offer ample opportunities for integrating contextual information about patient behavior and environment along with physiological health data for predicting falls. This article reviews the state of the art in fall detection and prediction systems. It also describes the challenges, limitations, and future directions in the design and implementation of effective fall prediction and prevention systems.
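    The review surveys approaches rather than prescribing one, but the multifactorial idea can be made concrete. Below is a minimal, hypothetical sketch of combining normalized physiological, behavioral, and environmental indicators into a single fall-risk estimate; every feature name, weight, and the logistic form is an illustrative assumption, not taken from the article.

```python
# Hypothetical sketch: combining multifactorial features into a fall-risk score.
# Feature names, weights, and the logistic model are illustrative assumptions;
# the reviewed article describes the general idea, not this implementation.
import math

def fall_risk_score(features, weights, bias=-2.0):
    """Logistic combination of normalized risk features (each in 0..1)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

# Example: gait variability (physiological), night-time activity (behavioral),
# and hazards detected at home (environmental, e.g. from IoT sensors).
features = {"gait_variability": 0.7, "night_activity": 0.4, "home_hazards": 0.9}
weights  = {"gait_variability": 2.0, "night_activity": 1.0, "home_hazards": 1.5}
print(f"fall risk: {fall_risk_score(features, weights):.2f}")
```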

    Towards a Practical Pedestrian Distraction Detection Framework using Wearables

    Pedestrian safety continues to be a significant concern in urban communities, and pedestrian distraction is emerging as one of the main causes of grave and fatal accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate and efficient recognition of pedestrian distractions in real time, given the memory, computation, and communication limitations of these devices, however, remains the key technical challenge in the design of such systems. Earlier research efforts in pedestrian distraction detection using data available from mobile and wearable devices have primarily focused on achieving high detection accuracy, resulting in designs that are either resource-intensive and unsuitable for implementation on mainstream mobile devices, computationally slow and not useful for real-time pedestrian safety applications, or dependent on specialized hardware and therefore less likely to be adopted by most users. In the quest for a pedestrian safety system that achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption, this paper makes the following main contributions: (i) the design of a novel complex activity recognition framework which employs motion data available from users' mobile and wearable devices and a lightweight frequency-matching approach to accurately and efficiently recognize complex distraction-related activities, and (ii) a comprehensive comparative evaluation of the proposed framework against well-known complex activity recognition techniques from the literature, using data collected from human-subject pedestrians and prototype implementations on commercially available mobile and wearable devices.
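    The abstract names a lightweight frequency-matching approach without giving details; the sketch below shows one plausible reading under stated assumptions: take the dominant FFT frequencies of an accelerometer window and match them against per-activity frequency templates. The sampling rate, template values, and activity labels are all invented for illustration.

```python
# Hypothetical sketch of a lightweight frequency-matching recognizer:
# compare the dominant frequencies of an accelerometer window against
# per-activity frequency templates. All template values are made up.
import numpy as np

FS = 50  # sampling rate in Hz (assumed)

def dominant_freqs(window, k=3):
    """Return the k strongest non-DC frequencies (Hz) of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))  # mean removed, so DC ~ 0
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    top = np.argsort(spectrum)[-k:]
    return np.sort(freqs[top])

# Illustrative templates: typical dominant frequencies per activity.
TEMPLATES = {
    "walking":         np.array([0.9, 1.8, 2.7]),
    "walking_texting": np.array([0.7, 1.4, 2.1]),
    "standing_still":  np.array([0.1, 0.2, 0.3]),
}

def classify(window):
    obs = dominant_freqs(window)
    # Nearest template by Euclidean distance in frequency space.
    return min(TEMPLATES, key=lambda a: np.linalg.norm(TEMPLATES[a] - obs))

# Usage with a synthetic 2-second window of "walking-like" acceleration:
t = np.arange(2 * FS) / FS
window = np.sin(2 * np.pi * 1.8 * t) + 0.3 * np.random.randn(len(t))
print(classify(window))
```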

    Acceptability of novel lifelogging technology to determine context of sedentary behaviour in older adults

    Objective: Lifelogging, using body-worn sensors (activity monitors and time-lapse photography), has the potential to shed light on the context of sedentary behaviour. The objectives of this study were to examine the acceptability, to older adults, of using lifelogging technology and to indicate its usefulness for understanding behaviour. Method: Six older adults (4 males, mean age: 68 years) wore the equipment (ActivPAL™ and Vicon Revue™/SenseCam™) for 7 consecutive days during free-living activity. The older adults' perception of the lifelogging technology was assessed through semi-structured interviews, including a brief questionnaire (Likert scale), and reference to the researcher's diary. Results: Older adults in this study found the equipment acceptable to wear; it did not interfere with privacy or safety, nor did it create reactivity, but they reported problems with the technical functioning of the camera. Conclusion: This combination of sensors has good potential to provide lifelogging information on the context of sedentary behaviour.
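    The paper itself reports an acceptability study, but its conclusion rests on pairing the two sensor streams. As a hedged illustration of that pairing step, the sketch below matches a posture event from the activity monitor to the nearest time-lapse photo by timestamp; the timestamps, field layout, and 30-second tolerance are assumptions, not details from the study.

```python
# Hypothetical sketch: pairing time-lapse camera images with posture events
# from an activity monitor by timestamp, to give sedentary bouts context.
# The example timestamps and the 30 s tolerance are assumptions.
from datetime import datetime, timedelta
import bisect

def nearest_photo(event_time, photo_times, tolerance=timedelta(seconds=30)):
    """Return the photo timestamp closest to an event, or None if too far."""
    i = bisect.bisect_left(photo_times, event_time)
    candidates = photo_times[max(0, i - 1):i + 1]
    best = min(candidates, key=lambda t: abs(t - event_time), default=None)
    return best if best and abs(best - event_time) <= tolerance else None

photos = sorted([datetime(2015, 6, 1, 10, 0, 12), datetime(2015, 6, 1, 10, 0, 41)])
sit_event = datetime(2015, 6, 1, 10, 0, 30)
print(nearest_photo(sit_event, photos))  # -> the 10:00:41 photo (11 s away)
```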

    Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective

    Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded. We explore this tension from both perspectives, those of sighted passersby and blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information. We conduct two studies: an online survey with MTurkers (N=206) and an in-person experience study between pairs of blind (N=10) and sighted (N=40) participants, where blind participants wear a working prototype for pedestrian detection and pass by sighted participants. Our results suggest that the perspectives of both users and bystanders, along with the factors mentioned above, need to be carefully considered to mitigate potential social tensions. Comment: The 2020 ACM CHI Conference on Human Factors in Computing Systems (CHI 2020).
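    The prototype's detection pipeline is not specified in the abstract, so the following stands in for any frame-level detector: a minimal sketch using OpenCV's stock HOG person detector on frames from a wearable camera. The capture device index and detector parameters are assumptions, not the authors' actual setup.

```python
# Hypothetical sketch of the pedestrian-detection component using OpenCV's
# built-in HOG person detector; the paper's prototype is not described at
# this level, so this stands in for any frame-level detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame):
    """Return bounding boxes (x, y, w, h) of people in a BGR frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    return [tuple(b) for b in boxes]

# Usage on frames from a wearable camera (device index 0 is an assumption):
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in detect_pedestrians(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```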

    Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors

    Visual attention is highly fragmented during mobile interactions, but the erratic nature of attention shifts currently limits attentive user interfaces to adapting after the fact, i.e. after shifts have already happened. We instead study attention forecasting -- the challenging task of predicting users' gaze behaviour (overt visual attention) in the near future. We present a novel long-term dataset of everyday mobile phone interactions, continuously recorded from 20 participants engaged in common activities on a university campus over 4.5 hours each (more than 90 hours in total). We propose a proof-of-concept method that uses device-integrated sensors and body-worn cameras to encode rich information on device usage and users' visual scene. We demonstrate that our method can forecast bidirectional attention shifts and predict whether the primary attentional focus is on the handheld mobile device. We study the impact of different feature sets on performance and discuss the significant potential but also remaining challenges of forecasting user attention during mobile interactions. Comment: 13 pages, 9 figures.
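    The authors' proof-of-concept method is not reproduced here; as a hedged stand-in, the sketch below trains a classifier to forecast whether attention will be on the handheld device a short horizon ahead, from windowed sensor features. The synthetic features, the 1-second horizon, and the random-forest choice are illustrative assumptions.

```python
# Hypothetical sketch: forecasting whether attention will be on the handheld
# device a short horizon ahead, from windowed sensor features. The feature
# columns and labels below are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for per-window features, e.g. IMU statistics, touch events, and
# scene descriptors from the body-worn camera (columns are placeholders).
X = rng.normal(size=(1000, 8))
# Label: is the primary attentional focus on the device 1 s later? (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```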

    Human behavior understanding for worker-centered intelligent manufacturing

    “In a worker-centered intelligent manufacturing system, sensing and understanding of the worker's behavior are the primary tasks, which are essential for automatic performance evaluation & optimization, intelligent training & assistance, and human-robot collaboration. In this study, a worker-centered training & assistant system is proposed for intelligent manufacturing, featuring self-awareness and active guidance. To understand hand behavior, a method is proposed for complex hand gesture recognition using Convolutional Neural Networks (CNN) with multi-view augmentation and inference fusion, from depth images captured by a Microsoft Kinect. To sense and understand the worker in a more comprehensive way, a multi-modal approach is proposed for worker activity recognition using Inertial Measurement Unit (IMU) signals obtained from a Myo armband and videos from a visual camera. To automatically learn the importance of different sensors, a novel attention-based approach is proposed for human activity recognition using multiple IMU sensors worn at different body locations. To deploy the developed algorithms to the factory floor, a real-time assembly operation recognition system is proposed with fog computing and transfer learning. The proposed worker-centered training & assistant system has been validated and has demonstrated its feasibility and great potential for application in the manufacturing industry for frontline workers. Our developed approaches have been evaluated as follows: 1) the multi-view approach outperforms the state of the art on two public benchmark datasets, 2) the multi-modal approach achieves an accuracy of 97% on a worker activity dataset including 6 activities and achieves the best performance on a public dataset, 3) the attention-based method outperforms the state-of-the-art methods on five publicly available datasets, and 4) the developed transfer learning model achieves a real-time recognition accuracy of 95% on a dataset including 10 worker operations”--Abstract, page iv
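    Of the methods listed, the attention-based fusion of multiple worn IMUs lends itself to a compact sketch. The toy module below scores each sensor's feature vector, softmax-normalizes the scores into attention weights, and classifies the weighted sum; the dimensions, sensor count, and single linear scorer are assumptions, not the thesis's actual architecture.

```python
# Hypothetical sketch of attention-based fusion over multiple worn IMU sensors:
# each sensor's window is assumed to be pre-encoded into a feature vector, and
# learned attention weights decide how much each body location contributes.
import torch
import torch.nn as nn

class SensorAttentionFusion(nn.Module):
    def __init__(self, feat_dim=32, n_classes=6):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)        # relevance of each sensor
        self.classify = nn.Linear(feat_dim, n_classes)

    def forward(self, sensor_feats):               # (batch, n_sensors, feat_dim)
        attn = torch.softmax(self.score(sensor_feats), dim=1)  # (B, S, 1)
        fused = (attn * sensor_feats).sum(dim=1)   # weighted sum over sensors
        return self.classify(fused), attn.squeeze(-1)

# Usage: 4 IMUs (e.g. wrist, upper arm, waist, leg), batch of 8 windows.
model = SensorAttentionFusion()
feats = torch.randn(8, 4, 32)                      # placeholder encodings
logits, weights = model(feats)
print(logits.shape, weights.shape)  # torch.Size([8, 6]) torch.Size([8, 4])
```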