491 research outputs found

    Automatic Synchronization of Wearable Sensors and Video-Cameras for Ground Truth Annotation -- A Practical Approach

    Human activity recognition for pervasive interaction

    PhD thesis. This thesis addresses the challenge of computing food-preparation context in the kitchen. The automatic recognition of fine-grained human activities and food ingredients is realized through pervasive sensing, which we achieve by instrumenting kitchen objects such as knives, spoons, and chopping boards with sensors. Context recognition in the kitchen lies at the heart of a broad range of real-world applications. In particular, activity and food ingredient recognition in the kitchen is an essential component of situated services such as automatic prompting for cognitively impaired kitchen users and digital situated support for healthier-eating interventions. Previous work, however, has addressed the activity recognition problem by exploring high-level human activities using wearable sensing (i.e. sensors worn on the body) or technologies that raise privacy concerns (i.e. computer vision). Although such approaches have yielded significant results for a number of activity recognition problems, they are not applicable to our domain of investigation, for which we argue that the technology itself must be genuinely “invisible”, thereby allowing users to perform their activities in a completely natural manner.

    In this thesis we describe the development of pervasive sensing technologies and algorithms for fine-grained human activity and food ingredient recognition in the kitchen. After reviewing previous work on food and activity recognition, we present three systems that constitute increasingly sophisticated approaches to the challenge of kitchen context recognition. Two of these systems, Slice&Dice and Class-based Threshold Dynamic Time Warping (CBT-DTW), recognize fine-grained food preparation activities. Slice&Dice is a proof-of-concept application, whereas CBT-DTW is a real-time application that also addresses the problem of recognising unknown activities. The final system, KitchenSense, is a real-time context recognition framework that deals with the recognition of a more complex set of activities and includes the recognition of food ingredients and events in the kitchen. For each system, we describe the prototyping of pervasive sensing technologies and algorithms, as well as real-world experiments and empirical evaluations that validate the proposed solutions.

    Funding: the Vietnamese government’s 322 project, executed by the Vietnamese Ministry of Education and Training.
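    The abstract does not give implementation details for CBT-DTW, but the core idea it names, DTW-based classification with per-class thresholds so that poorly matching queries are rejected as "unknown", can be sketched roughly as follows. This is a minimal, hypothetical illustration and not the thesis's actual code: the function names, the use of 1-D feature sequences, and the way templates and thresholds are supplied are all assumptions.

```python
# Hypothetical sketch of class-based threshold DTW classification (not the thesis's code).
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def classify(query, templates, thresholds, unknown_label="unknown"):
    """templates: {label: [reference sequences]}, thresholds: {label: max accepted DTW distance}.

    Returns the label of the closest template whose distance is within that class's
    threshold, or `unknown_label` if no class accepts the query.
    """
    best_label, best_dist = unknown_label, np.inf
    for label, refs in templates.items():
        d = min(dtw_distance(query, r) for r in refs)
        if d < best_dist and d <= thresholds[label]:
            best_label, best_dist = label, d
    return best_label
```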

    Vision2Sensor: Knowledge Transfer Across Sensing Modalities for Human Activity Recognition

    NFC based dataset annotation within a behavioral alerting platform

    A Simple Method for Synchronising Multiple IMUs Using the Magnetometer

    This paper presents a novel method to synchronise multiple IMU (inertial measurement unit) devices using their onboard magnetometers. An external electromagnetic pulse creates a known event that is measured by the magnetometer of every IMU and is in turn used to synchronise the devices. The method is applied to four IMU devices, decreasing their de-synchronisation over a one-hour recording from 270 ms, when using only the RTC (real-time clock), to 40 ms. It is proposed that this can be further improved to approximately 3 ms by increasing the magnetometer’s sample frequency from 25 Hz to 300 Hz.
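    As a rough illustration of the synchronisation idea described above, the sketch below locates the magnetic pulse in each device's magnetometer stream and derives a time offset per device relative to an arbitrarily chosen reference. It is a hypothetical sketch, not the paper's implementation: the data layout (per-device timestamp and magnetometer arrays), the spike-detection rule, and the choice of reference device are all assumptions.

```python
# Hypothetical sketch: align several IMU recordings on a shared magnetic pulse.
import numpy as np

def pulse_time(timestamps: np.ndarray, mag_xyz: np.ndarray) -> float:
    """Return the timestamp of the strongest magnetometer spike (assumed to be the sync pulse)."""
    magnitude = np.linalg.norm(mag_xyz, axis=1)          # field strength per sample
    baseline = np.median(magnitude)                       # typical ambient field
    return float(timestamps[np.argmax(np.abs(magnitude - baseline))])

def offsets_to_reference(recordings: dict) -> dict:
    """recordings: {device_id: (timestamps, mag_xyz)}.

    Returns each device's clock offset (in the same units as the timestamps)
    relative to the first device in the dictionary.
    """
    ref_id = next(iter(recordings))
    ref_t = pulse_time(*recordings[ref_id])
    return {dev: pulse_time(t, m) - ref_t for dev, (t, m) in recordings.items()}
```

    Subtracting each device's offset from its timestamps would then bring all streams onto the reference device's clock.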

    Human Motion Analysis Using Very Few Inertial Measurement Units

    Realistic character animation and human motion analysis have become major topics of research. In this doctoral research work, three different aspects of human motion analysis and synthesis are explored.

    Firstly, to better manage the tens of gigabytes of publicly available human motion capture data sets, a relational database approach is proposed. We show that organizing motion capture data in a relational database provides several benefits, such as centralized access to the major freely available mocap data sets, fast search and retrieval of data, annotation-based retrieval of contents, and the incorporation of data from non-mocap sensor modalities. The same idea is also proposed for managing quadruped motion capture data.

    Secondly, a new method of full-body human motion reconstruction using a very sparse configuration of sensors is proposed. In this setup, two sensors are attached to the upper extremities and one sensor is attached to the lower trunk. The lower trunk sensor is used to estimate ground contacts, which are then used in the reconstruction process along with the low-dimensional inputs from the sensors attached to the upper extremities. The reconstruction results of the proposed method have been compared with those of existing approaches, and the proposed method yields lower average reconstruction errors.

    Thirdly, in the field of human motion analysis, a novel method is proposed for estimating human soft biometrics, such as gender, height, and age, from the inertial data of a simple human walk. The method extracts several features from the time and frequency domains for each individual step, and a random forest classifier is fed with the extracted features in order to estimate the soft biometrics. The classification results show that the gender, height, and age of a person can be estimated with high accuracy from the inertial data of a single step of his/her walk.
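    The soft-biometrics pipeline described in the third part, per-step time- and frequency-domain features fed to a random forest, could look roughly like the sketch below. This is a hypothetical illustration rather than the thesis's actual feature set: the specific features, the 100 Hz default sampling rate, and the scikit-learn classifier settings are assumptions.

```python
# Hypothetical sketch: per-step features + random forest for soft-biometric classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def step_features(accel: np.ndarray, fs: float) -> np.ndarray:
    """Simple time- and frequency-domain features for one step's (n_samples, 3) acceleration."""
    mag = np.linalg.norm(accel, axis=1)                       # acceleration magnitude
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))          # zero-mean spectrum
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum)] if len(spectrum) else 0.0
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min(),
                     len(mag) / fs,          # step duration in seconds
                     dominant,               # dominant frequency
                     spectrum.sum()])        # overall spectral energy

def train(steps, labels, fs=100.0):
    """steps: list of (n_samples, 3) acceleration windows; labels: e.g. gender per step."""
    X = np.vstack([step_features(s, fs) for s in steps])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```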

    Human Movement and Ergonomics: an Industry-Oriented Dataset for Collaborative Robotics

    Improving work conditions in industry is a major challenge that can be addressed with new emerging technologies such as collaborative robots. Machine learning techniques can improve the performance of these robots by endowing them with a degree of awareness of the human state and ergonomic conditions. The availability of appropriate datasets to learn models and to test prediction and control algorithms, however, remains an issue. This paper presents a dataset of human motions in industry-like activities, fully labeled according to the ergonomics assessment worksheet EAWS, which is widely used in industries such as car manufacturing. Thirteen participants performed several series of activities, such as screwing and manipulating loads in different conditions, resulting in more than 5 hours of data. The dataset contains the participants' whole-body kinematics recorded both with wearable inertial sensors and with marker-based optical motion capture, finger pressure force, video recordings, and annotations by three independent annotators of the performed action and the adopted posture following the EAWS postural grid. Sensor data are available in different formats to facilitate their reuse. The dataset is intended for researchers developing algorithms for classifying, predicting, or evaluating human motion in industrial settings, as well as for researchers developing collaborative robotics solutions that aim to improve workers' ergonomics. The annotation of the whole dataset following an ergonomics standard makes it valuable for ergonomics-related applications, but we expect its use to be broader in the robotics, machine learning, and human movement communities.
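    Because the dataset carries per-frame action and posture labels from three independent annotators, a consumer of the data will typically need to merge them into a single label track. The sketch below shows one simple, hypothetical way to do that by per-frame majority vote; the function name, label encoding, and tie handling are assumptions on my part, not part of the dataset's documentation.

```python
# Hypothetical sketch: per-frame majority vote across three annotators' label sequences.
from collections import Counter

def fuse_annotations(a1, a2, a3, tie_label="disagreement"):
    """Merge three equal-length per-frame label sequences by majority vote.

    Frames where all three annotators disagree are marked with `tie_label`
    so they can be inspected or excluded later.
    """
    fused = []
    for labels in zip(a1, a2, a3):
        (label, count), = Counter(labels).most_common(1)
        fused.append(label if count >= 2 else tie_label)
    return fused
```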