
    From Traditional to Interdisciplinary Approaches for Inertial Body Motion Capture

    Inertial motion capture (mocap) is a widespread technology for capturing human motion outside the lab, e.g. for applications in sports, ergonomics, rehabilitation and personal fitness. Even though mature systems are commercially available, inertial mocap remains a subject of research due to a number of limitations: besides measurement errors and sensor sparsity, simplified body models and calibration routines, soft tissue artefacts and varying body shapes also lead to limited precision and robustness compared to optical gold-standard systems. The goal of the research group wearHEALTH at the TU Kaiserslautern is to tackle these challenges by bringing together ideas and approaches from different disciplines, including biomechanics, sensor fusion, computer vision and (optimal control) simulation. In this talk, we will present an overview of our approaches and applications, starting from the more traditional ones

    EMDB: The Electromagnetic Database of Global 3D Human Pose and Shape in the Wild

    We present EMDB, the Electromagnetic Database of Global 3D Human Pose and Shape in the Wild. EMDB is a novel dataset that contains high-quality 3D SMPL pose and shape parameters with global body and camera trajectories for in-the-wild videos. We use body-worn, wireless electromagnetic (EM) sensors and a hand-held iPhone to record a total of 58 minutes of motion data, distributed over 81 indoor and outdoor sequences and 10 participants. Together with accurate body poses and shapes, we also provide global camera poses and body root trajectories. To construct EMDB, we propose a multi-stage optimization procedure, which first fits SMPL to the 6-DoF EM measurements and then refines the poses via image observations. To achieve high-quality results, we leverage a neural implicit avatar model to reconstruct detailed human surface geometry and appearance, which allows for improved alignment and smoothness via a dense pixel-level objective. Our evaluations, conducted with a multi-view volumetric capture system, indicate that EMDB has an expected accuracy of 2.3 cm positional and 10.6 degrees angular error, surpassing the accuracy of previous in-the-wild datasets. We evaluate existing state-of-the-art monocular RGB methods for camera-relative and global pose estimation on EMDB. EMDB is publicly available at https://ait.ethz.ch/emdb. Accepted to ICCV 2023.
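    As a minimal sketch of the kind of stage-1 objective described above, the following toy example jointly fits per-sensor positions and rotations to noisy 6-DoF measurements via nonlinear least squares. The parameterization, noise levels and equal residual weighting are illustrative assumptions; the actual EMDB pipeline optimizes SMPL pose and shape parameters and adds a dense image-based refinement term.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as R

    rng = np.random.default_rng(0)
    n = 4  # number of body-worn EM sensors (assumed for this toy example)

    # Synthetic ground-truth sensor poses.
    true_pos = rng.normal(size=(n, 3))
    true_rot = R.from_rotvec(0.3 * rng.normal(size=(n, 3)))

    # Noisy 6-DoF "EM measurements" of those poses (positions in metres).
    meas_pos = true_pos + 0.005 * rng.normal(size=(n, 3))
    meas_rot = R.from_rotvec(0.02 * rng.normal(size=(n, 3))) * true_rot

    def residuals(x):
        # x packs one position (3) and one rotation vector (3) per sensor.
        pos = x[:3 * n].reshape(n, 3)
        rot = R.from_rotvec(x[3 * n:].reshape(n, 3))
        r_pos = (pos - meas_pos).ravel()                    # positional term
        r_ang = (rot * meas_rot.inv()).as_rotvec().ravel()  # angular term
        return np.concatenate([r_pos, r_ang])

    fit = least_squares(residuals, np.zeros(6 * n))
    print("final cost:", fit.cost)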

    Underwater Robots Part I: Current Systems and Problem Pose

    This paper constitutes the first part of a general overview of underwater robotics. The second part is titled "Underwater Robots Part II: Existing Solutions and Open Issues".

    A Survey on Augmented Reality Challenges and Tracking

    This survey paper presents a classification of the challenges and tracking techniques in the field of augmented reality. The challenges in augmented reality are categorized into performance challenges, alignment challenges, interaction challenges, mobility/portability challenges and visualization challenges. Augmented reality tracking techniques are mainly divided into sensor-based tracking, vision-based tracking and hybrid tracking. Sensor-based tracking is further divided into optical tracking, magnetic tracking, acoustic tracking, inertial tracking, or any combination of these to form hybrid sensor tracking. Similarly, vision-based tracking is divided into marker-based tracking and markerless tracking. Each tracking technique has its advantages and limitations. Hybrid tracking provides robust and accurate tracking, but it involves financial and technical difficulties.

    Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion

    Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact in human performance assessment and human-robot interaction. IMU motion tracking systems are self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared on the task of reconstructing a human arm motion. IMU-based estimation was compared against the marker-based Vicon motion tracking system, taken as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error).
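    Below is a minimal sketch of the kind of accuracy metric quoted above, namely the mean Euclidean distance between an IMU-estimated trajectory and a marker-based ground-truth trajectory; the array shapes and the synthetic data are assumptions for illustration.

    import numpy as np

    def mean_position_error(est, gt):
        """est, gt: (T, 3) arrays of end-effector positions in metres."""
        return np.linalg.norm(est - gt, axis=1).mean()

    # Toy data: a noisy estimate around a ground-truth arc.
    t = np.linspace(0.0, 1.0, 100)
    gt = np.stack([np.cos(t), np.sin(t), t], axis=1)
    est = gt + np.random.default_rng(0).normal(scale=0.02, size=gt.shape)

    print(f"mean error: {1000 * mean_position_error(est, gt):.1f} mm")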

    Wearable Sensing for Solid Biomechanics: A Review

    Understanding the solid biomechanics of the human body is important to the study of the structure and function of the body, with applications in health care, sport, well-being, and workflow analysis. Conventional laboratory-based biomechanical analysis systems and observation-based tests are designed only to capture brief snapshots of the mechanics of movement. With recent developments in wearable sensing technologies, biomechanical analysis can be conducted in less-constrained environments, allowing continuous monitoring and analysis beyond laboratory settings. In this paper, we review the current research in wearable sensing technologies for biomechanical analysis, focusing on sensing and analytics that enable continuous, long-term monitoring of kinematics and kinetics in a free-living environment. The paper describes the main technical challenges, including measurement drift, external interference, nonlinear sensor properties, sensor placement, and muscle variations, that can affect the accuracy and robustness of existing methods, as well as different methods for reducing the impact of these error sources. Recent developments in motion estimation in kinematics, mobile force sensing in kinetics, sensor reduction for electromyography, and the future direction of sensing for biomechanics are also discussed.
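    Measurement drift, the first challenge listed above, is commonly mitigated by fusing a smooth but drifting integration with a noisy but drift-free reference. The following is a minimal sketch of one standard approach, a complementary filter estimating pitch from a gyroscope and an accelerometer; the signal model, noise levels and fusion gain are illustrative assumptions, not a method taken from the paper.

    import numpy as np

    dt, alpha = 0.01, 0.98  # sample period (s), fusion gain (assumed)
    rng = np.random.default_rng(0)

    t = np.arange(0.0, 10.0, dt)
    true_pitch = 0.5 * np.sin(0.5 * t)  # rad
    # Gyro rate with a constant 0.01 rad/s bias (the source of drift).
    gyro = np.gradient(true_pitch, dt) + 0.01 + 0.02 * rng.normal(size=t.size)
    # Drift-free but noisy tilt estimate from the accelerometer.
    acc_pitch = true_pitch + 0.05 * rng.normal(size=t.size)

    pitch, fused = 0.0, np.empty_like(t)
    for i in range(t.size):
        # Integrate the gyro, then pull the estimate toward the
        # accelerometer tilt to bound the accumulated drift.
        pitch = alpha * (pitch + gyro[i] * dt) + (1 - alpha) * acc_pitch[i]
        fused[i] = pitch

    print("RMS error (rad):", np.sqrt(np.mean((fused - true_pitch) ** 2)))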

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric 'world-based' case. BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition, enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user, for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
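    A minimal sketch of Monte Carlo uncertainty propagation in the spirit of the GpsTunes description above: sample plausible user positions and headings from the sensor noise, step each sample forward, and score a site-of-interest by the fraction of samples that end up nearby. All parameter values and names are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000  # number of Monte Carlo samples

    gps_fix = np.array([0.0, 0.0])  # local frame, metres
    gps_sigma = 5.0                 # GPS position noise (m)
    heading, heading_sigma = np.pi / 4, np.radians(15)
    speed, horizon = 1.4, 10.0      # walking speed (m/s), look-ahead (s)

    # Sample positions and headings, then propagate each sample forward.
    pos = gps_fix + gps_sigma * rng.normal(size=(n, 2))
    hdg = heading + heading_sigma * rng.normal(size=n)
    step = speed * horizon * np.stack([np.cos(hdg), np.sin(hdg)], axis=1)
    future = pos + step

    # Probability mass of the predicted-position distribution near a site.
    site = np.array([10.0, 10.0])
    p_reach = np.mean(np.linalg.norm(future - site, axis=1) < 5.0)
    print(f"P(within 5 m of site in {horizon:.0f} s) = {p_reach:.2f}")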
