
    Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

    We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall. Comment: 12 pages, Accepted at Eurographics 201
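The fitting described above — matching a body model to orientation and acceleration measurements over a window of frames — can be sketched as a pair of per-frame residual terms. The toy kinematics below are a deliberately simplified stand-in, not the statistical body model SIP actually optimizes; all names are illustrative.

```python
import numpy as np

def orientation_residual(R_model, R_imu):
    # Discrepancy between predicted and measured sensor orientation.
    # The chordal (Frobenius) distance is used here as a cheap proxy for
    # the geodesic distance on SO(3).
    return np.linalg.norm(R_model.T @ R_imu - np.eye(3))

def acceleration_residual(p_model, dt, a_imu):
    # Finite-difference acceleration of a predicted sensor position over
    # three consecutive frames, compared against the measured acceleration.
    a_model = (p_model[2] - 2 * p_model[1] + p_model[0]) / dt**2
    return np.linalg.norm(a_model - a_imu)

# Toy check: identical orientations and a kinematically consistent
# trajectory (constant 9.81 m/s^2 upward, starting at rest) give zero cost.
R = np.eye(3)
dt = 0.01
p = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.5 * 9.81 * dt**2],
              [0.0, 0.0, 0.5 * 9.81 * (2 * dt)**2]])
a = np.array([0.0, 0.0, 9.81])
print(round(orientation_residual(R, R), 6))   # 0.0
print(round(acceleration_residual(p, dt, a), 6))  # 0.0
```

In the actual method both residuals would be summed over all sensors and frames and minimized jointly over the pose parameters.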

    Improving Dynamics Estimations and Low Level Torque Control Through Inertial Sensing

    In 1996, professors J. Edward Colgate and Michael Peshkin invented cobots as robotic equipment safe enough to interact with human workers. Twenty years later, collaborative robots are in high demand in the packaging industry and have already been widely adopted by companies struggling to meet customer demand. Meanwhile, cobots are still making their way into environments where value-added tasks require more complex interactions between robots and human operators. Other applications, such as a rescue mission in a disaster scenario, require robots to deal with highly dynamic environments and uneven terrain. All these applications demand robust, fine, and fast control of the interaction forces, especially in the case of locomotion on uneven terrain in an environment where unexpected events can occur. In under-actuated systems, which is typically the case for mobile robots, such interaction forces can only be modulated through the control of internal joint torques. Efficient low-level joint torque control is therefore a critical requirement, and it motivated the research presented here. This thesis provides a thorough model analysis of a typical low-level joint actuation sub-system, powered by a brushless DC motor and suitable for torque control. It then proposes improved procedures for identifying the model parameters, which is particularly challenging in the case of coupled joints, with a view to improving their control. Alongside these procedures, it proposes novel methods for the calibration of inertial sensors, as well as the use of such sensors in the estimation of joint torques.
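Identification of actuation-model parameters, as pursued in the thesis, is often cast as a linear least-squares problem. The sketch below uses a common simplified joint-torque model, tau = Kt*i - b*dq - tau_c*sign(dq) (motor constant, viscous friction, Coulomb friction); the model and all parameter names are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
i = rng.uniform(-2, 2, n)    # motor current [A]
dq = rng.uniform(-3, 3, n)   # joint velocity [rad/s]

# Ground-truth parameters used to synthesize noisy torque measurements.
Kt, b, tau_c = 0.12, 0.05, 0.3
tau = Kt * i - b * dq - tau_c * np.sign(dq) + rng.normal(0, 0.005, n)

# The model is linear in (Kt, b, tau_c), so stack the regressors and solve.
A = np.column_stack([i, -dq, -np.sign(dq)])
params, *_ = np.linalg.lstsq(A, tau, rcond=None)
print(np.round(params, 3))  # ~[0.12, 0.05, 0.3]
```

Coupled joints make the regressor matrix larger and worse conditioned, which is why the identification procedure itself becomes the research problem.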

    Human Motion Analysis with Wearable Inertial Sensors

    High-resolution, quantitative data obtained by a human motion capture system can be used to better understand the causes of many diseases and design effective treatments. In the daily care of the aging population, two issues are critical: one is to continuously track the motions and position of aging people when they are at home, inside a building, or in an unknown environment; the other is to monitor their health status in real time in the free-living environment. Continuous monitoring of human movement in a natural living environment potentially provides more valuable feedback than monitoring in laboratory settings. However, it has been extremely challenging to go beyond the laboratory and obtain accurate measurements of human physical activity in free-living environments. Commercial motion capture systems produce excellent in-studio capture and reconstructions, but offer no comparable solution for acquisition in everyday environments. Therefore, in this dissertation, a wearable human motion analysis system is developed for continuously tracking human motions, monitoring health status, positioning the wearer, and recording their itinerary. Two systems are developed to pursue these goals: one for tracking human body motions and one for positioning a human. First, an inertial human body motion tracking system based on our developed inertial measurement unit (IMU) is introduced. By attaching a wearable IMU to each body segment at an arbitrary position, segment motions can be measured and translated into inertial data. A human model can then be reconstructed in real time from the inertial data using highly efficient twist and exponential-map techniques. Second, to validate the feasibility of the tracking system in practical applications, model-based quantification approaches for resting tremor and lower-extremity bradykinesia in Parkinson's disease (PD) are proposed.
    By estimating all joint angles involved in PD symptoms from the reconstructed human model, angle characteristics with corresponding medical ratings are used to train an HMM classifier for quantification. In addition, a pedestrian positioning system is developed for tracking the user's itinerary and position in the global frame. Corresponding tests have been carried out to assess the performance of each system.
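The twist and exponential-map machinery mentioned above reduces, for a single rotation, to Rodrigues' formula: a unit axis and an angle map to a rotation matrix. A minimal sketch (function names are illustrative):

```python
import numpy as np

def hat(w):
    # Skew-symmetric matrix of a 3-vector, so that hat(w) @ v == cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_map(w, theta):
    # Rodrigues' formula: rotation by angle theta about unit axis w.
    W = hat(w)
    return np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)

# Rotating the x-axis by 90 degrees about z yields the y-axis.
R = exp_map(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # [0. 1. 0.]
```

Chaining such maps along the kinematic tree, with translations included (full twists), is what makes real-time reconstruction of the body model efficient.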

    State Derivation of a 12-Axis Gyroscope-Free Inertial Measurement Unit

    The derivation of linear acceleration, angular acceleration, and angular velocity states from a 12-axis gyroscope-free inertial measurement unit that utilizes four 3-axis accelerometer measurements at four distinct locations is reported. In particular, a new algorithm is demonstrated that derives the angular velocity from its quadratic form and derivative form based on the context-based interacting multiple model. The performance of the system was evaluated under arbitrary 3-dimensional motion.
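The accelerometer-array principle behind a gyroscope-free IMU is the rigid-body point-acceleration relation a_i = a_c + alpha × r_i + omega × (omega × r_i). The alpha-dependent part is linear in alpha, so with several sensor sites it can be recovered by least squares. A minimal sketch with omega = 0 (sensor positions and values are illustrative, not the paper's configuration):

```python
import numpy as np

def hat(v):
    # Skew-symmetric matrix: hat(v) @ x == cross(v, x).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Four accelerometer sites on the rigid body (meters).
r = np.array([[0.1, 0.0, 0.0],
              [0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1],
              [0.1, 0.1, 0.1]])
a_c = np.array([0.0, 0.0, 9.81])      # acceleration at the reference point
alpha_true = np.array([1.0, -2.0, 0.5])  # angular acceleration to recover

# Synthesize measurements: a_i = a_c + alpha x r_i  (omega = 0 here).
meas = np.array([a_c + np.cross(alpha_true, ri) for ri in r])

# alpha x r_i = -hat(r_i) @ alpha, so stack the sites into one linear system.
A = np.vstack([-hat(ri) for ri in r])
b = (meas - a_c).ravel()
alpha_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(alpha_est, 6))  # [ 1.  -2.   0.5]
```

Recovering omega itself is harder because it enters quadratically — which is exactly why the paper combines the quadratic and derivative forms in an interacting multiple model.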

    Measuring motion with kinematically redundant accelerometer arrays: theory, simulation and implementation

    This work presents two schemes for measuring the linear and angular kinematics of a rigid body using a kinematically redundant array of triple-axis accelerometers, with potential applications in biomechanics. A novel angular velocity estimation algorithm is proposed and evaluated that can compensate for angular velocity errors using measurements of the direction of gravity. Analysis and discussion of optimal sensor array characteristics are provided. A damped 2-axis pendulum was used to excite all 6 DoF of a suspended accelerometer array through determined complex motion, and is the basis of both simulation and experimental studies. The relationship between accuracy and sensor redundancy is investigated for arrays of up to 100 triple-axis accelerometers (300 accelerometer axes) in simulation and 10 equivalent sensors (30 accelerometer axes) in the laboratory test rig. The paper also reports on the sensor calibration techniques and hardware implementation.
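The accuracy-versus-redundancy trade-off studied here can be illustrated with a toy experiment: when N sensors observe the same quantity corrupted by independent noise, the least-squares (here simply the mean) estimate's error shrinks roughly like 1/sqrt(N). The noise level and sensor counts below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.array([0.0, 0.0, 9.81])  # common acceleration seen by all sensors

def mean_error(n_sensors, n_trials=200):
    # Average estimation error over repeated noisy trials.
    errs = []
    for _ in range(n_trials):
        readings = truth + rng.normal(0, 0.05, size=(n_sensors, 3))
        errs.append(np.linalg.norm(readings.mean(axis=0) - truth))
    return float(np.mean(errs))

e1, e100 = mean_error(1), mean_error(100)
print(e1 > 5 * e100)  # True: roughly a 10x reduction for 100 sensors
```

Real arrays gain less than this ideal because sensor errors are partly correlated and the geometry matters, which is why the paper analyzes optimal array characteristics rather than just sensor count.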

    Best Axes Composition Extended: Multiple Gyroscopes and Accelerometers Data Fusion to Reduce Systematic Error

    Multiple rigidly attached Inertial Measurement Unit (IMU) sensors provide a richer flow of data compared to a single IMU. State-of-the-art methods follow a probabilistic model of IMU measurements based on the random nature of errors, combined under a Bayesian framework. However, affordable low-grade IMUs additionally suffer from systematic errors due to imperfections not covered by their probabilistic model. In this paper, we propose the Best Axes Composition (BAC), a method for combining Multiple IMU (MIMU) sensor data for accurate 3D pose estimation that takes into account both random and systematic errors by dynamically choosing the best IMU axes from the set of all available axes. We evaluate our approach on our MIMU visual-inertial sensor and compare the performance of the method with a purely probabilistic state-of-the-art approach to MIMU data fusion. We show that BAC outperforms the latter and achieves up to 20% accuracy improvement for both orientation and position estimation in open loop, although proper treatment is needed to retain the obtained gain. Comment: Accepted to Robotics and Autonomous Systems journal. arXiv admin note: substantial text overlap with arXiv:2107.0263
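The intuition behind dynamically selecting the best axes can be sketched as follows: several redundant axes observe the same signal, but some carry a systematic bias; scoring each axis against a robust consensus and keeping the best-scoring ones rejects the systematically wrong axes. The scoring rule below (deviation from the median) is purely illustrative, not BAC's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
signal = np.sin(2 * np.pi * t)  # ground-truth angular rate on one axis

# Three redundant axes observing the same signal; axis 1 has a systematic bias.
axes = np.stack([
    signal + rng.normal(0, 0.01, t.size),        # good axis
    signal + 0.3 + rng.normal(0, 0.01, t.size),  # biased axis
    signal + rng.normal(0, 0.01, t.size),        # good axis
])

# Score each axis by its mean deviation from the per-sample median consensus.
consensus = np.median(axes, axis=0)
scores = np.abs(axes - consensus).mean(axis=1)
best = int(np.argmin(scores))
print(best != 1)  # True: the biased axis is never selected
```

A purely probabilistic fusion would average the biased axis in, diluting the estimate; selection sidesteps the bias entirely, at the cost of discarding some measurements.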

    Airborne Navigation by Fusing Inertial and Camera Data

    Unmanned aircraft systems (UASs) are often used as measuring systems, so precise knowledge of their position and orientation is required. This thesis presents research on the conception and realization of a system that combines GPS-assisted inertial navigation systems with advances in the area of camera-based navigation. It is shown how these complementary approaches can be used in a joint framework. In contrast to widely used concepts utilizing only one of the two approaches, a more robust overall system is realized. The presented algorithms are based on the mathematical concepts of rigid-body motions. After derivation of the underlying equations, the methods are evaluated in numerical studies and simulations. Based on the results, real-world systems are used to collect data, which is evaluated and discussed. Two approaches are proposed for the system calibration, which describes the offsets between the coordinate systems of the sensors. The first approach integrates the parameters of the system calibration into the classical bundle adjustment. The optimization is presented very descriptively in a graph-based formulation. It requires a high-precision INS and data from a measurement flight. In contrast to classical methods, a flexible flight course can be used and no cost-intensive ground control points are required. The second approach enables the calibration of inertial navigation systems with low positional accuracy. Line observations are used to optimize the rotational part of the offsets. Knowledge of the offsets between the coordinate systems of the sensors allows measurements to be transformed bidirectionally. This is the basis for a fusion concept combining measurements from the inertial navigation system with a visual navigation approach. As a result, more robust estimates of the system's own position and orientation are achieved. Moreover, the map created from the camera images is georeferenced.
    It is shown how this map can be used to navigate an unmanned aerial system back to its starting position in the case of disturbed or failed GPS reception. The high precision of the map allows navigation through previously unexplored areas by taking into account the maximal drift of camera-only navigation. The evaluated concept provides insight into the possibilities of robust navigation of unmanned aerial systems with complementary sensors. Constantly increasing computing power allows the evaluation of large amounts of data and the development of new concepts for fusing the information. Future navigation systems will use the data of all available sensors to achieve the best navigation solution at any time.
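The role of the sensor-to-sensor calibration described above can be sketched with homogeneous transforms: once the rigid offset (R, t) between the camera and INS frames is known, a measurement in one frame can be expressed in the other, and vice versa via the inverse. The numeric values below are made up for illustration.

```python
import numpy as np

def to_homogeneous(R, t):
    # Pack a rotation and translation into a 4x4 rigid-body transform.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical camera-to-INS calibration: 90-degree yaw plus a lever arm.
theta = np.pi / 2
R_cam_to_ins = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0, 0.0, 1.0]])
T = to_homogeneous(R_cam_to_ins, np.array([0.2, 0.0, -0.1]))

p_cam = np.array([1.0, 0.0, 0.0, 1.0])   # point observed in the camera frame
p_ins = T @ p_cam                         # same point in the INS frame
p_back = np.linalg.inv(T) @ p_ins         # bidirectional, as in the text
print(np.round(p_ins[:3], 6), np.allclose(p_back, p_cam))
```

Errors in this transform bias every fused measurement, which is why the thesis devotes two separate calibration approaches to estimating it.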