2,287 research outputs found

    Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

    We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
    Comment: 12 pages, Accepted at Eurographics 201
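    The core of the method is a multi-frame optimization that fits pose parameters to orientation and acceleration measurements. The sketch below illustrates that idea on a hypothetical one-degree-of-freedom limb with hand-picked residual weights; the paper instead fits a full statistical body model to six IMUs.

        # Minimal sketch of the multi-frame fitting idea (toy model, not SIP's body-model
        # formulation): jointly optimize joint angles over a window so that simulated
        # orientations and accelerations match noisy "IMU" measurements.
        import numpy as np
        from scipy.optimize import least_squares

        T, dt = 10, 1.0 / 60.0          # frames in the window, frame interval

        def forward(theta):
            """Toy 1-DoF limb: sensor orientation equals the joint angle,
            sensor position is the tip of a unit-length segment."""
            pos = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
            return theta, pos

        def residuals(theta, meas_ori, meas_acc):
            ori, pos = forward(theta)
            acc = (pos[2:] - 2 * pos[1:-1] + pos[:-2]) / dt**2   # finite-difference acceleration
            r_ori = ori - meas_ori                               # orientation term
            r_acc = 0.01 * (acc - meas_acc).ravel()              # acceleration term (weighted)
            r_smooth = 10.0 * np.diff(theta, 2)                  # temporal smoothness prior
            return np.concatenate([r_ori, r_acc, r_smooth])

        # synthetic ground truth and noisy measurements
        theta_true = np.linspace(0.0, 0.5, T)
        _, pos_true = forward(theta_true)
        acc_true = (pos_true[2:] - 2 * pos_true[1:-1] + pos_true[:-2]) / dt**2
        meas_ori = theta_true + 0.05 * np.random.randn(T)
        meas_acc = acc_true + 0.5 * np.random.randn(*acc_true.shape)

        sol = least_squares(residuals, np.zeros(T), args=(meas_ori, meas_acc))
        print("mean pose error:", np.abs(sol.x - theta_true).mean())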

    On sensor fusion for airborne wind energy systems

    A study on filtering aspects of airborne wind energy generators is presented. This class of renewable energy systems aims to convert the aerodynamic forces generated by tethered wings, flying in closed paths transverse to the wind flow, into electricity. The accurate reconstruction of the wing's position, velocity and heading is of fundamental importance for the automatic control of these kinds of systems. The difficulty of the estimation problem arises from the nonlinear dynamics, wide speed range, large accelerations and fast changes of direction that the wing experiences during operation. It is shown that the overall nonlinear system has a specific structure allowing its partitioning into sub-systems, hence leading to a series of simpler filtering problems. Different sensor setups are then considered, and the related sensor fusion algorithms are presented. The results of experimental tests carried out with a small-scale prototype and wings of different sizes are discussed. The designed filtering algorithms rely purely on kinematic laws, hence they are independent of features such as wing area, aerodynamic efficiency, mass, etc. Therefore, the presented results are also representative of systems of larger size and different wing design, with a different number of tethers and/or rigid wings.
    Comment: This manuscript is a preprint of a paper accepted for publication in the IEEE Transactions on Control Systems Technology and is subject to IEEE Copyright. The copy of record is available at the IEEEXplore library: http://ieeexplore.ieee.org
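    The "purely kinematic" filtering idea can be sketched with a hypothetical one-dimensional constant-acceleration Kalman filter: the measured acceleration drives the prediction and position fixes correct it, so no wing-specific parameters (area, mass, aerodynamic efficiency) appear. The paper's actual filters partition the full 3D problem into sub-systems and handle several sensor setups.

        # Hypothetical 1-D kinematic Kalman filter (illustration only, not the
        # partitioned multi-sensor design of the paper).
        import numpy as np

        dt = 0.01
        F = np.array([[1.0, dt], [0.0, 1.0]])   # kinematic transition for [position, velocity]
        B = np.array([[0.5 * dt**2], [dt]])     # acceleration input matrix
        H = np.array([[1.0, 0.0]])              # only position is measured
        Q = 1e-3 * np.eye(2)                    # process noise
        R = np.array([[0.25]])                  # position measurement noise

        x, P = np.zeros((2, 1)), np.eye(2)      # state estimate and covariance

        def step(x, P, acc_meas, pos_meas):
            # predict with the measured acceleration as a known input
            x = F @ x + B * acc_meas
            P = F @ P @ F.T + Q
            # correct with the position fix
            y = pos_meas - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(2) - K @ H) @ P

        for k in range(1000):                   # synthetic sinusoidal trajectory
            t = k * dt
            x, P = step(x, P, acc_meas=-np.sin(t),
                        pos_meas=np.sin(t) + 0.1 * np.random.randn())
        print("estimated position/velocity:", x.ravel())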

    Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping

    Modern 3D laser-range scanners have a high data rate, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows for refining alignments during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in the local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory makes it possible to correct measurements between scan poses. We evaluate our approach in multiple experiments by showing qualitative results. Furthermore, we quantify the map quality by an entropy-based measure.
    Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
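    The sub-graph optimization can be illustrated with a much smaller, hypothetical example: a one-dimensional pose graph in which drifting odometry edges and a single loop-closure edge are reconciled by least squares. The paper's back-end operates on full 6-DoF scan poses with surfel registration and a continuous-time trajectory, which the sketch omits.

        # Hypothetical 1-D pose graph (illustration of graph optimization, not the
        # hierarchical multiresolution back-end of the paper): odometry edges drift,
        # a loop closure pulls the trajectory back, least squares redistributes the error.
        import numpy as np
        from scipy.optimize import least_squares

        n = 6
        odom = np.full(n - 1, 1.05)          # drifting relative measurements (true step = 1.0)
        loop = (0, n - 1, 5.0)               # loop closure: pose 5 is 5.0 away from pose 0

        def residuals(x):
            r = [x[i + 1] - x[i] - odom[i] for i in range(n - 1)]   # odometry constraints
            i, j, d = loop
            r.append(10.0 * (x[j] - x[i] - d))                      # loop-closure constraint
            r.append(100.0 * x[0])                                  # anchor the first pose
            return np.array(r)

        x0 = np.concatenate([[0.0], np.cumsum(odom)])               # dead-reckoning initialization
        sol = least_squares(residuals, x0)
        print("optimized poses:", np.round(sol.x, 3))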

    Robust 3D IMU-LIDAR Calibration and Multi Sensor Probabilistic State Estimation

    Autonomous robots are highly complex systems. In order to operate in dynamic environments, adaptability in their decision-making algorithms is a must. Thus, the internal and external information that robots obtain from sensors is critical to re-evaluate their decisions in real time. Accuracy is key in this endeavor, both from the hardware side and the modeling point of view. In order to guarantee the highest performance, sensors need to be correctly calibrated. To this end, some parameters are tuned so that the particular realization of a sensor best matches a generalized mathematical model. This step grows in complexity with the integration of multiple sensors, which is generally a requirement in order to cope with the dynamic nature of real world applications. This project aims to deal with the calibration of an inertial measurement unit, or IMU, and a Light Detection and Ranging device, or LiDAR. An offline batch optimization procedure is proposed to optimally estimate the intrinsic and extrinsic parameters of the model. Then, an online state estimation module that makes use of the aforementioned parameters and the fusion of LiDAR-inertial data for local navigation is proposed. Additionally, it incorporates real time corrections to account for the time-varying nature of the model, essential to deal with exposure to continued operation and wear and tear.
    Keywords: sensor fusion, multi-sensor calibration, factor graphs, batch optimization, Gaussian Processes, state estimation, LiDAR-inertial odometry, Error State Kalman Filter, Normal Distributions Transform
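    One ingredient of such an extrinsic calibration can be sketched in closed form: if angular velocities are observed both by the IMU and by LiDAR scan matching, the fixed IMU-to-LiDAR rotation can be estimated by aligning the two vector sets with the Kabsch/Wahba method. This is a hypothetical simplification; the project itself formulates the full intrinsic and extrinsic estimation as a factor-graph batch optimization.

        # Hypothetical rotation-only extrinsic calibration via Kabsch/Wahba alignment
        # (not the full batch optimization described above).
        import numpy as np

        rng = np.random.default_rng(0)

        def random_rotation():
            q = rng.normal(size=4); q /= np.linalg.norm(q)          # random unit quaternion
            w, x, y, z = q
            return np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
            ])

        R_true = random_rotation()                     # unknown IMU-to-LiDAR rotation
        w_imu = rng.normal(size=(200, 3))              # angular velocities in the IMU frame
        w_lidar = w_imu @ R_true.T + 0.01 * rng.normal(size=(200, 3))   # same, seen by the LiDAR

        # Kabsch: R = argmin over rotations of sum ||w_lidar_i - R @ w_imu_i||^2
        U, _, Vt = np.linalg.svd(w_imu.T @ w_lidar)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_est = Vt.T @ D @ U.T

        cos_err = np.clip((np.trace(R_est.T @ R_true) - 1) / 2, -1.0, 1.0)
        print("rotation error (deg):", np.degrees(np.arccos(cos_err)))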

    Wearable Movement Sensors for Rehabilitation: From Technology to Clinical Practice

    This Special Issue shows a range of potential opportunities for the application of wearable movement sensors in motor rehabilitation. However, the papers do not cover the whole field of physical behavior monitoring in motor rehabilitation. Most studies in this Special Issue focused on the technical validation of wearable sensors and the development of algorithms. Clinical validation studies, studies applying wearable sensors for the monitoring of physical behavior in daily life conditions, and papers about the implementation of wearable sensors in motor rehabilitation are under-represented in this Special Issue. Studies investigating the usability and feasibility of wearable movement sensors in clinical populations were lacking. We encourage researchers to investigate the usability, acceptance, feasibility, reliability, and clinical validity of wearable sensors in clinical populations to facilitate the application of wearable movement sensors in motor rehabilitation.

    A Comprehensive Introduction of Visual-Inertial Navigation

    In this article, a tutorial introduction to visual-inertial navigation (VIN) is presented. Visual and inertial perception are two complementary sensing modalities; cameras and inertial measurement units (IMUs) are the corresponding sensors. The low cost and light weight of camera-IMU sensor combinations make them ubiquitous in robotic navigation. Visual-inertial navigation is a state estimation problem that estimates the ego-motion and local environment of the sensor platform. This paper presents visual-inertial navigation in the classical state estimation framework, first illustrating the estimation problem in terms of state variables and system models, including the representations (parameterizations) of the relevant quantities, the IMU dynamic and camera measurement models, and the corresponding probabilistic graphical models (factor graphs). Secondly, we investigate existing model-based estimation methodologies, covering filter-based and optimization-based frameworks and the related on-manifold operations. We also discuss the calibration of relevant parameters and the initialization of the states of interest in optimization-based frameworks. Then the evaluation and improvement of VIN in terms of accuracy, efficiency, and robustness are discussed. Finally, we briefly mention recent developments in learning-based methods that may become alternatives to traditional model-based methods.
    Comment: 35 pages, 10 figures
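    The IMU dynamic model mentioned above boils down to integrating gyroscope and accelerometer readings into orientation, velocity, and position. The sketch below shows that propagation step on synthetic, hypothetical readings; real VIN systems additionally estimate biases, model noise, and fuse camera factors in a filter or factor graph.

        # Discrete IMU propagation on synthetic readings (hypothetical example; bias,
        # noise, and visual updates are omitted).
        import numpy as np

        def skew(w):
            return np.array([[0, -w[2], w[1]],
                             [w[2], 0, -w[0]],
                             [-w[1], w[0], 0]])

        def so3_exp(phi):
            """Rotation vector to rotation matrix (Rodrigues formula)."""
            a = np.linalg.norm(phi)
            if a < 1e-9:
                return np.eye(3) + skew(phi)
            K = skew(phi / a)
            return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

        g = np.array([0.0, 0.0, -9.81])     # gravity in the world frame
        dt = 0.005                          # 200 Hz IMU
        R, v, p = np.eye(3), np.zeros(3), np.zeros(3)   # world-from-body rotation, velocity, position

        for k in range(200):                # one second of synthetic IMU data
            gyro = np.array([0.0, 0.0, 0.1])        # rad/s in the body frame
            accel = np.array([1.0, 0.0, 9.81])      # specific force (m/s^2) in the body frame
            R = R @ so3_exp(gyro * dt)              # orientation update
            a_world = R @ accel + g                 # remove gravity in the world frame
            p = p + v * dt + 0.5 * a_world * dt**2  # position update
            v = v + a_world * dt                    # velocity update

        print("position after 1 s:", np.round(p, 3), "velocity:", np.round(v, 3))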

    Overcoming Bandwidth Limitations in Wireless Sensor Networks by Exploitation of Cyclic Signal Patterns: An Event-triggered Learning Approach

    Wireless sensor networks are used in a wide range of applications, many of which require real-time transmission of the measurements. Bandwidth limitations result in limitations on the sampling frequency and the number of sensors. This problem can be addressed by reducing the communication load via data compression and event-based communication approaches. The present paper focuses on the class of applications in which the signals exhibit unknown and potentially time-varying cyclic patterns. We review recently proposed event-triggered learning (ETL) methods that identify and exploit these cyclic patterns, we show how these methods can be applied to the nonlinear multivariable dynamics of three-dimensional orientation data, and we propose a novel approach that uses Gaussian process models. In contrast to other approaches, all three ETL methods work in real time and assure a small upper bound on the reconstruction error. The proposed methods are compared with several conventional approaches on experimental data from human subjects walking with a wearable inertial sensor network. They are found to reduce the communication load by 60–70%, which implies that two to three times more sensor nodes could be used at the same bandwidth.
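    The event-triggered idea can be sketched with a hypothetical one-dimensional cyclic signal and a last-cycle predictor that sender and receiver share: a sample is transmitted only when the shared prediction would be off by more than an agreed bound, which caps the reconstruction error at that bound. The paper applies the same principle to 3D orientation data and uses, among others, Gaussian process models as the predictor.

        # Event-triggered transmission with a shared last-cycle predictor (hypothetical
        # 1-D example, not the Gaussian-process formulation of the paper).
        import numpy as np

        period = 100                                 # samples per cycle (assumed known)
        delta = 0.05                                 # agreed reconstruction-error bound
        t = np.arange(3000)
        signal = np.sin(2 * np.pi * t / period) + 0.01 * np.random.randn(t.size)

        received, sent = np.zeros_like(signal), 0
        for k in range(signal.size):
            pred = received[k - period] if k >= period else 0.0   # predict from the last cycle
            if abs(signal[k] - pred) > delta:        # event: prediction error would exceed the bound
                received[k] = signal[k]              # transmit the true sample
                sent += 1
            else:
                received[k] = pred                   # receiver falls back on the shared prediction

        print(f"transmitted {sent}/{signal.size} samples "
              f"({100 * (1 - sent / signal.size):.0f}% communication saved)")
        print("max reconstruction error:", np.abs(received - signal).max())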