
    Distributed data fusion algorithms for inertial network systems

    New approaches to the development of data fusion algorithms for inertial network systems are described. The aim of this development is to increase the accuracy of the inertial state vector estimates in all the network nodes, including the navigation states, and to improve the fault tolerance of inertial network systems. An analysis of distributed inertial sensing models is presented, and new distributed data fusion algorithms are developed for inertial network systems. The distributed data fusion algorithm comprises two steps: inertial measurement fusion and state fusion. The inertial measurement fusion lets each node assimilate all the inertial measurements from the network, which improves the performance of inertial sensor failure detection and isolation algorithms by providing more information. The state fusion further increases the accuracy and enhances the integrity of the local inertial state and navigation state estimates. Simulation results show that the two-step fusion procedure overcomes the disadvantages of traditional inertial sensor alignment procedures: the slave inertial nodes can be accurately aligned to the master node.
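    The two-step procedure can be illustrated with a minimal sketch (hypothetical noise parameters and node layout; the paper's actual sensing models and correlation handling are not reproduced here): step one fuses the shared inertial measurements by inverse-variance weighting, and step two combines local state estimates in information form, assuming independent estimates.

```python
import numpy as np

def measurement_fusion(measurements, noise_vars):
    """Step 1: fuse inertial measurements shared across the network.

    measurements: list of (3,) accel/gyro readings in a common frame
    noise_vars:   per-sensor noise variances (assumed known)
    Returns the minimum-variance weighted average and its variance.
    """
    w = 1.0 / np.asarray(noise_vars)        # inverse-variance weights
    w /= w.sum()
    fused = sum(wi * np.asarray(m) for wi, m in zip(w, measurements))
    fused_var = 1.0 / np.sum(1.0 / np.asarray(noise_vars))
    return fused, fused_var

def state_fusion(states, covariances):
    """Step 2: fuse local state estimates by covariance weighting
    (information-filter form; assumes uncorrelated estimates, which a
    real inertial network would need to account for)."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(Pi @ x for Pi, x in zip(infos, states))
    return x_fused, P_fused

# Toy example: three nodes observing the same specific force.
rng = np.random.default_rng(0)
truth = np.array([0.0, 0.0, 9.81])
vars_ = [1e-3, 4e-3, 9e-3]
meas = [truth + rng.normal(0, v**0.5, 3) for v in vars_]
f, fv = measurement_fusion(meas, vars_)
print("fused specific force:", f)
```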

    Compensation of Magnetic Disturbances Improves Inertial and Magnetic Sensing of Human Body Segment Orientation

    This paper describes a complementary Kalman filter design to estimate the orientation of human body segments by fusing gyroscope, accelerometer, and magnetometer signals from miniature sensors. Ferromagnetic materials or other magnetic sources near the sensor module disturb the local earth magnetic field and therefore the orientation estimate, which impedes many (ambulatory) applications. The filter estimates the gyroscope bias error, the orientation error, and the magnetic disturbance error. It was tested under quasi-static and dynamic conditions with ferromagnetic materials close to the sensor module. The quasi-static experiments involved static positions and rotations around the three axes; in the dynamic experiments, three-dimensional rotations were performed near a metal tool case. The orientation estimated by the filter was compared with the orientation obtained from an optical reference system (Vicon). The results show accurate and drift-free orientation estimates: compensating for magnetic disturbances yields a significant difference (p < 0.01) from estimates with no compensation or with gyroscopes only. The average static error was 1.4° (standard deviation 0.4°) in the magnetically disturbed experiments; the dynamic error was 2.6° root mean square.
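    The disturbance-rejection idea can be sketched with a simplified complementary blend (not the paper's error-state Kalman filter; the field-magnitude gate, the gains, and the assumed earth-field strength are illustrative assumptions): integrate the gyroscope, correct roll/pitch with the accelerometer, and apply the magnetometer yaw correction only when the measured field magnitude looks undisturbed.

```python
import numpy as np

EARTH_FIELD = 48.0   # assumed local field magnitude, microtesla
FIELD_TOL   = 5.0    # gate width; readings outside are treated as disturbed

def tilt_from_accel(acc):
    """Roll/pitch from the gravity direction (valid when acceleration ~ g)."""
    ax, ay, az = acc
    roll  = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def fuse_step(angles, gyro, acc, mag, dt, k_acc=0.02, k_mag=0.02):
    """One complementary-filter step: propagate with the gyro, correct
    drift with the accelerometer (roll/pitch) and magnetometer (yaw).
    The yaw correction is gated on field magnitude so that magnetically
    disturbed samples are ignored rather than pulled into the estimate."""
    roll, pitch, yaw = angles + gyro * dt           # gyro propagation
    r_a, p_a = tilt_from_accel(acc)
    roll  += k_acc * (r_a - roll)                   # accel correction
    pitch += k_acc * (p_a - pitch)
    if abs(np.linalg.norm(mag) - EARTH_FIELD) < FIELD_TOL:
        yaw_m = np.arctan2(-mag[1], mag[0])         # flat heading (no tilt comp.)
        yaw += k_mag * ((yaw_m - yaw + np.pi) % (2 * np.pi) - np.pi)
    return np.array([roll, pitch, yaw])
```

    During a disturbance the gate simply freezes the magnetometer correction, so heading drifts at the (slow) gyro rate instead of being dragged toward the disturbed field; the paper's filter goes further by explicitly estimating the disturbance error.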

    Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging

    The implementation challenges of cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging are discussed, and work on the subject is reviewed. System architecture and sensor fusion are identified as key challenges. A partially decentralized system architecture based on step-wise inertial navigation and step-wise dead reckoning is presented. This architecture is argued to reduce the computational cost and the required communication bandwidth by around two orders of magnitude while incurring only negligible information loss in comparison with a naive centralized implementation, which makes joint global state estimation feasible for up to a platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion for the considered setup, based on state space transformation and marginalization, is presented. The transformation and marginalization provide the flexibility needed for the presented sampling-based updates for inter-agent ranging and for the ranging-free fusion of an individual agent's two feet. Finally, the characteristics of the suggested implementation are demonstrated with simulations and a real-time system implementation.
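    A minimal sketch of the step-wise dead reckoning and ranging updates (simplified to 2D with a hypothetical stacked-state layout; the paper's state space transformation, marginalization, and sampling-based updates are not reproduced): each detected step advances an agent's pose by the displacement estimated by the foot-mounted ZUPT-aided navigation, and an inter-agent range measurement is applied as a standard EKF update.

```python
import numpy as np

def dr_step(pose, d_pos_body, d_heading):
    """Step-wise dead reckoning: advance (x, y, heading) by one detected
    step, rotating the step displacement (estimated in the agent's
    heading frame) into the global frame."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    dx, dy = d_pos_body
    return np.array([x + c * dx - s * dy, y + s * dx + c * dy, th + d_heading])

def range_update(x, P, i, j, r_meas, r_var):
    """EKF update of a stacked 2D-position state with an inter-agent range.
    x holds [x1, y1, x2, y2, ...]; i, j index the two agents."""
    pi, pj = x[2*i:2*i+2], x[2*j:2*j+2]
    d = pi - pj
    r = np.linalg.norm(d)                   # predicted range
    H = np.zeros((1, len(x)))               # Jacobian of the range
    H[0, 2*i:2*i+2] =  d / r
    H[0, 2*j:2*j+2] = -d / r
    S = H @ P @ H.T + r_var                 # innovation variance
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * (r_meas - r)).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

    The bandwidth saving in the paper comes from communicating only these low-rate step displacements rather than raw inertial data, so the sketch above operates on exactly the quantities a decentralized node would exchange.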

    Multiple IMU Sensor Fusion for SUAS Navigation and Photogrammetry

    Inertial measurement units (IMUs) are devices that sense accelerations and angular rates in 3D so that vehicles and other devices can estimate their orientations, positions, and velocities. While IMUs were traditionally large, heavy, and costly, built around mechanical gyroscopes and stabilized platforms, the recent development of micro-electromechanical sensor (MEMS) IMUs that are small, light, and inexpensive has led to their adoption in many everyday systems such as cell phones, video game controllers, and commercial drones. Despite these advantages, MEMS IMUs have major drawbacks in accuracy and reliability. This thesis explores the idea of using an array of these sensors, rather than only one, and fusing their outputs to generate an improved solution.
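    The core variance-reduction idea behind such an array can be sketched as plain averaging (the thesis presumably also treats lever arms, misalignment, and weighting; this toy assumes co-located sensors with independent, identically distributed noise, so the averaged noise standard deviation shrinks by roughly 1/sqrt(N)):

```python
import numpy as np

def fuse_imu_array(gyro_readings, accel_readings):
    """Naive array fusion: average N co-located IMU outputs, assuming a
    shared sensor frame and independent noise across the array."""
    return (np.mean(gyro_readings, axis=0),
            np.mean(accel_readings, axis=0))

# Toy demonstration of the noise reduction with 8 simulated gyros.
rng = np.random.default_rng(1)
N, sigma = 8, 0.05                        # 8 IMUs, 0.05 rad/s gyro noise
true_rate = np.array([0.1, 0.0, -0.2])
gyros = true_rate + rng.normal(0, sigma, (N, 3))
fused, _ = fuse_imu_array(gyros, np.zeros((N, 3)))
print("single-sensor error :", np.abs(gyros[0] - true_rate))
print("fused-array error   :", np.abs(fused - true_rate))
```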

    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as its 3D geometry, that can be obtained from one or more images is steadily increasing thanks to rising resolutions, the wider availability of imaging sensors, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers, and GPS receivers to be included alongside imaging devices at the consumer level.

    Aims: This work investigates the use of orientation sensors in computer vision as data sources that aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques.

    Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating and excluding the orientation sensor data.

    Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs. A method described in the literature that used an orientation sensor always produced a result, but in the cases where the hybrid or purely computer-vision methods also succeeded, it was the least accurate.

    Conclusion: The results show that capturing orientation-sensor information alongside an imaging device can improve both the accuracy and the reliability of scene-geometry calculations; however, noise from the orientation sensor limits this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigation.
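    One standard way to exploit a known rotation, sketched here under stated assumptions (pinhole model with known intrinsics, noise-free sensor rotation; not necessarily the thesis's exact hybrid method): with the inter-frame rotation R supplied by the orientation sensor, the epipolar constraint x2^T [t]x R x1 = 0 becomes linear in the translation direction t, so as few as two correspondences determine the fundamental matrix up to the usual scale ambiguity.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fundamental_from_rotation(R, pts1, pts2, K):
    """Estimate F given the inter-frame rotation R from an orientation
    sensor. Each match contributes one linear constraint
    (R x1 x x2) . t = 0, so t is the null vector of the stacked rows.

    pts1, pts2: (N, 2) matched pixel coordinates, N >= 2
    K:          (3, 3) camera intrinsic matrix
    """
    Kinv = np.linalg.inv(K)
    A = []
    for p1, p2 in zip(pts1, pts2):
        x1 = Kinv @ np.array([*p1, 1.0])   # normalized image coordinates
        x2 = Kinv @ np.array([*p2, 1.0])
        A.append(np.cross(R @ x1, x2))     # row of the linear system in t
    _, _, Vt = np.linalg.svd(np.asarray(A))
    t = Vt[-1]                             # translation direction, up to scale
    E = skew(t) @ R                        # essential matrix
    return Kinv.T @ E @ Kinv               # F = K^-T E K^-1
```

    Because only the 2-DoF translation direction is estimated from image data, such a method always returns a result, which matches the reliability (and the noise sensitivity) reported in the abstract.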
