
    On-Manifold Preintegration for Real-Time Visual-Inertial Odometry

    Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements arrive at a high rate, leading to fast growth in the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise, and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modelling effort leads to accurate state estimation in real time, outperforming state-of-the-art approaches.
    Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions on Robotics (TRO) 201
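    As an illustration of the preintegration idea described in this abstract, the sketch below folds gyroscope and accelerometer samples between two keyframes into a single relative-motion delta on SO(3). It is a minimal sketch only: it omits the paper's noise propagation and bias Jacobians, and the time step and bias arguments are illustrative.

```python
# Minimal IMU preintegration sketch between two keyframes (illustrative only;
# no noise propagation or bias Jacobians, unlike the paper's full formulation).
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Exponential map from a rotation vector to a rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    axis = phi / theta
    K = skew(axis)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, bg=np.zeros(3), ba=np.zeros(3)):
    """Fold raw IMU samples into one relative rotation/velocity/position delta."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_c = np.asarray(a) - ba                      # bias-corrected acceleration
        dp = dp + dv * dt + 0.5 * (dR @ a_c) * dt**2
        dv = dv + (dR @ a_c) * dt
        dR = dR @ so3_exp((np.asarray(w) - bg) * dt)  # rotate on the manifold
    return dR, dv, dp  # one relative-motion constraint between the keyframes
```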

    Validity and Reliability of an Inertial Device for Measuring Dynamic Weight-Bearing Ankle Dorsiflexion

    A decrease in ankle dorsiflexion causes changes in biomechanics, and different instruments have been used for ankle dorsiflexion testing under static conditions. Consequently, the inertial-sensor industry has developed easy-to-use devices that measure dynamic ankle dorsiflexion and provide additional parameters such as velocity, acceleration, or movement deviation. Therefore, the aims of this study were to analyze the concurrent validity and test-retest reliability of an inertial device for measuring dynamic weight-bearing ankle dorsiflexion. Sixteen participants were tested using an inertial device (WIMU) and a digital inclinometer. Ankle dorsiflexion from left and right ankle repetitions was used for the validity analysis, whereas test-retest reliability was analyzed by comparing measurements from the first and second days. The standard error of measurement (SEM) between the instruments was very low for both ankles (SEM 0.05), even though a significant systematic bias (~1.77°) was found for the right ankle (d = 0.79). R² was very close to 1 in the left and right ankles (R² = 0.85–0.89), as was the intraclass correlation coefficient (ICC > 0.95). Test-retest reliability analysis showed that the systematic bias was below 1° for both instruments, although a systematic bias (~1.50°) with a small effect size was found in the right ankle (d = 0.49) with WIMU. The ICC was very close to 1 and the coefficient of variation (CV) was lower than 4% for both instruments. Thus, WIMU is a valid and reliable inertial device for measuring dynamic weight-bearing ankle dorsiflexion.
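    For readers reproducing this kind of agreement analysis, the sketch below computes systematic bias, the SEM from difference scores, and ICC(2,1) for paired device and inclinometer readings. These are common formulas assumed here for illustration; they may not match the exact variants used in the study.

```python
# Illustrative agreement statistics for a device-vs-inclinometer comparison
# (assumed formulas: mean-difference bias, SEM from difference scores, ICC(2,1)).
import numpy as np

def icc_2_1(Y):
    """Two-way random, single-measure ICC(2,1) for an (n subjects x k devices) matrix."""
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # between devices
    ss_e = np.sum((Y - grand) ** 2) - (n - 1) * ms_r - (k - 1) * ms_c
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def agreement(device_deg, reference_deg):
    """Return (systematic bias, SEM, ICC) for paired measurements in degrees."""
    device_deg, reference_deg = np.asarray(device_deg), np.asarray(reference_deg)
    diff = device_deg - reference_deg
    bias = diff.mean()                       # systematic bias between instruments
    sem = diff.std(ddof=1) / np.sqrt(2)      # standard error of measurement
    icc = icc_2_1(np.column_stack([device_deg, reference_deg]))
    return bias, sem, icc
```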

    Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation

    We introduce a novel magnetic, angular rate, and gravity (MARG) sensor fusion algorithm for inertial measurement. The new algorithm improves on the popular gradient descent ('Madgwick') algorithm, increasing accuracy and robustness while preserving computational efficiency. Analytic and experimental results demonstrate faster convergence for multiple variations of the algorithm through changing magnetic inclination. Furthermore, decoupling of magnetic field variance from roll and pitch estimation is proven for enhanced robustness. The algorithm is validated in a human-machine interface (HMI) case study. The case study involves hardware implementation for wearable robot teleoperation, both in Virtual Reality (VR) and in real time on a 14 degree-of-freedom (DoF) humanoid robot. The experiment fuses inertial (movement) and mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and the capacity to interface with other physiological sensors. To our knowledge, this is the first such formulation and the first fusion of inertial measurement and MMG in HMI. We believe the new algorithm holds the potential to impact a very wide range of inertial measurement applications where full orientation is necessary. Physiological sensor synthesis and the hardware interface further provide a foundation for robotic teleoperation systems with the necessary robustness for use in the field.
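    For context, the sketch below implements the baseline gradient-descent (Madgwick-style) orientation update from gyroscope and accelerometer data that this work builds on; the magnetometer terms and the paper's own modifications are omitted, and the gain beta and time step dt are illustrative values.

```python
# Baseline Madgwick-style gradient-descent orientation update (gyro + accelerometer
# only; magnetometer correction and the paper's improvements are not included).
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def fusion_step(q, gyro, accel, dt=0.01, beta=0.1):
    """One update step; q = [w, x, y, z], gyro in rad/s."""
    a = np.asarray(accel, dtype=float)
    a /= np.linalg.norm(a)
    w, x, y, z = q
    # Objective: gravity direction predicted by q minus the measured (normalised) acceleration.
    f = np.array([2*(x*z - w*y) - a[0],
                  2*(w*x + y*z) - a[1],
                  2*(0.5 - x*x - y*y) - a[2]])
    J = np.array([[-2*y,  2*z, -2*w, 2*x],
                  [ 2*x,  2*w,  2*z, 2*y],
                  [ 0.0, -4*x, -4*y, 0.0]])
    grad = J.T @ f
    grad /= np.linalg.norm(grad)
    # Blend the gyroscope-propagated rate of change with the gradient-descent correction.
    q_dot = 0.5 * quat_mult(q, np.array([0.0, *np.asarray(gyro, dtype=float)])) - beta * grad
    q = np.asarray(q, dtype=float) + q_dot * dt
    return q / np.linalg.norm(q)
```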

    Constructing a reference standard for sports science and clinical movement sets using IMU-based motion capture technology

    Motion analysis has improved greatly over the years through the development of low-cost inertial sensors. Such sensors have shown promising accuracy for both sport and medical applications, facilitating the possibility of constructing a new reference standard. Current gold standards within motion capture, such as high-speed camera-based systems and image processing, are not suitable for many movement sets within both sports science and clinical movement analysis due to restrictions introduced by the movement sets. These restrictions include cost, portability, local environment constraints (such as light level) and poor line-of-sight accessibility. This thesis focusses on developing a magnetometer-less IMU-based motion capture system to detect and classify two challenging movement sets: basic stances during a Shaolin Kung Fu dynamic form, and severity levels from the modified UPDRS (Unified Parkinson's Disease Rating Scale) tapping exercise. This project has contributed three datasets. The Shaolin Kung Fu dataset comprises 5 dynamic movements repeated over 350 times by 8 experienced practitioners, and was labelled by a professional Shaolin Kung Fu master. Two modified UPDRS datasets were constructed, one for each of the two sensor locations measured; each comprises 5 severity levels with 100 self-emulated movement samples per level, labelled by a researcher in neuropsychological assessment. The errors associated with IMU systems have been reduced significantly through a combination of a complementary filter and the constraints imposed by the range of movement available in human joints. Novel features have been extracted from each dataset: a piecewise feature set based on a moving-window approach has been applied to the Shaolin Kung Fu dataset, while a combination of standard statistical features and a Durbin-Watson analysis has been extracted from the modified UPDRS measurements. The project has also contributed a comparison of 24 models on all 3 datasets, and the optimal model for each dataset has been determined. The resulting models were commensurate with current gold standards. The Shaolin Kung Fu dataset was classified with the computationally costly fine decision tree algorithm using 400 splits, resulting in an accuracy of 98.9%, a precision of 96.9%, a recall of 99.1%, and an F1-score of 98.0%. A novel approach using sequential forward feature analysis was used to determine both the minimum and the optimal number of IMU devices required. The modified UPDRS datasets were then classified using a support vector machine algorithm, with different kernels required to achieve the highest accuracies: the measurements were repeated with a sensor located on the wrist and on the finger, with the wrist requiring a linear kernel and the finger a quadratic kernel. Both locations achieved an accuracy, precision, recall, and F1-score of 99.2%. Additionally, the project contributed an evaluation of the effect sensor location has on the proposed models. It was concluded that the IMU-based system has the potential to construct a reference standard in both sports science and clinical movement analysis. Data protection security and communication speed were limitations of the constructed system, as the measured data were transferred from the devices via Bluetooth Low Energy; these limitations were considered and evaluated in the future work of this project.
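    A minimal complementary-filter sketch in the spirit of the error-reduction step described above: a gyro-integrated angle is blended with an accelerometer tilt estimate and clamped to a plausible joint range. The blend factor, axis convention, and joint limits are illustrative assumptions, not the thesis's actual configuration.

```python
# Single-axis complementary filter for a magnetometer-less IMU, with a hypothetical
# joint-range constraint (all gains and limits are illustrative).
import numpy as np

def complementary_filter(angle_prev, gyro_rate, accel, dt=0.01, alpha=0.98):
    """Estimate a joint angle (rad) from one gyro rate (rad/s) and a 3-axis accel sample."""
    accel_angle = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))  # tilt from gravity
    gyro_angle = angle_prev + gyro_rate * dt                           # integrated rate
    angle = alpha * gyro_angle + (1.0 - alpha) * accel_angle           # high/low-pass blend
    # Constrain to a plausible anatomical range (illustrative limits).
    return float(np.clip(angle, np.radians(-30.0), np.radians(50.0)))
```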

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm that are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines; based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification.
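    A minimal sketch of the kind of windowed-feature LDA/SVM pipeline described above, using scikit-learn; the feature set, window handling, and classifier settings are illustrative assumptions rather than the thesis's exact configuration.

```python
# Windowed MMG features fed to LDA and SVM classifiers (illustrative configuration).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(window):
    """Simple time-domain features per MMG channel: RMS and mean absolute value."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])

def evaluate(windows, labels):
    """windows: array of shape (n_samples, window_len, n_channels); labels: (n_samples,)."""
    X = np.stack([window_features(w) for w in windows])
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name} cross-validated accuracy: {acc:.3f}")
```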
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis established that arm pose also changes the measured signal, and it introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
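The fusion idea can be illustrated by appending IMU-derived arm-pose values to the MMG feature vector so that a classifier can account for pose-dependent changes in the muscle signal; the feature layout below is a hypothetical illustration, not the thesis's implementation.

```python
# Hypothetical IMU + MMG feature fusion: the classifier sees both the muscle
# features and the arm pose, so pose-dependent interference can be learned away.
import numpy as np

def fused_feature_vector(mmg_features, forearm_quat, upper_arm_quat):
    """Concatenate MMG features with the orientation quaternions of two arm segments."""
    pose = np.concatenate([forearm_quat, upper_arm_quat])    # 8 pose values
    return np.concatenate([np.asarray(mmg_features), pose])  # fused classifier input
```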