
    Estimation of ground reaction forces and moments during gait using only inertial motion capture

    Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which are often used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily-life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized, at normal walking speed, as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%), and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components, and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse (ρ = 0.826, rRMSE = 18.2%) components. A sensitivity analysis was performed on the effect of the cut-off frequency used in filtering the input kinematics, as well as of the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing accuracy comparable to optical motion capture prediction. This approach enables applications that require estimation of kinetics during walking outside the gait laboratory.
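    The mechanics behind this prediction can be illustrated with a minimal sketch: a Newton-Euler summation over the body segments gives the total external force and moment, and a smooth transition weight resolves the double-stance indeterminacy. The variable names, segment-level inputs, and cosine-shaped transition below are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in the world frame [m/s^2]

def total_external_grfm(masses, com_acc, inertias, ang_acc, ang_vel, com_pos, origin):
    """Newton-Euler sum over all body segments (illustrative, not the paper's code).

    masses   : (S,)      segment masses [kg]
    com_acc  : (S, 3)    segment COM accelerations in the world frame
    inertias : (S, 3, 3) segment inertia tensors expressed in the world frame
    ang_acc  : (S, 3)    segment angular accelerations
    ang_vel  : (S, 3)    segment angular velocities
    com_pos  : (S, 3)    segment COM positions
    origin   : (3,)      point about which the external moment is expressed
    Returns the total external force and moment acting on the body.
    """
    force = np.zeros(3)
    moment = np.zeros(3)
    for m, a, I, alpha, omega, p in zip(masses, com_acc, inertias, ang_acc, ang_vel, com_pos):
        f = m * (a - G)                     # segment contribution to the external force
        force += f
        # rate of change of angular momentum plus moment of the segment force about `origin`
        moment += I @ alpha + np.cross(omega, I @ omega) + np.cross(p - origin, f)
    return force, moment

def distribute_double_stance(total, t, t_hs, t_to):
    """Resolve the double-stance indeterminacy with a smooth transition:
    the trailing foot's share decays from 1 at contralateral heel strike (t_hs)
    to 0 at its own toe-off (t_to). The cosine ramp is an assumed shape."""
    s = np.clip((t - t_hs) / (t_to - t_hs), 0.0, 1.0)
    w_trailing = 0.5 * (1.0 + np.cos(np.pi * s))   # smooth 1 -> 0
    return w_trailing * total, (1.0 - w_trailing) * total
```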

    Musculoskeletal model-based inverse dynamic analysis under ambulatory conditions using inertial motion capture

    Inverse dynamic analysis using musculoskeletal modeling is a powerful tool, which is utilized in a range of applications to estimate forces in ligaments, muscles, and joints non-invasively. To date, the conventional input used in this analysis is derived from optical motion capture (OMC) and force plate (FP) systems, which restrict the application of musculoskeletal models to gait laboratories. To address this problem, we propose the use of inertial motion capture (IMC) to perform musculoskeletal model-based inverse dynamics by utilizing a universally applicable ground reaction force and moment (GRF&M) prediction method. Validation against a conventional laboratory-based method showed excellent Pearson correlations for the sagittal-plane joint angles of the ankle, knee, and hip (ρ = 0.95, 0.99, and 0.99, respectively) and root-mean-squared differences (RMSD) of 4.1 ± 1.3°, 4.4 ± 2.0°, and 5.7 ± 2.1°, respectively. The GRF&M predicted using IMC input were found to have excellent correlations for three components (vertical: ρ = 0.97, RMSD = 9.3 ± 3.0 %BW; anteroposterior: ρ = 0.91, RMSD = 5.5 ± 1.2 %BW; sagittal: ρ = 0.91, RMSD = 1.6 ± 0.6 %BW·BH) and strong correlations for the mediolateral (ρ = 0.80, RMSD = 2.1 ± 0.6 %BW) and transverse (ρ = 0.82, RMSD = 0.2 ± 0.1 %BW·BH) components. The proposed IMC-based method removes the complexity and space restrictions of OMC and FP systems and could enable applications of musculoskeletal models in monitoring patients during their daily lives or in wider clinical practice.
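    For reference, the agreement metrics quoted above (Pearson ρ, and RMSD expressed in %BW or %BW·BH) can be reproduced with a short sketch; the function name and normalization are assumptions following the units in the abstract, not code from the paper.

```python
import numpy as np

def agreement(predicted, reference, body_weight_n, body_height_m=None):
    """Pearson correlation and normalized RMSD between two curves.

    Forces are normalized to %BW (body weight in newtons); moments are
    additionally normalized by body height, giving %BW*BH.
    """
    rho = np.corrcoef(predicted, reference)[0, 1]
    rmsd = np.sqrt(np.mean((predicted - reference) ** 2))
    scale = body_weight_n * (body_height_m if body_height_m else 1.0)
    return rho, 100.0 * rmsd / scale   # rho [-], RMSD [%BW or %BW*BH]
```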

    Predicting kinetics using musculoskeletal modeling and inertial motion capture

    Inverse dynamic analysis using musculoskeletal modeling is a powerful tool, which is utilized in a range of applications to estimate forces in ligaments, muscles, and joints non-invasively. To date, the conventional input used in this analysis is derived from optical motion capture (OMC) and force plate (FP) systems, which restrict the application of musculoskeletal models to gait laboratories. To address this problem, we propose a musculoskeletal model capable of estimating the internal forces based solely on inertial motion capture (IMC) input and a ground reaction force and moment (GRF&M) prediction method. We validated the joint angle and kinetic estimates of the lower limbs against an equally constructed musculoskeletal model driven by OMC and FP systems. The sagittal-plane joint angles of the ankle, knee, and hip presented excellent Pearson correlations (ρ = 0.95, 0.99, and 0.99, respectively) and root-mean-squared differences (RMSD) of 4.1 ± 1.3°, 4.4 ± 2.0°, and 5.7 ± 2.1°, respectively. The GRF&M predicted using IMC input were found to have excellent correlations for three components (vertical: ρ = 0.97, RMSD = 9.3 ± 3.0 %BW; anteroposterior: ρ = 0.91, RMSD = 5.5 ± 1.2 %BW; sagittal: ρ = 0.91, RMSD = 1.6 ± 0.6 %BW·BH) and strong correlations for the mediolateral (ρ = 0.80, RMSD = 2.1 ± 0.6 %BW) and transverse (ρ = 0.82, RMSD = 0.2 ± 0.1 %BW·BH) components. The proposed IMC-based method removes the complexity and space restrictions of OMC and FP systems and could enable applications of musculoskeletal models in monitoring patients during their daily lives or in wider clinical practice.
    Comment: 19 pages, 4 figures, 3 tables
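    As an illustration of the kind of kinematic quantity being validated, a sagittal joint angle can be derived from the relative orientation of two inertially tracked segments. The decomposition sequence and axis convention below are assumptions; the study itself obtains joint angles from a full musculoskeletal model rather than this direct decomposition.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def sagittal_joint_angle(q_proximal, q_distal):
    """Flexion/extension angle between two segments from orientation quaternions.

    q_proximal, q_distal : (4,) scalar-last quaternions (x, y, z, w) expressing each
    segment frame in a common world frame. The relative orientation is decomposed
    with an intrinsic Z-X-Y sequence and the first angle is taken as the sagittal
    component, assuming Z is the mediolateral axis of the proximal segment.
    """
    r_rel = R.from_quat(q_proximal).inv() * R.from_quat(q_distal)
    flexion, _, _ = r_rel.as_euler("ZXY", degrees=True)
    return flexion

# Example: a pure 30 deg flexion of the distal segment about the proximal Z axis
q_prox = R.identity().as_quat()
q_dist = R.from_euler("Z", 30, degrees=True).as_quat()
print(sagittal_joint_angle(q_prox, q_dist))  # approximately 30.0
```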

    Optimizing IoT-Based Asset and Utilization Tracking: Efficient Activity Classification with MiniRocket on Resource-Constrained Devices

    This paper introduces an effective solution for retrofitting construction power tools with low-power IoT to enable accurate activity classification. We address the challenge of distinguishing between when a power tool is merely being moved and when it is actually being used. To achieve high classification accuracy while preserving low power consumption, a recently released algorithm called MiniRocket was employed. Known for its accuracy, scalability, and fast training for time-series classification, it is proposed here as a TinyML algorithm for inference on resource-constrained IoT devices. The paper demonstrates the portability and performance of MiniRocket on a resource-constrained, ultra-low-power sensor node for both floating-point and fixed-point arithmetic, with the fixed-point implementation matching the floating-point accuracy to within 1%. The hyperparameters of the algorithm have been optimized for the task at hand to find a Pareto point that balances memory usage, accuracy, and energy consumption. For the classification problem, we rely on an accelerometer as the sole sensor source and on BLE for data transmission. Extensive real-world construction data, covering 16 different power tools, were collected, labeled, and used to validate the algorithm's performance directly embedded in the IoT device. Experimental results demonstrate that the proposed solution achieves an accuracy of 96.9% in distinguishing between real usage status and other motion statuses while consuming only 7 kB of flash and 3 kB of RAM. The final application exhibits an average power consumption of less than 15 µW for the whole system, resulting in a battery life ranging from 3 to 9 years depending on the battery capacity (250-500 mAh) and the number of power-tool usage hours (100-1500 h).
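    For orientation, the MiniRocket pipeline behind such activity classification, random convolutional kernels followed by a linear classifier, can be sketched on synthetic accelerometer windows. The sketch assumes the sktime implementation of MiniRocket and made-up data; the study itself trains on labeled power-tool recordings and ports the fitted model to fixed-point code on the sensor node.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import MiniRocketMultivariate

# Synthetic stand-in for 3-axis accelerometer windows:
# 200 windows x 3 channels x 256 samples, with random binary labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3, 256))
y = rng.integers(0, 2, 200)              # 0 = moved/idle, 1 = actually in use

transform = MiniRocketMultivariate()     # default kernel budget (~10,000 kernels)
features = transform.fit_transform(X)    # fixed random convolutions + PPV features

clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features, y)
print("training accuracy:", clf.score(features, y))
```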

    Anticipatory models of human movements and dynamics: the roadmap of the AnDy project

    Future robots will need ever more anticipation capabilities to react properly to human actions and provide efficient collaboration. To achieve this goal, we need new technologies that not only estimate the motion of humans, but that fully describe the whole-body dynamics of the interaction and can also predict its outcome. These hardware and software technologies are the goal of the European project AnDy. In this paper, we describe the roadmap of AnDy, which leverages existing technologies to endow robots with the ability to control physical collaboration through intentional interaction. To achieve this goal, AnDy relies on three technological and scientific breakthroughs. First, AnDy will innovate the way human whole-body motions are measured by developing the wearable AnDySuit, which tracks motions and records forces. Second, AnDy will develop AnDyModel, which combines ergonomic models with cognitive predictive models of human dynamic behavior in collaborative tasks, learned from data acquired with the AnDySuit. Third, AnDy will propose AnDyControl, an innovative technology for assisting humans through predictive physical control, based on AnDyModel. By measuring and modeling human whole-body dynamics, AnDy will provide robots with a new level of awareness about human intentions and ergonomics. By incorporating this awareness online in the robot's controllers, AnDy paves the way for novel applications of physical human-robot collaboration in manufacturing, healthcare, and assisted living.

    Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

    The rise of autonomous systems operating close to humans imposes new challenges, in terms of robustness and precision, on estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the frequently occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system (GNSS) receiver and an inertial measurement unit (IMU). The resulting algorithm is used to estimate the position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times, exposing a nearly linear behavior of the sensor fusion problem.
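    The moving-horizon idea, re-solving a small nonlinear least-squares problem over the most recent window of IMU and GNSS data, can be sketched with a simplified 1-D constant-acceleration model. The model, residual weights, window length, and simulated data below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import least_squares

DT = 0.01   # IMU sample period [s]
N = 50      # horizon length in samples

def residuals(x, imu_acc, gnss_pos, gnss_idx, w_proc=1.0, w_meas=1.0):
    """Stacked residuals over the horizon.
    x        : (2*N,) flattened states [p_0, v_0, ..., p_{N-1}, v_{N-1}]
    imu_acc  : (N-1,) measured accelerations, used as process inputs
    gnss_pos : GNSS positions observed at the sample indices in gnss_idx
    """
    s = x.reshape(N, 2)                                   # columns: position, velocity
    res = []
    for k in range(N - 1):                                # process-model residuals
        p_pred = s[k, 0] + s[k, 1] * DT + 0.5 * imu_acc[k] * DT ** 2
        v_pred = s[k, 1] + imu_acc[k] * DT
        res += [w_proc * (s[k + 1, 0] - p_pred), w_proc * (s[k + 1, 1] - v_pred)]
    for z, k in zip(gnss_pos, gnss_idx):                  # GNSS measurement residuals
        res.append(w_meas * (s[k, 0] - z))
    return np.asarray(res)

# Simulated horizon: constant 1 m/s^2 acceleration, noisy IMU at 100 Hz, GNSS at 10 Hz
rng = np.random.default_rng(1)
t = np.arange(N) * DT
true_pos = 0.5 * t ** 2
imu_acc = 1.0 + 0.05 * rng.standard_normal(N - 1)
gnss_idx = np.arange(0, N, 10)
gnss_pos = true_pos[gnss_idx] + 0.02 * rng.standard_normal(len(gnss_idx))

sol = least_squares(residuals, np.zeros(2 * N), args=(imu_acc, gnss_pos, gnss_idx))
est = sol.x.reshape(N, 2)
print("estimated vs true final position [m]:", est[-1, 0], true_pos[-1])
```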

    IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning

    Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and fully kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means of mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
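    The point-to-plane ICP at the core of such depth-map registration reduces, for small motions, to a linear least-squares problem. The single-iteration sketch below assumes correspondences and surface normals are already given, whereas the paper embeds the residuals in an implicit IEKF rather than solving them in isolation.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP update.

    src     : (N, 3) source points (e.g. from the current depth map)
    dst     : (N, 3) corresponding destination points in the map
    normals : (N, 3) unit surface normals at the destination points
    Returns a 4x4 rigid transform that moves src towards dst, using the
    small-angle approximation R ~ I + [w]_x.
    """
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6): [rotation | translation]
    b = np.einsum("ij,ij->i", normals, dst - src)       # signed point-to-plane distances
    x, *_ = np.linalg.lstsq(A, b, rcond=None)           # x = [wx, wy, wz, tx, ty, tz]
    wx, wy, wz, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -wz,  wy],
                          [ wz, 1.0, -wx],
                          [-wy,  wx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T
```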