
    Proprioceptive External Torque Learning for Floating Base Robot and its Applications to Humanoid Locomotion

    The estimation of external joint torque and contact wrench is essential for achieving stable locomotion of humanoids and safety-oriented robots. Although the contact wrench on the foot of a humanoid can be measured using a force-torque sensor (FTS), an FTS increases the cost, inertia, complexity, and failure possibility of the system. This paper introduces a method for learning the external joint torque of a floating-base robot solely from proprioceptive sensors (encoders and IMUs). A GRU network is trained on data collected during random walking. Real-robot experiments demonstrate that the network estimates the external torque and contact wrench with significantly smaller errors than the model-based method, a momentum observer (MOB) with friction modeling. The study also validates that the estimated contact wrench can be used for zero-moment-point (ZMP) feedback control, enabling stable walking. Moreover, even when the robot's feet and the inertia of the upper body are changed, the trained network shows consistent performance after a model-based calibration. This result demonstrates the possibility of removing the FTS from the robot, eliminating the disadvantages of the hardware sensor. A summary video is available at https://youtu.be/gT1D4tOiKpo. (Accepted by the 2023 IROS conference.)
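    The abstract's model-based baseline, the momentum observer (MOB), admits a compact sketch. The following single-joint, pure-Python version is only illustrative (gain, inertia, and the toy simulation are assumptions, not the paper's multi-DoF floating-base formulation): the residual r converges to the external torque at a rate set by the observer gain K.

```python
def momentum_observer(tau_m_seq, qdot_seq, M=1.0, K=50.0, dt=1e-3):
    """Discrete-time momentum observer for a single joint (Euler integration).
    The residual r tracks the external torque with first-order dynamics."""
    integral, r, estimates = 0.0, 0.0, []
    for tau_m, qdot in zip(tau_m_seq, qdot_seq):
        integral += (tau_m + r) * dt       # integrate motor torque + current estimate
        r = K * (M * qdot - integral)      # residual = gain * (momentum - integral)
        estimates.append(r)
    return estimates

# Toy check: zero motor torque, constant 2 Nm external torque applied
# to a unit-inertia joint; the residual should settle at 2 Nm.
M, dt, tau_ext = 1.0, 1e-3, 2.0
qdot, qdots, taus = 0.0, [], []
for _ in range(2000):
    qdot += (0.0 + tau_ext) / M * dt
    qdots.append(qdot)
    taus.append(0.0)
r_hat = momentum_observer(taus, qdots, M=M, dt=dt)
```

The paper's learned GRU estimator replaces exactly this kind of hand-modeled observer, avoiding the need for accurate friction and inertia models.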

    Indirect Estimation of Vertical Ground Reaction Force from a Body-Mounted INS/GPS Using Machine Learning

    Vertical ground reaction force (vGRF) can be measured by force plates or instrumented treadmills, but their application is limited to indoor environments. Insoles remove this restriction but suffer from low durability (several hundred hours). Therefore, interest in the indirect estimation of vGRF using inertial measurement units and machine learning techniques has increased. This paper presents a methodology for indirectly estimating vGRF and other features used in gait analysis from measurements of a wearable GPS-aided inertial navigation system (INS/GPS) device. A set of 27 features was extracted from the INS/GPS data. Feature analysis showed that six of these features suffice to provide precise estimates of 11 different gait parameters. Bagged ensembles of regression trees were then trained and used to predict gait parameters, both for a dataset from the test subject from whom the training data were collected and for a dataset from a subject for whom no training data were available. The prediction accuracies for the latter were significantly worse than for the first subject, but still sufficiently good. K-nearest neighbor (KNN) and long short-term memory (LSTM) neural networks were then used to predict vGRF and ground contact times. The KNN yielded a lower normalized root mean square error than the neural network for vGRF predictions, but cannot detect new patterns in force curves.
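    The KNN regressor mentioned above works by averaging the force curves of the nearest training samples in feature space, which is also why it cannot produce a curve shape absent from the training set. A minimal sketch (the toy features and curves are invented for illustration):

```python
import math

def knn_predict_curve(train_feats, train_curves, query, k=3):
    """Predict a force curve as the mean of the curves of the k training
    samples whose feature vectors are closest (Euclidean) to the query."""
    order = sorted(range(len(train_feats)),
                   key=lambda i: math.dist(train_feats[i], query))
    nearest = order[:k]
    n = len(train_curves[0])
    return [sum(train_curves[i][j] for i in nearest) / k for j in range(n)]

# Toy data: 1-D features, two-sample "curves".
feats  = [[0.0], [1.0], [10.0]]
curves = [[0.0, 0.0], [2.0, 2.0], [100.0, 100.0]]
pred = knn_predict_curve(feats, curves, query=[0.5], k=2)  # averages first two curves
```

Because the output is always a convex combination of stored curves, any unseen force pattern is unreachable; the LSTM does not share this limitation, at the cost of a higher normalized RMSE here.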

    RobustStateNet: Robust ego vehicle state estimation for Autonomous Driving

    Control of an ego vehicle for Autonomous Driving (AD) requires an accurate definition of its state. Implementations of various model-based Kalman Filtering (KF) techniques for state estimation are prevalent in the literature. These algorithms use measurements from an IMU and input signals from steering and wheel encoders for motion prediction with physics-based models, and a Global Navigation Satellite System (GNSS) for global localization. Such methods are widely investigated and mainly focus on increasing the accuracy of the estimation. Ego-motion prediction in these approaches does not model sensor failure modes and assumes completely known dynamics with motion- and measurement-model noises. In this work, we propose a novel Recurrent Neural Network (RNN) based motion predictor that models the sensor measurement dynamics in parallel and selectively fuses the features to increase the robustness of the prediction, in particular in scenarios where we witness sensor failures. This motion predictor is integrated into a KF-like framework, RobustStateNet, which takes a global position from the GNSS sensor and updates the predicted state. We demonstrate that the proposed state estimation routine outperforms the model-based KF and the KalmanNet architecture in terms of estimation accuracy and robustness. The proposed algorithms are validated on a modified NuScenes CAN bus dataset, designed to simulate various types of sensor failures.
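    The "KF-like framework" described above swaps the physics-based motion model for a learned predictor but keeps the standard measurement update against the GNSS position. As a point of reference, the scalar form of that update step is sketched below (the function name and scalar simplification are mine, not the paper's):

```python
def kf_update(x_pred, P_pred, z, R):
    """Scalar Kalman-style measurement update: blend a predicted state
    x_pred (from any motion predictor, learned or physics-based) with a
    GNSS measurement z of noise variance R."""
    K = P_pred / (P_pred + R)        # Kalman gain
    x = x_pred + K * (z - x_pred)    # corrected state
    P = (1.0 - K) * P_pred           # corrected covariance
    return x, P

# Equal confidence in prediction and measurement -> split the difference.
x, P = kf_update(x_pred=0.0, P_pred=1.0, z=2.0, R=1.0)
```

In RobustStateNet the prediction x_pred comes from the RNN motion predictor instead of a kinematic model, which is what makes the filter robust when an encoder or IMU channel fails.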

    Application of Machine Learning Methods for Human Gait Analysis

    The majority of human gait analysis methods are limited to clinical gait laboratories. The calculation of gait parameters for athletes running in an open environment offers endless possibilities for performance analysis to keep track of training. This thesis demonstrates a method to capture three-dimensional measurements of multidimensional human body movements during walking and running by means of a GPS-aided INS-equipped data logger, and also describes the two-dimensional (forward and vertical) analysis of the captured three-dimensional movement. Gait segmentation based on vertical velocity is presented, and the data processing software built for this work can compute the majority of traditional gait metrics, such as stride duration, average speed, stride length, cadence, and vertical oscillation. The equipment uses inexpensive pressure insoles to generate foot pressure data for model training and for the indirect estimation of vertical ground reaction force and ground contact time. Both machine learning and deep learning approaches are detailed for this indirect estimation, and the possibility of interpersonal gait parameter estimation by means of generalised prediction models is also explored. Both machine learning and deep learning solutions are presented to generate continuous vertical ground reaction force curves along with gait components. The methods presented in this thesis help to analyse human motion by means of gait segmentation and to calculate or estimate numerous spatio-temporal gait parameters. The intra-step variations in motion parameters are a great help in analysing different aspects of outdoor running. The encouraging results reported in this thesis demonstrate the feasibility of a device that provides a detailed analysis of an athlete's performance in an outdoor running environment.
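    The vertical-velocity-based gait segmentation described above can be sketched as boundary detection at zero crossings of the vertical velocity trace. The negative-going-crossing rule and the toy trace below are assumptions for illustration; the thesis's exact detection rule may differ.

```python
def stride_durations(v_vert, dt):
    """Segment a vertical-velocity trace at negative-going zero crossings
    and return the duration of each stride segment in seconds."""
    crossings = [i for i in range(1, len(v_vert))
                 if v_vert[i - 1] >= 0.0 > v_vert[i]]
    return [(b - a) * dt for a, b in zip(crossings, crossings[1:])]

# One full stride between two boundaries in a square-wave-like toy trace
# sampled at 1 Hz.
v = [1.0, 1.0, -1.0, -1.0, 1.0, 1.0, -1.0]
durations = stride_durations(v, dt=1.0)
```

Once stride boundaries are known, the other spatio-temporal metrics follow directly, e.g. cadence as 60 divided by the mean stride duration in seconds.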

    Off-line evaluation of indoor positioning systems in different scenarios: the experiences from IPIN 2020 competition

    Every year, for ten years now, the IPIN competition has aimed at evaluating real-world indoor localisation systems by testing them in a realistic environment, with realistic movement, using the EvAAL framework. The competition has provided a unique overview of the state of the art of systems, technologies, and methods for indoor positioning and navigation purposes. Through fair comparison of the performance achieved by each system, the competition was able to identify the most promising approaches and to pinpoint the most critical working conditions. In 2020, the competition included 5 diverse off-site Tracks, each resembling real use cases and challenges for indoor positioning. The results in terms of participation and accuracy of the proposed systems have been encouraging. The best performing competitors obtained a third quartile of error of 1 m for the Smartphone Track and 0.5 m for the Foot-mounted IMU Track. While not running on physical systems, but only as algorithms, these results represent impressive achievements. Track 3 organizers were supported by the European Union’s Horizon 2020 Research and Innovation programme under the Marie Skłodowska Curie Grant 813278 (A-WEAR: A network for dynamic WEarable Applications with pRivacy constraints), MICROCEBUS (MICINN, ref. RTI2018-095168-B-C55, MCIU/AEI/FEDER UE), INSIGNIA (MICINN ref. PTQ2018-009981), and REPNIN+ (MICINN, ref. TEC2017-90808-REDT). We would like to thank the UJI’s Library managers and employees for their support while collecting the required datasets for Track 3. Track 5 organizers were supported by JST-OPERA Program, Japan, under Grant JPMJOP1612. Track 7 organizers were supported by the Bavarian Ministry for Economic Affairs, Infrastructure, Transport and Technology through the Center for Analytics-Data-Applications (ADA-Center) within the framework of “BAYERN DIGITAL II.” Team UMinho (Track 3) was supported by FCT—Fundação para a Ciência e Tecnologia within the R&D Units Project Scope under Grant UIDB/00319/2020, and the Ph.D. Fellowship under Grant PD/BD/137401/2018. Team YAI (Track 3) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 109-2221-E-197-026. Team Indora (Track 3) was supported in part by the Slovak Grant Agency, Ministry of Education and Academy of Science, Slovakia, under Grant 1/0177/21, and in part by the Slovak Research and Development Agency under Contract APVV-15-0091. Team TJU (Track 3) was supported in part by the National Natural Science Foundation of China under Grant 61771338 and in part by the Tianjin Research Funding under Grant 18ZXRHSY00190. Team Next-Newbie Reckoners (Track 3) were supported by the Singapore Government through the Industry Alignment Fund—Industry Collaboration Projects Grant. This research was conducted at Singtel Cognitive and Artificial Intelligence Lab for Enterprises (SCALE@NTU), which is a collaboration between Singapore Telecommunications Limited (Singtel) and Nanyang Technological University (NTU). Team KawaguchiLab (Track 5) was supported by JSPS KAKENHI under Grant JP17H01762. Team WHU&AutoNavi (Track 6) was supported by the National Key Research and Development Program of China under Grant 2016YFB0502202. Team YAI (Tracks 6 and 7) was supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant MOST 110-2634-F-155-001.
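    The headline numbers above (1 m and 0.5 m) are third quartiles of positioning error, the summary statistic used for ranking in the EvAAL framework. A minimal sketch of computing that statistic over 2-D position estimates; the linear-interpolation percentile rule is an assumption here, not necessarily the competition's exact convention:

```python
import math

def third_quartile_error(estimates, ground_truth):
    """Third quartile (75th percentile, linear interpolation) of the
    2-D positioning errors between estimated and true positions."""
    errs = sorted(math.dist(e, g) for e, g in zip(estimates, ground_truth))
    pos = 0.75 * (len(errs) - 1)
    lo = int(pos)
    frac = pos - lo
    hi = min(lo + 1, len(errs) - 1)
    return errs[lo] + frac * (errs[hi] - errs[lo])

# Toy trajectory with errors of 0, 1, 2, and 3 metres.
est = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
gt  = [(0.0, 0.0)] * 4
q3 = third_quartile_error(est, gt)
```

Using the third quartile rather than the mean makes the score robust to a few gross localisation failures while still penalising systems that are frequently inaccurate.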

    DeepTIO: a deep thermal-inertial odometry with visual hallucination

    This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record. Visual odometry shows excellent performance in a wide range of environments. However, in visually-denied scenarios (e.g. heavy smoke or darkness), pose estimates degrade or even fail. Thermal cameras are commonly used for perception and inspection when the environment has low visibility. However, their use in odometry estimation is hampered by the lack of robust visual features; in part, this is because the sensor measures the ambient temperature profile rather than scene appearance and geometry. To overcome this issue, we propose a Deep Neural Network model for thermal-inertial odometry (DeepTIO) that incorporates a visual hallucination network to provide the thermal network with complementary information. The hallucination network is taught to predict fake visual features from thermal images using a Huber loss. We also employ selective fusion to attentively fuse the features from three different modalities, i.e. thermal, hallucination, and inertial features. Extensive experiments are performed on hand-held and mobile-robot data in benign and smoke-filled environments, showing the efficacy of the proposed model.
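    The Huber loss used to train the hallucination network is quadratic for small residuals and linear for large ones, so occasional badly hallucinated features do not dominate the gradient. A self-contained sketch (the element-wise mean formulation and delta value are illustrative assumptions):

```python
def huber_loss(pred, target, delta=1.0):
    """Mean Huber loss over two equal-length sequences: quadratic for
    residuals within +/-delta, linear beyond."""
    total = 0.0
    for p, t in zip(pred, target):
        a = abs(p - t)
        total += 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)
    return total / len(pred)

# One small residual (quadratic branch) and one outlier (linear branch).
loss = huber_loss([0.5, 3.0], [0.0, 0.0], delta=1.0)
```

Compared with plain L2, the linear tail keeps the outlier's contribution bounded, which matters when the "target" visual features themselves come from an imperfect RGB network.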
