
    Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters

    Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static cameras. This information is usually provided by motor encoders; however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show the extensions required to perform simultaneous online estimation of the joint angles and the vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy comparable to that of a standard static multi-camera configuration.
    Comment: ICRA 201
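    The pan-tilt geometry described above can be illustrated with a minimal sketch. Assuming an idealized 2-DOF gimbal whose joints rotate about the z (pan) and y (tilt) axes, with the kinematic chain already known, the unknown joint angles can be recovered in closed form from the relative rotation between the static and dynamic cameras; the paper's actual method jointly estimates the kinematic parameters as well, which this sketch does not attempt.

```python
import numpy as np

def rot_z(a):
    """Rotation about the z (pan) axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(b):
    """Rotation about the y (tilt) axis."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def recover_joint_angles(R):
    """Invert R = rot_z(pan) @ rot_y(tilt) for the two joint angles.
    Valid for tilt in (-pi/2, pi/2); assumes ideal, orthogonal joint axes."""
    tilt = np.arctan2(-R[2, 0], R[2, 2])
    pan = np.arctan2(R[1, 0], R[0, 0])
    return pan, tilt
```

    In practice the relative rotation would itself come from noisy calibration-target observations, so a closed-form extraction like this would serve only as an initialization for the joint nonlinear optimization over angles and kinematic parameters.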

    DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System

    This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. To further enhance robustness, we employ deep learning-based feature extraction and description for estimation. We also introduce an end-to-end parallel tracking and mapping optimization layer, complemented by a simple loop-closure algorithm, for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of the VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance compared to state-of-the-art methods in terms of robustness and accuracy in adverse conditions. Our implementation's research-based Python API is publicly available on GitHub for further research and development: https://github.com/AbanobSoliman/DH-PTAM
    Comment: Submitted for publication in IEEE RA-
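    The spatio-temporal synchronization idea, gathering the asynchronous events that fall inside a temporal window centred on each frame timestamp and rendering them into a frame-aligned representation, can be sketched generically as follows. This is a common event-frame association scheme under assumed names, not the authors' implementation:

```python
import numpy as np

def events_for_frame(ev_t, frame_t, half_window):
    """Index range [lo, hi) of events within +/- half_window of a frame
    timestamp. Assumes ev_t is sorted ascending (events arrive in time order)."""
    lo = np.searchsorted(ev_t, frame_t - half_window, side="left")
    hi = np.searchsorted(ev_t, frame_t + half_window, side="right")
    return lo, hi

def event_count_image(xs, ys, shape):
    """Accumulate selected events into a per-pixel count image."""
    img = np.zeros(shape, dtype=np.int32)
    np.add.at(img, (ys, xs), 1)  # unbuffered add handles repeated pixels
    return img
```

    A count image is the simplest frame-aligned event representation; learned feature extractors such as those used in the paper would consume a richer tensor, but the temporal windowing step is the same.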

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
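    For reference, the de-facto standard formulation the survey refers to is maximum-a-posteriori estimation over a factor graph, which under Gaussian noise assumptions reduces to a nonlinear least-squares problem:

    X^* = \arg\min_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert^2_{\Omega_k}

    where X collects robot poses and map variables, z_k are the measurements, h_k the corresponding measurement models, and \Omega_k the measurement information matrices.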

    A multi-camera and multimodal dataset for posture and gait analysis

    Monitoring gait and posture while using assistive robotic devices is relevant to attain effective assistance and to assess the user's progression over time. This work presents a multi-camera, multimodal, and detailed dataset involving 14 healthy participants walking with a wheeled robotic walker equipped with a pair of affordable cameras. Depth data were acquired at 30 fps and synchronized with inertial data from Xsens MTw Awinda sensors and kinematic data from the segments of the Xsens biomechanical model, acquired at 60 Hz. Participants walked with the robotic walker at 3 different gait speeds, across 3 different walking scenarios/paths at 3 different locations. In total, this dataset provides approximately 92 minutes of recording time, corresponding to nearly 166,000 samples of synchronized data. This dataset may contribute to scientific research by allowing the development and evaluation of: (i) vision-based pose estimation algorithms, exploring classic or deep learning approaches; (ii) human detection and tracking algorithms; (iii) movement forecasting; and (iv) biomechanical analysis of gait/posture when using a rehabilitation device.
    This work has been supported by the Fundação para a Ciência e Tecnologia (FCT) with the Reference Scholarship under Grant 2020.05708.BD and under the national support to R&D units grant, through the reference projects UIDB/04436/2020 and UIDP/04436/2020.
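    Consumers of such a dataset typically need to associate each 30 fps depth frame with the closest 60 Hz inertial/kinematic sample. A minimal nearest-timestamp association (a hypothetical helper, not part of the dataset's tooling):

```python
import numpy as np

def nearest_sync(cam_t, imu_t):
    """For each camera timestamp, return the index of the closest IMU sample.
    Assumes imu_t is sorted ascending; ties go to the later sample."""
    idx = np.searchsorted(imu_t, cam_t)
    idx = np.clip(idx, 1, len(imu_t) - 1)
    left, right = imu_t[idx - 1], imu_t[idx]
    return np.where((cam_t - left) < (right - cam_t), idx - 1, idx)
```

    With a 2:1 rate ratio as in this dataset, each depth frame maps to roughly every second inertial sample; interpolating the inertial stream to the camera timestamps would be the natural refinement.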

    Inertial Sensors in Swimming: Detection of Stroke Phases through 3D Wrist Trajectory.

    Monitoring upper arm propulsion is a crucial task for swimmer performance: the swimmer can produce displacement of the body by modulating the upper limb kinematics. The present study proposes an approach for automatically recognizing all stroke phases through the three-dimensional (3D) wrist trajectory estimated using inertial devices. Inertial data of 14 national-level male swimmers were collected while they performed 25 m front-crawl trials at intensities ranging from 75% to 100% of their 25 m maximal velocity. The 3D coordinates of the wrist were computed using the inertial sensors' orientation and the kinematic chain of the upper arm biomechanical model. An algorithm that automatically estimates the duration of the entry, pull, push, and recovery phases from the 3D wrist trajectory was tested against a two-dimensional (2D) video-based system used as the temporal reference. A very large correlation (r = 0.87), low bias (0.8%), and a reasonable root mean square error (2.9%) for the stroke phase durations were observed for the inertial devices versus the 2D video-based method. The 95% limits of agreement (LoA) for each stroke phase duration were always lower than 7.7% of the cycle duration. The mean durations of the entry, pull, push, and recovery phases, in percentage of the complete cycle, detected from the 3D wrist trajectory using inertial devices were 34.7 (±6.8)%, 22.4 (±5.8)%, 14.2 (±4.4)%, and 28.4 (±4.5)%. The swimmer's velocity and arm coordination model did not affect the performance of the algorithm in stroke phase detection. The 3D wrist trajectory can thus be used for an accurate and complete identification of the stroke phases in front crawl using inertial sensors. These results indicate that inertial sensor technology is a viable option for swimming arm-stroke phase assessment.
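    The validation metrics reported above (bias, 95% limits of agreement, RMS error, and Pearson correlation) are standard Bland-Altman-style agreement statistics. The function below is an illustrative sketch of how they are computed, not the authors' analysis code:

```python
import numpy as np

def agreement_stats(a, b):
    """Compare two paired estimates (e.g. phase durations in % of cycle
    from inertial vs. video systems). Returns bias, 95% limits of
    agreement, RMS error, and Pearson correlation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = a - b
    bias = d.mean()
    sd = d.std(ddof=1)                       # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    rmse = np.sqrt((d ** 2).mean())
    r = np.corrcoef(a, b)[0, 1]
    return bias, loa, rmse, r
```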

    Upper Limb Portable Motion Analysis System Based on Inertial Technology for Neurorehabilitation Purpose

    Here, an inertial sensor-based monitoring system for measuring and analyzing upper limb movements is presented. The final goal is the integration of this motion-tracking device within a portable rehabilitation system for brain injury patients. A set of four inertial sensors mounted on a special garment worn by the patient provides the quaternions representing the orientation of the patient's upper limb in space. A kinematic model is built to estimate 3D upper limb motion for accurate therapeutic evaluation. The human upper limb is represented as a kinematic chain of rigid bodies with three joints and six degrees of freedom. Validation of the system has been performed by co-registration of movements with a commercial optoelectronic tracking system. Successful results are shown that exhibit a high correlation between the signals provided by both devices, obtained at the Institut Guttmann Neurorehabilitation Hospital.
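    The kinematic-chain estimation described above can be sketched as follows: given the global orientation quaternion of each segment (as an IMU on that segment would report) and the segment lengths, joint positions follow by rotating each segment's local axis and accumulating along the chain. Function names and the choice of the local x axis as the segment direction are assumptions for illustration:

```python
import numpy as np

def q_mul(q1, q2):
    """Hamilton product; quaternions as [w, x, y, z]."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q (q * v * q_conjugate)."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return q_mul(q_mul(q, qv), q_conj)[1:]

def forward_kinematics(quats, lengths):
    """Joint positions of a serial chain.
    quats[i]: global orientation of segment i; lengths[i]: its length
    along the local x axis. Returns one 3D position per joint."""
    p = np.zeros(3)
    out = []
    for q, L in zip(quats, lengths):
        p = p + q_rotate(np.asarray(q, float), np.array([L, 0.0, 0.0]))
        out.append(p.copy())
    return out
```

    Because each IMU already reports a global orientation, the segments are placed independently rather than by composing relative joint rotations; a model with joint-angle constraints would instead compose rotations down the chain.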