
    Occlusion-Robust MVO: Multimotion Estimation Through Occlusion Via Motion Closure

    Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These approaches are very effective in specific applications but do not generalize to the full multimotion estimation problem. This paper presents a pipeline for estimating multiple motions, including the camera egomotion, in the presence of occlusions. The approach uses an expressive motion prior to estimate the SE(3) trajectory of every motion in the scene, even during temporary occlusions, and to identify the reappearance of motions through motion closure. The performance of this occlusion-robust multimotion visual odometry (MVO) pipeline is evaluated on real-world data and the Oxford Multimotion Dataset.
    Comment: To appear at the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). An earlier version of this work first appeared at the Long-term Human Motion Planning Workshop (ICRA 2019). 8 pages, 5 figures. Video available at https://www.youtube.com/watch?v=o_N71AA6FR
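    The motion-closure step lends itself to a small illustration: extrapolate each motion's SE(3) trajectory through the occlusion under a constant-velocity prior, then gate any reappearing track against the prediction. Below is a minimal sketch assuming numpy/scipy; the function names, the norm-based gate, and all values are illustrative rather than the paper's implementation (which gates against the estimator's covariance).

```python
# Illustrative sketch of motion closure under a constant-velocity prior.
import numpy as np
from scipy.linalg import expm, logm

def twist_between(T_a, T_b, dt):
    """Body-frame twist (4x4 matrix form) taking T_a to T_b over dt seconds."""
    return logm(np.linalg.inv(T_a) @ T_b).real / dt

def extrapolate(T_last, xi, dt):
    """Constant-velocity prediction: integrate the twist for dt seconds."""
    return T_last @ expm(xi * dt)

def motion_closure(T_pred, T_obs, gate=0.5):
    """Accept a reappeared track if the pose error lies inside the gate."""
    err = logm(np.linalg.inv(T_pred) @ T_obs).real
    return np.linalg.norm(err) < gate

# Two poses observed 0.1 s apart before the occlusion:
T0 = np.eye(4)
T1 = expm(np.array([[0.0, -0.01, 0.0, 0.1],
                    [0.01, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 0.0]]))
xi = twist_between(T0, T1, 0.1)
T_pred = extrapolate(T1, xi, 0.5)      # predict 0.5 s through the occlusion
print(motion_closure(T_pred, T_pred))  # a perfectly matching track: True
```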

    GNSS/Multi-Sensor Fusion Using Continuous-Time Factor Graph Optimization for Robust Localization

    Accurate and robust vehicle localization in highly urbanized areas is challenging because sensor measurements are often corrupted in these complicated, large-scale environments. This paper introduces GNSS-FGO, an online and global trajectory estimator that fuses GNSS observations alongside multiple sensor measurements for robust vehicle localization. In GNSS-FGO, we fuse asynchronous sensor measurements into the graph with a continuous-time trajectory representation using Gaussian process regression. This enables querying states at arbitrary timestamps, so that sensor observations are fused without requiring strict state and measurement synchronization; the proposed method thus presents a generalized factor graph for multi-sensor fusion. To evaluate and study different GNSS fusion strategies, we fuse GNSS measurements in loose and tight coupling with a speed sensor, IMU, and lidar odometry. We employ datasets from measurement campaigns in Aachen, Duesseldorf, and Cologne in experimental studies and present comprehensive discussions on sensor observations, smoother types, and hyperparameter tuning. Our results show that the proposed approach enables robust trajectory estimation in dense urban areas, where the classic multi-sensor fusion method fails due to sensor degradation. In a test sequence containing a 17 km route through Aachen, the proposed method results in a mean 2D positioning error of 0.19 m for loosely coupled GNSS fusion and 0.48 m when fusing raw GNSS observations with lidar odometry in tight coupling.
    Comment: Revision of arXiv:2211.0540
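    The continuous-time mechanism at the heart of this design, querying the state at an arbitrary timestamp from the two neighbouring estimated states, has a closed form under a white-noise-on-acceleration GP prior. The sketch below shows it for a 1-D position/velocity state; all names and values are illustrative, not GNSS-FGO's actual code.

```python
# GP interpolation under a white-noise-on-acceleration (constant-velocity) prior.
import numpy as np

def Phi(dt):
    """State-transition matrix of the constant-velocity prior."""
    return np.array([[1.0, dt], [0.0, 1.0]])

def Q(dt, qc=1.0):
    """Process-noise covariance accumulated over dt."""
    return qc * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])

def interpolate(x1, t1, x2, t2, tau):
    """Posterior-mean state at query time tau in [t1, t2]."""
    psi = Q(tau - t1) @ Phi(t2 - tau).T @ np.linalg.inv(Q(t2 - t1))
    lam = Phi(tau - t1) - psi @ Phi(t2 - t1)
    return lam @ x1 + psi @ x2

x1 = np.array([0.0, 1.0])  # at t = 0 s: position 0 m, velocity 1 m/s
x2 = np.array([1.0, 1.0])  # at t = 1 s: position 1 m, velocity 1 m/s
print(interpolate(x1, 0.0, x2, 1.0, 0.3))  # -> [0.3, 1.0]
```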

    Continuous-Time Range-Only Pose Estimation

    Range-only (RO) localization involves determining the position of a mobile robot by measuring the distance to specific anchors. RO localization is challenging since the measurements are low-dimensional and a single range sensor does not provide enough information to estimate the full pose of the robot. As such, range sensors are typically coupled with other sensing modalities such as wheel encoders or inertial measurement units (IMUs) to estimate the full pose. In this work, we propose a continuous-time Gaussian process (GP)-based trajectory estimation method to estimate the full pose of a robot using only range measurements from multiple range sensors. Results from simulation and real experiments show that our proposed method, using off-the-shelf range sensors, achieves performance comparable to, and in some cases better than, alternative state-of-the-art sensor-fusion methods that use additional sensing modalities.
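    To see why multiple ranges constrain the position while a single range does not, the position-only part of the problem reduces to nonlinear least squares over anchor distances; the paper's contribution is coupling such measurements with a continuous-time GP motion prior so that the full pose, including orientation, becomes observable. A minimal Gauss-Newton sketch with illustrative anchors and values:

```python
# Position-only range localization via Gauss-Newton (illustrative).
import numpy as np

def gauss_newton_position(anchors, ranges, x0, iters=10):
    """Estimate a 2-D position from anchor ranges."""
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)  # predicted ranges
        r = d - ranges                           # residuals
        J = (x - anchors) / d[:, None]           # d(range)/d(position)
        x -= np.linalg.solve(J.T @ J, J.T @ r)   # normal-equation step
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_x = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_x, axis=1)
print(gauss_newton_position(anchors, ranges, np.array([1.0, 1.0])))  # -> [3. 4.]
```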

    Self-supervised Learning of Primitive-based Robotic Manipulation


    Simulating Flying Insects Using Dynamics and Data-Driven Noise Modeling to Generate Diverse Collective Behaviors

    We present a biologically plausible dynamics model to simulate swarms of flying insects. Our formulation, which is based on biological conclusions and experimental observations, is designed to simulate large insect swarms of varying densities. We use a force-based model that captures different interactions between the insects and the environment and computes collision-free trajectories for each individual insect. Furthermore, we model the noise as a constructive force at the collective level and present a technique to generate noise-induced insect movements in a large swarm that are similar to those observed in real-world trajectories. We use a data-driven formulation that is based on pre-recorded insect trajectories. We also present a novel evaluation metric and a statistical validation approach that take into account various characteristics of insect motions. In practice, the combination of a curl-noise function with our dynamics model is used to generate realistic swarm simulations and emergent behaviors. We highlight its performance for simulating large flying swarms of midges, fruit flies, locusts, and moths and demonstrate many collective behaviors, including aggregation, migration, phase transition, and escape responses.
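    The constructive use of noise hinges on curl noise being divergence-free: deriving the velocity field from a smooth scalar potential stirs agents without artificially compressing or dispersing the swarm. A minimal 2-D sketch; the sinusoidal potential stands in for the data-driven noise model, and all constants are illustrative.

```python
# Curl noise from a smooth potential, applied as a force in a swarm step.
import numpy as np

def psi(p):
    """Smooth scalar potential (stand-in for a data-driven noise field)."""
    x, y = p
    return np.sin(0.3 * x) * np.cos(0.2 * y)

def curl_noise(p, eps=1e-4):
    """Divergence-free 2-D velocity field: v = (d psi/dy, -d psi/dx)."""
    dpx = (psi(p + [eps, 0.0]) - psi(p - [eps, 0.0])) / (2 * eps)
    dpy = (psi(p + [0.0, eps]) - psi(p - [0.0, eps])) / (2 * eps)
    return np.array([dpy, -dpx])

# One Euler step: attraction to the centroid plus curl-noise stirring.
rng = np.random.default_rng(0)
pos = rng.uniform(-5.0, 5.0, size=(50, 2))
vel = np.zeros_like(pos)
dt, k_attr, k_noise = 0.05, 0.5, 2.0
centroid = pos.mean(axis=0)
for i in range(len(pos)):
    force = k_attr * (centroid - pos[i]) + k_noise * curl_noise(pos[i])
    vel[i] += dt * force
pos += dt * vel
```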

    Implementation of control algorithm for mechanical image stabilization

    Cameras mounted on boats and in other similar environments can be hard to use if waves and wind cause unwanted motions of the camera that disturb the desired image. This is a problem that can be addressed by mechanical image stabilization, which is the goal of this thesis. The mechanical image stabilization is achieved by controlling two stepper motors in a pan-tilt-zoom (PTZ) camera provided by Axis Communications. Pan and tilt indicate that the camera can be rotated around two axes that are perpendicular to one another. The thesis begins with the problem of orientation estimation, i.e., determining how the camera is oriented with respect to, e.g., a fixed coordinate system. Sensor fusion is used to fuse accelerometer and gyroscope data into a better estimate. Both the Kalman and complementary filters are investigated and compared for this purpose; the Kalman filter is used in the final implementation due to its better performance. To hold a desired camera orientation, a compensation generator, here called a reference generator, is used. The name comes from the fact that it provides reference signals for the pan and tilt motors in order to compensate for external disturbances. The generator gets information from both the pan and tilt encoders and the Kalman filter; the encoders provide the camera position relative to the camera's own chassis. If the compensation signals, which also serve as reference values for the inner pan-tilt control, are tracked by the pan and tilt motors, disturbances are suppressed. The control design uses a model obtained through system identification; the design and control simulations were carried out in the MATLAB extensions Control System Designer and Simulink, and a PID controller was chosen. The final part of the thesis describes the results of experiments carried out on the real process, i.e., the camera mounted in different setups, including a robotic arm simulating sea conditions. The results show that the pan motor manages to track reference signals up to the required frequency of 1 Hz, whereas the tilt motor only manages 0.5 Hz and thereby falls below the required frequency. The results nevertheless demonstrate that the concept of the thesis is feasible.
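    The complementary filter that the thesis compares against the Kalman filter is simple enough to sketch: integrate the gyro for short-term accuracy and blend in the drift-free but noisy accelerometer tilt angle. The noise values below are illustrative, not the thesis' parameters.

```python
# One-axis complementary filter for orientation estimation (illustrative).
import numpy as np

def complementary_filter(gyro_rate, accel_angle, angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer tilt angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulate holding a 10-degree tilt with a biased gyro and a noisy accelerometer.
rng = np.random.default_rng(1)
true_angle, dt, angle = np.deg2rad(10.0), 0.01, 0.0
for _ in range(1000):
    gyro_rate = 0.02 + rng.normal(0.0, 0.01)          # bias + noise, rad/s
    accel_angle = true_angle + rng.normal(0.0, 0.05)  # noisy tilt, rad
    angle = complementary_filter(gyro_rate, accel_angle, angle, dt)
print(np.rad2deg(angle))  # settles near 10 degrees, plus a small bias-induced offset
```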

    Picking Up Speed: Continuous-Time Lidar-Only Odometry using Doppler Velocity Measurements

    Frequency-Modulated Continuous-Wave (FMCW) lidar is a recently emerging technology that additionally enables per-return instantaneous relative radial velocity measurements via the Doppler effect. In this letter, we present the first continuous-time lidar-only odometry algorithm using these Doppler velocity measurements from an FMCW lidar to aid odometry in geometrically degenerate environments. We apply an existing continuous-time framework that efficiently estimates the vehicle trajectory using Gaussian process regression to compensate for motion distortion due to the scanning-while-moving nature of any mechanically actuated lidar (FMCW and non-FMCW). We evaluate our proposed algorithm on several real-world datasets, including publicly available ones and datasets we collected. Our algorithm outperforms the only existing method that also uses Doppler velocity measurements, and we study difficult conditions where including this extra information greatly improves performance. We additionally demonstrate state-of-the-art performance of lidar-only odometry with and without using Doppler velocity measurements in nominal conditions. Code for this project can be found at: https://github.com/utiasASRL/steam_icp
    Comment: Submitted to RA-
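    The reason Doppler measurements help in geometrically degenerate scenes can be seen in a few lines: for a static world, each return's radial speed is the projection of the sensor's velocity onto the beam direction, so stacking many returns yields a linear least-squares problem for the translational velocity that stays well conditioned regardless of scene geometry. The sign convention and values below are illustrative.

```python
# Sensor velocity from per-return Doppler radial speeds (illustrative).
import numpy as np

rng = np.random.default_rng(2)
v_true = np.array([10.0, 0.5, 0.0])  # m/s, expressed in the sensor frame
dirs = rng.normal(size=(200, 3))     # unit beam directions
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
v_radial = -dirs @ v_true + rng.normal(0.0, 0.1, size=200)  # measured speeds

v_est, *_ = np.linalg.lstsq(-dirs, v_radial, rcond=None)
print(v_est)  # close to v_true even where ICP geometry degenerates
```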

    Reducing Road Wear While Ensuring Comfort and Charging Constraints for Dynamically Charged Passenger Vehicles Through Noise Shaped Control Inputs

    Dynamically charged vehicles suffer from power loss during wireless power transfer due to vehicle-coil misalignment while driving. Autonomous dynamically charged vehicles can maximize wireless power transfer by following an optimal charging path, but repeatedly driving that same precise path increases road wear. To avoid unnecessary road wear and rutting, a path planner can intentionally inject variability into an autonomous vehicle's path. However, introducing variability into an optimal charging path risks depleting the battery before arrival at the destination and increases rider discomfort. Therefore, a path planner is proposed that satisfies average charging criteria and ensures rider comfort while reducing road wear.
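    One way to realize such noise-shaped inputs is to low-pass-filter white noise into a smooth lateral offset (for comfort) and scale it so that an assumed coupling-efficiency model still meets the average charging criterion. A minimal sketch; the quadratic efficiency model and every constant below are assumptions for illustration, not the paper's design.

```python
# Noise-shaped lateral offset subject to an average charging constraint.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
white = rng.normal(size=n)
offset = np.zeros(n)
beta = 0.95  # low-pass pole: slow, comfortable lateral drift
for k in range(1, n):
    offset[k] = beta * offset[k - 1] + (1 - beta) * white[k]

# Assumed quadratic coupling efficiency: eta(d) = 1 - (d / d0)^2.
d0, eta_min = 0.4, 0.9                   # illustrative coil width and criterion
sigma_max = d0 * np.sqrt(1.0 - eta_min)  # E[eta] >= eta_min when std(d) <= sigma_max
offset *= sigma_max / offset.std()
print(offset.std(), (1.0 - (offset / d0) ** 2).mean())  # std at limit, mean eta ~ 0.9
```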

    On Robotic Work-Space Sensing and Control

    Industrial robots are fast and accurate when working with known objects at precise locations in well-structured manufacturing environments, as in the classical automation setting. In one sense, limited use of sensors leaves robots blind and numb, unaware of what is happening in their surroundings. Equipping a system with sensors has the potential to add new functionality and increase the set of uncertainties a robot can handle, but it is not as simple as that: it is often difficult to interpret the measurements and use them to draw the necessary conclusions about the state of the work space. For effective sensor-based control, it is necessary both to understand the sensor data and to know how to act on it, giving the robot perception-action capabilities. This thesis presents research on how sensors and estimation techniques can be used in robot control. The suggested methods are theoretically analyzed and evaluated with a strong focus on experimental verification in real-time settings. One application class treated is the ability to react quickly and accurately to events detected by vision, demonstrated by the realization of a ball-catching robot. A new approach is proposed for performing high-speed color-based image analysis that is robust to varying illumination conditions and motion blur. Furthermore, a method for object tracking is presented along with a novel Kalman-filter initialization that can handle initial-state estimates with infinite variance. A second application class treated is robotic assembly using force control. A study of two assembly scenarios is presented, investigating the possibility of using force-controlled assembly in industrial robotics. Two new approaches for robotic contact-force estimation without any force sensor are presented and validated in assembly operations. The treated topics represent some of the challenges in sensor-based robot control, and it is demonstrated how they can be used to extend the functionality of industrial robots.
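    The infinite-variance initialization can be illustrated with the standard information-filter device: representing the state in information form lets an information matrix of zero encode complete ignorance, something a covariance-form Kalman filter cannot express. The sketch below is illustrative, not the thesis' exact formulation.

```python
# Information-form initialization with infinite prior variance (illustrative).
import numpy as np

Lam = np.zeros((2, 2))  # information matrix (inverse covariance); 0 = infinite variance
eta = np.zeros(2)       # information vector

R = np.array([[0.1]])   # measurement-noise covariance
for H, z in [(np.array([[1.0, 0.0]]), 3.0),    # measure first state
             (np.array([[0.0, 1.0]]), -1.0)]:  # measure second state
    Lam += H.T @ np.linalg.inv(R) @ H
    eta += (H.T @ np.linalg.inv(R) @ np.array([z])).ravel()

x_hat = np.linalg.solve(Lam, eta)  # recoverable once Lam has full rank
print(x_hat)  # -> [ 3. -1.]
```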