
    Path Following with Slip Compensation for a Mars Rover

    A software system for autonomous operation of a Mars rover is composed of several key algorithms that enable the rover to accurately follow a designated path, compensate for slippage of its wheels on terrain, and reach intended goals. The techniques implemented by the algorithms are visual odometry, full-vehicle kinematics, a Kalman filter, and path following with slip compensation. The visual-odometry algorithm tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs, by use of a maximum-likelihood motion-estimation algorithm. The full-vehicle kinematics algorithm estimates motion, under a no-slip assumption, from measured wheel rates, steering angles, and angles of rockers and bogies in the rover suspension system. The Kalman filter merges data from an inertial measurement unit (IMU) and the visual-odometry algorithm. The merged estimate is then compared to the kinematic estimate to determine whether and how much slippage has occurred. If no statistically significant slippage has occurred, the kinematic estimate is used to complement the Kalman-filter estimate. If slippage has occurred, a slip vector is calculated by subtracting the current Kalman-filter estimate from the kinematic estimate. This slip vector is then used, in conjunction with the inverse kinematics, to determine the wheel velocities and steering angles needed to compensate for slip and follow the desired path.
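
    As a rough illustration of the slip-detection step described above, the following Python sketch compares the no-slip kinematic estimate against the fused IMU/visual-odometry (Kalman filter) estimate with a chi-square-style significance test. All names, the pose parameterization, and the threshold are assumptions for illustration, not the flight software.

    import numpy as np

    def detect_slip(kinematic_pose, kf_pose, kf_cov, threshold=7.81):
        """Flag statistically significant slip between the kinematic and
        Kalman-filter pose estimates (3-DOF planar pose assumed here).

        A Mahalanobis distance above `threshold` (the 95% chi-square value
        for 3 degrees of freedom, an assumed choice) indicates slippage.
        """
        innovation = kinematic_pose - kf_pose
        d2 = innovation @ np.linalg.solve(kf_cov, innovation)
        if d2 > threshold:
            # Slip vector: kinematic estimate minus current Kalman-filter
            # estimate, as described in the abstract.
            return kinematic_pose - kf_pose
        return None  # no significant slip; kinematic estimate refines the filter

    # Hypothetical usage: a detected slip vector would be fed to the rover's
    # inverse kinematics to adjust wheel velocities and steering angles.
    slip = detect_slip(np.array([1.02, 0.00, 0.0]),
                       np.array([0.95, 0.03, 0.0]),
                       np.eye(3) * 0.01)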

    Photogrammetric restitution of a presumed ancient Asclepius temple in Titani, Peloponnesos, Greece

    Close-range photogrammetry is a useful tool for the documentation and registration of archaeological sites. In this case, photogrammetric restitution is applied to a presumed Asclepieion, a Classical temple site in Titani, Peloponnesos, Greece. The archaeological remains recorded and processed in this stage are small fragments of walls made of irregularly shaped stones. The fragmentary remains, and the need to record both the facades of the stones and their upper surfaces, complicate the photogrammetric recording and processing workflow. 3D recording is important for the documentation, conservation and possible further excavation of the site. Stereoscopic pictures in combination with terrestrial topographic measurements are processed in the photogrammetric software VirtuoZo. The stereo photographs were taken with a non-metric, high-resolution digital single-lens reflex camera with a minimum overlap of 65 percent. Targets placed on the remains of the walls were measured by total station to obtain ground control points for the orientation of each 3D stereo model in an absolute coordinate system (HGRS87). The photogrammetric processing of the stereo models results in very accurate digital elevation models and orthophotos of the walls. Combining and merging these final products in CAD software leads to a 3D presentation of the archaeological excavation, which can support further work on the site.
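
    To make the ground-control step concrete, here is a minimal Python sketch of an absolute-orientation (similarity-transform) estimate between stereo-model coordinates and surveyed target coordinates, in the spirit of the HGRS87 registration the abstract describes. The function name and method (Umeyama/Horn least squares) are illustrative assumptions; VirtuoZo's actual implementation is not public.

    import numpy as np

    def absolute_orientation(model_pts, world_pts):
        """Estimate scale s, rotation R, translation t such that
        world ~ s * R @ model + t, in the least-squares sense.

        model_pts, world_pts : (N, 3) arrays of corresponding points,
        e.g. targets on the walls measured by total station.
        """
        mu_m = model_pts.mean(axis=0)
        mu_w = world_pts.mean(axis=0)
        M = model_pts - mu_m
        W = world_pts - mu_w
        # Cross-covariance between the two centred point sets.
        U, S, Vt = np.linalg.svd(W.T @ M / len(model_pts))
        D = np.eye(3)
        if np.linalg.det(U @ Vt) < 0:   # guard against a reflection
            D[2, 2] = -1.0
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / (M ** 2).sum() * len(model_pts)
        t = mu_w - s * R @ mu_m
        return s, R, t

    # Hypothetical usage: any model point p then maps into HGRS87 as
    # world = s * R @ p + t.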

    Combining Stereo Disparity and Optical Flow for Basic Scene Flow

    Scene flow is a description of real-world motion in 3D that contains more information than optical flow. Because of its complexity, no variant yet exists that is sufficiently robust and accurate for real-time scene flow estimation in an automotive or commercial-vehicle context. Therefore, many applications estimate 2D optical flow instead. In this paper, we examine the combination of top-performing state-of-the-art optical flow and stereo disparity algorithms in order to achieve a basic scene flow. On the public KITTI Scene Flow Benchmark we demonstrate the reasonable accuracy of the combined approach and show its computational speed. Comment: Commercial Vehicle Technology Symposium (CVTS), 201
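
    A minimal sketch of the combination idea, assuming rectified stereo with a pinhole model and precomputed disparity and flow maps; the function and parameter names are illustrative, not the paper's code.

    import numpy as np

    def scene_flow_from_stereo_and_flow(disp_t, disp_t1, flow, fx, fy, cx, cy, baseline):
        """Basic scene flow by combining stereo disparity and optical flow.

        disp_t, disp_t1 : (H, W) disparity maps at times t and t+1
        flow            : (H, W, 2) optical flow (u, v) from t to t+1
        Returns an (H, W, 3) array of per-pixel 3D motion in the camera frame.
        """
        H, W = disp_t.shape
        ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)

        def backproject(x, y, d):
            # Pinhole stereo model: depth Z = fx * baseline / disparity.
            Z = fx * baseline / np.maximum(d, 1e-6)
            X = (x - cx) * Z / fx
            Y = (y - cy) * Z / fy
            return np.stack([X, Y, Z], axis=-1)

        # 3D point at time t.
        P_t = backproject(xs, ys, disp_t)

        # Follow the flow to t+1 and sample the second disparity map there
        # (nearest-neighbour sampling keeps the sketch simple).
        x1 = np.clip(np.rint(xs + flow[..., 0]), 0, W - 1).astype(int)
        y1 = np.clip(np.rint(ys + flow[..., 1]), 0, H - 1).astype(int)
        P_t1 = backproject(xs + flow[..., 0], ys + flow[..., 1], disp_t1[y1, x1])

        return P_t1 - P_t   # per-pixel 3D displacement = basic scene flow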

    Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera

    Understanding ego-motion and surrounding vehicle state is essential to enable automated driving and advanced driving-assistance technologies. Typical approaches to this problem fuse multiple sensors such as LiDAR, camera, and radar to recognize surrounding vehicle state, including position, velocity, and orientation. Such sensing modalities are overly complex and costly for production personal-use vehicles. In this paper, we propose a novel machine learning method to estimate ego-motion and surrounding vehicle state using a single monocular camera. Our approach is based on a combination of three deep neural networks that estimate the 3D vehicle bounding box, depth, and optical flow from a sequence of images. The main contribution of this paper is a new framework and algorithm that integrates these three networks to estimate ego-motion and surrounding vehicle state. To realize more accurate 3D position estimation, we perform ground-plane correction in real time. The efficacy of the proposed method is demonstrated through experimental evaluations that compare our results to ground-truth data available from other sensors, including the CAN bus and LiDAR.
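
    A rough sketch of how such network outputs might be fused into a 3D vehicle position with a simple ground-plane correction; the networks are treated as black boxes, and every name, box format, and constant here is an assumption for illustration, not the paper's actual pipeline.

    import numpy as np

    def vehicle_position_3d(bbox, depth_map, K, ground_normal, cam_height):
        """Fuse a 2D detection box with a dense monocular depth map to place
        a vehicle in the camera frame, then apply a ground-plane correction.

        bbox          : (x_min, y_min, x_max, y_max) in pixels (assumed format)
        depth_map     : (H, W) per-pixel depth from the depth network
        K             : 3x3 camera intrinsics matrix
        ground_normal : unit normal of the road plane in camera coordinates
        cam_height    : camera height above the road, in metres
        """
        x0, y0, x1, y1 = [int(v) for v in bbox]
        # Median depth inside the box is a robust range estimate.
        z = np.median(depth_map[y0:y1, x0:x1])
        # Back-project the bottom-centre of the box: roughly where the
        # vehicle touches the road.
        pixel = np.array([0.5 * (x0 + x1), float(y1), 1.0])
        p = np.linalg.inv(K) @ pixel * z
        # Ground-plane correction (assumes a locally flat road): project the
        # contact point onto the plane n.p = cam_height to reduce depth bias.
        p = p - (ground_normal @ p - cam_height) * ground_normal
        return p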

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, and real-time operation ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both stereo and monocular versions of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
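
    The cost function described above can be sketched in general form as follows; the notation and indexing are assumptions for illustration, and the paper's exact weighting may differ:

    J(\mathbf{x}) \;=\;
      \sum_{k} \sum_{j \in \mathcal{J}(k)}
        {\mathbf{e}_{r}^{\,j,k}}^{\!\top} \, \mathbf{W}_{r}^{\,j,k} \, \mathbf{e}_{r}^{\,j,k}
      \;+\;
      \sum_{k}
        {\mathbf{e}_{s}^{\,k}}^{\!\top} \, \mathbf{W}_{s}^{\,k} \, \mathbf{e}_{s}^{\,k}

    Here e_r^{j,k} is the reprojection error of landmark j observed in keyframe k, e_s^k is the inertial (IMU) error term linking consecutive keyframes, and the W matrices are information weights derived from the respective measurement covariances. Minimizing J over the bounded keyframe window, with older states marginalized out, is what keeps the optimization real-time.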