27 research outputs found

    Leader follower formation control of ground vehicles using dynamic pixel count and inverse perspective mapping

    This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation control strategy based on pixel counts from a commercial-grade electro-optical camera. Localization of the leader, for motion along the line of sight as well as along obliquely inclined directions, is performed from the pixel variation of the images with respect to two arbitrarily designated reference positions in the image frames. From an established relationship between the displacement of the camera along the viewing direction and the difference in pixel counts between the reference points, the range and bearing angle between the follower's camera and the leader are calculated. The Inverse Perspective Transform is used to account for the nonlinear relationship between the height of a vehicle in a forward-facing image and its distance from the camera. The formulation is validated with experiments.
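    The inverse-perspective idea the abstract refers to can be sketched for the simplest case: a forward-facing pinhole camera over a flat ground plane, where the image row of a ground point maps hyperbolically to range. This is a minimal illustration under those flat-ground assumptions, not the paper's full formulation; the function name and parameters are hypothetical.

```python
def ipm_ground_distance(v_px, cam_height_m, focal_px, horizon_row_px):
    """Estimate ground distance to a point on a flat ground plane from
    its image row in a forward-facing pinhole camera.

    The relationship is nonlinear: distance grows hyperbolically as the
    row approaches the horizon, which is why a raw pixel count is not
    proportional to range and an inverse-perspective correction is needed.
    """
    rows_below_horizon = v_px - horizon_row_px
    if rows_below_horizon <= 0:
        raise ValueError("point lies at or above the horizon")
    return cam_height_m * focal_px / rows_below_horizon

# A point 100 rows below the horizon, camera 1.5 m high, f = 800 px:
d = ipm_ground_distance(v_px=500, cam_height_m=1.5,
                        focal_px=800.0, horizon_row_px=400)
# d = 1.5 * 800 / 100 = 12.0 m
```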

    Graph-Based Classification of Omnidirectional Images

    Omnidirectional cameras are widely used in areas such as robotics and virtual reality, as they provide a wide field of view. Their images are often processed with classical methods, which can unfortunately lead to non-optimal solutions, as these methods are designed for planar images whose geometrical properties differ from those of omnidirectional ones. In this paper we study the image classification task by taking into account the specific geometry of omnidirectional cameras with graph-based representations. In particular, we extend deep learning architectures to data on graphs, and we propose a principled way of constructing the graph such that convolutional filters respond similarly to the same pattern at different positions of the image, regardless of lens distortions. Our experiments show that the proposed method outperforms current techniques on the omnidirectional image classification problem.
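    One plausible ingredient of such a graph construction (a sketch, not necessarily the authors' exact recipe) is to place graph nodes on the viewing sphere rather than the distorted pixel grid, and weight edges by geodesic distance there, so that neighboring pixels near the poles of an equirectangular image, which are geodesically close, are connected more strongly than the planar grid suggests:

```python
import numpy as np

def sphere_coords(rows, cols):
    """Map an equirectangular pixel grid to unit-sphere points."""
    theta = (np.arange(rows) + 0.5) / rows * np.pi        # polar angle
    phi = (np.arange(cols) + 0.5) / cols * 2 * np.pi      # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)

def geodesic_weight(p, q, sigma=0.1):
    """Edge weight decaying with great-circle distance on the sphere,
    so a filter sees comparable neighborhoods at every latitude."""
    d = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.exp(-d**2 / (2 * sigma**2))

pts = sphere_coords(4, 8)
# horizontal neighbors near the pole are closer on the sphere than
# the same pixel offset near the equator, so their weight is larger:
w_pole = geodesic_weight(pts[0, 0], pts[0, 1])
w_equator = geodesic_weight(pts[1, 0], pts[1, 1])
```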

    Point and line feature-based observer design on SL(3) for Homography estimation and its application to image stabilization

    This paper presents a new algorithm for online estimation of a sequence of homographies, applicable to image sequences obtained from robotic vehicles equipped with a monocular camera. The approach exploits the underlying Special Linear group SL(3) structure of the set of homographies, along with gyrometer measurements and direct point- and line-feature correspondences between images, to develop a temporal filter for the homography estimate. Theoretical analysis and experimental results are provided to demonstrate the robustness of the proposed algorithm. The experimental results show excellent performance even in the case of very fast camera motion (relative to frame rate), and in the presence of severe occlusion, specular reflection, image blur, and light saturation.
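    The SL(3) structure mentioned above comes from the fact that a homography is only defined up to scale, so it can always be normalized to determinant one. A minimal sketch of that normalization, and of a first-order gyro-driven prediction step for the pure-rotation case (function names are hypothetical; this is not the paper's full observer):

```python
import numpy as np

def project_to_sl3(H):
    """Scale a nonsingular 3x3 matrix so that det(H) = 1 (the SL(3)
    representative of a homography, which is defined only up to scale)."""
    return H / np.cbrt(np.linalg.det(H))

def predict(H, omega, dt):
    """Propagate the homography with a gyro rate omega (rad/s): for a
    purely rotating camera, H_{k+1} ~ H_k * exp([omega]_x * dt)."""
    wx = np.array([[0.0, -omega[2], omega[1]],
                   [omega[2], 0.0, -omega[0]],
                   [-omega[1], omega[0], 0.0]])
    # first-order approximation of the matrix exponential, then
    # re-projection onto SL(3) to keep det = 1
    return project_to_sl3(H @ (np.eye(3) + wx * dt))
```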

    Navigational Drift Analysis for Visual Odometry

    Visual odometry estimates a robot's ego-motion from onboard cameras. Owing to the advantages of the camera as a sensor, visual odometry has been widely adopted in robotics and navigation. Because visual odometry is based on relative measurements, drift (error accumulation) from the concatenation of relative motions is an intrinsic problem in long-range navigation. General error analysis using the "mean" and "covariance" of the positional error along each axis cannot fully describe the behavior of drift, and no theoretical drift analysis has been available for performance evaluation and algorithm comparison. In this paper, the drift distribution is established as a function of the covariance matrix from a positional-error propagation model. To validate the drift model, an experiment with a specific setting is conducted.
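    The error-propagation mechanism behind such drift can be sketched for 2D pose concatenation: each relative pose (dx, dy, dtheta) is composed onto the global pose, and the positional covariance is propagated through the composition Jacobian, P_k = F P_{k-1} F^T + Q. This is the standard first-order model, shown here as an illustration rather than the paper's specific derivation:

```python
import numpy as np

def propagate_drift(poses_rel, Q):
    """Concatenate 2D relative poses (dx, dy, dtheta) and propagate the
    pose covariance via the composition Jacobian: P = F P F^T + Q.
    Heading error couples into lateral position, so lateral drift grows
    faster than the per-step noise alone would suggest."""
    x = y = th = 0.0
    P = np.zeros((3, 3))
    for dx, dy, dth in poses_rel:
        c, s = np.cos(th), np.sin(th)
        # Jacobian of the composed pose w.r.t. the previous global pose
        F = np.array([[1.0, 0.0, -s * dx - c * dy],
                      [0.0, 1.0,  c * dx - s * dy],
                      [0.0, 0.0, 1.0]])
        P = F @ P @ F.T + Q
        x += c * dx - s * dy
        y += s * dx + c * dy
        th += dth
    return (x, y, th), P

# straight-line motion: heading noise inflates lateral (y) drift
pose, P = propagate_drift([(1.0, 0.0, 0.0)] * 5,
                          np.diag([0.01, 0.01, 0.01]))
```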

    Appearance Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles

    In this paper, we describe a real-time algorithm for computing the ego-motion of a vehicle relative to the road. The algorithm uses, as its only input, images provided by a single omnidirectional camera mounted on the roof of the vehicle. The front end of the system consists of two different trackers. The first is a homography-based tracker that detects and matches robust scale-invariant features that most likely belong to the ground plane. The second uses an appearance-based approach and gives high-resolution estimates of the rotation of the vehicle. This planar pose estimation method has been successfully applied to videos from an automotive platform. We give an example of a camera trajectory estimated purely from omnidirectional images over a distance of 400 meters. For performance evaluation, the estimated path is superimposed onto a satellite image. Finally, we use image mosaicing to obtain a textured 2D reconstruction of the estimated path.
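    The appearance-based rotation estimate exploits a property of unwrapped omnidirectional images: a yaw rotation of the vehicle appears as a circular horizontal shift of the panorama, so the rotation can be recovered by finding the best circular column shift between consecutive frames. A minimal brute-force sketch (the actual system's matching criterion and resolution may differ):

```python
import numpy as np

def yaw_from_panoramas(prev, curr):
    """Estimate the yaw between two unwrapped panoramic frames as the
    circular column shift minimizing the sum of squared differences.
    One column corresponds to (2*pi / width) radians of rotation."""
    cols = prev.shape[1]
    errs = [np.sum((np.roll(prev, s, axis=1) - curr) ** 2)
            for s in range(cols)]
    shift = int(np.argmin(errs))
    return shift * 2 * np.pi / cols

# synthetic check: rotate a random 360-column panorama by 10 columns
rng = np.random.default_rng(0)
pano = rng.random((8, 360))
rotated = np.roll(pano, 10, axis=1)
yaw = yaw_from_panoramas(pano, rotated)   # 10 columns = 10 degrees
```

Shifts larger than half the width correspond to rotations in the opposite direction; a practical implementation would wrap the estimate into (-pi, pi].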
