30 research outputs found

    Sensor Fusion of Structure-from-Motion, Bathymetric 3D, and Beacon-Based Navigation Modalities

    Full text link
    This paper describes an approach for the fusion of 3D data obtained underwater from multiple sensing modalities. In particular, we examine the combination of image-based Structure-From-Motion (SFM) data with bathymetric data obtained using pencil-beam underwater sonar, in order to recover the shape of the seabed terrain. We also combine image-based egomotion estimation with acoustic-based and inertial navigation data on board the underwater vehicle. We examine multiple types of fusion. When fusion is performed at the data level, each modality is used to extract 3D information independently. The 3D representations are then aligned and compared. In this case, we use the bathymetric data as ground truth to measure the accuracy and drift of the SFM approach. Similarly, we use the navigation data as ground truth against which we measure the accuracy of the image-based ego-motion estimation. To our knowledge, this is the first quantitative evaluation of image-based SFM and egomotion accuracy in a large-scale outdoor environment. Fusion at the signal level uses the raw signals from multiple sensors to produce a single coherent 3D representation which takes optimal advantage of the sensors' complementary strengths. In this paper, we examine how low-resolution bathymetric data can be used to seed the higher-resolution SFM algorithm, improving convergence rates and reducing drift error. Similarly, acoustic-based and inertial navigation data improve the convergence and drift properties of egomotion estimation.
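    As a concrete illustration of the signal-level seeding idea above, the following is a minimal sketch, assuming a coarse bathymetric grid and SFM feature positions expressed in the same horizontal coordinate frame; the grid layout and function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: seed SFM point depths from a low-resolution bathymetric grid
# before refinement, so the optimization starts near the true seabed surface.
# Grid layout, coordinate frames, and names are assumptions for illustration.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def seed_depths_from_bathymetry(xy_points, grid_x, grid_y, grid_depth):
    """Interpolate a coarse bathymetric grid at the horizontal positions of
    tentative SFM points to obtain initial depth estimates."""
    interp = RegularGridInterpolator((grid_x, grid_y), grid_depth,
                                     bounds_error=False, fill_value=None)
    return interp(xy_points)  # one seeded depth per (x, y) point

# Example: a 10 m resolution bathymetric patch and two SFM feature locations.
gx = np.arange(0.0, 100.0, 10.0)
gy = np.arange(0.0, 100.0, 10.0)
seabed = np.random.uniform(20.0, 25.0, size=(gx.size, gy.size))  # placeholder depths
features_xy = np.array([[12.3, 40.1], [55.0, 61.7]])
print(seed_depths_from_bathymetry(features_xy, gx, gy, seabed))
```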

    Towards Visual Ego-motion Learning in Robots

    Full text link
    Many model-based Visual Odometry (VO) algorithms have been proposed in the past decade, often restricted to the type of camera optics, or the underlying motion manifold observed. We envision robots to be able to learn and perform these tasks, in a minimally supervised setting, as they gain more experience. To this end, we propose a fully trainable solution to visual ego-motion estimation for varied camera optics. We propose a visual ego-motion learning architecture that maps observed optical flow vectors to an ego-motion density estimate via a Mixture Density Network (MDN). By modeling the architecture as a Conditional Variational Autoencoder (C-VAE), our model is able to provide introspective reasoning and prediction for ego-motion induced scene-flow. Additionally, our proposed model is especially amenable to bootstrapped ego-motion learning in robots where the supervision in ego-motion estimation for a particular camera sensor can be obtained from standard navigation-based sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through experiments, we show the utility of our proposed approach in enabling the concept of self-supervised learning for visual ego-motion estimation in autonomous robots. Comment: Conference paper; Submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver CA; 8 pages, 8 figures, 2 tables
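    To make the flow-to-density mapping concrete, here is a minimal MDN sketch in PyTorch; it reflects the general idea of regressing a Gaussian mixture over an ego-motion vector from flow features, not the paper's exact C-VAE architecture, and all layer sizes and dimensions are assumptions.

```python
# Hedged sketch: optical-flow features -> parameters of a Gaussian mixture
# over a 6-DoF ego-motion vector, trained with the mixture negative
# log-likelihood. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class FlowToEgomotionMDN(nn.Module):
    def __init__(self, flow_dim=4, ego_dim=6, n_mix=5, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(flow_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_mix)                    # mixture weights (logits)
        self.mu = nn.Linear(hidden, n_mix * ego_dim)          # component means
        self.log_sigma = nn.Linear(hidden, n_mix * ego_dim)   # per-dim log std devs
        self.n_mix, self.ego_dim = n_mix, ego_dim

    def forward(self, flow):
        h = self.backbone(flow)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.n_mix, self.ego_dim)
        sigma = self.log_sigma(h).view(-1, self.n_mix, self.ego_dim).exp()
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of the target ego-motion under the mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)     # per-component
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Example: flow features supervised by GPS/INS + wheel-odometry ego-motion.
model = FlowToEgomotionMDN()
flow, ego = torch.randn(8, 4), torch.randn(8, 6)
print(mdn_nll(*model(flow), ego))
```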

    Inspection with Robotic Microscopic Imaging

    Get PDF
    Future Mars rover missions will require more advanced onboard autonomy for increased scientific productivity and reduced mission operations cost. One such form of autonomy can be achieved by targeting precise science measurements to be made in a single command uplink cycle. In this paper we present an overview of our solution to the subproblems of navigating a rover into place for microscopic imaging, mapping an instrument target point selected by an operator in far-away science camera images to close-up hazard camera images, verifying the safety of placing a contact instrument on a sample or finding nearby safe points, and analyzing the data that comes back from the rover. The system developed includes portions used in the Multiple Target Single Cycle Instrument Placement demonstration at NASA Ames in October 2004, and portions of the MI Toolkit delivered to the Athena Microscopic Imager Instrument Team for the MER mission still operating on Mars today. Some of the component technologies are also under consideration for MSL mission infusion.
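    The target-mapping subproblem above amounts to re-projecting an operator-selected pixel from one camera into another. Below is a minimal pinhole-model sketch of that step, assuming a known relative pose between the cameras and an estimated range to the target; the intrinsics, pose values, and function names are placeholders, not the system described in the paper.

```python
# Hedged sketch: map a science-camera pixel at an estimated range into
# hazard-camera pixel coordinates via a pinhole model and a known
# science-to-hazard rigid transform. All numeric values are placeholders.
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project a pixel to a unit ray in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def map_target(u, v, range_m, K_sci, K_haz, R_haz_sci, t_haz_sci):
    p_sci = pixel_to_ray(u, v, K_sci) * range_m   # 3D target point, science frame
    p_haz = R_haz_sci @ p_sci + t_haz_sci         # same point in the hazard frame
    uvw = K_haz @ p_haz
    return uvw[:2] / uvw[2]                       # hazard-camera pixel

K = np.array([[800.0, 0.0, 512.0], [0.0, 800.0, 384.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.3, 0.0])       # placeholder relative pose
print(map_target(600, 400, 5.0, K, K, R, t))
```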

    Egomotion Estimation Using Binocular Spatiotemporal Oriented Energy

    Full text link

    Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation

    Get PDF
    In egomotion image navigation, errors are common, especially when traversing areas with few landmarks. Since image navigation is often used as a passive navigation technique in Global Positioning System (GPS)-denied environments, egomotion accuracy is important for precise navigation in these challenging environments. One of the causes of egomotion errors is inaccurate landmark distance measurement, e.g., sensor noise. This research determines a landmark location egomotion error model that quantifies the effects of landmark locations on egomotion value uncertainty and errors. The error model accounts for increases in landmark uncertainty due to landmark distance and image centrality. A robot then uses the error model to actively orient itself so that landmarks fall in the image positions that give the least egomotion calculation uncertainty. Two action aiding solutions are proposed: (1) qualitative non-evaluative aiding action, and (2) quantitative evaluative aiding action with landmark tracking. Simulation results show that both action aiding techniques reduce the position uncertainty compared to no action aiding. Physical testing results substantiate the simulation results. Compared to no action aiding, non-evaluative action aiding reduced egomotion position errors by an average of 31.5%, while evaluative action aiding reduced egomotion position errors by an average of 72.5%. Physical testing also showed that evaluative action aiding enables egomotion to work reliably in areas with few features, achieving a 76% egomotion position error reduction compared to no aiding.
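    A minimal sketch of the flavor of this idea is shown below: per-landmark uncertainty grows with range and with offset from the image center, and the robot picks the heading whose visible landmarks minimize the summed predicted uncertainty. The specific weighting function is an assumption for illustration, not the thesis's actual error model.

```python
# Hedged sketch of evaluative action aiding: score candidate headings by the
# predicted egomotion uncertainty of the landmarks each would place in view,
# preferring close landmarks near the image center. Weights are illustrative.
import numpy as np

def landmark_uncertainty(distance_m, offset_from_center_px,
                         k_dist=0.05, k_offset=0.002):
    """Toy per-landmark uncertainty that grows with range and image offset."""
    return 1.0 + k_dist * distance_m + k_offset * abs(offset_from_center_px)

def best_heading(candidate_headings, landmarks_per_heading):
    """Pick the heading whose visible landmarks give the lowest summed uncertainty."""
    scores = [sum(landmark_uncertainty(d, off) for d, off in view)
              for view in landmarks_per_heading]
    return candidate_headings[int(np.argmin(scores))]

# Example: heading 0 deg sees a distant, off-center landmark;
# heading 30 deg sees a near, centered one and is therefore preferred.
headings = [0.0, 30.0]
views = [[(18.0, 250.0)], [(4.0, 20.0)]]
print(best_heading(headings, views))  # -> 30.0
```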

    Visual Odometry Estimation Using Selective Features

    Get PDF
    The rapid growth in computational power and technology has enabled the automotive industry to do extensive research into autonomous vehicles. So-called self-driving cars are seen everywhere, being developed by many companies such as Google, Mercedes-Benz, Delphi, Tesla, Uber, and many others. One of the challenging tasks for these vehicles is to track incremental motion at runtime and to analyze the surroundings for accurate localization. This crucial information is used by many internal systems like active suspension control, autonomous steering, lane change assist, and other such applications. All these systems rely on incremental motion to infer logical conclusions. Measurement of incremental change in pose or perspective, in other words changes in motion, measured using visual information only, is called Visual Odometry. This thesis proposes an approach to solve the Visual Odometry problem by using stereo-camera vision to incrementally estimate the pose of a vehicle by examining the changes that motion induces on the background in the frames captured from stereo cameras. The approach in this research uses a selective feature based motion tracking method to track the motion of the vehicle by analyzing the motion of its static surroundings and discarding the motion induced by the dynamic background (outliers). The proposed approach considers that the surroundings may contain moving objects such as a truck, a car, or a pedestrian, each with its own motion that may differ from that of the vehicle. Use of a stereo camera adds depth information, which is crucial for detecting and rejecting outliers. Refining the interest point locations using sinusoidal interpolation further increases the accuracy of the motion estimation results. The results show that by choosing features only on the static background and tracking these features accurately, robust semantic information can be obtained.
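    The pipeline described above (select features, track them, use stereo depth, and reject features on independently moving objects) can be outlined as follows; this is a generic, hedged sketch using OpenCV, assuming rectified stereo with known intrinsics and baseline, and is not the thesis's implementation (the sinusoidal sub-pixel refinement step is omitted).

```python
# Hedged stereo-VO sketch with RANSAC outlier rejection: features that move
# inconsistently with the dominant (static-background) motion are discarded
# as outliers by solvePnPRansac. Parameters are illustrative assumptions.
import numpy as np
import cv2

def stereo_vo_step(prev_left, prev_disp, cur_left, K, baseline_m):
    # 1. Select trackable corners in the previous left image.
    pts = cv2.goodFeaturesToTrack(prev_left, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
    # 2. Track them into the current left image.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_left, cur_left, pts, None)
    ok = status.ravel() == 1
    pts, cur_pts = pts[ok].reshape(-1, 2), cur_pts[ok].reshape(-1, 2)
    # 3. Recover previous-frame 3D points from disparity (depth = f * b / d).
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    disp = np.array([prev_disp[int(v), int(u)] for u, v in pts])
    valid = disp > 1.0
    pts, cur_pts, disp = pts[valid], cur_pts[valid], disp[valid]
    z = fx * baseline_m / disp
    obj = np.column_stack(((pts[:, 0] - cx) * z / fx,
                           (pts[:, 1] - cy) * z / fy, z))
    # 4. PnP with RANSAC: features on moving objects become outliers.
    _, rvec, tvec, inliers = cv2.solvePnPRansac(obj.astype(np.float32),
                                                cur_pts.astype(np.float32),
                                                K, None, reprojectionError=2.0)
    return rvec, tvec, inliers   # incremental camera pose and inlier set
```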

    Visual target tracking for rover-based planetary exploration

    Get PDF
    To command a rover to go to a location of scientific interest on a remote planet, the rover must be capable of reliably tracking the target designated by a scientist from about ten rover lengths away. The rover must maintain lock on the target while traversing rough terrain and avoiding obstacles without the need for communication with Earth. Among the challenges of tracking targets from a rover are the large changes in the appearance and shape of the selected target as the rover approaches it, the limited frame rate at which images can be acquired and processed, and the sudden changes in camera pointing as the rover goes over rocky terrain. We have investigated various techniques for combining 2D and 3D information in order to increase the reliability of visually tracking targets under Mars-like conditions. We present the approaches that we have examined on simulated data and tested onboard the Rocky 8 rover in the JPL Mars Yard and the K9 rover in the ARC Marscape. These techniques include results for 2D trackers, ICP, visual odometry, and 2D/3D trackers.
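    As an illustration of why combining 2D appearance with 3D range helps, the sketch below pairs a simple normalized cross-correlation 2D tracker with a stereo-range consistency check; it is a generic, hedged example, not one of the paper's trackers, and the threshold values are arbitrary.

```python
# Hedged 2D/3D tracking sketch: accept a template match only if the stereo
# range at the matched location stays consistent with the expected target
# range, which helps reject drift onto background clutter. Thresholds are
# illustrative assumptions.
import cv2

def track_target(frame, template, range_map, expected_range_m, tol_m=0.5):
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = template.shape[:2]
    u, v = top_left[0] + w // 2, top_left[1] + h // 2   # match center
    measured_range = float(range_map[v, u])
    if score < 0.6 or abs(measured_range - expected_range_m) > tol_m:
        return None   # reject: appearance or 3D geometry is inconsistent
    return (u, v), measured_range
```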

    Egomotion estimation using binocular spatiotemporal oriented energy

    Get PDF
    Camera egomotion estimation is concerned with the recovery of a camera's motion (e.g., instantaneous translation and rotation) as it moves through its environment. It has been demonstrated to be of both theoretical and practical interest. This thesis documents a novel algorithm for egomotion estimation based on binocularly matched spatiotemporal oriented energy distributions. Basing the estimation on oriented energy measurements makes it possible to recover egomotion without the need to establish temporal correspondences or convert disparity into 3D world coordinates. The resulting algorithm has been realized in software and evaluated quantitatively on a novel laboratory dataset with ground truth, as well as qualitatively on both indoor and outdoor real-world datasets. Performance is evaluated relative to comparable alternative algorithms and shown to exhibit the best overall performance.
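    For intuition, the measurement primitive underlying such an approach can be sketched as the summed squared responses of a quadrature pair of 3D (x, y, t) filters; the sketch below uses Gabor filters and arbitrary parameters, and covers only the oriented-energy measurement, not the binocular matching or the egomotion solver.

```python
# Hedged sketch: spatiotemporal oriented energy from a quadrature pair of 3D
# Gabor filters applied to a stack of frames. Filter parameters are arbitrary
# illustrative choices, not the thesis's filter bank.
import numpy as np
from scipy.ndimage import convolve

def gabor3d(direction, freq=0.25, sigma=2.0, size=9):
    """Quadrature pair (even, odd) of 3D Gabor filters along `direction` = (x, y, t)."""
    ax = np.arange(size) - size // 2
    x, y, t = np.meshgrid(ax, ax, ax, indexing="ij")
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    phase = 2.0 * np.pi * freq * (d[0] * x + d[1] * y + d[2] * t)
    envelope = np.exp(-(x**2 + y**2 + t**2) / (2.0 * sigma**2))
    return envelope * np.cos(phase), envelope * np.sin(phase)

def oriented_energy(video, direction):
    """video: (H, W, T) grayscale stack; returns per-voxel oriented energy."""
    even, odd = gabor3d(direction)
    return convolve(video, even) ** 2 + convolve(video, odd) ** 2

# Example: energy tuned to rightward image motion over a random test volume.
vol = np.random.rand(32, 32, 16)
print(oriented_energy(vol, direction=(1.0, 0.0, 1.0)).shape)
```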

    Estimation of the location of a vehicle using a computer vision system

    Get PDF
    The work presented here is part of a larger project known as the "Optimus" Project of the Sirius Research Group at the Universidad Tecnológica de Pereira. "Optimus" seeks to give an automobile full autonomy so that it can travel through rural or urban environments without human assistance. At present, the "Optimus" project does not have a valid localization mechanism that allows it to determine its position and orientation in a given reference frame with sufficient reliability, since it relies solely on the localization estimate provided by a GPS and an inertial measurement unit, which is not adequate for an autonomous vehicle.