
    Egomotion estimation using binocular spatiotemporal oriented energy

    Camera egomotion estimation is concerned with the recovery of a camera's motion (e.g., instantaneous translation and rotation) as it moves through its environment. It has been demonstrated to be of both theoretical and practical interest. This thesis documents a novel algorithm for egomotion estimation based on binocularly matched spatiotemporal oriented energy distributions. Basing the estimation on oriented energy measurements makes it possible to recover egomotion without the need to establish temporal correspondences or convert disparity into 3D world coordinates. The resulting algorithm has been realized in software and evaluated quantitatively on a novel laboratory dataset with ground truth, as well as qualitatively on both indoor and outdoor real-world datasets. Performance is evaluated relative to comparable alternative algorithms and shown to exhibit the best overall performance.
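The quantity at the core of the abstract above — the energy of image structure along a spatiotemporal orientation — can be illustrated with a minimal sketch. This is not the thesis's algorithm (which uses binocularly matched energy distributions from quadrature filter banks); it is only a toy measure, assuming a simple squared directional derivative over a (t, y, x) volume, with all names invented here.

```python
import numpy as np

def oriented_energy(volume, direction):
    """Squared directional-derivative energy of a space-time volume.

    volume    : 3-D array indexed (t, y, x)
    direction : 3-vector (dt, dy, dx), the spatiotemporal orientation
                whose energy is measured

    High values indicate strong image structure oriented along
    `direction`; for a translating pattern, the dominant orientation
    encodes the image velocity.
    """
    # Finite-difference gradients along t, y, x.
    gt, gy, gx = np.gradient(volume.astype(float))
    # Project the gradient onto the chosen (unit) orientation and square it.
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    deriv = d[0] * gt + d[1] * gy + d[2] * gx
    return deriv ** 2
```

For a volume that is constant over time, the energy along the pure-time orientation (1, 0, 0) is identically zero, reflecting the absence of temporal structure.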

    Vision systems for autonomous aircraft guidance


    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the solutions proposed in this thesis. To deliver the portability goal with a single off-the-shelf camera, we have taken two approaches: The first one, and the most extensively studied here, revolves around an unorthodox camera-mirrors configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second approach relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack owing to their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is possible for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy of the baseline method (i.e., using only grayscale or color information), with photometric error minimization at the heart of the “direct” tracking algorithm.
Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges that we attempted to solve have not been considered previously with the level of detail needed for successfully performing VO with a single camera as the ultimate goal, in both real-life and simulated scenes.
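The "direct" tracking idea above — estimating motion by minimizing a photometric error summed over all image channels, rather than by matching features — can be sketched in a deliberately simplified form. The sketch below assumes an integer pure-translation motion model with exhaustive search (the actual thesis method would use a richer warp and gradient-based optimization); all names are invented for illustration.

```python
import numpy as np

def photometric_error(ref, cur, dy, dx):
    """Sum of squared intensity differences over all channels after
    shifting `cur` by (dy, dx). Wrap-around borders keep the sketch
    simple; a real tracker would mask or interpolate at the boundary."""
    shifted = np.roll(cur, (dy, dx), axis=(0, 1))
    return float(np.sum((ref - shifted) ** 2))

def best_integer_shift(ref, cur, radius=3):
    """Exhaustively minimize the multichannel photometric error over
    integer translations in [-radius, radius]^2, returning (dy, dx)."""
    best = min((photometric_error(ref, cur, dy, dx), (dy, dx))
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1))
    return best[1]
```

Because the error is accumulated across every channel before comparison, extra channels (color, or higher-dimensional feature maps as in the second approach) simply add terms to the same objective, which is the sense in which the tracking is "multichannel".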

    Markerless Tracking Using Polar Correlation Of Camera Optical Flow

    We present a novel, real-time, markerless vision-based tracking system, employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and that it is able to track large-range motions even in outdoor settings. We also present how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms for recovering motion parameters at different levels of completeness of recovery. We show how optical flow in opposing cameras can be used to recover motion parameters of the multi-camera rig. Experimental results show gesture recognition accuracy of 88.0%, 90.7% and 86.7% for our three techniques, respectively, across a set of 15 gestures.
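The cancellation idea above can be sketched under an idealized model: if the mean flow vectors of two opposing cameras are expressed in a common rig frame, rig rotation contributes the same flow to both views while rig translation contributes flows of opposite sign. Under that assumption (which is mine, not a statement of the paper's exact derivation), the half-sum and half-difference separate the two components; all names below are invented.

```python
import numpy as np

def split_flow(flow_front, flow_back):
    """Split mean optical-flow vectors from two opposing cameras,
    expressed in a common rig frame, into the component shared by
    both views (attributed to rotation) and the component appearing
    with opposite sign in the two views (attributed to translation)."""
    common = 0.5 * (flow_front + flow_back)    # opposite parts cancel
    opposing = 0.5 * (flow_front - flow_back)  # shared parts cancel
    return common, opposing
```

With two such opposing pairs mounted orthogonally, each pair constrains a different subset of the rig's motion parameters, which is consistent with the five degrees of freedom recovered by the system described above.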

    Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments

    In this work, a real-time-capable multi-fisheye-camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects, ranging from omnidirectional camera calibration to the development of a complete multi-fisheye-camera SLAM system based on a generic multi-camera bundle adjustment method.

    Application of computer vision for roller operation management

    Compaction is the last and possibly the most important phase in the construction of asphalt concrete (AC) pavements. Compaction densifies the loose AC mat, producing a stable surface with low permeability, and the process strongly affects the AC performance properties. Too much compaction may cause aggregate degradation and low air-void content, facilitating bleeding and rutting. Too little compaction, on the other hand, may result in high air-void content, facilitating oxidation and water-permeability issues, rutting due to further densification by traffic, and reduced fatigue life. Compaction is therefore a critical issue in AC pavement construction.

    The common practice for compacting a mat is to establish a roller pattern that determines the number of passes and coverages needed to achieve the desired density. Once the pattern is established, the roller's operator must maintain the roller pattern uniformly over the entire mat. Despite the importance of uniform compaction to achieving the expected durability and performance of AC pavements, having the roller operator as the only means of managing the operation can involve human error.

    With the advancement of technology in recent years, the concept of intelligent compaction (IC) was developed to assist roller operators and improve construction quality. Commercial IC packages for construction rollers are available from different manufacturers. They can provide precise mapping of a roller's location and provide the roller operator with feedback during the compaction process. Although the IC packages are able to track roller passes with impressive results, there are also major hindrances: the high cost of acquisition and a potential negative impact on productivity have inhibited the implementation of IC.

    This study applied computer vision technology to build a versatile and affordable system that counts and maps roller passes. An infrared camera is mounted on top of the roller to capture the operator's view. Then, in a near-real-time process, image features are extracted and tracked to estimate the incremental rotation and translation of the roller. Image features are categorized into near and distant features based on a user-defined horizon. Optical flow is estimated for near features located in the region below the horizon, while the change in the roller's heading is continually estimated from the distant features located in the sky region. Using the roller's rotation angle, the incremental translation between two frames is calculated from the optical flow, and the roller's incremental rotation and translation are put together to develop a tracking map.

    During system development, it was noted that in environments with thermal uniformity, the background of the IR images exhibits fewer features than images captured with optical cameras, which are insensitive to temperature. This issue is more significant overnight, since natural elements cannot reflect heat energy from the sun. Therefore, to improve the roller's heading estimation when few features are available in the sky region, a unique methodology that detects heading from the edges of the asphalt mat was developed for this research. Heading measurements based on the slope of the asphalt's hot edges are added to the pool of headings measured from the sky region, and the median of all heading measurements is used as the roller's incremental rotation for the tracking analysis.

    The record of tracking data is used for QC/QA purposes and for verifying proper implementation of the roller pattern throughout a job constructed under roller-pass specifications. The system developed during this research was successful in mapping roller location for the few projects tested; however, the system should be independently validated.
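The fusion and dead-reckoning steps described above — pooling heading measurements from sky features and mat edges, taking their median as the incremental rotation, and integrating rotation and translation into a tracking map — can be sketched as follows. This is a minimal illustration, not the study's implementation; the function names and the planar pose representation are assumptions.

```python
import math

def fuse_headings(sky_headings, edge_headings):
    """Median of all incremental heading measurements (radians),
    pooled from distant sky features and asphalt-mat hot edges.
    The median rejects outlier measurements from either source."""
    pool = sorted(sky_headings + edge_headings)
    n = len(pool)
    mid = n // 2
    return pool[mid] if n % 2 else 0.5 * (pool[mid - 1] + pool[mid])

def integrate_pose(pose, d_heading, d_forward):
    """Dead-reckon the roller's planar pose (x, y, heading) from the
    incremental rotation and forward translation between two frames."""
    x, y, th = pose
    th += d_heading
    return (x + d_forward * math.cos(th),
            y + d_forward * math.sin(th),
            th)
```

Accumulating `integrate_pose` over successive frame pairs yields the track from which pass counts over each mat location can be mapped for QC/QA.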