    Near real-time stereo vision system

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system comprises two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
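    As a rough illustration of the pipeline described above (Laplacian pyramid construction followed by window-based least-squares correlation on a coarse pyramid level), the following Python/OpenCV sketch may help; it is not the patented implementation, and the file names, pyramid level, window size, and disparity range are placeholder assumptions.

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        """Bandpass (Laplacian) pyramid: each level is a Gaussian level minus
        the upsampled next-coarser level."""
        gauss = [img.astype(np.float32)]
        for _ in range(levels - 1):
            gauss.append(cv2.pyrDown(gauss[-1]))
        lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
               for i in range(levels - 1)]
        lap.append(gauss[-1])
        return lap

    def correlation_disparity(left, right, max_disp=16, win=5):
        """Brute-force SSD (least-squares) correlation search along scanlines."""
        h, w = left.shape
        pad = win // 2
        disp = np.zeros((h, w), np.float32)
        for y in range(pad, h - pad):
            for x in range(pad + max_disp, w - pad):
                patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
                costs = [np.sum((patch - right[y - pad:y + pad + 1,
                                               x - d - pad:x - d + pad + 1]) ** 2)
                         for d in range(max_disp)]
                disp[y, x] = float(np.argmin(costs))
        return disp

    # Match a coarse pyramid level, in the spirit of the ~60 x 64 level mentioned above.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    lap_l, lap_r = laplacian_pyramid(left), laplacian_pyramid(right)
    coarse_disparity = correlation_disparity(lap_l[2], lap_r[2])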

    Autonomous Vehicle Control

    A practical knowledge base in the emerging field of robotics was developed and used to create a framework for further experiments. The framework was designed so that modular parts could be replaced, allowing for future development without reinventing the wheel. To prove the framework, a semi-autonomous robot was implemented, including stereo vision sensors, an inertial navigation system, and a simultaneous localization and mapping algorithm.
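    A minimal sketch of the "replaceable modules" idea, assuming nothing about the actual framework beyond what the abstract states: each subsystem sits behind a small interface so an implementation can be swapped without touching the rest. All class and method names here are hypothetical.

    from abc import ABC, abstractmethod

    class RangeSensor(ABC):
        @abstractmethod
        def depth_map(self): ...            # e.g. a stereo camera pair

    class Localizer(ABC):
        @abstractmethod
        def update(self, depth, imu): ...   # e.g. a SLAM back end fused with INS data

    class Robot:
        def __init__(self, sensor: RangeSensor, localizer: Localizer):
            # Swapping in a different sensor or SLAM module requires no other changes.
            self.sensor, self.localizer = sensor, localizer

        def step(self, imu_reading):
            return self.localizer.update(self.sensor.depth_map(), imu_reading)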

    Projected texture stereo

    Passive stereo vision is widely used as a range sensing technology in robots, but suffers from dropouts: areas of low texture where stereo matching fails. By supplementing a stereo system with a strong texture projector, dropouts can be eliminated or reduced. This paper develops a practical stereo projector system, first by finding good patterns to project in the ideal case, then by analyzing the effects of system blur and phase noise on these patterns, and finally by designing a compact projector that is capable of good performance out to 3 m in indoor scenes. The system has been implemented and has excellent depth precision and resolution, especially in the range out to 1.5 m.
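    A toy version of the idea, with made-up pattern parameters rather than the paper's optimized design: project a random-dot texture so that low-texture surfaces have something for an ordinary block matcher to lock onto.

    import cv2
    import numpy as np

    def random_dot_pattern(h=768, w=1024, dot_prob=0.1, dot_size=2, seed=0):
        """Binary random-dot image to feed the texture projector."""
        rng = np.random.default_rng(seed)
        dots = rng.random((h // dot_size, w // dot_size)) < dot_prob
        return cv2.resize(dots.astype(np.uint8) * 255, (w, h),
                          interpolation=cv2.INTER_NEAREST)

    cv2.imwrite("projector_pattern.png", random_dot_pattern())

    # With the pattern projected, a standard matcher recovers depth in regions
    # that would otherwise be dropouts.
    left = cv2.imread("left_textured.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_textured.png", cv2.IMREAD_GRAYSCALE)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0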

    System Integration and Intelligence Improvements for WPI’s UGV - Prometheus

    This project focuses on realizing a series of operational improvements for WPI's unmanned ground vehicle Prometheus, with the end goal of a prize-winning entry to the Intelligent Ground Vehicle Challenge. Operational improvements include a practical implementation of stereo vision on an NVIDIA GPU, a more reliable implementation of line detection, a better approach to mapping and path planning, and a modified system architecture realized by an easier-to-work-with GPIO implementation. The end result of these improvements is better autonomy, accessibility, robustness, reliability, and usability for Prometheus.
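    As a sketch of what "stereo vision on an NVIDIA GPU" can look like in practice (not Prometheus's actual code), OpenCV's CUDA module provides a GPU block matcher; this assumes an OpenCV build compiled with CUDA support, and the image files are placeholders.

    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Upload the frames to device memory.
    gpu_left, gpu_right = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
    gpu_left.upload(left)
    gpu_right.upload(right)

    # createStereoBM(numDisparities, blockSize); the matching runs on the GPU.
    matcher = cv2.cuda.createStereoBM(64, 19)
    gpu_disparity = matcher.compute(gpu_left, gpu_right)
    disparity = gpu_disparity.download()   # back to host memory for mapping/path planning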

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Influence of Stereoscopic Camera System Alignment Error on the Accuracy of 3D Reconstruction

    The article deals with the influence of inaccurate camera rotation in camera system alignment on 3D reconstruction accuracy. The accuracy of all three spatial coordinates is analyzed for two alignments (setups) of 3D cameras. In the first setup, a system with parallel optical axes of the cameras is analyzed; for this stereoscopic setup, deterministic relations are derived from trigonometry and basic stereoscopic formulas. The second alignment is a generalized setup with cameras in arbitrary positions. The analysis of the general setup is closely related to the influence of errors in the point correspondences, so the relation between correspondence errors and the reconstructed spatial position of a point was investigated. Because this problem is complex, a worst-case analysis was carried out using the Monte Carlo method to estimate the critical situation and the possible extent of these errors. The analysis of the generalized system and the relations derived for the parallel (normal) system represent a significant improvement in the analysis of spatial coordinate accuracy. A practical experiment confirmed the proposed relations.
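    For the parallel-axis setup the basic stereoscopic formula reduces to Z = f*b/d, so the effect of a small alignment error can be bounded with a quick Monte Carlo experiment in the spirit of the worst-case analysis above. The parameters below are assumed for illustration and are not the article's setup; the disparity bias f*tan(yaw) is a first-order approximation for points near the optical axis.

    import numpy as np

    f_px, baseline, Z_true = 800.0, 0.12, 3.0        # focal length [px], baseline [m], depth [m]
    d_true = f_px * baseline / Z_true                # ideal disparity [px]

    rng = np.random.default_rng(1)
    yaw_err = np.deg2rad(rng.uniform(-0.1, 0.1, 100_000))   # +/- 0.1 deg alignment error

    # A small yaw of one camera shifts its image by roughly f*tan(yaw) pixels,
    # which shows up as a disparity bias.
    d_meas = d_true + f_px * np.tan(yaw_err)
    Z_meas = f_px * baseline / d_meas

    print(f"worst-case depth error at {Z_true} m: {np.max(np.abs(Z_meas - Z_true)):.3f} m")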

    Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimate of the aerial vehicles flying in formation.
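    One concrete way such relative-distance measurements can enter a joint filter is sketched below as an illustrative EKF update, not the authors' formulation; the state layout and noise values are assumptions. The range between two UAV positions couples their estimates, which is the kind of extra constraint that improves observability.

    import numpy as np

    def range_update(x, P, z, R, i=0, j=3):
        """EKF update with measurement z = ||p_i - p_j||, p_i = x[i:i+3], p_j = x[j:j+3]."""
        diff = x[i:i+3] - x[j:j+3]
        h = np.linalg.norm(diff)
        H = np.zeros((1, x.size))
        H[0, i:i+3] = diff / h            # d||p_i - p_j|| / dp_i
        H[0, j:j+3] = -diff / h           # d||p_i - p_j|| / dp_j
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + (K * (z - h)).ravel()
        P_new = (np.eye(x.size) - K @ H) @ P
        return x_new, P_new

    # Two UAV positions stacked in one state; the inter-vehicle range ties their
    # estimates together.
    x0 = np.array([0.0, 0.0, 10.0, 5.0, 0.0, 10.0])
    P0 = np.eye(6)
    x1, P1 = range_update(x0, P0, z=5.2, R=np.array([[0.05]]))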