    Feature detection in an indoor environment using Hardware Accelerators for time-efficient Monocular SLAM

    In the field of Robotics, Monocular Simultaneous Localization and Mapping (Monocular SLAM) has gained immense popularity, as it replaces large and costly sensors such as laser range finders with a single cheap camera. Additionally, the well-developed area of Computer Vision provides robust image processing algorithms which aid in developing feature detection techniques for the implementation of Monocular SLAM. Similarly, in the field of digital electronics and embedded systems, hardware acceleration using FPGAs has become quite popular. Hardware acceleration is based on the idea of offloading certain iterative algorithms from the processor and implementing them on a dedicated piece of hardware such as an ASIC or FPGA, to speed up performance in terms of timing and to possibly reduce the net power consumption of the system. Good strides have been taken in developing massively pipelined and resource-efficient hardware implementations of several image processing algorithms on FPGAs, which achieve considerable speed-ups in processing time. In this thesis, we have developed a simple algorithm for feature detection in an indoor environment by means of a single camera, based on the Canny Edge Detection and Hough Transform algorithms using the OpenCV library, and proposed its integration with an existing feature initialization technique for a complete Monocular SLAM implementation. Following this, we have developed hardware accelerators for Canny Edge Detection and Hough Transform, and we have compared the timing performance of an implementation in hardware (using FPGAs) with an implementation in software (using C++ and OpenCV).
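    The voting step at the heart of the Hough Transform used in this thesis can be sketched in a few lines. This is an illustrative pure-Python version (the thesis itself uses OpenCV and an FPGA implementation); the function names and the dictionary-based accumulator are choices made here for clarity, assuming a binary edge map such as the output of Canny Edge Detection.

```python
import math

def hough_lines(edge_points, width, height, theta_steps=180):
    """Accumulate votes in (rho, theta) space for a set of edge pixels.

    edge_points: iterable of (x, y) pixel coordinates from an edge
    detector such as Canny. Returns a dict mapping
    (rho, theta_index) -> vote count.
    """
    acc = {}
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = t * math.pi / theta_steps
            # normal-form line equation: rho = x cos(theta) + y sin(theta)
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            key = (rho, t)
            acc[key] = acc.get(key, 0) + 1
    return acc

def strongest_line(acc):
    """Return the (rho, theta_index) cell with the most votes."""
    return max(acc, key=acc.get)
```

Because every edge pixel votes independently across all theta bins, the inner loop is exactly the kind of iterative, data-parallel workload the thesis offloads to an FPGA.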

    Frequency Modulated Continuous Wave Radar and Video Fusion for Simultaneous Localization and Mapping

    There has recently been a push to develop technology that enables the use of UAVs in GPS-denied environments. As UAVs become smaller, there is a need to reduce the number and sizes of sensor systems on board. A video camera on a UAV can serve multiple purposes. It can return imagery for processing by human users, and the highly accurate bearing information provided by video makes it a useful tool to incorporate into a navigation and tracking system. Radars can provide information about the types of objects in a scene and can operate in adverse weather conditions. The range and velocity measurements provided by the radar make it a good tool for navigation. FMCW radar and color video were fused to perform SLAM in an outdoor environment. A radar SLAM solution provided the basis for the fusion. Correlations between radar returns were used to estimate dead-reckoning parameters to obtain an estimate of the platform location. A new constraint was added in the radar detection process to prevent detecting poorly observable reflectors while maintaining a large number of measurements on highly observable reflectors. The radar measurements were mapped as landmarks, further improving the platform location estimates. As images were received from the video camera, changes in platform orientation were estimated, further improving the platform orientation estimates. The expected locations of radar measurements, whose uncertainty was modeled as Gaussian, were projected onto the images and used to estimate the location of the radar reflector in the image. The colors of the most likely reflector were saved and used to detect the reflector in subsequent images. The azimuth angles obtained from the image detections were used to improve the estimates of the landmarks in the SLAM map over previous estimates where only the radar was used.
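    The step of projecting a Gaussian-uncertainty radar measurement onto the image and matching it against candidate pixels is commonly done with a chi-square gate on the Mahalanobis distance. The sketch below is a minimal pure-Python illustration of that idea, not the paper's implementation; the function names, the 2x2 covariance layout, and the 99% gate value are assumptions made here.

```python
def mahalanobis2(px, mean, cov):
    """Squared Mahalanobis distance of pixel px from the projected mean
    under a 2x2 pixel covariance cov = [[sxx, sxy], [sxy, syy]]."""
    dx = px[0] - mean[0]
    dy = px[1] - mean[1]
    sxx, sxy = cov[0]
    _, syy = cov[1]
    det = sxx * syy - sxy * sxy
    # closed-form quadratic using the 2x2 matrix inverse
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det

def gate_detections(candidates, mean, cov, gate=9.21):
    """Keep candidate detections inside the chi-square gate
    (9.21 is roughly the 99% quantile for 2 degrees of freedom)."""
    return [p for p in candidates if mahalanobis2(p, mean, cov) <= gate]
```

Candidates that survive the gate can then be ranked by distance, with the most likely reflector's colors saved for detection in subsequent frames, as the abstract describes.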

    Mobile robot localization using a Kalman filter and relative bearing measurements to known landmarks

    This paper discusses mobile robot localization using a single, fixed camera that is capable of detecting predefined landmarks in the environment. For each visible landmark, the camera provides a relative bearing but not a relative range. This research represents work toward an inexpensive sensor that could be added to a mobile robot in order to provide more accurate estimates of the robot's location. It uses the Kalman filter as a framework, which is a proven method for incorporating sensor data into navigation problems. In the simulations presented later, it is assumed that the filter can perform accurate feature recognition. In the experimental setup, however, a webcam and an open source library are used to recognize and track the bearing to a set of unique markers. Although this research requires that the landmark locations be known, in contrast to research in simultaneous localization and mapping, the results are still useful in an industrial setting where placing known landmarks would be acceptable.
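    A single bearing-only measurement to a known landmark is a scalar observation, so the extended-Kalman-filter update admits a compact form. The sketch below is a generic EKF bearing update for a planar [x, y, theta] state, written here as an illustration of the framework the paper uses; the function names and state layout are assumptions, not the paper's code.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def bearing_update(state, P, z, landmark, r_var):
    """Single EKF update from one relative-bearing measurement.

    state: [x, y, theta]; P: 3x3 covariance (list of lists);
    z: measured bearing to the landmark (radians, robot frame);
    landmark: known (lx, ly); r_var: bearing measurement variance.
    """
    x, y, th = state
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    q = dx * dx + dy * dy
    z_hat = wrap(math.atan2(dy, dx) - th)
    # Jacobian of the bearing with respect to [x, y, theta]
    H = [dy / q, -dx / q, -1.0]
    # scalar innovation covariance S = H P H^T + R
    PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
    S = sum(H[i] * PHt[i] for i in range(3)) + r_var
    K = [PHt[i] / S for i in range(3)]       # Kalman gain
    nu = wrap(z - z_hat)                     # innovation
    new_state = [state[i] + K[i] * nu for i in range(3)]
    new_P = [[P[i][j] - K[i] * PHt[j] for j in range(3)] for i in range(3)]
    return new_state, new_P
```

Because the measurement is scalar, the innovation covariance is a single number and no matrix inversion is needed, which keeps the update cheap enough for an inexpensive onboard sensor.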

    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has resulted in a push towards autonomous solutions to the most dangerous operations, including surveying tasks. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem which are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges associated with underground SLAM. The robustness, self-containment and portability of the system maximize the potential applications. The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. By combining these two SLAM techniques it is possible to achieve some of the advantages of both approaches – the real-time attributes of vision-based SLAM and the dense, high precision maps obtained through 3D lasers. The result is a viable autonomous mapping solution suitable for application in challenging underground mining environments. A further improvement to the robustness of the proposed multisensor SLAM system is a consequence of incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination which can cause inconsistent feature motion during localization. Colour information is utilized to identify and remove features resulting from illumination artefacts and to improve the monochrome based feature matching between frames. Finally, the proposed multisensor mapping system is implemented and evaluated in both above ground and underground scenarios. The resulting large scale maps contained a maximum offset error of ±30 mm for mapping tasks with lengths over 100 m.
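    One way to use colour to reject features caused by illumination artefacts is to observe that direct light sources tend to appear very bright and nearly colourless. The heuristic below is a speculative pure-Python illustration of that idea, not the thesis's method; the HSV thresholds and function names are assumptions chosen here for the sketch.

```python
import colorsys

def is_illumination_artefact(rgb, v_thresh=0.95, s_thresh=0.15):
    """Heuristic: a feature sitting on a direct light source tends to be
    very bright (high value) and nearly colourless (low saturation).
    rgb: (r, g, b) channels in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return v >= v_thresh and s <= s_thresh

def filter_features(features):
    """Drop tracked features whose mean patch colour looks like a lamp.

    features: list of (feature_id, (r, g, b)) with channels in [0, 1].
    """
    return [(fid, rgb) for fid, rgb in features
            if not is_illumination_artefact(rgb)]
```

Removing such features before matching keeps moving headlamp reflections from being treated as static landmarks and corrupting the motion estimate.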

    Long Distance GNSS-Denied Visual Inertial Navigation for Autonomous Fixed Wing Unmanned Air Vehicles: SO(3) Manifold Filter based on Virtual Vision Sensor

    This article proposes a visual inertial navigation algorithm intended to diminish the horizontal position drift experienced by autonomous fixed wing UAVs (Unmanned Air Vehicles) in the absence of GNSS (Global Navigation Satellite System) signals. In addition to accelerometers, gyroscopes, and magnetometers, the proposed navigation filter relies on the accurate incremental displacement outputs generated by a VO (Visual Odometry) system, denoted here as a Virtual Vision Sensor or VVS, which relies on images of the Earth's surface taken by an onboard camera and is itself assisted by the filter inertial estimations. Although not a full replacement for a GNSS receiver, since its position observations are relative instead of absolute, the proposed system enables major reductions in the GNSS-Denied attitude and position estimation errors. In order to minimize the accumulation of errors in the absence of absolute observations, the filter is implemented in the manifold of rigid body rotations, SO(3). Stochastic high fidelity simulations of two representative scenarios involving the loss of GNSS signals are employed to evaluate the results. The authors release the C++ implementation of both the visual inertial navigation filter and the high fidelity simulation as open-source software.
    Comment: 27 pages, 14 figures. arXiv admin note: substantial text overlap with arXiv:2205.1324
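    Working on the SO(3) manifold means attitude is propagated by composing rotations through the exponential map rather than by adding angle parameters. The sketch below is a minimal pure-Python version of that propagation step via Rodrigues' formula; it illustrates the manifold idea only and is not the article's C++ filter, and the function names are choices made here.

```python
import math

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def so3_exp(w):
    """Exponential map from a rotation vector w (axis * angle, radians)
    to a 3x3 rotation matrix via Rodrigues' formula."""
    wx, wy, wz = w
    theta = math.sqrt(wx * wx + wy * wy + wz * wz)
    K = [[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]]  # skew(w)
    if theta < 1e-9:
        a, b = 1.0, 0.5  # small-angle limits of the coefficients
    else:
        a = math.sin(theta) / theta
        b = (1.0 - math.cos(theta)) / (theta * theta)
    I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    K2 = matmul(K, K)
    # R = I + a*K + b*K^2
    return [[I[i][j] + a * K[i][j] + b * K2[i][j] for j in range(3)]
            for i in range(3)]

def propagate(R, omega, dt):
    """One attitude integration step on the manifold: R <- R * exp(omega*dt),
    with omega the body angular rate from the gyroscopes."""
    return matmul(R, so3_exp([w * dt for w in omega]))
```

Because the update is a product of rotation matrices, the estimate stays on SO(3) by construction, avoiding the normalization drift and singularities of angle-parameter representations, which is the motivation the abstract gives for the manifold formulation.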