13 research outputs found

    Vision-Based Localization Algorithm Based on Landmark Matching, Triangulation, Reconstruction, and Comparison

    No full text
    Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Moreover, the available high-dimensional image data is not fully exploited when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is largely immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating a list of candidate position estimates through triangulation. Reconstruction and comparison then rank the candidates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data shows a marked improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
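    The triangulation stage described above, intersecting bearing rays back from matched landmarks, can be sketched in a minimal 2-D form. This is an illustration only, not the paper's implementation; the landmark coordinates and bearings below are hypothetical:

```python
import math

def triangulate(lm_a, lm_b, bearing_a, bearing_b):
    """Estimate a 2-D position from absolute (world-frame) bearings to two
    landmarks at known positions: the robot lies on the ray that runs from
    each landmark back along its bearing, so we intersect the two rays."""
    ca, sa = math.cos(bearing_a), math.sin(bearing_a)
    cb, sb = math.cos(bearing_b), math.sin(bearing_b)
    ax, ay = lm_a
    bx, by = lm_b
    # Solve A - t*(ca, sa) = B - s*(cb, sb) for t via Cramer's rule.
    det = cb * sa - ca * sb
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel: landmark pair is ambiguous")
    t = (cb * (ay - by) - sb * (ax - bx)) / det
    return ax - t * ca, ay - t * sa

# A robot at (1, 2) sees landmark A due east (bearing 0) and B due north.
x, y = triangulate((5.0, 2.0), (1.0, 6.0), 0.0, math.pi / 2)
# -> approximately (1.0, 2.0)
```

    With more than two landmarks, each pair yields one such candidate estimate, which is the list the reconstruction-and-comparison stage then ranks.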

    Image-guided Landmark-based Localization and Mapping with LiDAR

    Get PDF
    Mobile robots must be able to determine their position to operate effectively in diverse environments. The presented work proposes a system that integrates LiDAR and camera sensors and uses the YOLO object detection model to identify objects in the robot's surroundings. The system, developed in ROS, groups detected objects into triangles and uses them as landmarks to determine the robot's position. A triangulation algorithm yields the robot's position by generating a set of nonlinear equations that are solved with the Levenberg-Marquardt algorithm. The work comprehensively discusses the study, design, and implementation of the proposed system. The investigation begins with an overview of current SLAM techniques. Next, the system design considers the requirements for localization and mapping tasks, together with an analysis comparing the proposed approach to contemporary SLAM methods. Finally, we evaluate the system's effectiveness and accuracy through experiments in the Gazebo simulation environment, which allows control over the various disturbances a real scenario can introduce.
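    The Levenberg-Marquardt step named above can be sketched in a hand-rolled form. The abstract does not specify the exact equations, so this illustration assumes range residuals to three hypothetical landmarks (the map, the true position, and the initial guess are all made up for the example):

```python
import numpy as np

# Hypothetical landmark map and the ranges a robot at true_pos would measure.
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 1.0])
measured = np.linalg.norm(landmarks - true_pos, axis=1)

def residuals(p):
    # One nonlinear equation per landmark: predicted range minus measured.
    return np.linalg.norm(landmarks - p, axis=1) - measured

def jacobian(p):
    # d(range)/dp is the unit vector from each landmark toward p.
    diff = p - landmarks
    return diff / np.linalg.norm(diff, axis=1, keepdims=True)

# Levenberg-Marquardt: damped Gauss-Newton with an adaptive damping factor.
p, lam = np.array([3.0, 2.5]), 1e-3
for _ in range(100):
    r, J = residuals(p), jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.linalg.norm(residuals(p + step)) < np.linalg.norm(r):
        p, lam = p + step, lam * 0.5   # accept: behave more like Gauss-Newton
    else:
        lam *= 2.0                     # reject: behave more like gradient descent
```

    The damping factor interpolates between fast Gauss-Newton steps near the solution and cautious gradient-descent steps far from it, which is why the method suits triangulation equations with rough initial guesses.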

    Monocular Vision SLAM for Indoor Aerial Vehicles

    Get PDF
    This paper presents a novel indoor navigation and ranging strategy using a monocular camera. The proposed algorithms are integrated with simultaneous localization and mapping (SLAM), with a focus on indoor aerial vehicle applications. We experimentally validate the proposed algorithms using a fully self-contained micro aerial vehicle (MAV) with on-board image processing and SLAM capabilities. The range measurement strategy is inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals. The navigation strategy assumes an unknown, GPS-denied environment that is representable via corner-like feature points and straight architectural lines. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners.

    Vision Based Position Control For Vertical Take-Off And Landing (VTOL) Using One Singular Landmark

    Get PDF
    This project presents a vision-based position control for Vertical Take-Off and Landing (VTOL) aircraft that recognises a single landmark for landing and take-off. Position control provides safe flight and accurate navigation. The landmark is an artificial circle placed at a known location in the environment. A camera mounted on the VTOL, facing downward, detects the landmark, and the VTOL then controls its position to reach it. Images from the down-looking camera provide the vision data used to estimate the position of the VTOL relative to the landmark. A mathematical method based on projective geometry locates the VTOL over the desired landmark from the projected point in the captured image. By computing the x-y coordinates of the VTOL with respect to the landmark, the height of the camera above the landmark is also obtained, so the VTOL can localize itself in a known environment from the landmark pose estimate. A graphical user interface (GUI) generated in MATLAB is used to communicate with the VTOL and control its position.
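    The projective-geometry step can be illustrated with a pinhole-camera sketch: the circle's apparent radius shrinks linearly with distance, so the image radius gives the height, and the offset of the circle's centre from the principal point maps back to metres at that height. The calibration values below are hypothetical, not the project's actual parameters:

```python
def pose_from_circle(f_px, real_radius_m, image_radius_px, du_px, dv_px):
    """Recover (x, y, height) of a down-looking camera over a circle landmark
    of known radius, using pinhole similar triangles.
    f_px:        focal length in pixels
    du_px/dv_px: offset of the circle centre from the principal point."""
    # Similar triangles: image_radius / f = real_radius / height.
    height = f_px * real_radius_m / image_radius_px
    # A pixel offset at the principal plane corresponds to height/f metres.
    x = du_px * height / f_px
    y = dv_px * height / f_px
    return x, y, height

# Example: 500 px focal length, 0.2 m circle seen with a 100 px radius,
# centred 50 px right and 25 px above the principal point.
x, y, h = pose_from_circle(500.0, 0.2, 100.0, 50.0, -25.0)
# -> approximately (0.1, -0.05, 1.0)
```

    A real system would first undistort the image and fit an ellipse rather than a circle when the camera is not perfectly nadir-pointing, but the similar-triangles relation above is the core of the height estimate.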

    Mono-vision corner SLAM for indoor navigation

    Get PDF
    We present a real-time monocular-vision-based range measurement method for simultaneous localization and mapping (SLAM) for an autonomous micro aerial vehicle (MAV) with a significantly constrained payload. Our navigation strategy assumes a GPS-denied man-made environment whose indoor architecture is represented via corner-based feature points obtained through a monocular camera. We experiment on a case-study mission of vision-based path-finding through a conventional maze of corridors in a large building.

    Biologically Inspired Monocular Vision Based Navigation and Mapping in GPS-Denied Environments

    Get PDF
    This paper presents an in-depth theoretical study of a bio-vision-inspired feature extraction and depth perception method integrated with vision-based simultaneous localization and mapping (SLAM). We incorporate key functions of the developed visual cortex of several advanced species, including humans, for depth perception and pattern recognition. Our navigation strategy assumes a GPS-denied man-made environment consisting of orthogonal walls, corridors, and doors. By exploiting the architectural features of indoor spaces, we introduce a method for gathering useful landmarks from a monocular camera for SLAM use, with absolute range information and without using active ranging sensors. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners. The proposed methods are experimentally validated by our self-contained MAV inside a conventional building.
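    The "good corners" these systems depend on are points where image gradients vary in two directions at once. A minimal Harris-style corner score on a synthetic image illustrates the idea; this is a toy sketch, not the paper's bio-inspired detector:

```python
import numpy as np

# Synthetic image: a bright square whose top-left corner sits at pixel (4, 4).
img = np.zeros((9, 9))
img[4:, 4:] = 1.0

Iy, Ix = np.gradient(img)                    # central-difference gradients
Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy    # structure-tensor products

def harris_response(r, c, k=0.04, w=1):
    """Sum the structure tensor over a (2w+1)x(2w+1) window and score it:
    large response only when the gradient varies in two directions."""
    sxx = Ixx[r - w:r + w + 1, c - w:c + w + 1].sum()
    syy = Iyy[r - w:r + w + 1, c - w:c + w + 1].sum()
    sxy = Ixy[r - w:r + w + 1, c - w:c + w + 1].sum()
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2

corner_score = harris_response(4, 4)   # at the actual corner: positive
edge_score = harris_response(6, 4)     # on the straight edge below: negative
```

    Along a straight edge one eigenvalue of the structure tensor vanishes, so the determinant term collapses and the score goes negative; only true corners, the landmarks these SLAM systems keep, score positively.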
