14,116 research outputs found

    Visual servoing of an autonomous helicopter in urban areas using feature tracking

    We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used to estimate the position and velocity of features in the image plane (urban features such as windows) in order to generate velocity references for the flight control. These vision-based references are then combined with GPS-positioning references to navigate towards the features and track them. We present results from experimental flight trials, performed on two UAV systems under different conditions, that show the feasibility and robustness of our approach.
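
    The abstract does not give the control law itself; below is a minimal sketch of how an image-plane feature error can be mapped to a camera velocity reference, in the style of classic image-based visual servoing. The function name, gain, and interaction-matrix form are illustrative assumptions, not taken from the paper.

        import numpy as np

        def ibvs_velocity_reference(feat_px, target_px, depth, fx, fy, gain=0.5):
            # Normalised image coordinates of the tracked and desired feature.
            x, y = feat_px[0] / fx, feat_px[1] / fy
            xd, yd = target_px[0] / fx, target_px[1] / fy
            error = np.array([x - xd, y - yd])
            # Translational columns of the point-feature interaction matrix,
            # at an assumed feature depth (e.g. from stereo or scene geometry).
            L = np.array([[-1.0 / depth, 0.0, x / depth],
                          [0.0, -1.0 / depth, y / depth]])
            # v = -gain * pinv(L) @ error drives the feature toward the target.
            v, *_ = np.linalg.lstsq(L, -gain * error, rcond=None)
            return v  # (vx, vy, vz) velocity reference for the flight controller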

    A Climbing-Flying Robot for Power Line Inspection

    Visual localisation of electricity pylons for power line inspection

    Inspection of power infrastructure is a regular maintenance event. To date the inspection process has mostly been done manually, but there is growing interest in automating it. Automation will require an accurate means of localising the power infrastructure components. In this research, we studied the visual localisation of a pylon. The pylon is the most prominent component of the power infrastructure and can provide context for the inspection of the other components. Point-based descriptors tend to perform poorly on textureless objects such as pylons, so we explored localisation using convolutional neural networks and geometric constraints. The crossings of the pylon, or vertices, are salient points that aid recognition and pose estimation of the pylon. We successfully used a convolutional neural network to detect the vertices. A model-based technique, geometric hashing, was used to establish the correspondence between the stored pylon model and the scene object, and we showed its effectiveness as a voting technique for pose estimation from a single image. In a localisation framework, the method serves as the initialisation of the tracking process. We incorporated an extended Kalman filter for subsequent incremental tracking of the camera relative to the pylon, and also demonstrated an alternative tracker that uses heatmap details from the vertex detection. We evaluated the effectiveness of the proposed algorithms on a model pylon built in the laboratory, and revalidated the results on a real-world outdoor electricity pylon. Our experiments illustrate that model-based techniques can be deployed as part of the navigation aspect of a robot.
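
    As an illustration of the pose-estimation step: once 2D-3D vertex correspondences are available (assumed here to come from the geometric-hashing stage), the camera pose can be recovered with a standard PnP solver. This OpenCV sketch uses illustrative names and is not the thesis's exact pipeline.

        import numpy as np
        import cv2

        def estimate_pylon_pose(model_pts, image_pts, K):
            # model_pts: Nx3 pylon vertex coordinates from the stored model (metres)
            # image_pts: Nx2 vertex detections from the CNN heatmaps (pixels)
            # RANSAC rejects spurious vertex detections.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                model_pts.astype(np.float32), image_pts.astype(np.float32),
                K, None, reprojectionError=4.0)
            if not ok:
                raise RuntimeError("too few consistent vertices for a pose")
            R, _ = cv2.Rodrigues(rvec)  # axis-angle to rotation matrix
            return R, tvec              # camera pose relative to the pylon

    A pose obtained this way can then seed an extended Kalman filter for the incremental tracking described above.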

    Detection and recovery from camera bump during automated inspection of automobiles on an assembly line

    This thesis details the steps taken to detect and compensate for camera bumps while performing part identification using VI at the BMW manufacturing plant and on the simulation testbed. For the system presented here to work, the user is required to record one video from the camera before the camera is bumped and one after it has been bumped. The premise behind the method is that the transformation between the backgrounds of the pre- and post-bump videos will equal the transformation in the foreground. A background-extraction program generates a background image from each of the pre- and post-bump videos. Feature tracking and matching are performed on the background images to find the transformation between them, and this transformation is then applied to the templates extracted from the pre-bump video. An additional manual compensation step is needed in cases where the transformation in the background does not equal the transformation in the foreground. The resultant transformation is applied to all the templates of the pre-bump video, and VI is seen to successfully identify parts with sufficient accuracy.
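
    The background registration described above can be sketched with standard tools; a minimal version using OpenCV, assuming a roughly planar background and illustrative names (the thesis's actual feature tracker may differ):

        import numpy as np
        import cv2

        def background_transform(bg_pre, bg_post):
            # Match features between the pre- and post-bump background images.
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(bg_pre, None)
            k2, d2 = orb.detectAndCompute(bg_post, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # RANSAC discards matches that moved inconsistently with the background.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H

        # The same transformation is then applied to each pre-bump template:
        # warped = cv2.warpPerspective(template, H, (width, height))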

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability over a wide field of view that allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
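
    The abstract does not state the selection criterion; one common heuristic for uncertainty-based measurement selection in an EKF is to fixate the feature with the largest predicted innovation covariance. A minimal sketch with illustrative names:

        import numpy as np

        def select_feature(P, jacobians, R):
            # P: state covariance; jacobians[i]: 2xN measurement Jacobian of
            # feature i; R: 2x2 measurement noise covariance.
            best, best_score = None, -np.inf
            for i, H in enumerate(jacobians):
                S = H @ P @ H.T + R        # predicted innovation covariance
                score = np.linalg.det(S)   # area of the uncertainty ellipse
                if score > best_score:     # most uncertain => most informative
                    best, best_score = i, score
            return best                    # feature for the head to fixate next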

    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. This testing shows that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.
    Comment: Pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.

    A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations

    Automated satellite proximity operations are an increasingly relevant area of mission operations for the US Air Force, with the potential to significantly enhance space situational awareness (SSA). Simultaneous localization and mapping (SLAM) is a computer vision method of constructing and updating a 3D map while keeping track of the location and orientation of the imaging agent inside the map. The main objective of this research effort is to design a monocular SLAM method customized for the space environment. The method developed in this research will be implemented in an indoor proximity-operations simulation laboratory. A run-time analysis is performed, showing near real-time operation. The method is verified by comparing SLAM results to truth vertical-rotation data from a CubeSat air-bearing testbed. This work enables control and testing of simulated proximity-operations hardware in a laboratory environment. Additionally, this research lays the foundation for autonomous satellite proximity operations with unknown targets and minimal additional size, weight, and power requirements, creating opportunities for numerous mission concepts not previously available.
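
    For the verification against truth rotation data, a standard error metric is the geodesic angle between estimated and ground-truth rotations; a small sketch (illustrative, not necessarily the metric used in this work):

        import numpy as np

        def rotation_error_deg(R_est, R_true):
            # Angle of the relative rotation R_true^T @ R_est, via its trace.
            c = (np.trace(R_true.T @ R_est) - 1.0) / 2.0
            return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))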