4,991 research outputs found

    Vision-model-based Real-time Localization of Unmanned Aerial Vehicle for Autonomous Structure Inspection under GPS-denied Environment

    Full text link
    UAVs have been widely used for visual inspection of buildings, bridges and other structures. In both outdoor autonomous and semi-autonomous flight missions, a strong GPS signal is vital for the UAV to locate its own position. However, a strong GPS signal is not always available: it can degrade or be fully lost underneath large structures or close to power lines, which can cause serious control issues or even UAV crashes. Such limitations severely restrict the use of UAVs as a routine inspection tool in many domains. In this paper a vision-model-based real-time self-positioning method is proposed to support autonomous aerial inspection without the need for GPS. Compared to other localization methods that require additional onboard sensors, the proposed method uses a single camera to continuously estimate the in-flight pose of the UAV. Each step of the proposed method is discussed in detail, and its performance is tested through an indoor test case. Comment: 8 pages, 5 figures, submitted to i3ce 201
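
    The abstract does not give implementation details, but the core step of recovering a camera (and hence UAV) pose from a single image of a known structure can be sketched with a standard Perspective-n-Point solve. The example below assumes calibrated camera intrinsics, a set of known 3D reference points on the structure, and their matched pixel detections; the values and the use of OpenCV's solvePnP are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal monocular pose-estimation sketch (assumed approach, not the paper's pipeline).
# Given known 3D points on the inspected structure and their 2D detections in the
# current frame, recover the camera (UAV) pose with a Perspective-n-Point solver.
import numpy as np
import cv2

# Hypothetical 3D reference points on the structure, in meters (world frame).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.3],
    [0.2, 0.8, 0.3],
], dtype=np.float64)

# Hypothetical matched pixel coordinates of those points in the current frame.
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [423.0, 330.0],
    [318.0, 333.0],
    [372.0, 280.0],
    [340.0, 310.0],
], dtype=np.float64)

# Assumed pinhole intrinsics from a prior camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)     # rotation from world frame to camera frame
    camera_position = -R.T @ tvec  # UAV camera position expressed in the world frame
    print("Estimated camera position (m):", camera_position.ravel())
```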

    Towards Flight Trials for an Autonomous UAV Emergency Landing using Machine Vision

    Get PDF
    This paper presents the evolution and status of a number of research programs focussed on developing an automated fixed-wing UAV landing system. Results obtained in each of the three main areas of research, namely vision-based site identification, path and trajectory planning, and multi-criteria decision making, are presented. The results obtained provide a baseline for further refinement and constitute the starting point for the implementation of a prototype system ready for flight testing.
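
    As a rough illustration of the multi-criteria decision-making step, candidate landing sites can be ranked with a weighted-sum score over criteria such as flatness, obstacle clearance and reach cost. The criteria, weights and numbers below are invented for illustration; the paper's actual decision method may differ.

```python
# Illustrative weighted-sum ranking of candidate landing sites (one simple form of
# multi-criteria decision making; not necessarily the method used in the paper).
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    flatness: float            # 0..1, higher is flatter
    obstacle_clearance: float  # 0..1, higher is clearer
    distance_cost: float       # 0..1, higher means harder to reach

# Assumed criterion weights; in practice these would come from mission requirements.
WEIGHTS = {"flatness": 0.4, "obstacle_clearance": 0.4, "distance_cost": 0.2}

def score(site: Site) -> float:
    # Benefit criteria add to the score, cost criteria subtract from it.
    return (WEIGHTS["flatness"] * site.flatness
            + WEIGHTS["obstacle_clearance"] * site.obstacle_clearance
            - WEIGHTS["distance_cost"] * site.distance_cost)

candidates = [
    Site("field_A", flatness=0.9, obstacle_clearance=0.7, distance_cost=0.6),
    Site("road_B", flatness=0.8, obstacle_clearance=0.4, distance_cost=0.2),
    Site("clearing_C", flatness=0.6, obstacle_clearance=0.9, distance_cost=0.5),
]

best = max(candidates, key=score)
print(f"Selected landing site: {best.name} (score {score(best):.2f})")
```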

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Full text link
    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. In this way, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)
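
    The local visual servoing mentioned in the abstract can be illustrated with a generic image-based controller that keeps the tracked ground robot centered in the MAV's camera at a desired apparent size. The gains, desired area and structure below are assumptions for illustration, not the controller described in the paper.

```python
# Generic image-based visual-servoing sketch: drive MAV velocity commands so the
# tracked ground robot stays centered in the image at a desired apparent size.
import numpy as np

def visual_servo_command(bbox_center, bbox_area, image_size,
                         desired_area=9000.0, k_xy=0.002, k_z=1e-5):
    """Return a [vx, vy, vz] velocity command (m/s) from the target's bounding box.

    bbox_center: (u, v) pixel coordinates of the tracked robot
    bbox_area:   bounding-box area in pixels^2 (a rough proxy for distance)
    image_size:  (width, height) of the image in pixels
    """
    u, v = bbox_center
    w, h = image_size
    err_u = u - w / 2.0               # horizontal pixel error
    err_v = v - h / 2.0               # vertical pixel error
    err_a = desired_area - bbox_area  # size error (too small -> move closer)

    vy = -k_xy * err_u  # lateral velocity to re-center the target horizontally
    vz = -k_xy * err_v  # vertical velocity to re-center the target vertically
    vx = k_z * err_a    # forward velocity to approach or back off
    return np.array([vx, vy, vz])

# Example: target detected slightly right of center and smaller than desired.
cmd = visual_servo_command(bbox_center=(400, 250), bbox_area=6000,
                           image_size=(640, 480))
print("velocity command [vx, vy, vz] m/s:", cmd)
```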

    Self-Positioning Smart Buoys, The 'Un-Buoy' Solution: Logistic Considerations Using Autonomous Surface Craft Technology and Improved Communications Infrastructure

    Get PDF
    Moored buoys have long served national interests, but incur high development, construction, installation, and maintenance costs. Buoys which drift off-location can pose hazards to mariners, and in coastal waters may cause environmental damage. Moreover, retrieval, repair and replacement of drifting buoys may be delayed when data would be most useful. Such gaps in coastal buoy data can pose a threat to national security by reducing maritime domain awareness. The concept of self-positioning buoys has been advanced to reduce installation cost by eliminating mooring hardware. Here we describe technology for the operation of reduced-cost self-positioning buoys which can be used in coastal or oceanic waters. The ASC SCOUT model is based on a self-propelled, GPS-positioned, autonomous surface craft that can be pre-programmed, autonomous, or directed in real time. Each vessel can communicate wirelessly with deployment vessels and other similar buoys directly or via satellite. Engineering options for short- or longer-term power requirements are considered, in addition to future options for improved energy delivery systems. Methods of reducing buoy drift and position-maintaining energy requirements for self-locating buoys are also discussed, based on the potential of incorporating traditional maritime solutions to these problems. We also include discussion of the draft Delay Tolerant Networking (DTN) communications protocol, which offers improved wireless communication capabilities underwater, to adjacent vessels, and to satellites. DTN is particularly adapted for noisy or loss-prone environments, and thus improves reliability. In addition to existing buoy communication via commercial satellites, a growing network of small satellites known as PICOSATs can be readily adapted to provide low-cost communications nodes for buoys. Coordination with planned vessel Automated Identification Systems (AIS) and International Maritime Organization standards for buoy and vessel notification systems is reviewed, and the legal framework for deployment of autonomous surface vessels is considered.
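
    The station-keeping behaviour described above (holding an assigned watch position while minimizing energy use) can be illustrated with a simple proportional controller driven by the GPS offset from the assigned position. The coordinate approximation, gains and coordinates below are invented for illustration and are not the ASC SCOUT's actual control software.

```python
# Illustrative station-keeping sketch for a self-positioning buoy: compute a heading
# and thrust command from the GPS offset between the craft and its assigned position.
import math

EARTH_RADIUS_M = 6_371_000.0

def local_offset_m(lat, lon, lat_ref, lon_ref):
    """Approximate east/north offset (m) of (lat, lon) from a reference point,
    valid for the short distances involved in station keeping."""
    d_north = math.radians(lat - lat_ref) * EARTH_RADIUS_M
    d_east = math.radians(lon - lon_ref) * EARTH_RADIUS_M * math.cos(math.radians(lat_ref))
    return d_east, d_north

def station_keeping_command(lat, lon, lat_station, lon_station,
                            deadband_m=5.0, k_thrust=0.02, max_thrust=1.0):
    """Return (heading_deg, thrust 0..1) needed to drive back toward the station,
    or (None, 0.0) when inside the watch circle so the craft can drift and save energy."""
    e_east, e_north = local_offset_m(lat, lon, lat_station, lon_station)
    dist = math.hypot(e_east, e_north)
    if dist < deadband_m:
        return None, 0.0
    # Heading (degrees clockwise from north) that points back at the station.
    heading = math.degrees(math.atan2(-e_east, -e_north)) % 360.0
    thrust = min(max_thrust, k_thrust * dist)
    return heading, thrust

# Example: buoy has drifted roughly 100 m northeast of its assigned position.
print(station_keeping_command(43.0727, -70.7110, 43.0720, -70.7120))
```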

    Sensor-assisted Video Mapping of the Seafloor

    Get PDF
    In recent years video surveys have become an increasingly important ground-truthing tool for acoustic seafloor characterization and benthic habitat mapping studies. However, the ground-truthing and detailed characterization provided by video are still typically done using sparse sample imagery supplemented by physical samples. Combining single video frames into a seamless mosaic provides imagery with significant areal coverage while still showing small fauna and biological features at mm resolution. Generating such a mosaic is a challenging task due to height variations of the imaged terrain and only decimeter-scale knowledge of camera position. This paper discusses the current role of underwater video survey and the potential for generating consistent, quantitative image maps using video data, accompanied by data that can be measured by auxiliary sensors with sufficient accuracy, such as camera tilt and heading, and their use in automated mosaicking techniques. The camera attitude data also provide the necessary information to support the development of a video collage. The collage provides a quick look at the large spatial scale features in a scene and can be used to pinpoint regions that are likely to yield useful information when rendered into high-resolution mosaics. It is proposed that high-quality mosaics can be produced using consumer-grade cameras and low-cost sensors, thereby allowing for economical scientific video surveys. A case study is presented with results from benthic habitat mapping and the ground-truthing of seafloor acoustic data, using both real underwater imagery and simulations. Computer modeling of the process of video data acquisition (in particular over non-flat terrain) allows for a better understanding of the main sources of error in mosaic generation and for the choice of near-optimal processing strategies. Various spatial patterns of video survey coverage are compared, and it is shown that some patterns have certain advantages in terms of accumulated error and overall mosaic accuracy.
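
    A basic building block of such mosaicking is pairwise registration of consecutive frames. The sketch below estimates a homography between two frames using ORB features and RANSAC in OpenCV and warps one frame into the other's coordinate system; the file names are hypothetical and this is a generic approach, not necessarily the paper's processing chain.

```python
# Pairwise frame registration sketch for video mosaicking: match ORB features between
# consecutive frames, estimate a homography with RANSAC, and warp one frame into the
# other's coordinate system.
import cv2
import numpy as np

def register_pair(frame_a, frame_b, min_matches=12):
    """Return the 3x3 homography mapping frame_b into frame_a, or None on failure."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Usage with hypothetical file names: warp frame2 into frame1's coordinates.
frame1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)
if frame1 is not None and frame2 is not None:
    H = register_pair(frame1, frame2)
    if H is not None:
        warped = cv2.warpPerspective(frame2, H, (frame1.shape[1], frame1.shape[0]))
```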

    An optimal system design process for a Mars roving vehicle

    Get PDF
    The problem of determining the optimal design for a Mars roving vehicle is considered. A system model is generated by consideration of the physical constraints on the design parameters and the requirement that the system be deliverable to the Mars surface. An expression is developed that evaluates system performance relative to mission goals as a function of the design parameters alone. The use of nonlinear programming techniques to optimize the design is proposed, and an example considering only two of the vehicle subsystems is formulated and solved.
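
    A toy version of such a two-subsystem design problem can be posed for an off-the-shelf nonlinear programming solver: maximize a performance index over two design parameters subject to a delivered-mass limit. The objective, constraint and numbers below are invented for illustration and are unrelated to the paper's actual model.

```python
# Toy nonlinear-programming formulation in the spirit of the paper: choose two subsystem
# design parameters (here, battery mass and wheel diameter) to maximize a performance
# index subject to a delivered-mass budget. All numbers and functional forms are invented.
import numpy as np
from scipy.optimize import minimize

MASS_BUDGET_KG = 50.0  # assumed mass that can be delivered to the Mars surface

def performance(x):
    battery_kg, wheel_m = x
    # Invented performance index: range grows with stored energy, mobility with wheel
    # size, both with diminishing returns.
    return np.sqrt(battery_kg) * (1.0 + np.log1p(wheel_m))

def mass_used(x):
    battery_kg, wheel_m = x
    return battery_kg + 20.0 * wheel_m  # invented structural mass per unit wheel diameter

constraints = [{"type": "ineq", "fun": lambda x: MASS_BUDGET_KG - mass_used(x)}]
bounds = [(1.0, 40.0),  # battery mass, kg
          (0.1, 1.5)]   # wheel diameter, m

# SciPy minimizes, so negate the performance index to maximize it.
result = minimize(lambda x: -performance(x), x0=[10.0, 0.5], bounds=bounds,
                  constraints=constraints, method="SLSQP")
print("optimal design:", result.x, "performance:", performance(result.x))
```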

    DYLEMA: Using walking robots for landmine detection and location

    Get PDF
    Detection and removal of antipersonnel landmines is an important worldwide concern. A huge number of landmines have been deployed over the last twenty years, and demining will take several more decades even if no more mines are deployed in the future. An adequate mine-clearance rate can only be achieved by using new technologies such as improved sensors, efficient manipulators and mobile robots. This paper presents some basic ideas on the configuration of a mobile system for detecting and locating antipersonnel landmines efficiently and effectively. The paper describes the main features of the overall system, which consists of a sensor head that can detect certain landmine types, a manipulator to move the sensor head over large areas, a locating system based on a global positioning system, a remote supervisor computer and a legged robot used as the subsystems' carrier. The whole system has been configured to work in a semi-autonomous mode, with particular attention to robot mobility and energy efficiency. This work has been funded by the Spanish Ministry of Science and Technology under Grants CICYT DPI2001-1595 and DPI2004-05824. Peer reviewed
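
    To illustrate the locating step, a detection made by the sensor head can be geo-referenced by combining the robot's GPS fix and heading with the sensor head's offset in the robot body frame. The flat-earth approximation, offsets and coordinates below are assumptions made for illustration, not the DYLEMA system's actual software.

```python
# Illustrative geo-referencing of a landmine detection: combine the robot's GPS fix and
# heading with the sensor head's body-frame offset to obtain the detection's lat/lon.
# Flat-earth approximation; all numbers are invented.
import math

EARTH_RADIUS_M = 6_371_000.0

def georeference_detection(robot_lat, robot_lon, heading_deg,
                           offset_forward_m, offset_right_m):
    """Return (lat, lon) of a detection at the given body-frame offset from the robot,
    where heading_deg is measured clockwise from true north."""
    h = math.radians(heading_deg)
    # Rotate the body-frame offset (forward, right) into east/north components.
    d_east = offset_forward_m * math.sin(h) + offset_right_m * math.cos(h)
    d_north = offset_forward_m * math.cos(h) - offset_right_m * math.sin(h)
    lat = robot_lat + math.degrees(d_north / EARTH_RADIUS_M)
    lon = robot_lon + math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(robot_lat))))
    return lat, lon

# Example: sensor head 1.2 m ahead and 0.3 m to the right of the robot, heading 045 deg.
print(georeference_detection(40.4168, -3.7038, 45.0, 1.2, 0.3))
```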

    Computer vision research at Marshall Space Flight Center

    Get PDF
    Orbital docking, inspection, and servicing are operations which have the potential for capability enhancement as well as cost reduction for space operations through the application of computer vision technology. Research at MSFC has been a natural outgrowth of orbital docking simulations for remote manually controlled vehicles such as the Teleoperator Retrieval System and the Orbital Maneuvering Vehicle (OMV). The baseline design of the OMV dictates teleoperator control from a ground station. This necessitates a high data-rate communication network and results in several seconds of time delay. Operational costs and vehicle control difficulties could be alleviated by an autonomous or semi-autonomous control system onboard the OMV, based on a computer vision system capable of recognizing video images in real time. A concept under development at MSFC with these attributes is based on syntactic pattern recognition. It uses tree graphs for rapid recognition of binary images of known orbiting target vehicles. This technique and others being investigated at MSFC will be evaluated under realistic conditions using MSFC's orbital docking simulators. Computer vision is also being applied at MSFC as part of the supporting development for Work Package One of Space Station Freedom.
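
    The tree-graph matching idea mentioned above can be sketched generically: a binary target silhouette is decomposed into a labeled tree of primitive regions and compared against a library of template trees. The tree construction is assumed to happen elsewhere, and the node labels, templates and matching rule below are invented for illustration; this is not MSFC's actual implementation.

```python
# Minimal sketch of matching a labeled tree (derived from a binary target image) against
# a library of template trees, in the spirit of syntactic pattern recognition.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Node:
    label: str                                    # e.g. a primitive region such as "body"
    children: List["Node"] = field(default_factory=list)

def trees_equal(a: Node, b: Node) -> bool:
    """Exact structural match: same label and identically ordered, matching children."""
    if a.label != b.label or len(a.children) != len(b.children):
        return False
    return all(trees_equal(ca, cb) for ca, cb in zip(a.children, b.children))

def recognize(observed: Node, templates: Dict[str, Node]) -> Optional[str]:
    """Return the name of the first template tree that matches the observed tree."""
    for name, template in templates.items():
        if trees_equal(observed, template):
            return name
    return None

# Invented template library for two hypothetical target vehicles.
templates = {
    "OMV": Node("body", [Node("docking_ring"), Node("panel"), Node("panel")]),
    "TRS": Node("body", [Node("grapple_arm"), Node("panel")]),
}

observed = Node("body", [Node("docking_ring"), Node("panel"), Node("panel")])
print(recognize(observed, templates))  # -> "OMV"
```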