    Homography-Based State Estimation for Autonomous Exploration in Unknown Environments

    This thesis presents the development of vision-based state estimation algorithms that enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. The algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to fuse measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires processing images from a monocular camera to detect and track planar feature points, followed by computation of the homography parameters. The algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban areas. They are first implemented on simulated data from a quadcopter UAV and then tested on post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but it accumulates drift errors over time because the homography provides only a relative measurement of position.
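
    The homography relationship the abstract relies on states that, for feature points lying on a common plane, matched pixels in two views satisfy x2 ~ H x1, where H encodes the camera rotation, translation, and plane geometry. The sketch below is a minimal illustration of estimating H from point correspondences with a basic Direct Linear Transform; it is not the thesis implementation, and the synthetic homography, point set, and tolerance are illustrative assumptions.

```python
import numpy as np

def estimate_homography(x1, x2):
    """Basic DLT: x1, x2 are (N, 2) arrays of matched pixel coordinates."""
    assert x1.shape == x2.shape and x1.shape[0] >= 4
    rows = []
    for (u, v), (up, vp) in zip(x1, x2):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-u, -v, -1, 0, 0, 0, up * u, up * v, up])
        rows.append([0, 0, 0, -u, -v, -1, vp * u, vp * v, vp])
    A = np.array(rows)
    # The solution is the right singular vector of A with the smallest
    # singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale ambiguity

# Synthetic check: points mapped through a known homography are recovered.
H_true = np.array([[1.0, 0.02, 5.0],
                   [-0.01, 1.0, -3.0],
                   [1e-4, 2e-4, 1.0]])
pts1 = np.random.default_rng(0).uniform(0, 640, size=(8, 2))
proj = np.c_[pts1, np.ones(8)] @ H_true.T
pts2 = proj[:, :2] / proj[:, 2:]
print(np.allclose(estimate_homography(pts1, pts2), H_true, atol=1e-6))  # True
```

    In a setup like the one described, the estimated H, or parameters derived from it, would then serve as the vision-based measurement in the filter's update step.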

    Towards Robust Visual-Controlled Flight of Single and Multiple UAVs in GPS-Denied Indoor Environments

    Although mobile robot hardware, once found only in the minds of science fiction authors, became a reality many years ago, most of the envisioned applications have so far remained fictional, a fact likely caused by the lack of sufficiently capable perception systems. In particular, mobile robots need to be aware of their own location with respect to their environment at all times in order to act in a reasonable manner. A promising near-term application for mobile robots is, for example, search and rescue on disaster sites. Here, small and agile flying robots are an ideal tool for quickly creating an overview of the scene, since they are largely unaffected by unstructured environments and blocked passageways. In this respect, this thesis first explores the problem of ego-motion estimation for quadrotor Unmanned Aerial Vehicles (UAVs) based entirely on onboard sensing and processing hardware. Cameras are an ideal choice as the major sensory modality: they are light, cheap, and provide dense information about the environment. While the literature provides camera-based algorithms to estimate and track the pose of UAVs over time, these solutions lack the robustness required for many real-world applications because they cannot recover quickly from a loss of tracking. Therefore, in the first part of this thesis, a robust algorithm to estimate the velocity of a quadrotor UAV based on optical flow is presented. Additionally, the influence of incorporating measurements from an Inertial Measurement Unit (IMU) on the precision of the velocity estimates is discussed and experimentally validated. Finally, we introduce a novel nonlinear observation scheme that recovers the metric scale factor of the state estimate through fusion with acceleration measurements; this nonlinear model makes it possible to predict the convergence behavior of the presented filtering approach. All findings are experimentally evaluated, including the first reported human-controlled closed-loop flights based entirely on onboard velocity estimation.

    In the second part of this thesis, we address the problem of collaborative multi-robot operations based on onboard visual perception. When a direct line of sight exists between the robots, we propose a distributed formation control based on ego-motion estimation and visually detected bearing angles between the members of the formation. To overcome the limited field of view of real cameras, we add an artificial yaw rotation to track robots that would otherwise be invisible to static cameras. We then present a novel contribution to the mutual localization problem that does not require direct visual detections. In particular, we demonstrate precise global localization of a monocular camera with respect to a dense 3D map: we propose an iterative algorithm that estimates the camera location for which the photometric error between a synthesized view of the dense map and the real camera image is minimal, as sketched below.
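
    The final contribution's core idea, minimizing a photometric error between a synthesized view and the live camera image, can be sketched compactly. The toy example below is a deliberately simplified assumption, not the thesis algorithm: it aligns two images under a pure 2D translation with Gauss-Newton steps on the photometric residual, whereas the real method optimizes a full 6-DoF camera pose against a view rendered from the dense 3D map.

```python
import numpy as np

def photometric_align(ref, cur, iters=20):
    """Estimate a translation t such that cur(p + t) ≈ ref(p)."""
    gy, gx = np.gradient(cur.astype(float))      # image gradients = Jacobian
    t = np.zeros(2)                              # (row shift, column shift)
    ys, xs = np.mgrid[1:ref.shape[0] - 1, 1:ref.shape[1] - 1]
    for _ in range(iters):
        # Warp the current image by the running estimate
        # (nearest-neighbor sampling, for simplicity).
        yi = np.clip((ys + t[0]).round().astype(int), 0, cur.shape[0] - 1)
        xi = np.clip((xs + t[1]).round().astype(int), 0, cur.shape[1] - 1)
        r = (cur[yi, xi] - ref[ys, xs]).ravel()  # photometric residual
        J = np.stack([gy[yi, xi].ravel(), gx[yi, xi].ravel()], axis=1)
        t -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return t

# Smooth synthetic images shifted by (2, 3) pixels.
y, x = np.mgrid[0:64, 0:64]
ref = np.sin(x / 5.0) + np.cos(y / 7.0)
cur = np.sin((x - 3) / 5.0) + np.cos((y - 2) / 7.0)
print(photometric_align(ref, cur))  # ≈ [2. 3.], up to boundary effects
```

    The same structure carries over to the 6-DoF case: the warp becomes a projection of the dense map under the candidate pose, and the Jacobian chains the image gradients with the derivatives of that projection.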

    Use of Unmanned Aerial Systems in Civil Applications

    Interest in drones has grown exponentially over the last ten years, and these machines are often presented as the optimal solution for a huge number of civil applications (monitoring, agriculture, emergency management, etc.). However, the promises still do not match the data coming from the consumer market, suggesting that the only large field in which the use of small unmanned aerial vehicles is actually profitable is video making. This may be explained partly by the strong limits imposed by existing (and often "obsolete") national regulations, but also, and perhaps mainly, by the lack of real autonomy. The vast majority of vehicles on the market today are in fact autonomous only in the sense that they are able to follow a predetermined list of latitude-longitude-altitude coordinates. The aim of this thesis is to demonstrate that complete autonomy for UAVs can be achieved only with high-performance control, reliable and flexible planning platforms, and strong perception capabilities. These topics are introduced and discussed by presenting the results of the main research activities performed by the candidate over the last three years, which have resulted in 1) the design, integration, and control of a test bed for validating and benchmarking vision-based algorithms for space applications; 2) the implementation of a cloud-based platform for multi-agent mission planning; and 3) the on-board use of a multi-sensor fusion framework based on an Extended Kalman Filter architecture.
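
    Since the abstract names an Extended Kalman Filter architecture as the basis of the on-board fusion framework, a generic sketch of one EKF cycle may help fix ideas. This is a minimal skeleton under stated assumptions (the models, Jacobians, and noise matrices are placeholders the caller must supply), not the candidate's actual design.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One EKF cycle: predict with control/IMU input u, correct with z.

    f, h are the (nonlinear) process and measurement models;
    F, H return their Jacobians evaluated at the given state.
    """
    # Prediction: propagate state and covariance through the process model.
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update: fuse the sensor measurement via the Kalman gain.
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R              # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))    # innovation-weighted correction
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```

    In a multi-sensor setting, each sensor contributes its own measurement model h and noise covariance R, and updates are applied as measurements arrive.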