
    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    This dissertation explores distributed algorithms for calibration, localisation, and mapping in a multi-robot network equipped with cameras and onboard processing, comparing them against centralised alternatives in which all data is transmitted to a single external node for processing. With the rise of large-scale camera networks, and as low-cost onboard processing becomes increasingly feasible in robotic networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions avoid these issues by spreading the work over the entire network, relying only on local computation and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks, in which three main stages are identified: an initialisation stage, where calibration and localisation are performed in a distributed manner; a local tracking stage, where visual odometry is performed without inter-robot communication; and a global mapping stage, where global alignment and optimisation strategies are applied. Within this framework, the research investigates how algorithms can be designed to be fundamentally distributed, to minimise computational complexity whilst maintaining excellent performance, and to operate effectively over the long term. Accordingly, three primary objectives are pursued, one aligned with each of these three stages.
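    The local-communication principle underlying such distributed stages can be made concrete with a minimal sketch: a synchronous consensus iteration in which every node updates a shared quantity (e.g. a clock offset or a scalar calibration parameter) using only its neighbours' values. The ring topology, initial states, and step size below are illustrative assumptions, not details taken from the dissertation.

    def consensus_step(states, neighbours, epsilon=0.2):
        """One synchronous consensus iteration: each node nudges its state
        toward its neighbours' states using only locally exchanged values."""
        new_states = dict(states)
        for i, nbrs in neighbours.items():
            new_states[i] = states[i] + epsilon * sum(states[j] - states[i] for j in nbrs)
        return new_states

    # Hypothetical 4-robot ring network; no central node ever sees all the data.
    neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    states = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0}
    for _ in range(50):
        states = consensus_step(states, neighbours)
    print(states)  # every node converges to the network average, 4.0

    Each robot needs only its own state and messages from direct neighbours, which is what removes the single point of failure and the all-to-one communication cost of the centralised alternative.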

    Digital surface maps from satellite images without ground control points

    Generation of Digital Surface Models (DSMs) from stereo satellite (spaceborne) images is classically performed with Ground Control Points (GCPs), which require site visits and precise measurement equipment. However, collecting GCPs is not always possible, and this requirement limits the usage of spaceborne imagery. This study aims to develop a fast, fully automatic, GCP-free workflow for DSM generation. The problems caused by a GCP-free workflow are overcome using freely available, low-resolution static DSMs (LR-DSMs). The LR-DSM is registered to the reference satellite image, and the registered LR-DSM is used for (i) correspondence generation and (ii) initial estimate generation for 3-D reconstruction. Novel methods are developed for bias removal in LR-DSM registration and for bias equalization in the projection functions of satellite imaging. The LR-DSM registration is also shown to be useful for computing the parameters of simple, piecewise empirical projective models. Recent computer vision approaches to stereo correspondence generation and dense depth estimation are tested and adapted for spaceborne DSM generation. The study presents a complete, fully automatic scheme for GCP-free DSM generation and demonstrates that GCP-free DSM generation is not only possible but also considerably faster. The resulting DSMs can be used in various remote sensing applications, including building extraction, disaster monitoring, and change detection.
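    The registration step can be illustrated with a standard building block: phase correlation, which recovers the translation between two overlapping rasters from the normalised cross-power spectrum. This is a generic sketch of coarse raster-to-image registration under a pure-translation assumption, not the bias-removal method developed in the thesis; the test data below is synthetic.

    import numpy as np

    def phase_correlation_shift(reference, target):
        """Estimate the integer (row, col) shift such that `target` is
        approximately `reference` translated by that amount."""
        cross_power = np.fft.fft2(target) * np.conj(np.fft.fft2(reference))
        cross_power /= np.abs(cross_power) + 1e-12     # keep phase only
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peaks beyond half the size of each axis to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

    # Toy usage: translate a random raster by (5, -3) and recover the shift.
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
    print(phase_correlation_shift(img, shifted))  # -> (5, -3)

    In a GCP-free pipeline, an alignment of this kind between the LR-DSM and the reference image is what supplies the height prior used to seed correspondence search and 3-D reconstruction.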

    Inverse rendering techniques for physically grounded image editing

    From a single picture of a scene, people can typically grasp the spatial layout immediately and even make good guesses at material properties and at where the light illuminating the scene is coming from. For example, we can reliably tell which objects occlude others, what an object is made of and its rough shape, which regions are illuminated or in shadow, and so on. Remarkably little is known about how we make these determinations, and as a result we are still unable to robustly "teach" computers to make the same high-level observations. This document presents algorithms for understanding intrinsic scene properties from single images. The goal of these inverse rendering techniques is to estimate the configuration of scene elements (geometry, materials, luminaires, camera parameters, etc.) using only the information visible in an image. Such algorithms have applications in robotics and computer graphics. One such application is physically grounded image editing: photo editing made easier by leveraging knowledge of the physical space. These applications allow sophisticated editing operations to be performed in a matter of seconds, enabling seamless addition, removal, or relocation of objects in images.
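    A toy instance of such an inverse problem: recovering a directional light from a single Lambertian image when the geometry (surface normals) is known. The least-squares formulation below is a standard textbook simplification, assuming constant albedo, no shadows, and no clamping of negative shading; it is not the method of the dissertation.

    import numpy as np

    def estimate_light(normals, intensities):
        """Solve I_p ~ n_p . L in the least-squares sense; the norm of L
        absorbs the (assumed constant) albedo."""
        L, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
        albedo = np.linalg.norm(L)
        return L / albedo, albedo

    # Toy usage: synthesise shading from a known light, then recover it.
    rng = np.random.default_rng(1)
    normals = rng.normal(size=(500, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    true_light = np.array([0.3, 0.5, 0.81])
    intensities = normals @ true_light          # ideal Lambertian shading
    direction, albedo = estimate_light(normals, intensities)
    print(direction, albedo)  # direction is true_light normalised to unit length

    Once illumination is estimated, an inserted object can be shaded consistently with the scene, which is what makes editing operations such as object insertion look physically plausible.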