6 research outputs found

    A Co-optimal Coverage Path Planning Method for Aerial Scanning of Complex Structures

    The utilization of unmanned aerial vehicles (UAVs) in the survey and inspection of civil infrastructure has been growing rapidly. However, finding computationally efficient solvers that produce optimal flight paths while ensuring high-quality data acquisition of the complete 3D structure remains a difficult problem. Existing solvers typically prioritize efficient flight paths, coverage, or reduced computational complexity, but these objectives are not co-optimized holistically. In this work we introduce a co-optimal coverage path planning (CCPP) method that simultaneously optimizes the UAV path, the quality of the captured images, and the computational complexity of the solver, all while adhering to safety and inspection requirements. The result is a highly parallelizable algorithm that produces more efficient paths with improved quality of the useful image data. The path optimization algorithm uses a particle swarm optimization (PSO) framework that iteratively optimizes the coverage paths without needing to discretize the motion space or simplify the sensing models, as is done in similar methods. The core of the method consists of a cost function that measures both the quality and the efficiency of a coverage inspection path, and a greedy heuristic that enhances the optimization by aggressively exploring the viewpoint search spaces. To assess the proposed method, a coverage path quality evaluation method is also presented, which can serve as a benchmark for assessing other CPP methods for structural inspection purposes. The effectiveness of the proposed method is demonstrated by comparing its quality and efficiency with the state of the art on both synthetic and real-world scenes. The experiments show that our method significantly improves coverage inspection quality while preserving path efficiency across different test geometries.
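    The PSO framework described in this abstract can be illustrated with a minimal sketch. The code below is not the authors' algorithm: it is a generic particle swarm optimizer applied to a toy stand-in cost (the paper's real cost mixes image quality and path efficiency over viewpoint sequences); the function names, bounds, and coefficients are illustrative assumptions.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer: each particle remembers its own best
    position (pbest); the swarm shares a global best (gbest) that steers
    everyone's velocity."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights (typical values)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for a coverage-path cost: distance of two viewpoint
# coordinates from their ideal values (hypothetical, for illustration only).
def toy_cost(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

best, best_cost = pso(toy_cost, dim=2)
```

    Note that the swarm explores a continuous space directly, which is the property the abstract highlights: no discretization of the motion space is required.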

    Image Space Coverage Model for Deployment of Multi-Camera Networks

    When it comes to visual sensor network deployment and optimization, modeling the coverage of a given camera network is a vital step. Because many complex parameters and criteria govern the coverage quality of a visual network, modeling such coverage accurately and efficiently is a real challenge. This thesis explores the idea of simplifying the mathematical interpretation that describes a given visual sensor without incurring a cost in coverage measurement accuracy. Here, coverage criteria are described in image space, in contrast to some of the more advanced models in the literature that are formulated in 3D space, which has a direct impact on efficiency and time cost. In addition, this thesis proposes a novel sensor deployment approach that examines the surface topology of the target object to be covered by means of a mesh segmentation algorithm, a different way to tackle the problem than the exhaustive search methods employed in the examined literature. There are two main contributions in this thesis. First, a new coverage model that takes the partial occlusion criterion into account is proposed, which is shown to be more accurate and more efficient than competing models. Second, a new sensor deployment method is presented that takes the topological properties of the target object's shape into account, an approach that, to the best of our knowledge, had not been attempted in the literature at the time of publication. To support these claims, the proposed model is validated and compared against an existing state-of-the-art coverage model, and simulations and experiments were carried out to demonstrate the accuracy and time-cost efficiency of the proposed work.

    Automated camera ranking and selection using video content and scene context

    When observing a scene with multiple cameras, an important problem to solve is to automatically identify "what camera feed should be shown and when?" The answer to this question is of interest for a number of applications and scenarios, ranging from sports to surveillance. In this thesis we present a framework for ranking each video frame across time and each camera across the network; this ranking is then used for automated video production. In the first stage, information from each camera view and from the objects in it is extracted and represented in a way that allows for object- and frame-ranking. First, objects are detected and ranked within and across camera views, taking into account both visible and contextual information related to the object. Then content ranking is performed based on the objects in the view and camera-network-level information. We propose two novel techniques for content ranking: Routing Based Ranking (RBR) and Multivariate Gaussian based Ranking (MVG). In RBR we use a rule-based framework in which weighted fusion of object- and frame-level information takes place, while in MVG the rank is estimated as a multivariate Gaussian distribution. Through experimental and subjective validation we demonstrate that the proposed content ranking strategies allow the identification of the best camera at each time.
    The second part of the thesis focuses on the automatic generation of N-to-1 videos based on the ranked content. We demonstrate that in such production settings it is undesirable to have frequent inter-camera switching. Motivated by the need for a compromise between selecting the best camera most of the time and minimizing frequent inter-camera switching, we show that state-of-the-art techniques for this task are inadequate and fail in dynamic scenes. We propose three novel methods for automated camera selection. The first method (gof) performs a joint optimization of a cost function that depends on both the view quality and inter-camera switching, so that a pleasing best-view video sequence can be composed. The other two methods (dbn and util) include the selection decision in the ranking strategy. In dbn we model best-camera selection as a state sequence via Directed Acyclic Graphs (DAG) designed as a Dynamic Bayesian Network (DBN), which encodes contextual knowledge about the camera network and uses past information to minimize inter-camera switches. In comparison, util uses past as well as future information in a Partially Observable Markov Decision Process (POMDP), where the camera selection at a given time is influenced by past information and its repercussions in the future. The performance of the proposed approaches is demonstrated on multiple real and synthetic multi-camera setups. We compare the proposed architectures with various baseline methods, with encouraging results. The performance of the proposed approaches is also validated through extensive subjective testing.
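    The multivariate-Gaussian ranking idea mentioned in this abstract can be sketched in a few lines. This is not the thesis's MVG formulation: it is a generic illustration in which each camera's frame is summarized by a hypothetical 2-D feature vector and ranked by its Gaussian likelihood under an assumed "good frame" model; the mean, covariance, and feature choices are invented for the example.

```python
import numpy as np

def mvg_rank(features, mean, cov):
    """Rank frames by multivariate-Gaussian likelihood: frames whose feature
    vectors are closest to the model mean in Mahalanobis distance score
    highest. Returns the index of the best frame and all scores."""
    inv = np.linalg.inv(cov)
    scores = []
    for f in features:
        d = np.asarray(f, float) - mean
        m2 = float(d @ inv @ d)           # squared Mahalanobis distance
        scores.append(np.exp(-0.5 * m2))  # unnormalized Gaussian density
    return int(np.argmax(scores)), scores

# Hypothetical per-camera features: (relative object size, offset of the
# object from the image centre). Camera 1 is closest to the assumed ideal.
mean = np.array([0.5, 0.0])
cov = np.diag([0.04, 0.09])
frames = [[0.1, 0.4], [0.48, 0.05], [0.9, -0.3]]
best_cam, scores = mvg_rank(frames, mean, cov)
```

    A selection layer such as the DBN or POMDP methods described above would then smooth these per-instant scores over time to avoid frequent inter-camera switching, which this sketch does not attempt.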

    A multi-camera surveillance system that estimates quality-of-view measurement

    In this paper, we propose a multi-camera video surveillance system with automatic camera selection. A new confidence measure, Quality-Of-View (QOV), is defined to automatically evaluate each camera's view performance at each time instant. This measure takes into account the view angle and the distance from the subjects. By comparing each camera's QOV, the system can select the most appropriate cameras to perform specific tasks. We also present an approach to determine the minimum number of cameras and their layout in a convex polygonal room under specific QOV constraints. Finally, we implement an experimental surveillance system to confirm the stability of our algorithm and validate the critical underlying concepts of QOV. Index Terms — QOV, Quality-Of-View, multi-camera, camera selection
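    A score that combines view angle and distance, as the QOV measure does, can be sketched as follows. The weighting below is not the paper's formula: the angle term, the Gaussian distance term, and the ideal distance of 3 m are all illustrative assumptions, chosen only to show how the two criteria might be fused into one score.

```python
import math

def qov(cam_pos, subject_pos, subject_facing, ideal_dist=3.0):
    """Toy Quality-Of-View score in [0, 1]: rewards cameras the subject is
    facing (view-angle term) that sit near an ideal distance (distance
    term). 2-D positions; subject_facing is a unit vector."""
    dx = cam_pos[0] - subject_pos[0]
    dy = cam_pos[1] - subject_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    # Cosine of the angle between the subject's facing direction and the
    # direction toward the camera: 1.0 means a perfectly frontal view.
    to_cam = (dx / dist, dy / dist)
    cos_a = to_cam[0] * subject_facing[0] + to_cam[1] * subject_facing[1]
    angle_term = max(0.0, cos_a)                     # ignore rear views
    dist_term = math.exp(-((dist - ideal_dist) ** 2) / 2.0)
    return angle_term * dist_term

# Subject at the origin facing +x: a frontal camera outranks a rear one.
front = qov((3.0, 0.0), (0.0, 0.0), (1.0, 0.0))
rear = qov((-3.0, 0.0), (0.0, 0.0), (1.0, 0.0))
```

    Comparing such scores across cameras at each instant yields the automatic selection the paper describes; the camera-placement question it also addresses (minimum cameras covering a convex polygonal room under QOV constraints) is a separate optimization not shown here.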