
    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods to produce the “best possible” high-resolution and high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research work is described in nine chapters of which seven are based on separate published journal papers. These include a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale details, and minimised matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D data to a reference image and/or 3D data to achieve robust cross-instrument multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain to estimate pixel-scale surface topography from single-view imagery that outperforms traditional photogrammetric methods in terms of product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method to enhance the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system using a combination of photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system using coupled deep learning-based single-view SRR and deep learning-based 3D estimation to derive the best possible (in terms of visual quality, effective resolution, and accuracy) 3D products out of present-epoch Mars orbital images.
The resultant 3D imaging products from the above-listed new developments are qualitatively and quantitatively evaluated, either in comparison with products from the official NASA Planetary Data System (PDS) and/or ESA Planetary Science Archive (PSA) releases, or in comparison with products generated with different open-source systems. Examples of the scientific application of these novel 3D imaging products are discussed.
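The image co-registration step in (b) is described only at a high level. Purely as a generic illustration of aligning a target image to a reference image (not the thesis's actual pipeline; function name and toy data are ours), here is a minimal phase-correlation sketch that recovers an integer translation between two images:

```python
import numpy as np

def phase_correlation_shift(reference, target):
    """Estimate the integer (row, col) translation by which `target`
    is offset from `reference`, via phase correlation of 2D FFTs."""
    F_ref = np.fft.fft2(reference)
    F_tgt = np.fft.fft2(target)
    cross_power = F_tgt * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peak indices into signed shifts
    shifts = [p if p <= s // 2 else p - s
              for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)

# Toy example: shift a random image by (3, -5) and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(img, shifted))  # → (3, -5)
```

The normalised cross-power spectrum reduces to a pure phase ramp whose inverse FFT is a delta at the translation offset, which is why the peak location gives the shift directly; real cross-instrument co-registration must additionally handle subpixel shifts, rotation, scale, and resolution differences.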

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in the commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and the possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser- and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion of heterogeneous sensors to improve robustness and enable operation in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.
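The sensor-fusion state estimation described above is, at its core, recursive Bayesian filtering. As a generic illustration only (not the thesis's estimator; the function, noise values, and toy data are ours), a minimal one-dimensional Kalman filter fusing noisy position measurements looks like this:

```python
import random

def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter: fuse a stream of noisy position
    measurements into a running state estimate x with variance p."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += process_var            # predict: uncertainty grows over time
        k = p / (p + meas_var)      # Kalman gain: trust in the measurement
        x += k * (z - x)            # update: correct toward the measurement
        p *= (1.0 - k)              # posterior variance shrinks
        estimates.append(x)
    return estimates

# Toy example: a stationary target at position 5.0, noisy range sensor
random.seed(1)
true_pos = 5.0
zs = [true_pos + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs, meas_var=0.25, process_var=1e-4)
print(round(est[-1], 2))  # settles near 5.0
```

Real MAV estimators fuse IMU, laser, and vision measurements over a full 6-DOF state with nonlinear models (e.g. EKF/UKF variants), but the predict-then-correct structure is the same.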

    Book reports


    Verification and Validation of Planning Domain Models

    The verification and validation of planning domain models is one of the biggest challenges to deploying planning-based automated systems in the real world. The state-of-the-art verification methods for planning domain models are vulnerable to false positives, i.e. counterexamples that are unreachable by sound planners using the domain under verification during planning tasks. False positives mislead designers into believing correct models are faulty. Consequently, designers needlessly debug correct models to remove these false positives. This process might unnecessarily constrain planning domain models, which can eradicate valid and sometimes required behaviours. Moreover, catching and debugging errors without knowing they are false positives can give verification engineers a false sense of achievement, which might cause them to overlook valid errors. To address this shortfall, the first part of this thesis introduces goal-constrained planning domain model verification, a novel approach that constrains the verification of planning domain models with planning goals to reduce the number of unreachable planning counterexamples. This thesis formally proves the correctness of this method and demonstrates its application using the model checker Spin and the planner MIPS-XXL. Furthermore, it reports the empirical experiments that validate the feasibility and investigate the performance of the goal-constrained verification approach. The experiments show that the goal-constrained verification method is not only robust against false positive errors but also, in some cases, outperforms under-constrained verification tasks in terms of time and memory. The second part of this thesis investigates the problem of validating the functional equivalence of planning domain models.
The need for techniques to validate the functional equivalence of planning domain models has been highlighted in previous research and has applications in model learning, development, and extension. Despite the need for and importance of proving the functional equivalence of planning domain models, this problem has attracted limited research interest. This thesis builds on and extends previous research by proposing a novel approach to validate the functional equivalence of planning domain models. First, this approach employs a planner to remove redundant operators from the given domain models; then, it uses a Satisfiability Modulo Theories (SMT) solver to check whether a predicate mapping exists between the two domain models that makes them functionally equivalent. The soundness and completeness of this functional equivalence validation method are formally proven in this thesis. Furthermore, this thesis introduces D-VAL, the first automatic validation tool for planning domain models. D-VAL uses the FF planner and the Z3 SMT solver to prove the functional equivalence of planning domain models. Moreover, this thesis demonstrates the feasibility and evaluates the performance of D-VAL against thirteen planning domain models from the International Planning Competition (IPC). Empirical evaluation shows that D-VAL validates the functional equivalence of the most challenging task in less than 43 seconds. These experiments and their results provide a benchmark to evaluate the feasibility and performance of future related work.
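D-VAL delegates the search for a predicate mapping to the Z3 SMT solver. Purely to illustrate what such a mapping is (not D-VAL's actual encoding; the toy models, names, and brute-force search are ours), a miniature sketch over two tiny "domain models" with set-valued preconditions and effects:

```python
from itertools import permutations

# Toy "domain models": operator name -> (preconditions, effects),
# each written over that model's own predicate vocabulary.
model_a = {"pick": ({"handempty", "clear"}, {"holding"}),
           "drop": ({"holding"}, {"handempty", "clear"})}
model_b = {"pick": ({"free", "top"}, {"grasped"}),
           "drop": ({"grasped"}, {"free", "top"})}

def find_predicate_mapping(a, b):
    """Search for a bijection between predicate names under which the
    two models have identical operators (a brute-force stand-in for
    the SMT query a tool like D-VAL would pose to Z3)."""
    preds_a = sorted({p for pre, eff in a.values() for p in pre | eff})
    preds_b = sorted({p for pre, eff in b.values() for p in pre | eff})
    if len(preds_a) != len(preds_b) or a.keys() != b.keys():
        return None
    for perm in permutations(preds_b):
        m = dict(zip(preds_a, perm))  # candidate renaming a -> b
        if all(({m[p] for p in pre}, {m[p] for p in eff}) == b[op]
               for op, (pre, eff) in a.items()):
            return m
    return None

print(find_predicate_mapping(model_a, model_b))
```

Brute force is factorial in the number of predicates, which is exactly why encoding the existence of the mapping as a satisfiability query for an SMT solver scales far better on IPC-sized domains.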

    Nonlinear Dimensionality Reduction with Side Information

    In this thesis, I look at three problems with important applications in data processing. Incorporating side information, provided by the user or derived from data, is the main theme of each of these problems. This thesis makes a number of contributions. The first is a technique for combining different embedding objectives, which is then exploited to incorporate side information expressed in terms of transformation invariants known to hold in the data. It also introduces two different ways of incorporating transformation invariants in order to construct new similarity measures. Two algorithms are proposed that learn metrics based on different types of side information. These learned metrics can then be used in subsequent embedding methods. Finally, the thesis introduces a manifold learning algorithm that is useful when applied to sequential decision problems. In this case we are given action labels in addition to data points. Actions in the manifold learned by this algorithm have meaningful representations in that they are represented as simple transformations.
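The idea of learning a metric from side information can be illustrated with a deliberately simplified sketch (not one of the thesis's two algorithms; the scheme, names, and data are ours): given "must-link" pairs of points known to be similar, down-weight the dimensions along which similar points differ.

```python
def learn_diagonal_metric(must_link_pairs, eps=1e-9):
    """Toy side-information metric: weight each dimension inversely to
    its mean squared difference across must-link pairs, so directions
    in which 'similar' points vary are down-weighted."""
    dim = len(must_link_pairs[0][0])
    mse = [sum((a[d] - b[d]) ** 2 for a, b in must_link_pairs)
           / len(must_link_pairs) for d in range(dim)]
    return [1.0 / (m + eps) for m in mse]

def weighted_dist(w, a, b):
    """Distance under the learned diagonal metric."""
    return sum(wi * (ai - bi) ** 2
               for wi, ai, bi in zip(w, a, b)) ** 0.5

# Must-link pairs differ mostly in dimension 1, so it gets a low weight
pairs = [((0.0, 0.0), (0.1, 2.0)), ((1.0, 5.0), (0.9, 3.0))]
w = learn_diagonal_metric(pairs)
print(w[0] > w[1])  # → True
```

A learned metric of this kind can then be plugged into any distance-based embedding method in place of the Euclidean metric, which is the role the thesis's learned metrics play in its subsequent embedding steps.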

    Simulation-based Inference: From Approximate Bayesian Computation and Particle Methods to Neural Density Estimation

    This doctoral thesis in computational statistics utilizes both Monte Carlo methods (approximate Bayesian computation and sequential Monte Carlo) and machine-learning methods (deep learning and normalizing flows) to develop novel algorithms for inference in implicit Bayesian models. Implicit models are those for which calculating the likelihood function is very challenging (and often impossible), but model simulation is feasible. The inference methods developed in the thesis are simulation-based inference methods since they leverage the possibility to simulate data from the implicit models. Several approaches are considered in the thesis: Papers II and IV focus on classical methods (sequential Monte Carlo-based methods), while Papers I and III focus on more recent machine learning methods (deep learning and normalizing flows, respectively). Paper I constructs novel deep learning methods for learning summary statistics for approximate Bayesian computation (ABC). To achieve this, Paper I introduces the partially exchangeable network (PEN), a deep learning architecture specifically designed for Markovian data (i.e., partially exchangeable data). Paper II considers Bayesian inference in stochastic differential equation mixed-effects models (SDEMEMs). Bayesian inference for SDEMEMs is challenging due to their intractable likelihood function. Paper II addresses this problem by designing a novel Gibbs-blocking strategy in combination with correlated pseudo-marginal methods. The paper also discusses how custom particle filters can be adapted to the inference procedure. Paper III introduces the novel inference method sequential neural posterior and likelihood approximation (SNPLA). SNPLA is a simulation-based inference algorithm that utilizes normalizing flows for learning both the posterior distribution and the likelihood function of an implicit model via a sequential scheme.
By learning both the likelihood and the posterior, and by leveraging the reverse Kullback-Leibler (KL) divergence, SNPLA avoids ad-hoc correction steps and Markov chain Monte Carlo (MCMC) sampling. Paper IV introduces the accelerated-delayed acceptance (ADA) algorithm. ADA can be viewed as an extension of the delayed-acceptance (DA) MCMC algorithm that leverages connections between the two likelihood ratios of DA to further accelerate MCMC sampling from the posterior distribution of interest, although our approach introduces an approximation. The main case study of Paper IV is a double-well potential stochastic differential equation (DWP-SDE) model for protein-folding data (reaction coordinate data).
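The simulation-based inference setting these papers build on is easiest to see in the basic rejection-ABC algorithm they all refine: draw parameters from the prior, simulate data, and keep draws whose summary statistic lands close to the observed one. A minimal sketch (not any of the papers' algorithms; the model, tolerance, and names are ours), inferring the success probability of a Bernoulli model from its sample mean:

```python
import random

def abc_rejection(observed_mean, n_obs, prior_sample, simulate,
                  n_draws=20000, tol=0.02):
    """Rejection ABC: keep prior draws whose simulated summary
    statistic (here the sample mean) is within `tol` of the
    observed one. No likelihood evaluation is ever needed."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta, n_obs) - observed_mean) <= tol:
            accepted.append(theta)
    return accepted

random.seed(42)
# Implicit model: simulate the mean of n Bernoulli(p) trials
simulate = lambda p, n: sum(random.random() < p for _ in range(n)) / n
posterior = abc_rejection(observed_mean=0.7, n_obs=100,
                          prior_sample=random.random,  # Uniform(0,1) prior
                          simulate=simulate)
print(round(sum(posterior) / len(posterior), 2))  # posterior mean near 0.7
```

The accepted draws approximate the posterior without any likelihood evaluations, but the scheme wastes simulations and hinges on good summary statistics; that inefficiency is what the thesis's learned summaries (Paper I), correlated pseudo-marginal and particle methods (Papers II and IV), and normalizing-flow approximations (Paper III) each attack from a different angle.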

    Pattern-theoretic foundations of automatic target recognition in clutter

    Issued as final report. Air Force Office of Scientific Research (U.S.)