
    Robust Estimation of Motion Parameters and Scene Geometry : Minimal Solvers and Convexification of Regularisers for Low-Rank Approximation

    In the dawning age of autonomous driving, accurate and robust tracking of vehicles is an essential component. This is inextricably linked with the problem of Simultaneous Localisation and Mapping (SLAM), in which one tries to determine the position of a vehicle relative to its surroundings without prior knowledge of them. The more you know about the object you wish to track, through sensors or mechanical construction, the more likely you are to get good positioning estimates. In the first part of this thesis, we explore new ways of improving positioning for vehicles travelling on a planar surface. This is done in several ways: we generalise work done for monocular vision to include two cameras, we propose ways of speeding up estimation with polynomial solvers, and we develop an auto-calibration method to cope with radially distorted images without requiring pre-calibration procedures. We continue to investigate the case of constrained motion, this time using auxiliary data from inertial measurement units (IMUs) to improve positioning of unmanned aerial vehicles (UAVs). The proposed methods improve on the state of the art for partially calibrated cases (with unknown focal length) in indoor navigation. Furthermore, we propose the first real-time compatible minimal solver for simultaneous estimation of the radial distortion profile, focal length, and motion parameters while utilising the IMU data. In the third and final part of this thesis, we develop a bilinear framework for low-rank regularisation, with global optimality guarantees under certain conditions. We also show equivalence between the linear and the bilinear frameworks, in the sense that their objectives are equal. This enables users of the alternating direction method of multipliers (ADMM), or of other subgradient or splitting methods, to transition to the new framework while enjoying the benefits of second-order methods. Furthermore, we propose a novel regulariser fusing two popular methods, combining the best of both worlds by encouraging bias reduction while enforcing low-rank solutions.
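Convexified low-rank regularisers of the kind discussed above build on the nuclear norm, whose proximal operator is singular value thresholding. The following is a minimal illustrative sketch of that operator applied to a noisy rank-one matrix (all data here is synthetic; this is not the thesis's bilinear method):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm.

    Returns argmin_X 0.5 * ||X - M||_F^2 + tau * ||X||_*, obtained by
    soft-thresholding the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-one matrix plus small noise: thresholding removes the noise
# directions and recovers a low-rank solution.
rng = np.random.default_rng(0)
u = rng.normal(size=(20, 1))
v = rng.normal(size=(1, 20))
M = 5.0 * u @ v + 0.1 * rng.normal(size=(20, 20))
X = svt(M, tau=2.0)
```

Inside an ADMM or splitting scheme, this step is applied once per iteration to the variable carrying the low-rank penalty.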

    PDE-Based Parameterisation Techniques for Planar Multipatch Domains

    This paper presents a PDE-based parameterisation framework for addressing the planar surface-to-volume (StV) problem of finding a valid description of the domain's interior given no more than a spline-based description of its boundary contours. The framework is geared towards isogeometric analysis (IGA) applications wherein the physical domain comprises more than four sides, hence requiring more than one patch. We adopt the concept of harmonic maps and propose several PDE-based problem formulations capable of finding a valid map between a convex parametric multipatch domain and the piecewise-smooth physical domain with an equal number of sides. In line with the isoparametric paradigm of IGA, we treat the StV problem using techniques characteristic of the analysis step. As such, this study proposes several IGA-based numerical algorithms for the problem's governing equations that can be effortlessly integrated into a well-developed IGA software suite. We augment the framework with mechanisms that enable controlling the parametric properties of the outcome. Parametric control is accomplished by, among other techniques, the introduction of a curvilinear coordinate system in the convex parametric domain that, depending on the application, builds desired features into the computed harmonic map, such as homogeneous cell sizes or boundary layers.
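The harmonic maps at the heart of this framework satisfy Laplace's equation componentwise. As a rough sketch of the underlying idea (a plain finite-difference Jacobi solve on a square grid, not the spline-based IGA machinery of the paper), one can compute a discrete harmonic coordinate function from Dirichlet boundary data:

```python
import numpy as np

def harmonic(grid, n_iter=2000):
    """Jacobi iteration for Laplace's equation with Dirichlet boundary data.

    Interior values converge to the discrete harmonic function determined
    by the fixed boundary values of `grid`."""
    u = grid.copy()
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# One coordinate of a map of the unit square: 0 on the left edge, 1 on
# the right, and linearly interpolated along the top and bottom edges.
n = 33
u = np.zeros((n, n))
u[:, -1] = 1.0
u[0, :] = np.linspace(0.0, 1.0, n)
u[-1, :] = np.linspace(0.0, 1.0, n)
x = harmonic(u)
```

By the maximum principle the interior values stay between the boundary extremes, which is the discrete analogue of the validity (fold-free) property the paper exploits.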

    Variational surface reconstruction

    The demand for capturing 3D models of real-world objects or scenes has steadily increased in recent years, and numerous developments indicate an even greater importance in the future: computer-generated special effects are used extensively and benefit greatly from such data, 3D printing is becoming more affordable, and the ability to conveniently include 3D content in websites has matured. Thus, 3D reconstruction has been, and still is, one of the most important research topics in computer vision. The reconstruction of a 3D model from a number of colour images with given camera poses is one of the most common tasks, known as multi-view stereo. We contribute to the two main stages that arise in popular strategies for solving this problem: the estimation of depth maps from multiple views, and the integration of multiple depth maps into a single watertight surface. Subsequently, we relax the constraint that the camera poses have to be known and present a novel pipeline for 3D reconstruction from image sequences that relies solely on dense methods. It proves to be an interesting alternative to popular sparse approaches and leads to competitive results. Approaches relying on sparse features, by contrast, allow only the estimation of an oriented point cloud rather than a surface. To close this gap, we finally propose a general higher-order framework for surface reconstruction from oriented points.
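The depth-map integration stage is often explained via truncated signed distance functions (TSDFs). The following one-dimensional caricature, with invented depth values and truncation radius (not the thesis's variational formulation), averages per-view TSDFs along a single ray and reads the fused surface off at the zero crossing:

```python
import numpy as np

def tsdf(z, depth, trunc):
    """Truncated signed distance to a measured depth along one ray:
    positive in front of the surface, negative behind, clipped at +/- trunc."""
    return np.clip(depth - z, -trunc, trunc)

def fuse(z, depths, trunc=0.5):
    """Average the per-view TSDFs; the fused surface is the zero crossing."""
    return np.mean([tsdf(z, d, trunc) for d in depths], axis=0)

z = np.linspace(0.0, 10.0, 1001)      # samples along the ray
depths = [5.0, 5.2, 4.9]              # noisy depth estimates of one surface
f = fuse(z, depths)
surface = z[np.argmin(np.abs(f))]     # approximate zero crossing
```

Averaging the clipped distances makes the fused zero crossing land near the mean of the individual measurements, which is why this kind of integration is robust to per-view depth noise.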

    Structured learning for information retrieval

    Information retrieval is the area of study concerned with the process of searching, recovering and interpreting information from large amounts of data. In this thesis we show that many problems in information retrieval are instances of structured learning, where the goal is to learn predictors of complex output structures consisting of many interdependent variables. We then attack these problems using principled machine learning methods that are specifically suited for such scenarios. In the process of doing so, we develop new models, new model extensions and new algorithms that, when integrated with existing methodology, comprise a new set of tools for solving a variety of information retrieval problems. Firstly, we cover the multi-label classification problem, where we seek to predict a set of labels associated with a given object; the output in this case is structured, as the output variables are interdependent. Secondly, we focus on document ranking, where given a query and a set of documents associated with it we want to rank them according to their relevance with respect to the query; here, again, we have a structured output: a ranking of documents. Thirdly, we address topic models, where we are given a set of documents and attempt to find a compact representation of them, by learning latent topics and associating a topic distribution to each document; the output is again structured, consisting of word and topic distributions. For all the above problems, we obtain state-of-the-art solutions, as attested by empirical performance on publicly available real-world datasets.
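The document-ranking setting can be caricatured with a pairwise perceptron: learn a linear scoring function so that every more-relevant document outscores every less-relevant one. This toy sketch on synthetic, linearly separable data (not one of the thesis's models) shows the structured nature of the output, since the loss is defined over pairs rather than single labels:

```python
import numpy as np

def rank_perceptron(X, relevance, epochs=100, lr=0.1):
    """For every pair (i, j) with relevance[i] > relevance[j], push the
    linear score of document i above that of document j (a toy
    perceptron stand-in for structured ranking losses)."""
    w = np.zeros(X.shape[1])
    pairs = [(i, j) for i in range(len(X)) for j in range(len(X))
             if relevance[i] > relevance[j]]
    for _ in range(epochs):
        mistakes = 0
        for i, j in pairs:
            if X[i] @ w <= X[j] @ w:      # pair currently mis-ordered
                w += lr * (X[i] - X[j])
                mistakes += 1
        if mistakes == 0:                 # all preferences satisfied
            break
    return w

rng = np.random.default_rng(1)
relevance = np.array([1] * 10 + [0] * 10)
# Relevant documents get shifted feature vectors, so a linear ranker exists.
X = rng.normal(size=(20, 5)) + 3.0 * relevance[:, None]
w = rank_perceptron(X, relevance)
scores = X @ w
```

Sorting documents by `scores` then yields the predicted ranking for the query.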

    Improving Quantification in Lung PET/CT for the Evaluation of Disease Progression and Treatment Effectiveness

    Positron Emission Tomography (PET) allows imaging of functional processes in vivo by measuring the distribution of an administered radiotracer. While one of its main uses is directed towards lung cancer, there is increasing interest in diffuse lung diseases, whose incidence rises every year, mainly due to environmental factors and population ageing. However, PET acquisitions in the lung are particularly challenging due to several effects, including the inevitable cardiac and respiratory motion and the loss of spatial resolution caused by increased positron range in low-density tissue. This thesis focuses on Idiopathic Pulmonary Fibrosis (IPF), a disease whose aetiology is poorly understood and for which patient survival is limited to only a few years. Contrary to lung tumours, this diffuse lung disease modifies the lung architecture more globally, resulting in small structures with varying densities. Previous work has developed data analysis techniques addressing some of the challenges of imaging patients with IPF. However, robust reconstruction techniques are still necessary to obtain quantitative measures for such data, where it should be beneficial to exploit recent advances in PET scanner hardware such as Time of Flight (TOF) and respiratory motion monitoring. Firstly, positron range in the lung is discussed, evaluating its effect in density-varying media such as fibrotic lung. Secondly, the general effect of using incorrect attenuation data in lung PET reconstructions is assessed; this study compares TOF and non-TOF reconstructions and quantifies the local and global artefacts created by data inconsistencies and respiratory motion. Then, motion compensation is addressed by proposing a method that takes into account the changes of density and activity in the lungs during respiration, via the estimation of volume changes using the deformation fields. The method is evaluated on late-time-frame PET acquisitions using ¹⁸F-FDG, where the radiotracer distribution has stabilised. It is then used as the basis for a method for motion compensation of the early time frames (starting with the administration of the radiotracer), leading to a technique that could be used for motion compensation of kinetic measures. Preliminary results are provided for kinetic parameters extracted from short dynamic data using ¹⁸F-FDG.
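The volume-change correction described above can be illustrated in one dimension: warping an activity distribution through a deformation field while scaling by the local Jacobian (the volume change) conserves total activity. This is a schematic sketch with an invented deformation and tracer profile, not the thesis's reconstruction method:

```python
import numpy as np

def warp_conserve(activity, phi, x):
    """Pull an activity profile through a 1-D deformation x -> phi(x),
    scaling by the Jacobian phi'(x) so that total activity is conserved,
    i.e. the volume-change correction needed for density-varying tissue."""
    jac = np.gradient(phi, x)                   # local volume change
    return np.interp(phi, x, activity) * jac    # intensity * dphi/dx

x = np.linspace(0.0, 1.0, 1001)
activity = np.exp(-((x - 0.5) / 0.1) ** 2)      # a blurry "lesion"
phi = x + 0.05 * np.sin(2 * np.pi * x)          # smooth, invertible deformation
warped = warp_conserve(activity, phi, x)
```

Without the Jacobian factor the warped image would gain or lose counts wherever the deformation locally compresses or expands the tissue, which is exactly the bias such corrections aim to avoid.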

    Lattice and supersymmetric field theories


    Regression modelling using priors depending on Fisher information covariance kernels (I-priors)

    Regression analysis is undoubtedly an important tool for understanding the relationship between a response variable and one or more explanatory variables of interest. In this thesis, we explore a novel methodology for fitting a wide range of parametric and nonparametric regression models, called the I-prior methodology (Bergsma, 2018). We assume that the regression function belongs to a reproducing kernel Hilbert or Kreĭn space of functions, which allows us to utilise the convenient topologies of these vector spaces. This is important for the derivation of the Fisher information of the regression function, which might be infinite dimensional. Based on the principle of maximum entropy, an I-prior is an objective Gaussian process prior for the regression function with covariance function proportional to its Fisher information. Our work focusses on the statistical methodology and computational aspects of fitting I-prior models. We examine a likelihood-based approach (direct optimisation and the EM algorithm) for fitting I-prior models with normally distributed errors. The culmination of this work is the R package iprior (Jamil, 2017), which has been made publicly available on CRAN. The normal I-prior methodology is subsequently extended to fit categorical response models, achieved by "squashing" the regression functions through a probit sigmoid function. Estimation of I-probit models, as we call them, proves challenging due to the intractable integral involved in computing the likelihood. We overcome this difficulty by way of variational approximations. Finally, we turn to a fully Bayesian approach of variable selection using I-priors for linear models to tackle multicollinearity. We illustrate the use of I-priors in various simulated and real-data examples. Our study advocates the I-prior methodology as a simple, intuitive, and competitive alternative to leading state-of-the-art models.
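For a linear model the Fisher information of the coefficients is X'X, so an I-prior-style covariance for the fitted values is proportional to the squared linear Gram matrix. The sketch below computes the resulting Gaussian posterior mean; the scale and noise parameters are assumed for illustration, and this simplifies Bergsma's formulation rather than reproducing the iprior package:

```python
import numpy as np

def iprior_posterior_mean(X, y, lam=1.0, sigma=1.0):
    """Posterior mean of f = X beta under a simplified Gaussian I-prior:
    with linear Gram matrix H = X X', the prior covariance of f is taken
    proportional to H @ H (the Fisher-information-based covariance for a
    linear kernel), and y = f + N(0, sigma^2 I)."""
    H = X @ X.T
    K = lam**2 * H @ H                  # I-prior covariance of the fitted values
    n = len(y)
    return K @ np.linalg.solve(K + sigma**2 * np.eye(n), y)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
beta = np.array([1.0, -1.0, 2.0])
y = X @ beta + 0.1 * rng.normal(size=50)
f_hat = iprior_posterior_mean(X, y, lam=1.0, sigma=0.1)
```

Because the covariance is built from the data's own Gram matrix, the prior automatically concentrates on directions in which the data carry information, which is the intuition behind the maximum-entropy derivation.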

    Scalable Bayesian inversion with Poisson data

    Poisson data arise in many important inverse problems, e.g., medical imaging. The stochastic nature of noisy observation processes and imprecise prior information implies that there exists an ensemble of solutions consistent with the given Poisson data to various extents. Existing approaches, e.g., maximum likelihood and penalised maximum likelihood, incorporate the statistical information for point estimates, but fail to provide the important uncertainty information over the various possible solutions. While full Bayesian approaches can solve this problem, the posterior distributions are often intractable due to their complicated form and the curse of dimensionality. In this thesis, we investigate approximate Bayesian inference techniques, i.e., variational inference (VI), expectation propagation (EP) and Bayesian deep learning (BDL), for scalable posterior exploration. The scalability relies on leveraging 1) mathematical structures emerging in the problems, i.e., the low-rank structure of forward operators and the rank-one projection form of factors in the posterior distribution, and 2) efficient feed-forward processes of neural networks, with training time further reduced by dimensional flexibility when incorporating forward and adjoint operators. Apart from scalability, we also address theoretical analysis, algorithmic design and practical implementation. For VI, we derive explicit functional forms and analyse the convergence of algorithms, which are long-standing problems in the literature. For EP, we discuss how to incorporate nonnegativity constraints and how to design stable moment evaluation schemes, which are vital and nontrivial practical concerns. For BDL, specifically conditional variational auto-encoders (CVAEs), we investigate how to apply them for uncertainty quantification of inverse problems and develop flexible and novel frameworks for general Bayesian inversion. Finally, we justify these contributions with numerical experiments and show the competitiveness of our proposed methods by comparing with state-of-the-art benchmarks.
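The maximum-likelihood point estimate that such Bayesian approaches go beyond is classically computed with the multiplicative MLEM (Richardson-Lucy) iteration for y ~ Poisson(Ax). A toy sketch with a random, invented forward operator:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM for a Poisson inverse problem y ~ Poisson(A x):
    the classic multiplicative MLEM / Richardson-Lucy update.  It yields a
    nonnegative point estimate only, with no uncertainty information."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(40, 10))      # toy forward operator
x_true = rng.uniform(1.0, 5.0, size=10)
y = rng.poisson(A @ x_true)                   # Poisson measurements
x_hat = mlem(A, y)
```

A useful property of the update is that it preserves total counts: after each iteration the sensitivity-weighted sum of the estimate equals the total measured counts, while the nonnegativity constraint is enforced automatically by the multiplicative form.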