
    Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo (MCMC) method, namely, Hamiltonian Monte Carlo (HMC). The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms than existing state-of-the-art methods.
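    A minimal sketch of the idea, not the paper's exact algorithm: a toy Gaussian stands in for an expensive model, random cosine bases are fit to the log-density by regularized least squares, the cheap surrogate gradient drives the leapfrog dynamics, and the Metropolis correction still uses the exact target so the chain keeps the correct stationary distribution. All names and settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

def log_target(x):
    # Toy target: standard 2-D Gaussian, standing in for an expensive model.
    return -0.5 * np.sum(x**2, axis=-1)

# Random cosine bases phi_j(x) = cos(w_j . x + b_j).
D = 200
W = rng.normal(size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    return np.cos(X @ W.T + b)                       # (n, D)

# Fit the surrogate log-density by regularized least squares on a design set.
X = rng.normal(size=(500, d))
Phi = features(X)
coef = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(D), Phi.T @ log_target(X))

def surrogate_grad(x):
    # Gradient of sum_j c_j cos(w_j . x + b_j) with respect to x.
    return -(coef * np.sin(W @ x + b)) @ W

def hmc_step(x, eps=0.1, L=10):
    """One HMC step: leapfrog driven by the cheap surrogate gradient,
    accept/reject evaluated with the exact target."""
    p = rng.normal(size=x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new = p_new + 0.5 * eps * surrogate_grad(x_new)
    for _ in range(L):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * surrogate_grad(x_new)
    p_new = p_new - 0.5 * eps * surrogate_grad(x_new)
    log_a = (log_target(x_new) - 0.5 * p_new @ p_new) \
          - (log_target(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_a else x

# Short chain from the origin; discard a burn-in prefix.
x = np.zeros(d)
samples = []
for _ in range(1500):
    x = hmc_step(x)
    samples.append(x.copy())
samples = np.array(samples[500:])
```

The chain's moments should roughly match the standard normal target even though every gradient evaluation is the cheap surrogate.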

    Parametric Level-sets Enhanced To Improve Reconstruction (PaLEnTIR)

    In this paper, we consider the restoration and reconstruction of piecewise constant objects in two and three dimensions using PaLEnTIR, a significantly enhanced parametric level set (PaLS) model relative to the current state-of-the-art. The primary contribution of this paper is a new PaLS formulation which requires only a single level set function to recover a scene with piecewise constant objects possessing multiple unknown contrasts. Our model offers distinct advantages over current approaches to the multi-contrast, multi-object problem, all of which require multiple level sets and explicit estimation of the contrast magnitudes. Given upper and lower bounds on the contrast, our approach is able to recover objects with any distribution of contrasts and eliminates the need to know either the number of contrasts in a given scene or their values. We provide an iterative process for finding these space-varying contrast limits. Relative to most PaLS methods, which employ radial basis functions (RBFs), our model makes use of non-isotropic basis functions, thereby expanding the class of shapes that a PaLS model of a given complexity can approximate. Finally, PaLEnTIR improves the conditioning of the Jacobian matrix required as part of the parameter identification process, and consequently accelerates the optimization methods, by controlling the magnitude of the PaLS expansion coefficients, fixing the centers of the basis functions, and exploiting the uniqueness of the parametric-to-image mappings provided by the new parameterization. We demonstrate the performance of the new approach using both 2D and 3D variants of X-ray computed tomography, diffuse optical tomography (DOT), denoising, and deconvolution problems. Applications to experimental sparse CT data and to simulated data with different types of noise further validate the proposed method.
    Comment: 31 pages, 56 figures
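    The single-level-set idea with non-isotropic bases can be illustrated roughly as follows: a level-set function is built from a few Gaussian bumps with elliptical (rather than radial) level curves, and the object support is a superlevel set. The bump shapes, centers, and coefficients below are invented for illustration and are not the PaLEnTIR parameterization.

```python
import numpy as np

# Pixel grid on [0, 1]^2.
n = 64
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
P = np.stack([X, Y], axis=-1)                      # (n, n, 2)

def aniso_bump(p, center, A):
    # Non-isotropic basis: Gaussian bump with elliptical level curves,
    # radius measured as ||A (p - center)||.
    d = (p - center) @ A.T
    return np.exp(-np.sum(d**2, axis=-1))

# One level-set function built from a few anisotropic bases with fixed
# centers; the expansion coefficients alpha are what an inversion estimates.
centers = np.array([[0.3, 0.3], [0.7, 0.6]])
mats = [np.diag([6.0, 12.0]), np.array([[10.0, 4.0], [0.0, 8.0]])]
alpha = np.array([1.0, 0.8])

phi = sum(a * aniso_bump(P, c, A) for a, c, A in zip(alpha, centers, mats))

# Piecewise-constant scene: the object support is a superlevel set of phi.
mask = phi > 0.5
```

A single scalar function thus encodes several differently oriented objects; recovering alpha (and the contrasts) from data is the inverse problem the paper addresses.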

    Local Radial Basis Function Methods for Solving Partial Differential Equations

    Meshless methods are relatively new numerical methods which have gained popularity in computational and engineering sciences during the last two decades. This dissertation develops two new localized meshless methods for solving a variety of partial differential equations. Recently, some localized meshless methods have been introduced in order to handle large-scale problems, or to avoid the ill-conditioning associated with global radial basis function approximations. This dissertation explains two new localized meshless methods, each derived from the global Method of Approximate Particular Solutions (MAPS). One method, the Localized Method of Approximate Particular Solutions (LMAPS), is used for elliptic and parabolic partial differential equations (PDEs) using a global sparse linear system of equations. The second method, the Explicit Localized Method of Approximate Particular Solutions (ELMAPS), is constructed for solving parabolic types of partial differential equations by inverting a finite number of small linear systems. For both methods, the only information needed to construct the approximate solution to PDEs consists of the local nodes that fall within the domain of influence of the data. Since the methods are completely mesh free, they can be used for irregularly shaped domains. Both methods are tested and compared with existing global and local meshless methods. The results illustrate the accuracy and efficiency of our proposed methods.
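    The localization idea can be sketched with plain local RBF interpolation: each evaluation uses only the nodes in a small domain of influence, so only small dense systems are ever solved. This is not the dissertation's LMAPS/ELMAPS formulation; the multiquadric kernel, shape parameter, and neighbor count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def mq(r, c=0.1):
    # Multiquadric radial basis function with shape parameter c.
    return np.sqrt(r**2 + c**2)

def local_rbf_eval(x_eval, nodes, values, k=12):
    """Interpolate at x_eval using only the k nearest nodes (the local
    'domain of influence'), so the dense system stays k-by-k regardless
    of the total number of scattered nodes."""
    d = np.linalg.norm(nodes - x_eval, axis=1)
    idx = np.argsort(d)[:k]
    loc, f = nodes[idx], values[idx]
    # Local interpolation matrix A_ij = mq(|x_i - x_j|).
    A = mq(np.linalg.norm(loc[:, None] - loc[None, :], axis=-1))
    w = np.linalg.solve(A, f)
    return w @ mq(np.linalg.norm(loc - x_eval, axis=1))

# Scattered nodes on the unit square with a smooth test function.
nodes = rng.uniform(size=(200, 2))
vals = np.sin(np.pi * nodes[:, 0]) * nodes[:, 1]
approx = local_rbf_eval(np.array([0.4, 0.5]), nodes, vals)
```

Because only nearby nodes enter each stencil, no mesh over the (possibly irregular) domain is required.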

    Constrained deformation for evolutionary optimization

    Sieger D. Constrained deformation for evolutionary optimization. Bielefeld: Universität Bielefeld; 2017. This thesis investigates shape deformation techniques for their use in design optimization tasks. In the first part, we introduce state-of-the-art deformation methods and evaluate them in a set of representative benchmarks. Based on these benchmarking results, we derive essential criteria and features a deformation technique should satisfy in order to be successfully applicable within design optimization. In the second part, we concentrate on the application and improvement of deformation techniques based on radial basis functions. We present and evaluate a unified framework for surface and volume mesh deformation and investigate questions of performance and scalability. In the final third part, we concentrate on the integration of additional constraints into the deformation, thereby improving the overall effectiveness of the design optimization process and fostering the creation of more feasible and producible design variations. We present a novel shape deformation technique that effectively maintains different types of geometric constraints such as planarity, circularity, or characteristic feature lines during deformation. At the same time, our method provides a unique level of modeling flexibility, quality, robustness, and scalability. Finally, we integrate techniques for automatic constraint detection directly into our deformation framework, thereby making our method more easily applicable within complex design optimization scenarios.
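    The RBF deformation setup that the second part builds on can be sketched as interpolating prescribed handle displacements and applying the resulting smooth field to all mesh vertices. The Gaussian kernel, its width, and the toy geometry below are illustrative assumptions, not the thesis framework.

```python
import numpy as np

def rbf_deform(vertices, handles, displacements, c=0.5):
    """Deform mesh vertices by RBF-interpolating prescribed handle
    displacements, the usual setup for surface/volume mesh morphing."""
    def phi(r):
        return np.exp(-(r / c)**2)     # Gaussian kernel: local influence
    # Solve for one weight vector per coordinate so that the interpolated
    # field reproduces the handle displacements exactly.
    A = phi(np.linalg.norm(handles[:, None] - handles[None, :], axis=-1))
    W = np.linalg.solve(A, displacements)
    # Evaluate the displacement field at every mesh vertex.
    B = phi(np.linalg.norm(vertices[:, None] - handles[None, :], axis=-1))
    return vertices + B @ W

# Pull one handle upward; nearby vertices follow smoothly, distant ones barely.
handles = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
disp = np.array([[0.0, 0.3], [0.0, 0.0], [0.0, 0.0]])
grid = np.array([[x, y] for x in np.linspace(0, 1, 5)
                         for y in np.linspace(0, 1, 5)])
deformed = rbf_deform(grid, handles, disp)
```

The field interpolates the handles exactly, and the same weights deform surface and volume meshes alike, which is what makes the approach attractive for design optimization loops.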

    Centralized and distributed learning methods for predictive health analytics

    The U.S. health care system is considered costly and highly inefficient, devoting substantial resources to the treatment of acute conditions in a hospital setting rather than focusing on prevention and keeping patients out of the hospital. The potential for cost savings is large; in the U.S. more than $30 billion are spent each year on hospitalizations deemed preventable, 31% of which is attributed to heart diseases and 20% to diabetes. Motivated by this, our work focuses on developing centralized and distributed learning methods to predict future heart- or diabetes-related hospitalizations based on patient Electronic Health Records (EHRs). We explore a variety of supervised classification methods and we present a novel likelihood-ratio-based method (K-LRT) that predicts hospitalizations and offers interpretability by identifying the K most significant features that lead to a positive prediction for each patient. Next, assuming that the positive class consists of multiple clusters (patients hospitalized for different reasons), while the negative class is drawn from a single cluster (non-hospitalized patients, healthy in every aspect), we present an alternating optimization approach, which jointly discovers the clusters in the positive class and optimizes the classifiers that separate each positive cluster from the negative samples. We establish the convergence of the method and characterize its VC dimension. Last, we develop a decentralized cluster Primal-Dual Splitting (cPDS) method for large-scale problems that is computationally efficient and privacy-aware. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the agents to collaborate while keeping every participant's data private. cPDS is proved to have an improved convergence rate compared to existing centralized and decentralized methods.
We test all methods on real EHR data from the Boston Medical Center and compare results in terms of prediction accuracy and interpretability.
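    The K-LRT idea — score a sample by the K largest per-feature likelihood ratios, which double as the explanation of a positive prediction — can be sketched for binary features as follows. The probabilities and data are synthetic, and the function is a simplified stand-in for the method in the thesis.

```python
import numpy as np

def k_lrt_score(x, p_pos, p_neg, K=2):
    """Sum the K largest per-feature log-likelihood ratios of a binary
    feature vector; the selected indices explain a positive prediction."""
    llr = np.where(x == 1,
                   np.log(p_pos / p_neg),
                   np.log((1.0 - p_pos) / (1.0 - p_neg)))
    top = np.argsort(llr)[-K:]
    return llr[top].sum(), top

# Toy EHR-style setup: feature 0 is strongly associated with hospitalization.
p_pos = np.array([0.9, 0.5, 0.5, 0.5])   # P(feature = 1 | hospitalized)
p_neg = np.array([0.1, 0.5, 0.5, 0.5])   # P(feature = 1 | not hospitalized)
score, top_features = k_lrt_score(np.array([1, 0, 1, 0]), p_pos, p_neg)
```

Thresholding the score gives the prediction, while `top_features` names the K features that drove it, which is the interpretability the abstract highlights.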

    METAMODEL-BASED GLOBAL OPTIMIZATION AND GENERALIZED NASH EQUILIBRIUM PROBLEMS

    Ph.D. (Doctor of Philosophy)

    Systematic coarse-graining and dynamical simulations of anisotropic molecules with applications in organic semiconductors

    Organic semiconductors are used widely in different applications, including organic photovoltaics (OPVs), devices that convert solar energy to electricity. These devices, if commercially viable, can help to supply the world’s energy needs without requiring complicated manufacturing and maintenance. Moreover, OPVs possess several useful physical properties, such as being lightweight, highly transparent, and flexible. This makes organic electronic devices advantageous over those made of inorganic hard materials, especially in applications in which these conditions are required. Although experimental studies show that organic semiconductors can potentially yield high-performing devices, the electronic processes that govern the conversion of light to energy are not fully understood. Specifically, how free electrons are created and transferred within the device when a photon is absorbed is strongly debated in the literature. Many experimental and theoretical results have shown that microstructure at the interfaces between the component organic semiconductor materials that make up the device plays an important role in these processes. The microstructure can be induced by directional forces between generally anisotropic organic-semiconductor molecules, combined with translational symmetry breaking at interfaces. In Chapter 3, the interface of a high-performing electron donor–acceptor OPV system consisting of two small organic semiconductors, benzodithiophene quaterthiophene rhodanine (BQR) and [6,6]-phenyl-C71-butyric acid methyl ester (PC71BM), is studied using classical molecular dynamics (MD). Atomistic simulations at high temperatures indicate that the "face-on" configuration is more favorable at a liquid–solid interface between the materials. In addition, molecules close to the interface are less ordered with respect to one another than those far from the interface. These factors may benefit charge separation and transport, resulting in good device performance.
In general, atomistic simulations are not feasible for studying donor–acceptor interface formation at the typical domain sizes found in devices. A solution to this is to use coarse-grained (CG) models, which increase simulation efficiency by replacing a collection of atoms with a single interacting site. In Chapter 4, a new systematic methodology for generating CG models for MD simulations is introduced and validated, which constitutes the main result of this thesis. This algorithm is developed so that MD simulations can be simplified but still accurately represent the physical and thermodynamic properties of the simulated materials. More importantly, this method can produce models that capture the anisotropy of molecules, which is especially useful for theoretical studies of organic materials and has not previously been achieved via a systematic algorithm. To validate the method, a CG model of a simple anisotropic organic molecule (benzene) is produced in Chapter 5. Simulations using this model accurately describe the structural and thermodynamic properties of the fine-grained (FG) model, and the model is an improvement over previous CG benzene models. A future application of this method will be the study of the interface structure of materials in OPV systems on realistic time and spatial scales compared to experimental conditions. Ultimately, the studies presented in this thesis work towards the same goal, which is to discover optimal molecular design rules to increase the power conversion efficiency of OPVs.
Thesis (MPhil) -- University of Adelaide, School of Physical Sciences, 201
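    The basic mapping step — replacing an atom group with one interacting site, plus an orientation for anisotropic sites — might be sketched like this. The benzene-ring example and the gyration-tensor orientation are illustrative conventions, not the thesis's algorithm.

```python
import numpy as np

def coarse_grain(coords, masses, groups):
    """Map atomistic coordinates to CG sites: each bead sits at the center
    of mass of its atom group, and an eigenvector of the group's gyration
    tensor supplies an orientation for anisotropic sites."""
    beads, axes = [], []
    for g in groups:
        r, m = coords[g], masses[g]
        com = (m[:, None] * r).sum(axis=0) / m.sum()
        d = r - com
        # Mass-weighted gyration tensor of the group about its COM.
        gyr = (m[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / m.sum()
        _, evecs = np.linalg.eigh(gyr)
        beads.append(com)
        axes.append(evecs[:, -1])        # largest-variance axis of the group
    return np.array(beads), np.array(axes)

# Six carbons of an idealized benzene ring mapped to one anisotropic bead.
theta = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta), np.zeros(6)], axis=1)
beads, axes = coarse_grain(ring, np.full(6, 12.0), [np.arange(6)])
```

For the flat ring, the bead lands at the ring center and the reported axis lies in the molecular plane, illustrating how shape information survives the mapping.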

    Meshfree Methods Using Localized Kernel Bases

    Radial basis functions have been used to construct meshfree numerical methods for interpolation and for solving partial differential equations. Recently, a localized basis of radial basis functions has been developed on the sphere. In this dissertation, we investigate applying localized kernel bases to interpolation and approximation, and to novel discretization methods for numerically solving partial differential equations and integral equations. We investigate methods for partial differential equations on spheres using newly explored bases constructed from radial basis functions, together with associated quadrature methods. We explore applications of radial basis functions to anisotropic nonlocal diffusion problems, and we develop theoretical frameworks for these methods.
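    Kernel interpolation on the sphere — the kind of construction these localized bases are built from — can be sketched with a zonal (rotation-invariant) kernel that depends only on the dot product of the two points. The kernel choice, its width, and the node set below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def zonal_kernel(X, Y, eps=10.0):
    # Zonal kernel on the unit sphere: a function of x . y only.
    return np.exp(eps * (X @ Y.T - 1.0))

# Scattered nodes on the unit sphere (normalized Gaussian samples).
X = rng.normal(size=(150, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

f_vals = X[:, 2]**2                       # smooth test function on the sphere
A = zonal_kernel(X, X)
coef = np.linalg.solve(A + 1e-10 * np.eye(len(X)), f_vals)

# Evaluate the kernel interpolant at a new point on the sphere.
p = np.array([[0.0, 0.6, 0.8]])
approx = (zonal_kernel(p, X) @ coef)[0]   # true value: 0.8**2 = 0.64
```

A localized basis replaces these globally supported kernel translates with well-localized combinations of them, which is what makes large node sets tractable.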

    Statistical computation with kernels

    Modern statistical inference has seen a tremendous increase in the size and complexity of models and datasets. As such, it has become reliant on advanced computational tools for implementation. A first canonical problem in this area is the numerical approximation of integrals of complex and expensive functions. Numerical integration is required for a variety of tasks, including prediction, model comparison and model choice. A second canonical problem is that of statistical inference for models with intractable likelihoods. These include models with intractable normalisation constants, or models which are so complex that their likelihood cannot be evaluated, but from which data can be generated. Examples include large graphical models, as well as many models in imaging or spatial statistics. This thesis proposes to tackle these two problems using tools from the kernel methods and Bayesian non-parametrics literature. First, we analyse a well-known algorithm for numerical integration called Bayesian quadrature, and provide consistency and contraction rates. The algorithm is then assessed on a variety of statistical inference problems, and extended in several directions in order to reduce its computational requirements. We then demonstrate how the combination of reproducing kernels with Stein’s method can lead to computational tools which can be used with unnormalised densities, including numerical integration and approximation of probability measures. We conclude by studying two minimum distance estimators derived from kernel-based statistical divergences which can be used for unnormalised and generative models. In each instance, the tractability provided by reproducing kernels and their properties allows us to provide easily implementable algorithms whose theoretical foundations can be studied in depth.
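    Bayesian quadrature, the first algorithm analysed, admits a compact sketch when the kernel mean embedding of the integration measure is available in closed form, as it is for a Gaussian kernel and a standard normal measure. The lengthscale, jitter, and node placement below are arbitrary illustrative choices.

```python
import numpy as np

def bq_weights(x, ell=1.0, jitter=1e-10):
    """Bayesian quadrature weights w = K^{-1} z for the Gaussian kernel
    k(x, y) = exp(-(x - y)^2 / (2 ell^2)) and an N(0, 1) integration
    measure, whose kernel mean embedding z has a closed form."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)
    z = np.sqrt(ell**2 / (ell**2 + 1.0)) * np.exp(-0.5 * x**2 / (ell**2 + 1.0))
    return np.linalg.solve(K + jitter * np.eye(len(x)), z)

# Estimate E[f(X)] for X ~ N(0, 1) with f(x) = x^2 (true value: 1).
nodes = np.linspace(-3.0, 3.0, 15)
w = bq_weights(nodes)
estimate = w @ nodes**2
```

The weights depend only on the kernel and the nodes, not on f, so the same weights price any integrand evaluated at those nodes; the posterior variance (omitted here) quantifies the remaining numerical uncertainty.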