    Fast Method of Particular Solutions for Solving Partial Differential Equations

    The method of particular solutions (MPS) has been applied to many problems in science and engineering, but obtaining closed-form particular solutions, selecting a good shape parameter for the various radial basis functions (RBFs), and simulating large-scale problems remain challenges that need to be overcome. In this dissertation, we use several techniques to address these challenges. The closed-form particular solutions for the Matérn and Gaussian RBFs were previously unknown. With the help of symbolic computational tools, we have derived the closed-form particular solutions of the Matérn and Gaussian RBFs for the Laplace and biharmonic operators in 2D and 3D. These derived particular solutions play an important role in solving inhomogeneous problems using the MPS and boundary methods such as boundary element methods or boundary meshless methods. To select a good shape parameter, various existing variable shape parameter strategies and several well-known global optimization algorithms have also been applied. These good shape parameters provide highly accurate solutions in many RBF collocation methods. A fast method of particular solutions (FMPS) has been developed for the simulation of large-scale problems. FMPS is based on the global version of the MPS: partial differential equations are discretized by the usual MPS, and the determination of the unknown coefficients is accelerated using a fast technique. Numerical results confirm the efficiency of the proposed technique for PDEs with a large number of computational points in both two and three dimensions. We have also solved time-fractional diffusion equations using the MPS and FMPS.
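To make the shape-parameter issue concrete, here is a minimal, self-contained sketch (not the dissertation's code) of one standard selection strategy, Rippa's leave-one-out cross-validation, applied to 1D Gaussian RBF interpolation; the node set, test function, and candidate range are illustrative choices:

```python
import numpy as np

def gaussian_rbf(r, eps):
    # Gaussian RBF phi(r) = exp(-(eps * r)^2); eps is the shape parameter.
    return np.exp(-(eps * r) ** 2)

def loocv_cost(x, f, eps):
    # Rippa's leave-one-out formula: the i-th deleted residual is
    # e_i = c_i / (A^{-1})_{ii}, with c the full interpolation coefficients,
    # so all n leave-one-out fits cost a single matrix inversion.
    A = gaussian_rbf(np.abs(x[:, None] - x[None, :]), eps)
    Ainv = np.linalg.inv(A)
    c = Ainv @ f
    return np.linalg.norm(c / np.diag(Ainv))

# Interpolate f(x) = sin(2 pi x) on 25 nodes, choosing eps by LOOCV.
x = np.linspace(0.0, 1.0, 25)
f = np.sin(2.0 * np.pi * x)
candidates = np.linspace(8.0, 30.0, 23)
best_eps = min(candidates, key=lambda e: loocv_cost(x, f, e))
coeffs = np.linalg.solve(gaussian_rbf(np.abs(x[:, None] - x[None, :]), best_eps), f)
```

A shape parameter chosen this way typically gives much better off-node accuracy than an arbitrary one, which is the role the "good shape parameter" plays in the RBF collocation methods above.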

    Models and Methods for Random Fields in Spatial Statistics with Computational Efficiency from Markov Properties

    The focus of this work is the development of new random field models and methods suitable for the analysis of large environmental data sets. A large part is devoted to a number of extensions of the newly proposed Stochastic Partial Differential Equation (SPDE) approach for representing Gaussian fields using Gaussian Markov Random Fields (GMRFs). The method is based on the fact that Gaussian Matérn fields can be viewed as solutions to a certain SPDE, and it is useful for large spatial problems where traditional methods are too computationally intensive. A variation of the method using wavelet basis functions is proposed, and in a simulation-based study the wavelet approximations are compared with two of the most popular methods for efficient approximation of Gaussian fields. A new class of spatial models, including the Gaussian Matérn fields and a wide family of fields with oscillating covariance functions, is also constructed using nested SPDEs. The SPDE method is extended to this model class, and it is shown that all desirable properties are preserved, such as computational efficiency, applicability to data on general smooth manifolds, and simple non-stationary extensions. Finally, the SPDE method is extended to a larger class of non-Gaussian random fields with Matérn covariance functions, including certain Laplace Moving Average (LMA) models. In particular, it is shown how the SPDE formulation can be used to obtain an efficient simulation method and an accurate parameter estimation technique for an LMA model. A method for estimating spatially dependent temporal trends is also developed. The method is based on a space-varying regression model that accounts for spatial dependency in the data, and it is used to analyze temporal trends in vegetation data from the African Sahel in order to find regions that have experienced significant changes in vegetation cover over the studied time period.
The problem of estimating such regions is investigated further in the final part of the thesis, where a method for estimating excursion sets of latent Gaussian fields, together with the related problem of finding uncertainty regions for contour curves, is proposed. The method is based on using a parametric family for the excursion sets in combination with Integrated Nested Laplace Approximations (INLA) and an importance sampling-based algorithm for estimating joint probabilities.
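The computational appeal of the SPDE-to-GMRF link can be sketched in a toy setting: discretizing the operator (kappa^2 - Laplacian) on a grid yields a precision matrix with Markov structure, and samples follow from one Cholesky solve. The 1D finite-difference discretization, boundary treatment, and dense matrices below are illustrative simplifications (real implementations use finite elements and sparse Cholesky factorizations):

```python
import numpy as np

def gmrf_precision_1d(n, kappa, h):
    # Finite-difference discretization of (kappa^2 - Laplacian) on a 1D grid
    # with Neumann-like ends; squaring the operator mimics the alpha = 2
    # Matern SPDE (kappa^2 - Delta) x(s) = white noise.
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    A = kappa ** 2 * np.eye(n) + L / h ** 2
    return h * (A @ A)          # precision matrix Q: x ~ N(0, Q^{-1})

def sample_gmrf(Q, rng):
    # Draw x ~ N(0, Q^{-1}) directly from the precision: with Q = C C^T
    # (Cholesky, C lower triangular), solving C^T x = z for z ~ N(0, I)
    # yields Cov(x) = C^{-T} C^{-1} = Q^{-1}.
    C = np.linalg.cholesky(Q)
    z = rng.standard_normal(Q.shape[0])
    return np.linalg.solve(C.T, z)

rng = np.random.default_rng(0)
Q = gmrf_precision_1d(200, kappa=10.0, h=1.0 / 199)
x = sample_gmrf(Q, rng)
```

Because Q is banded, the Cholesky factor in a sparse implementation stays banded too, which is exactly the computational efficiency the SPDE approach trades on.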

    Characterization and construction of max-stable processes

    Max-stable processes provide a natural framework for modelling spatial extremal scenarios. Appropriate summary statistics include the extremal coefficients and the (upper) tail dependence coefficients. In this thesis, the full set of extremal coefficients of a max-stable process is captured in the so-called extremal coefficient function (ECF), and the full set of upper tail dependence coefficients in the tail correlation function (TCF). Chapter 2 deals with a complete characterization of the ECF in terms of negative definiteness. For each ECF a corresponding max-stable process is constructed, which takes an exceptional role among max-stable processes with identical ECF. This leads to sharp lower bounds for the finite-dimensional distributions of arbitrary max-stable processes in terms of their ECF. Chapters 3 and 4 are concerned with the class of TCFs. Chapter 3 exhibits this class as an infinite-dimensional compact convex polytope. It is shown that the set of all TCFs (of not necessarily max-stable processes) coincides with the set of TCFs stemming from max-stable processes. Chapter 4 compares the TCFs of widely used stationary max-stable processes such as Mixed Moving Maxima, Extremal Gaussian and Brown-Resnick processes. Finally, in Chapter 5, Brown-Resnick processes on the sphere and other spaces admitting a compact group action are considered, and a Mixed Moving Maxima representation is derived.
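As an illustration of the quantities collected in the TCF, a rank-based finite-threshold estimator of the upper tail dependence coefficient can be sketched as follows (the threshold q = 0.95 and the simulated examples are arbitrary choices, not from the thesis):

```python
import numpy as np

def empirical_tail_dep(u, v, q=0.95):
    # Finite-threshold estimate of the upper tail dependence coefficient
    # lambda_U = lim_{q -> 1} P(U > q | V > q), after a rank transform of
    # both samples to approximately uniform margins. Note that for
    # independent data the estimate at a fixed q < 1 is about 1 - q, not 0.
    n = len(u)
    ru = np.argsort(np.argsort(u)) / (n - 1.0)
    rv = np.argsort(np.argsort(v)) / (n - 1.0)
    return np.mean((ru > q) & (rv > q)) / np.mean(rv > q)

rng = np.random.default_rng(1)
z = rng.standard_normal(20000)
lam_comonotone = empirical_tail_dep(z, z)                       # fully dependent
lam_indep = empirical_tail_dep(z, rng.standard_normal(20000))   # independent
```

For a max-stable process one would evaluate such estimates at many spatial lags to obtain an empirical TCF and compare it against the model families discussed in Chapter 4.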

    Gaussian Processes for Spatiotemporal Modelling

    A statistical framework for spatiotemporal modelling should ideally be able to assimilate different types of data from different sources. Gaussian processes are a commonly used tool for interpolating values across time and space domains. In this thesis we work on extending the Gaussian process framework to deal with diverse noise model assumptions. We present a model based on a hybrid approach that combines some of the features of the discriminative and generative perspectives, allowing continuous dimensionality reduction of hybrid discrete-continuous data, discriminative classification with missing inputs, and manifold learning informed by class labels. We present an application of malaria density modelling across Uganda using administrative records. This disease represents a threat to approximately 3.3 billion people around the globe. The analysis of malaria based on the available records faces two main complications: noise induced by a highly variable rate of reporting health facilities, and lack of comparability across time due to changes in district delimitations. We define a Gaussian process model able to assimilate these features of the data and provide insight into the generating process behind the records. Finally, a method to monitor malaria case counts is proposed. We use vector-valued covariance kernels to analyze the time series components individually. The short-term variations of the infection are divided into four cyclical phases. The graphical tool provided can help with quick response planning and resource allocation.
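A minimal sketch of the basic interpolation tool underlying all of this, the Gaussian process posterior mean under an RBF kernel with i.i.d. Gaussian noise (the hyperparameters here are arbitrary, and this is far simpler than the hybrid and vector-valued models of the thesis):

```python
import numpy as np

def gp_posterior_mean(X, y, Xstar, ell=0.2, noise=1e-6):
    # GP regression posterior mean with a squared-exponential (RBF) kernel:
    #   m(x*) = k(x*, X) (K + noise * I)^{-1} y
    # The noise term both models observation error and regularizes K.
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xstar, X) @ np.linalg.solve(K, y)

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * X)
pred = gp_posterior_mean(X, y, X)
```

Swapping the Gaussian noise for count-appropriate likelihoods, and the scalar kernel for a vector-valued one, is the kind of extension the thesis pursues for the malaria records.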

    Numerical Approximation of Partial Differential Equations Involving Fractional Differential Operators

    The negative powers of an elliptic operator can be approximated via its Dunford-Taylor integral representation; that is, we approximate the Dunford-Taylor integral with an exponentially convergent sinc quadrature scheme and discretize the integrand (a diffusion-reaction problem) at each quadrature point using the finite element method. In this work, we apply this discretization strategy to a parabolic problem involving fractional powers of elliptic operators and to a stationary problem involving the integral fractional Laplacian. The approximation of the parabolic problem is twofold: the homogeneous problem and the non-homogeneous problem. We propose an approximation scheme for the homogeneous problem based on a complex-valued integral representation of the solution operator. An exponentially convergent sinc quadrature scheme with a hyperbolic contour and a complex-valued finite element method are developed. The approximation of the non-homogeneous problem in space follows the same idea as for the homogeneous problem, but we additionally need to discretize the problem in the time domain. Here we consider two different approaches: a pseudo-midpoint quadrature scheme in time based on Duhamel's principle, and the Crank-Nicolson time stepping method. Both methods guarantee second order convergence in time but require different sinc quadrature schemes to approximate the corresponding fractional operators. The time stepping method is stable provided that the sinc quadrature spacing is sufficiently small. For the approximation of the stationary problem involving the integral fractional Laplacian, we consider a Dunford-Taylor integral representation of the bilinear form in the weak formulation. After approximating the integral with a sinc quadrature scheme, we need to approximate the integrand at each quadrature point, which contains a solution of a diffusion-reaction equation defined on the whole space.
We approximate the integrand problem on a truncated domain using the finite element method. For both problems, we provide L² error estimates between the solutions and their final approximations. A numerical implementation and results illustrating the behavior of the algorithms are also provided.
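The quadrature strategy can be illustrated in miniature by replacing the finite element solves with dense linear algebra on a small symmetric positive definite matrix (the step size and truncation level below are ad hoc choices):

```python
import numpy as np

def frac_inverse_power(A, s, k=0.1, N=300):
    # Sinc (trapezoid) quadrature for the Balakrishnan/Dunford-Taylor formula
    #   A^{-s} = sin(pi s)/pi * int_0^inf t^{-s} (t I + A)^{-1} dt, 0 < s < 1,
    # after the substitution t = e^y, which maps it to an integral over the
    # real line with exponentially decaying integrand. Each quadrature point
    # costs one solve with the shifted operator e^{y_j} I + A -- the
    # diffusion-reaction problem in the PDE setting.
    n = A.shape[0]
    acc = np.zeros((n, n))
    for j in range(-N, N + 1):
        y = j * k
        acc += np.exp((1.0 - s) * y) * np.linalg.inv(np.exp(y) * np.eye(n) + A)
    return (np.sin(np.pi * s) * k / np.pi) * acc

A = np.array([[2.0, 1.0], [1.0, 3.0]])
approx = frac_inverse_power(A, s=0.5)   # approximates A^{-1/2}
```

The error decays exponentially in 1/k once the truncation range N*k covers the integrand's tails, which is the "exponentially convergent" behavior exploited above.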

    A Quasi-Likelihood Approach to Zero-Inflated Spatial Count Data

    The increased accessibility of data that are geographically referenced and correlated increases the demand for techniques of spatial data analysis. The subset of such data comprised of discrete counts exhibits particular difficulties, and the challenges increase further when a large proportion (typically 50% or more) of the counts are zero-valued. Such scenarios arise in many applications across numerous fields of research, and it is often desirable to draw inferences about subtleties of the process despite the limited information available about the underlying stochastic mechanism generating the data. An ecological example provides the impetus for the research in this thesis: when observations for a species are recorded over a spatial region, and many of the counts are zero-valued, are the abundant zeros due to bad luck, or are aspects of the region making it unsuitable for the survival of the species? In the framework of generalized linear models, we first develop a zero-inflated Poisson generalized linear regression model, which explains the variability of the responses given a set of measured covariates and additionally allows for the distinction of two kinds of zeros: sampling ("bad luck") zeros, and structural zeros that provide insight into the data-generating process. We then adapt this model to the spatial setting by incorporating dependence within the model via a general, leniently defined quasi-likelihood strategy, which provides consistent, efficient and asymptotically normal estimators even under erroneous assumptions about the covariance structure. In addition to this robustness to dependence misspecification, our quasi-likelihood model overcomes the need for complete specification of a probability model, rendering it very general and relevant to many settings. To complement the developed regression model, we further propose methods for the simulation of zero-inflated spatial stochastic processes.
This is done by deconstructing the entire process into a mixed, marked spatial point process: we augment existing algorithms for the simulation of spatial marked point processes with a stochastic mechanism that generates zero-abundant marks (counts) at each location. We propose several such mechanisms, and consider interaction and dependence processes both for random locations and over a lattice.
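The two kinds of zeros can be made explicit in a small sketch of the zero-inflated Poisson model itself, without any spatial dependence (parameter values are illustrative):

```python
import math
import numpy as np

def zip_pmf(k, lam, pi):
    # Zero-inflated Poisson: with probability pi the count is a structural
    # zero; otherwise it is an ordinary Poisson(lam) draw, whose own zeros
    # are the sampling ("bad luck") zeros. Hence
    #   P(Y = 0) = pi + (1 - pi) e^{-lam},
    #   P(Y = k) = (1 - pi) e^{-lam} lam^k / k!  for k >= 1.
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1.0 - pi) * poisson

def zip_sample(n, lam, pi, rng):
    # Simulate by first flagging structural zeros, then drawing counts.
    structural = rng.random(n) < pi
    return np.where(structural, 0, rng.poisson(lam, n))

rng = np.random.default_rng(2)
draws = zip_sample(100_000, lam=3.0, pi=0.4, rng=rng)
```

In the thesis this marginal model is embedded in a regression (lam and pi depend on covariates) and then given spatial dependence through the quasi-likelihood machinery.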

    Numerical Integration as and for Probabilistic Inference

    Numerical integration or quadrature is one of the workhorses of modern scientific computing and a key operation to perform inference in intractable probabilistic models. The epistemic uncertainty about the true value of an analytically intractable integral identifies the integration task as an inference problem itself. Indeed, numerical integration can be cast as a probabilistic numerical method known as Bayesian quadrature (BQ). BQ leverages structural assumptions about the function to be integrated via properties encoded in the prior. A posterior belief over the unknown integral value emerges by conditioning the BQ model on an actively selected point set and corresponding function evaluations. Iterative updates to the Bayesian model turn BQ into an adaptive quadrature method that quantifies its uncertainty about the solution of the integral in a principled way. This thesis traces out the scope of probabilistic integration methods and highlights types of integration tasks that BQ excels at. These arise when sample efficiency is required and encodable prior knowledge about the integration problem of low to moderate dimensionality is at hand. The first contribution addresses transfer learning with BQ. It extends the notion of active learning schemes to cost-sensitive settings where cheap approximations to an expensive-to-evaluate integrand are available. The degeneracy of acquisition policies in simple BQ is lifted upon generalization to the multi-source, cost-sensitive setting. This motivates the formulation of a set of desirable properties for BQ acquisition functions. A second application considers integration tasks arising in statistical computations on Riemannian manifolds that have been learned from data. Unsupervised learning algorithms that respect the intrinsic geometry of the data rely on the repeated estimation of expensive and structured integrals. Our custom-made active BQ scheme outperforms conventional integration tools for Riemannian statistics. 
Despite their undeniable benefits, BQ schemes provide limited flexibility to construct suitable priors while keeping the inference step tractable. In a final contribution, we identify the ubiquitous integration problem of computing multivariate normal probabilities as a type of integration task that is structurally taxing for BQ. The method proposed instead is an elegant algorithm based on Markov chain Monte Carlo that permits both sampling from, and estimating the normalization constant of, linearly constrained Gaussians that contain an arbitrarily small probability mass. As a whole, this thesis contributes to the wider goal of advancing integration algorithms to satisfy the needs imposed by contemporary probabilistic machine learning applications.
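Vanilla BQ in its simplest form can be sketched in a few lines: a squared-exponential GP prior over the integrand on [0, 1], for which the kernel mean has a closed form via the error function (the lengthscale, node set, and test integrand are arbitrary illustrations, not from the thesis):

```python
import numpy as np
from math import erf, sqrt, pi

def bq_estimate(f, nodes, ell=0.3, jitter=1e-8):
    # Bayesian quadrature for int_0^1 f(x) dx under a GP prior on f with
    # squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)).
    # The posterior mean of the integral is z^T K^{-1} f(nodes), where
    # z_i = int_0^1 k(x, x_i) dx is the kernel mean (closed form via erf).
    x = np.asarray(nodes, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell ** 2))
    K += jitter * np.eye(len(x))        # small jitter for conditioning
    c = ell * sqrt(2.0)
    z = np.array([ell * sqrt(pi / 2.0) * (erf((1.0 - xi) / c) - erf(-xi / c))
                  for xi in x])
    return z @ np.linalg.solve(K, f(x))

est = bq_estimate(lambda t: np.sin(3.0 * t), np.linspace(0.0, 1.0, 10))
```

The same posterior also yields a variance over the integral value, which is what turns BQ into the uncertainty-aware, actively sampled quadrature method described above.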