
    Numerically Stable Approximate Bayesian Methods for Generalized Linear Mixed Models and Linear Model Selection

    Approximate Bayesian inference methods offer fast alternatives to Markov chain Monte Carlo (MCMC) methods for fitting Bayesian models, often with only a slight loss of accuracy. In this thesis, we consider variable selection for linear models, and zero-inflated mixed models. Variable selection for linear regression models is ubiquitous in applied statistics. We use the popular g-prior (Zellner, 1986) for model selection of linear models with normal priors, where g is a prior hyperparameter. We derive exact expressions for the model selection Bayes factors in terms of special functions depending on the sample size, the number of covariates, and the R-squared of the model. We show that these expressions are accurate, fast to evaluate, and numerically stable. An R package, blma, for Bayesian linear model averaging using these exact expressions has been released on GitHub. We extend the Particle EM method of Rockova (2017) using Particle Variational Approximation and the exact posterior marginal likelihood expressions to derive a computationally efficient algorithm for model selection on data sets with many covariates. Our algorithm performs well relative to existing algorithms, completing in 8 seconds on a model selection problem with a sample size of 600 and 7200 covariates. We also consider zero-inflated models, which have many applications in areas such as manufacturing and public health but pose numerical issues when fitted to data. We apply a variational approximation to zero-inflated Poisson mixed models with Gaussian-distributed random effects, using a combination of variational Bayes (VB) and the Gaussian Variational Approximation (GVA). We also incorporate a novel parameterisation of the covariance of the GVA using the Cholesky factor of the precision matrix, similar to Tan and Nott (2018), to resolve the associated numerical difficulties.
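
    For fixed g, the Bayes factor of a model against the null has a well-known closed form in the sample size n, the number of covariates p, and R-squared (Liang et al., 2008); the thesis's exact expressions generalise this to hyperpriors on g via special functions. Below is a minimal sketch of the fixed-g special case only, evaluated on the log scale for numerical stability. The function name and example values are illustrative, not from the thesis.

```python
import numpy as np

def g_prior_log_bf(n, p, r2, g):
    """Log Bayes factor of a p-covariate linear model with coefficient of
    determination r2 against the null model, under Zellner's g-prior with
    fixed g (closed form as in Liang et al., 2008)."""
    # log1p keeps the computation stable for large n and g.
    return 0.5 * (n - p - 1) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1.0 - r2))

# Example: n = 600 observations, 5 covariates, R^2 = 0.4, and the common
# unit-information choice g = n.
print(g_prior_log_bf(n=600, p=5, r2=0.4, g=600.0))
```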

    Belief Propagation Reconstruction for Discrete Tomography

    We consider the reconstruction of a two-dimensional discrete image from a set of tomographic measurements corresponding to the Radon projection. Assuming that the image has a structure where neighbouring pixels have a larger probability of taking the same value, we follow a Bayesian approach and introduce a fast message-passing reconstruction algorithm based on belief propagation. For numerical results, we specialize to the case of binary tomography. We test the algorithm on binary synthetic images with different length scales and compare our results against a more usual convex optimization approach. We investigate the reconstruction error as a function of the number of tomographic measurements, corresponding to the number of projection angles. The belief propagation algorithm turns out to be more efficient than the convex optimization algorithm, both in terms of recovery bounds for noise-free projections and in terms of reconstruction quality when moderate Gaussian noise is added to the projections.
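
    The paper's factor graph couples an Ising-like smoothness prior with Radon-projection constraints; reproducing that is involved, so here is a hedged, self-contained sketch of the message-passing core alone: loopy belief propagation for a binary image under the same neighbouring-pixel prior, with noisy per-pixel observations standing in for the projection factors. All names and parameter values are illustrative, not from the paper.

```python
import numpy as np

def ising_bp_denoise(noisy, J=0.8, h_scale=0.7, iters=50):
    """Loopy belief propagation for a binary (+/-1) image under an Ising
    smoothness prior: neighbouring pixels prefer equal values. Messages
    are stored in log-odds form; for coupling J the update is
    m_ij <- atanh( tanh(J) * tanh(h_i + sum_{k in N(i)\\j} m_ki) ).
    h_scale ~ 0.5 * log((1 - p_flip) / p_flip) for a 20% flip rate."""
    H, W = noisy.shape
    h = h_scale * noisy                                # local evidence fields
    # One incoming message per direction: 0 = from left neighbour,
    # 1 = from right, 2 = from above, 3 = from below.
    m = np.zeros((4, H, W))
    for _ in range(iters):
        total = h + m.sum(axis=0)
        new = np.zeros_like(m)
        # Cavity fields: sender's total minus the reverse message.
        new[0, :, 1:] = total[:, :-1] - m[1, :, :-1]   # into (i, j) from left
        new[1, :, :-1] = total[:, 1:] - m[0, :, 1:]    # from right
        new[2, 1:, :] = total[:-1, :] - m[3, :-1, :]   # from above
        new[3, :-1, :] = total[1:, :] - m[2, 1:, :]    # from below
        m = np.arctanh(np.tanh(J) * np.tanh(new))
    return np.sign(h + m.sum(axis=0))                  # pixelwise MAP estimate

rng = np.random.default_rng(0)
truth = np.ones((32, 32)); truth[8:24, 8:24] = -1.0            # binary test image
noisy = np.where(rng.random(truth.shape) < 0.2, -truth, truth)  # flip 20% of pixels
recon = ising_bp_denoise(noisy)
print("error rate:", np.mean(recon != truth))
```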

    Quantitative comparisons of satellite observations and cloud models

    Microwave radiation interacts directly with precipitating particles and can therefore be used to compare microphysical properties found in models with those found in nature. Lower frequencies (< 37 GHz) can detect the emission signals from raining clouds over radiometrically cold ocean surfaces, while higher frequencies (≥ 37 GHz) are more sensitive to scattering by precipitating-sized ice particles in convective storms over high-emissivity land, which lends them particular capabilities for different applications. Both are explored here, with a different scenario for each case: a comparison of two rainfall retrievals over ocean, and a comparison of a cloud model simulation to satellite observations over land. Both the Goddard Profiling algorithm (GPROF) and the European Centre for Medium-Range Weather Forecasts (ECMWF) one-dimensional + four-dimensional variational analysis (1D+4D-Var) rainfall retrievals are inversion algorithms based on Bayes' theorem. Differences stem primarily from the a priori information: GPROF uses an observationally generated a priori database, while ECMWF 1D-Var uses the model forecast First Guess (FG) fields. The relative similarity of the two approaches means that comparisons can shed light on the differences produced by the a priori information. Case studies have found that differences can be classified into four categories based upon the agreement in the brightness temperatures (Tbs) and in the microphysical properties of Cloud Water Path (CWP) and Rain Water Path (RWP) space. We found a category of special interest in which both retrievals converge to similar Tb through their minimization procedures but produce different CWP and RWP. The similarity in Tb can be attributed to comparable Total Water Path (TWP) between the two retrievals, while the disagreement in the microphysics is caused by their different degrees of constraint of the cloud/rain ratio by the observations. This situation occurs frequently, accounting for 46.9% of the one month of 1D-Var retrievals examined. To attain better-constrained cloud/rain ratios and improved retrieval quality, this study suggests the implementation of higher microwave frequency channels in the 1D-Var algorithm.
    Cloud Resolving Models (CRMs) offer an important pathway to interpret satellite observations of the microphysical properties of storms. High-frequency microwave brightness temperatures (Tbs) respond to precipitating-sized ice particles and can therefore be compared with simulated Tbs at the same frequencies. By clustering the Tb vectors at these frequencies, the scene can be classified into distinct microphysical regimes, in other words, cloud types. The properties of each cloud type in the simulated scene are compared to those in the observed scene to identify discrepancies in microphysics within that cloud type. A convective storm over the Amazon observed by the Tropical Rainfall Measuring Mission (TRMM) is simulated using the Regional Atmospheric Modeling System (RAMS) in a semi-idealized setting, and four regimes are defined within the scene using cluster analysis: the 'clear sky/thin cirrus' cluster, the 'cloudy' cluster, the 'stratiform anvil' cluster, and the 'convective' cluster. The relationship between the Tb difference of 37 and 85 GHz and the Tb at 85 GHz is found to contain important information about microphysical properties such as hydrometeor species and size distributions. Cluster-by-cluster comparison between the observations and the simulations discloses model biases, including the overproduction of supercooled water and large hail particles. The detected biases shed light on how the model should be adjusted to generate more realistic microphysical relationships for each cluster. Guided by the model/observation discrepancies in the 'convective' cloud cluster, a new simulation is performed that provides dynamic adjustments by generating more but smaller hail particles.
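
    As an illustration of the clustering step, here is a small sketch that groups (Tb37 - Tb85, Tb85) vectors into four regimes with k-means, mirroring the four clusters named above. The data generation is a synthetic stand-in: in the study, the vectors come from TRMM observations and RAMS-simulated Tbs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for per-pixel brightness temperatures (K) at 37 and
# 85 GHz; ice scattering depresses 85 GHz more strongly than 37 GHz.
rng = np.random.default_rng(1)
n_pix = 4000
tb85 = rng.uniform(150.0, 290.0, n_pix)
tb37 = tb85 + rng.uniform(0.0, 60.0, n_pix)

# Cluster the (Tb37 - Tb85, Tb85) vectors into four microphysical regimes,
# echoing the clear-sky/cloudy/stratiform-anvil/convective split above.
features = np.column_stack([tb37 - tb85, tb85])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Per-cluster summaries: these are the quantities one would then compare
# between the observed and simulated scenes, cluster by cluster.
for k in range(4):
    sel = labels == k
    print(f"cluster {k}: n={sel.sum():5d}, "
          f"mean Tb85={tb85[sel].mean():6.1f} K, "
          f"mean dTb={features[sel, 0].mean():5.1f} K")
```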

    Searching for Gravitational Waves Using Pulsar Timing Arrays

    Gravitational Waves (GWs) are tiny ripples in the fabric of spacetime predicted by Einstein's theory of General Relativity. Pulsar timing arrays (PTAs) offer a unique opportunity to detect low-frequency GWs in the near future. Such a detection would be complementary to both the LISA and LIGO GW efforts. In this frequency band, the expected sources of GWs are Supermassive Black Hole Binaries (SMBHBs), which will most likely form an ensemble creating a stochastic GW background, with possibly a few nearby or massive sources that will be individually resolvable. A direct detection of GWs will open a new window into the fields of astronomy and astrophysics by allowing us to constrain the coalescence rate of SMBHBs, providing further tests of the theory of General Relativity, and giving us access to properties of black holes not accessible by current astronomical techniques. This dissertation focuses primarily on the development of several robust data analysis pipelines for the detection and characterization of continuous GWs and a stochastic GW background. The data analysis problem for PTAs is quite difficult, as one must fully take into account the timing model that must be fit in order to obtain the residuals, uneven sampling (including large gaps), and potential red noise processes. The data analysis techniques presented here handle all of these effects while allowing additional freedom in parameterizing the noise present in the data. The accumulated work of this dissertation has resulted in a fully functional, robust, and efficient data analysis pipeline that has been successfully applied to the 5- and 9-year NANOGrav data releases.
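
    One standard way the PTA literature handles the timing-model fit is to marginalize it analytically and work in a projected data space (the 'G-matrix' formulation of van Haasteren & Levin 2013). The sketch below is an illustrative implementation of that single step under a supplied noise covariance, not the dissertation's pipeline; the toy design matrix and noise values are assumptions.

```python
import numpy as np

def marginalized_log_likelihood(residuals, M, C):
    """Gaussian log-likelihood of timing residuals with the timing-model
    fit marginalized analytically. M is the n x m timing-model design
    matrix; C is the n x n noise covariance (white, red, and GW terms)."""
    n, m = M.shape
    # G spans the null space of M^T: the last n - m left singular vectors.
    U, _, _ = np.linalg.svd(M, full_matrices=True)
    G = U[:, m:]
    Cp = G.T @ C @ G                       # covariance in the projected space
    r = G.T @ residuals                    # projected residuals
    _, logdet = np.linalg.slogdet(Cp)
    return -0.5 * (r @ np.linalg.solve(Cp, r) + logdet
                   + (n - m) * np.log(2.0 * np.pi))

# Toy example: quadratic spin-down model, white noise plus a red-noise
# term with a long correlation timescale in the covariance.
t = np.linspace(0.0, 5.0, 200)             # observing epochs (arbitrary units)
M = np.column_stack([np.ones_like(t), t, t**2])
C = (1e-7) ** 2 * np.eye(len(t)) \
    + (5e-7) ** 2 * np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0)
residuals = np.random.default_rng(2).multivariate_normal(np.zeros(len(t)), C)
print(marginalized_log_likelihood(residuals, M, C))
```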

    ISGSR 2011 - Proceedings of the 3rd International Symposium on Geotechnical Safety and Risk

    Scientific standards applicable to publication of BAW Proceedings: http://izw.baw.de/publikationen/vzb_dokumente_oeffentlich/0/2020_07_BAW_Scientific_standards_conference_proceedings.pd

    Stochastic Models of Critical Operations


    Constraining the anisotropic expansion of the universe with Type Ia supernovae and improving the treatment of selection effects within Bayesian hierarchical models

    In this thesis, I aim to apply advanced methods in Bayesian statistical modelling to Type Ia supernova (SNIa) data to determine tighter constraints on the fiducial Lambda-Cold-Dark-Matter (LCDM) cosmology and to improve the modelling of systematic uncertainties in the data. The body of work covered herein can be broadly classified into two main topics. First, I re-examine the contentious question of constraints on anisotropic expansion from SNIa in the light of a novel determination of peculiar velocities, which are crucial to test isotropy with SNe, out to distances < 200/h Mpc. The Bayesian hierarchical model BAHAMAS is adopted to constrain a dipole in the distance modulus in the context of the LCDM model, and the deceleration parameter in a phenomenological Cosmographic expansion. I find no evidence for anisotropic expansion and place a tight upper bound on the amplitude of a dipole in both the LCDM setting and the Cosmographic expansion approach. Using Bayesian model comparison, I obtain posterior odds in excess of 900:1 (640:1) against a constant-in-redshift dipole for LCDM (the Cosmographic expansion). Second, one of the modern problems of supernova cosmology is accounting for selection effects caused by Malmquist bias in a principled way. Here, I present a complete formalism for handling selection effects in Type Ia supernova (SNIa) cosmology in the context of Bayesian hierarchical modelling. I demonstrate the method on simulated data sets where selection cuts are made on the apparent magnitude, and show that previous results by Rubin et al. (2015) are incorrect and can lead to biased reconstruction of cosmological parameters. I show how this formalism is easily extended to include the Phillips corrections that are used to standardize SNe. The formalism presented exhibits better statistical properties, in terms of bias and mean squared error, relative to a traditional ad hoc style correction and the model of Rubin et al. (2015).
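
    As a toy, univariate analogue of the selection-effect problem (not the thesis's full hierarchical formalism), the sketch below simulates a magnitude-limited sample, shows the bright-ward Malmquist bias of the naive estimate, and recovers the true mean by dividing each object's likelihood by its selection probability, i.e. a truncated-normal likelihood. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(3)

# Latent apparent magnitudes scatter around a true mean; the survey keeps
# only objects brighter (numerically smaller) than a magnitude limit.
mu_true, sigma, m_cut = 24.0, 0.4, 24.0
mags = rng.normal(mu_true, sigma, size=50_000)
obs = mags[mags < m_cut]                     # selection cut

print(f"naive mean of selected sample: {obs.mean():.3f}")   # biased bright-ward

# Principled correction: divide each object's likelihood by the selection
# probability P(m < m_cut | mu), giving a truncated-normal likelihood.
def neg_loglike(mu):
    return -(norm.logpdf(obs, mu, sigma) - norm.logcdf(m_cut, mu, sigma)).sum()

mu_hat = minimize_scalar(neg_loglike, bounds=(23.0, 25.0), method="bounded").x
print(f"selection-corrected estimate:  {mu_hat:.3f}")       # close to 24.0
```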