Simulation based Bayesian econometric inference: principles and some recent computational advances
In this paper we discuss several aspects of simulation-based Bayesian econometric inference. We start at an elementary level with the basic concepts of Bayesian analysis; evaluating integrals by simulation methods is a crucial ingredient in Bayesian inference. Next, the most popular and well-known simulation techniques are discussed: the Metropolis-Hastings algorithm and Gibbs sampling (the most popular Markov chain Monte Carlo methods) and importance sampling. After that, we discuss two recently developed sampling methods: adaptive radial-based direction sampling [ARDS], which makes use of a transformation to radial coordinates, and neural network sampling, which makes use of a neural network approximation to the posterior distribution of interest. Both methods are especially useful in cases where the posterior distribution is not well behaved, in the sense of having highly non-elliptical shapes. The simulation techniques are illustrated in several example models, such as a model for real US GNP and models for binary data of a US recession indicator.
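The Metropolis-Hastings algorithm named in the abstract can be sketched in a few lines. This is a generic random-walk variant targeting a hypothetical one-dimensional log-posterior, not the paper's econometric models; the target, step size, and chain length are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        samples[i] = x  # on rejection the current state is repeated
    return samples

# Hypothetical target: a standard-normal log-posterior (up to a constant)
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
kept = samples[5000:]  # discard burn-in
```

Only the ratio of posterior densities enters the acceptance step, which is why the method needs the posterior only up to its normalizing constant.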
Good, great, or lucky? Screening for firms with sustained superior performance using heavy-tailed priors
This paper examines historical patterns of ROA (return on assets) for a
cohort of 53,038 publicly traded firms across 93 countries, measured over the
past 45 years. Our goal is to screen for firms whose ROA trajectories suggest
that they have systematically outperformed their peer groups over time. Such a
project faces at least three statistical difficulties: adjustment for relevant
covariates, massive multiplicity, and longitudinal dependence. We conclude
that, once these difficulties are taken into account, demonstrably superior
performance appears to be quite rare. We compare our findings with other recent
management studies on the same subject, and with the popular literature on
corporate success. Our methodological contribution is to propose a new class of
priors for use in large-scale simultaneous testing. These priors are based on
the hypergeometric inverted-beta family, and have two main attractive features:
heavy tails and computational tractability. The family is a four-parameter
generalization of the normal/inverted-beta prior, and is the natural conjugate
prior for shrinkage coefficients in a hierarchical normal model. Our results
emphasize the usefulness of these heavy-tailed priors in large multiple-testing
problems, as they have a mild rate of tail decay in the marginal likelihood, a property long recognized to be important in testing.
Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics, http://dx.doi.org/10.1214/11-AOAS512
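Why a mild rate of tail decay matters can be illustrated with a simpler stand-in than the paper's hypergeometric inverted-beta family: compare posterior shrinkage under a light-tailed normal prior versus a heavy-tailed Cauchy prior for a single normal observation. The priors, observation, and quadrature grid below are illustrative assumptions.

```python
import numpy as np

def posterior_mean(y, log_prior, sigma=1.0):
    """Posterior mean of theta given y ~ N(theta, sigma^2), computed by
    quadrature on a fixed grid (fine enough for this illustration)."""
    t = np.linspace(-50.0, 50.0, 20001)
    logw = log_prior(t) - 0.5 * ((y - t) / sigma) ** 2
    w = np.exp(logw - logw.max())  # subtract max for numerical stability
    return float(np.sum(t * w) / np.sum(w))

def normal_logprior(t):
    return -0.5 * t**2        # N(0, 1): light tails

def cauchy_logprior(t):
    return -np.log1p(t**2)    # standard Cauchy: heavy tails

y = 10.0  # a large observed effect
shrunk = posterior_mean(y, normal_logprior)   # pulled halfway toward 0
barely = posterior_mean(y, cauchy_logprior)   # left almost at y
```

The light-tailed prior shrinks the large effect toward zero by half, while the heavy-tailed prior leaves it nearly unshrunk; in large-scale testing this lets genuinely large signals stand out while small ones are still shrunk aggressively.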
Characterising poroelastic materials in the ultrasonic range - A Bayesian approach
Acoustic fields scattered by poroelastic materials contain key information
about the materials' pore structure and elastic properties. Therefore, such
materials are often characterised with inverse methods that use acoustic
measurements. However, it has been shown that results from many existing
inverse characterisation methods agree poorly. One reason is that inverse
methods are typically sensitive to even small uncertainties in a measurement
setup, but these uncertainties are difficult to model and hence often
neglected. In this paper, we study characterising poroelastic materials in the
Bayesian framework, where measurement uncertainties can be taken into account,
and which allows us to quantify uncertainty in the results. Using the finite
element method, we simulate measurements where ultrasonic waves are incident on
a water-saturated poroelastic material at normal and oblique angles. We
consider uncertainties in the incidence angle and level of measurement noise,
and then explore the solution of the Bayesian inverse problem, the posterior
density, with an adaptive parallel tempering Markov chain Monte Carlo
algorithm. Results show that both the elastic and pore structure parameters can
be feasibly estimated from ultrasonic measurements.
Comment: Published in JSV. https://doi.org/10.1016/j.jsv.2019.05.02
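The parallel tempering idea used above can be sketched generically: one random-walk chain per temperature, with occasional state swaps between adjacent temperatures so that hot, freely-exploring chains feed distant modes down to the cold chain. The bimodal target, temperature ladder, and step sizes below are illustrative assumptions, not the poroelastic forward model.

```python
import numpy as np

def parallel_tempering(log_post, x0, n_iter, temps=(1.0, 2.0, 4.0, 8.0),
                       step=1.0, seed=0):
    """Run one random-walk MH chain per temperature T (each targeting
    log_post(x) / T) and propose state swaps between adjacent chains."""
    rng = np.random.default_rng(seed)
    xs = np.full(len(temps), float(x0))
    lps = np.array([log_post(float(x0))] * len(temps))
    cold = np.empty(n_iter)
    for it in range(n_iter):
        for k, T in enumerate(temps):  # within-chain MH updates
            prop = xs[k] + step * np.sqrt(T) * rng.standard_normal()
            lp = log_post(prop)
            if np.log(rng.uniform()) < (lp - lps[k]) / T:
                xs[k], lps[k] = prop, lp
        k = rng.integers(len(temps) - 1)  # pick a random adjacent pair
        # standard swap acceptance: (beta_k - beta_{k+1}) * (lp_{k+1} - lp_k)
        if np.log(rng.uniform()) < (1 / temps[k] - 1 / temps[k + 1]) * (lps[k + 1] - lps[k]):
            xs[[k, k + 1]] = xs[[k + 1, k]]
            lps[[k, k + 1]] = lps[[k + 1, k]]
        cold[it] = xs[0]  # record only the T = 1 chain
    return cold

# Hypothetical bimodal target: equal-weight normal modes at -5 and +5
bimodal = lambda x: np.logaddexp(-0.5 * (x - 5.0)**2, -0.5 * (x + 5.0)**2)
samples = parallel_tempering(bimodal, 0.0, 20000)
```

A single random-walk chain with unit steps would rarely cross the deep valley between the modes; the tempered chains cross it easily and the swaps transfer those crossings to the cold chain. (The paper's adaptive variant additionally tunes these settings on the fly.)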
Contributions to Mediation Analysis and First Principles Modeling for Mechanistic Statistical Analysis
This thesis contains three projects that propose novel methods for studying mechanisms that explain statistical relationships. The ultimate goal of each of these methods is to help researchers describe how or why complex relationships between observed variables exist.
The first project proposes and studies a method for recovering mediation structure in high dimensions. We take a dimension reduction approach that generalizes the "product of coefficients" concept for univariate mediation analysis through the optimization of a loss function. We devise an efficient algorithm for optimizing the product-of-coefficients-inspired loss function. Through extensive simulation studies, we show that the method is capable of consistently identifying mediation structure. Finally, two case studies are presented that demonstrate how the method can be used to conduct multivariate mediation analysis.
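The classical univariate "product of coefficients" that this project generalizes can be sketched on simulated data: the indirect effect of X on Y through mediator M is estimated as the product of the X-to-M coefficient and the M-to-Y coefficient (controlling for X). The true coefficients and sample size below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)                      # exposure
m = 0.6 * x + rng.standard_normal(n)            # mediator (true a = 0.6)
y = 0.5 * m + 0.3 * x + rng.standard_normal(n)  # outcome (true b = 0.5)

# a-path: regress M on X; b-path: regress Y on M while controlling for X
a = np.linalg.lstsq(np.c_[x, np.ones(n)], m, rcond=None)[0][0]
b = np.linalg.lstsq(np.c_[m, x, np.ones(n)], y, rcond=None)[0][0]
indirect = a * b  # product-of-coefficients estimate of the indirect effect
```

With many candidate mediators this pair of regressions no longer suffices, which is what motivates the dimension-reduction and loss-function formulation described above.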
The second project uses tools from conditional inference to improve the calibration of tests of univariate mediation hypotheses. The key insight of the project is that the non-Euclidean geometry of the null parameter space causes the test statistic’s sampling distribution to depend on a nuisance parameter. After identifying a statistic that is both sufficient for the nuisance parameter and approximately ancillary for the parameter of interest, we derive the test statistic’s limiting conditional sampling distribution. We additionally develop a non-standard bootstrap procedure for calibration in finite samples. We demonstrate through simulation studies that improved evidence calibration leads to substantial power increases over existing methods. This project suggests that conditional inference might be a useful tool in evidence calibration for other non-standard or otherwise challenging problems.
In the last project, we present a methodological contribution to a pharmaceutical science study of in vivo ibuprofen pharmacokinetics. We demonstrate how model misspecification in a first-principles analysis can be addressed by augmenting the model to include a term corresponding to an omitted source of variation. In previously used first-principles models, gastric emptying, which is pulsatile and stochastic, is modeled as first-order diffusion for simplicity. However, analyses suggest that the actual gastric emptying process is better described as a smooth unimodal function, with phase and amplitude varying by subject. Therefore, we adopt a flexible approach in which a highly idealized parametric version of gastric emptying is combined with a Gaussian process to capture deviations from the idealized form. These functions are characterized by their distributions, which allows us to learn their common and unique features across subjects even though these features are not directly observed. Through simulation studies, we show that the proposed approach is able to identify certain features of latent function distributions.
PhD, Statistics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163026/1/josephdi_1.pd