Objective Bayes and Conditional Frequentist Inference
Objective Bayesian methods have garnered considerable interest and support among statisticians,
particularly over the past two decades. It has often been overlooked, however, that in
some cases the appropriate frequentist inference to match is a conditional one. We present
various methods for extending the probability matching prior (PMP) methods to conditional
settings. A method based on saddlepoint approximations is found to be the most
tractable, and we demonstrate its use in the most common exact ancillary statistic models.
As part of this analysis, we give a proof of an exactness property of a particular PMP in
location-scale models. We use the proposed matching methods to investigate the relationships
between conditional and unconditional PMPs. A key component of our analysis is a
numerical study of the performance of probability matching priors from both a conditional
and unconditional perspective in exact ancillary models. In concluding remarks we propose
several routes for future research.
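For orientation, the exactness property mentioned above can be illustrated by a standard result in this area (stated here as background, not as the thesis's own theorem): in a location-scale model, the right-Haar prior is exactly probability matching, conditionally on the configuration ancillary.

```latex
% Location-scale model: X_i = \mu + \sigma\,\varepsilon_i, \varepsilon_i \sim f known.
\pi(\mu,\sigma) \,\propto\, \sigma^{-1},
\qquad
P_{\mu,\sigma}\bigl\{\,\mu \le q_{1-\alpha}(X) \,\bigm|\, a \,\bigr\} \;=\; 1-\alpha ,
```

where \(q_{1-\alpha}(X)\) is the posterior \((1-\alpha)\)-quantile of \(\mu\) under this prior, and \(a = (a_1,\dots,a_n)\), \(a_i = (x_i - \hat\mu)/\hat\sigma\), is the configuration ancillary. The matching is exact, not merely asymptotic, which is what makes location-scale families the natural test bed for conditional PMP methods.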
2D shape classification and retrieval
We present a novel correspondence-based technique for efficient shape classification and retrieval. Shape boundaries are described by a set of (ad hoc) equally spaced points – avoiding the need to extract “landmark points”. By formulating the correspondence problem in terms of a simple generative model, we are able to efficiently compute matches that incorporate scale, translation, rotation and reflection invariance. A hierarchical scheme with likelihood cut-off provides additional speed-up. In contrast to many shape descriptors, the concept of a mean (prototype) shape follows naturally in this setting. This enables model based classification, greatly reducing the cost of the testing phase. Equal spacing of points can be defined in terms of either perimeter distance or radial angle. It is shown that combining the two leads to improved classification/retrieval performance.
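The two core steps the abstract describes, equal spacing of boundary points and similarity-invariant matching, can be sketched as follows. This is a minimal illustration, not the paper's method: function names are mine, the generative model and the hierarchical likelihood cut-off are omitted, and reflection invariance (which the paper also handles) is not included here.

```python
import numpy as np

def resample_contour(points, n=64):
    """Resample a closed 2-D contour to n points equally spaced
    along the perimeter (arc length), avoiding landmark extraction."""
    pts = np.asarray(points, dtype=float)
    closed = np.vstack([pts, pts[:1]])            # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.linspace(0.0, s[-1], n, endpoint=False)
    x = np.interp(targets, s, closed[:, 0])
    y = np.interp(targets, s, closed[:, 1])
    return np.stack([x, y], axis=1)

def match_cost(a, b):
    """Residual after optimal translation, scale and rotation:
    the full Procrustes distance between corresponding point sets,
    computed via complex arithmetic."""
    za = a[:, 0] + 1j * a[:, 1]
    zb = b[:, 0] + 1j * b[:, 1]
    za = za - za.mean()                           # remove translation
    zb = zb - zb.mean()
    za /= np.linalg.norm(za)                      # remove scale
    zb /= np.linalg.norm(zb)
    corr = np.vdot(za, zb)                        # optimal rotation aligns its phase
    return float(np.sqrt(max(0.0, 1.0 - abs(corr) ** 2)))
```

A mean (prototype) shape then follows naturally, as the abstract notes: after alignment, one can simply average the registered point sets.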
Computational Bayesian Methods Applied to Complex Problems in Bio and Astro Statistics
In this dissertation we apply computational Bayesian methods to three distinct problems. In the first chapter, we address the issue of unrealistic covariance matrices used to estimate collision probabilities. We model covariance matrices with a Bayesian Normal-Inverse-Wishart model, which we fit with Gibbs sampling. In the second chapter, we determine the sample sizes necessary to achieve a particular interval width and establish non-inferiority in the analysis of prevalences using two fallible tests; to this end, we use a third-order asymptotic approximation. In the third chapter, we synthesize evidence across multiple domains from measurements taken longitudinally over time, featuring a substantial amount of structurally missing data, and fit the model with Hamiltonian Monte Carlo in a simulation study to analyze how estimates of a parameter of interest change across sample sizes.
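The Normal-Inverse-Wishart model in the first chapter admits a standard conjugate update, which is the building block any Gibbs sampler for it would use. The sketch below shows only that textbook update (the dissertation's actual sampler presumably conditions on additional structure); the function name and hyperparameter notation are mine.

```python
import numpy as np

def niw_posterior(X, mu0, kappa0, nu0, Psi0):
    """Conjugate Normal-Inverse-Wishart update for i.i.d. data X (n x d).
    Returns posterior hyperparameters (mu_n, kappa_n, nu_n, Psi_n);
    the posterior mean of the covariance is Psi_n / (nu_n - d - 1)."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    xbar = X.mean(axis=0)
    S = (X - xbar).T @ (X - xbar)                 # centered scatter matrix
    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n    # precision-weighted mean
    dev = (xbar - mu0).reshape(-1, 1)
    Psi_n = Psi0 + S + (kappa0 * n / kappa_n) * (dev @ dev.T)
    return mu_n, kappa_n, nu_n, Psi_n
```

Because the NIW prior is fully conjugate to the multivariate normal likelihood, this update yields the exact posterior in closed form; Gibbs sampling becomes necessary once the covariance model is embedded in a larger hierarchy.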
Identifying WIMP dark matter from particle and astroparticle data
One of the most promising strategies to identify the nature of dark matter
consists in the search for new particles at accelerators and with so-called
direct detection experiments. Working within the framework of simplified
models, and making use of machine learning tools to speed up statistical
inference, we address the question of what we can learn about dark matter from
a detection at the LHC and a forthcoming direct detection experiment. We show
that with a combination of accelerator and direct detection data, it is
possible to identify newly discovered particles as dark matter, by
reconstructing their relic density assuming they are weakly interacting massive
particles (WIMPs) thermally produced in the early Universe, and demonstrating
that it is consistent with the measured dark matter abundance. An inconsistency
between these two quantities would instead point either towards additional
physics in the dark sector, or towards a non-standard cosmology, with a thermal
history substantially different from that of the standard cosmological model.

Comment: 24 pages (+21 pages of appendices and references) and 14 figures. v2: updated to match the JCAP version; includes minor clarifications in the text and updated references.
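The consistency test the abstract describes, comparing a reconstructed relic density with the measured dark matter abundance, can be caricatured with the textbook freeze-out approximation. This is an order-of-magnitude sketch only, not the paper's Boltzmann-equation and machine-learning pipeline; the constant, tolerance, and function names are my assumptions.

```python
# Standard thermal freeze-out estimate: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.

def relic_density(sigma_v_cm3_s):
    """Thermal WIMP relic abundance, one-line freeze-out approximation."""
    return 3e-27 / sigma_v_cm3_s

OMEGA_H2_OBSERVED = 0.120   # measured cold dark matter abundance (Planck 2018)

def consistent_with_wimp(sigma_v_cm3_s, tol_factor=3.0):
    """Crude consistency check: is the reconstructed abundance within a
    factor tol_factor of the observed value?  If not, the discrepancy
    points to extra dark-sector physics or a non-standard thermal history."""
    ratio = relic_density(sigma_v_cm3_s) / OMEGA_H2_OBSERVED
    return 1.0 / tol_factor < ratio < tol_factor
```

The canonical "WIMP miracle" cross-section of roughly 3 × 10⁻²⁶ cm³ s⁻¹ passes this check; a cross-section a hundred times larger would fail it, which is the kind of inconsistency the paper interprets as a sign of additional physics.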
Development of Landsat-based Technology for Crop Inventories: Appendices
There are no author-identified significant results in this report.
Beyond first-order asymptotics for Cox regression
To go beyond standard first-order asymptotics for Cox regression, we develop
parametric bootstrap and second-order methods. In general, computation of
p-values beyond first order requires more model specification than is
required for the likelihood function. It is problematic to specify a censoring
mechanism that can be taken seriously in detail, and conditioning on censoring
does not appear to be a viable alternative. We circumvent
this matter by employing a reference censoring model, matching the extent and
timing of observed censoring. Our primary proposal is a parametric bootstrap
method utilizing this reference censoring model to simulate inferential
repetitions of the experiment. It is shown that the most important part of
improvement on first-order methods - that pertaining to fitting nuisance
parameters - is insensitive to the assumed censoring model. This is supported
by numerical comparisons of our proposal to parametric bootstrap methods based
on the usual random censoring models, which are far less attractive to implement.
As an alternative to our primary proposal, we provide a second-order method
requiring less computing effort while providing more insight into the nature of
improvement on first-order methods. However, the parametric bootstrap method is
more transparent, and hence is our primary proposal. Indications are that
first-order partial likelihood methods are usually adequate in practice, so we
are not advocating routine use of the proposed methods. It is however useful to
see how best to check on first-order approximations, or improve on them, when
this is expressly desired.

Comment: Published at http://dx.doi.org/10.3150/13-BEJ572 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
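The first-order baseline this paper improves upon is the Cox partial likelihood, which can be written down in a few lines. The sketch below handles a single covariate and assumes no tied event times; it shows the quantity being maximized, not the paper's bootstrap or second-order corrections, and the function name is mine.

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one covariate (no ties):
    sum over observed events of
        beta * x_i - log( sum_{j in risk set at t_i} exp(beta * x_j) ).
    `events[i]` is 1 for an observed failure, 0 for a censored time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for k, i in enumerate(order):
        if events[i]:
            risk = order[k:]                      # subjects still at risk at t_i
            denom = sum(math.exp(beta * x[j]) for j in risk)
            ll += beta * x[i] - math.log(denom)
    return ll
```

Note that censored subjects contribute only through the risk sets, which is exactly why a full censoring mechanism need not be specified at first order; the reference-censoring bootstrap enters only when simulating inferential repetitions for higher-order accuracy.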
Bayesian inference for skew-symmetric distributions
Skew-symmetric distributions are a popular family of flexible distributions that
conveniently model non-normal features such as skewness, kurtosis and multimodality.
Unfortunately, their frequentist inference poses several difficulties, which may be adequately
addressed by means of a Bayesian approach. This paper reviews the main prior distributions proposed
for the parameters of skew-symmetric distributions, with special emphasis on the skew-normal and
the skew-t distributions which are the most prominent skew-symmetric models. The paper focuses
on the univariate case in the absence of covariates, but more general models are also discussed.
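The most prominent member of this family, the skew-normal, has the simple Azzalini density f(z) = 2 φ(z) Φ(αz), which makes the flexibility the abstract mentions easy to see: α = 0 recovers the normal, and the sign of α controls the direction of skewness. A minimal sketch (function name and parameterization are the usual ones, but the code is mine, not from the paper):

```python
import math

def skew_normal_pdf(x, loc=0.0, scale=1.0, alpha=0.0):
    """Azzalini skew-normal density:
        f(x) = (2/scale) * phi(z) * Phi(alpha * z),  z = (x - loc)/scale,
    where phi and Phi are the standard normal pdf and cdf.
    alpha = 0 recovers the ordinary normal density."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2.0)))
    return 2.0 * phi * Phi / scale
```

The frequentist difficulties the abstract alludes to are visible even here: at α = 0 the profile likelihood for α has a stationary point regardless of the data, which is one motivation for the Bayesian treatment the paper reviews.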
Keck Observations of the Young Metal-Poor Host Galaxy of the Super-Chandrasekhar-Mass Type Ia Supernova SN 2007if
We present Keck LRIS spectroscopy and -band photometry of the metal-poor,
low-luminosity host galaxy of the super-Chandrasekhar mass Type Ia supernova SN
2007if. Deep imaging of the host reveals its apparent magnitude to be
, which at the spectroscopically-measured redshift of
corresponds to an absolute magnitude of
. Galaxy color constrains the mass-to-light ratio,
giving a host stellar mass estimate of . Balmer
absorption in the stellar continuum, along with the strength of the 4000 Å
break, constrains the age of the dominant starburst in the galaxy to be
Myr, corresponding to a main-sequence
turn-off mass of . Using the R method of
calculating metallicity from the fluxes of strong emission lines, we determine
the host oxygen abundance to be ,
significantly lower than any previously reported spectroscopically measured
Type Ia supernova host galaxy metallicity. Our data show that SN 2007if is very
likely to have originated from a young, metal-poor progenitor.

Comment: 15 pages, 9 figures; accepted for publication in Ap
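The apparent-to-absolute magnitude conversion the abstract performs follows the standard low-redshift recipe: a Hubble-law distance from the redshift, then the distance modulus. A generic sketch (the H0 value, function name, and neglect of K-corrections and peculiar velocities are my simplifying assumptions, not the paper's analysis):

```python
import math

C_KM_S = 299792.458   # speed of light, km/s

def absolute_magnitude(m_app, z, H0=70.0):
    """Absolute magnitude from apparent magnitude and redshift, using the
    low-z Hubble-law distance d ~ c*z/H0 (in Mpc).  Adequate for nearby
    hosts; ignores K-corrections and peculiar velocities."""
    d_mpc = C_KM_S * z / H0
    mu = 5.0 * math.log10(d_mpc * 1e6 / 10.0)     # distance modulus, d in pc
    return m_app - mu
```

For example, an m = 20 source at z = 0.01 has a distance modulus of about 33.2 and hence M near -13, i.e. a very low-luminosity dwarf, which is the regime this host galaxy occupies.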