Bayesian astrostatistics: a backward look to the future
This perspective chapter briefly surveys: (1) past growth in the use of
Bayesian methods in astrophysics; (2) current misconceptions about both
frequentist and Bayesian statistical inference that hinder wider adoption of
Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian
modeling as a major future direction for research in Bayesian astrostatistics,
exemplified in part by presentations at the first ISI invited session on
astrostatistics, commemorated in this volume. It closes with an intentionally
provocative recommendation for astronomical survey data reporting, motivated by
the multilevel Bayesian perspective on modeling cosmic populations: that
astronomers cease producing catalogs of estimated fluxes and other source
properties from surveys. Instead, summaries of likelihood functions (or
marginal likelihood functions) for source properties should be reported (not
posterior probability density functions), including nontrivial summaries (not
simply upper limits) for candidate objects that do not pass traditional
detection thresholds.

Comment: 27 pp, 4 figures. A lightly revised version of a chapter in
"Astrostatistical Challenges for the New Astronomy" (Joseph M. Hilbe, ed.,
Springer, New York, forthcoming in 2012), the inaugural volume for the
Springer Series in Astrostatistics. Version 2 has minor clarifications and an
additional reference.
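The reporting recommendation above can be made concrete with a toy sketch: for a sub-threshold candidate source, summarize the flux likelihood function on a grid rather than reporting a point estimate or an upper limit. A simple Poisson measurement model is assumed here, and all numbers (counts, exposure, background) are hypothetical, not drawn from any survey.

```python
# Illustrative sketch only: summarize a source's flux likelihood on a
# grid instead of reporting a point estimate or upper limit. A Poisson
# measurement model with known background is assumed; all values are
# hypothetical.
import math

def flux_log_likelihood(flux_grid, counts, exposure, background):
    """log p(counts | flux) under a Poisson model, one value per
    grid point: lambda = flux * exposure + background."""
    out = []
    for f in flux_grid:
        lam = f * exposure + background
        out.append(counts * math.log(lam) - lam - math.lgamma(counts + 1))
    return out

# A faint candidate below a traditional detection threshold:
# 3 observed counts against an expected background of 2.1.
grid = [0.01 * i for i in range(1, 101)]
loglike = flux_log_likelihood(grid, counts=3, exposure=10.0, background=2.1)
```

A tabulation like `(grid, loglike)` is exactly the kind of nontrivial summary the chapter argues should replace a bare upper limit: downstream hierarchical analyses can multiply such likelihoods across sources without double-counting priors.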
A Naive Bayes Source Classifier for X-ray Sources
The Chandra Carina Complex Project (CCCP) provides a sensitive X-ray survey
of a nearby starburst region over >1 square degree in extent. Thousands of
faint X-ray sources are found, many concentrated into rich young stellar
clusters. However, significant contamination from unrelated Galactic and
extragalactic sources is present in the X-ray catalog. We describe the use of a
naive Bayes classifier to assign membership probabilities to individual
sources, based on source location, X-ray properties, and visual/infrared
properties. For the particular membership decision rule adopted, 75% of CCCP
sources are classified as members, 11% are classified as contaminants, and 14%
remain unclassified. The resulting sample of stars likely to be Carina members
is used in several other studies, which appear in a Special Issue of the ApJS
devoted to the CCCP.

Comment: Accepted for the ApJS Special Issue on the Chandra Carina Complex
Project (CCCP), scheduled for publication in May 2011. All 16 CCCP Special
Issue papers are available at
http://cochise.astro.psu.edu/Carina_public/special_issue.html through 2011 at
least. 19 pages, 7 figures.
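The classifier described above combines evidence from several feature groups under the naive (conditional independence) assumption. A minimal sketch of that combination step follows; the feature log-likelihoods, class priors, and class names are hypothetical, not the CCCP values or decision rule.

```python
# Minimal naive Bayes membership sketch. The per-feature
# log-likelihoods and priors below are hypothetical illustrations,
# not CCCP values.
import math

def naive_bayes_posterior(feature_loglikes, log_priors):
    """Combine per-feature log-likelihoods under the naive
    (conditional independence) assumption and normalize via
    log-sum-exp."""
    # feature_loglikes: {class: [log p(x_i | class) for each feature]}
    log_post = {c: log_priors[c] + sum(lls)
                for c, lls in feature_loglikes.items()}
    m = max(log_post.values())
    norm = math.log(sum(math.exp(v - m) for v in log_post.values())) + m
    return {c: math.exp(v - norm) for c, v in log_post.items()}

# Hypothetical source scored on three feature groups (location,
# X-ray properties, visual/infrared properties).
post = naive_bayes_posterior(
    {"member": [-1.0, -0.5, -0.8], "contaminant": [-2.0, -1.5, -1.2]},
    {"member": math.log(0.5), "contaminant": math.log(0.5)},
)
```

A decision rule such as the one the paper adopts would then threshold these posterior probabilities, leaving sources with intermediate values unclassified.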
Fast and scalable Gaussian process modeling with applications to astronomical time series
The growing field of large-scale time domain astronomy requires methods for
probabilistic data analysis that are computationally tractable, even with large
datasets. Gaussian Processes are a popular class of models used for this
purpose but, since the computational cost scales, in general, as the cube of
the number of data points, their application has been limited to small
datasets. In this paper, we present a novel method for Gaussian Process
modeling in one dimension, where the computational requirements scale linearly
with the size of the dataset. We demonstrate the method by applying it to
simulated and real astronomical time series datasets. These demonstrations are
examples of probabilistic inference of stellar rotation periods, asteroseismic
oscillation spectra, and transiting planet parameters. The method exploits
structure in the problem when the covariance function is expressed as a mixture
of complex exponentials, without requiring evenly spaced observations or
uniform noise. This form of covariance arises naturally when the process is a
mixture of stochastically driven damped harmonic oscillators -- providing a
physical motivation for and interpretation of this choice -- but we also
demonstrate that it can be a useful effective model in some other cases. We
present a mathematical description of the method and compare it to existing
scalable Gaussian Process methods. The method is fast and interpretable, with a
range of potential applications within astronomical data analysis and beyond.
We provide well-tested and documented open-source implementations of this
method in C++, Python, and Julia.

Comment: Updated in response to referee. Submitted to the AAS Journals.
Comments (still) welcome. Code available: https://github.com/dfm/celerit
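The covariance family the abstract describes, a mixture of complex exponentials, can be written as a sum of exponentially damped sinusoids. The sketch below evaluates such a kernel and a naive dense O(N^3) Gaussian Process log-likelihood for comparison; the paper's contribution is an algorithm that evaluates the same quantity in O(N) by exploiting the kernel's semiseparable structure. Parameter values and function names here are illustrative assumptions, not the paper's API.

```python
# Sketch of a damped-sinusoid mixture kernel,
#   k(tau) = sum_j exp(-c_j |tau|) * (a_j cos(d_j tau) + b_j sin(d_j tau)),
# evaluated densely. This naive log-likelihood costs O(N^3); the
# paper's method reaches O(N) for the same kernel class. All
# parameter values are hypothetical.
import numpy as np

def damped_sinusoid_kernel(tau, terms):
    tau = np.abs(tau)
    k = np.zeros_like(tau, dtype=float)
    for a, b, c, d in terms:
        k += np.exp(-c * tau) * (a * np.cos(d * tau) + b * np.sin(d * tau))
    return k

def gp_loglike_dense(t, y, terms, yerr):
    """Naive dense GP log-likelihood, for comparison purposes only."""
    K = damped_sinusoid_kernel(t[:, None] - t[None, :], terms)
    K[np.diag_indices_from(K)] += yerr**2  # measurement noise
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * (y @ alpha) - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2.0 * np.pi))

# Irregular sampling is fine: the kernel depends only on lags.
t = np.sort(np.random.default_rng(1).uniform(0.0, 10.0, 50))
y = np.sin(2.0 * np.pi * t / 3.0)
ll = gp_loglike_dense(t, y, terms=[(1.0, 0.0, 0.5, 2.0)],
                      yerr=np.full(50, 0.1))
```

Note that nothing above assumes evenly spaced observations or uniform noise, matching the abstract's claim about the method's generality.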
A Mixed-Effects Location Scale Model for Dyadic Interactions.
We present a mixed-effects location scale model (MELSM) for examining the daily dynamics of affect in dyads. The MELSM includes person and time-varying variables to predict the location, or individual means, and the scale, or within-person variances. It also incorporates a submodel to account for between-person variances. The dyadic specification can accommodate individual and partner effects in both the location and the scale components, and allows random effects for all location and scale parameters. All covariances among the random effects, within and across the location and the scale, are also estimated. These covariances offer new insights into the interplay of individual mean structures, intra-individual variability, and the influence of partner effects on such factors. To illustrate the model, we use data from 274 couples who provided daily ratings on their positive and negative emotions toward their relationship for up to 90 consecutive days. The model is fit using Hamiltonian Monte Carlo methods, and includes subsets of predictors in order to demonstrate the flexibility of this approach. We conclude with a discussion on the usefulness and the limitations of the MELSM for dyadic research.
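The core MELSM structure, a location submodel for person-specific means and a scale submodel for the log of the within-person standard deviation, with correlated random effects linking the two, can be illustrated by simulating data for one partner in each dyad. All parameter values below are hypothetical, and the sketch omits the partner effects and time-varying covariates the full model includes.

```python
# Minimal MELSM simulation sketch: location submodel for the mean,
# scale submodel for the log within-person SD, with correlated
# person-level random intercepts. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_days = 274, 90

beta0 = 3.0    # fixed location intercept (mean daily affect)
eta0 = -0.5    # fixed scale intercept (log within-person SD)

# Correlated random effects: location and scale intercepts covary,
# so people with higher means can also be more (or less) variable.
re_cov = np.array([[0.30, 0.05],
                   [0.05, 0.10]])
u = rng.multivariate_normal([0.0, 0.0], re_cov, size=n_persons)

mu = beta0 + u[:, 0]            # person-specific means (location)
sigma = np.exp(eta0 + u[:, 1])  # person-specific within-SDs (scale)

# Daily observations: each person's days share that person's mean and SD.
y = rng.normal(mu[:, None], sigma[:, None], size=(n_persons, n_days))
```

The off-diagonal entry of `re_cov` is the kind of location-scale covariance the paper highlights: it quantifies whether individuals with higher average affect are also more variable day to day.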
Real-time decoding of question-and-answer speech dialogue using human cortical activity.
Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
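The contextual-integration step the abstract describes amounts to a Bayesian prior update: decoded question likelihoods reweight the prior over answers before being combined with the neurally decoded answer likelihoods. The sketch below shows that combination; the question/answer sets, probability tables, and function name are all hypothetical, not the paper's decoder.

```python
# Sketch of contextual prior updating: decoded question likelihoods
# reweight the answer prior, which is then combined with the decoded
# answer likelihoods. All probabilities here are hypothetical.
def contextual_answer_posterior(q_likes, answer_given_q, a_likes):
    """p(a | context, neural) proportional to
       p_neural(a) * sum_q p(a | q) * p_neural(q)."""
    # Context-informed prior over answers, marginalizing over questions.
    prior = {a: sum(answer_given_q[q].get(a, 0.0) * pq
                    for q, pq in q_likes.items())
             for a in a_likes}
    # Combine with the answer likelihoods decoded from neural activity.
    unnorm = {a: prior[a] * a_likes[a] for a in a_likes}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

# Hypothetical dialogue state: the decoder strongly favors question q1,
# and "yes" is a much more plausible answer to q1 than to q2.
post = contextual_answer_posterior(
    q_likes={"q1": 0.8, "q2": 0.2},
    answer_given_q={"q1": {"yes": 0.9, "no": 0.1},
                    "q2": {"yes": 0.1, "no": 0.9}},
    a_likes={"yes": 0.5, "no": 0.5},
)
```

Even with uninformative answer likelihoods, the decoded question context alone shifts the posterior toward the contextually plausible answer, which is the mechanism behind the reported improvement in answer decoding.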