Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy
The analysis of gravitational wave data involves many model selection
problems. The most important example is the detection problem of selecting
between the data being consistent with instrument noise alone, or instrument
noise and a gravitational wave signal. The analysis of data from ground based
gravitational wave detectors is mostly conducted using classical statistics,
and methods such as the Neyman-Pearson criterion are used for model selection.
Future space based detectors, such as the \emph{Laser Interferometer Space
Antenna} (LISA), are expected to produce rich data streams containing the
signals from many millions of sources. Determining the number of sources that
are resolvable, and the most appropriate description of each source poses a
challenging model selection problem that may best be addressed in a Bayesian
framework. An important class of LISA sources are the millions of low-mass
binary systems within our own galaxy, tens of thousands of which will be
detectable. Not only is the number of sources unknown, but so is the number
of parameters required to model the waveforms. For example, a significant
subset of the resolvable galactic binaries will exhibit orbital frequency
evolution, while a smaller number will have measurable eccentricity. In the
Bayesian approach to model selection one needs to compute the Bayes factor
between competing models. Here we explore various methods for computing Bayes
factors in the context of determining which galactic binaries have measurable
frequency evolution. The methods explored include a Reversible Jump Markov
Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz
Bayesian Information Criterion (BIC), and the Laplace approximation to the model
evidence. We find good agreement between all of the approaches.
Comment: 11 pages, 6 figures
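The BIC route to an approximate Bayes factor is easy to sketch. In the toy example below, all signal parameters, the noise level, and the optimizer starting points are invented for illustration, and a Gaussian-noise sinusoid stands in for a real galactic-binary waveform; it fits a tone with and without frequency evolution and converts the BIC difference into a log Bayes factor.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
t = np.linspace(0.0, 1.0, n)
sigma = 0.5

# Toy "galactic binary": a unit-amplitude tone with frequency drift fdot.
f0, fdot_true = 10.0, 1.5
y = np.sin(2.0 * np.pi * (f0 * t + 0.5 * fdot_true * t**2)) \
    + rng.normal(0.0, sigma, n)

def neg_log_lik(params, with_drift):
    # Gaussian noise with known sigma; constants common to both models
    # are dropped since they cancel in BIC differences.
    if with_drift:
        f, fdot, phi = params
        model = np.sin(2.0 * np.pi * (f * t + 0.5 * fdot * t**2) + phi)
    else:
        f, phi = params
        model = np.sin(2.0 * np.pi * f * t + phi)
    return 0.5 * np.sum((y - model) ** 2) / sigma**2

def bic(start, with_drift):
    # Schwarz BIC: -2 ln L_max + k ln n
    fit = minimize(neg_log_lik, start, args=(with_drift,),
                   method="Nelder-Mead")
    return 2.0 * fit.fun + len(start) * np.log(n)

bic_drift = bic([10.0, 1.5, 0.0], True)   # started near the truth for brevity
bic_flat = bic([10.75, 0.0], False)
log_bayes = 0.5 * (bic_flat - bic_drift)  # BIC-approximated ln Bayes factor
```

A large positive `log_bayes` favours the frequency-evolution model; by the Laplace argument underlying BIC, exp(-ΔBIC/2) approximates the evidence ratio when the posteriors are well peaked.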
A Bayesian Approach to the Detection Problem in Gravitational Wave Astronomy
The analysis of data from gravitational wave detectors can be divided into
three phases: search, characterization, and evaluation. The evaluation of the
detection - determining whether a candidate event is astrophysical in origin or
some artifact created by instrument noise - is a crucial step in the analysis.
The on-going analyses of data from ground based detectors employ a frequentist
approach to the detection problem. A detection statistic is chosen, for which
background levels and detection efficiencies are estimated from Monte Carlo
studies. This approach frames the detection problem in terms of an infinite
collection of trials, with the actual measurement corresponding to some
realization of this hypothetical set. Here we explore an alternative, Bayesian
approach to the detection problem that considers prior information and the
actual data in hand. Our particular focus is on the computational techniques
used to implement the Bayesian analysis. We find that the Parallel Tempered
Markov Chain Monte Carlo (PTMCMC) algorithm is able to address all three phases
of the analysis in a coherent framework. The signals are found by locating the
posterior modes, the model parameters are characterized by mapping out the
joint posterior distribution, and finally, the model evidence is computed by
thermodynamic integration. As a demonstration, we consider the detection
problem of selecting between models describing the data as instrument noise, or
instrument noise plus the signal from a single compact galactic binary. The
evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found
to be in close agreement with those computed using a Reversible Jump Markov
Chain Monte Carlo algorithm.
Comment: 19 pages, 12 figures; revised to address referee's comments
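Thermodynamic integration itself fits in a few lines. The sketch below uses a deliberately simple conjugate Gaussian model (invented for illustration) so the tempered posteriors can be sampled directly, standing in for the PTMCMC chains at each temperature, and so the result can be checked against the exact evidence.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n = 20
y = rng.normal(0.0, 1.0, n)   # data: y_i ~ N(mu, 1) with prior mu ~ N(0, 1)

def mean_log_lik(beta, draws=4000):
    # The tempered posterior p_beta(mu) ∝ L(mu)^beta pi(mu) is Gaussian for
    # this conjugate model, so we sample it directly; in PTMCMC each of
    # these would be a chain running at inverse temperature beta.
    prec = 1.0 + beta * n
    mean = beta * y.sum() / prec
    mu = rng.normal(mean, 1.0 / np.sqrt(prec), draws)
    ll = -0.5 * ((y[None, :] - mu[:, None]) ** 2).sum(axis=1) \
         - 0.5 * n * np.log(2.0 * np.pi)
    return ll.mean()

# Thermodynamic integration: ln Z = integral over beta in [0, 1] of
# <ln L>_beta, on a grid packed toward 0 where the integrand moves fastest.
betas = np.linspace(0.0, 1.0, 40) ** 3
vals = np.array([mean_log_lik(b) for b in betas])
lnZ_ti = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(betas)))

# Exact evidence for comparison: marginally, y ~ N(0, I + 11^T).
lnZ_exact = multivariate_normal.logpdf(
    y, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))
```

The trapezoid sum over the inverse-temperature ladder recovers the log evidence to within Monte Carlo error, which is the check that motivates comparing PTMCMC against RJMCMC in the abstract above.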
Modelling national HIV/AIDS epidemics: revised approach in the UNAIDS Estimation and Projection Package 2011
Objective: United Nations Programme on HIV/AIDS reports regularly on estimated levels and trends in HIV/AIDS epidemics, which are evaluated using an epidemiological model within the Estimation and Projection Package (EPP). The relatively simple four-parameter model of HIV incidence used in EPP through the previous round of estimates has encountered challenges when attempting to fit certain data series on prevalence over time, particularly in settings with long-running epidemics where prevalence has increased recently. To address this, the most recent version of the modelling package (EPP 2011) includes a more flexible epidemiological model that allows HIV infection risk to vary over time. This paper describes the technical details of this flexible approach to modelling HIV transmission dynamics within EPP 2011.
Methodology: In the flexible modelling approach, the force of infection parameter, r, is allowed to vary over time through a random walk formulation, and an informative prior distribution is used to improve short-term projections beyond the last year of data. Model parameters are estimated using a Bayesian estimation approach in which models are fit to HIV seroprevalence data from surveillance sites.
Results: The flexible model can yield better estimates of HIV prevalence over time in situations where the classic EPP model has difficulties, such as in Uganda, where prevalence is no longer falling. Based on formal out-of-sample projection tests, the flexible modelling approach also improves predictions and confidence intervals for extrapolations beyond the last observed data point.
Conclusions: We recommend use of a flexible modelling approach where data are sufficient (e.g., where at least 5 years of observations are available), and particularly where an epidemic is beyond its peak.
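The core idea of the flexible model, a force of infection that drifts as a random walk, can be sketched with a toy SI-type simulation. Everything here (the compartments, the rates, the log-scale walk) is a simplified stand-in for the actual EPP 2011 formulation, which has more compartments and an informative prior on the random-walk increments.

```python
import numpy as np

def simulate_prevalence(years=30, r0=0.6, sigma=0.08, mu=0.1, seed=2):
    # Minimal SI-type model in the spirit of EPP's flexible approach:
    # the force of infection r(t) follows a random walk on the log scale.
    rng = np.random.default_rng(seed)
    s, i = 0.999, 0.001            # susceptible / infected fractions
    log_r = np.log(r0)
    prev = []
    for _ in range(years):
        log_r += rng.normal(0.0, sigma)         # random-walk step on log r
        r = np.exp(log_r)
        new_inf = s * (1.0 - np.exp(-r * i))    # hazard form keeps s > 0
        s, i = s - new_inf, i + new_inf - mu * i
        prev.append(i / (s + i))
    return np.array(prev)

prev = simulate_prevalence()
```

Letting log r take a fresh increment each year is what gives the model the freedom to track epidemics that rise, fall, and rise again, which the fixed four-parameter incidence curve could not do.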
The SWAP EUV Imaging Telescope Part I: Instrument Overview and Pre-Flight Testing
The Sun Watcher with Active Pixels and Image Processing (SWAP) is an EUV
solar telescope on board ESA's Project for Onboard Autonomy 2 (PROBA2) mission
launched on 2 November 2009. SWAP has a spectral bandpass centered on 17.4 nm
and provides images of the low solar corona over a 54x54 arcmin field-of-view
with 3.2 arcsec pixels and an imaging cadence of about two minutes. SWAP is
designed to monitor all space-weather-relevant events and features in the low
solar corona. Given the limited resources of the PROBA2 microsatellite, the
SWAP telescope is designed with various innovative technologies, including an
off-axis optical design and a CMOS-APS detector. This article provides
reference documentation for users of the SWAP image data.
Comment: 26 pages, 9 figures, 1 movie
New stopping criteria for segmenting DNA sequences
We propose a solution to the problem of choosing a stopping criterion when
segmenting inhomogeneous DNA sequences with complex statistical patterns. The
new stopping criterion is based on the Bayesian Information Criterion (BIC) in
the model selection framework.
When this stopping criterion is applied to a left telomere sequence of yeast
Saccharomyces cerevisiae and the complete genome sequence of bacterium
Escherichia coli, borders of biologically meaningful units were identified
(e.g. subtelomeric units, replication origin, and replication terminus), and a
more reasonable number of domains was obtained. We also introduce a measure
called segmentation strength, which can be used to control the delineation of
large domains. The relationship between the average domain size and the
threshold of segmentation strength is determined for several genome sequences.
Comment: 4 pages, 4 figures; Physical Review Letters, to appear
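A minimal version of BIC-stopped recursive segmentation can be written as follows; the two-domain toy sequence and the exact form of the parameter penalty are illustrative choices, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def seg_log_lik(seq):
    # Maximum log-likelihood of an i.i.d. multinomial model for one segment.
    n = len(seq)
    return sum(c * np.log(c / n) for c in Counter(seq).values())

def best_split(seq):
    # Scan every cut point; return (log-likelihood gain, cut index).
    best_gain, best_cut = -np.inf, None
    whole = seg_log_lik(seq)
    for i in range(1, len(seq)):
        gain = seg_log_lik(seq[:i]) + seg_log_lik(seq[i:]) - whole
        if gain > best_gain:
            best_gain, best_cut = gain, i
    return best_gain, best_cut

def segment(seq, n_total, alphabet=4):
    # BIC stopping rule: accept a split only if twice the likelihood gain
    # beats the penalty for the extra parameters -- (alphabet - 1) new
    # segment frequencies plus one cut position -- times ln(total length).
    gain, cut = best_split(seq)
    if cut is None or 2.0 * gain <= alphabet * np.log(n_total):
        return [seq]
    return (segment(seq[:cut], n_total, alphabet)
            + segment(seq[cut:], n_total, alphabet))

seq = "A" * 60 + "G" * 60          # two homogeneous domains
parts = segment(seq, len(seq))     # expect the A|G border to be recovered
```

Recursion splits a segment only while the best cut's likelihood gain survives the BIC penalty, which is what keeps the number of domains from exploding on real genomes.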
Home and Online Management and Evaluation of Blood Pressure (HOME BP) using a digital intervention in poorly controlled hypertension: randomised controlled trial
Objective: The HOME BP (Home and Online Management and Evaluation of Blood Pressure) trial aimed to test a digital intervention for hypertension management in primary care by combining self-monitoring of blood pressure with guided self-management.
Design: Unmasked randomised controlled trial with automated ascertainment of primary endpoint.
Setting: 76 general practices in the United Kingdom.
Participants: 622 people with treated but poorly controlled hypertension (>140/90 mm Hg) and access to the internet.
Interventions: Participants were randomised by using a minimisation algorithm to self-monitoring of blood pressure with a digital intervention (305 participants) or usual care (routine hypertension care, with appointments and drug changes made at the discretion of the general practitioner; 317 participants). The digital intervention provided feedback of blood pressure results to patients and professionals with optional lifestyle advice and motivational support. Target blood pressure for hypertension, diabetes, and people aged 80 or older followed UK national guidelines.
Main outcome measures: The primary outcome was the difference in systolic blood pressure (mean of second and third readings) after one year, adjusted for baseline blood pressure, blood pressure target, age, and practice, with multiple imputation for missing values.
Results: After one year, data were available from 552 participants (88.6%) with imputation for the remaining 70 participants (11.4%). Mean blood pressure dropped from 151.7/86.4 to 138.4/80.2 mm Hg in the intervention group and from 151.6/85.3 to 141.8/79.8 mm Hg in the usual care group, giving a mean difference in systolic blood pressure of −3.4 mm Hg (95% confidence interval −6.1 to −0.8 mm Hg) and a mean difference in diastolic blood pressure of −0.5 mm Hg (−1.9 to 0.9 mm Hg). Results were comparable in the complete case analysis and adverse effects were similar between groups. Within-trial costs showed an incremental cost effectiveness ratio of £11 ($15, €12; 95% confidence interval £6 to £29) per mm Hg reduction.
Conclusions: The HOME BP digital intervention for the management of hypertension by using self-monitored blood pressure led to better control of systolic blood pressure after one year than usual care, with low incremental costs. Implementation in primary care will require integration into clinical workflows and consideration of people who are digitally excluded.
Trial registration: ISRCTN13790648
Factor structure of PTSD, and relation with gender in trauma survivors from India
Background: The factor structure of posttraumatic stress disorder (PTSD) has been extensively studied in Western countries. Some studies have assessed its factor structure in Asia (China, Sri Lanka, and Malaysia), but few have directly assessed the factor structure of PTSD in an Indian adult sample. Furthermore, in a largely patriarchal society in India with strong gender roles, it becomes imperative to assess the association between the factors of PTSD and gender. Objective: The purpose of the present study was to assess the factor structure of PTSD in an Indian sample of trauma survivors based on prevailing models of PTSD defined in the DSM-IV-TR (APA, 2000), and to assess the relation between PTSD factors and gender. Method: The sample comprised 313 participants (55.9% female) from Jammu and Kashmir, India, who had experienced a natural disaster (N=200) or displacement due to cross-border firing (N=113). Results: Three existing PTSD models, two four-factor models (Emotional Numbing and Dysphoria) and a five-factor model (Dysphoric Arousal), were tested using confirmatory factor analysis with the addition of gender as a covariate. The three competing models had similar fit indices, although the Dysphoric Arousal model fit significantly better than the Emotional Numbing and Dysphoria models. Gender differences were found across the factors of Re-experiencing and Anxious Arousal. Conclusions: Findings indicate that the Dysphoric Arousal model of PTSD was the best model, albeit the fit indices of all models were fairly similar. Compared to males, females scored higher on the Re-experiencing and Anxious Arousal factors. Gender differences found across two factors of PTSD are discussed in light of the social milieu in India.
Systematic techniques for assisting recruitment to trials (START): study protocol for embedded, randomized controlled trials
BACKGROUND: Randomized controlled trials play a central role in evidence-based practice, but recruitment of participants, and retention of them once in the trial, is challenging. Moreover, there is a dearth of evidence that research teams can use to inform the development of their recruitment and retention strategies. As with other healthcare initiatives, the fairest test of the effectiveness of a recruitment strategy is a trial comparing alternatives, which for recruitment would mean embedding a recruitment trial within an ongoing host trial. Systematic reviews indicate that such studies are rare. Embedded trials are largely delivered in an ad hoc way, with interventions almost always developed in isolation and tested in the context of a single host trial, limiting their ability to contribute to a body of evidence with regard to a single recruitment intervention and to researchers working in different contexts. METHODS/DESIGN: The Systematic Techniques for Assisting Recruitment to Trials (START) program is funded by the United Kingdom Medical Research Council (MRC) Methodology Research Programme to support the routine adoption of embedded trials to test standardized recruitment interventions across ongoing host trials. To achieve this aim, the program involves three interrelated work packages: (1) methodology - to develop guidelines for the design, analysis and reporting of embedded recruitment studies; (2) interventions - to develop effective and useful recruitment interventions; and (3) implementation - to recruit host trials and test interventions through embedded studies. DISCUSSION: Successful completion of the START program will provide a model for a platform for the wider trials community to use to evaluate recruitment interventions or, potentially, other types of intervention linked to trial conduct. It will also increase the evidence base for two types of recruitment intervention. 
TRIAL REGISTRATION: The START protocol covers the methodology for embedded trials. Each embedded trial is registered separately or as a substudy of the host trial.
Developmental trajectories of externalizing behaviors in childhood and adolescence
This article describes the average and group-based developmental trajectories of aggression, opposition, property violations, and status violations using parent reports of externalizing behaviors on a longitudinal multiple birth cohort study of 2,076 children aged 4 to 18 years. Trajectories were estimated from multilevel growth curve analyses and semiparametric mixture models. Overall, males showed higher levels of externalizing behavior than did females. Aggression, opposition, and property violations decreased on average, whereas status violations increased over time. Group-based trajectories followed the shape of the average curves at different levels and were similar for males and females. The trajectories found in this study provide a basis against which deviations from the expected developmental course can be identified and classified as deviant or nondeviant.
Bayesian Mode Regression
This article has been made available through the Brunel Open Access Publishing Fund.
Like the mean, quantile, and variance, the mode is an important measure of the central tendency of a distribution. Many practical questions, particularly in the analysis of big data, such as "Which element (gene or file or signal) is the most typical one among all elements in a network?", are directly related to the mode. Mode regression, which provides a convenient summary of how the regressors affect the conditional mode, is totally different from models based on the conditional mean, conditional quantile, or conditional variance. Some inference methods for mode regression exist, but none of them is from the Bayesian perspective. This paper introduces Bayesian mode regression by exploring three different approaches, including their theoretical properties. The proposed approaches are illustrated using simulated datasets and a real data set.
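The point-estimate cousin of mode regression is easy to demonstrate. The sketch below is not the paper's Bayesian machinery: it maximizes a kernel-smoothed objective to find the conditional mode line under skewed errors, with the data, bandwidth, and all names invented for illustration; the contrast with the least-squares (mean) line is the point.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 800
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + rng.exponential(1.0, n)   # skewed errors: mode 0, mean 1

def neg_kernel_score(theta, h=0.3):
    # Kernel-smoothed mode objective: softly count how many points lie
    # on the candidate line y = a + b x (bandwidth h is an illustrative pick).
    a, b = theta
    u = (y - a - b * x) / h
    return -np.sum(np.exp(-0.5 * u**2))

# Start from the ordinary least-squares fit, which targets the mean line.
b0, a0 = np.polyfit(x, y, 1)
res = minimize(neg_kernel_score, [a0, b0], method="Nelder-Mead")
a_hat, b_hat = res.x
```

Because the errors are exponential, the mean line sits well above the mode line: the kernel objective pulls the intercept back down toward the conditional mode while leaving the slope essentially unchanged.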