Tests of Bayesian Model Selection Techniques for Gravitational Wave Astronomy
The analysis of gravitational wave data involves many model selection
problems. The most important example is the detection problem of selecting
between the data being consistent with instrument noise alone, or instrument
noise and a gravitational wave signal. The analysis of data from ground based
gravitational wave detectors is mostly conducted using classical statistics,
and methods such as the Neyman-Pearson criterion are used for model selection.
Future space based detectors, such as the \emph{Laser Interferometer Space
Antenna} (LISA), are expected to produce rich data streams containing the
signals from many millions of sources. Determining the number of sources that
are resolvable, and the most appropriate description of each source, poses a
challenging model selection problem that may best be addressed in a Bayesian
framework. An important class of LISA sources is the millions of low-mass
binary systems within our own galaxy, tens of thousands of which will be
detectable. Not only is the number of sources unknown, but so is the number
of parameters required to model the waveforms. For example, a significant
subset of the resolvable galactic binaries will exhibit orbital frequency
evolution, while a smaller number will have measurable eccentricity. In the
Bayesian approach to model selection one needs to compute the Bayes factor
between competing models. Here we explore various methods for computing Bayes
factors in the context of determining which galactic binaries have measurable
frequency evolution. The methods explored include a Reversible Jump Markov Chain
Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes
Information Criterion (BIC), and the Laplace approximation to the model
evidence. We find good agreement between all of the approaches.
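For orientation, here is a minimal sketch of two of the cheaper evidence estimates named above, the BIC and the Laplace approximation, applied to a toy one-parameter Gaussian model rather than the paper's waveform models; the function names, the toy data, and the flat-prior width are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def log_likelihood(mu, data, sigma=1.0):
    """Toy likelihood: the data modelled as a constant level mu in Gaussian noise."""
    return (-0.5 * np.sum((data - mu) ** 2) / sigma**2
            - 0.5 * data.size * np.log(2 * np.pi * sigma**2))

def bic(max_log_like, n_params, n_data):
    """Schwarz-Bayes Information Criterion; lower values favour the model."""
    return -2.0 * max_log_like + n_params * np.log(n_data)

def laplace_log_evidence(max_log_like, log_prior_at_peak, hessian):
    """Laplace approximation: a Gaussian expansion of the posterior about its peak.

    hessian holds the second derivatives of the log posterior at the peak.
    """
    d = hessian.shape[0]
    sign, logdet = np.linalg.slogdet(-hessian)   # -hessian is positive definite at a peak
    return max_log_like + log_prior_at_peak + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

rng = np.random.default_rng(42)
data = rng.normal(0.3, 1.0, size=200)            # toy data: a weak constant "signal" in unit noise
mu_hat = data.mean()                             # maximum-likelihood level
hessian = np.array([[-data.size / 1.0**2]])      # d^2 log L / d mu^2 for unit-variance noise

print(bic(log_likelihood(mu_hat, data), n_params=1, n_data=data.size))
print(laplace_log_evidence(log_likelihood(mu_hat, data), np.log(1.0 / 10.0), hessian))
```

Both quantities are cheap to evaluate once the posterior mode is known, which is why they serve as useful cross-checks on the more expensive RJMCMC estimate.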
A video analysis of head injuries satisfying the criteria for a head injury assessment in professional Rugby Union: a prospective cohort study
Objectives
Concussion is the most common match
injury in professional Rugby Union, accounting for 25%
of match injuries. The primary prevention of head injuries
requires that the injury mechanism be known so that
interventions can be targeted to reduce overall
incidence by focusing on characteristics with the greatest
propensity to cause a head injury.
Methods
611 head injury assessment (HIA) events
in professional Rugby Union over a 3-year period were
analysed, with specific reference to match events,
position, time and nature of head contact.
Results
464 (76%) of HIA events occurred during
tackles, with the tackler experiencing a significantly
greater propensity for an HIA than the ball carrier (1.40
HIAs/1000 tackles for the tackler vs 0.54 HIAs/1000
tackles for the ball carrier, incidence rate ratio (IRR)
2.59). Propensity was significantly greater for backline
players than forwards (IRR 1.54, 95% CI 1.28 to 1.84),
but did not increase over the course of the match. Head-to-head
contact accounted for the most tackler HIAs,
with the greatest propensity.
Conclusions
By virtue of its high propensity
and frequency, the tackle should be the focus for
interventions that may include law change and technique
education. A specific investigation of the characteristics
of the tackle is warranted to refine the approach to
preventative strategies.
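As a worked illustration of the propensity arithmetic quoted in the results, the sketch below reproduces the 2.59 rate ratio from the stated rates of 1.40 and 0.54 HIAs per 1000 tackles; the underlying tackle and event counts are hypothetical placeholders, not the study's data.

```python
# Reproducing the quoted propensity arithmetic; tackle and event counts are
# hypothetical placeholders chosen to match the published rates.
tackler_hias, carrier_hias = 350, 135          # hypothetical HIA counts
tackles = 250_000                              # hypothetical number of tackles analysed

tackler_rate = 1000 * tackler_hias / tackles   # HIAs per 1000 tackles -> 1.40
carrier_rate = 1000 * carrier_hias / tackles   # HIAs per 1000 tackles -> 0.54
irr = tackler_rate / carrier_rate              # incidence rate ratio -> 2.59
print(f"{tackler_rate:.2f} vs {carrier_rate:.2f} HIAs/1000 tackles, IRR {irr:.2f}")
```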
A Bayesian Approach to the Detection Problem in Gravitational Wave Astronomy
The analysis of data from gravitational wave detectors can be divided into
three phases: search, characterization, and evaluation. The evaluation of the
detection - determining whether a candidate event is astrophysical in origin or
some artifact created by instrument noise - is a crucial step in the analysis.
The on-going analyses of data from ground based detectors employ a frequentist
approach to the detection problem. A detection statistic is chosen, for which
background levels and detection efficiencies are estimated from Monte Carlo
studies. This approach frames the detection problem in terms of an infinite
collection of trials, with the actual measurement corresponding to some
realization of this hypothetical set. Here we explore an alternative, Bayesian
approach to the detection problem that considers prior information and the
actual data in hand. Our particular focus is on the computational techniques
used to implement the Bayesian analysis. We find that the Parallel Tempered
Markov Chain Monte Carlo (PTMCMC) algorithm is able to address all three phases
of the analysis in a coherent framework. The signals are found by locating the
posterior modes, the model parameters are characterized by mapping out the
joint posterior distribution, and finally, the model evidence is computed by
thermodynamic integration. As a demonstration, we consider the detection
problem of selecting between models describing the data as instrument noise, or
instrument noise plus the signal from a single compact galactic binary. The
evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found
to be in close agreement with those computed using a Reversible Jump Markov
Chain Monte Carlo algorithm.
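A minimal sketch of the thermodynamic-integration step mentioned above, log Z = integral from 0 to 1 of <log L>_beta d(beta), is given below using a conjugate Gaussian toy model (not the paper's signal model) so the estimate can be checked against the exact evidence; the noise scale, prior width, and datum are illustrative.

```python
import numpy as np

# Conjugate Gaussian toy model: one datum d, noise scale sigma, Gaussian prior of width tau.
sigma, tau, d = 1.0, 3.0, 2.0

def mean_log_like(beta):
    """<log L>_beta under the tempered posterior (closed form for this toy model)."""
    s2 = 1.0 / (beta / sigma**2 + 1.0 / tau**2)    # tempered posterior variance
    m = s2 * beta * d / sigma**2                   # tempered posterior mean
    return -0.5 * ((d - m) ** 2 + s2) / sigma**2 - 0.5 * np.log(2 * np.pi * sigma**2)

betas = np.linspace(1e-6, 1.0, 2000)               # inverse-temperature ladder
log_z = np.trapz([mean_log_like(b) for b in betas], betas)

# Analytic check: for this model the evidence is N(d; 0, sigma^2 + tau^2)
log_z_exact = -0.5 * d**2 / (sigma**2 + tau**2) - 0.5 * np.log(2 * np.pi * (sigma**2 + tau**2))
print(log_z, log_z_exact)                          # the two values agree closely
```

In the full analysis the chain-averaged log-likelihoods come from the tempered PTMCMC chains rather than a closed form, but the quadrature over the temperature ladder is the same.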
A detection pipeline for galactic binaries in LISA data
The Galaxy is suspected to contain hundreds of millions of binary white dwarf
systems, a large fraction of which will have sufficiently small orbital period
to emit gravitational radiation in band for space-based gravitational wave
detectors such as the Laser Interferometer Space Antenna (LISA). LISA's main
science goal is the detection of cosmological events (supermassive black hole
mergers, etc.); however, the gravitational signal from the Galaxy will be the
dominant contribution to the data -- including instrumental noise -- over
approximately two decades in frequency. The catalogue of detectable binary
systems will serve as an unparalleled means of studying the Galaxy.
Furthermore, to maximize the scientific return from the mission, the data must
be "cleansed" of the galactic foreground. We will present an algorithm that can
accurately resolve and subtract >10000 of these sources from simulated data
supplied by the Mock LISA Data Challenge Task Force. Using the time evolution
of the gravitational wave frequency, we will reconstruct the position of the
recovered binaries and show how LISA will sample the entire compact binary
population in the Galaxy.
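The inference chain behind that last statement can be sketched with the standard circular-binary relations: the measured frequency drift fixes the chirp mass, which combined with the strain amplitude yields the distance, and with the sky location this places each binary in the Galaxy. The snippet below is a rough illustration under those usual quadrupole-formula assumptions; the source values are made up.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8            # SI gravitational constant and speed of light

def chirp_mass(f, fdot):
    """Chirp mass (kg) from the GW frequency f (Hz) and its measured drift fdot (Hz/s)."""
    return (c**3 / G) * (5.0 / 96.0 * np.pi ** (-8.0 / 3.0) * fdot * f ** (-11.0 / 3.0)) ** 0.6

def distance(f, fdot, h0):
    """Luminosity distance (m) from f, fdot and the intrinsic strain amplitude h0."""
    mc = chirp_mass(f, fdot)
    return 4.0 * (G * mc) ** (5.0 / 3.0) * (np.pi * f) ** (2.0 / 3.0) / (c**4 * h0)

# Illustrative (made-up) double white dwarf: f = 3 mHz, fdot ~ 1e-16 Hz/s, h0 ~ 1e-22
kpc = 3.086e19
print(chirp_mass(3e-3, 1e-16) / 1.989e30, "solar masses")
print(distance(3e-3, 1e-16, 1e-22) / kpc, "kpc")
```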
Evaluation of Beam Quality Study of Arbitrary Beam Profiles from On-Wafer Vertical Cavity Surface Emitting Lasers
Vertical cavity surface emitting lasers (VCSELs) have found mainstream use in data centers and short-haul optical fiber communications. Along with the increase in the capacity of such systems comes an increase in the demand for greater power efficiency. System evaluation now includes an assessment of the energy required for each bit of data, a metric referred to as ‘joules per bit’. One source of loss for VCSELs is coupling loss, which is due to a mismatch in the mode profiles of the VCSELs and the optical fiber into which the VCSEL light is coupled. One way to reduce this loss is to develop single-mode VCSEL devices that are modally matched to optical fiber. Efficient development of these devices requires a technique for rapidly evaluating beam quality. This study investigates the use of a vertically mounted commercial beam profiling system and hardware interface software to quickly evaluate the beam quality of arbitrary beam profiles from on-wafer mounted VCSEL devices. This system captures the beam profile emitted from a VCSEL device at fixed locations along the vertical axis. Each image is evaluated within software along a predetermined axis, and the beam quality, or M², is calculated according to international standards. This system is quantitatively compared against a commercial software package designed for determining beam quality across a fixed axis.
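A minimal sketch of the M² calculation described above, following the usual ISO 11146 recipe of fitting the squared second-moment beam diameter to a parabola along the propagation axis, is given below; the wavelength, waist, and measurement positions are illustrative values, not data from this system.

```python
import numpy as np

def m_squared(z, d, wavelength):
    """Beam quality M^2 from second-moment diameters d (m) measured at positions z (m)."""
    c2, c1, c0 = np.polyfit(z, np.asarray(d) ** 2, 2)   # fit d^2(z) = c0 + c1 z + c2 z^2
    return (np.pi / (8.0 * wavelength)) * np.sqrt(4.0 * c0 * c2 - c1**2)

# Illustrative 850 nm VCSEL-like beam: 10 um waist diameter at z0 = 1 mm, true M^2 = 1.3
lam, d0, z0, m2_true = 850e-9, 10e-6, 1e-3, 1.3
z = np.linspace(0.0, 5e-3, 11)                           # measurement planes along the axis
d = np.sqrt(d0**2 + (4 * m2_true * lam / (np.pi * d0)) ** 2 * (z - z0) ** 2)
print(m_squared(z, d, lam))                              # recovers ~1.3
```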
Modelling national HIV/AIDS epidemics: revised approach in the UNAIDS Estimation and Projection Package 2011
Objective: United Nations Programme on HIV/AIDS reports regularly on estimated levels and trends in HIV/AIDS epidemics, which are evaluated using an epidemiological model within the Estimation and Projection Package (EPP). The relatively simple four-parameter model of HIV incidence used in EPP through the previous round of estimates has encountered challenges when attempting to fit certain data series on prevalence over time, particularly in settings with long-running epidemics where prevalence has increased recently. To address this, the most recent version of the modelling package (EPP 2011) includes a more flexible epidemiological model that allows HIV infection risk to vary over time. This paper describes the technical details of this flexible approach to modelling HIV transmission dynamics within EPP 2011. Methodology: For the flexible modelling approach, the force of infection parameter, r, is allowed to vary over time through a random walk formulation, and an informative prior distribution is used to improve short-term projections beyond the last year of data. Model parameters are estimated using a Bayesian estimation approach in which models are fit to HIV seroprevalence data from surveillance sites. Results: This flexible model can yield better estimates of HIV prevalence over time in situations where the classic EPP model has difficulties, such as in Uganda, where prevalence is no longer falling. Based on formal out-of-sample projection tests, the flexible modelling approach also improves predictions and CIs for extrapolations beyond the last observed data point. Conclusions: We recommend use of a flexible modelling approach where data are sufficient (e.g., where at least 5 years of observations are available), and particularly where an epidemic is beyond its peak.
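A toy sketch of the random-walk idea described above is given below: the transmission parameter r takes log-scale random-walk steps and drives a heavily simplified susceptible-infected recursion. This illustrates the mechanism only; it is not the actual EPP 2011 model structure, priors, or fitting procedure, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def project_prevalence(r0, sigma_rw, years, exit_rate=0.1):
    """Toy prevalence projection with a log-random-walk transmission parameter r_t."""
    s, i = 0.99, 0.01                          # susceptible and infected population fractions
    log_r = np.log(r0)
    prevalence = []
    for _ in range(years):
        log_r += rng.normal(0.0, sigma_rw)     # random-walk step on log r
        new_inf = np.exp(log_r) * s * i        # new infections this year
        s, i = s - new_inf, i + new_inf - exit_rate * i
        prevalence.append(i)
    return np.array(prevalence)

print(project_prevalence(r0=0.8, sigma_rw=0.1, years=30).round(3))
```

In EPP itself the random walk enters a full demographic projection model and r is constrained by fitting to surveillance prevalence data, with the informative prior taking over beyond the last observation.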
Risk factors for head injury events in professional rugby union: a video analysis of 464 head injury events to inform proposed injury prevention strategies
OBJECTIVES: The tackle is responsible for the majority of head injuries during rugby union. In order to address head injury risk, risk factors during the tackle must first be identified. This study analysed tackle characteristics in the professional game in order to inform potential interventions. METHODS: 464 tackles resulting in a head injury assessment (HIA) were analysed in detail, with tackle type, direction, speed, acceleration, nature of head contact and player body position as the characteristics of interest. RESULTS: Propensity to cause an HIA was significantly greater for active shoulder tackles, front-on tackles, high-speed tackles and an accelerating tackler. Head contact between a tackler's head and ball carrier's head or shoulder was significantly more likely to cause an HIA than contact below the level of the shoulder (incidence rate ratio (IRR) 4.25, 95% CI 3.38 to 5.35). The tackler experiences the majority (78%) of HIAs when head-to-head contact occurs. An upright tackler was 1.5 times more likely to experience an HIA than a bent-at-the-waist tackler (IRR 1.44, 95% CI 1.18 to 1.76). CONCLUSIONS: This study confirms that energy transfer in the tackle is a risk factor for head injury, since direction, type and speed all influence HIA propensity. The study provides evidence that body position and the height of tackles should be a focus for interventions, since lowering height and adopting a bent-at-the-waist body position is associated with reduced risk for both tacklers and ball carriers. To this end, World Rugby has implemented law change based on the present data.
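For context on how a rate ratio with a confidence interval such as "IRR 4.25, 95% CI 3.38 to 5.35" is typically obtained, the sketch below applies the standard Wald interval on the log rate ratio from two Poisson counts; the event and exposure counts are hypothetical, not the study's data.

```python
import math

def irr_with_ci(events_a, exposure_a, events_b, exposure_b, z=1.96):
    """Incidence rate ratio with an approximate 95% Wald confidence interval."""
    irr = (events_a / exposure_a) / (events_b / exposure_b)
    se_log = math.sqrt(1.0 / events_a + 1.0 / events_b)   # SE of log(IRR) for Poisson counts
    return irr, irr * math.exp(-z * se_log), irr * math.exp(z * se_log)

# Hypothetical counts: HIAs per contacts at or above shoulder level vs below it
print(irr_with_ci(300, 10_000, 200, 28_000))
```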
A Link to the Past: Using Markov Chain Monte Carlo Fitting to Constrain Fundamental Parameters of High-Redshift Galaxies
We have developed a new method for fitting spectral energy distributions
(SEDs) to identify and constrain the physical properties of high-redshift (4 <
z < 8) galaxies. Our approach uses an implementation of Bayesian-based Markov
Chain Monte Carlo (PiMC^2) that allows us to compare observations to
arbitrarily complex models and to compute 95% credible intervals that provide
robust constraints for the model parameters. The work is presented in two
sections. In the first, we test PiMC^2 using simulated SEDs not only to confirm
the recovery of the known inputs but also to assess the limitations of the method
and identify potential hazards of SED fitting when applied specifically to high
redshift (z>4) galaxies. Our tests reveal five critical results: 1) the ability
to confidently constrain metallicity, population ages, and Av requires
photometric accuracy better than what is currently achievable (i.e. less than a
few percent); 2) the ability to confidently constrain stellar masses (within a
factor of two) can be achieved without the need for high-precision photometry;
3) the addition of IRAC photometry does not guarantee that tighter constraints
of the stellar masses and ages can be defined; 4) different assumptions about
the star formation history can lead to significant biases in mass and age
estimates; and 5) we are able to constrain stellar age and Av of objects that
are both young and relatively dust free. In the second part of the paper we
apply PiMC^2 to 17 objects at 4<z<8, including the GRAPES Ly alpha sample (4<z<6),
supplemented by HST/WFC3 near-IR observations, and several broadband-selected
z>6 galaxies. Using PiMC^2, we are able to constrain the stellar mass of these
objects and in some cases their stellar age and find no evidence that any of
these sources formed at a redshift much larger than z_f=8, a time when the
Universe was ~ 0.6 Gyr old.
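A compact sketch of the general fitting approach described above is given below: a Metropolis-Hastings chain run against broadband photometry, with 95% credible intervals read directly from the chain. The two-parameter power-law "SED", band set, priors, and synthetic data are illustrative stand-ins for PiMC^2's stellar population models, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
bands = np.array([0.6, 0.9, 1.2, 1.6, 3.6, 4.5])        # band wavelengths in microns (illustrative)

def model_flux(log_mass, slope):
    """Toy SED: a power law in wavelength scaled by a mass-like normalisation."""
    return 10.0**log_mass * bands**slope

truth = (9.0, -1.0)
obs = model_flux(*truth) * (1.0 + 0.05 * rng.normal(size=bands.size))   # synthetic photometry
err = 0.05 * obs

def log_post(theta):
    """Flat priors times a Gaussian photometric likelihood."""
    log_mass, slope = theta
    if not (6.0 < log_mass < 12.0 and -3.0 < slope < 1.0):
        return -np.inf
    return -0.5 * np.sum(((obs - model_flux(log_mass, slope)) / err) ** 2)

theta = np.array([8.0, 0.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.02, size=2)         # symmetric Gaussian proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:             # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain)[5000:]                           # discard burn-in
print(np.percentile(chain, [2.5, 97.5], axis=0))         # 95% credible intervals per parameter
```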