Gravitational Wave Tests of General Relativity with the Parameterized Post-Einsteinian Framework
Gravitational wave astronomy has tremendous potential for studying extreme
astrophysical phenomena and exploring fundamental physics. The waves produced
by binary black hole mergers will provide a pristine environment in which to
study strong field, dynamical gravity. Extracting detailed information about
these systems requires accurate theoretical models of the gravitational wave
signals. If gravity is not described by General Relativity, analyses that are
based on waveforms derived from Einstein's field equations could result in
parameter biases and a loss of detection efficiency. A new class of
"parameterized post-Einsteinian" (ppE) waveforms has been proposed to cover
this eventuality. Here we apply the ppE approach to simulated data from a
network of advanced ground-based interferometers (aLIGO/aVirgo) and from a
future space-based interferometer (LISA). Bayesian inference and model
selection are used to investigate parameter biases, and to determine the level
at which departures from general relativity can be detected. We find that in
some cases the parameter biases from assuming the wrong theory can be severe.
We also find that gravitational wave observations will beat the existing bounds
on deviations from general relativity derived from the orbital decay of binary
pulsars by a large margin across a wide swath of parameter space.
Comment: 16 pages, 10 figures. Modified in response to referee comment
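The ppE family referenced in this abstract modifies a GR frequency-domain waveform with parameterized amplitude and phase corrections. A minimal single-term sketch in Python (the toy waveform, frequency grid, and parameter values are illustrative assumptions, not the paper's injections):

```python
import numpy as np

def ppe_waveform(h_gr, f, mchirp, alpha, a, beta, b):
    """Apply a single-term parameterized post-Einsteinian (ppE)
    correction to a GR frequency-domain waveform h_gr(f):

        h_ppE(f) = h_GR(f) * (1 + alpha * u**a) * exp(1j * beta * u**b),

    with u = (pi * M_chirp * f)**(1/3) in geometric units (G = c = 1).
    GR is recovered exactly when alpha = beta = 0.
    """
    u = (np.pi * mchirp * f) ** (1.0 / 3.0)
    return h_gr * (1.0 + alpha * u**a) * np.exp(1j * beta * u**b)

# toy stand-in for a GR inspiral waveform on a coarse frequency grid
f = np.linspace(20.0, 512.0, 100)
h_gr = np.exp(-1j * 2 * np.pi * f * 0.01)
# with vanishing ppE parameters the GR waveform is returned unchanged
h = ppe_waveform(h_gr, f, mchirp=1e-4, alpha=0.0, a=1.0, beta=0.0, b=-3.0)
```

Bayesian model selection then asks whether the data prefer nonzero (alpha, beta) over the GR limit.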
Generalization bounds for averaged classifiers
We study a simple learning algorithm for binary classification. Instead of
predicting with the best hypothesis in the hypothesis class, that is, the
hypothesis that minimizes the training error, our algorithm predicts with a
weighted average of all hypotheses, weighted exponentially with respect to
their training error. We show that the prediction of this algorithm is much
more stable than the prediction of an algorithm that predicts with the best
hypothesis. By allowing the algorithm to abstain from predicting on some
examples, we show that the predictions it makes when it does not abstain are
very reliable. Finally, we show that the probability that the algorithm
abstains is comparable to the generalization error of the best hypothesis in
the class.
Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000005
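The averaging scheme described above (exponential weights in the training error, with abstention on low-confidence examples) can be sketched as follows; the function name, the margin-based abstention rule, and the toy hypotheses are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def averaged_predict(hypotheses, X, errors, beta, margin=0.0):
    """Predict with an exponentially weighted average of hypotheses.

    Each hypothesis gets weight proportional to exp(-beta * err), so
    low-training-error hypotheses dominate.  The signed average score
    is thresholded at 0; |score| <= margin means the algorithm abstains
    (encoded here as 0).
    """
    weights = np.exp(-beta * np.asarray(errors, dtype=float))
    weights /= weights.sum()
    # preds: shape (num_hypotheses, num_examples), entries in {-1, +1}
    preds = np.array([h(X) for h in hypotheses])
    score = weights @ preds
    out = np.sign(score)
    out[np.abs(score) <= margin] = 0.0  # abstain on low-margin examples
    return out

# example: two opposing decision stumps; the zero-error one dominates
hs = [lambda x: np.sign(x), lambda x: -np.sign(x)]
pred = averaged_predict(hs, np.array([1.0, -2.0]), errors=[0.0, 1.0], beta=10.0)
```

Averaging makes the prediction stable: perturbing one hypothesis's error slightly moves the weights continuously, unlike selecting the single empirical minimizer.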
A study of aspects of synchronisation and communication in certain parallel computer architectures
This paper examines methods for synchronisation and communication between tasks in highly parallel arrays of processors. The development of various methods is surveyed, and simulation techniques are applied to specific structures to examine their effectiveness. Two approaches to simulation are presented: in the first, a discrete event simulator is applied to task synchronisation implemented with semaphores in a closely coupled environment; in the second, the concurrent programming language Occam is used to simulate a systolic configuration of processors. In the latter case the design is verified through actual system construction.
Conclusions are drawn regarding the design disciplines and structure imposed by the use of these simulation techniques. A close relationship is found between the behaviour of a simulation written in Occam and the same structure constructed from multiple processors.
Further research into dataflow processors is suggested, with the aim of finding suitable means of simulating such systems prior to implementation. A type of test vehicle is proposed that would operate a dataflow processor under the control of the development system.
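The semaphore-based task synchronisation that the discrete event simulator models can be illustrated with a small example; here Python threads stand in for closely coupled processor tasks (names, buffer size, and iteration count are illustrative):

```python
import threading

# Two tasks synchronised with counting semaphores over a one-slot buffer,
# forcing strict producer/consumer alternation.
items = []
slots = threading.Semaphore(1)   # number of empty buffer slots
filled = threading.Semaphore(0)  # number of filled buffer slots

def producer():
    for i in range(3):
        slots.acquire()          # wait for an empty slot
        items.append(i)
        filled.release()         # signal that data is available

def consumer(out):
    for _ in range(3):
        filled.acquire()         # wait for data
        out.append(items.pop())
        slots.release()          # hand the slot back to the producer

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t1.start(); t2.start()
t1.join(); t2.join()
```

With a single buffer slot the interleaving is deterministic, which is what makes such schemes tractable for a discrete event simulator to verify.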
Parallel Algorithms for the Maximum Flow
The problem of finding the maximal flow through a given network has been intensively studied over the years. The classic algorithm for this problem given by Ford and Fulkerson has been developed and improved by a number of authors including Edmonds and Karp. With the advent of parallel computers, it is of great interest to see whether more efficient algorithms can be designed and implemented. The networks which we will consider will be both capacitated and bounded. Compared with a capacitated network, the problem of finding a flow through a bounded network is much more complicated in that a transformation into an auxiliary network is required before a feasible flow can be found. In this thesis, we review the algorithms of Ford and Fulkerson and Edmonds and Karp and implement them in a standard sequential way. We also implement the transformation required to handle the case of a bounded network. We then develop two parallel algorithms, the first being a parallel version of the Edmonds and Karp algorithm while the second applies the Breadth-First search technique to extract as much parallelism as possible from the problem. Both these algorithms have been written in the Occam programming language and implemented on a transputer system consisting of an IBM PC host, a B004 single transputer board and a network of four transputers contained on a B003 board supplied by Inmos Ltd. This is an example of a multiprocessor machine with independent memory. The relative efficiency of the algorithms has been studied and we present tables of the execution times taken over a variety of test networks. The transformation of the original network into an auxiliary network has also been implemented using parallel techniques and the problems encountered in the development of the algorithm are described. We have also investigated in detail one of the few parallel algorithms for this problem described in the literature due to Shiloach and Vishkin. This algorithm is described in the thesis. 
It has not been possible to implement this algorithm because it is specifically designed to run on a multiprocessor machine with shared memory.
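The Edmonds and Karp refinement reviewed in the thesis augments along shortest paths found by breadth-first search. A compact sequential sketch (the dictionary representation and the sample network are illustrative; the thesis implementations target Occam on transputers):

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Edmonds-Karp max-flow: repeatedly augment along the shortest
    (BFS) augmenting path in the residual network, giving O(V * E^2).
    `capacity[u][v]` is the capacity of edge u -> v."""
    # residual capacities, with reverse edges initialised to 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # breadth-first search for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                      # no augmenting path left
        # find the bottleneck capacity along the path
        v, bottleneck = sink, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # push the bottleneck flow, updating residual capacities
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# small sample network: max flow from 's' to 't' is 5
caps = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
```

The BFS stage is also the natural seam for parallelism: each frontier of the search can be expanded concurrently, which is the idea behind the thesis's second parallel algorithm.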
Experimentally-calibrated population of models predicts and explains inter-subject variability in cardiac cellular electrophysiology
Cellular and ionic causes of variability in the electrophysiological activity of hearts from individuals of the same species are unknown. However, improved understanding of this variability is key to enable prediction of the response of specific hearts to disease and therapies. Limitations of current mathematical modeling and experimental techniques hamper our ability to provide insight into variability. Here we describe a methodology to unravel the ionic determinants of inter-subject variability exhibited in experimental recordings, based on the construction and calibration of populations of models. We illustrate the methodology through its application to rabbit Purkinje preparations, due to their importance in arrhythmias and safety pharmacology assessment. We consider a set of equations describing the biophysical processes underlying rabbit Purkinje electrophysiology and we construct a population of over 10,000 models by randomly assigning specific parameter values corresponding to ionic current conductances and kinetics. We calibrate the model population by closely comparing simulation output and experimental recordings at three pacing frequencies. We show that 213 of the 10,000 candidate models are fully consistent with the experimental dataset. Ionic properties in the 213 models cover a wide range of values, including differences up to ±100% in several conductances. Partial correlation analysis shows that particular combinations of ionic properties determine the precise shape, amplitude and rate dependence of specific action potentials. Finally, we demonstrate that the population of models calibrated using data obtained under physiological conditions quantitatively predicts the action potential duration prolongation caused by exposure to four concentrations of the potassium channel blocker dofetilide.
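The construction-and-calibration loop described above can be sketched in miniature; the surrogate model, parameter ranges, and experimental interval below are purely illustrative stand-ins for the biophysical Purkinje model and recordings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def apd_toy(g_k, g_ca):
    """Toy surrogate for action potential duration (ms): APD lengthens
    with calcium conductance and shortens with potassium conductance.
    (Illustrative only; the paper solves a full ionic model.)"""
    return 200.0 * (g_ca / g_k) ** 0.5

# 1. Construct a candidate population by randomly scaling conductances
#    around their nominal value of 1 (here uniformly in [0.5, 2.0]).
n = 10_000
g_k = rng.uniform(0.5, 2.0, n)
g_ca = rng.uniform(0.5, 2.0, n)
apd = apd_toy(g_k, g_ca)

# 2. Calibrate: keep only models whose output lies inside a
#    hypothetical experimentally observed APD range.
lo, hi = 180.0, 220.0
accepted = (apd >= lo) & (apd <= hi)
population = np.column_stack([g_k[accepted], g_ca[accepted]])
```

The accepted models span wide, correlated ranges of conductances, which is exactly the property the paper then probes with partial correlation analysis.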