BBO and the Neutron-Star-Binary Subtraction Problem
The Big Bang Observer (BBO) is a proposed space-based gravitational-wave (GW)
mission designed primarily to search for an inflation-generated GW background
in the frequency range 0.1-1 Hz. The major astrophysical foreground in this
range is gravitational radiation from inspiraling compact binaries. This
foreground is expected to be much larger than the inflation-generated
background, so to accomplish its main goal, BBO must be sensitive enough to
identify and subtract out practically all such binaries in the observable
universe. It is somewhat subtle to decide whether BBO's current baseline design
is sufficiently sensitive for this task, since, at least initially, the
dominant noise source impeding identification of any one binary is confusion
noise from all the others. Here we present a self-consistent scheme for
deciding whether BBO's baseline design is indeed adequate for subtracting out
the binary foreground. We conclude that the current baseline should be
sufficient. However, if BBO's instrumental sensitivity were degraded by a factor
of 2-4, it could no longer perform its main mission. It is impossible to perfectly
subtract out each of the binary inspiral waveforms, so an important question is
how to deal with the "residual" errors in the post-subtraction data stream. We
sketch a strategy of "projecting out" these residual errors, at the cost of
some effective bandwidth. We also provide estimates of the sizes of various
post-Newtonian effects in the inspiral waveforms that must be accounted for in
the BBO analysis. (Comment: corrects some errors in figure captions that are
present in the published version.)
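The "projecting out" strategy described above can be illustrated with a minimal linear-algebra sketch. The numbers, the random error basis, and the variable names are all hypothetical; the point is only the mechanics of removing an error subspace from a data stream at the cost of a few degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_error_modes = 1024, 8

# Hypothetical basis spanning the residual-error directions (e.g. waveform
# derivatives with respect to the poorly determined binary parameters).
B = rng.standard_normal((n_samples, n_error_modes))
Q, _ = np.linalg.qr(B)          # orthonormal basis for the error subspace

# Post-subtraction data stream: a weak stochastic background plus the
# imperfect-subtraction residuals.
background = 0.1 * rng.standard_normal(n_samples)
residuals = B @ rng.standard_normal(n_error_modes)
d = background + residuals

# Project the error subspace out of the data.  This removes the residuals
# exactly, at the cost of n_error_modes degrees of freedom -- the
# "effective bandwidth" lost in the process.
d_clean = d - Q @ (Q.T @ d)

print(np.max(np.abs(Q.T @ d_clean)))   # numerically zero
```

The background component survives the projection except for its (small) overlap with the error subspace, which is why the cost is quoted as effective bandwidth rather than signal loss.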
The Effect of the LISA Response Function on Observations of Monochromatic Sources
The Laser Interferometer Space Antenna (LISA) is expected to provide the
largest observational sample of binary systems of faint sub-solar mass compact
objects, in particular white-dwarfs, whose radiation is monochromatic over most
of the LISA observational window. Current astrophysical estimates suggest that
the instrument will be able to resolve about 10000 such systems, with a large
fraction of them at frequencies above 3 mHz, where the wavelength of
gravitational waves becomes comparable to or shorter than the LISA arm-length.
This affects the structure of the so-called LISA transfer function which cannot
be treated as constant in this frequency range: it introduces characteristic
phase and amplitude modulations that depend on the source location in the sky
and the emission frequency. Here we investigate the effect of the LISA transfer
function on detection and parameter estimation for monochromatic sources. For
signal detection we show that filters constructed by approximating the transfer
function as a constant (long wavelength approximation) introduce a negligible
loss of signal-to-noise ratio -- the fitting factor always exceeds 0.97 -- for
f below 10 mHz, that is, in a frequency range where one would actually expect
the approximation to fail. For parameter estimation, we conclude that in the
range 3 mHz to 30 mHz the errors associated with parameter measurements differ
by between about 5% and a factor of 10 (depending on the actual source
parameters and emission frequency) from those computed using the long-wavelength
approximation. (Comment: replacement version with typos corrected.)
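The fitting-factor comparison above can be sketched numerically. This toy model is not the actual LISA response: the slow amplitude and phase modulations, the frequencies, and the modulation depth `eps` are all illustrative assumptions, standing in for the transfer-function effects on a monochromatic source, while the template is a pure sinusoid in the spirit of the long-wavelength approximation.

```python
import numpy as np

fs, T = 1.0, 4096.0        # toy sampling rate and duration (arbitrary units)
t = np.arange(0, T, 1 / fs)
f0 = 0.1                   # toy emission frequency

# Hypothetical stand-in for the transfer-function effect: slow amplitude
# and phase modulations on an otherwise monochromatic signal.
eps = 0.05
amp = 1.0 + eps * np.cos(2 * np.pi * t / T)
phi = eps * np.sin(2 * np.pi * t / T)
signal = amp * np.cos(2 * np.pi * f0 * t + phi)

# Long-wavelength-style template: a pure sinusoid, with the overlap
# maximized over its constant phase via the complex quadrature trick.
c = np.sum(signal * np.exp(-2j * np.pi * f0 * t))
template_norm = np.sqrt(len(t) / 2.0)
fitting_factor = np.abs(c) / (np.linalg.norm(signal) * template_norm)
print(round(fitting_factor, 4))
```

For a small modulation the fitting factor stays close to unity, which is the qualitative behaviour the abstract reports for detection below 10 mHz.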
Particle Swarm Optimization and gravitational wave data analysis: Performance on a binary inspiral testbed
The detection and estimation of gravitational wave (GW) signals belonging to
a parameterized family of waveforms requires, in general, the numerical
maximization of a data-dependent function of the signal parameters. Due to
noise in the data, the function to be maximized is often highly multi-modal
with numerous local maxima. Searching for the global maximum then becomes
computationally expensive, which in turn can limit the scientific scope of the
search. Stochastic optimization is one possible approach to reducing
computational costs in such applications. We report results from a first
investigation of the Particle Swarm Optimization (PSO) method in this context.
The method is applied to a testbed motivated by the problem of detection and
estimation of a binary inspiral signal. Our results show that PSO works well in
the presence of high multi-modality, making it a viable candidate method for
further applications in GW data analysis. (Comment: 13 pages, 5 figures.)
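A minimal global-best PSO, applied to a standard multi-modal test function, illustrates the method the abstract evaluates. The Rastrigin function here is a generic stand-in for the highly multi-modal data-dependent function over signal parameters, not the paper's actual fitness function; the inertia and acceleration constants are the commonly used defaults.

```python
import numpy as np

def rastrigin(x):
    # Highly multi-modal test function, standing in for the function of
    # the signal parameters that must be maximized (here: minimized).
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def pso(func, dim=2, n_particles=40, n_iters=300, bounds=(-5.12, 5.12),
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimal global-best PSO with standard inertia/acceleration constants."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_val = x.copy(), func(x)          # personal bests
    g = pbest[np.argmin(pbest_val)].copy()        # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = func(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, func(g[None, :])[0]

best_x, best_val = pso(rastrigin)
print(best_x, best_val)   # ideally near the global minimum of 0 at the origin
```

The appeal in this context is exactly what the abstract states: the swarm explores many local maxima in parallel at far lower cost than a dense grid over the template parameter space.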
Pulsar timing arrays as imaging gravitational wave telescopes: angular resolution and source (de)confusion
Pulsar timing arrays (PTAs) will be sensitive to a finite number of
gravitational wave (GW) "point" sources (e.g. supermassive black hole
binaries). N quiet pulsars with accurately known distances d_{pulsar} can
characterize up to 2N/7 distant chirping sources per frequency bin \Delta
f_{gw}=1/T, and localize them with "diffraction limited" precision \delta\theta
\gtrsim (1/SNR)(\lambda_{gw}/d_{pulsar}). Even if the pulsar distances are
poorly known, a PTA with F frequency bins can still characterize up to
(2N/7)[1-(1/2F)] sources per bin, and the quasi-singular pattern of timing
residuals in the vicinity of a GW source still allows the source to be
localized quasi-topologically within roughly the smallest quadrilateral of
quiet pulsars that encircles it on the sky, down to a limiting resolution
\delta\theta \gtrsim (1/SNR) \sqrt{\lambda_{gw}/d_{pulsar}}. PTAs may be
unconfused, even at the lowest frequencies, with matched filtering always
appropriate. (Comment: 7 pages, 1 figure; matches Phys.Rev.D version.)
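The two angular-resolution scalings quoted above are easy to evaluate for representative numbers. The frequency, pulsar distance, and SNR below are illustrative assumptions, not values from the paper.

```python
import numpy as np

c = 299_792_458.0          # speed of light, m/s
kpc = 3.0857e19            # kiloparsec in metres

# Illustrative values (assumptions, not from the paper):
f_gw = 1e-8                # GW frequency, Hz (nanohertz band)
d_pulsar = 1.0 * kpc       # pulsar distance
snr = 10.0

lam = c / f_gw             # GW wavelength

# Accurately known pulsar distances: "diffraction limited" resolution,
# delta_theta >~ (1/SNR) (lambda_gw / d_pulsar).
dtheta_known = (1 / snr) * (lam / d_pulsar)

# Poorly known distances: quasi-topological localization limit,
# delta_theta >~ (1/SNR) sqrt(lambda_gw / d_pulsar).
dtheta_poor = (1 / snr) * np.sqrt(lam / d_pulsar)

print(np.degrees(dtheta_known) * 3600, "arcsec")
print(np.degrees(dtheta_poor) * 60, "arcmin")
```

The two limits differ by the factor sqrt(d_pulsar/lambda_gw), here roughly 30, which quantifies how much accurate pulsar distances sharpen the localization.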
Singular value decomposition applied to compact binary coalescence gravitational-wave signals
We investigate the application of the singular value decomposition to
compact-binary gravitational-wave data analysis. We find that the truncated
singular value decomposition reduces the number of filters required to analyze
a given region of parameter space of compact binary coalescence waveforms by an
order of magnitude with high reconstruction accuracy. We also compute an
analytic expression for the expected signal-loss due to the singular value
decomposition truncation. (Comment: 6 pages, 4 figures.)
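The core idea, replacing a bank of nearly redundant filters with a much smaller set of singular vectors, can be sketched on a toy bank. The mildly chirping sinusoids below are a hypothetical stand-in for compact-binary waveforms, and the 99.9% energy threshold is an arbitrary illustrative choice.

```python
import numpy as np

n_templates, n_samples = 200, 1024
t = np.linspace(0, 1, n_samples)

# Toy "template bank": mildly chirping filters whose parameter varies
# smoothly, so neighbouring templates are nearly redundant (a hypothetical
# stand-in for a compact-binary-coalescence bank).
f0 = np.linspace(30, 34, n_templates)
H = np.array([np.sin(2 * np.pi * f * t * (1 + 0.1 * t)) for f in f0])

U, s, Vt = np.linalg.svd(H, full_matrices=False)

# Keep just enough singular values to capture 99.9% of the squared
# Frobenius norm (worst-case relative reconstruction error ~3%).
frac = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(frac, 0.999) + 1)

H_trunc = (U[:, :k] * s[:k]) @ Vt[:k]   # rank-k reconstruction of the bank

print(k, "of", n_templates, "filters retained")
print(np.linalg.norm(H - H_trunc) / np.linalg.norm(H))
```

Filtering the data against the k rows of `Vt` and recombining with `U[:, :k] * s[:k]` then reproduces every template's filter output to within the truncation error, which is the source of the computational saving.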
Data analysis strategies for the detection of gravitational waves in non-Gaussian noise
In order to analyze data produced by the kilometer-scale gravitational wave
detectors that will begin operation early next century, one needs to develop
robust statistical tools capable of extracting weak signals from the detector
noise. This noise will likely have non-stationary and non-Gaussian components.
To facilitate the construction of robust detection techniques, I present a
simple two-component noise model that consists of a background of Gaussian
noise as well as stochastic noise bursts. The optimal detection statistic
obtained for such a noise model incorporates a natural veto which suppresses
spurious events that would be caused by the noise bursts. When two detectors
are present, I show that the optimal statistic for the non-Gaussian noise model
can be approximated by a simple coincidence detection strategy. For simulated
detector noise containing noise bursts, I compare the operating characteristics
of (i) a locally optimal detection statistic (which has nearly-optimal behavior
for small signal amplitudes) for the non-Gaussian noise model, (ii) a standard
coincidence-style detection strategy, and (iii) the optimal statistic for
Gaussian noise. (Comment: 5 pages RevTeX, 4 figures.)
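The flavour of the two-component model, and of a statistic with a built-in burst veto, can be shown with a toy simulation. The clipping rule below is a crude stand-in for the locally optimal statistic, not the paper's actual construction, and all parameters (burst probability, burst loudness, clip level) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 512, 2000
burst_prob, burst_sigma = 0.005, 20.0

def noise(size):
    # Two-component model: unit-variance Gaussian background plus
    # rare, loud stochastic bursts.
    g = rng.standard_normal(size)
    bursts = rng.random(size) < burst_prob
    return g + bursts * rng.normal(0, burst_sigma, size)

h = np.sin(2 * np.pi * 8 * np.arange(n) / n)   # toy template
h /= np.linalg.norm(h)                          # unit-norm filter

def linear_stat(x):
    return x @ h                  # optimal for purely Gaussian noise

def clipped_stat(x, clip=4.0):
    # Toy burst veto: samples louder than `clip` sigma are clipped before
    # filtering, suppressing spurious events caused by bursts.
    return np.clip(x, -clip, clip) @ h

lin = np.array([linear_stat(noise(n)) for _ in range(trials)])
clp = np.array([clipped_stat(noise(n)) for _ in range(trials)])

# The burst-robust statistic has much lighter tails on noise alone.
print(np.std(lin), np.std(clp))
```

On noise-only data the clipped statistic's distribution stays close to the Gaussian-background value of 1, while the linear matched filter's spread is inflated by the bursts, which is the qualitative benefit of the veto.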
Parametrized tests of post-Newtonian theory using Advanced LIGO and Einstein Telescope
General relativity has very specific predictions for the gravitational
waveforms from inspiralling compact binaries obtained using the post-Newtonian
(PN) approximation. We investigate the extent to which the measurement of the
PN coefficients, possible with second-generation gravitational-wave
detectors such as the Advanced Laser Interferometer Gravitational-Wave
Observatory (LIGO) and third-generation detectors such
as the Einstein Telescope (ET), could be used to test post-Newtonian theory and
to put bounds on a subclass of parametrized-post-Einstein theories which differ
from general relativity in a parametrized sense. We demonstrate this
possibility by employing the best inspiralling waveform model for nonspinning
compact binaries which is 3.5PN accurate in phase and 3PN in amplitude. Within
the class of theories considered, Advanced LIGO can test the theory at 1.5PN
and thus the leading tail term. Future observations of stellar mass black hole
binaries by ET can test the consistency between the various PN coefficients in
the gravitational-wave phasing over the mass range of 11-44 Msun. The choice of
the lower frequency cut off is important for testing post-Newtonian theory
using the ET. The bias in the test arising from the assumption of nonspinning
binaries is indicated. (Comment: 18 pages, 11 figures; matches the published version.)
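Schematically, such tests act on the standard stationary-phase (TaylorF2-type) frequency-domain phasing; the sketch below shows the structure being assumed, with $M$ the total mass, $\eta$ the symmetric mass ratio, and $G=c=1$:

\[
\Psi(f) = 2\pi f t_c - \phi_c - \frac{\pi}{4}
  + \frac{3}{128\,\eta\, v^5} \sum_{k=0}^{7} \left(\varphi_k + \varphi_{kl}\ln v\right) v^k,
\qquad v = (\pi M f)^{1/3}.
\]

In a parametrized test, one (or more) of the PN coefficients $\varphi_k$ is allowed to deviate from its general-relativistic value, and consistency of the measured coefficients across the expansion is checked against the data.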
Composite gravitational-wave detection of compact binary coalescence
The detection of gravitational waves from compact binaries relies on a
computationally burdensome processing of gravitational-wave detector data. The
parameter space of compact-binary-coalescence gravitational waves is large and
optimal detection strategies often require nearly redundant calculations.
Previously, it has been shown that singular value decomposition of search
filters removes redundancy. Here we will demonstrate the use of singular value
decomposition for a composite detection statistic. This can greatly improve the
prospects for a computationally feasible rapid detection scheme across a large
compact binary parameter space. (Comment: 6 pages, 3 figures.)
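A toy version of a composite statistic built from an SVD basis: filter the data against a few orthonormal basis vectors spanning the bank, and sum the squared projections. The sinusoidal bank, the rank k, and the injected amplitude are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n_templates, n_samples, k = 100, 512, 12
t = np.linspace(0, 1, n_samples)

# Hypothetical nearly redundant bank: sinusoids of closely spaced frequency.
H = np.array([np.sin(2 * np.pi * f * t) for f in np.linspace(30, 35, n_templates)])
Q = np.linalg.svd(H, full_matrices=False)[2][:k]   # top-k orthonormal basis rows

def composite_stat(d):
    # Energy of the data in the reduced basis: k filters instead of
    # n_templates, a toy stand-in for a composite detection statistic.
    return np.sum((Q @ d) ** 2)

noise_only = rng.standard_normal(n_samples)
signal = H[50] / np.linalg.norm(H[50])
with_signal = noise_only + 8.0 * signal

print(composite_stat(noise_only), composite_stat(with_signal))
```

A single threshold on this one statistic flags candidate events anywhere in the region covered by the bank, after which the full (more expensive) per-template analysis need only run on the flagged segments.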
Gravitational radiation in d>4 from effective field theory
Some years ago, a new powerful technique, known as the Classical Effective
Field Theory, was proposed to describe classical phenomena in gravitational
systems. Here we show how this approach can be useful to investigate
theoretically important issues, such as gravitational radiation in any
spacetime dimension. In particular, we derive for the first time the
Einstein-Infeld-Hoffman Lagrangian and we compute Einstein's quadrupole formula
for any number of flat spacetime dimensions. (Comment: 32 pages, 10 figures.
v2: factor in eq. (3.11) fixed; references added.)
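For orientation, any $d$-dimensional result must reduce, in the $d=4$ limit, to Einstein's standard quadrupole formula for the radiated power (in terms of the trace-free quadrupole moment $Q_{ij}$ of the source density $\rho$):

\[
P = \frac{G}{5 c^5} \left\langle \dddot{Q}_{ij}\, \dddot{Q}_{ij} \right\rangle,
\qquad
Q_{ij} = \int \rho \left( x_i x_j - \tfrac{1}{3}\,\delta_{ij} r^2 \right) d^3x,
\]

where the angle brackets denote an average over several wave periods.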
Clinical, functional and hemodynamic characterization of the pulmonary arterial hypertension population evaluated at the Instituto Nacional del Tórax
Pulmonary arterial hypertension is a rare, progressive and devastating disease with severe consequences for quality of life and survival. Aim: Clinical, functional and hemodynamic assessment of patients with pulmonary arterial hypertension, and categorization according to severity. Material and methods: Prospective registry of patients with hemodynamically defined pulmonary arterial hypertension. Clinical evaluation was performed using the World Health Organization functional class (I to IV) and the Borg dyspnea scale. The six-minute walk test, echocardiography and right heart catheterization were used for functional and hemodynamic assessment. Intravenous adenosine was used to assess vascular reactivity during the hemodynamic evaluation. Results: Twenty-nine patients were included (25 women, age range 16-72 years). Pulmonary hypertension was idiopathic in 11 patients, associated with connective tissue disease in seven, with congenital heart disease in nine and with chronic thromboembolism in two. The mean duration of symptoms before assessment was 2.9 years, and 100% of patients had dyspnea (Borg 5.1). Functional classes I, II, III and IV were observed in 0, 5, 21 and 3 patients, respectively. The six-minute walk distance was 378±113 m. Mean pulmonary pressure was 59.4±12.2 mmHg, the cardiac index was 2.57±0.88 and the pulmonary vascular resistance index was 1798.4±855 dyn·s/cm5. Nine patients had a mean pulmonary arterial pressure >55 mmHg and a cardiac index <2.1, considered criteria of bad prognosis. The adenosine test was positive in 17%. Conclusions: This group of patients with pulmonary arterial hypertension consisted mainly of young women with moderate to severe disease.