    The GstLAL Search Analysis Methods for Compact Binary Mergers in Advanced LIGO's Second and Advanced Virgo's First Observing Runs

    After their successful first observing run (September 12, 2015 - January 19, 2016), the Advanced LIGO detectors were upgraded to increase their sensitivity for the second observing run (November 30, 2016 - August 25, 2017). The Advanced Virgo detector joined the second observing run on August 1, 2017. We discuss the updates made during this period to the GstLAL-based inspiral pipeline, which is used to detect gravitational waves from the coalescence of compact binaries in both low-latency and offline configurations. These updates include the deployment of a zero-latency whitening filter to reduce the overall latency of the pipeline by up to 32 seconds, incorporation of the Virgo data stream into the analysis, introduction of a single-detector search to analyze data from periods when only one of the detectors is running, addition of new parameters to the likelihood-ratio ranking statistic, an increase in the parameter space of the search, and introduction of a template-mass-dependent glitch-excision thresholding method.
    Comment: 12 pages, 7 figures, to be submitted to Phys. Rev. D, comments welcome
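
    The zero-latency whitening mentioned above replaces a conventional whitening stage whose acausal filter response forces the pipeline to wait for future data. As a rough illustration of what whitening does (not of the GstLAL implementation itself), the Python sketch below estimates a noise power spectral density and divides the data's spectrum by its square root; the sample rate, segment length, and synthetic data are assumptions made only for the example.

        # Minimal frequency-domain whitening sketch (NOT the GstLAL
        # zero-latency time-domain filter; just the conventional approach
        # that such a filter replaces). All numbers are illustrative.
        import numpy as np
        from scipy.signal import welch

        fs = 4096                                   # assumed sample rate, Hz
        rng = np.random.default_rng(0)
        data = rng.normal(size=64 * fs)             # stand-in for 64 s of strain

        # 1. Estimate the one-sided noise power spectral density.
        freqs, psd = welch(data, fs=fs, nperseg=4 * fs)

        # 2. Whiten in the frequency domain: divide each bin by sqrt(PSD),
        #    normalised so unit-variance white noise stays unit variance.
        spectrum = np.fft.rfft(data)
        fft_freqs = np.fft.rfftfreq(data.size, d=1.0 / fs)
        psd_interp = np.interp(fft_freqs, freqs, psd)
        white = np.fft.irfft(spectrum / np.sqrt(psd_interp * fs / 2.0),
                             n=data.size)

        print("whitened variance:", white.var())    # ~1 for this toy input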

    A learning approach to the detection of gravitational wave transients

    We investigate the class of quadratic detectors (i.e., detectors whose statistic is a bilinear function of the data) for the detection of poorly modeled gravitational transients of short duration. We point out that all such detection methods are equivalent to passing the signal through a filter bank and linearly combining the output energies. Existing methods for choosing the filter bank and the weight parameters rely essentially on two ideas: (i) the use of the likelihood function based on a (possibly non-informative) statistical model of the signal and the noise, and (ii) the use of Monte Carlo simulations to tune parametric filters for the best detection probability at a fixed false-alarm rate. We propose a third approach in which the filter bank is "learned" from a set of training data. By-products of this viewpoint are that, contrary to previous methods, (i) no explicit description of the probability density function of the data in the presence of a signal is required, and (ii) the filters we use are non-parametric. The learning procedure can be described as a two-step process: first, estimate the mean and covariance of the signal from the training data; second, find the filters that maximize a contrast criterion, referred to as the deflection, between the "noise only" and "signal+noise" hypotheses. The deflection has the dimensions of a signal-to-noise ratio and uses the quantities estimated in the first step. We apply this method to the detection of supernova core collapses, using the waveform catalog recently provided by Dimmelmeier et al. to train the algorithm. We expect such a detector to have better performance on this particular problem, provided that the reference signals are reliable.
    Comment: 22 pages, 4 figures
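
    The two-step learning procedure can be made concrete with a small sketch. The Python code below is an illustration only, under the simplifying assumptions of unit-variance white Gaussian noise, pre-aligned equal-length training waveforms, and a synthetic training catalogue standing in for the Dimmelmeier et al. waveforms; it takes the dominant eigenvectors of the training set's second-moment matrix as the filter bank, one common form a deflection-maximizing quadratic detector takes in white noise.

        # Sketch of "learning" a quadratic detector from training waveforms
        # (assumptions: unit-variance white noise, aligned equal-length
        # waveforms; the training set below is synthetic).
        import numpy as np

        rng = np.random.default_rng(1)
        n, n_train, n_filters = 256, 200, 4

        # Synthetic stand-in for a training catalogue: damped sinusoids
        # with randomised frequency and phase.
        t = np.linspace(0.0, 1.0, n)
        freqs = rng.uniform(20.0, 40.0, n_train)
        phases = rng.uniform(0.0, 2 * np.pi, n_train)
        train = np.exp(-4.0 * t)[None, :] * np.sin(
            2 * np.pi * freqs[:, None] * t[None, :] + phases[:, None])

        # Step 1: estimate the signal's second-moment matrix (mean + covariance).
        second_moment = train.T @ train / n_train

        # Step 2: in white noise, a deflection-maximising filter bank is
        # spanned by the dominant eigenvectors of that matrix.
        eigvals, eigvecs = np.linalg.eigh(second_moment)
        filters = eigvecs[:, -n_filters:]            # top eigenvectors

        def statistic(x):
            """Quadratic detection statistic: energy in the learned filter bank."""
            return np.sum((filters.T @ x) ** 2)

        # Compare the statistic under the two hypotheses.
        noise = rng.normal(size=n)
        signal = 5.0 * train[rng.integers(n_train)] + rng.normal(size=n)
        print("noise only  :", statistic(noise))
        print("signal+noise:", statistic(signal))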

    Testing gravitational-wave searches with numerical relativity waveforms: Results from the first Numerical INJection Analysis (NINJA) project

    The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave data analysis communities. The purpose of NINJA is to study the sensitivity of existing gravitational-wave search algorithms using numerically generated waveforms and to foster closer collaboration between the numerical relativity and data analysis communities. We describe the results of the first NINJA analysis, which focused on gravitational waveforms from binary black hole coalescence. Ten numerical relativity groups contributed numerical data which were used to generate a set of gravitational-wave signals. These signals were injected into a simulated data set designed to mimic the response of the Initial LIGO and Virgo gravitational-wave detectors. Nine groups analysed these data using search and parameter-estimation pipelines. Matched-filter algorithms, unmodelled burst searches, and Bayesian parameter-estimation and model-selection algorithms were applied to the data. We report the efficiency of these search methods in detecting the numerical waveforms and measuring their parameters. We describe preliminary comparisons between the different search methods and suggest improvements for future NINJA analyses.
    Comment: 56 pages, 25 figures; various clarifications; accepted to CQG
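
    A toy injection-and-recovery exercise can illustrate, at a much smaller scale, what the NINJA analyses do. The sketch below is purely illustrative: it assumes white Gaussian noise and a simple chirp-like stand-in rather than an actual numerical-relativity waveform, injects the waveform at a chosen amplitude, and recovers it with a matched filter.

        # Toy injection and matched-filter recovery in the spirit of NINJA,
        # assuming unit-variance white Gaussian noise and a chirp-like
        # stand-in waveform (not a numerical-relativity signal).
        import numpy as np

        rng = np.random.default_rng(2)
        fs, duration = 1024, 8.0
        n = int(fs * duration)
        t = np.arange(n) / fs

        # Stand-in waveform: a short chirp ending at t = 4 s, unit norm.
        tc = 4.0
        chirp = np.where(t < tc,
                         np.sin(2 * np.pi * (30.0 + 20.0 * t) * t)
                         * np.exp(-(tc - t)),
                         0.0)
        template = chirp / np.sqrt(np.sum(chirp ** 2))

        # Inject the waveform into white noise at a chosen amplitude.
        amplitude = 8.0
        data = rng.normal(size=n) + amplitude * template

        # Matched filter: correlate the data against the template at every lag.
        snr = np.correlate(data, template, mode="same")
        print("peak |SNR| (expect roughly the injected amplitude of 8):",
              np.abs(snr).max())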

    A particle filtering approach for joint detection/estimation of multipath effects on GPS measurements

    Multipath propagation causes major impairments to Global Positioning System (GPS) based navigation. Multipath results in biased GPS measurements and hence inaccurate position estimates. In this work, multipath effects are treated as abrupt changes affecting the navigation system. A multiple-model formulation is proposed in which the changes are represented by a discrete-valued process. The detection of the errors induced by multipath is handled by a Rao-Blackwellized particle filter (RBPF). The RBPF estimates the indicator process jointly with the navigation states and the multipath biases. The interest of this approach lies in its ability to incorporate a priori constraints about the propagation environment. Detection is improved by using information from near-future GPS measurements in the particle filter (PF) sampling step. A computationally modest delayed sampling is developed, based on a minimal-duration assumption for multipath effects. Finally, the standard PF resampling stage is modified to include a hypothesis-test-based decision step.
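
    A minimal Rao-Blackwellized particle filter for this kind of jump detection can be sketched as follows. The toy model below is an assumption made for illustration, not the paper's full navigation filter: a scalar pseudorange residual (known geometry already subtracted) with an on/off multipath indicator following a two-state Markov chain, a random-walk bias while multipath is present, particles carrying the discrete indicator, and a per-particle Kalman filter handling the Gaussian bias analytically (the Rao-Blackwell step). The paper's delayed sampling and hypothesis-test resampling refinements are omitted.

        # Toy Rao-Blackwellized particle filter for an on/off multipath bias
        # on a scalar pseudorange residual. All model choices are assumptions
        # for illustration only.
        import numpy as np

        rng = np.random.default_rng(3)
        T, n_particles = 200, 500
        p_stay = 0.95                 # indicator persistence
        q, r = 0.05, 0.5              # bias random-walk / measurement variances

        # --- simulate truth: Markov indicator, random-walk bias when "on" ---
        m_true = np.zeros(T, dtype=int)
        b_true = np.zeros(T)
        y = np.zeros(T)
        for t in range(1, T):
            m_true[t] = m_true[t - 1] if rng.random() < p_stay else 1 - m_true[t - 1]
            b_true[t] = (b_true[t - 1] + rng.normal(0, np.sqrt(q))) if m_true[t] else 0.0
        for t in range(T):
            y[t] = b_true[t] * m_true[t] + rng.normal(0, np.sqrt(r))

        # --- RBPF: particles hold the indicator, a Kalman filter holds the bias ---
        m = np.zeros(n_particles, dtype=int)
        mu = np.zeros(n_particles)                 # Kalman mean of the bias
        P = np.ones(n_particles)                   # Kalman variance of the bias
        w = np.full(n_particles, 1.0 / n_particles)
        m_est = np.zeros(T)

        for t in range(T):
            # 1. sample the discrete indicator from its Markov prior
            flip = rng.random(n_particles) >= p_stay
            m = np.where(flip, 1 - m, m)
            # 2. per-particle Kalman prediction/update (bias reset while m = 0)
            mu = np.where(m == 1, mu, 0.0)
            P = np.where(m == 1, P + q, 1.0)
            innov = y[t] - mu * m
            S = P * m + r                          # innovation variance
            K = (P * m) / S
            mu = mu + K * innov
            P = (1 - K * m) * P
            # 3. weight particles by the marginal likelihood of y[t]
            w *= np.exp(-0.5 * innov ** 2 / S) / np.sqrt(S)
            w /= w.sum()
            m_est[t] = np.sum(w * m)
            # 4. resample when the effective sample size gets small
            if 1.0 / np.sum(w ** 2) < n_particles / 2:
                idx = rng.choice(n_particles, n_particles, p=w)
                m, mu, P = m[idx], mu[idx], P[idx]
                w[:] = 1.0 / n_particles

        print("mean |m_est - m_true| :", np.mean(np.abs(m_est - m_true)))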

    Trends, Cycles and Convergence

    This article first discusses ways of decomposing a time series into trend and cyclical components, paying particular attention to a new class of model for cycles. It is shown how using an auxiliary series can help to achieve a more satisfactory decomposition. A discussion of balanced growth then leads on to the construction of new models for converging economies. The preferred models combine unobserved components with an error correction mechanism and allow a decomposition into trend, cycle and convergence components. This provides insight into what has happened in the past, enables the current state of an economy to be more accurately assessed and gives a procedure for the prediction of future observations. The methods are applied to data on the US, Japan and Chile.
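
    As an illustration of the basic trend-plus-cycle decomposition (not of the article's full converging-economies model, which adds an error-correction component), the sketch below fits a local-linear-trend plus stochastic-cycle unobserved-components model to a synthetic series with statsmodels; the series and all settings are assumptions made only for the example.

        # Trend/cycle decomposition with an unobserved-components model:
        # local linear trend plus a damped stochastic cycle. Illustrative only.
        import numpy as np
        from statsmodels.tsa.statespace.structural import UnobservedComponents

        rng = np.random.default_rng(4)
        T = 200
        trend = 0.02 * np.arange(T) + np.cumsum(rng.normal(0, 0.01, T))
        cycle = 0.5 * np.sin(2 * np.pi * np.arange(T) / 32)
        y = trend + cycle + rng.normal(0, 0.05, T)   # synthetic log-output series

        model = UnobservedComponents(y, level="local linear trend",
                                     cycle=True, stochastic_cycle=True,
                                     damped_cycle=True)
        result = model.fit(disp=False)

        trend_hat = result.level.smoothed            # smoothed trend component
        cycle_hat = result.cycle.smoothed            # smoothed cycle component
        print(result.summary())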

    The Role of Credibility in the Cyclical Properties of Macroeconomic Policies in Emerging Economies

    Optimal stabilization policy is counter-cyclical, aiming at keeping output close to its potential. However, it has traditionally been argued that emerging economies are unable to adopt counter-cyclical monetary and fiscal policies. Here we argue that the cyclical properties of macroeconomic policies depend critically on policy credibility. We test this proposition using recent panel data for eleven emerging market economies and time-series data for Chile. The evidence supports the view that countries with higher credibility, as reflected by lower country-risk levels, are able to conduct counter-cyclical fiscal and monetary policies. Conversely, countries with less credible policies (and, therefore, higher country-risk spreads) contribute to larger cyclical fluctuations by applying pro-cyclical policies. For Chile, we find that both monetary and fiscal policies have been largely counter-cyclical after 1993.