4,353 research outputs found

    Gravitational waves: search results, data analysis and parameter estimation

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, plus 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

    An information-theoretic approach to the gravitational-wave burst detection problem

    The observational era of gravitational-wave astronomy began in the fall of 2015 with the detection of GW150914. One potential type of detectable gravitational wave is the short-duration gravitational-wave burst, whose waveform can be difficult to predict. We present the framework for a new detection algorithm for such burst events, oLIB, that can be used in low latency to identify gravitational-wave transients independently of other search algorithms. This algorithm consists of (1) an excess-power event generator based on the Q-transform, Omicron; (2) coincidence of these events across a detector network; and (3) an analysis of the coincident events using a Markov chain Monte Carlo Bayesian evidence calculator, LALInferenceBurst. These steps compress the full data streams into a set of Bayes factors for each event; through this process, we use elements from information theory to minimize the amount of information regarding the signal-versus-noise hypothesis that is lost. We optimally extract this information using a likelihood-ratio test to estimate a detection significance for each event. Using representative archival LIGO data, we show that the algorithm can detect gravitational-wave burst events of astrophysical strength in realistic instrumental noise across different burst waveform morphologies. We also demonstrate that combining Bayes factors by means of a likelihood-ratio test can improve the detection efficiency of a gravitational-wave burst search. Finally, we show that oLIB's performance is robust against the choice of gravitational-wave populations used to model the likelihood-ratio test likelihoods.
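
    The final ranking step described above invites a compact illustration. The sketch below (hypothetical function and array names, not the actual oLIB or LALInferenceBurst code) estimates the likelihood of an event's log Bayes factors under the signal and noise hypotheses using kernel density estimates trained on simulated signals and background, and ranks the event by their ratio:

```python
# Sketch of a likelihood-ratio test over Bayes factors, in the spirit of the
# pipeline described above. Training arrays and names are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

def likelihood_ratio_statistic(log_bayes_factors, signal_training, noise_training):
    """Rank an event by Lambda = p(B | signal) / p(B | noise).

    log_bayes_factors : log Bayes factors for one candidate event
    signal_training   : (n_dims, n_samples) log-B samples from simulated signals
    noise_training    : (n_dims, n_samples) log-B samples from background noise
    """
    p_signal = gaussian_kde(signal_training)  # density model of the signal class
    p_noise = gaussian_kde(noise_training)    # density model of the noise class
    x = np.atleast_2d(log_bayes_factors).T    # shape (n_dims, 1) for evaluation
    return float(p_signal(x) / p_noise(x))    # larger => more signal-like
```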

    Classification methods for noise transients in advanced gravitational-wave detectors II: performance tests on Advanced LIGO data

    The data taken by the advanced LIGO and Virgo gravitational-wave detectors contain short-duration noise transients that limit the significance of astrophysical detections and reduce the duty cycle of the instruments. As the advanced detectors are reaching sensitivity levels that allow for multiple detections of astrophysical gravitational-wave sources, it is crucial to achieve a fast and accurate characterization of non-astrophysical transient noise shortly after it occurs in the detectors. Previously we presented three methods for the classification of transient noise sources: Principal Component Analysis for Transients (PCAT), Principal Component LALInference Burst (PC-LIB) and Wavelet Detection Filter with Machine Learning (WDF-ML). In this study we carry out the first performance tests of these algorithms on gravitational-wave data from the Advanced LIGO detectors. We use the data taken between 3 June 2015 and 14 June 2015 during the 7th engineering run (ER7), and outline the improvements made to increase the performance and lower the latency of the algorithms on real data. This work provides an important test for understanding the performance of these methods on real, non-stationary data in preparation for the second advanced gravitational-wave detector observation run, planned for later this year. We show that all methods can classify transients in non-stationary data with a high level of accuracy, and we show the benefits of using multiple classifiers.
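
    As a rough illustration of how such performance tests can be scored, the sketch below (hypothetical names, not the paper's actual analysis code) computes per-class classification accuracy for one pipeline and combines several pipelines' labels by a simple majority vote:

```python
# Illustrative scoring of transient classifiers: per-class accuracy for one
# pipeline, plus a majority vote across several classifiers' labels.
import numpy as np

def per_class_accuracy(true_labels, predicted_labels, classes):
    """Fraction of correctly classified transients within each class."""
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    return {c: float(np.mean(predicted_labels[true_labels == c] == c))
            for c in classes}

def majority_vote(*label_sets):
    """Combine several classifiers' per-event labels by majority vote."""
    votes = np.stack([np.asarray(s) for s in label_sets])  # (n_classifiers, n_events)
    combined = []
    for column in votes.T:
        labels, counts = np.unique(column, return_counts=True)
        combined.append(labels[np.argmax(counts)])  # most common label wins
    return np.array(combined)
```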

    Reconstructing the calibrated strain signal in the Advanced LIGO detectors

    Advanced LIGO's raw detector output needs to be calibrated to compute the dimensionless strain h(t). Calibrated strain data are produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector's feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary mainly due to online dropouts in the calibrated data and to identified improvements in the calibration models or filters.
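
    The correction step lends itself to a schematic sketch. The code below is a heavily simplified picture of the idea, with placeholder filter coefficients and correction-factor names (kappa_c, kappa_tst) rather than the real gstlal calibration filters: FIR-filter the front-end error and control signals, apply the slowly varying correction factor to each path, and sum the two contributions:

```python
# Schematic of FIR-based strain correction with time-dependent factors.
# All filters and factor names are placeholders, not the real calibration.
import numpy as np
from scipy.signal import lfilter

def corrected_strain(d_err, d_ctrl, fir_inv_sensing, fir_actuation,
                     kappa_c, kappa_tst):
    """Combine sensing and actuation paths into a corrected strain estimate.

    d_err, d_ctrl      : error and control signals from the front end
    fir_inv_sensing    : FIR approximation to the inverse sensing function
    fir_actuation      : FIR model of the actuation function
    kappa_c, kappa_tst : per-sample time-dependent correction factors
    """
    # Sensing-gain variations scale the sensing function, so the inverse
    # sensing path is divided by kappa_c (schematic convention).
    sensing_part = lfilter(fir_inv_sensing, [1.0], d_err) / kappa_c
    actuation_part = kappa_tst * lfilter(fir_actuation, [1.0], d_ctrl)
    return sensing_part + actuation_part
```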

    Measuring Cerebral Activation From fNIRS Signals: An Approach Based on Compressive Sensing and Taylor-Fourier Model

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive and portable neuroimaging technique that uses NIR light to monitor cerebral activity via the so-called haemodynamic responses (HRs). The measurement is challenging because of the presence of severe physiological noise, such as respiratory and vasomotor waves. In this paper, a novel technique for fNIRS signal denoising and HR estimation is described. The method relies on a joint application of compressed sensing theory principles and Taylor-Fourier modeling of nonstationary spectral components. It operates in the frequency domain and models physiological noise as a linear combination of sinusoidal tones, characterized in terms of frequency, amplitude, and initial phase. Algorithm performance is assessed over both synthetic and experimental data sets, and compared with that of two reference techniques from the fNIRS literature.
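
    A toy version of the noise-modelling idea is sketched below: treat physiological noise as a few sinusoidal tones, greedily fit the strongest tone over a set of candidate frequencies by least squares, and subtract it. The paper's compressed-sensing and Taylor-Fourier machinery is considerably more refined; all names here are illustrative:

```python
# Greedy sinusoidal-tone removal as a toy stand-in for the described method.
import numpy as np

def remove_sinusoidal_noise(signal, fs, candidate_freqs, n_tones=3):
    """Subtract the n_tones strongest sinusoids from a sampled signal."""
    t = np.arange(len(signal)) / fs
    residual = np.asarray(signal, dtype=float).copy()
    for _ in range(n_tones):  # matching-pursuit-style greedy selection
        best = None
        for f in candidate_freqs:
            # Least-squares fit of a*cos(2*pi*f*t) + b*sin(2*pi*f*t),
            # which encodes the tone's amplitude and initial phase.
            X = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
            fit = X @ coef
            power = float(fit @ fit)
            if best is None or power > best[0]:
                best = (power, fit)
        residual -= best[1]  # remove the strongest remaining tone
    return residual
```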

    Classification methods for noise transients in advanced gravitational-wave detectors

    Noise of non-astrophysical origin will contaminate science data taken by the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) and Advanced Virgo gravitational-wave detectors. Prompt characterization of instrumental and environmental noise transients will be critical for improving the sensitivity of the advanced detectors in the upcoming science runs. During the science runs of the initial gravitational-wave detectors, noise transients were manually classified by visually examining the time-frequency scan of each event. Here, we present three new algorithms designed for the automatic classification of noise transients in advanced detectors. Two of these algorithms are based on Principal Component Analysis: Principal Component Analysis for Transients (PCAT) and an adaptation of LALInference Burst (LIB). The third algorithm is a combination of an event generator called Wavelet Detection Filter (WDF) and machine learning techniques for classification. We test these algorithms on simulated data sets, and we show their ability to automatically classify transients by frequency, signal-to-noise ratio (SNR), and waveform morphology.
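
    In the spirit of the PCA-based classifiers (though not the actual PCAT implementation), a minimal scheme can be sketched as follows: build a principal-component basis per transient class from training waveforms, then assign a new transient to the class whose basis reconstructs it with the smallest error:

```python
# Minimal PCA-based transient classification sketch; names are illustrative.
import numpy as np

def fit_class_basis(training_waveforms, n_components=5):
    """training_waveforms: (n_examples, n_samples) array for one glitch class."""
    mean = training_waveforms.mean(axis=0)
    _, _, vt = np.linalg.svd(training_waveforms - mean, full_matrices=False)
    return mean, vt[:n_components]  # mean waveform + top principal components

def classify(waveform, class_bases):
    """class_bases: dict mapping class name -> (mean, components)."""
    errors = {}
    for name, (mean, components) in class_bases.items():
        centred = waveform - mean
        # Project onto the class basis and measure the reconstruction error.
        reconstruction = components.T @ (components @ centred)
        errors[name] = float(np.sum((centred - reconstruction) ** 2))
    return min(errors, key=errors.get)  # best-reconstructing class wins
```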

    Determination and evaluation of clinically efficient stopping criteria for the multiple auditory steady-state response technique

    Background: Although the auditory steady-state response (ASSR) technique utilizes objective statistical detection algorithms to estimate behavioural hearing thresholds, the audiologist still has to decide when to terminate ASSR recordings, which reintroduces a certain degree of subjectivity. Aims: The present study aimed at establishing clinically efficient stopping criteria for a multiple 80-Hz ASSR system. Methods: In Experiment 1, data from 31 normal-hearing subjects were analyzed off-line to propose stopping rules. Under these rules, ASSR recordings are stopped when (1) all 8 responses reach significance and remain significant for 8 consecutive sweeps; (2) the mean noise levels are ≤ 4 nV (if, at this ≤ 4 nV criterion, p-values lie between 0.05 and 0.1, measurements are extended only once by 8 sweeps); or (3) a maximum of 48 sweeps is reached. In Experiment 2, these stopping criteria were applied to 10 normal-hearing and 10 hearing-impaired adults to assess their efficiency. Results: Applying these stopping rules resulted in ASSR threshold values comparable to other multiple-ASSR research with normal-hearing and hearing-impaired adults. Furthermore, in 80% of the cases, ASSR thresholds could be obtained within a time frame of 1 hour. Investigating the significant response amplitudes of the hearing-impaired adults through cumulative curves indicated that a noise-stop criterion higher than ≤ 4 nV can probably be used. Conclusions: The proposed stopping rules can be used in adults to determine accurate ASSR thresholds within an acceptable time frame of about 1 hour. However, additional research with infants and with adults having varying degrees and configurations of hearing loss is needed to optimize these criteria.
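
    The three stopping rules translate almost directly into a decision function evaluated after each 8-sweep block. The sketch below mirrors the thresholds stated in the abstract; the argument names and bookkeeping are hypothetical:

```python
# Decision logic for the three ASSR stopping rules described above.
def should_stop(n_sweeps, consecutive_significant_sweeps, mean_noise_nv,
                p_values, already_extended,
                max_sweeps=48, noise_stop_nv=4.0):
    # Rule 3: hard ceiling on measurement length (48 sweeps).
    if n_sweeps >= max_sweeps:
        return True
    # Rule 1: all 8 responses significant and maintained for 8 sweeps.
    if consecutive_significant_sweeps >= 8:
        return True
    # Rule 2: mean noise <= 4 nV; borderline p-values (0.05-0.1) earn a
    # single 8-sweep extension before stopping.
    if mean_noise_nv <= noise_stop_nv:
        borderline = any(0.05 < p <= 0.1 for p in p_values)
        if borderline and not already_extended:
            return False  # extend once by 8 sweeps
        return True
    return False  # otherwise keep recording
```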

    Applying stochastic spike train theory for high-accuracy human MEG/EEG

    Background: The accuracy of electroencephalography (EEG) and magnetoencephalography (MEG) in measuring neural evoked responses (ERs) is challenged by overlapping neural sources. This lack of accuracy is a severe limitation to the application of ERs in clinical diagnostics.
    New method: We introduce a theory of stochastic neuronal spike-timing probability densities for describing the large-scale spiking activity in neural assemblies, and a spike density component analysis (SCA) method for isolating specific neural sources. The method is tested in three empirical studies with 564 cases of ERs to auditory stimuli from 94 humans, each measured with 60 EEG electrodes and 306 MEG sensors, and in a simulation study with 12,300 ERs.
    Results: The first study showed that neural sources (but not non-encephalic artifacts) in individual averaged MEG/EEG waveforms are modelled accurately with temporal Gaussian probability density functions (median 99.7%–99.9% variance explained). The following studies confirmed that SCA can isolate an ER, namely the mismatch negativity (MMN), and that SCA reveals inter-individual variation in MMN amplitude. Finally, SCA reduced errors by suppressing interfering sources in simulated cases.
    Comparison with existing methods: We found that gamma and sine functions fail to adequately describe individual MEG/EEG waveforms. We also observed that principal component analysis (PCA) and independent component analysis (ICA) do not consistently suppress interference from overlapping brain activity in either empirical or simulated cases.
    Conclusions: These findings suggest that overlapping neural sources in single-subject or patient data can be separated more accurately by applying SCA than PCA or ICA.
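
    The Gaussian-modelling claim in the Results can be illustrated with a short fitting routine. The sketch below reproduces the shape-fitting idea only, not the full SCA decomposition, and all names are illustrative: fit a temporal Gaussian to an averaged waveform and report the variance explained:

```python
# Fit a temporal Gaussian to an evoked-response waveform and score the fit.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, mu, sigma):
    return amplitude * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def fit_gaussian_component(t, waveform):
    """Return fitted (amplitude, mu, sigma) and the variance explained."""
    peak = np.argmax(np.abs(waveform))
    p0 = [waveform[peak],  # signed peak amplitude as starting guess
          t[peak],         # peak latency
          0.05]            # width guess in seconds (assumed scale)
    params, _ = curve_fit(gaussian, t, waveform, p0=p0)
    fit = gaussian(t, *params)
    variance_explained = 1 - np.var(waveform - fit) / np.var(waveform)
    return params, variance_explained
```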