
    Signal Detection for Cognitive Radios with Smashed Filtering

    Compressed Sensing and the related, recently introduced Smashed Filter are novel signal processing methods which allow for low-complexity parameter estimation by projecting the signal under analysis onto a random subspace. In this paper the Smashed Filter of Davenport et al. is applied to a principal problem of digital communications: pilot-based time-offset and frequency-offset estimation. An application, motivated by current Cognitive Radio research, is wide-band detection of a narrow-band signal, e.g. to synchronize terminals without prior channel or frequency allocation. Smashed Filter estimation is compared with maximum-likelihood, uncompressed estimation for a signal corrupted by additive white Gaussian noise (Matched Filter estimation). Smashed Filtering adds a degree of freedom to signal detection and estimation problems, which effectively allows one to trade signal-to-noise ratio against processing bandwidth for arbitrary signals.
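
    The following is a minimal numpy sketch of the idea described above, not the paper's implementation: a pilot template is correlated against a randomly projected ("smashed") measurement for each candidate time offset. The pilot sequence, dimensions, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1024          # samples in the observation window
M = 128           # compressed dimension (M << N)
pilot = np.exp(2j * np.pi * 0.05 * np.arange(64))   # illustrative pilot sequence

# Embed the pilot at an unknown time offset in complex noise
true_offset = 300
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * 0.5
x[true_offset:true_offset + len(pilot)] += pilot

# Random projection (the compressed measurement)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Smashed filter: correlate the compressed measurement with the
# projection of each candidate-offset template
offsets = np.arange(N - len(pilot))
scores = np.empty(len(offsets))
for k, tau in enumerate(offsets):
    template = np.zeros(N, dtype=complex)
    template[tau:tau + len(pilot)] = pilot
    scores[k] = np.abs(np.vdot(Phi @ template, y))

print("estimated offset:", offsets[np.argmax(scores)], "true offset:", true_offset)
```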

    Aperture synthesis for gravitational-wave data analysis: Deterministic Sources

    Gravitational wave detectors now under construction are sensitive to the phase of the incident gravitational waves. Correspondingly, the signals from the different detectors can be combined, in the analysis, to simulate a single detector of greater amplitude and directional sensitivity: in short, aperture synthesis. Here we consider the problem of aperture synthesis in the special case of a search for a source whose waveform is known in detail, e.g., compact binary inspiral. We derive the likelihood function for the joint output of several detectors as a function of the parameters that describe the signal and find the optimal matched filter for the detection of the known signal. Our results allow for the presence of noise that is correlated between the several detectors. While their derivation is specialized to the case of Gaussian noise, we show that the results obtained are, in fact, appropriate in a well-defined, information-theoretic sense even when the noise is non-Gaussian in character. The analysis described here stands in distinction to "coincidence analyses", wherein the data from each of several detectors are studied in isolation to produce a list of candidate events, which are then compared to search for coincidences that might indicate a common origin in a gravitational wave signal. We compare these two analyses, optimal filtering and coincidence, in a series of numerical examples, showing that the optimal filtering analysis always yields a greater detection efficiency at a given false alarm rate, even when the detector noise is strongly non-Gaussian.
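
    A hedged sketch of the coherent statistic described above: a noise-weighted matched filter for a known waveform, built from a joint noise covariance matrix so that correlations between detectors are accounted for. The two-detector toy data and covariance structure are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def coherent_matched_filter_snr(x, h, C):
    """Optimal-filter statistic for a known signal h in data x with noise covariance C.

    x, h : stacked samples from all detectors, shape (n,)
    C    : joint noise covariance across detectors/samples, shape (n, n)
    """
    Cinv_h = np.linalg.solve(C, h)          # C^{-1} h
    sigma2 = h @ Cinv_h                     # optimal SNR^2 of the template
    return x @ Cinv_h / np.sqrt(sigma2)     # normalized matched-filter output

# Toy example: two detectors with mutually correlated white noise
rng = np.random.default_rng(1)
n = 200
h = np.concatenate([np.sin(0.2 * np.arange(n // 2))] * 2)   # same waveform in both detectors
rho = 0.3
C = np.eye(n) + rho * np.eye(n, k=n // 2) + rho * np.eye(n, k=-n // 2)
noise = np.linalg.cholesky(C) @ rng.standard_normal(n)
print(coherent_matched_filter_snr(noise + h, h, C))
```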

    Echo Cancellation: the generalized likelihood ratio test for double-talk vs. channel change

    Echo cancellers are required in both electrical (impedance mismatch) and acoustic (speaker-microphone coupling) applications. One of the main design problems is the control logic for adaptation. Basically, the algorithm weights should be frozen in the presence of double-talk and adapt quickly in its absence. The optimum likelihood ratio test (LRT) for this problem was studied in a recent paper. The LRT requires a priori knowledge of the background noise and double-talk power levels. Instead, this paper derives a generalized likelihood ratio test (GLRT) that does not require this knowledge. The probability density function of a sufficient statistic under each hypothesis is obtained and the performance of the test is evaluated as a function of the system parameters. The receiver operating characteristics (ROCs) indicate that it is difficult to decide correctly between double-talk and a channel change based upon a single look. However, detection based on about 200 successive samples yields a detection probability close to unity (0.99) with a small false alarm probability (0.01) for the theoretical GLRT model. Application of a GLRT-based echo canceller (EC) to real voice data shows performance comparable to that of the LRT-based EC given in a recent paper.

    Introduction to the Analysis of Low-Frequency Gravitational Wave Data

    The space-based gravitational wave detector LISA will observe in the low-frequency gravitational-wave band (0.1 mHz to 1 Hz). LISA will search for a variety of expected signals, and when it detects a signal it will have to determine a number of parameters, such as the location of the source on the sky and the signal's polarisation. This requires pattern-matching against the best available theoretical predictions of the waveforms, a technique called matched filtering. All estimates of the sensitivity of LISA to various sources assume that the data analysis is done in the optimum way. Because these techniques are unfamiliar to many young physicists, I use the first part of this lecture to give a very basic introduction to time-series data analysis, including matched filtering. The second part of the lecture applies these techniques to LISA, showing how estimates of LISA's sensitivity can be made, and briefly commenting on aspects of the signal-analysis problem that are special to LISA.
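
    As a minimal illustration of matched filtering for time-series data (not LISA-specific), the sketch below computes the standard noise-weighted SNR <d|h>/sqrt(<h|h>) in the frequency domain, assuming stationary Gaussian noise with a known one-sided PSD. The toy signal and the flat PSD are assumptions made for the example.

```python
import numpy as np

def matched_filter_snr(data, template, psd, dt):
    """Matched-filter SNR <d|h>/sqrt(<h|h>), where the noise-weighted inner product is
    <a|b> = 4 Re sum( conj(a_f) b_f / Sn(f) ) df, for stationary Gaussian noise."""
    n = len(data)
    df = 1.0 / (n * dt)
    d_f = np.fft.rfft(data) * dt
    h_f = np.fft.rfft(template) * dt

    def inner(a, b):
        return 4.0 * np.real(np.sum(np.conj(a) * b / psd)) * df

    return inner(d_f, h_f) / np.sqrt(inner(h_f, h_f))

# Toy usage: white noise (flat PSD) plus a weak, windowed sinusoidal "signal"
rng = np.random.default_rng(2)
dt, n = 1.0, 4096
t = np.arange(n) * dt
template = np.sin(2 * np.pi * 0.01 * t) * np.exp(-((t - 2000) / 500) ** 2)
data = rng.standard_normal(n) + 0.1 * template
psd = np.full(n // 2 + 1, 2.0 * dt)   # one-sided PSD of unit-variance white noise
print(matched_filter_snr(data, template, psd, dt))
```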

    An excess power statistic for detection of burst sources of gravitational radiation

    We examine the properties of an excess power method to detect gravitational waves in interferometric detector data. This method is designed to detect short-duration (< 0.5 s) burst signals of unknown waveform, such as those from supernovae or black hole mergers. If only the bursts' duration and frequency band are known, the method is an optimal detection strategy in both the Bayesian and frequentist senses. It consists of summing the data power over the known time interval and frequency band of the burst. If the detector noise is stationary and Gaussian, this sum is distributed as a chi-squared (non-central chi-squared) deviate in the absence (presence) of a signal. One can use these distributions to compute frequentist detection thresholds for the measured power. We derive the method from Bayesian analyses and show how to compute Bayesian thresholds. More generically, when only upper and/or lower bounds on the bursts' duration and frequency band are known, one must search for excess power over all concordant durations and bands. Two search schemes are presented and their computational efficiencies are compared. We find that, given reasonable constraints on the effective duration and bandwidth of signals, the excess power search can be performed on a single workstation. Furthermore, the method can be almost as efficient as matched filtering when a large template bank is required. Finally, we derive generalizations of the method to a network of several interferometers under the assumption of Gaussian noise.
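
    A minimal sketch of the excess power statistic as described: sum the power of whitened data over a known time interval and frequency band, and threshold against the chi-squared distribution it follows under stationary Gaussian noise. The sample rate, time-frequency tile, and false-alarm probability below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def excess_power(whitened, fs, t_span, f_band):
    """Sum of power in a time-frequency tile of whitened data.

    whitened : data already whitened to unit-variance Gaussian noise
    fs       : sample rate in Hz
    t_span   : (t_start, t_end) of the suspected burst, in seconds
    f_band   : (f_lo, f_hi) frequency band of the burst, in Hz
    """
    i0, i1 = int(t_span[0] * fs), int(t_span[1] * fs)
    seg = whitened[i0:i1]
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    spec = np.fft.rfft(seg)
    band = (freqs >= f_band[0]) & (freqs < f_band[1])
    # Normalize so each retained frequency bin contributes ~chi^2_2 under pure noise
    power = 2.0 * np.sum(np.abs(spec[band]) ** 2) / len(seg)
    dof = 2 * np.count_nonzero(band)
    return power, dof

# Frequentist threshold at a chosen false-alarm probability
rng = np.random.default_rng(3)
fs = 1024.0
data = rng.standard_normal(int(4 * fs))        # pure noise
E, dof = excess_power(data, fs, (1.0, 1.5), (100.0, 200.0))
threshold = stats.chi2.isf(1e-3, dof)          # P(false alarm) = 1e-3
print(E, dof, threshold, E > threshold)
```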

    Edge and Line Feature Extraction Based on Covariance Models

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage, since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performance of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
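
    The sketch below illustrates, in 1-D, the general idea of mapping data to a log-likelihood ratio between two zero-mean Gaussian models with different covariances (an "edge present" model versus a "no edge" model). The specific covariance functions and window length are invented for illustration and are not the paper's models.

```python
import numpy as np

def gaussian_loglike(x, C):
    """Log-likelihood of x under a zero-mean Gaussian with covariance C."""
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (x @ np.linalg.solve(C, x) + logdet + len(x) * np.log(2 * np.pi))

def llr_profile(signal, win, C_edge, C_flat):
    """Slide a window over a 1-D signal and emit the edge-vs-flat log-likelihood ratio."""
    out = np.full(len(signal), np.nan)
    for i in range(len(signal) - win):
        x = signal[i:i + win]
        x = x - x.mean()                     # models are zero-mean
        out[i + win // 2] = gaussian_loglike(x, C_edge) - gaussian_loglike(x, C_flat)
    return out

# Illustrative covariance models for a window of length `win`
win = 16
idx = np.arange(win)
C_flat = 0.5 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0) + 0.1 * np.eye(win)
step = np.where(idx < win // 2, -1.0, 1.0)
C_edge = C_flat + np.outer(step, step)       # extra variance along a step-shaped component

rng = np.random.default_rng(4)
sig = np.concatenate([rng.normal(0, 0.3, 200), rng.normal(2, 0.3, 200)])  # noisy step at n=200
llr = llr_profile(sig, win, C_edge, C_flat)
print("LLR peaks near sample", np.nanargmax(llr))
```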

    A space communications study Final report, 15 Sep. 1966 - 15 Sep. 1967

    Investigation of signal-to-noise ratios and signal transmission efficiency for a space communication system.

    Optimizing gravitational-wave searches for a population of coalescing binaries: Intrinsic parameters

    We revisit the problem of searching for gravitational waves from inspiralling compact binaries in Gaussian coloured noise. For binaries with quasicircular orbits and non-precessing component spins, considering dominant-mode emission only, if the intrinsic parameters of the binary are known then the optimal statistic for a single detector is the well-known two-phase matched filter. However, the matched filter signal-to-noise ratio is not, in general, an optimal statistic for an astrophysical population of signals, since their distribution over the intrinsic parameters will almost certainly not mirror that of noise events, which is determined by the (Fisher) information metric. Instead, the optimal statistic for a given astrophysical distribution will be the Bayes factor, which we approximate using the output of a standard template matched filter search. We then quantify the possible improvement in the number of signals detected for various populations of non-spinning binaries: for a distribution of signals uniform in volume, with component masses distributed uniformly over the range 1 ≤ m_{1,2}/M_⊙ ≤ 24 with (m_1 + m_2)/M_⊙ ≤ 25, at fixed expected SNR, we find ≳ 20% more signals at a false alarm threshold of 10^{-6} Hz in a single detector. The method may easily be generalized to binaries with non-precessing spins.
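
    A hedged sketch of the reweighting idea: instead of ranking candidates by the maximum matched-filter SNR over the template bank, combine per-template likelihood ratios weighted by an assumed astrophysical prior into an approximate Bayes factor. The exp(SNR^2/2) likelihood-ratio proxy and the flat prior weights are simplifying assumptions, not the paper's exact statistic.

```python
import numpy as np

def approx_bayes_factor(snr, log_prior_weight):
    """Population-weighted detection statistic from a bank of matched-filter SNRs.

    snr              : matched-filter SNR of each template, shape (K,)
    log_prior_weight : log of the astrophysical prior mass assigned to each template, shape (K,)
    Returns log of sum_k w_k * exp(snr_k^2 / 2), a simple proxy for the Bayes factor.
    """
    log_lr = 0.5 * snr**2 + log_prior_weight
    m = np.max(log_lr)
    return m + np.log(np.sum(np.exp(log_lr - m)))     # log-sum-exp for numerical stability

# Toy comparison: max-SNR statistic vs population-weighted statistic
rng = np.random.default_rng(5)
K = 500
snr = np.abs(rng.normal(0, 1, K)) + 4.0               # illustrative SNRs across the bank
log_w = np.log(np.full(K, 1.0 / K))                   # flat prior over templates
print("max SNR:", snr.max(), "  log Bayes-factor proxy:", approx_bayes_factor(snr, log_w))
```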

    Robust statistics for deterministic and stochastic gravitational waves in non-Gaussian noise I: Frequentist analyses

    Gravitational wave detectors will need optimal signal-processing algorithms to extract weak signals from the detector noise. Most algorithms designed to date are based on the unrealistic assumption that the detector noise may be modeled as a stationary Gaussian process. However, most experiments exhibit a non-Gaussian "tail" in the probability distribution. This "excess" of large signals can be a troublesome source of false alarms. This article derives an optimal (in the Neyman-Pearson sense, for weak signals) signal processing strategy when the detector noise is non-Gaussian and exhibits such tail terms. This strategy is robust, meaning that it is close to optimal for Gaussian noise but far less sensitive than conventional methods to the excess large events that form the tail of the distribution. The method is analyzed for two different signal analysis problems: (i) a known waveform (e.g., a binary inspiral chirp) and (ii) a stochastic background, which requires a multi-detector signal processing algorithm. The methods should be easy to implement: they amount to truncation or clipping of sample values which lie in the outlier part of the probability distribution.
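
    A minimal sketch of the clipping idea mentioned above: truncate samples that fall in the outlier tail before applying the standard matched filter. The MAD-based scale estimate and the 3-sigma clip threshold are ad hoc choices made for illustration, not the paper's prescription.

```python
import numpy as np

def clipped_matched_filter(data, template, clip_sigma=3.0):
    """Matched filter applied after clipping outlier samples.

    Samples beyond clip_sigma robust standard deviations are truncated,
    which reduces the influence of heavy-tailed noise events.
    """
    scale = 1.4826 * np.median(np.abs(data - np.median(data)))   # robust sigma via MAD
    clipped = np.clip(data, -clip_sigma * scale, clip_sigma * scale)
    return clipped @ template / np.sqrt(template @ template)

# Toy example: Gaussian noise with occasional large glitches plus a weak signal
rng = np.random.default_rng(6)
n = 4096
t = np.arange(n)
template = np.sin(2 * np.pi * t / 64) * np.exp(-((t - 2048) / 256.0) ** 2)
noise = rng.standard_normal(n)
glitches = (rng.random(n) < 0.01) * rng.normal(0, 20, n)          # non-Gaussian tail
data = noise + glitches + 0.05 * template
print("plain:", data @ template / np.sqrt(template @ template),
      "clipped:", clipped_matched_filter(data, template))
```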
