
    Frequency-Domain Stochastic Modeling of Stationary Bivariate or Complex-Valued Signals

    There are three equivalent ways of representing two jointly observed real-valued signals: as a bivariate vector signal, as a single complex-valued signal, or as two analytic signals known as the rotary components. Each representation has unique advantages depending on the system of interest and the application goals. In this paper we provide a joint framework for all three representations in the context of frequency-domain stochastic modeling. This framework allows us to extend many established statistical procedures for bivariate vector time series to complex-valued and rotary representations. These include procedures for parametrically modeling signal coherence, estimating model parameters using the Whittle likelihood, performing semi-parametric modeling, and choosing between classes of nested models. We also provide a new method of testing for impropriety in complex-valued signals, which tests for noncircular or anisotropic second-order statistical structure when the signal is represented in the complex plane. Finally, we demonstrate the usefulness of our methodology in capturing the anisotropic structure of signals observed from fluid dynamic simulations of turbulence.
    Comment: To appear in IEEE Transactions on Signal Processing.
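
    A minimal sketch of the idea behind impropriety (not the hypothesis test proposed in the paper): the sample circularity coefficient |E[z^2]| / E[|z|^2] is near zero for a proper (circular) complex-valued signal and grows toward one as the second-order structure becomes anisotropic. The Gaussian example signals below are illustrative assumptions.

        import numpy as np

        def circularity_coefficient(z):
            """Return |E[z^2]| / E[|z|^2] for a zero-mean version of the signal z."""
            z = np.asarray(z, dtype=complex)
            z = z - z.mean()                       # centre the signal
            pseudo_var = np.mean(z * z)            # complementary (pseudo-)variance E[z^2]
            var = np.mean(np.abs(z) ** 2)          # ordinary variance E[|z|^2]
            return np.abs(pseudo_var) / var

        # Illustrative example: a proper (circular) signal vs. an anisotropic one.
        rng = np.random.default_rng(0)
        n = 10_000
        proper = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        improper = 2.0 * rng.standard_normal(n) + 0.5j * rng.standard_normal(n)
        print(circularity_coefficient(proper))     # close to 0
        print(circularity_coefficient(improper))   # well above 0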

    Sparse Vector Autoregressive Modeling

    The vector autoregressive (VAR) model has been widely used for modeling temporal dependence in a multivariate time series. For large (and even moderate) dimensions, the number of AR coefficients can be prohibitively large, resulting in noisy estimates, unstable predictions and difficult-to-interpret temporal dependence. To overcome such drawbacks, we propose a 2-stage approach for fitting sparse VAR (sVAR) models in which many of the AR coefficients are zero. The first stage selects non-zero AR coefficients based on an estimate of the partial spectral coherence (PSC) together with the use of BIC. The PSC is useful for quantifying the conditional relationship between marginal series in a multivariate process. A second, refinement stage is then applied to further reduce the number of parameters. The performance of this 2-stage approach is illustrated with simulation results. The 2-stage approach is also applied to two real data examples: the first is the Google Flu Trends data and the second is a time series of concentration levels of air pollutants.
    Comment: 39 pages, 7 figures.
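
    As a rough illustration of the screening quantity used in the first stage (using the standard definition of partial spectral coherence rather than the paper's exact estimator), one can estimate the spectral density matrix nonparametrically, invert it at each frequency, and normalize the off-diagonal entries; pairs of marginal series whose PSC is uniformly small across frequencies are candidates for zero AR coefficients, with BIC deciding how many pairs to retain.

        import numpy as np
        from scipy.signal import csd

        def partial_spectral_coherence(x, fs=1.0, nperseg=256):
            """x: (T, p) array holding a multivariate time series.
            Returns frequencies and an (n_freq, p, p) array of PSC magnitudes."""
            x = np.asarray(x, dtype=float)
            _, p = x.shape
            f, _ = csd(x[:, 0], x[:, 0], fs=fs, nperseg=nperseg)
            S = np.zeros((len(f), p, p), dtype=complex)    # spectral density matrix S(w)
            for i in range(p):
                for j in range(p):
                    _, S[:, i, j] = csd(x[:, i], x[:, j], fs=fs, nperseg=nperseg)
            psc = np.zeros((len(f), p, p))
            for k in range(len(f)):
                G = np.linalg.inv(S[k])                    # inverse spectral matrix
                d = np.sqrt(np.real(np.diag(G)))
                psc[k] = np.abs(G) / np.outer(d, d)        # |G_ij| / sqrt(G_ii G_jj)
            return f, psc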

    Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch between the manifold described by the parameters and its polar approximation. To quantify the improvements afforded by the proposed extensions, we evaluate six algorithms for estimation of parameters in sparse translation-invariant signals, exemplified with the time delay estimation problem. The evaluation is based on three performance metrics: estimator precision, sampling rate and computational complexity. We use compressive sensing with all the algorithms to lower the necessary sampling rate and show that it is still possible to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super-resolution algorithm. The algorithms studied here provide various tradeoffs between computational complexity, estimation precision, and necessary sampling rate. The work shows that compressive sensing for the class of sparse translation-invariant signals allows for a decrease in sampling rate and that the use of polar interpolation increases the estimation precision.
    Comment: 13 pages, 5 figures, to appear in IEEE Transactions on Signal Processing; minor edits and corrections.
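
    The sketch below illustrates only the measurement model being refined, not the polar-interpolation estimator itself: a pulse with an off-grid delay is observed through a random compressive measurement matrix, and the delay is recovered by matching against a compressed dictionary of shifted pulses, which is accurate only to the grid spacing. The pulse shape, measurement matrix, and noise level are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        N, M = 512, 64                                   # ambient dimension, measurements
        t = np.arange(N)

        def pulse(delay, width=4.0):
            return np.exp(-0.5 * ((t - delay) / width) ** 2)

        true_delay = 203.7                               # off-grid delay to recover
        x = 1.3 * pulse(true_delay)                      # amplitude * shifted pulse
        Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random compressive measurement matrix
        y = Phi @ x + 0.01 * rng.standard_normal(M)      # noisy compressive measurements

        # Grid-based estimate: match y against a compressed dictionary of shifted pulses.
        grid = np.arange(0.0, N, 1.0)
        D = Phi @ np.stack([pulse(d) for d in grid], axis=1)
        scores = np.abs(D.T @ y) / np.linalg.norm(D, axis=0)
        print(grid[np.argmax(scores)])                   # accurate only to the grid spacing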

    Interpolation of nonstationary high frequency spatial-temporal temperature data

    The Atmospheric Radiation Measurement program is a U.S. Department of Energy project that collects meteorological observations at several locations around the world in order to study how weather processes affect global climate change. As one of its initiatives, it operates a set of fixed but irregularly spaced monitoring facilities in the Southern Great Plains region of the U.S. We describe methods for interpolating temperature records from these fixed facilities to locations at which no observations were made, which can be useful when values are required on a spatial grid. We interpolate by conditionally simulating from a fitted nonstationary Gaussian process model that accounts for the time-varying statistical characteristics of the temperatures, as well as the dependence on solar radiation. The model is fit by maximizing an approximate likelihood, and the conditional simulations result in well-calibrated confidence intervals for the predicted temperatures. We also describe methods for handling spatial-temporal jumps in the data to interpolate a slow-moving cold front.
    Comment: Published at http://dx.doi.org/10.1214/13-AOAS633 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
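
    A minimal sketch of the conditioning step behind this kind of interpolation, with a deliberately simple stationary exponential covariance on one-dimensional locations standing in for the paper's fitted nonstationary space-time model: the kriging mean gives the point prediction, and draws from the conditional covariance give simulations whose pointwise quantiles yield prediction intervals.

        import numpy as np

        def conditional_simulation(obs_locs, obs_vals, pred_locs, range_=1.0, sill=1.0,
                                   nugget=1e-6, n_sims=100, seed=0):
            """Kriging mean and conditional simulations on 1-D locations."""
            rng = np.random.default_rng(seed)
            obs_locs, obs_vals, pred_locs = map(np.asarray, (obs_locs, obs_vals, pred_locs))

            def cov(a, b):                             # exponential covariance (illustrative)
                return sill * np.exp(-np.abs(a[:, None] - b[None, :]) / range_)

            K_oo = cov(obs_locs, obs_locs) + nugget * np.eye(len(obs_locs))
            K_po = cov(pred_locs, obs_locs)
            K_pp = cov(pred_locs, pred_locs)
            mean = K_po @ np.linalg.solve(K_oo, obs_vals)            # kriging predictor
            cov_c = K_pp - K_po @ np.linalg.solve(K_oo, K_po.T)      # conditional covariance
            L = np.linalg.cholesky(cov_c + nugget * np.eye(len(pred_locs)))
            sims = mean[:, None] + L @ rng.standard_normal((len(pred_locs), n_sims))
            return mean, sims      # pointwise quantiles of sims give prediction intervals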

    An evaluation of intrusive instrumental intelligibility metrics

    Instrumental intelligibility metrics are commonly used as an alternative to listening tests. This paper evaluates 12 monaural intrusive intelligibility metrics: SII, HEGP, CSII, HASPI, NCM, QSTI, STOI, ESTOI, MIKNN, SIMI, SIIB, and sEPSM^corr. In addition, this paper investigates the ability of intelligibility metrics to generalize to new types of distortions and analyzes why the top-performing metrics perform well. The intelligibility data were obtained from 11 listening tests described in the literature. The stimuli included Dutch, Danish, and English speech that was distorted by additive noise, reverberation, competing talkers, pre-processing enhancement, and post-processing enhancement. SIIB and HASPI had the highest performance, achieving average correlations with listening test scores of ρ = 0.92 and ρ = 0.89, respectively. The high performance of SIIB may, in part, be the result of SIIB's developers having had access to all the intelligibility data considered in the evaluation. The results show that intelligibility metrics tend to perform poorly on data sets that were not used during their development. By modifying the original implementations of SIIB and STOI, the advantage of reducing statistical dependencies between input features is demonstrated. Additionally, the paper presents a new version of SIIB called SIIB^Gauss, which has performance similar to SIIB and HASPI but is two orders of magnitude faster to compute.
    Comment: Published in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018.
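
    A sketch, under illustrative assumptions, of the kind of per-test evaluation described above: metric outputs for a set of conditions are mapped to the intelligibility scale through a fitted logistic function (a common convention in this literature; the paper's exact protocol may differ), and the Pearson correlation with the measured listening-test scores is reported.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import pearsonr

        def logistic(x, a, b):
            # Map metric scores onto a 0-100% intelligibility scale.
            return 100.0 / (1.0 + np.exp(a * x + b))

        def evaluate_metric(metric_scores, listening_scores):
            """Both arguments: 1-D arrays with one value per test condition;
            listening_scores are assumed to be percentages in [0, 100]."""
            metric_scores = np.asarray(metric_scores, dtype=float)
            listening_scores = np.asarray(listening_scores, dtype=float)
            (a, b), _ = curve_fit(logistic, metric_scores, listening_scores,
                                  p0=[-1.0, 0.0], maxfev=10_000)
            rho, _ = pearsonr(logistic(metric_scores, a, b), listening_scores)
            return rho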