768 research outputs found

    A New Regularized Adaptive Windowed Lomb Periodogram for Time-Frequency Analysis of Nonstationary Signals With Impulsive Components

    This paper proposes a new class of windowed Lomb periodogram (WLP) for time-frequency analysis of nonstationary signals, which may contain impulsive components and may be nonuniformly sampled. The proposed methods significantly extend the conventional Lomb periodogram in two aspects: 1) the nonstationarity problem is addressed by employing weighted least squares (WLS) to estimate the time-varying periodogram locally, together with an intersection-of-confidence-intervals technique to adaptively select the WLS window sizes in the time-frequency domain, yielding an adaptive WLP (AWLP) with a better tradeoff between time resolution and frequency resolution; 2) a more general regularized maximum-likelihood-type (M-) estimator replaces the LS estimator in estimating the AWLP, yielding a novel M-estimation-based regularized AWLP that reduces estimation variance, accentuates predominant time-frequency components, restrains the adverse influence of impulsive components, and separates impulsive components. Simulations illustrate the advantages of the proposed method over the conventional Lomb periodogram in adaptive time-frequency resolution, sparse representation of sinusoids, robustness to impulsive components, and applicability to nonuniformly sampled data. Moreover, because the computation of the proposed method at each time sample and frequency is independent of the others, parallel computing can be conveniently employed to significantly reduce the computational time for real-time applications. The proposed method is expected to find a wide range of applications in instrumentation and measurement and related areas; its potential applications to power quality analysis and speech signal analysis are also discussed and demonstrated.
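    The conventional Lomb periodogram that the AWLP builds on is available in SciPy. As a minimal sketch (the signal, sampling pattern, and frequency grid here are illustrative, not the paper's setup), it can be evaluated directly on nonuniformly sampled data:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))   # nonuniform sampling instants
x = np.sin(2 * np.pi * 1.5 * t)            # 1.5 Hz tone

freqs_hz = np.linspace(0.1, 5.0, 500)
omegas = 2 * np.pi * freqs_hz              # lombscargle expects angular frequencies
pgram = lombscargle(t, x, omegas)

print(f"peak at {freqs_hz[np.argmax(pgram)]:.2f} Hz")  # close to 1.5 Hz
```

    The AWLP's contribution, per the abstract, is to replace the single global LS fit behind this periodogram with windowed, adaptively sized, robustly regularized local fits.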

    Classical sampling theorems in the context of multirate and polyphase digital filter bank structures

    The recovery of a signal from so-called generalized samples is a problem of designing appropriate linear filters called reconstruction (or synthesis) filters. This relationship is reviewed and explored. Novel theorems for the subsampling of sequences are derived by direct use of the digital-filter-bank framework. These results are related to the theory of perfect reconstruction in maximally decimated digital-filter-bank systems. One of the theorems pertains to the subsampling of a sequence and its first few differences, and their subsequent stable reconstruction at finite cost with no error. The reconstruction filters turn out to be multiplierless and of the FIR (finite impulse response) type. These ideas are extended to two-dimensional signals by use of a Kronecker formalism. The subsampling of bandlimited sequences is also considered. A sequence x(n) whose Fourier transform vanishes for |ω| ≥ Lπ/M, where L and M are integers with L < M, can in principle be represented with the data rate reduced by the factor M/L. The digital polyphase framework is used as a convenient tool for both the derivation and the mechanization of this sampling theorem.
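    The polyphase framework invoked above can be illustrated with its simplest identity: splitting a sequence into M type-1 polyphase components and interleaving them back reconstructs the sequence exactly. This is the identity around which perfect-reconstruction filter banks are built (a toy numpy sketch, not the paper's filter designs):

```python
import numpy as np

def polyphase_split(x, M):
    """Type-1 polyphase components: e_k(n) = x(nM + k), k = 0..M-1."""
    pad = (-len(x)) % M
    xp = np.concatenate([x, np.zeros(pad)])
    return xp.reshape(-1, M).T          # row k holds component e_k

def polyphase_merge(comps):
    """Interleave the components back into the original sequence."""
    return comps.T.reshape(-1)

x = np.arange(12, dtype=float)
comps = polyphase_split(x, M=3)          # 3 branches, each at 1/3 the rate
y = polyphase_merge(comps)
print(np.allclose(y[:len(x)], x))        # True: perfect reconstruction
```

    In an actual filter bank, analysis and synthesis filters act on these branch signals; the subsampling theorems in the paper determine when such a system can invert the decimation without error.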

    Spectral analysis of randomly sampled signals: suppression of aliasing and sampler jitter

    Nonuniform sampling can facilitate digital alias-free signal processing (DASP), i.e., digital signal processing that is not affected by aliasing. This paper presents two DASP approaches for spectrum estimation of continuous-time signals. The proposed algorithms, named the weighted sample (WS) and weighted probability (WP) density functions, respectively, utilize random sampling to suppress aliasing. Both methods produce unbiased estimators of the signal spectrum. To achieve this, the computational procedure for each method has been matched to the probability density function characterizing the pseudorandom generators of the sampling instants. Both proposed methods are analyzed, and the qualities of the estimators they produce are compared. Although neither spectrum estimator is universally better than the other, it is shown that in practical cases the WP estimator generally produces smaller errors than WS estimation. A practical limitation of the approaches caused by sampling-instant jitter is also studied. It is proven that in the presence of jitter, the theoretically infinite bandwidths of WS and WP signal analyses become limited: the maximum frequency up to which these analyses can be performed is inversely proportional to the size of the jitter.
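    This is not the WS/WP estimators themselves, but a minimal numpy sketch of the effect they exploit: with Poisson (additive random) sampling, a tone above the mean-rate Nyquist frequency still peaks at its true frequency in a crude direct spectral estimate, whereas uniform sampling at the same mean rate would fold it onto an alias:

```python
import numpy as np

rng = np.random.default_rng(1)
f0 = 7.0                       # tone frequency, above the 5 Hz mean-rate Nyquist
mean_rate = 10.0               # average samples per second

# Poisson (additive random) sampling: i.i.d. exponential gaps
t = np.cumsum(rng.exponential(1.0 / mean_rate, size=200))
x = np.cos(2 * np.pi * f0 * t)

freqs = np.linspace(0.1, 10.0, 1000)
# crude direct estimate |sum_n x(t_n) exp(-j 2 pi f t_n)|^2 / N
spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x) ** 2 / len(t)

# uniform 10 Hz sampling would alias the 7 Hz tone to 3 Hz; here it does not
print(f"peak at {freqs[np.argmax(spec)]:.2f} Hz")
```

    The WS and WP methods refine this idea by weighting the samples (or the sampling-instant probabilities) so that the resulting spectrum estimator is unbiased.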

    Channel Capacity under Sub-Nyquist Nonuniform Sampling

    This paper investigates the effect of sub-Nyquist sampling on the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, with perfect channel knowledge available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods, which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling at each branch (possibly at different rates), or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, while typically complicated to realize, provides no capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components provides no capacity gain, in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
    Comment: accepted to IEEE Transactions on Information Theory, 201
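    A small numerical sketch of the headline characterization: select the spectral set of measure equal to the sampling rate with the highest SNR, then sum the log-capacity over that set. The SNR profile below is illustrative, and the power re-allocation (water-filling) step of the full capacity result is omitted for simplicity:

```python
import numpy as np

def sampled_capacity(freqs, snr, fs):
    """Capacity proxy (bits/s): keep the spectral set of measure fs with the
    largest SNR and integrate log2(1 + SNR) over it (power allocation fixed)."""
    df = freqs[1] - freqs[0]
    order = np.argsort(snr)[::-1]        # highest-SNR frequencies first
    n_keep = int(round(fs / df))         # spectral measure fs
    return np.sum(np.log2(1.0 + snr[order[:n_keep]])) * df

freqs = np.linspace(0.0, 1.0, 1001)      # one-sided band of 1 Hz
snr = 10.0 * np.exp(-5.0 * freqs)        # illustrative decaying SNR profile

for fs in (0.25, 0.5, 1.0):
    print(fs, round(sampled_capacity(freqs, snr, fs), 3))
```

    With a monotonically decaying SNR, the selected set is simply the lowest band of width fs; for a multiband SNR profile the optimal set can be disconnected, which is where filterbank or modulation-based samplers earn their keep.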

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels residing on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or an image patch) as a signal on a graph and apply GSP tools to process and analyze it in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
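    A minimal sketch of the graph-spectral viewpoint described above: treat a small patch as a signal on a 4-connected pixel grid, use the Laplacian eigenvectors as the graph Fourier basis, and apply an ideal graph low-pass filter. Uniform edge weights are used here; structure-adaptive weights are where the surveyed methods differ:

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian L = D - A of a 4-connected h-by-w pixel grid."""
    n = h * w
    A = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w: A[i, i + 1] = A[i + 1, i] = 1.0
            if r + 1 < h: A[i, i + w] = A[i + w, i] = 1.0
    return np.diag(A.sum(1)) - A

h = w = 8
patch = np.add.outer(np.arange(h), np.arange(w)).astype(float)  # smooth ramp "image"
L = grid_laplacian(h, w)
evals, U = np.linalg.eigh(L)           # graph Fourier basis: eigenvectors of L

x = patch.ravel()
x_hat = U.T @ x                        # graph Fourier transform
x_hat[evals > 1.0] = 0.0               # ideal graph low-pass filter
y = U @ x_hat                          # filtered patch (inverse transform)

print(np.linalg.norm(y - x) / np.linalg.norm(x))  # small: the ramp is smooth
```

    The same machinery supports the surveyed tasks: compression keeps the dominant graph-frequency coefficients, and restoration regularizes with the Laplacian quadratic form x'Lx.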

    Extended Fourier analysis of signals

    This summary of the doctoral thesis is written to emphasize the close connection of the proposed spectral analysis method with the Discrete Fourier Transform (DFT), the most extensively studied and frequently used approach in the history of signal processing. It is shown that in a typical application, where uniform data readings are transformed to the same number of uniformly spaced frequencies, the results of the classical DFT and the proposed approach coincide. The difference in performance appears when the length of the DFT is chosen greater than the length of the data. The DFT solves the unknown-data problem by padding the readings with zeros up to the length of the DFT, while the proposed Extended DFT (EDFT) deals with this situation differently: it uses the Fourier integral transform as a target and optimizes the transform basis in the extended frequency range without imposing such restrictions on the time domain. Consequently, the Inverse DFT (IDFT) applied to the result of the EDFT returns not only the known readings but also extrapolated data, where the classical DFT can give back only zeros, and higher resolution is achieved at frequencies where the data have been successfully extended. It is demonstrated that the EDFT is able to process data with missing readings or gaps, and even nonuniformly distributed data. Thus, the EDFT significantly extends the usability of DFT-based methods to settings where these approaches were previously considered inapplicable. The EDFT finds its solution iteratively and requires repeated calculations to obtain the adaptive basis, which makes its numerical complexity much higher than that of the DFT. This disadvantage was a serious problem in the 1990s, when the method was proposed; fortunately, the power of computers has since increased so much that the EDFT can now be considered a practical alternative.
    Comment: 29 pages, 8 figures
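    The zero-padding behavior contrasted above is easy to verify numerically: the IDFT of a zero-padded DFT returns the known readings followed by exact zeros, which is precisely the gap that EDFT's extrapolation targets (a numpy check of the DFT side only, not the EDFT algorithm itself):

```python
import numpy as np

N, K = 16, 64                       # data length and (longer) transform length
n = np.arange(N)
x = np.cos(2 * np.pi * 0.11 * n)    # illustrative signal

X = np.fft.fft(x, K)                # zero-padded length-K DFT
y = np.fft.ifft(X)                  # back to time domain, length K

print(np.allclose(y[:N].real, x))   # True: known readings are returned...
print(np.allclose(y[N:], 0.0))      # True: ...and the extension is just zeros
```

    EDFT, by contrast, is designed so that the same inverse transform returns a nontrivial extrapolation over samples N..K-1.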

    Formulations for Estimating Spatial Variations of Analysis Error Variance to Improve Multiscale and Multistep Variational Data Assimilation

    When the coarse-resolution observations used in the first step of multiscale and multistep variational data assimilation become increasingly nonuniform and/or sparse, the error variance of the first-step analysis tends to exhibit increasingly large spatial variations. However, the analysis error variance computed from the previously developed spectral formulations is constant and thus can represent only the spatially averaged error variance. To overcome this limitation, analytic formulations are constructed to efficiently estimate the spatial variation of the analysis error variance and the associated spatial variation in the analysis error covariance. First, a suite of formulations is constructed to efficiently estimate the error-variance reduction produced by analyzing the coarse-resolution observations in one- and two-dimensional spaces with increasing complexity and generality (from uniformly distributed observations with periodic extension to nonuniformly distributed observations without periodic extension). Then, three different formulations are constructed for using the estimated analysis error variance to modify the analysis error covariance computed from the spectral formulations. The successively improved accuracy of these three formulations and their increasingly positive impact on the two-step variational analysis (or on the first two steps of a multistep variational analysis) are demonstrated by idealized experiments.
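    For orientation, the quantity being approximated is the diagonal of the standard analysis error covariance A = B - B H^T (H B H^T + R)^{-1} H B. A small numpy sketch with a Gaussian background covariance and illustrative parameters shows how clustered versus isolated observations make that diagonal vary in space; the paper's contribution is analytic formulas that avoid this brute-force matrix computation:

```python
import numpy as np

def analysis_error_variance(grid, obs_x, sig_b, sig_o, L):
    """Diagonal of A = B - B H^T (H B H^T + R)^{-1} H B on a 1D grid, with a
    Gaussian background covariance of length scale L and uncorrelated obs errors."""
    def B(a, b):
        return sig_b**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)
    Bxy = B(grid, obs_x)                             # grid-to-obs covariance
    Byy = B(obs_x, obs_x) + sig_o**2 * np.eye(len(obs_x))
    reduction = np.einsum('ij,jk,ik->i', Bxy, np.linalg.inv(Byy), Bxy)
    return sig_b**2 - reduction

grid = np.linspace(0.0, 10.0, 101)
obs_x = np.array([1.0, 1.5, 2.0, 8.0])   # a dense cluster plus one isolated obs
var_a = analysis_error_variance(grid, obs_x, sig_b=1.0, sig_o=0.5, L=1.0)
print(var_a.min(), var_a.max())          # small near the cluster, near sig_b^2 far away
```

    The spatial variation visible here is exactly what a constant (spectrally derived) error variance cannot capture when the observations are nonuniform.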

    Nonuniformly and Randomly Sampled Systems

    Problems with missing data, sampling irregularities, and randomly sampled systems are the topics of this dissertation. The spectral analysis of a series of periodically repeated sampling patterns is developed. Methods for parameter estimation of autoregressive moving-average models from partial observations, together with an algorithm to fill in the missing data, are derived and demonstrated by simulation programs. Interpolation of missing data using bandlimiting assumptions and discrete Fourier transform techniques is developed. Representation and analysis of randomly sampled linear systems with independent and identically distributed sampling intervals are studied. The mean and mean-square behavior of a multiple-input multiple-output randomly sampled system are found. A definition of, and results concerning, the power spectral density gain are also given. A complete FORTRAN simulation package is developed and implemented in a microcomputer environment, demonstrating the new algorithms.
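    The bandlimited missing-data interpolation mentioned above can be sketched as a least-squares fit in a restricted DFT basis: assume the sequence's spectrum is confined to a known set of bins, fit those bins' coefficients to the known samples, and synthesize the gap. The band support, gap location, and test signal below are illustrative, and the dissertation's own algorithm may differ:

```python
import numpy as np

def bandlimited_fill(x, known, band_bins):
    """Fill missing samples of a bandlimited sequence: fit the in-band DFT-basis
    coefficients to the known samples (least squares), then synthesize all N."""
    N = len(x)
    n = np.arange(N)
    basis = np.exp(2j * np.pi * np.outer(n, band_bins) / N)  # in-band DFT atoms
    coef, *_ = np.linalg.lstsq(basis[known], x[known], rcond=None)
    return (basis @ coef).real

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 3 * n / N)           # bandlimited test sequence (bins +-3)
known = np.ones(N, bool)
known[20:30] = False                        # ten consecutive missing readings

band_bins = np.r_[0:6, 59:64]               # assumed low-pass support (bins 0..5, -5..-1)
y = bandlimited_fill(x, known, band_bins)
print(np.max(np.abs(y - x)))                # near machine precision: gap recovered
```

    Recovery is exact here because the signal truly lies in the assumed band and the known samples outnumber the in-band coefficients; with noisy data or a misspecified band, the fit degrades gracefully rather than failing outright.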