    Algorithms for Spectral Analysis of Irregularly Sampled Time Series

    In this paper, we present a spectral analysis method based on least squares approximation that handles nonuniform sampling. It provides meaningful phase information that varies in a predictable way as the samples are shifted in time. We compare least squares approximations of real and complex series, analyze their properties as the sample count tends to infinity as well as their estimator behaviour, and show equivalence to the discrete Fourier transform applied to uniformly sampled data as a special case. We propose a way to deal with the undesirable side effects of nonuniform sampling in the presence of constant offsets. Using weighted least squares approximation, we introduce an analogue of the Morlet wavelet transform for nonuniformly sampled data. Asymptotically fast divide-and-conquer schemes for computing the variants of the proposed method are presented, and their usefulness is demonstrated in several relevant applications.
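
    As a hedged illustration of the core idea only (not the paper's fast divide-and-conquer algorithms), the Python sketch below fits a cosine, a sine, and a constant offset column by least squares at each trial frequency; the function name ls_spectrum and all parameters are hypothetical:

        import numpy as np

        def ls_spectrum(t, x, freqs):
            """Least-squares amplitude and phase estimates at each trial frequency."""
            amp, phase = np.empty(len(freqs)), np.empty(len(freqs))
            for k, f in enumerate(freqs):
                w = 2 * np.pi * f
                # Design matrix: cosine, sine, and a constant column for offsets.
                A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
                a, b, _ = np.linalg.lstsq(A, x, rcond=None)[0]
                amp[k] = np.hypot(a, b)
                phase[k] = np.arctan2(-b, a)  # shifts predictably with time shifts
            return amp, phase

        # A 3 Hz tone sampled at 200 random instants over 10 seconds.
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0.0, 10.0, 200))
        x = np.cos(2 * np.pi * 3.0 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
        amp, phase = ls_spectrum(t, x, np.linspace(0.1, 5.0, 100))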

    Period Estimation and Denoising Families of Nonuniformly Sampled Time Series

    Nonuniformly sampled time series are common in astronomy, finance, and other areas of research. Commonly, these time series belong to a family of signals recorded from the same phenomenon. Period estimation and denoising of such data rely on periodograms. In particular, the Lomb-Scargle periodogram and its extension, the Multiband Lomb-Scargle, are at the forefront of time series period estimation. However, these methods are not without flaws. This thesis explores alternatives to the Lomb-Scargle and Multiband Lomb-Scargle; in particular, it uses regularized least squares and the convolution theorem to introduce a spectral consensus model of a family of nonuniformly sampled time series.
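
    For reference, the single-band Lomb-Scargle periodogram this thesis builds on is available in SciPy; a minimal, hedged sketch of period estimation for one nonuniformly sampled series follows (the thesis's consensus model over a family of series is not reproduced):

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(1)
        t = np.sort(rng.uniform(0.0, 100.0, 300))   # nonuniform sample times
        true_period = 7.3
        y = np.sin(2 * np.pi * t / true_period) + 0.3 * rng.standard_normal(t.size)

        periods = np.linspace(2.0, 20.0, 2000)
        ang_freqs = 2 * np.pi / periods             # lombscargle expects rad/s
        power = lombscargle(t, y - y.mean(), ang_freqs, normalize=True)
        est_period = periods[np.argmax(power)]      # peak gives the period estimate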

    Optimized Nonuniform FFTs and Their Application to Array Factor Computation

    We develop an optimized approach for implementing nonuniform fast Fourier transform (NUFFT) algorithms under a general and new perspective for 1-D transformations. The computations of nonequispaced results, nonequispaced data, and Type-3 nonuniform discrete Fourier transforms (NUDFTs) are tackled in a unified way: they exploit “uniformly sampled” exponentials to interpolate the “nonuniformly sampled” ones involved in the NUDFTs, so as to enable the use of standard fast Fourier transforms together with an optimized window. The computational costs and memory requirements are analyzed, and the favourable performance of the algorithms is assessed by comparison with other approaches in the literature. Numerical results demonstrate that the method is more accurate without introducing any additional computational or memory burden. The computation of the window functions amounts to a Legendre polynomial expansion, i.e., a simple polynomial evaluation, which is convenient both in terms of computational burden and in the proper arrangement of the calculations. A case study of electromagnetic interest has been carried out by applying the developed NUFFTs to the radiation of linear regular or irregular arrays onto a set of regular or irregular spectral points. Guidelines for the multidimensional extension of the proposed approach are also presented.
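
    The exact transform that such NUFFT algorithms approximate can be written in a few lines; the hedged sketch below shows only this O(NM) reference computation for a Type-2 (nonequispaced results) transform, not the paper's window optimization or FFT-based acceleration:

        import numpy as np

        def nudft_type2(coeffs, points):
            """Evaluate f(x_j) = sum_k c_k exp(i k x_j) at nonuniform points x_j."""
            n = len(coeffs)
            k = np.arange(-(n // 2), n - n // 2)    # centered frequency indices
            return np.exp(1j * np.outer(points, k)) @ coeffs

        rng = np.random.default_rng(2)
        c = rng.standard_normal(16) + 1j * rng.standard_normal(16)
        x = np.sort(rng.uniform(0.0, 2 * np.pi, 40))
        f = nudft_type2(c, x)   # the values a Type-2 NUFFT approximates quickly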

    Classical sampling theorems in the context of multirate and polyphase digital filter bank structures

    The recovery of a signal from so-called generalized samples is a problem of designing appropriate linear filters called reconstruction (or synthesis) filters. This relationship is reviewed and explored. Novel theorems for the subsampling of sequences are derived by direct use of the digital-filter-bank framework. These results are related to the theory of perfect reconstruction in maximally decimated digital-filter-bank systems. One of the theorems pertains to the subsampling of a sequence and its first few differences and its subsequent stable reconstruction, at finite cost and with no error; the reconstruction filters turn out to be multiplierless and of the FIR (finite impulse response) type. These ideas are extended to the case of two-dimensional signals by use of a Kronecker formalism. The subsampling of bandlimited sequences is also considered: a sequence x(n) whose Fourier transform vanishes for |ω| ≥ Lπ/M, where L and M are integers with L < M, can in principle be represented by reducing the data rate by the factor M/L. The digital polyphase framework is used as a convenient tool for both the derivation and the mechanization of this sampling theorem.
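
    A hedged sketch of the polyphase mechanics underlying these derivations: splitting a sequence into its M polyphase components and interleaving them back is the identity around which maximally decimated filter banks are built (the helper names are illustrative):

        import numpy as np

        def polyphase_split(x, M):
            """Return the M polyphase components x_r[n] = x[n*M + r]."""
            pad = (-len(x)) % M                     # zero-pad to a multiple of M
            x = np.concatenate([x, np.zeros(pad, dtype=x.dtype)])
            return [x[r::M] for r in range(M)]

        def polyphase_merge(parts):
            """Interleave polyphase components back into a single sequence."""
            M, n = len(parts), len(parts[0])
            y = np.empty(M * n, dtype=parts[0].dtype)
            for r, p in enumerate(parts):
                y[r::M] = p
            return y

        x = np.arange(10.0)
        assert np.allclose(polyphase_merge(polyphase_split(x, 3))[:10], x)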

    A New Regularized Adaptive Windowed Lomb Periodogram for Time-Frequency Analysis of Nonstationary Signals With Impulsive Components

    This paper proposes a new class of windowed Lomb periodogram (WLP) for time-frequency analysis of nonstationary signals, which may contain impulsive components and may be nonuniformly sampled. The proposed methods significantly extend the conventional Lomb periodogram in two aspects: 1) the nonstationarity problem is addressed by employing weighted least squares (WLS) to estimate the time-varying periodogram locally, with an intersection of confidence intervals technique to adaptively select the WLS window sizes in the time-frequency domain; this yields an adaptive WLP (AWLP) with a better tradeoff between time resolution and frequency resolution. 2) A more general regularized maximum-likelihood-type (M-) estimator is used instead of the LS estimator in estimating the AWLP; this yields a novel M-estimation-based regularized AWLP method capable of reducing estimation variance, accentuating predominant time-frequency components, restraining the adverse influence of impulsive components, and separating impulsive components. Simulations illustrate the advantages of the proposed method over the conventional Lomb periodogram in adaptive time-frequency resolution, sparse representation of sinusoids, robustness to impulsive components, and applicability to nonuniformly sampled data. Moreover, as the computation of the proposed method at each time sample and frequency is independent of the others, parallel computing can be conveniently employed to significantly reduce the computational time for real-time applications. The proposed method is expected to find a wide range of applications in instrumentation and measurement and related areas; its potential applications to power quality analysis and speech signal analysis are also discussed and demonstrated.
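
    A minimal, hedged sketch of the windowed Lomb idea at a single time-frequency point: weight a least squares fit of a cosine/sine pair by a Gaussian window centered at t0. The paper's adaptive window selection and regularized M-estimation are not reproduced; all names and parameters here are illustrative:

        import numpy as np

        def wls_lomb_point(t, x, t0, f, sigma):
            """Windowed Lomb value at time t0 and frequency f via weighted LS."""
            w = np.exp(-0.5 * ((t - t0) / sigma) ** 2)   # Gaussian window weights
            sw = np.sqrt(w)
            omega = 2 * np.pi * f
            A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
            coef = np.linalg.lstsq(A * sw[:, None], x * sw, rcond=None)[0]
            return coef[0] ** 2 + coef[1] ** 2           # local power estimate

        # A chirp-like test: 2 Hz before t = 5, 4 Hz after.
        rng = np.random.default_rng(4)
        t = np.sort(rng.uniform(0.0, 10.0, 400))
        x = np.sin(2 * np.pi * 2.0 * t) * (t < 5) + np.sin(2 * np.pi * 4.0 * t) * (t >= 5)
        p = wls_lomb_point(t, x, t0=2.5, f=2.0, sigma=0.5)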

    Spectral analysis of randomly sampled signals: suppression of aliasing and sampler jitter

    Nonuniform sampling can facilitate digital alias-free signal processing (DASP), i.e., digital signal processing that is not affected by aliasing. This paper presents two DASP approaches for spectrum estimation of continuous-time signals. The proposed algorithms, named the weighted sample (WS) and weighted probability (WP) density function methods, respectively, utilize random sampling to suppress aliasing. Both methods produce unbiased estimators of the signal spectrum. To achieve this, the computational procedure for each method has been matched with the probability density function characterising the pseudorandom generators of the sampling instants. Both methods are analyzed, and the qualities of the estimators they produce are compared. Although neither of the proposed spectrum estimators is universally better than the other, it is shown that in practical cases the WP estimator generally produces smaller errors than WS estimation. A practical limitation of the approaches caused by sampling-instant jitter is also studied: it is proven that in the presence of jitter, the theoretically infinite bandwidths of WS and WP signal analyses become limited, and the maximum frequency up to which these analyses can be performed is inversely proportional to the size of the jitter.
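
    A toy sketch of the common core behind such estimators: weighting each sample by the reciprocal of the sampling density p(t_k) makes the Monte Carlo sum an unbiased estimate of the Fourier integral, with no alias structure tied to a fixed sampling grid. This is only the generic idea; the paper's specific WS and WP constructions and their variance analysis are not reproduced:

        import numpy as np

        def random_sampling_spectrum(t, x, p_t, freqs, T):
            """Unbiased estimate of (1/T) * integral of x(t) exp(-i 2 pi f t) dt."""
            est = np.empty(len(freqs), dtype=complex)
            for k, f in enumerate(freqs):
                est[k] = np.mean(x * np.exp(-2j * np.pi * f * t) / p_t) / T
            return est

        rng = np.random.default_rng(5)
        T = 50.0
        t = rng.uniform(0.0, T, 500)            # uniform random sampling instants
        x = np.cos(2 * np.pi * 1.7 * t)
        S = random_sampling_spectrum(t, x, np.full(t.size, 1.0 / T),
                                     np.linspace(0.0, 5.0, 200), T)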

    Extended Fourier analysis of signals

    This summary of the doctoral thesis is written to emphasize the close connection of the proposed spectral analysis method with the Discrete Fourier Transform (DFT), the most extensively studied and frequently used approach in the history of signal processing. It is shown that in a typical application, where uniform data readings are transformed to the same number of uniformly spaced frequencies, the results of the classical DFT and the proposed approach coincide. The difference in performance appears when the length of the DFT is selected to be greater than the length of the data. The DFT solves the unknown-data problem by padding the readings with zeros up to the length of the DFT, while the proposed Extended DFT (EDFT) deals with this situation differently: it uses the Fourier integral transform as a target and optimizes the transform basis in the extended frequency range without placing such restrictions on the time domain. Consequently, the inverse DFT (IDFT) applied to the result of the EDFT returns not only the known readings but also extrapolated data, where the classical DFT can give back only zeros, and higher resolution is achieved at frequencies where the data have been successfully extended. It has been demonstrated that the EDFT is able to process data with missing readings or gaps, and even nonuniformly distributed data; thus, the EDFT significantly extends the usability of DFT-based methods to cases where they were previously considered inapplicable. The EDFT finds its solution iteratively and requires repeated calculations to obtain the adaptive basis, which makes its numerical complexity much higher than that of the DFT. This disadvantage was a serious problem in the 1990s, when the method was proposed; fortunately, the power of computers has since increased so much that EDFT can nowadays be considered a real alternative.
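
    A quick sketch of the zero-padding baseline behaviour that the summary contrasts EDFT against: padding N readings to a length-K DFT interpolates the spectrum, but the inverse transform returns exactly the padded zeros, with no extrapolation of the record:

        import numpy as np

        N, K = 32, 128
        m = np.arange(N)
        x = np.cos(2 * np.pi * 0.11 * m)

        X = np.fft.fft(x, n=K)                  # zero-padded length-K DFT
        x_back = np.fft.ifft(X)                 # inverse transform of length K
        assert np.allclose(x_back[:N].real, x)  # known readings come back
        assert np.allclose(x_back[N:], 0.0)     # padded region stays zero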