
    Regularized adaptive long autoregressive spectral analysis

    This paper is devoted to adaptive long autoregressive spectral analysis when (i) very few data are available and (ii) prior information exists concerning the spectral smoothness and time continuity of the analyzed signals. The contribution builds on two papers by Kitagawa and Gersch: the first deals with spectral smoothness in the regularization framework, while the second is devoted to time continuity in the Kalman formalism. The present paper proposes an original synthesis of the two contributions: a new regularized criterion is introduced that takes both forms of prior information into account. The criterion is efficiently optimized by a Kalman smoother. One of the major features of the method is that it is entirely unsupervised: the problem of automatically adjusting the hyperparameters that balance data-based versus prior-based information is solved by maximum likelihood. The improvement is quantified in the field of meteorological radar.
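    As a concrete illustration of the spectral-smoothness half of this construction, the sketch below fits an AR model with a Kitagawa-Gersch-style penalty on coefficient differences, in a simple batch setting. The function names (regularized_ar, ar_spectrum), the penalty order, and the closed-form ridge solution are illustrative assumptions; the paper's actual method additionally enforces time continuity via a Kalman smoother and tunes the hyperparameters by maximum likelihood, neither of which is shown here.

    import numpy as np

    def regularized_ar(y, order, lam, diff_order=2):
        # Least-squares AR fit with a smoothness penalty on the coefficients:
        # min_a ||t - X a||^2 + lam * ||D a||^2, where D is a finite-difference
        # operator over the coefficient index (Kitagawa-Gersch-style prior).
        X = np.column_stack([y[order - k - 1 : len(y) - k - 1]
                             for k in range(order)])
        t = y[order:]
        D = np.diff(np.eye(order), n=diff_order, axis=0)
        return np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ t)

    def ar_spectrum(a, sigma2, freqs):
        # AR power spectrum sigma2 / |1 - sum_k a_k e^{-2i*pi*f*k}|^2,
        # evaluated at normalized frequencies `freqs`.
        k = np.arange(1, len(a) + 1)
        denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
        return sigma2 / denom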

    A compressed sensing approach to block-iterative equalization: connections and applications to radar imaging reconstruction

    The proliferation of underdetermined systems has brought forth a variety of new algorithmic solutions, which capitalize on the Compressed Sensing (CS) of sparse data. While well-known greedy or iterative-thresholding CS recursions take the form of an adaptive filter followed by a proximal operator, this is no different in spirit from the role of block-iterative decision-feedback equalizers (BI-DFE), where structure is roughly exploited by the signal constellation slicer. By taking advantage of the intrinsic sparsity of signal modulations in a communications scenario, the concept of interblock interference (IBI) can be approached more effectively in light of CS concepts, whereby the optimal feedback of detected symbols is devised adaptively. The new DFE takes the form of a more efficient re-estimation scheme, proposed under recursive-least-squares (RLS) adaptations. Whenever suitable, these recursions are derived under a reduced-complexity, widely-linear formulation, which further reduces the minimum mean square error (MMSE) in comparison with traditional strictly-linear approaches. Besides maximizing system throughput, the new algorithms exhibit significantly higher performance when compared to existing methods. Our reasoning will also show that a properly formulated BI-DFE turns out to be a powerful CS algorithm itself. A new algorithm, referred to as CS-Block DFE (CS-BDFE), exhibits improved convergence and detection when compared to first-order methods, thus outperforming the state-of-the-art Complex Approximate Message Passing (CAMP) recursions. The merits of the new recursions are illustrated under a novel 3D MIMO Radar formulation, where the CAMP algorithm is shown to fail with respect to important performance measures.
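    The "adaptive filter followed by a proximal operator" structure that the abstract alludes to can be illustrated with plain ISTA (iterative soft thresholding). This generic sketch is a first-order baseline of the kind the paper improves upon, not the RLS-based, widely-linear CS-BDFE itself; all names and parameters here are illustrative.

    import numpy as np

    def soft_threshold(x, tau):
        # Proximal operator of tau * ||.||_1; the phase factor keeps it
        # valid for complex-valued (e.g., modulated) signals.
        return np.maximum(np.abs(x) - tau, 0.0) * np.exp(1j * np.angle(x))

    def ista(A, y, lam, n_iter=200):
        # Iterative soft thresholding: a gradient (adaptive-filter-like)
        # update followed by the l1 proximal operator.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            x = soft_threshold(x + step * A.conj().T @ (y - A @ x), step * lam)
        return x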

    Shack-Hartmann spot dislocation map determination using an optical flow method

    We present a robust, dense, and accurate Shack-Hartmann spot dislocation map determination method based on a regularized optical flow algorithm that does not require obtaining the spot centroids. The method is capable of measuring in the presence of strong noise, background illumination, and spot-modulating signals, which are typical limiting factors of traditional centroid detection algorithms. Moreover, the proposed approach can handle cases where some of the reference beam spots have no counterpart in the distorted Hartmann diagram, and it can expand the dynamic range of the Shack-Hartmann sensor by unwrapping the obtained dense dislocation maps. We have tested the algorithm with both simulations and experimental data, obtaining satisfactory results. A complete MATLAB package that can reproduce all the results can be downloaded from [http://goo.gl/XbZVOr].
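    The regularized-optical-flow idea can be sketched with the classic Horn-Schunck scheme, which couples a brightness-constancy data term with a quadratic smoothness penalty on the flow. This generic stand-in is an assumption on our part, not the paper's algorithm (the authors' own MATLAB package is at the link above).

    import numpy as np
    from scipy.ndimage import convolve

    # Neighborhood-averaging kernel used in the Jacobi-style update.
    AVG = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6 ],
                    [1/12, 1/6, 1/12]])

    def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
        # Dense displacement field (u, v) minimizing the brightness-constancy
        # data term (Ix*u + Iy*v + It)^2 plus alpha^2 times a quadratic
        # smoothness penalty on the flow.
        I1, I2 = I1.astype(float), I2.astype(float)
        Im = (I1 + I2) / 2.0
        Ix, Iy = np.gradient(Im, axis=1), np.gradient(Im, axis=0)
        It = I2 - I1
        u, v = np.zeros_like(Im), np.zeros_like(Im)
        for _ in range(n_iter):
            u_bar, v_bar = convolve(u, AVG), convolve(v, AVG)
            common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
            u = u_bar - Ix * common
            v = v_bar - Iy * common
        return u, v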

    Variational Downscaling, Fusion and Assimilation of Hydrometeorological States via Regularized Estimation

    Improved estimation of hydrometeorological states from down-sampled observations and background model forecasts in a noisy environment has been a subject of growing research in recent decades. Here, we introduce a unified framework that ties together the problems of downscaling, data fusion, and data assimilation as ill-posed inverse problems. This framework seeks solutions beyond the classic least squares estimation paradigms by imposing proper regularization: constraints consistent with the degree of smoothness and the probabilistic structure of the underlying state. We review relevant regularization methods in derivative space and extend classic formulations of the aforementioned problems with particular emphasis on hydrologic and atmospheric applications. Informed by the statistical characteristics of the state variable of interest, the central results of the paper suggest that proper regularization can lead to a more accurate and stable recovery of the true state and hence more skillful forecasts. In particular, using Tikhonov and Huber regularization in the derivative space, the promise of the proposed framework is demonstrated in static downscaling and fusion of synthetic multi-sensor precipitation data, while a data assimilation numerical experiment is presented using the heat equation in a variational setting.
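    A minimal sketch of the Tikhonov-in-derivative-space idea for the downscaling problem, assuming a toy block-averaging observation operator; the names (tikhonov_downscale, H) and sizes are hypothetical, and the Huber variant, which requires an iterative solver such as IRLS, is omitted.

    import numpy as np

    def tikhonov_downscale(y, H, lam, diff_order=1):
        # Solve min_x ||H x - y||^2 + lam * ||D x||^2 in closed form,
        # with D a finite-difference operator acting in derivative space.
        n = H.shape[1]
        D = np.diff(np.eye(n), n=diff_order, axis=0)
        return np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)

    # Toy observation operator: each coarse cell averages 4 fine cells.
    n_fine, block = 64, 4
    H = np.kron(np.eye(n_fine // block), np.full((1, block), 1.0 / block))

    rng = np.random.default_rng(0)
    x_true = np.cumsum(rng.standard_normal(n_fine)) * 0.1   # smooth state
    y = H @ x_true + 0.05 * rng.standard_normal(H.shape[0])
    x_hat = tikhonov_downscale(y, H, lam=1.0)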

    Bayesian Approach for Identification of Multiple Events in an Early Warning System

    The 2011 Tohoku earthquake (M_w 9.0) was followed by a large number of aftershocks that resulted in 70 early warning messages in the first month after the mainshock. Of these warnings, a non-negligible fraction (63%) were false warnings in which the largest expected seismic intensities were overestimated by two or more intensity levels. These errors can be largely attributed to multiple concurrent aftershocks from distant origins that occur within a short period of time. Based on a Bayesian formulation that considers the possibility of having more than one event present at any given time, we propose a novel likelihood function, based on amplitude information, suitable for classifying multiple concurrent earthquakes. We use a sequential Monte Carlo heuristic whose complexity grows linearly with the number of events. We further provide a particle filter implementation and empirically verify its performance with the aftershock records after the Tohoku earthquake. The initial case studies suggest promising performance of this method in classifying multiple seismic events that occur closely in time.
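    A minimal bootstrap particle filter skeleton of the sequential Monte Carlo idea, with the transition and likelihood models left as problem-specific placeholders (hypothetical callables on our part; the paper's actual likelihood classifies concurrent events from amplitude data).

    import numpy as np

    def particle_filter(observations, init_particles, transition, likelihood,
                        rng=None):
        # Bootstrap particle filter: propagate, weight, resample.
        # `transition(particles, rng)` moves each hypothesis (e.g., a set of
        # concurrent event parameters) one step ahead; `likelihood(z, particles)`
        # scores every hypothesis against the new observation z.
        rng = rng or np.random.default_rng()
        particles = np.asarray(init_particles).copy()
        n = len(particles)
        for z in observations:
            particles = transition(particles, rng)       # propagate hypotheses
            w = likelihood(z, particles)                 # weight by the data
            w = w / w.sum()                              # normalize weights
            particles = particles[rng.choice(n, size=n, p=w)]   # resample
        return particles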

    An Efficient Algorithm for Video Super-Resolution Based on a Sequential Model

    In this work, we propose a novel procedure for video super-resolution, that is, the recovery of a sequence of high-resolution images from its low-resolution counterpart. Our approach is based on a "sequential" model (i.e., each high-resolution frame is supposed to be a displaced version of the preceding one) and considers the use of sparsity-enforcing priors. Both the recovery of the high-resolution images and the estimation of the motion fields relating them are tackled. This leads to a large-dimensional, non-convex and non-smooth problem. We propose an algorithmic framework to address the latter. Our approach relies on fast gradient evaluation methods and modern optimization techniques for non-differentiable/non-convex problems. Unlike some previous works, we show that there exists a provably convergent method with a complexity linear in the problem dimensions. We assess the proposed optimization method on several video benchmarks and emphasize its good performance with respect to the state of the art.
    Comment: 37 pages, SIAM Journal on Imaging Sciences, 201
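    To make the interplay of sequential model and sparsity prior concrete, the sketch below performs one proximal-gradient update on a single high-resolution frame under a known warp of its predecessor. Everything here is an illustrative assumption (names, the sparse-innovation prior, the downsample/upsample callables, which are taken to be adjoints); the paper's full method also estimates the motion fields and handles the resulting non-convexity, which this fragment does not.

    import numpy as np

    def soft(x, tau):
        # Soft thresholding, the proximal operator of tau * ||.||_1.
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def sr_frame_step(x, x_prev_warped, y, downsample, upsample, lam, step):
        # One proximal-gradient iteration for one high-resolution frame x.
        # Data term: ||downsample(x) - y||^2, with y the observed low-res frame.
        # Prior: lam * ||x - x_prev_warped||_1, a sparse-innovation form of the
        # sequential model (each frame ~ a displaced copy of the previous one).
        grad = upsample(downsample(x) - y)   # data-term gradient (constant in step)
        z = x - step * grad                  # forward (gradient) step
        # Prox of the l1 prior centered at the warped previous frame.
        return x_prev_warped + soft(z - x_prev_warped, step * lam)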