
    Real-time reconstruction of 2D signals with missing observations

    In this paper, we propose a new method for real-time reconstruction of two-dimensional, uniformly sampled signals with missing observations. A two-dimensional autoregressive (2D-AR) model is adopted, and two causality supports are tested: the Quarter Plane (QP) and the Non-Symmetric Half Plane (NSHP). The model parameters are estimated by minimizing a criterion that is quadratic in the estimation error and defined only at the instants where samples are available. Because of the missing observations, the gradient of the criterion becomes non-linear in the parameters. The optimum is reached by means of LMS-like algorithms adapted to 2D non-uniformly sampled signals. Two approximations of the criterion are proposed, leading to two algorithms whose formal descriptions and comparative performances are provided. The results show the reconstruction performance on two-dimensional (stationary and non-stationary) signals as a function of the ratio and the distribution law of the missing samples.
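    The core idea above — a causal 2D AR predictor whose LMS update is applied only where samples were observed, with missing samples filled by the prediction — can be sketched as follows. This is a minimal illustration, not the paper's algorithms: it assumes a quarter-plane support with just three causal neighbors (W, N, NW) and a single LMS step size.

    ```python
    import numpy as np

    def lms_reconstruct_2d(x, mask, mu=0.01):
        """Reconstruct missing samples of a 2D signal with a quarter-plane
        AR model adapted by LMS. `mask` is True where a sample was observed.
        Hypothetical minimal sketch: three causal neighbors (W, N, NW)."""
        y = x.astype(float).copy()
        w = np.zeros(3)                        # AR coefficients for [W, N, NW]
        rows, cols = y.shape
        for i in range(1, rows):
            for j in range(1, cols):
                ctx = np.array([y[i, j-1], y[i-1, j], y[i-1, j-1]])
                pred = w @ ctx                 # causal prediction of y[i, j]
                if mask[i, j]:
                    e = x[i, j] - pred         # error only where data exist
                    w += mu * e * ctx          # LMS update on observed samples
                else:
                    y[i, j] = pred             # fill missing sample with prediction
        return y
    ```

    Skipping the update at missing positions is what the paper's non-linear gradient forces: the quadratic criterion is only defined at arrival instants, so the adaptation must pause elsewhere while the reconstruction still advances causally.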

    A comparison of adaptive predictors in sub-band coding

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1987. By Paul Ning. Bibliography: leaves 82-85.

    Design of joint source/channel coders

    The need to transmit large amounts of data over a band-limited channel has led to the development of various data compression schemes. Many of these schemes function by removing redundancy from the data stream. An unwanted side effect of this approach is that it makes the information transfer more vulnerable to channel noise: protecting against errors then requires reinserting redundancy, which increases the bandwidth requirements. The papers presented in this document address these competing demands from a number of different approaches.

    Lossless compression of hyperspectral images

    Band ordering and the prediction scheme are the two major aspects of hyperspectral image compression studied here to improve the performance of the compression system. In the prediction module, we propose spatio-spectral prediction methods. Two non-linear spectral prediction methods are proposed in this thesis. NPHI (Non-linear Prediction for Hyperspectral Images) is based on a band look-ahead technique in which a reference band is included in the prediction of pixels in the current band; the technique estimates the variation between the contexts of the two bands to modify the weights computed in the reference band when predicting the pixels in the current band. EPHI (Edge-based Prediction for Hyperspectral Images) is a modified NPHI in which an edge-based analysis classifies the pixels into edges and non-edges before performing the prediction in the current band. Three ordering methods are also proposed. The first computes local and global features in each band to group the bands; the bands in each group are then ordered by estimating the compression ratios achieved between the bands in the group and applying Kruskal's algorithm. The other two methods compute the compression ratios between b-neighbors to perform the band ordering.
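    The Kruskal-based ordering step can be illustrated schematically: build a complete graph over bands, weight each edge by some measure of how well one band predicts another, take the minimum spanning tree with Kruskal's algorithm, and read an ordering off a traversal of the tree. The sketch below is an assumption-laden stand-in for the thesis's method — it uses mean squared inter-band difference as the edge weight in place of the estimated compression ratio.

    ```python
    import numpy as np

    def band_order_mst(cube):
        """Order hyperspectral bands via Kruskal's MST. `cube` has shape
        (bands, rows, cols). Edge weight = mean squared band difference,
        a stand-in (assumption) for the estimated compression ratio."""
        n = cube.shape[0]
        edges = sorted(
            (np.mean((cube[a] - cube[b]) ** 2), a, b)
            for a in range(n) for b in range(a + 1, n)
        )
        parent = list(range(n))                # union-find for cycle detection
        def find(u):
            while parent[u] != u:
                parent[u] = parent[parent[u]]  # path compression
                u = parent[u]
            return u
        adj = {b: [] for b in range(n)}
        for _, a, b in edges:                  # Kruskal: cheapest non-cycle edges
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                adj[a].append(b)
                adj[b].append(a)
        order, stack, seen = [], [0], set()    # DFS over the MST gives an order
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            order.append(u)
            stack.extend(adj[u])
        return order
    ```

    With smoothly varying spectra, nearby bands get the cheapest edges, so the MST traversal tends to place spectrally similar bands next to each other, which is exactly what a spectral predictor wants.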

    Adaptive edge-based prediction for lossless image compression

    Many lossless image compression methods have been suggested, with established results that are hard to surpass. However, some aspects can still be considered to improve performance further. This research focuses on the two-phase prediction-encoding method, studying each phase separately and suggesting new techniques.

    In the prediction module, the proposed Edge-Based Predictor (EBP) and Least-Squares Edge-Based Predictor (LS-EBP) emphasize image edges and make predictions accordingly. EBP is a gradient-based non-linear adaptive predictor that switches between prediction rules based on a few threshold parameters, determined automatically by a pre-analysis procedure that makes a first pass over the image. LS-EBP uses the same parameters but optimizes the prediction at each edge location assigned by the pre-analysis, applying the least-squares approach only at the edge points.

    For the encoding module, a novel method inspired by the Burrows-Wheeler Transform (BWT) is suggested, which performs better than applying the BWT directly to the images. We also present a context-based adaptive error modeling and encoding scheme. When coupled with the above prediction schemes, the result is the best-known compression performance among compression schemes of the same time and space complexity.
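    The idea of switching prediction rules at edges can be shown with a well-known stand-in: the Median Edge Detector (MED) predictor from LOCO-I/JPEG-LS, which is not the thesis's EBP but illustrates the same gradient-switched structure — detect a horizontal or vertical edge from the causal neighbors and pick the neighbor across the edge, otherwise use a planar estimate.

    ```python
    import numpy as np

    def edge_predict(img):
        """Gradient-switched causal prediction, in the spirit of EBP but
        using the MED rule from LOCO-I (an illustrative substitute, not
        the thesis's predictor). Border row/column is left unpredicted."""
        h, w = img.shape
        pred = np.zeros_like(img, dtype=float)
        for i in range(1, h):
            for j in range(1, w):
                W, N, NW = img[i, j-1], img[i-1, j], img[i-1, j-1]
                if NW >= max(W, N):            # edge: take the smaller neighbor
                    pred[i, j] = min(W, N)
                elif NW <= min(W, N):          # edge in the other direction
                    pred[i, j] = max(W, N)
                else:                          # smooth region: planar prediction
                    pred[i, j] = W + N - NW
        return pred
    ```

    On a sharp vertical step the rule copies the neighbor on the correct side of the edge, so the prediction residual collapses to zero instead of smearing across the discontinuity — the property the edge-based predictors above exploit.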

    Métodos Bootstrap: Principios, Teoría y su aplicación al Procesamiento Digital de Señales

    The purpose of this course is to introduce some resampling techniques that have recently been exploited to carry out the statistical characterization of physical systems (computation of confidence intervals, of the first statistical moments, of the complete probability density, or of the complete densities interacting in a system under study, etc.). This is done in order to classify estimators adapted to a particular modeling objective, or to select among competing models proposed to represent a given physical system. In our case, we are interested in systems related to Digital Signal Processing (DSP). The statistical characterization is carried out assuming that the probability density or densities interacting in the system under study are unknown; for example, the density of the observation or acquisition errors p(e) is assumed unknown. A general overview of existing methods and new trends is presented, together with the conditions under which they can be used. Several application examples are given, both in instrumentation and in DSP in general.
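    The basic bootstrap computation the course describes — approximating the sampling distribution of a statistic by resampling the observed data with replacement, without assuming the error density p(e) — can be sketched in a few lines. The percentile method shown here is the simplest variant; the parameter names are illustrative.

    ```python
    import numpy as np

    def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for a statistic.
        No parametric assumption on the noise density: the empirical
        distribution of the data stands in for the unknown p(e)."""
        rng = np.random.default_rng(seed)
        n = len(data)
        reps = np.array([stat(rng.choice(data, size=n, replace=True))
                         for _ in range(n_boot)])    # bootstrap replicates
        lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
        return lo, hi
    ```

    Replacing `stat` with any estimator (a variance, a model-fit score) turns the same loop into the model-comparison tool the course mentions: whichever candidate's bootstrap distribution is tighter or better centered is preferred.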