27 research outputs found

    A new multistage lattice vector quantization with adaptive subband thresholding for image compression

    Lattice vector quantization (LVQ) reduces coding complexity and computation thanks to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable-length coding procedure using Golomb codes compresses the codebook indices, yielding a very efficient and fast entropy coding technique. Experimental results show that MLVQ performs significantly better than JPEG 2000 and recent VQ techniques on various test images.
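    As a rough illustration of the multistage idea, here is a minimal Python sketch (an assumption-laden toy, not the authors' implementation): it quantizes to the D4 lattice and re-quantizes the scaled ("blown out") residual at each stage. The adaptive subband thresholding and Golomb index coding are omitted, and the stage count and scale factor are arbitrary choices.

    ```python
    import numpy as np

    def nearest_d4(x):
        """Nearest point of the D4 lattice (integer vectors with even
        coordinate sum), via the Conway-Sloane rounding algorithm."""
        f = np.rint(x)
        if int(f.sum()) % 2 == 0:
            return f
        # Flip the rounding of the worst-rounded coordinate to fix parity.
        k = np.argmax(np.abs(x - f))
        g = f.copy()
        g[k] += 1.0 if x[k] > f[k] else -1.0
        return g

    def mlvq_encode(x, stages=3, scale=4.0):
        """Multistage LVQ: quantize, then expand the residual so the
        next stage resolves it at finer effective resolution."""
        points, r = [], x.astype(float)
        for _ in range(stages):
            y = nearest_d4(r)
            points.append(y)
            r = (r - y) * scale   # "blow out" the residual error
        return points

    def mlvq_decode(points, scale=4.0):
        xhat = np.zeros_like(points[0])
        for i, y in enumerate(points):
            xhat += y / scale ** i   # undo the per-stage expansion
        return xhat

    x = np.array([0.37, -1.62, 2.05, 0.88])
    print(mlvq_decode(mlvq_encode(x)), "vs", x)
    ```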

    Source-channel coding for robust image transmission and for dirty-paper coding

    In this dissertation, we studied two seemingly unrelated but conceptually connected problems in terms of source-channel coding: 1) wireless image transmission and 2) Costa ("dirty-paper") code design. In the first part of the dissertation, we consider progressive image transmission over a wireless system employing space-time coded OFDM. The space-time coded OFDM system, based on a newly built broadband MIMO fading model, is theoretically evaluated by assuming perfect channel state information (CSI) at the receiver for coherent detection. Then an adaptive modulation scheme is proposed to pick the constellation size that offers the best reconstructed image quality for each average signal-to-noise ratio (SNR). A more practical scenario is also considered without the assumption of perfect CSI. We employ low-complexity decision-feedback decoding for differentially space-time coded OFDM systems to exploit transmitter diversity. For joint source-channel coding (JSCC), we adopt a product channel code structure that is proven to provide powerful error protection and burst error correction. To further improve the system performance, we also apply powerful iterative (turbo) coding techniques and propose iterative decoding of differentially space-time coded multiple descriptions of images. The second part of the dissertation deals with practical dirty-paper code designs. We first invoke an information-theoretic interpretation of algebraic binning and motivate the code design guidelines in terms of source-channel coding. Then two dirty-paper code designs are proposed. The first is a nested turbo construction based on soft-output trellis-coded quantization (SOTCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. A novel procedure is devised to balance the dimensionalities of the equivalent lattice codes corresponding to SOTCQ and TTCM. The second dirty-paper code design employs TCQ and IRA codes for near-capacity performance, synergistically combining TCQ with IRA codes so that they work together as well as they do individually. Our TCQ/IRA design approaches the dirty-paper capacity limit in the low-rate regime (e.g., < 1.0 bit/sample), while our nested SOTCQ/TTCM scheme provides the best performance so far at medium-to-high rates (e.g., >= 1.0 bit/sample). Thus the two proposed practical code designs are complementary to each other.
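    The binning intuition behind dirty-paper coding admits a toy scalar illustration. The sketch below uses modulo (Tomlinson-Harashima-style) precoding with Costa's inflation factor; it is far simpler than the dissertation's TCQ/TTCM and TCQ/IRA constructions, and all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cmod(t, delta):
        """Centered modulo: fold t into [-delta/2, delta/2)."""
        return t - delta * np.round(t / delta)

    # Toy scalar dirty-paper channel: y = x + s + n, with the
    # interference s known at the encoder but not at the decoder.
    delta, N = 4.0, 0.1
    Px = delta ** 2 / 12.0        # power of the folded transmit signal
    alpha = Px / (Px + N)         # Costa's inflation factor
    s = rng.normal(0.0, 5.0, 10000)                 # strong interference
    v = rng.uniform(-delta / 2, delta / 2, 10000)   # message points
    x = cmod(v - alpha * s, delta)                  # precoded signal
    y = x + s + rng.normal(0.0, np.sqrt(N), 10000)
    vhat = cmod(alpha * y, delta)  # interference cancelled up to modulo noise
    print("MSE:", np.mean((vhat - v) ** 2), "despite interference var 25")
    ```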

    Multi-image classification and compression using vector quantization

    Vector Quantization (VQ) is an image processing technique based on statistical clustering, designed originally for image compression. In this dissertation, several methods for multi-image classification and compression based on a VQ design are presented. It is demonstrated that VQ can perform joint multi-image classification and compression by associating a class identifier with each multi-spectral signature codevector. We extend the Weighted Bayes Risk VQ (WBRVQ) method, previously used for single-component images, which explicitly incorporates a Bayes risk component into the distortion measure used in the quantizer design and thereby permits a flexible trade-off between classification and compression priorities. In the specific case of multi-spectral images, we investigate the application of the Multi-scale Retinex algorithm as a preprocessing stage, before classification and compression, that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The goals of this research are four-fold: (1) to study the interrelationship between statistical clustering, classification and compression in a multi-image VQ context; (2) to study mixed-pixel classification and combined classification and compression for simulated and actual multispectral and hyperspectral multi-images; (3) to study the effects of multi-image enhancement on class spectral signatures; and (4) to study the preservation of scientific data integrity as a function of compression. In this research, a key issue is not just the subjective quality of the resulting images after classification and compression but also the effect of multi-image dimensionality on the complexity of the optimal coder design.
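    A minimal sketch of the joint classification-and-compression idea (plain generalized-Lloyd VQ with majority-class tags, not the WBRVQ's Bayes-risk-weighted distortion) might look like the following; the synthetic two-class "spectral" data and all parameters are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def train_labeled_vq(X, labels, K=8, iters=20):
        """Generalized-Lloyd VQ; tag each codevector with the majority
        class of the training vectors assigned to it."""
        C = X[rng.choice(len(X), K, replace=False)].copy()
        for _ in range(iters):
            a = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(K):
                if np.any(a == k):
                    C[k] = X[a == k].mean(axis=0)
        tags = np.array([np.bincount(labels[a == k]).argmax()
                         if np.any(a == k) else 0 for k in range(K)])
        return C, tags

    # Two synthetic "spectral" classes in 4 bands.
    X = np.vstack([rng.normal(0.0, 1.0, (500, 4)),
                   rng.normal(3.0, 1.0, (500, 4))])
    y = np.repeat([0, 1], 500)
    C, tags = train_labeled_vq(X, y)
    idx = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    print("accuracy:", np.mean(tags[idx] == y))  # idx compresses, tags classify
    ```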

    Combined Wavelet-neural Clasifier For Power Distribution Systems

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2002. In this study, a fault classifier for a distribution system is implemented using a hybrid wavelet / artificial neural network (ANN) approach. Data for the fault classifier were produced with the PSCAD/EMTDC simulation program on the 34.5 kV Sağmalcılar-Maltepe distribution system in Istanbul. The objective is to design a classifier capable of recognizing ten classes of three-phase system faults. The signals are generated at an equivalent sampling rate of 5 kHz per channel. A database of line currents and line-to-ground voltages is built up, including system faults at different fault inception angles and fault locations. Characteristic information in the six channels of current and voltage samples is extracted by wavelet multi-resolution analysis, a preprocessing step that obtains a small set of interpretable features from the raw data. After preprocessing, an ANN-based tool is employed for the classification task, the goal being to solve the complex fault classification problem under various system and fault conditions. In this work, a self-organizing map with Kohonen's learning algorithm is combined with the type-one learning vector quantization (LVQ1) technique. The wavelet-neural fault classification scheme achieves around 99-100% accuracy on the training data and around 85-92% on test data the classifier has not been trained on, which is comparable to fault classifiers reported in the literature. The combined wavelet-neural classifier thus shows promise for identifying faults in electric distribution systems.
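    To make the two-stage pipeline concrete, here is a hedged Python sketch: a hand-rolled Haar multi-resolution decomposition stands in for the wavelet feature extractor, and an LVQ1 update rule for the classifier. The synthetic waveforms and all parameters are illustrative assumptions; the thesis used PSCAD/EMTDC fault records and a Kohonen self-organizing map stage as well.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def haar_mra_energies(sig, levels=3):
        """Haar multi-resolution decomposition; return the log-energy
        of each detail band as a compact feature vector."""
        feats, a = [], np.asarray(sig, float)
        for _ in range(levels):
            pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
            d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # details
            a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # approximation
            feats.append(np.log1p((d ** 2).sum()))
        return np.array(feats)

    def lvq1_train(X, y, protos, labels, epochs=30, lr=0.05):
        """LVQ1: pull the winning prototype toward same-class samples,
        push it away from other-class samples."""
        P = protos.copy()
        for _ in range(epochs):
            for x, t in zip(X, y):
                w = np.argmin(((P - x) ** 2).sum(-1))
                P[w] += (lr if labels[w] == t else -lr) * (x - P[w])
        return P

    # Synthetic "normal" (5-cycle) vs "fault" (40-cycle) waveforms.
    make = lambda f: (np.sin(2 * np.pi * f * np.arange(256) / 256.0)
                      + 0.1 * rng.normal(size=256))
    X = np.array([haar_mra_energies(make(f)) for f in [5] * 40 + [40] * 40])
    y = np.repeat([0, 1], 40)
    P = lvq1_train(X, y, X[[0, 40]].copy(), np.array([0, 1]))
    pred = np.argmin(((X[:, None] - P[None]) ** 2).sum(-1), axis=1)
    print("training accuracy:", np.mean(pred == y))
    ```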

    Conjoint probabilistic subband modeling

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997. Includes bibliographical references (leaves 125-133). By Ashok Chhabedia Popat.

    Systematic hybrid analog/digital signal coding

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206). This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel through which the source is sent unprocessed and a secondary rate-constrained digital channel; the source is processed prior to transmission through the digital channel. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which has observations of analog and digital channel outputs. The methods described in this thesis have importance to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme which devotes a fraction of available bandwidth to the analog source and the rest of the bandwidth to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR. A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding. Focusing on the case involving a Gaussian source, Gaussian channel and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has analogous components in conventional source coding. The space rotation can take many forms such as linear block transforms, lapped transforms, or subband decomposition, all for which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, that has performance very near that of the optimal systematic scalar quantizer. Analogous to entropy coding for conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding. By Richard J. Barron.
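    The nested lattice scalar quantizer lends itself to a one-dimensional illustration: the encoder transmits only the fine quantization cell modulo M (a coset index), and the decoder resolves the remaining ambiguity with its analog side observation. The sketch below is a minimal assumed version, with arbitrary step size, coset count and noise level.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def nsq_encode(x, q, M):
        """Send only the fine-cell index modulo M (the coset),
        not the absolute cell: log2(M) bits per sample."""
        return int(np.round(x / q)) % M

    def nsq_decode(j, y, q, M):
        """Pick the member of coset j closest to the analog
        side observation y."""
        m = np.round((y - j * q) / (M * q))
        return (m * M + j) * q

    x = rng.normal(0.0, 1.0, 2000)
    y = x + rng.normal(0.0, 0.05, 2000)   # noisy analog channel output
    q, M = 0.02, 16                       # fine step, coset count (4 bits)
    xhat = np.array([nsq_decode(nsq_encode(xi, q, M), yi, q, M)
                     for xi, yi in zip(x, y)])
    print("digital-aided MSE:", np.mean((xhat - x) ** 2))
    print("analog-only MSE:  ", np.mean((y - x) ** 2))
    ```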

    Courbure discrète : théorie et applications (Discrete curvature: theory and applications)

    The present volume contains the proceedings of the 2013 Meeting on discrete curvature, held at CIRM, Luminy, France. The aim of this meeting was to bring together researchers from various backgrounds, ranging from mathematics to computer science, with a focus on both theory and applications. With 27 invited talks and 8 posters, the conference attracted 70 researchers from all over the world. The challenge of finding common ground on the topic of discrete curvature was met with success, and these proceedings are a testimony of this work.

    Contributions to unsupervised and supervised learning with applications in digital image processing

    311 p. : il. This Thesis covers a broad period of research activities with a common thread: learning processes and their application to image processing. The two main categories of learning algorithms, supervised and unsupervised, have been touched on across these years. The main body of initial works was devoted to unsupervised learning neural architectures, especially the Self Organizing Map. Our aim was to study its convergence properties from empirical and analytical viewpoints. From the digital image processing point of view, we have focused on two basic problems: Color Quantization and filter design. Both problems have been addressed in the context of Vector Quantization performed by Competitive Neural Networks. Processing of non-stationary data is an interesting paradigm that has not been explored with Competitive Neural Networks. We have stated the problem of Non-stationary Clustering and the related Adaptive Vector Quantization in the context of image sequence processing, where we naturally have a Frame Based Adaptive Vector Quantization. This approach deals with the problem as a sequence of stationary, almost-independent Clustering problems. We have also developed some new computational algorithms for Vector Quantization design. The works on supervised learning have been sparsely distributed in time and direction. First, we worked on the use of the Self Organizing Map for the independent modeling of skin and non-skin color distributions for color-based face localization. Second, we collaborated in the realization of a supervised learning system for tissue segmentation in Magnetic Resonance Imaging data. Third, we worked on the development, implementation and experimentation with High Order Boltzmann Machines, which are a very different learning architecture. Finally, we have been working on the application of Sparse Bayesian Learning to a new kind of classification systems based on Dendritic Computing. This last research line is an open research track at the time of writing this Thesis.
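    As a toy instance of Color Quantization by a Competitive Neural Network, the sketch below trains a winner-take-all codebook on synthetic RGB pixels. It is a stationary toy case, not the thesis's frame-based adaptive VQ for image sequences, and the cluster centers, palette size and learning schedule are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def competitive_cq(pixels, K=8, epochs=5, lr0=0.1):
        """Online competitive learning: the winning codevector moves
        toward each presented pixel; learning rate decays per epoch."""
        W = pixels[rng.choice(len(pixels), K, replace=False)].astype(float)
        for e in range(epochs):
            lr = lr0 * (1.0 - e / epochs)
            for x in pixels[rng.permutation(len(pixels))]:
                w = np.argmin(((W - x) ** 2).sum(-1))
                W[w] += lr * (x - W[w])
        return W

    # Synthetic RGB image with three dominant colors.
    img = np.vstack([rng.normal(c, 10.0, (400, 3))
                     for c in ([200, 30, 30], [30, 200, 30], [30, 30, 200])])
    palette = competitive_cq(np.clip(img, 0, 255))
    idx = np.argmin(((img[:, None] - palette[None]) ** 2).sum(-1), axis=1)
    print("palette:\n", np.round(palette))   # idx maps each pixel to its entry
    ```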

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.