
    Comparison of Sparse Coding and JPEG Coding Schemes for Blurred Retinal Images

    Overcomplete representations are currently one of the most actively researched topics in signal processing because of their strong potential to generate sparse representations of signals. A sparse representation means that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse, overcomplete representations: the primary visual cortex represents an input signal with overcomplete responses, which leads to sparse neuronal activity for further processing. This work investigates sparse coding with an overcomplete basis set, believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. The first part analyzes the Sparse Code Learning algorithm, in which a given image is represented as a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions; the algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image. An iterative inhibition algorithm, based on competition between neighboring transform coefficients, then selects a subset of Gabor functions so as to represent the given image with a sparse set of coefficients. The developed models are applied to image compression and the achievable compression levels are tested. Research in this area so far indicates that sparse coding algorithms are inefficient at representing sharp, high-frequency image features, so this work analyzes their performance only on natural images without sharp features and compares the compression results with current industry-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
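    To make the first model concrete, the sketch below solves the sparse coding inference step for a fixed overcomplete dictionary by iterative shrinkage-thresholding (ISTA). It is a minimal illustration, not the thesis's implementation: the actual work also adapts the basis functions during learning, and the dictionary, penalty weight, and sizes here are arbitrary choices.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Element-wise soft-thresholding, the proximal operator of the L1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sparse_code_ista(x, D, lam=0.1, n_iter=200):
        """Find sparse coefficients a such that x ~ D @ a with few active atoms.

        Minimizes 0.5*||x - D a||^2 + lam*||a||_1 by iterative
        shrinkage-thresholding (ISTA). D is an overcomplete dictionary
        (n_pixels x n_atoms with n_atoms > n_pixels).
        """
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)           # gradient of the quadratic term
            a = soft_threshold(a - grad / L, lam / L)
        return a

    # Toy usage: a random overcomplete dictionary for 64-pixel patches.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    x = D[:, rng.choice(256, 5)] @ rng.standard_normal(5)  # 5 truly active atoms
    a = sparse_code_ista(x, D)
    print(f"{np.count_nonzero(np.abs(a) > 1e-6)} active coefficients")
    ```

    Sweeping `lam` trades reconstruction error against the number of active coefficients, which is the knob a compression application would tune.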

    Project SEMACODE : a scale-invariant object recognition system for content-based queries in image databases

    For the efficient management of large image databases, the automated characterization of images and the use of that characterization for search and ordering tasks are highly desirable. The purpose of the SEMACODE project is to combine the still unsolved problem of content-oriented characterization of images with scale-invariant object recognition and model-based compression methods. To achieve this goal, existing techniques as well as new concepts related to pattern matching, image encoding, and image compression are examined. The resulting methods are integrated into a common framework with the aid of a content-oriented conception. For the application, an image database at the library of the University of Frankfurt/Main (StUB; about 60,000 images), the required operations are developed. The search and query interfaces are defined in close cooperation with the StUB project “Digitized Colonial Picture Library”. This report describes the fundamentals and first results of the image encoding and object recognition algorithms developed within the scope of the project.

    An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]

    This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. Our intent in this article is to give an overview of the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can, perhaps surprisingly, lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications.
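    As a toy illustration of the two principles, the following sketch recovers a sparse signal from far fewer random Gaussian measurements than its length via basis pursuit, the L1-minimization program at the heart of CS. The use of the generic convex solver cvxpy and all sizes are our assumptions, not something the article prescribes.

    ```python
    import numpy as np
    import cvxpy as cp   # generic convex solver; assumed available

    rng = np.random.default_rng(1)
    n, m, k = 256, 80, 8                 # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
    y = A @ x_true                                  # m << n noiseless measurements

    # Basis pursuit: among all signals consistent with the measurements,
    # pick the one with minimum L1 norm.
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))
    ```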

    Compressed Sensing with Coherent and Redundant Dictionaries

    This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an L1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of L1-analysis for such problems.
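    In the paper's notation, with A the measurement matrix, D the (possibly highly coherent) dictionary, D* its adjoint, and ε a bound on the measurement noise, the L1-analysis problem takes the form:

    ```latex
    \hat{x} \;=\; \arg\min_{\tilde{x}} \;\| D^{*}\tilde{x} \|_{1}
    \quad \text{subject to} \quad \| A\tilde{x} - y \|_{2} \le \varepsilon
    ```

    Note that the optimization is over the signal itself (the analysis formulation) rather than over synthesis coefficients α with x = Dα; the distinction matters precisely when D is redundant and coherent.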

    Seismic Data Compression using Wave Atom Transform

    Seismic data compression (SDC) is crucially important in the oil industry, which is confronted with large data volumes and incomplete data measurements. In this research we present a comprehensive method of exploiting wave packets to perform seismic data compression. Wave atoms are a modern addition to the collection of mathematical transforms for computational harmonic analysis. They are a variant of 2D wavelet packets that keep an isotropic aspect ratio. Wave atoms have a sharp frequency localization that cannot be attained using a filter bank based on wavelet packets, and they offer a significantly sparser expansion for oscillatory functions than wavelets, curvelets, and Gabor atoms.
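    There is no widely available Python wave atom implementation, so the sketch below illustrates the generic transform-thresholding compression pipeline with PyWavelets' 2D wavelet transform as a stand-in; swapping in a wave atom transform would change only the analysis/synthesis calls. All parameter values are illustrative.

    ```python
    import numpy as np
    import pywt  # PyWavelets; used here as a stand-in for a wave atom transform

    def compress(section, keep=0.05, wavelet="db4", level=3):
        """Keep only the largest `keep` fraction of transform coefficients.

        The same threshold-and-reconstruct pipeline applies to any sparsifying
        transform; wave atoms would replace wavedec2/waverec2 here.
        """
        coeffs = pywt.wavedec2(section, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)      # flatten for thresholding
        thresh = np.quantile(np.abs(arr), 1.0 - keep)   # magnitude cutoff
        arr[np.abs(arr) < thresh] = 0.0                 # hard thresholding
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)

    # Toy usage on a synthetic oscillatory "seismic" section.
    t = np.linspace(0, 1, 256)
    section = np.sin(40 * np.outer(t, t))
    section += 0.1 * np.random.default_rng(2).standard_normal((256, 256))
    recon = compress(section)
    print("relative error:", np.linalg.norm(recon - section) / np.linalg.norm(section))
    ```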

    Space/time/frequency methods in adaptive radar

    Radar data may be processed with various space, time, and frequency techniques. Advanced radar systems are required to detect targets in the presence of jamming and clutter. This work studies two radar problems. It is well known that targets moving along-track within a synthetic aperture radar (SAR) field of view are imaged as defocused objects. The SAR stripmap mode is tuned to stationary ground targets, and the mismatch between the SAR processing parameters and the target motion parameters causes the energy to spill over to adjacent image pixels, thus hindering target feature extraction and reducing the probability of detection. The problem can be remedied by generating the image using a filter matched to the actual target motion parameters, effectively focusing the SAR image on the target. For a fixed rate of motion, the target velocity can be estimated from the slope of the Doppler frequency characteristic. The problem is similar to the classical problem of estimating the instantaneous frequency of a linear FM signal (chirp). The Wigner-Ville distribution, the Gabor expansion, the short-time Fourier transform, and the continuous wavelet transform are compared with respect to their performance in estimating the instantaneous Doppler frequency of range-compressed SAR data in noise. It is shown that these methods exhibit sharp signal-to-noise threshold effects. The space-time radar problem is well suited to techniques that take advantage of the low-rank property of the space-time covariance matrix. It is shown that reduced-rank methods outperform full-rank space-time adaptive processing when the space-time covariance matrix is estimated from a dataset with limited support. The utility of reduced-rank methods is demonstrated by theoretical analysis, simulations, and analysis of real data. It is shown that reduced-rank processing has two effects on performance: increased statistical stability, which tends to improve performance, and the introduction of a bias, which lowers the signal-to-noise ratio. A method for evaluating the theoretical conditioned SNR for fixed reduced-rank transforms is also presented.
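    As a minimal illustration of the instantaneous-frequency estimation at the heart of the SAR part, the sketch below tracks the spectral ridge of a noisy linear FM signal with the short-time Fourier transform (one of the four methods compared) and fits its slope to estimate the chirp rate. Signal parameters are illustrative, not taken from the work.

    ```python
    import numpy as np
    from scipy.signal import stft

    # Synthetic linear FM (chirp) echo: the Doppler history of a moving target.
    fs = 1000.0                       # sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    f0, rate = 100.0, 200.0           # start frequency and chirp rate (Hz/s)
    x = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t**2))
    x += 0.5 * np.random.default_rng(3).standard_normal(t.size)   # noise

    # Short-time Fourier transform; the ridge (peak per time slice) tracks
    # the instantaneous frequency f(t) = f0 + rate * t.
    f, tau, Z = stft(x, fs=fs, nperseg=128)
    ridge = f[np.argmax(np.abs(Z), axis=0)]

    # The slope of a straight-line fit to the ridge estimates the chirp rate,
    # from which the target motion parameters follow.
    slope = np.polyfit(tau, ridge, 1)[0]
    print(f"estimated chirp rate: {slope:.1f} Hz/s (true: {rate} Hz/s)")
    ```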

    Visualisation of Articular Cartilage Microstructure

    This thesis developed image processing techniques enabling the detection and segregation of three-dimensional biological images into their component features, based on the shape and relative size of the features detected. The work used articular cartilage images and separated fibrous components from the cells and background noise. Measurement of the individual components and their recombination into a composite image are possible. The developed software was used to analyse the development of hyaline cartilage in developing sheep embryos.
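    A hypothetical 2D sketch of the shape-and-size separation idea, using scikit-image (an assumption; the abstract names neither the tools nor the actual shape criteria): elongated regions are treated as fibrous material and round ones as cells, with small regions discarded as noise.

    ```python
    import numpy as np
    from skimage import filters, measure

    def split_by_shape(image, min_area=20, ecc_cut=0.9):
        """Separate an intensity image into elongated (fibre-like) and round
        (cell-like) binary components by region shape, dropping small noise
        regions. A 2D illustration; the thesis works with 3D image stacks.
        """
        binary = image > filters.threshold_otsu(image)  # assumed segmentation step
        labels = measure.label(binary)
        fibres = np.zeros_like(binary)
        cells = np.zeros_like(binary)
        for region in measure.regionprops(labels):
            if region.area < min_area:
                continue                                   # background noise
            # High eccentricity ~ elongated region -> classify as fibre.
            target = fibres if region.eccentricity > ecc_cut else cells
            target[labels == region.label] = True
        return fibres, cells
    ```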