
    Rendition: Reclaiming what a black box takes away

    The premise of our work is deceptively familiar: A black box f(·) has altered an image, x → f(x). Recover the image x. This black box might be any number of simple or complicated things: a linear or non-linear filter, some app on your phone, etc. The latter is a good canonical example for the problem we address: Given only "the app" and an image produced by the app, find the image that was fed to the app. You can run the given image (or any other image) through the app as many times as you like, but you cannot look inside the (code for the) app to see how it works. At first blush, the problem sounds a lot like a standard inverse problem, but it is not in the following sense: While we have access to the black box f(·) and can run any image through it and observe the output, we do not know how the black box alters the image. Therefore we have no explicit form or model of f(·). Nor are we necessarily interested in the internal workings of the black box. We are simply happy to reverse its effect on a particular image, to whatever extent possible. This is what we call the "rendition" (rather than restoration) problem, as it does not fit the mold of an inverse problem (blind or otherwise). We describe general conditions under which rendition is possible, and provide a remarkably simple algorithm that works for both contractive and expansive black box operators. The principal and novel take-away message from our work is this surprising fact: One simple algorithm can reliably undo a wide class of (not too violent) image distortions. A higher quality pdf of this paper is available at http://www.milanfar.or
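    The abstract does not spell out the algorithm, but a minimal fixed-point sketch illustrates why one simple iteration can undo a contractive black box: if f is close enough to the identity, iterating x ← x + (y − f(x)) drives f(x) toward the observed output y. The blur operator and all names below are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def render(f, y, n_iter=200):
    """Fixed-point sketch: undo a black-box operator f given only y = f(x).

    Iterates x <- x + (y - f(x)), which converges when the map (I - f)
    is contractive, i.e. f is close enough to the identity.
    """
    x = y.copy()  # initialize with the observed image
    for _ in range(n_iter):
        x = x + (y - f(x))
    return x

# Toy black box: a mild moving-average blur (opaque to the solver,
# which only ever calls it).
def blur(x):
    k = np.array([0.1, 0.8, 0.1])
    return np.convolve(x, k, mode="same")

x_true = np.sin(np.linspace(0, 3 * np.pi, 128))
y = blur(x_true)
x_hat = render(blur, y)
```

    For this blur the iteration map has contraction factor at most 0.4 per step, so a few hundred iterations recover the input to machine precision; a more aggressive distortion would need the more general conditions the paper describes.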

    Efficient Nonlinear Transforms for Lossy Image Compression

    We assess the performance of two techniques in the context of nonlinear transform coding with artificial neural networks: Sadam and GDN (generalized divisive normalization). Both techniques have been successfully used in state-of-the-art image compression methods, but their performance has not been individually assessed to this point. Together, the techniques stabilize the training procedure of nonlinear image transforms and increase their capacity to approximate the (unknown) rate-distortion-optimal transform functions. Besides comparing their performance to established alternatives, we detail the implementation of both methods and provide open-source code along with the paper. Comment: accepted as a conference contribution to Picture Coding Symposium 201
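    As a point of reference, GDN in its basic form divides each channel by a learned norm pooled across channels. The following numpy sketch shows that forward transform only; parameter shapes and values are illustrative and not taken from the paper.

```python
import numpy as np

def gdn(x, beta, gamma):
    """Generalized divisive normalization (GDN), simplified forward sketch.

    x: (C, H, W) feature maps; beta: (C,) offsets; gamma: (C, C) weights.
    Each channel is divided by a learned norm pooled across channels:
        y[i] = x[i] / sqrt(beta[i] + sum_j gamma[i, j] * x[j]**2)
    """
    C = x.shape[0]
    sq = x.reshape(C, -1) ** 2                   # squared activations, (C, HW)
    denom = np.sqrt(beta[:, None] + gamma @ sq)  # pooled norm, (C, HW)
    return (x.reshape(C, -1) / denom).reshape(x.shape)

# Sanity check: with gamma = 0 and beta = 1 the transform is the identity.
x = np.random.default_rng(0).normal(size=(3, 4, 4))
y = gdn(x, beta=np.ones(3), gamma=np.zeros((3, 3)))
```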

    Properties of continuous Fourier extension of the discrete cosine transform and its multidimensional generalization

    A versatile method is described for the practical computation of the discrete Fourier transforms (DFT) of a continuous function g(t) given by its values g_j at the points of a uniform grid F_N generated by conjugacy classes of elements of finite adjoint order N in the fundamental region F of compact semisimple Lie groups. The present implementation of the method is for the groups SU(2), when F is reduced to a one-dimensional segment, and for SU(2) × ... × SU(2) in multidimensional cases. This simplest case turns out to result in a transform known as the discrete cosine transform (DCT), which is often considered to be simply a specific type of the standard DFT. Here we show that the DCT is very different from the standard DFT when the properties of the continuous extensions of these two discrete transforms from the discrete grid points t_j, j = 0, 1, ..., N, to all points t ∈ F are considered. (A) Unlike the continuous extension of the DFT, the continuous extension of the (inverse) DCT, called CEDCT, closely approximates g(t) between the grid points t_j. (B) For increasing N, the derivative of CEDCT converges to the derivative of g(t). And (C), for CEDCT the principle of locality is valid. Finally, we use the continuous extension of the 2-dimensional DCT to illustrate its potential for interpolation, as well as for the data compression of 2D images. Comment: submitted to JMP on April 3, 2003; still waiting for the referee's Repor
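    The one-dimensional construction can be sketched concretely: compute DCT-II coefficients of samples taken at the midpoints t_j = (2j+1)/(2N) of [0, 1], then evaluate the same cosine sum at arbitrary t. This is a generic illustration of a continuous DCT extension, not the paper's group-theoretic derivation.

```python
import numpy as np

def dct2_coeffs(g):
    """DCT-II coefficients X_k of samples g_j taken at t_j = (2j+1)/(2N)."""
    N = len(g)
    j = np.arange(N)
    return np.array([np.sum(g * np.cos(np.pi * k * (2 * j + 1) / (2 * N)))
                     for k in range(N)])

def cedct(X, t):
    """Continuous extension of the inverse DCT, evaluated at points t in [0, 1].

    At the grid points t_j this reproduces the samples exactly; between
    them it closely tracks a smooth g(t).
    """
    N = len(X)
    k = np.arange(1, N)
    return (X[0] + 2 * np.cos(np.pi * np.outer(np.asarray(t), k)) @ X[k]) / N

N = 16
t_grid = (2 * np.arange(N) + 1) / (2 * N)
g = np.exp(-2 * t_grid)          # samples of a smooth test function g(t)
X = dct2_coeffs(g)
```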

    Adaptive Regularization of Ill-Posed Problems: Application to Non-rigid Image Registration

    We introduce an adaptive regularization approach. In contrast to conventional Tikhonov regularization, which specifies a fixed regularization operator, we estimate the regularization operator simultaneously with the parameters. From a Bayesian perspective, we estimate the prior distribution on the parameters, assuming that it is close to some given model distribution. We constrain the prior distribution to be a Gauss-Markov random field (GMRF), which allows us to solve for the prior distribution analytically and provides a fast optimization algorithm. We apply our approach to non-rigid image registration to estimate the spatial transformation between two images. Our evaluation shows that the adaptive regularization approach significantly outperforms standard variational methods.
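    For contrast, conventional Tikhonov regularization with a fixed operator L has a closed-form solution; the adaptive approach above would additionally estimate the regularizer itself. A minimal denoising sketch (the forward model, operator, and weight below are illustrative, not from the paper):

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Closed-form solution of min_x ||A x - b||^2 + lam * ||L x||^2."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

rng = np.random.default_rng(0)
n = 50
A = np.eye(n)                               # identity forward model: denoising
x_true = np.sin(np.linspace(0, np.pi, n))   # smooth ground-truth signal
b = x_true + 0.1 * rng.normal(size=n)       # noisy observations
L = np.diff(np.eye(n), axis=0)              # fixed first-difference operator
x_hat = tikhonov(A, b, L, lam=5.0)          # smoothness-regularized estimate
```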

    Lossy Image Compression with Compressive Autoencoders

    We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.
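    One common way to work around the non-differentiability of quantization in this line of work is to replace hard rounding with additive uniform noise during training: the surrogate has gradient one almost everywhere and matches rounding's error statistics. The authors' exact relaxation may differ; this is a generic sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_train(y):
    """Training-time proxy: additive uniform noise on [-1/2, 1/2] is
    differentiable in y yet mimics rounding's quantization error."""
    return y + rng.uniform(-0.5, 0.5, size=y.shape)

def quantize_test(y):
    """Inference-time quantizer: hard rounding (gradient 0 almost everywhere)."""
    return np.round(y)

y = np.array([0.2, 1.7, -3.4])
# The noisy proxy is unbiased: averaging many draws recovers y.
approx = np.mean([quantize_train(y) for _ in range(20000)], axis=0)
```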

    Multispectral Palmprint Recognition Using a Hybrid Feature

    Personal identification has been a major field of research in recent years. Biometrics-based technologies that exploit fingerprints, iris, face, voice and palmprints have been at the center of attention for solving this problem. Palmprints can be used instead of fingerprints, which are among the earliest of these biometric technologies. A palm is covered with the same skin as the fingertips but has a larger surface, giving us more information than the fingertips. The major features of the palm are palm-lines, including principal lines, wrinkles and ridges. Using these lines is one of the most popular approaches to the palmprint recognition problem. Another robust feature is the wavelet energy of palms. In this paper we use a hybrid feature which combines both of these features. Moreover, multispectral analysis is applied to improve the performance of the system. Finally, a minimum-distance classifier is used to match test images against the training samples. The proposed algorithm has been tested on a well-known multispectral palmprint dataset and achieved an average accuracy of 98.8%. Comment: 6 page
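    The matching stage can be sketched generically: a minimum-distance classifier assigns each test feature vector the label of its nearest training sample. The feature values below are hypothetical stand-ins for the hybrid line/wavelet-energy features.

```python
import numpy as np

def min_distance_classify(test_feat, train_feats, train_labels):
    """Assign each test feature vector the label of its nearest training
    sample under Euclidean distance (minimum-distance classifier)."""
    # Pairwise distances: (n_test, n_train)
    d = np.linalg.norm(train_feats[None, :, :] - test_feat[:, None, :], axis=2)
    return train_labels[np.argmin(d, axis=1)]

# Hypothetical 2-D feature vectors for two subjects (labels 0 and 1).
train = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = np.array([0, 0, 1])
test = np.array([[0.9, 1.1], [4.8, 5.2]])
pred = min_distance_classify(test, train, labels)
```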

    Signal and Image Processing with Sinlets

    This paper presents a new family of localized orthonormal bases - sinlets - which are well suited for both signal and image processing and analysis. One-dimensional sinlets are related to specific solutions of the time-dependent harmonic oscillator equation. By construction, each sinlet is infinitely differentiable and has a well-defined and smooth instantaneous frequency known in analytical form. For square-integrable transient signals with infinite support, the one-dimensional sinlet basis provides an advantageous alternative to the Fourier transform by rendering accurate signal representation via a countable set of real-valued coefficients. The properties of sinlets make them suitable for analyzing many real-world signals whose frequency content changes with time, including radar and sonar waveforms, music, speech, biological echolocation sounds, biomedical signals, seismic acoustic waves, and signals employed in wireless communication systems. One-dimensional sinlet bases can be used to construct two- and higher-dimensional bases with a variety of potential applications, including image analysis and representation. Comment: 26 pages, 21 figure

    Taylor Series as Wide-sense Biorthogonal Wavelet Decomposition

    Pointwise-supported generalized wavelets are introduced, based on the Dirac delta, the doublet, and further derivatives of the delta. A generalized biorthogonal analysis leads to the standard Taylor series and a new Dual-Taylor series that may be interpreted as Laurent Schwartz distributions. A Parseval-like identity is also derived for Taylor series, showing that Taylor series support an energy theorem. New representations for signals, called derivagrams, are introduced, which are similar to spectrograms. This approach corroborates the impact of wavelets in modern signal analysis. Comment: 6 pages, 4 figures. conference: XXII Simposio Brasileiro de Telecomunicacoes, SBrT'05, 2005, Campinas, SP, Brazi
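    On the analysis side, pairing a smooth signal with derivatives of the Dirac delta yields the familiar Taylor coefficients c_n = f^(n)(0)/n!, and synthesis sums c_n t^n. A minimal numerical sketch using exp(t), whose derivatives at 0 are all 1 (a certain mathematical fact, not an example from the paper):

```python
import math

def taylor_coeffs_exp(n_terms):
    """Analysis: pairing exp(t) with delta derivatives gives c_n = 1 / n!."""
    return [1.0 / math.factorial(n) for n in range(n_terms)]

def synthesize(coeffs, t):
    """Synthesis: partial Taylor sum, sum_n c_n * t**n."""
    return sum(c * t**n for n, c in enumerate(coeffs))

c = taylor_coeffs_exp(15)
approx = synthesize(c, 1.0)   # partial sum at t = 1, close to e
```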

    Quality Adaptive Low-Rank Based JPEG Decoding with Applications

    Small compression noises, despite being transparent to human eyes, can adversely affect the results of many image restoration processes if left unaccounted for. In particular, compression noises are highly detrimental to inverse operators of a high-boosting (sharpening) nature, such as deblurring and super-resolution against a convolution kernel. By incorporating the non-linear DCT quantization mechanism into the formulation for image restoration, we propose a new sparsity-based convex programming approach for joint compression-noise removal and image restoration. Experimental results demonstrate significant performance gains of the new approach over existing image restoration methods.
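    The DCT quantization mechanism referred to above is, in its basic form: transform an 8×8 block, divide by a quantization table, round, and re-multiply. A numpy-only sketch (the flat table is illustrative; real JPEG tables vary by frequency):

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix D, satisfying D @ D.T = I."""
    j = np.arange(N)
    D = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(j, 2 * j + 1) / (2 * N))
    D[0, :] /= np.sqrt(2.0)
    return D

def jpeg_quantize(block, Q):
    """JPEG-style non-linear quantization: 2-D DCT of the block,
    division by the quantization table Q, rounding, re-multiplication."""
    D = dct_matrix(block.shape[0])
    coeffs = D @ block @ D.T
    return np.round(coeffs / Q) * Q

rng = np.random.default_rng(0)
block = rng.normal(scale=50.0, size=(8, 8))
Q = np.full((8, 8), 16.0)   # flat quantization table, for illustration only
cq = jpeg_quantize(block, Q)
```

    The rounding step is what makes the mechanism non-linear: each coefficient is snapped to the nearest multiple of its table entry, so the error per coefficient is bounded by half the step size.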

    Covariance Eigenvector Sparsity for Compression and Denoising

    Sparsity in the eigenvectors of signal covariance matrices is exploited in this paper for compression and denoising. Dimensionality reduction (DR) and quantization modules, present in many practical compression schemes such as transform codecs, are designed to capitalize on this form of sparsity and achieve improved reconstruction performance compared to existing sparsity-agnostic codecs. Using training data that may be noisy, a novel sparsity-aware linear DR scheme is developed to fully exploit sparsity in the covariance eigenvectors and form noise-resilient estimates of the principal covariance eigenbasis. Sparsity is effected via norm-one regularization, and the associated minimization problems are solved using computationally efficient coordinate descent iterations. The resulting eigenspace estimator is shown to be capable of identifying a subset of the unknown support of the eigenspace basis vectors even when the observation noise covariance matrix is unknown, as long as the noise power is sufficiently low. It is proved that the sparsity-aware estimator is asymptotically normal, and that the probability of correctly identifying the signal subspace basis support approaches one as the number of training data grows large. Simulations using synthetic data and images corroborate that the proposed algorithms achieve improved reconstruction quality relative to alternatives. Comment: IEEE Transactions on Signal Processing, 2012 (to appear
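    Coordinate descent under norm-one regularization typically reduces each scalar update to a soft-thresholding step. The following generic lasso sketch shows that mechanism; it is not the authors' eigenspace estimator, and the problem data are synthetic.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(A, b, lam, n_iter=100):
    """Coordinate descent for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Each coordinate update is a closed-form soft-thresholding step.
    """
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = np.sum(A**2, axis=0)
    for _ in range(n_iter):
        for i in range(n):
            r = b - A @ x + A[:, i] * x[i]   # residual excluding coordinate i
            x[i] = soft_threshold(A[:, i] @ r, lam) / col_sq[i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_sparse = np.zeros(10)
x_sparse[[2, 7]] = [3.0, -2.0]     # sparse ground truth, support {2, 7}
b = A @ x_sparse                   # noiseless observations
x_hat = lasso_cd(A, b, lam=0.1)
```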