ShearLab 3D: Faithful Digital Shearlet Transforms based on Compactly Supported Shearlets
Wavelets and their associated transforms are highly efficient when
approximating and analyzing one-dimensional signals. However, multivariate
signals such as images or videos typically exhibit curvilinear singularities,
which wavelets are provably deficient at sparsely approximating and at
analyzing in the sense of, for instance, detecting their direction. Shearlets
are a directional representation system extending the wavelet framework, which
overcomes those deficiencies. Similar to wavelets, shearlets allow a faithful
implementation and fast associated transforms. In this paper, we will introduce
a comprehensive, carefully documented software package coined ShearLab 3D
(www.ShearLab.org) and discuss its algorithmic details. This package provides
MATLAB code for a novel faithful algorithmic realization of the 2D and 3D
shearlet transform (and their inverses) associated with compactly supported
universal shearlet systems incorporating the option of using CUDA. We will
present extensive numerical experiments in 2D and 3D concerning denoising,
inpainting, and feature extraction, comparing the performance of ShearLab 3D
with similar transform-based algorithms such as curvelets, contourlets, or
surfacelets. In the spirit of reproducible research, all scripts are
accessible on www.ShearLab.org.
Comment: There is another shearlet software package
(http://www.mathematik.uni-kl.de/imagepro/members/haeuser/ffst/) by S.
Häuser and G. Steidl. We will include this in a revision.
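The denoising experiments mentioned above follow the usual transform-domain pattern: apply a sparsifying transform, shrink small coefficients (assumed to be noise), and invert. A minimal sketch of that pattern, using a one-level 1D Haar wavelet as a stand-in for the shearlet transform (the function names here are illustrative, not ShearLab's API; the real pipeline substitutes the 2D/3D shearlet transform and its inverse):

```python
import math

def haar_forward(x):
    """One-level Haar transform of an even-length signal."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_forward, reconstructing the original signal."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def denoise(x, threshold):
    """Transform, hard-threshold the detail coefficients, invert."""
    approx, detail = haar_forward(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)
```

The directional systems compared in the paper (shearlets, curvelets, contourlets, surfacelets) differ only in the transform pair; the threshold-and-invert step is common to all of them.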
Confocal microscopic image sequence compression using vector quantization and 3D pyramids
The 3D pyramid compressor project at the University of Glasgow has developed a compressor for images obtained from a CLSM device. The proposed method, combining an image pyramid coder with vector quantization, performs well at compressing confocal volume image data. An experiment was conducted on several kinds of CLSM data, comparing the presented compressor with other well-known volume data compressors such as MPEG-1. The results showed that the 3D pyramid compressor gave higher subjective and objective image quality of reconstructed images at the same compression ratio, and produced more acceptable results when image processing filters were applied to the reconstructed images.
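The vector-quantization half of the scheme can be sketched in a few lines: image blocks are replaced by the index of the nearest codeword, so only indices (plus one shared codebook) need to be stored. This is a generic VQ sketch, not the Glasgow compressor; in practice the codebook is trained (e.g. with LBG/k-means) on representative CLSM data rather than fixed by hand:

```python
def squared_distance(u, v):
    """Squared Euclidean distance between two equal-length blocks."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def vq_encode(blocks, codebook):
    """Map each block to the index of its nearest codeword."""
    return [min(range(len(codebook)),
                key=lambda k: squared_distance(b, codebook[k]))
            for b in blocks]

def vq_decode(indices, codebook):
    """Reconstruct blocks by looking up codewords; lossy by design."""
    return [list(codebook[k]) for k in indices]
```

Compression comes from the index stream being far smaller than the raw blocks; the pyramid coder supplies the multi-resolution block structure that VQ then quantizes.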
Sparse Representation of Astronomical Images
Sparse representation of astronomical images is discussed. It is shown that a
significant gain in sparsity is achieved when particular mixed dictionaries are
used for approximating these types of images with greedy selection strategies.
Experiments are conducted to confirm: i) effectiveness at producing sparse
representations; ii) competitiveness with respect to the time required to
process large images. The latter is a consequence of the suitability of the
proposed dictionaries for approximating images in partitions of small
blocks. This feature makes it possible to apply the effective greedy selection
technique Orthogonal Matching Pursuit up to some block size. For blocks
exceeding that size, a refinement of the original Matching Pursuit approach is
considered. The resulting method is termed Self Projected Matching Pursuit,
because it is shown to be effective for implementing, via Matching Pursuit
itself, the optional back-projection intermediate steps in that approach.
Comment: Software to implement the approach is available on
http://www.nonlinear-approx.info/examples/node1.htm
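The greedy selection underlying both variants is plain Matching Pursuit: repeatedly pick the dictionary atom most correlated with the residual, subtract its contribution, and iterate. A minimal sketch with a toy dictionary (unit-norm atoms assumed; this illustrates the base algorithm only, not the mixed dictionaries or the self-projection refinement from the abstract):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, iterations=10):
    """Greedily approximate `signal` as a sparse combination of atoms.

    Returns (coeffs, residual) with sum_k coeffs[k]*dictionary[k]
    approximating the signal. Atoms are assumed to have unit norm.
    """
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(iterations):
        # Select the atom with the largest |<residual, atom>|.
        k = max(range(len(dictionary)),
                key=lambda i: abs(dot(residual, dictionary[i])))
        c = dot(residual, dictionary[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, dictionary[k])]
    return coeffs, residual
```

Orthogonal Matching Pursuit adds a least-squares re-fit over all selected atoms at each step, which is why its cost grows with block size and why the abstract falls back to a Matching Pursuit refinement for large blocks.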
Light Field Denoising via Anisotropic Parallax Analysis in a CNN Framework
Light field (LF) cameras provide perspective information of scenes by taking
directional measurements of the focusing light rays. The raw outputs are
usually dark with additive camera noise, which impedes subsequent processing
and applications. We propose a novel LF denoising framework based on
anisotropic parallax analysis (APA). Two convolutional neural networks are
jointly designed for the task: first, the structural parallax synthesis network
predicts the parallax details for the entire LF based on a set of anisotropic
parallax features. These novel features can efficiently capture the
high-frequency perspective components of an LF from noisy observations. Second, the
view-dependent detail compensation network restores non-Lambertian variation to
each LF view by involving view-specific spatial energies. Extensive experiments
show that the proposed APA LF denoiser provides a much better denoising
performance than state-of-the-art methods in terms of visual quality and in
preservation of parallax details.
Information theoretic approach for assessing image fidelity in photon-counting arrays
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source image's entropy, yielding a fidelity metric between zero and unity, which correspond respectively to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in the source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application of the theory, an image-classification problem is considered, showing a congruous relationship between the fidelity metric and the classifier's performance.
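The normalization step makes the metric easy to state concretely: fidelity = I(X;Y)/H(X), which is 1 when the photon-counted image Y determines the source X and 0 when they are independent. A plug-in histogram estimate from paired pixel samples (a sketch of the quantity only, not the paper's Markov-random-field derivation):

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a discrete sample list."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

def fidelity(x, y):
    """Normalized mutual information I(X;Y)/H(X) from paired samples.

    Lies in [0, 1] when H(X) > 0: 0 means complete loss of
    information, 1 means full preservation.
    """
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))  # joint entropy of (X, Y) pairs
    return (hx + hy - hxy) / hx
```

Plug-in estimates like this are biased upward for small samples, which is one reason the paper works with a model-based derivation rather than raw histograms.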