SCHATTEN MATRIX NORM BASED POLARIMETRIC SAR DATA REGULARIZATION. APPLICATION OVER CHAMONIX MONT-BLANC
The paper addresses the filtering of Polarimetric Synthetic Aperture Radar (PolSAR) images. The filtering strategy is based on a regularizing cost function built on matrix norms called the Schatten p-norms, which operate on the singular values of a matrix. The proposed approach is illustrated on scattering and coherency matrices from RADARSAT-2 PolSAR images over the Chamonix Mont-Blanc site. Several values of p are surveyed, and their filtering capabilities are compared with conventional strategies for filtering PolSAR data.
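The regularizer above acts on matrix singular values. A minimal sketch of the Schatten p-norm itself (the coherency matrix `T` below is illustrative, not real PolSAR data):

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten p-norm: the l_p norm of the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    if np.isinf(p):
        return s.max()  # spectral norm (p = infinity)
    return (s ** p).sum() ** (1.0 / p)

# Toy symmetric positive-definite matrix standing in for a coherency matrix
T = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.2],
              [0.1, 0.2, 0.5]])

print(schatten_norm(T, 1))  # nuclear norm (sum of singular values)
print(schatten_norm(T, 2))  # Frobenius norm
```

For p = 1 this is the nuclear norm, which promotes low-rank structure; for p = 2 it coincides with the Frobenius norm.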
Isotropic inverse-problem approach for two-dimensional phase unwrapping
In this paper, we propose a new technique for two-dimensional phase unwrapping. The unwrapped phase is found as the solution of an inverse problem that consists in minimizing an energy functional. The latter includes a weighted data-fidelity term that favors sparsity in the error between the true and wrapped phase differences, as well as a regularizer based on higher-order total variation. One desirable feature of our method is its rotation invariance, which allows it to unwrap a much larger class of images than the state of the art. We demonstrate the effectiveness of our method through several experiments on simulated and real data obtained with a tomographic phase microscope. The proposed method can enhance the applicability and outreach of techniques that rely on quantitative phase evaluation.
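The data term exploits the fact that the re-wrapped differences of the wrapped phase usually equal the true phase differences, so the error between them is sparse. A minimal 1D sketch of that observation (the ramp signal is illustrative):

```python
import numpy as np

def wrap(phi):
    """Wrap phase into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

# Smooth true phase ramp whose range exceeds (-pi, pi]
x = np.linspace(0, 4 * np.pi, 200)
phi_true = x
phi_wrapped = wrap(phi_true)

# Re-wrapped differences of the wrapped phase recover the true phase
# differences almost everywhere, so the error between them is sparse.
d_true = np.diff(phi_true)
d_est = wrap(np.diff(phi_wrapped))
err = d_true - d_est
print(np.count_nonzero(np.abs(err) > 1e-8))
```

On smooth regions the error vanishes entirely; it becomes nonzero only where the true phase jumps by more than pi between samples, which is exactly where a sparsity-favoring data term helps.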
Efficient Inversion of Multiple-Scattering Model for Optical Diffraction Tomography
Optical diffraction tomography relies on solving an inverse scattering
problem governed by the wave equation. Classical reconstruction algorithms are
based on linear approximations of the forward model (Born or Rytov), which
limits their applicability to thin samples with low refractive-index contrasts.
More recent works have shown the benefit of adopting nonlinear models. They
account for multiple scattering and reflections, improving the quality of
reconstruction. To reduce the complexity and memory requirements of these
methods, we derive an explicit formula for the Jacobian matrix of the nonlinear
Lippmann-Schwinger model which lends itself to an efficient evaluation of the
gradient of the data-fidelity term. This allows us to deploy efficient methods to solve the corresponding inverse problem subject to sparsity constraints.
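The role of the explicit Jacobian is to make the gradient of the data-fidelity term cheap to evaluate. A generic sketch with a toy nonlinear forward model standing in for the Lippmann-Schwinger operator (the model and matrix `G` are hypothetical; the real operator involves the wave equation):

```python
import numpy as np

G = np.array([[0.5, 0.2], [0.1, 0.4]])  # fixed "Green's"-like matrix (toy)

def forward(x):
    """Toy nonlinear forward model A(x)."""
    return G @ (x * (1.0 + x))

def jacobian(x):
    """Explicit Jacobian of forward(): d/dx of x*(1+x) is 1 + 2x."""
    return G * (1.0 + 2.0 * x)[None, :]

def grad_data_fidelity(x, y):
    """Gradient of 0.5 * ||A(x) - y||^2 via the explicit Jacobian."""
    return jacobian(x).T @ (forward(x) - y)

x = np.array([0.3, -0.2])
y = np.array([0.1, 0.05])
g = grad_data_fidelity(x, y)

# Sanity check against central finite differences
eps = 1e-6
fd = np.array([(0.5 * np.sum((forward(x + eps * e) - y) ** 2)
                - 0.5 * np.sum((forward(x - eps * e) - y) ** 2)) / (2 * eps)
               for e in np.eye(2)])
print(np.max(np.abs(g - fd)))
```

With the Jacobian in closed form, the gradient costs one forward evaluation plus one adjoint product, which is what makes proximal methods with sparsity constraints practical at scale.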
Universal Denoising Networks: A Novel CNN Architecture for Image Denoising
We design a novel network architecture for learning discriminative image
models that are employed to efficiently tackle the problem of grayscale and
color image denoising. Based on the proposed architecture, we introduce two
different variants. The first network involves convolutional layers as a core
component, while the second one relies instead on non-local filtering layers
and thus it is able to exploit the inherent non-local self-similarity property
of natural images. As opposed to most of the existing deep network approaches,
which require the training of a specific model for each considered noise level,
the proposed models are able to handle a wide range of noise levels using a
single set of learned parameters, while they are very robust when the noise
degrading the latent image does not match the statistics of the noise used
during training. We support this claim with results on publicly available images corrupted by unknown noise, which we compare against solutions obtained by competing methods. At the same time, the introduced networks achieve excellent results under additive white Gaussian noise (AWGN), comparable to those of the current state-of-the-art network, while relying on a shallower architecture with one order of magnitude fewer trained parameters. These properties make the proposed networks ideal candidates to serve as sub-solvers in restoration methods that deal with general inverse imaging problems such as deblurring, demosaicking, super-resolution, etc.
Comment: Camera-ready paper to appear in the Proceedings of CVPR 201
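The non-local variant relies on self-similarity: similar patches elsewhere in the image inform the denoising of each patch. A minimal sketch of the similarity weighting at the core of such a non-local layer (the weighting form and bandwidth `h` are illustrative, not the paper's exact layer):

```python
import numpy as np

def nonlocal_weights(patches, ref, h=0.5):
    """Normalized similarity weights between a reference patch and
    candidate patches -- the building block of non-local filtering."""
    d2 = ((patches - ref) ** 2).sum(axis=1)
    w = np.exp(-d2 / (h ** 2))
    return w / w.sum()

rng = np.random.default_rng(0)
ref = rng.normal(size=9)  # a 3x3 patch, flattened
patches = np.stack([ref + 0.01 * rng.normal(size=9),  # near-duplicate
                    rng.normal(size=9),               # unrelated patch
                    ref])                             # exact copy
w = nonlocal_weights(patches, ref)
print(w)  # weight mass concentrates on the similar patches
```

Averaging candidate patches with these weights suppresses noise while preserving repeated structure, which is the self-similarity property the non-local network variant exploits.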
Duality Mapping for Schatten Matrix Norms
In this paper, we fully characterize the duality mapping over the space of
matrices that are equipped with Schatten norms. Our approach is based on the
analysis of the saturation of the Hölder inequality for Schatten norms. We prove in our main result that, for 1 < p < ∞, the duality mapping over the space of real-valued matrices equipped with the Schatten-p norm is a continuous and single-valued function, and we provide an explicit form for its computation. For the special case p = 1, the mapping is set-valued; by adding a rank constraint, we show that it can be reduced to a Borel-measurable single-valued function for which we also provide a closed-form expression.
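For 1 < p < ∞, the explicit form acts on the singular values. A sketch of the closed form and its defining properties (the formula below is my reading of the standard vector-case result lifted to singular values, not a verbatim transcription of the paper):

```python
import numpy as np

def schatten_duality_map(X, p):
    """Duality mapping for the Schatten-p norm, 1 < p < inf (sketch):
    with X = U diag(s) V^T,
        J(X) = ||X||_p^(2-p) * U diag(s^(p-1)) V^T.
    It satisfies <X, J(X)> = ||X||_p^2 and ||J(X)||_q = ||X||_p,
    where 1/p + 1/q = 1."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    norm_p = (s ** p).sum() ** (1.0 / p)
    return norm_p ** (2 - p) * (U * s ** (p - 1)) @ Vt

X = np.array([[1.0, 2.0], [0.0, 3.0]])
p = 3.0
J = schatten_duality_map(X, p)
norm_p = (np.linalg.svd(X, compute_uv=False) ** p).sum() ** (1 / p)
print(np.vdot(X, J))  # equals ||X||_p^2
print(norm_p ** 2)
```

At p = 1 the exponent p - 1 vanishes and zero singular values make the map ambiguous, which is why that case is set-valued without the rank constraint.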
Image Reconstruction from Undersampled Confocal Microscopy Data using Multiresolution Based Maximum Entropy Regularization
We consider the problem of reconstructing 2D images from randomly under-sampled confocal microscopy measurements. The well-known and widely celebrated total-variation regularization, which is the L1 norm of derivatives, turns out to be unsuitable for this problem; it is unable to handle noise and under-sampling together. This issue is linked to the phase-transition phenomenon observed in compressive-sensing research, which is essentially the breakdown of total-variation methods when the sampling density drops below a certain threshold. The severity of this breakdown is determined by the so-called mutual incoherence between the derivative operators and the measurement operator. In our problem, the mutual incoherence is low, and hence total-variation regularization produces serious artifacts in the presence of noise even when the sampling density is not very low. There have been very few attempts to develop regularization methods that perform better than total-variation regularization for this problem. We develop a multi-resolution regularization method that is adaptive to image structure. In our approach, the desired reconstruction is formulated as a series of coarse-to-fine multi-resolution reconstructions; at each level, the regularization is constructed to be adaptive to the image structure, where the information for adaptation is obtained from the reconstruction at the coarser resolution level. This adaptation is achieved via the maximum-entropy principle: the required adaptive regularization is determined as the maximizer of entropy subject to constraints given by the information extracted from the coarse reconstruction. We demonstrate the superiority of the proposed regularization method over existing ones through several reconstruction examples.
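The maximum-entropy step can be sketched with the standard Lagrangian result: among all normalized weight maps satisfying a linear constraint on some coarse-level statistic, the entropy maximizer has exponential form w_i ∝ exp(-λ c_i). The toy sketch below derives adaptive regularization weights from coarse gradient magnitudes (the statistic, image, and fixed multiplier `lam` are all hypothetical illustrations, not the paper's exact construction):

```python
import numpy as np

# Toy coarse reconstruction: flat regions with one vertical edge
coarse = np.array([[0., 0., 1., 1.],
                   [0., 0., 1., 1.],
                   [0., 0., 1., 1.],
                   [0., 0., 1., 1.]])

gx = np.abs(np.diff(coarse, axis=1))  # coarse horizontal gradient magnitudes
g = np.pad(gx, ((0, 0), (0, 1)))      # pad back to image size

# Maximum-entropy weights subject to a linear constraint on g take the
# exponential form exp(-lam * g); lam would be set by the constraint value
# (fixed here for illustration).
lam = 5.0
w = np.exp(-lam * g)
w /= w.sum()
# Weights are small across the detected edge (columns 1 -> 2), so the
# smoothness penalty is relaxed exactly where the coarse level saw structure.
print(w)
```

This captures the adaptation mechanism: the coarse reconstruction supplies the constraints, and the entropy maximizer converts them into spatially varying regularization weights for the finer level.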
Towards Recognizing 3D Models Using a Single Image
As 3D data becomes more popular, techniques for retrieving a particular 3D model are necessary. We want to recognize a 3D model from a single photograph: since any user can easily take a picture of a model he or she would like to find, querying with an image is simple and natural. However, a 2D intensity image depends on viewpoint, texture, and lighting conditions, so matching it with a 3D geometric model is very challenging. This paper proposes a first step towards matching a 2D image to models, based on features that are repeatable both in 2D images and in depth images (generated from 3D models); we show their independence from texture and lighting. The detected features are then matched to recognize 3D models by combining HOG (Histogram of Oriented Gradients) descriptors with repeatability scores. The proposed method reaches a recognition rate of 72% among 12 categories of 3D objects and outperforms classical feature-detection techniques for recognizing 3D models from a single image.
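The matching relies on orientation histograms of image gradients. A minimal sketch of a single HOG cell (no block normalization or cell grid, so a simplification of the full descriptor):

```python
import numpy as np

def hog_cell(patch, nbins=9):
    """Histogram of unsigned gradient orientations for one cell,
    weighted by gradient magnitude and L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))  # np.gradient: axis0, axis1
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)    # unsigned angles in [0, pi)
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=nbins)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

# A vertical edge concentrates gradient energy in one orientation bin
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = hog_cell(patch)
print(np.argmax(h))  # dominant orientation bin
```

Because the histogram is built from gradient orientations rather than raw intensities, the descriptor is largely insensitive to the texture and lighting variations the paper highlights, which is what makes matching photographs against depth renderings feasible.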