A Framework for Fast Image Deconvolution with Incomplete Observations
In image deconvolution problems, the diagonalization of the underlying
operators by means of the FFT usually yields very large speedups. When there
are incomplete observations (e.g., in the case of unknown boundaries), standard
deconvolution techniques normally involve non-diagonalizable operators,
resulting in rather slow methods, or, otherwise, use inexact convolution
models, resulting in the occurrence of artifacts in the enhanced images. In
this paper, we propose a new deconvolution framework for images with incomplete
observations that allows us to work with diagonalized convolution operators,
and therefore is very fast. We iteratively alternate the estimation of the
unknown pixels and of the deconvolved image, using, e.g., an FFT-based
deconvolution method. This framework is an efficient, high-quality alternative
to existing methods of dealing with the image boundaries, such as edge
tapering. It can be used with any fast deconvolution method. We give an example
in which a state-of-the-art method that assumes periodic boundary conditions is
extended, through the use of this framework, to unknown boundary conditions.
Furthermore, we propose a specific implementation of this framework, based on
the alternating direction method of multipliers (ADMM). We provide a proof of
convergence for the resulting algorithm, which can be seen as a "partial" ADMM,
in which not all variables are dualized. We report experimental comparisons
with other primal-dual methods, where the proposed one performed at the level
of the state of the art. Four different kinds of applications were tested in
the experiments: deconvolution, deconvolution with inpainting, superresolution,
and demosaicing, all with unknown boundaries.
Comment: IEEE Trans. Image Process., to be published. 15 pages, 11 figures.
MATLAB code available at
https://github.com/alfaiate/DeconvolutionIncompleteOb
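The alternating scheme described above can be sketched in a few lines. This is a minimal illustration, assuming a plain Wiener filter as the FFT-based deconvolution step; the function names and the regularisation parameter are illustrative, not the paper's "partial" ADMM algorithm.

```python
import numpy as np

def wiener_deconv(y, psf_fft, reg=1e-2):
    """FFT-based Wiener deconvolution, valid under periodic boundaries."""
    Y = np.fft.fft2(y)
    X = np.conj(psf_fft) * Y / (np.abs(psf_fft) ** 2 + reg)
    return np.real(np.fft.ifft2(X))

def deconv_incomplete(y, mask, psf, n_iter=20, reg=1e-2):
    """Alternate between estimating the unobserved pixels and deconvolving.

    y    : observed image (values outside `mask` are arbitrary)
    mask : boolean array, True where pixels were actually observed
    psf  : point-spread function, same shape as y, centred at (0, 0)
    """
    psf_fft = np.fft.fft2(psf)
    y_filled = y * mask  # initial guess: zeros in the unknown region
    for _ in range(n_iter):
        # (1) deconvolve the current "complete" observation with a fast,
        #     diagonalized (periodic-boundary) method
        x = wiener_deconv(y_filled, psf_fft, reg)
        # (2) re-synthesise the unknown pixels by re-blurring the estimate,
        #     keeping the truly observed pixels untouched
        y_synth = np.real(np.fft.ifft2(psf_fft * np.fft.fft2(x)))
        y_filled = np.where(mask, y, y_synth)
    return x
```

Any fast periodic-boundary deconvolution method could replace `wiener_deconv` in step (1), which is the sense in which the framework extends existing methods to unknown boundaries.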
A new kernel method for hyperspectral image feature extraction
Hyperspectral images provide abundant spectral information for the remote discrimination of subtle differences in ground cover. However, the increasing spectral dimensionality, as well as the information redundancy, makes the analysis and interpretation of hyperspectral images a challenge. Feature extraction is a very important step in hyperspectral image processing. Feature extraction methods aim at reducing the dimension of the data while preserving as much information as possible. In particular, nonlinear feature extraction methods (e.g. the kernel minimum noise fraction (KMNF) transformation) have been reported to benefit many applications of hyperspectral remote sensing, due to their good preservation of the high-order structures of the original data. However, conventional KMNF and its extensions have some limitations in noise fraction estimation during feature extraction, and this leads to poor performance in post-applications. This paper proposes a novel nonlinear feature extraction method for hyperspectral images. Instead of estimating the noise fraction from nearest-neighbourhood information (within a sliding window), the proposed method explores the use of image segmentation. The approach benefits both noise fraction estimation and information preservation, and enables a significant improvement in classification. Experimental results on two real hyperspectral images demonstrate the efficiency of the proposed method. Compared to conventional KMNF, the improvements of the method on the two hyperspectral image classification tasks are 8% and 11%. This nonlinear feature extraction method can also be applied to other disciplines where high-dimensional data analysis is required.
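To make the role of noise fraction estimation concrete, here is a sketch of the linear MNF transform (not the kernel variant discussed above). The noise covariance is estimated from horizontal pixel differences, a simple stand-in for both the sliding-window and the segmentation-based estimators; all parameter choices are illustrative assumptions.

```python
import numpy as np

def mnf(X, n_components=3):
    """Linear minimum noise fraction sketch.

    X : (rows, cols, bands) hyperspectral cube.
    Returns (rows*cols, n_components) components ordered by decreasing
    signal-to-noise ratio.
    """
    r, c, b = X.shape
    flat = X.reshape(-1, b)
    flat = flat - flat.mean(axis=0)
    S = flat.T @ flat / (flat.shape[0] - 1)          # data covariance
    # Noise covariance estimated from horizontal neighbour differences;
    # the factor 2 accounts for differencing doubling the noise variance.
    d = (X[:, 1:, :] - X[:, :-1, :]).reshape(-1, b)
    Sn = d.T @ d / (2.0 * d.shape[0])
    # Whiten by the noise covariance, then diagonalize the data covariance:
    # this solves the generalized eigenproblem max_a (a'Sa) / (a'Sn a).
    Ln = np.linalg.cholesky(Sn)
    Wn = np.linalg.inv(Ln)
    evals, V = np.linalg.eigh(Wn @ S @ Wn.T)
    order = np.argsort(evals)[::-1][:n_components]
    return flat @ (Wn.T @ V[:, order])
```

A better noise covariance estimate, e.g. one computed within homogeneous segments rather than across a fixed window, directly changes `Sn` and hence the extracted components, which is the sensitivity the paper exploits.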
Hyperspectral super-resolution of locally low rank images from complementary multisource data
Remote sensing hyperspectral images (HSI) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images (MSI) in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSI are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution via local dictionary learning using endmember induction algorithms (HSR-LDL-EIA). We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
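The patch-wise idea can be sketched as follows: within each patch, a local spectral basis is estimated from the low-resolution HSI, and the high-resolution MSI pixels are regressed onto that basis through the spectral response. This is a minimal SVD-based sketch, assuming aligned, integer-factor downsampling; it is not the HSR-LDL-EIA method itself, which induces the local dictionary with endmember induction algorithms.

```python
import numpy as np

def local_fusion(hsi_lr, msi_hr, srf, patch=8, rank=3):
    """Patch-wise hyperspectral/multispectral fusion sketch.

    hsi_lr : (L, h, w)  low spatial resolution hyperspectral cube
    msi_hr : (M, H, W)  high spatial resolution multispectral cube
    srf    : (M, L)     spectral response mapping L HSI bands to M MSI bands
    The local rank must not exceed M for the per-patch regression to be
    well-posed -- exactly the locally-low-rank assumption described above.
    """
    L, h, w = hsi_lr.shape
    M, H, W = msi_hr.shape
    s = H // h  # spatial downsampling factor (assumed integer)
    out = np.zeros((L, H, W))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            # low-resolution pixels covering this high-resolution patch
            lr = hsi_lr[:, i // s:(i + patch) // s,
                           j // s:(j + patch) // s].reshape(L, -1)
            # local spectral subspace (rank << L) from an SVD
            U = np.linalg.svd(lr, full_matrices=False)[0][:, :rank]
            # regress MSI pixels onto the projected basis:
            # min_a || (srf @ U) a - m ||  for each high-res pixel m
            A = srf @ U                                   # (M, rank)
            m = msi_hr[:, i:i + patch, j:j + patch].reshape(M, -1)
            coeffs = np.linalg.lstsq(A, m, rcond=None)[0]
            out[:, i:i + patch, j:j + patch] = (U @ coeffs).reshape(L, patch, patch)
    return out
```

With globally high-dimensional but locally low-rank data, each per-patch system `srf @ U` stays overdetermined, which is why the partitioning removes the ill-posedness.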
Source detection in hyperspectral optical interferometry
By interfering light coming from several telescopes, optical interferometry provides measurements at very high angular resolution (on the order of a milliarcsecond). Each measurement estimates, at one spatial frequency, the value of the Fourier transform of the spatial intensity distribution emitted by the observed object in each spectral channel. The problem addressed here is the detection, precise localisation, and unbiased spectrum extraction of each star of a cluster observed interferometrically. This is a key challenge for the study of stars in the vicinity of the central black hole of our galaxy, the scientific goal of the future GRAVITY instrument of the VLTI. Following our previous work, we present here a reconstruction method based on the alternating direction method of multipliers (ADMM). This makes it possible to use the interferometric and photometric data simultaneously. The introduction of auxiliary variables splits the reconstruction problem into sub-problems that are easier to handle. Tests on simulations show that the proposed method detects all the stars of a cluster and estimates their spectra with negligible bias.
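As a generic illustration of how auxiliary variables split a reconstruction problem in ADMM, here is a sketch for a hypothetical LASSO-type problem. The linear operator, penalty, and parameters are assumptions for illustration only, not the paper's interferometric measurement model.

```python
import numpy as np

def admm_lasso(H, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM sketch for min_x 0.5||Hx - y||^2 + lam*||z||_1  s.t.  x = z.

    The auxiliary variable z decouples the smooth data-fit term from the
    non-smooth penalty: each sub-problem becomes easy to solve in closed form.
    """
    n = H.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # pre-factorised normal equations for the quadratic x-update
    Q = np.linalg.inv(H.T @ H + rho * np.eye(n))
    Hty = H.T @ y
    for _ in range(n_iter):
        x = Q @ (Hty + rho * (z - u))                 # data-fit sub-problem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                 # dual (scaled) update
    return z
```

The same splitting pattern carries over when the data-fit term mixes several data sets (e.g. interferometric and photometric): each term gets its own tractable sub-problem, coupled only through the consensus constraint.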
Expanding the Algorithmic Information Theory Frame for Applications to Earth Observation
Recent years have witnessed increased interest in compression-based methods and their applications to remote sensing, as these take a data-driven, parameter-free approach and can thus be successfully employed in several applications, especially in image information mining. This paper expands the algorithmic information theory frame on which these methods are based. On the one hand, algorithms originally defined in the pattern matching domain are reformulated, allowing a better understanding of the available compression-based tools for remote sensing applications. On the other hand, the use of existing compression algorithms is proposed to store satellite images with added semantic value.
A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative
Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Creativity evaluation has not kept pace with system development, with an evident lack of systematic evaluation of the creativity of these systems in the literature. This is partially due to difficulties in defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving tests based on these statements, and performing those tests. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. Using this approach, the SPECS methodology is demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
From Word to Sense Embeddings: A Survey on Vector Representations of Meaning
Over the past years, distributed semantic representations have proved to be
effective and flexible keepers of prior knowledge to be integrated into
downstream applications. This survey focuses on the representation of meaning.
We start from the theoretical background behind word vector space models and
highlight one of their major limitations: the meaning conflation deficiency,
which arises from representing a word with all its possible meanings as a
single vector. Then, we explain how this deficiency can be addressed through a
transition from the word level to the more fine-grained level of word senses
(in its broader acceptation) as a method for modelling unambiguous lexical
meaning. We present a comprehensive overview of the wide range of techniques in
the two main branches of sense representation, i.e., unsupervised and
knowledge-based. Finally, this survey covers the main evaluation procedures and
applications for this type of representation, and provides an analysis of four
of its important aspects: interpretability, sense granularity, adaptability to
different domains and compositionality.
Comment: 46 pages, 8 figures. Published in Journal of Artificial Intelligence Research.
Exploiting spatial sparsity for multi-wavelength imaging in optical interferometry
Optical interferometers provide multiple wavelength measurements. In order to
fully exploit the spectral and spatial resolution of these instruments, new
algorithms for image reconstruction have to be developed. Early attempts to
deal with multi-chromatic interferometric data have consisted in recovering a
gray image of the object or independent monochromatic images in some spectral
bandwidths. The main challenge is now to recover the full 3-D (spatio-spectral)
brightness distribution of the astronomical target given all the available
data. We describe a new approach to implement multi-wavelength image
reconstruction in the case where the observed scene is a collection of
point-like sources. We show the gain in image quality (both spatially and
spectrally) achieved by globally taking into account all the data instead of
dealing with independent spectral slices. This is achieved thanks to a
regularization which favors spatial sparsity and spectral grouping of the
sources. Since the objective function is not differentiable, we had to develop
a specialized optimization algorithm which also accounts for non-negativity of
the brightness distribution.
Comment: This version has been accepted for publication in J. Opt. Soc. Am.
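The regularization described above — spatial sparsity with spectral grouping under a non-negativity constraint — can be sketched with a proximal-gradient iteration. The generic matrix `H` below stands in for the interferometric measurement model, and the penalty is a standard non-negative group (l2,1) norm; this is a simplified illustration, not the authors' specialized algorithm.

```python
import numpy as np

def prox_group(X, tau):
    """Prox of tau * sum_p ||X[p, :]||_2 under X >= 0: one group per spatial
    position p covering all spectral channels, so a point source is kept or
    discarded jointly across wavelengths (spectral grouping)."""
    X = np.maximum(X, 0.0)                       # non-negativity projection
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def sparse_multiwavelength(H, Y, tau=0.05, n_iter=300):
    """Proximal gradient for min_X 0.5*||H X - Y||_F^2 + tau * group norm.

    H : (m, n) linear forward model (a generic stand-in for the operator
        mapping brightness to interferometric data)
    Y : (m, c) data, one column per spectral channel
    Returns X : (n, c) non-negative, row-sparse brightness distribution.
    """
    X = np.zeros((H.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(H, 2) ** 2       # 1 / Lipschitz constant
    for _ in range(n_iter):
        X = prox_group(X - step * (H.T @ (H @ X - Y)), step * tau)
    return X
```

Because the groups run across spectral channels, all wavelengths are treated jointly, which is the source of the spatial/spectral quality gain over reconstructing independent spectral slices.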
Review of Remote Sensing Technologies for the Acquisition of Very High Vertical Accuracy Elevation Data (DEM) in the Framework of the Precise Remediation of Industrial Disasters – Part 2
Based on the information gathered in the technologies review performed in the previous article, the authors analyse whether the different technologies can efficiently support the excavation work to be performed for the remediation of industrial disasters. At first sight, some technologies reach the requested accuracy. But after considering the error propagation when the technologies are applied under fieldwork conditions, it turns out that none of the reviewed remote sensing techniques offers sufficient accuracy to reach the 2.5 cm relative vertical accuracy target that was set. The final conclusion is that direct real-time measurement in the field, together with the development of an appropriate apparatus for real-time control of the blade, may be the appropriate solution to reach the targeted accuracy. This approach should be examined and developed in future research work.