Renormalization group flows of Hamiltonians using tensor networks
A renormalization group flow of Hamiltonians for two-dimensional classical
partition functions is constructed using tensor networks. Similar to tensor
network renormalization ([G. Evenbly and G. Vidal, Phys. Rev. Lett. 115, 180405
(2015)], [S. Yang, Z.-C. Gu, and X.-G. Wen, Phys. Rev. Lett. 118, 110504
(2017)]), we obtain approximate fixed-point tensor networks at criticality. Our
formalism, however, preserves positivity of the tensors at every step and hence
yields an interpretation in terms of Hamiltonian flows. We emphasize that the
key difference between tensor network approaches and Kadanoff's spin blocking
method can be understood in terms of a change of local basis at every
decimation step, a property which is crucial to overcome the area law of mutual
information. We derive algebraic relations for fixed-point tensors, calculate
critical exponents, and benchmark our method on the Ising model and the
six-vertex model.
Comment: accepted version for Phys. Rev. Lett., main text: 5 pages, 3 figures; appendices: 9 pages, 1 figure
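The starting point of such methods is the standard tensor-network representation of a 2D classical partition function. As a minimal sketch (illustrating only the representation, not the positivity-preserving flow of this paper), the snippet below builds the Ising partition-function tensor by splitting each bond's Boltzmann weight into two half-bond factors, then checks an exact contraction of a 2x2 periodic lattice against a brute-force sum over spins; the value of beta is an arbitrary choice.

```python
import numpy as np

beta = 0.4  # inverse temperature (arbitrary choice for the demo)

# Bond Boltzmann weight W[s, s'] for spins -1, +1 (indexed 0, 1).
W = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])

# Symmetric square root Q of W, so that Q @ Q = W: each bond weight is
# split into two half-bond factors attached to the sites it connects.
w, U = np.linalg.eigh(W)
Q = U @ np.diag(np.sqrt(w)) @ U.T

# Local partition-function tensor T[u, l, d, r]: sum the physical spin s
# at a site over its four half-bond factors.
T = np.einsum('su,sl,sd,sr->uldr', Q, Q, Q, Q)

# Contract a 2x2 periodic lattice: 4 site tensors, 8 bonds (each pair of
# neighbors is connected twice by periodicity). Every index below appears
# in exactly two tensors, encoding the lattice bond structure.
Z_tn = np.einsum('abcd,edfb,cgah,fheg->', T, T, T, T)

# Brute-force check: Z = sum over all 2^4 spin configurations.
Z_brute = 0.0
for bits in range(16):
    s = [(1 if (bits >> k) & 1 else -1) for k in range(4)]
    grid = [[s[0], s[1]], [s[2], s[3]]]
    E = 0.0
    for r in range(2):
        for c in range(2):
            E += grid[r][c] * grid[r][(c + 1) % 2]   # right bond
            E += grid[r][c] * grid[(r + 1) % 2][c]   # down bond
    Z_brute += np.exp(beta * E)

print(np.isclose(Z_tn, Z_brute))  # True
```

Coarse-graining schemes such as those cited above then repeatedly merge and truncate these site tensors; the cited papers and this work differ in how that truncation is performed.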
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
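Under the simplest (linear) mixing model discussed in such overviews, a pixel spectrum is a nonnegative, sum-to-one combination of endmember spectra. The sketch below, with a hypothetical random endmember matrix, recovers abundances by fully constrained least squares using the classic augmentation trick: a scaled row of ones enforces the sum-to-one constraint softly, while nonnegative least squares enforces nonnegativity.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical endmember matrix M: 50 spectral bands, 3 endmembers.
n_bands, n_end = 50, 3
M = rng.uniform(0.0, 1.0, size=(n_bands, n_end))

# True abundances: nonnegative and summing to one.
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + 0.001 * rng.standard_normal(n_bands)  # mixed pixel + noise

# Fully constrained least squares via augmentation: append a row of ones
# (scaled by delta) to softly enforce sum-to-one, then solve with NNLS.
delta = 10.0
M_aug = np.vstack([M, delta * np.ones(n_end)])
y_aug = np.append(y, delta)
a_est, _ = nnls(M_aug, y_aug)

print(np.round(a_est, 3))  # close to [0.6, 0.3, 0.1]
```

Real unmixing must additionally estimate the endmembers themselves (and often their number), which is where the geometrical, statistical, and sparse-regression approaches surveyed here come in.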
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection, that is,
automatically selecting a simple model from a large collection. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by scientific communities such as
neuroscience, bioinformatics, and computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
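Sparse coding, as described above, represents a signal as a linear combination of a few dictionary atoms. A minimal pure-NumPy sketch of one classical decoder, orthogonal matching pursuit, is given below over a hypothetical random dictionary; note the monograph's focus is on learned dictionaries, which this toy example does not cover.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dictionary D: 20-dimensional signals, 50 unit-norm atoms.
n_dim, n_atoms = 20, 50
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)

# A 3-sparse code and the signal it generates.
x_true = np.zeros(n_atoms)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms by correlation
    with the residual, refitting coefficients by least squares each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

x_est = omp(D, y, 3)
print(sorted(np.nonzero(x_est)[0].tolist()))
```

Because the coefficients are refit by least squares, the residual is orthogonal to all selected atoms, so the same atom is never picked twice; for well-separated atoms the true support is typically recovered exactly.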