Data Decomposition and Spatial Mixture Modeling for Part Based Model
Abstract. This paper presents a system of data decomposition and spatial mixture modeling for part based models. Recently, many enhanced part based models (with, e.g., multiple features, more components or parts) have been proposed. Nevertheless, these enhanced models bring high computational cost together with a risk of over-fitting. To tackle this problem, we propose a data decomposition method for part based models which not only accelerates the training and testing processes but also improves performance on average. Besides, the original part based model uses a strict rigid structural model to describe the distribution of each part location. It is not "deformable" enough, especially for instances with different viewpoints or poses within the same aspect ratio. To address this problem, we present a novel spatial mixture modeling method. The spatial-mixture-embedded model is then integrated into the proposed data decomposition framework. We evaluate our system on the challenging PASCAL VOC2007 and PASCAL VOC2010 datasets, demonstrating state-of-the-art performance compared with other related methods in terms of accuracy and efficiency.
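The contrast between a rigid structural model and spatial mixture modeling of part locations can be illustrated with a toy scoring function. The sketch below (an assumption for illustration, not the paper's implementation) compares a single quadratic deformation cost, as in classic part based models, against the log-density of a Gaussian mixture over part displacements, which can favor several distinct part placements:

```python
import numpy as np

def rigid_score(dx, dy, wx=0.1, wy=0.1):
    """Single quadratic deformation cost: one preferred anchor at (0, 0)."""
    return -(wx * dx**2 + wy * dy**2)

def gaussian_log_density(dx, dy, mean, cov):
    """Log-density of a 2-D Gaussian evaluated at displacement (dx, dy)."""
    d = np.array([dx, dy], dtype=float) - np.asarray(mean, dtype=float)
    cov = np.asarray(cov, dtype=float)
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d) - 0.5 * logdet - np.log(2 * np.pi)

def mixture_score(dx, dy, means, covs, weights):
    """Spatial-mixture deformation score: log of a weighted Gaussian mixture,
    allowing multiple preferred part locations (e.g., different poses)."""
    logs = [np.log(w) + gaussian_log_density(dx, dy, m, c)
            for w, m, c in zip(weights, means, covs)]
    return float(np.logaddexp.reduce(logs))
```

With two mixture components centered at displacements (-3, 0) and (3, 0), the mixture score prefers either of those placements, while the rigid score always prefers zero displacement; this is the extra flexibility a mixture buys for instances with different viewpoints in the same aspect ratio.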
Sparse component separation for accurate CMB map estimation
The Cosmic Microwave Background (CMB) is of premier importance for cosmologists to study the birth of our universe. Unfortunately, most CMB experiments such as COBE, WMAP or Planck do not provide a direct measure of the cosmological signal; the CMB is mixed up with galactic foregrounds and point sources. For the sake of scientific exploitation, measuring the CMB requires extracting several different astrophysical components (CMB, Sunyaev-Zel'dovich clusters, galactic dust) from multi-wavelength observations. Mathematically speaking, the problem of disentangling the CMB map from the galactic foregrounds amounts to a component or source separation problem. In the field of CMB studies, a very large range of source separation methods have been applied, which all differ from each other in the way they model the data and the criteria they rely on to separate components. Two main difficulties are i) the instrument's beam varies across frequencies and ii) the emission laws of most astrophysical components vary across pixels. This paper aims at introducing a very accurate modeling of CMB data, based on sparsity, accounting for beam variability across frequencies as well as spatial variations of the components' spectral characteristics. Based on this new sparse modeling of the data, a sparsity-based component separation method coined Local-Generalized Morphological Component Analysis (L-GMCA) is described. Extensive numerical experiments have been carried out with simulated Planck data. These experiments show the high efficiency of the proposed component separation method to estimate a clean CMB map with very low foreground contamination, which makes L-GMCA of prime interest for CMB studies.
Comment: submitted to A&
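The core GMCA idea, alternating between a sparsity-promoting estimate of the sources and a least-squares update of the mixing matrix, can be sketched on a toy linear mixture. This is a minimal illustration of the general GMCA scheme, not the L-GMCA pipeline of the paper (which additionally handles beams and spatially varying spectra); here the sources are assumed sparse directly in the pixel domain rather than in a wavelet domain:

```python
import numpy as np

def soft_threshold(x, lam):
    """Entry-wise soft-thresholding: the proximal operator of lam*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def gmca(X, n_sources, n_iter=50, lam=0.1, seed=0):
    """Toy GMCA-style blind separation of X (channels x pixels) = A @ S,
    with sources S assumed sparse in the pixel domain."""
    rng = np.random.default_rng(seed)
    m, _ = X.shape
    A = rng.standard_normal((m, n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        # Sparse source estimate given the current mixing matrix
        S = soft_threshold(np.linalg.pinv(A) @ X, lam)
        # Least-squares mixing-matrix update, then column renormalization
        A = X @ np.linalg.pinv(S)
        norms = np.linalg.norm(A, axis=0)
        norms[norms == 0] = 1.0
        A /= norms
    return A, S
```

Applied to data synthesized as a random mixture of sparse sources, the alternation recovers a factorization whose product closely reproduces the observations; the sparsity prior is what resolves the scale/rotation ambiguity inherent to blind separation.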
Hyperspectral Image Restoration via Total Variation Regularized Low-rank Tensor Decomposition
Hyperspectral images (HSIs) are often corrupted by a mixture of several types of noise during the acquisition process, e.g., Gaussian noise, impulse noise, dead lines, stripes, and many others. Such complex noise can degrade the quality of the acquired HSIs, limiting the precision of the subsequent processing. In this paper, we present a novel tensor-based HSI restoration approach that fully identifies the intrinsic structures of the clean HSI part and the mixed noise part, respectively. Specifically, for the clean HSI part, we use tensor Tucker decomposition to describe the global correlation among all bands, and an anisotropic spatial-spectral total variation (SSTV) regularization to characterize the piecewise smooth structure in both the spatial and spectral domains. For the mixed noise part, we adopt a sparsity-inducing norm regularization to detect the sparse noise, including stripes, impulse noise, and dead pixels. Although TV regularization has the ability to remove Gaussian noise, a Frobenius-norm term is further used to model heavy Gaussian noise in some real-world scenarios. Then, we develop an efficient algorithm for solving the resulting optimization problem using the augmented Lagrange multiplier (ALM) method. Finally, extensive experiments on simulated and real-world noisy HSIs are carried out to demonstrate the superiority of the proposed method over the existing state-of-the-art ones.
Comment: 15 pages, 20 figures
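The ALM machinery used to separate a sparse-noise term from a Gaussian-noise term admits simple closed-form updates. The sketch below is a stripped-down illustration of that splitting only; it is an assumption for exposition and omits the Tucker and SSTV terms of the full model. It decomposes observations Y into a sparse part S (updated by soft-thresholding, the prox of the l1 norm) and a small dense part N (updated by the prox of the squared Frobenius norm), with a dual ascent step enforcing Y = S + N:

```python
import numpy as np

def prox_l1(x, tau):
    """Prox of tau*||.||_1 (soft-thresholding): the sparse-noise update."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_frobenius_sq(x, tau):
    """Prox of (tau/2)*||.||_F^2: closed-form Gaussian-noise update."""
    return x / (1.0 + tau)

def alm_denoise(Y, mu=1.0, lam=0.1, beta=1.0, n_iter=100):
    """Toy ALM for: min lam*||S||_1 + (beta/2)*||N||_F^2  s.t.  Y = S + N."""
    S = np.zeros_like(Y)
    N = np.zeros_like(Y)
    Lmb = np.zeros_like(Y)  # Lagrange multiplier for the constraint
    for _ in range(n_iter):
        # S-update: soft-threshold the constraint residual
        S = prox_l1(Y - N + Lmb / mu, lam / mu)
        # N-update: scaled shrinkage of the remaining residual
        N = prox_frobenius_sq(Y - S + Lmb / mu, beta / mu)
        # Dual ascent on Y = S + N
        Lmb = Lmb + mu * (Y - S - N)
    return S, N
```

At convergence each entry y of Y splits as S = soft_threshold(y, lam/beta) and N = y - S, so large outliers (stripes, impulse noise) land in S while small perturbations land in the Gaussian term N, which is exactly the division of labor the abstract describes.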