Accelerated graph-based spectral polynomial filters
Graph-based spectral denoising is low-pass filtering that uses the
eigendecomposition of the graph Laplacian matrix of a noisy signal. Polynomial
filtering avoids the costly computation of this eigendecomposition by projecting
onto suitable Krylov subspaces. Polynomial filters can be based, e.g., on the
bilateral and guided filters. We propose constructing accelerated polynomial
filters by running flexible Krylov subspace based linear and eigenvalue solvers
such as the Block Locally Optimal Preconditioned Conjugate Gradient (LOBPCG)
method.

Comment: 6 pages, 6 figures. Accepted to the 2015 IEEE International Workshop
on Machine Learning for Signal Processing.
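To make the contrast concrete, here is a minimal sketch of the plain low-pass approach that polynomial filters accelerate, using SciPy's `lobpcg` to approximate a few low-frequency Laplacian eigenvectors without a full eigendecomposition. The path graph and synthetic signal are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

# Illustrative setup (assumed): path graph on n nodes, smooth signal plus noise.
n = 64
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * np.arange(n) / n)
noisy = clean + 0.3 * rng.standard_normal(n)

# LOBPCG iterates toward the k smallest Laplacian eigenpairs (the low
# graph frequencies) using only matrix-vector products.
k = 8
X0 = rng.standard_normal((n, k))        # random initial block
_, V = lobpcg(L, X0, largest=False, maxiter=200, tol=1e-8)

# Low-pass denoising: project the noisy signal onto the low-frequency subspace.
denoised = V @ (V.T @ noisy)

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

Polynomial filtering achieves a similar effect without ever forming `V` explicitly; the paper's contribution is accelerating that route with flexible Krylov solvers such as LOBPCG itself.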
Chebyshev and Conjugate Gradient Filters for Graph Image Denoising
In 3D image/video acquisition, different views are often captured with
varying noise levels across the views. In this paper, we propose a graph-based
image enhancement technique that uses a higher quality view to enhance a
degraded view. A depth map is utilized as auxiliary information to match the
perspectives of the two views. Our method performs graph-based filtering of the
noisy image by directly computing a projection of the image to be filtered onto
a lower dimensional Krylov subspace of the graph Laplacian. We discuss two
graph spectral denoising methods: first using Chebyshev polynomials, and second
using iterations of the conjugate gradient algorithm. Our framework generalizes
previously known polynomial graph filters, and we demonstrate through numerical
simulations that our proposed technique produces subjectively cleaner images
with about 1-3 dB improvement in PSNR over existing polynomial graph filters.

Comment: 6 pages, 6 figures, accepted to 2014 IEEE International Conference on
Multimedia and Expo Workshops (ICMEW).
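As one hedged illustration of the Chebyshev route (a generic degree-12 Chebyshev polynomial filter on a cycle graph, not the paper's image-adapted graph or exact coefficients), the filter can be applied with only matrix-vector products via the three-term recurrence:

```python
import numpy as np
from numpy.polynomial import chebyshev

# Illustrative graph (assumed): cycle on n nodes; Laplacian spectrum lies in [0, 4].
n = 32
idx = np.arange(n)
A = np.zeros((n, n))
A[idx, (idx + 1) % n] = 1
A[(idx + 1) % n, idx] = 1
L = np.diag(A.sum(axis=1)) - A
lmax = 4.0

def cheb_filter(L, x, coeffs, lmax):
    """Apply sum_k c_k T_k(Lt) to x, where Lt = (2/lmax) L - I maps the
    spectrum into [-1, 1]; no eigendecomposition is needed."""
    Lt = lambda v: (2.0 / lmax) * (L @ v) - v
    t_prev, t_cur = x, Lt(x)
    y = coeffs[0] * t_prev + coeffs[1] * t_cur
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * Lt(t_cur) - t_prev   # Chebyshev recurrence
        y = y + c * t_cur
    return y

# Target low-pass spectral response and its degree-12 Chebyshev fit on [-1, 1].
h = lambda lam: np.exp(-lam)
coeffs = chebyshev.chebinterpolate(lambda t: h(lmax * (t + 1) / 2), 12)

rng = np.random.default_rng(1)
x = rng.standard_normal(n)
y = cheb_filter(L, x, coeffs, lmax)

# Reference: exact spectral filtering through the full eigendecomposition.
w, U = np.linalg.eigh(L)
y_exact = U @ (h(w) * (U.T @ x))
rel_err = np.linalg.norm(y - y_exact) / np.linalg.norm(y_exact)
```

The conjugate gradient variant discussed in the abstract replaces the fixed polynomial with one adapted to the signal by the CG iteration, at the same per-step cost of one matrix-vector product.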
Convolutional Neural Networks Via Node-Varying Graph Filters
Convolutional neural networks (CNNs) are being applied to an increasing
number of problems and fields due to their superior performance in
classification and regression tasks. Since two of the key operations that CNNs
implement are convolution and pooling, this type of network is implicitly
designed to act on data described by regular structures such as images.
Motivated by the recent interest in processing signals defined in irregular
domains, we advocate a CNN architecture that operates on signals supported on
graphs. The proposed design replaces the classical convolution not with a
node-invariant graph filter (GF), which is the natural generalization of
convolution to graph domains, but with a node-varying GF. This filter extracts
different local features without increasing the output dimension of each layer
and, as a result, bypasses the need for a pooling stage while involving only
local operations. A second contribution is to replace the node-varying GF with
a hybrid node-varying GF, which is a new type of GF introduced in this paper.
While the alternative architecture can still be run locally without requiring a
pooling stage, the number of trainable parameters is smaller and can be
rendered independent of the data dimension. Tests are run on a synthetic source
localization problem and on the 20NEWS dataset.

Comment: Submitted to DSW 2018 (IEEE Data Science Workshop).
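The node-invariant vs. node-varying distinction can be sketched numerically. The toy shift operator and random taps below are assumptions purely for illustration; in the paper the taps are trainable CNN parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 10, 3

# Toy graph shift operator S (assumed): a symmetric adjacency matrix.
upper = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
S = upper + upper.T
x = rng.standard_normal(n)

def filter_node_invariant(S, x, h):
    """Classical graph filter y = sum_k h[k] * S^k x: one scalar tap per hop,
    the natural graph analogue of convolution."""
    y, z = np.zeros_like(x), x.copy()
    for hk in h:
        y = y + hk * z
        z = S @ z                    # one more local exchange with neighbors
    return y

def filter_node_varying(S, x, H):
    """Node-varying graph filter y = sum_k diag(H[:, k]) S^k x: each node
    applies its own tap to the k-hop term, yet the operation stays local."""
    y, z = np.zeros_like(x), x.copy()
    for k in range(H.shape[1]):
        y = y + H[:, k] * z
        z = S @ z
    return y

h = rng.standard_normal(K)           # shared taps (node-invariant)
H = rng.standard_normal((n, K))      # per-node taps (node-varying)
y_inv = filter_node_invariant(S, x, h)
y_nv = filter_node_varying(S, x, H)
```

When every row of `H` equals `h`, the two filters coincide, which is the sense in which the node-varying filter generalizes the node-invariant one while keeping the output dimension fixed.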
Segmentation-Aware Convolutional Networks Using Local Attention Masks
We introduce an approach to integrate segmentation information within a
convolutional neural network (CNN). This counteracts the tendency of CNNs to
smooth information across regions and increases their spatial precision. To
obtain segmentation information, we set up a CNN to provide an embedding space
where region co-membership can be estimated based on Euclidean distance. We use
these embeddings to compute a local attention mask relative to every neuron
position. We incorporate such masks in CNNs and replace the convolution
operation with a "segmentation-aware" variant that allows a neuron to
selectively attend to inputs coming from its own region. We call the resulting
network a segmentation-aware CNN because it adapts its filters at each image
point according to local segmentation cues. We demonstrate the merit of our
method on two widely different dense prediction tasks that involve
classification (semantic segmentation) and regression (optical flow). Our
results show that in semantic segmentation we can match the performance of
DenseCRFs while being faster and simpler, and in optical flow we obtain clearly
sharper responses than networks that do not use local attention masks. In both
cases, segmentation-aware convolution yields systematic improvements over
strong baselines. Source code for this work is available online at
http://cs.cmu.edu/~aharley/segaware
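A much-simplified 1-D analogue of the masking idea (hypothetical scalar embeddings and an exponential-distance mask; the paper's CNN formulation is more elaborate) shows how attending only within one's own region preserves edges that plain smoothing blurs:

```python
import numpy as np

def segmentation_aware_smooth(signal, emb, radius=2, lam=1.0):
    """Smooth `signal` with local weights derived from embedding distances:
    positions whose embeddings are close (same region) contribute more, so
    averaging does not bleed across region boundaries."""
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        # Attention mask: exp(-lam * |e_i - e_j|) over the local window.
        w = np.exp(-lam * np.abs(emb[lo:hi] - emb[i]))
        out[i] = (w * signal[lo:hi]).sum() / w.sum()
    return out

# Two regions separated by a sharp step; embeddings encode region membership
# (assumed given here; the paper learns them with a CNN).
rng = np.random.default_rng(0)
signal = np.r_[np.zeros(8), np.ones(8)] + 0.01 * rng.standard_normal(16)
emb = np.r_[np.zeros(8), 10.0 * np.ones(8)]   # far apart across the boundary

plain = np.convolve(signal, np.ones(5) / 5, mode="same")   # ordinary smoothing
aware = segmentation_aware_smooth(signal, emb, radius=2, lam=1.0)
```

The plain moving average smears the step over several positions, while the mask-weighted average keeps it sharp, which is the edge-preserving behavior the abstract reports for semantic segmentation and optical flow.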