CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces
In this paper, we formalize the idea behind capsule nets of using a capsule
vector rather than a neuron activation to predict the label of samples. To this
end, we propose to learn a group of capsule subspaces onto which an input
feature vector is projected. Then the lengths of resultant capsules are used to
score the probability of belonging to different classes. We train such a
Capsule Projection Network (CapProNet) by learning an orthogonal projection
matrix for each capsule subspace, and show that each capsule subspace is
updated until it contains input feature vectors corresponding to the associated
class. We will also show that the capsule projection can be viewed as
normalizing the multiple columns of the weight matrix simultaneously to form an
orthogonal basis, which makes it more effective in incorporating novel
components of input features to update capsule representations. In other words,
the capsule projection can be viewed as a multi-dimensional weight
normalization in capsule subspaces, where the conventional weight normalization
is simply a special case of the capsule projection onto 1D lines. Only a small
negligible computing overhead is incurred to train the network in
low-dimensional capsule subspaces or through an alternative hyper-power
iteration to estimate the normalization matrix. Experiment results on image
datasets show that the presented model greatly improves the performance of
state-of-the-art ResNet and DenseNet backbones at the same level of computing
and memory expenses. The
CapProNet establishes the competitive state-of-the-art performance for the
family of capsule nets by significantly reducing test errors on the benchmark
datasets.
Comment: Liheng Zhang, Marzieh Edraki, Guo-Jun Qi. CapProNet: Deep Feature
Learning via Orthogonal Projections onto Capsule Subspaces, in Proceedings of
the Thirty-second Conference on Neural Information Processing Systems (NIPS
2018), Palais des Congrès de Montréal, Montréal, Canada, December 3-8, 2018
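The scoring rule described above can be sketched directly: each class gets a weight matrix whose columns span a capsule subspace, the feature vector is orthogonally projected onto each subspace, and the projection length scores the class. The sketch below is a minimal NumPy illustration of that rule only; the matrix names and sizes are assumptions, and it computes the projection via an explicit inverse rather than the low-dimensional or hyper-power-iteration schemes the abstract mentions.

```python
import numpy as np

def capsule_scores(x, subspace_bases):
    """Score each class by the length of the orthogonal projection of the
    feature vector x onto that class's capsule subspace.

    x              : (d,) input feature vector
    subspace_bases : list of (d, c) weight matrices W_l, one per class;
                     their columns need not be orthonormal.
    """
    scores = []
    for W in subspace_bases:
        # Orthogonal projection matrix P_l = W (W^T W)^{-1} W^T
        P = W @ np.linalg.inv(W.T @ W) @ W.T
        v = P @ x                         # capsule: projection of x onto the subspace
        scores.append(np.linalg.norm(v))  # capsule length scores the class
    return np.array(scores)

rng = np.random.default_rng(0)
d, c, n_classes = 16, 4, 3                # illustrative dimensions
bases = [rng.standard_normal((d, c)) for _ in range(n_classes)]
x = rng.standard_normal(d)
s = capsule_scores(x, bases)
pred = int(np.argmax(s))                  # predicted class = longest capsule
```

Because each score is the norm of an orthogonal projection, it can never exceed the norm of the input feature vector, which is what makes the lengths comparable across classes.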
Generalized Forward-Backward Splitting
This paper introduces the generalized forward-backward splitting algorithm
for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$
has a Lipschitz-continuous gradient and the $G_i$'s are simple in the sense
that their Moreau proximity operators are easy to compute. While the
forward-backward algorithm cannot deal with more than one non-smooth
function, our method generalizes it to the case of arbitrary $n$. Our method
makes explicit use of the regularity of $F$ in the forward step, and the
proximity operators of the $G_i$'s are applied in parallel in the backward
step. This allows the generalized forward-backward to efficiently address an
important class of convex problems. We prove its convergence in infinite
dimension, and its robustness to errors on the computation of the proximity
operators and of the gradient of $F$. Examples on inverse problems in imaging
demonstrate the advantage of the proposed methods in comparison to other
splitting algorithms.
Comment: 24 pages, 4 figures
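The iteration just described, a forward gradient step on the smooth term followed by parallel proximal steps on the non-smooth terms, can be sketched as follows. This is a minimal NumPy sketch under assumed choices (equal weights, no relaxation, a toy problem); the function names and step size are illustrative, not the paper's code.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (coordinate-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gfb(grad_F, proxes, x0, gamma=1.0, n_iter=1000):
    """Generalized forward-backward splitting (sketch).

    Minimizes F + sum_i G_i, where grad_F is the (Lipschitz) gradient of F
    and proxes[i](v, t) evaluates prox_{t * G_i}(v).  Uses equal weights
    w_i = 1/n and relaxation 1.
    """
    n = len(proxes)
    w = 1.0 / n
    z = [x0.copy() for _ in range(n)]
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_F(x)
        for i in range(n):
            # backward (proximal) steps run in parallel across the G_i's
            z[i] = z[i] + proxes[i](2 * x - z[i] - gamma * g, gamma / w) - x
        x = w * sum(z)   # average the auxiliary variables
    return x

# Toy problem: min_x 0.5*||x - b||^2 + ||x||_1 + indicator(x >= 0),
# i.e. two non-smooth terms that only have simple proximity operators.
b = np.array([2.0, 0.5, -1.0])
x = gfb(
    grad_F=lambda x: x - b,                            # F = 0.5*||x - b||^2, L = 1
    proxes=[lambda v, t: soft_threshold(v, t),         # prox of t*||.||_1
            lambda v, t: np.maximum(v, 0.0)],          # projection onto x >= 0
    x0=np.zeros(3),
)
# the minimizer of this separable toy problem is max(b - 1, 0) componentwise
```

The point of the construction is visible in the toy problem: neither the ℓ1 term nor the nonnegativity constraint is smooth, so plain forward-backward splitting cannot handle both at once, while here each contributes only its own (cheap) proximity operator.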
A dissipative time reversal technique for photo-acoustic tomography in a cavity
We consider the inverse source problem arising in thermo- and photo-acoustic
tomography. It consists in reconstructing the initial pressure from the
boundary measurements of the acoustic wave. Our goal is to extend versatile
time reversal techniques to the case of perfectly reflecting boundary of the
domain. Standard time reversal works only if the solution of the direct problem
decays in time, which does not happen in the setup we consider. We thus propose
a novel time reversal technique with a non-standard boundary condition. The
error induced by this time reversal technique satisfies the wave equation with
a dissipative boundary condition and, therefore, decays in time. For larger
measurement times, this method yields a close approximation; for smaller times,
the first approximation can be iteratively refined, resulting in a convergent
Neumann series for the approximation.
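The iterative refinement at the end of the abstract is a general pattern: if the reconstruction operator is only an approximate inverse of the forward map, with error operator of norm less than one, re-applying it to the data residual sums a convergent Neumann series. The sketch below illustrates this pattern in finite dimensions only; the matrix stands in for the wave-propagation-plus-measurement map and a scaled adjoint for the time-reversal step, which are illustrative stand-ins, not the paper's operators.

```python
import numpy as np

# Neumann-series refinement (toy, finite-dimensional illustration):
# if A is an approximate inverse of the forward operator M and the error
# operator K = I - A M has spectral radius < 1, then the iteration
#     f_{k+1} = f_k + A (g - M f_k)
# sums the Neumann series f = sum_k K^k (A g) and converges to the source.

M = np.array([[2.0, 0.5],
              [0.5, 1.5]])          # forward operator (toy stand-in)
A = 0.2 * M.T                       # crude approximate inverse: scaled adjoint
g = np.array([1.0, 3.0])            # measured data

f = A @ g                           # first approximation
for _ in range(300):
    f = f + A @ (g - M @ f)         # iterative refinement of the residual

f_true = np.linalg.solve(M, g)      # reference solution for comparison
```

The analogue in the paper is that the dissipative time reversal plays the role of A: its error decays in time, which is exactly what makes the series converge.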
Gaussian Belief Propagation Based Multiuser Detection
In this work, we present a novel construction for solving the linear
multiuser detection problem using the Gaussian Belief Propagation algorithm.
Our algorithm yields an efficient, iterative and distributed implementation of
the MMSE detector. We compare our algorithm's performance to a recent result
and show an improved memory consumption, reduced computation steps and a
reduction in the number of sent messages. We prove that recent work by
Montanari et al. is an instance of our general algorithm, providing new
convergence results for both algorithms.
Comment: 6 pages, 1 figure, appeared in the 2008 IEEE International Symposium
on Information Theory, Toronto, July 2008
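The core machinery, Gaussian belief propagation used as an iterative, distributed linear solver, can be sketched as below. This is a generic synchronous GaBP solver for a symmetric system A x = b, written from the standard message-update equations; the function name and schedule are assumptions, not the paper's implementation. In the MMSE-detection setting, A would be the regularized correlation matrix R + sigma^2 I and b the matched-filter output.

```python
import numpy as np

def gabp_solve(A, b, n_iter=50):
    """Gaussian belief propagation for solving A x = b (sketch).

    A must be symmetric; diagonal dominance is a standard sufficient
    condition for convergence.  Messages carry a precision P[i, j] and a
    mean mu[i, j] sent from node i to node j; the converged marginal means
    equal x = A^{-1} b.
    """
    n = len(b)
    P = np.zeros((n, n))    # message precisions, P[i, j]: i -> j
    mu = np.zeros((n, n))   # message means,      mu[i, j]: i -> j
    for _ in range(n_iter):
        P_new, mu_new = np.zeros_like(P), np.zeros_like(mu)
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0:
                    continue
                # aggregate all messages arriving at i except the one from j
                Pi = A[i, i] + P[:, i].sum() - P[j, i]
                mi = (b[i] + (P[:, i] * mu[:, i]).sum()
                      - P[j, i] * mu[j, i]) / Pi
                P_new[i, j] = -A[i, j] ** 2 / Pi
                mu_new[i, j] = Pi * mi / A[i, j]
        P, mu = P_new, mu_new
    # marginal means from all incoming messages
    Pi = np.diag(A) + P.sum(axis=0)
    return (b + (P * mu).sum(axis=0)) / Pi

# Small diagonally dominant example (graph is a chain, so GaBP is exact).
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = gabp_solve(A, b)
```

Each node only ever touches its own row of A and messages from its neighbors, which is what makes the distributed implementation and the reduced message counts discussed in the abstract possible.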
Wavelets and Fast Numerical Algorithms
Wavelet based algorithms in numerical analysis are similar to other transform
methods in that vectors and operators are expanded into a basis and the
computations take place in this new system of coordinates. However, due to the
recursive definition of wavelets, their controllable localization in both space
and wave number (time and frequency) domains, and the vanishing moments
property, wavelet based algorithms exhibit new and important properties.
For example, the multiresolution structure of the wavelet expansions brings
about an efficient organization of transformations on a given scale and of
interactions between different neighbouring scales. Moreover, wide classes of
operators which naively would require a full (dense) matrix for their numerical
description, have sparse representations in wavelet bases. For these operators
sparse representations lead to fast numerical algorithms, and thus address a
critical numerical issue.
We note that wavelet based algorithms provide a systematic generalization of
the Fast Multipole Method (FMM) and its descendents.
These topics will be the subject of the lecture. Starting from the notion of
multiresolution analysis, we will consider the so-called non-standard form
(which achieves decoupling among the scales) and the associated fast numerical
algorithms. Examples of non-standard forms of several basic operators (e.g.
derivatives) will be computed explicitly.
Comment: 32 pages, uuencoded tar-compressed LaTeX file. Uses epsf.sty (see
`macros')
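The sparsity claim above is easy to demonstrate. The sketch below uses the orthonormal Haar wavelet and the simpler standard form (row transform followed by column transform) rather than the non-standard form the lecture develops; the kernel, sizes, and threshold are illustrative assumptions. A dense Calderon-Zygmund-like matrix becomes sparse after transforming and discarding small coefficients, with a controlled reconstruction error.

```python
import numpy as np

def haar_1d(v):
    """Forward orthonormal Haar transform of a vector of length 2^k."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        a = v[:n].copy()
        v[:n // 2] = (a[::2] + a[1::2]) / np.sqrt(2)   # averages (coarse part)
        v[n // 2:n] = (a[::2] - a[1::2]) / np.sqrt(2)  # differences (details)
        n //= 2
    return v

def ihaar_1d(v):
    """Inverse of haar_1d."""
    v = v.astype(float).copy()
    n = 1
    while n < len(v):
        s, d = v[:n].copy(), v[n:2 * n].copy()
        v[0:2 * n:2] = (s + d) / np.sqrt(2)
        v[1:2 * n:2] = (s - d) / np.sqrt(2)
        n *= 2
    return v

def haar_2d(K):
    """Standard form: 1-D transform of every row, then every column."""
    W = np.apply_along_axis(haar_1d, 1, K)
    return np.apply_along_axis(haar_1d, 0, W)

def ihaar_2d(W):
    K = np.apply_along_axis(ihaar_1d, 0, W)
    return np.apply_along_axis(ihaar_1d, 1, K)

# A kernel that is dense as a matrix but smooth away from the diagonal.
N = 64
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
K = 1.0 / (1.0 + np.abs(i - j))

W = haar_2d(K)
thresh = 1e-3 * np.abs(W).max()
W_sparse = np.where(np.abs(W) > thresh, W, 0.0)   # drop small coefficients

kept = np.count_nonzero(W_sparse) / W.size        # fraction of entries kept
rel_err = np.linalg.norm(ihaar_2d(W_sparse) - K) / np.linalg.norm(K)
```

Because the transform is orthogonal, the error committed by zeroing coefficients below the threshold is bounded by the energy of the discarded coefficients, which is the mechanism behind the fast algorithms described in the lecture.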