Flexible Multi-layer Sparse Approximations of Matrices and Applications
The computational cost of many signal processing and machine learning
techniques is often dominated by the cost of applying certain linear operators
to high-dimensional vectors. This paper introduces an algorithm aimed at
reducing the complexity of applying linear operators in high dimension by
approximately factorizing the corresponding matrix into few sparse factors. The
approach relies on recent advances in non-convex optimization. It is first
explained and analyzed in detail, then demonstrated experimentally on various
problems, including dictionary learning for image denoising and the
approximation of large matrices arising in inverse problems.
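As an illustration of the structure such factorizations exploit (illustrative only, not the paper's algorithm), the sketch below verifies that the Walsh-Hadamard matrix splits exactly into log2(n) sparse factors, so applying it through the factors costs O(n log n) instead of O(n^2); the paper's goal is to recover such multi-layer sparse factorizations approximately for general matrices.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.sparse import identity, kron

# Illustrative only (not the paper's algorithm): the Walsh-Hadamard matrix
# factorizes *exactly* into log2(n) sparse factors, the kind of multi-layer
# sparse structure the paper recovers approximately for general matrices.
k = 4
n = 2 ** k
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

# Factor i = I_{2^i} kron H2 kron I_{2^{k-1-i}}; each holds only 2n nonzeros.
factors = [kron(identity(2 ** i), kron(H2, identity(2 ** (k - 1 - i)))).tocsr()
           for i in range(k)]

# The product of the sparse factors is the dense n-by-n Hadamard matrix.
H = np.linalg.multi_dot([f.toarray() for f in factors])
assert np.allclose(H, hadamard(n))

# Applying the factors one at a time costs O(n log n) instead of O(n^2).
x = np.random.randn(n)
y = x.copy()
for f in reversed(factors):
    y = f @ y
assert np.allclose(H @ x, y)

print(f"dense entries: {n * n}, "
      f"nonzeros across all factors: {sum(f.nnz for f in factors)}")
```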
Fast, Dense Feature SDM on an iPhone
In this paper, we present a method for running dense SDM (Supervised Descent
Method) at over 90 FPS on a mobile device. Our contributions are two-fold.
First, drawing inspiration from the FFT, we propose a Sparse Compositional
Regression (SCR) framework that enables a significant speed-up over classical
dense regressors. Second, we propose Binary Approximated SIFT (BASIFT)
features, a computationally efficient approximation to SIFT, a feature
commonly used with SDM. We demonstrate the performance of our algorithm on an
iPhone 7 and show that we achieve accuracy similar to SDM.
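As a rough illustration of what a binary approximation to a real-valued descriptor buys, the hedged sketch below uses a standard greedy scheme (w ≈ Σ βj·bj with bj ∈ {−1,+1}^d), under which dot products reduce to cheap sign arithmetic. The paper's actual BASIFT construction may differ, and `binary_approx` is a hypothetical helper, not code from the paper.

```python
import numpy as np

# Hedged sketch: one standard way to binarize a real descriptor so that
# dot products reduce to sign arithmetic / bit counting. The paper's exact
# BASIFT construction may differ; this only illustrates the general idea
# of a binary approximation to SIFT-like features.
def binary_approx(w, num_bits=3):
    """Greedy approximation w ~ sum_j beta_j * b_j with b_j in {-1,+1}^d."""
    residual = w.astype(float).copy()
    bases, scales = [], []
    for _ in range(num_bits):
        b = np.where(residual >= 0, 1.0, -1.0)   # binary direction
        beta = residual @ b / b.size             # least-squares scale for b
        bases.append(b)
        scales.append(beta)
        residual -= beta * b
    return np.array(scales), np.array(bases)

rng = np.random.default_rng(0)
sift = rng.random(128)                           # stand-in for a SIFT vector
scales, bases = binary_approx(sift, num_bits=3)
approx = scales @ bases
print("relative error:",
      np.linalg.norm(sift - approx) / np.linalg.norm(sift))
```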
Designing Gabor windows using convex optimization
Redundant Gabor frames admit an infinite number of dual frames, yet only the
canonical dual Gabor system, constructed from the minimal l2-norm dual window,
is widely used. This window function, however, may lack desirable properties,
e.g., good time-frequency concentration, small support, or smoothness. We
employ convex optimization methods to design dual windows that satisfy the
Wexler-Raz equations while optimizing various criteria. Numerical experiments
suggest that alternate dual windows with considerably improved properties can
be found.
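Since the duality conditions (equivalently, the Wexler-Raz relations) are linear in the dual window, the set of all dual windows is affine, and any convex criterion can be minimized over it. The sketch below is a minimal CVXPY illustration with toy parameters, not the paper's exact program: it enforces the full duality condition and minimizes a weighted norm that penalizes energy away from the window's center, a simple proxy for time concentration. It assumes the toy Gaussian system is a frame, so that dual windows exist.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch, not the paper's formulation: duality of a discrete Gabor
# system is a set of *linear* equations in the dual window h, so any convex
# criterion can be minimized over the affine set of all duals. Assumes the
# toy system below is a frame so that dual windows exist.
L, a, M = 24, 4, 6                       # signal length, time step, channels
N = L // a                               # number of time shifts
g = np.exp(-0.5 * ((np.arange(L) - L // 2) / 3.0) ** 2)  # toy Gaussian window
g /= np.linalg.norm(g)

def tf_shift(x, n, m):
    """Gabor atom generator: translate by n*a samples, then modulate."""
    return np.roll(x, n * a) * np.exp(2j * np.pi * m * np.arange(L) / M)

# Duality: sum_{m,n} (Phi_{mn} h)(Phi_{mn} g)^H = I, linear in h. Flattened
# row-major, each term acts on h as kron(Phi, conj(Phi g)).
A = np.zeros((L * L, L), dtype=complex)
for n in range(N):
    for m in range(M):
        Phi = np.array([tf_shift(e, n, m) for e in np.eye(L)]).T
        A += np.kron(Phi, np.conj(Phi @ g)[:, None])
b = np.eye(L, dtype=complex).reshape(-1)

# Convex design criterion: weighted l2 norm penalizing energy away from the
# window's center, a simple proxy for good time concentration.
h = cp.Variable(L, complex=True)
w = (np.arange(L) - L // 2) ** 2 + 1.0
prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(np.sqrt(w), h))),
                  [A @ h == b])
prob.solve()
print("status:", prob.status, "objective:", prob.value)
```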
- …