Extractor-Based Time-Space Lower Bounds for Learning
A matrix M: A × X → {-1, 1} corresponds to the following
learning problem: An unknown element x ∈ X is chosen uniformly at random. A
learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2), ..., where for every i, a_i ∈ A is chosen uniformly at random and
b_i = M(a_i, x).
Assume that k, ℓ, r are such that any submatrix of M of at least
2^{-k}·|A| rows and at least 2^{-ℓ}·|X| columns has a bias
of at most 2^{-r}. We show that any learning algorithm for the learning
problem corresponding to M requires either a memory of size at least
Ω(k·ℓ), or at least 2^{Ω(r)} samples. The
result holds even if the learner has an exponentially small success probability
(of 2^{-Ω(r)}).
In particular, this shows that for a large class of learning problems, any
learning algorithm requires either a memory of size at least Ω((log|X|)·(log|A|)) or an exponential number of samples, achieving a
tight Ω((log|X|)·(log|A|)) lower bound on the size
of the memory, rather than the bound of Ω(min{(log|X|)^2, (log|A|)^2}) obtained in previous works [R17, MM17b].
Moreover, our result implies all previous memory-samples lower bounds, as
well as a number of new applications.
Our proof builds on [R17], which gave a general technique for proving
memory-samples lower bounds.
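The canonical instance of this framework is parity learning, where A = X = {0,1}^n and M(a, x) = (-1)^{⟨a,x⟩ mod 2}: the bound then says a learner needs either roughly n² bits of memory or exponentially many samples. A minimal sketch of the sample stream and an unbounded-memory learner (the names and structure here are illustrative, not from the paper):

```python
import itertools
import random

def parity_stream(n, x, num_samples, rng):
    """Yield samples (a, b) with b = <a, x> mod 2 (parity learning).

    This instantiates the matrix learning problem with A = X = {0,1}^n
    and M(a, x) = (-1)^{<a, x> mod 2}.
    """
    for _ in range(num_samples):
        a = tuple(rng.randrange(2) for _ in range(n))
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        yield a, b

def brute_force_learner(n, samples):
    """Unbounded-memory learner: store all samples, return a consistent x.

    This uses 2^n time and remembers every sample; the lower bound says
    no algorithm with small memory AND few samples can succeed.
    """
    samples = list(samples)
    for cand in itertools.product((0, 1), repeat=n):
        if all(sum(ai * ci for ai, ci in zip(a, cand)) % 2 == b
               for a, b in samples):
            return cand

rng = random.Random(0)
n = 6
x = tuple(rng.randrange(2) for _ in range(n))
guess = brute_force_learner(n, parity_stream(n, x, 40, rng))
```

With 40 random samples over {0,1}^6, any wrong candidate survives all consistency checks with probability about 2^{-40}, so the learner recovers x with overwhelming probability.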
Gabor frames and deep scattering networks in audio processing
This paper introduces Gabor scattering, a feature extractor based on Gabor
frames and Mallat's scattering transform. Using a simple signal model for
audio signals, specific properties of Gabor scattering are studied. It is shown
that for each layer, specific invariances to certain signal characteristics
occur. Furthermore, deformation stability of the coefficient vector generated
by the feature extractor is derived by using a decoupling technique which
exploits the contractivity of general scattering networks. Deformations are
introduced as changes in spectral shape and frequency modulation. The
theoretical results are illustrated by numerical examples and experiments.
Evaluation on a synthetic and a "real" data set gives numerical evidence
that the invariances encoded by the Gabor scattering transform lead to higher
performance in comparison with just using the Gabor transform, especially when few
training samples are available.
Comment: 26 pages, 8 figures, 4 tables. Repository for reproducibility:
https://gitlab.com/hararticles/gs-gt . Keywords: machine learning; scattering
transform; Gabor transform; deep learning; time-frequency analysis; CNN.
Accepted and published after peer revision.
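As a rough illustration of the construction (a simplified sketch under arbitrary assumptions, not the paper's implementation: the function names, Gaussian window shape, hop sizes, and plain time-averaging are all illustrative choices), a two-layer scattering of a signal can be computed by cascading magnitude Gabor (STFT) transforms and averaging each layer's output over time, which yields coefficients that are approximately invariant to small time shifts:

```python
import numpy as np

def gabor_layer(sig, win_len=64, hop=16):
    """One layer of the sketch: magnitude of a Gaussian-windowed STFT,
    i.e. |Gabor transform| of the input, returned as (freq, time)."""
    t = np.arange(win_len)
    window = np.exp(-0.5 * ((t - win_len / 2) / (win_len / 6)) ** 2)
    frames = np.lib.stride_tricks.sliding_window_view(sig, win_len)[::hop]
    return np.abs(np.fft.rfft(frames * window, axis=1)).T

def gabor_scattering(sig):
    """Two-layer scattering sketch: each layer is time-averaged
    (a crude low-pass) to obtain shift-invariant coefficients."""
    s1 = gabor_layer(sig)                   # layer 1: |STFT| of the signal
    phi1 = s1.mean(axis=1)                  # averaged layer-1 coefficients
    s2 = [gabor_layer(ch, win_len=16, hop=4) for ch in s1]  # layer 2, per channel
    phi2 = np.concatenate([m.mean(axis=1) for m in s2])
    return np.concatenate([phi1, phi2])

# Shift-invariance check on a pure tone and a slightly shifted copy.
sig = np.sin(2 * np.pi * 0.05 * np.arange(1024))
shifted = np.sin(2 * np.pi * 0.05 * (np.arange(1024) + 3))
f1 = gabor_scattering(sig)
f2 = gabor_scattering(shifted)
```

The magnitude operation discards phase, so the averaged coefficients of the tone and its shifted copy nearly coincide, a toy version of the layer-wise invariances the paper derives.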
SizeNet: Weakly Supervised Learning of Visual Size and Fit in Fashion Images
Finding clothes that fit is a hot topic in the e-commerce fashion industry.
Most approaches addressing this problem are based on statistical methods
relying on historical data of articles purchased and returned to the store.
Such approaches suffer from the cold start problem for the thousands of
articles appearing on the shopping platforms every day, for which no prior
purchase history is available. We propose to employ visual data to infer size
and fit characteristics of fashion articles. We introduce SizeNet, a
weakly-supervised teacher-student training framework that leverages the power
of statistical models combined with the rich visual information from article
images to learn visual cues for size and fit characteristics, making it
capable of tackling the challenging cold start problem. Detailed experiments are performed
on thousands of textile garments, including dresses, trousers, knitwear, tops,
etc., from hundreds of different brands.
Comment: IEEE Conference on Computer Vision and Pattern Recognition Workshop
(CVPRW) 2019, Focus on Fashion and Subjective Search - Understanding
Subjective Attributes of Data (FFSS-USAD).
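The teacher-student idea can be sketched in a toy form (a hypothetical setup, not SizeNet's architecture: the features, labels, and weighting scheme below are illustrative stand-ins): a statistical "teacher" turns historical purchase/return counts into soft size-issue labels with a reliability weight, and a "student" model is fit on visual features against those labels, so it can score new articles that have no purchase history:

```python
import numpy as np

rng = np.random.default_rng(0)
n_articles, n_feats = 200, 16

# Stand-ins: image-derived features and a latent size-issue rate per article.
feats = rng.normal(size=(n_articles, n_feats))
true_w = rng.normal(size=n_feats)
p_issue = 1 / (1 + np.exp(-feats @ true_w))

# "Teacher": a statistical model over historical purchases and returns.
purchases = rng.integers(10, 100, size=n_articles)
returns = rng.binomial(purchases, p_issue)
teacher_label = returns / purchases        # soft label: observed return rate
weight = purchases / purchases.max()       # more purchases -> more reliable label

# "Student": logistic model on visual features, trained on the teacher's soft
# labels, with each article weighted by the teacher's reliability.
w = np.zeros(n_feats)
for _ in range(500):
    pred = 1 / (1 + np.exp(-feats @ w))
    grad = feats.T @ ((pred - teacher_label) * weight) / n_articles
    w -= 0.5 * grad

student_pred = 1 / (1 + np.exp(-feats @ w))
```

The weighting is what makes the supervision "weak": articles with little purchase history contribute less to the student's loss, yet the trained student can still score a brand-new, cold-start article from its features alone.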