Smoothed Analysis in Unsupervised Learning via Decoupling
Smoothed analysis is a powerful paradigm for overcoming worst-case
intractability in unsupervised learning and high-dimensional data analysis.
While polynomial time smoothed analysis guarantees have been obtained for
worst-case intractable problems like tensor decompositions and learning
mixtures of Gaussians, such guarantees have been hard to obtain for several
other important problems in unsupervised learning. A core technical challenge
in analyzing such algorithms is obtaining lower bounds on the least singular
value of random matrix ensembles with dependent entries that are given by
low-degree polynomials of a few underlying base random variables.
In this work, we address this challenge by obtaining high-confidence lower
bounds on the least singular value of new classes of structured random matrix
ensembles of the above kind. We then use these bounds to design algorithms with
polynomial time smoothed analysis guarantees for the following three important
problems in unsupervised learning:
1. Robust subspace recovery, when the fraction $\alpha$ of inliers in the
d-dimensional subspace $T \subset \mathbb{R}^n$ is at least $(d/n)^{\ell}$ for
any constant integer $\ell > 0$. This contrasts with the known worst-case
intractability when $\alpha < d/n$, and the previous smoothed analysis result,
which needed $\alpha > d/n$ (Hardt and Moitra, 2013).
2. Learning overcomplete hidden Markov models, where the size of the state
space is any polynomial in the dimension of the observations. This gives the
first polynomial time guarantees for learning overcomplete HMMs in a smoothed
analysis model.
3. Higher order tensor decompositions, where we generalize the so-called
FOOBI algorithm of Cardoso to find order-$\ell$ rank-one tensors in a subspace.
This allows us to obtain polynomially robust decomposition algorithms for
$2\ell$'th order tensors with rank up to $n^{\ell}$.
Comment: 44 pages
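As a rough numerical illustration of the quantity being bounded (a toy sketch, not the paper's ensembles or proof technique), the following numpy snippet builds a matrix whose columns are flattened tensor powers of base vectors, so every entry is a low-degree polynomial of a few underlying Gaussians. On a degenerate worst-case input the least singular value vanishes, while a small random perturbation lifts it; the dimensions and the noise level rho are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ell, rho = 10, 2, 0.1                 # ambient dimension, degree, noise level
base = rng.normal(size=(n, 5))
A = np.repeat(base, 8, axis=1)           # 40 columns but only 5 distinct
                                         # directions: a degenerate worst case

def tensor_power_cols(M, ell):
    """Map each column to its flattened ell-fold tensor power, so every
    entry is a degree-ell monomial of the base random variables."""
    out = M.copy()
    for _ in range(ell - 1):
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, M.shape[1])
    return out

smin = lambda M: np.linalg.svd(M, compute_uv=False)[-1]

A_smooth = A + rho * rng.normal(size=A.shape)     # smoothed instance
print("worst case:", smin(tensor_power_cols(A, ell)))         # numerically ~0
print("smoothed:  ", smin(tensor_power_cols(A_smooth, ell)))  # bounded away from 0
```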
Modeling neural dynamics during speech production using a state space variational autoencoder
Characterizing the neural encoding of behavior remains a challenging task in
many research areas due in part to complex and noisy spatiotemporal dynamics of
evoked brain activity. An important aspect of modeling these neural encodings
involves separation of robust, behaviorally relevant signals from background
activity, which often contains signals from irrelevant brain processes and
decaying information from previous behavioral events. To achieve this
separation, we develop a two-branch State Space Variational AutoEncoder (SSVAE)
model to individually describe the instantaneous evoked foreground signals and
the context-dependent background signals. We model the spontaneous
speech-evoked brain dynamics using smoothed Gaussian mixture models. By
applying the proposed SSVAE model to track ECoG dynamics in one participant
over multiple hours, we find that the model can predict speech-related dynamics
more accurately than other latent factor inference algorithms. Our results
demonstrate that separately modeling the instantaneous speech-evoked and slow
context-dependent brain dynamics can enhance tracking performance, which has
important implications for the development of advanced neural encoding and
decoding models in various neuroscience sub-disciplines.
Comment: 5 pages
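A minimal two-branch sketch of this separation idea in PyTorch: a per-timestep encoder for the instantaneous foreground and a recurrent state-space encoder for the slow background, trained with a standard VAE objective. This is a hedged reconstruction of the general idea only; the layer sizes, the GRU background branch, and the unweighted loss terms are assumptions, not the authors' SSVAE.

```python
import torch
import torch.nn as nn

class TwoBranchVAE(nn.Module):
    # Illustrative two-branch model (assumed architecture, not the paper's).
    def __init__(self, obs_dim=64, fg_dim=8, bg_dim=8, hidden=128):
        super().__init__()
        # Foreground branch: encodes each time step independently (instantaneous).
        self.fg_enc = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 2 * fg_dim))
        # Background branch: recurrent encoder for slow, context-dependent state.
        self.bg_rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.bg_head = nn.Linear(hidden, 2 * bg_dim)
        # Decoder reconstructs the observation from both latents.
        self.dec = nn.Sequential(nn.Linear(fg_dim + bg_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, obs_dim))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, x):                       # x: (batch, time, obs_dim)
        z_fg, mu_f, lv_f = self.reparam(self.fg_enc(x))
        h, _ = self.bg_rnn(x)
        z_bg, mu_b, lv_b = self.reparam(self.bg_head(h))
        recon = self.dec(torch.cat([z_fg, z_bg], dim=-1))
        # ELBO-style loss: reconstruction error plus a KL term per branch.
        kl = lambda mu, lv: -0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1).mean()
        loss = (recon - x).pow(2).sum(-1).mean() + kl(mu_f, lv_f) + kl(mu_b, lv_b)
        return recon, loss

x = torch.randn(4, 100, 64)                     # placeholder ECoG-like segment
recon, loss = TwoBranchVAE()(x)
loss.backward()
```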
Learning Generative Models across Incomparable Spaces
Generative Adversarial Networks have shown remarkable success in learning a
distribution that faithfully recovers a reference distribution in its entirety.
However, in some cases, we may want to only learn some aspects (e.g., cluster
or manifold structure), while modifying others (e.g., style, orientation or
dimension). In this work, we propose an approach to learn generative models
across such incomparable spaces, and demonstrate how to steer the learned
distribution towards target properties. A key component of our model is the
Gromov-Wasserstein distance, a notion of discrepancy that compares
distributions relationally rather than absolutely. While this framework
subsumes current generative models in identically reproducing distributions,
its inherent flexibility allows application to tasks in manifold learning,
relational learning and cross-domain learning.
Comment: International Conference on Machine Learning (ICML)
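The relational comparison at the heart of this approach can be illustrated by computing the Gromov-Wasserstein discrepancy between two point clouds of different dimensions, here via the POT (Python Optimal Transport) library; the clouds and their sizes are arbitrary placeholders, and this sketch shows only the distance, not the paper's generative model.

```python
import numpy as np
import ot  # Python Optimal Transport: pip install pot

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))   # samples in R^2
Y = rng.normal(size=(40, 5))   # samples in R^5: no shared metric with X

# GW compares intra-space distance matrices, so no cross-space metric is needed.
C1 = ot.dist(X, X)
C2 = ot.dist(Y, Y)
p = ot.unif(len(X))
q = ot.unif(len(Y))

T, log = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss', log=True)
print("GW coupling shape:", T.shape)        # (30, 40) soft correspondence
print("GW discrepancy:", log['gw_dist'])
```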
…