Structured Sequence Modeling with Graph Convolutional Recurrent Networks
This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep
learning model able to predict structured sequences of data. Precisely, GCRN is
a generalization of classical recurrent neural networks (RNN) to data
structured by an arbitrary graph. Such structured sequences can represent
series of frames in videos, spatio-temporal measurements on a network of
sensors, or random walks on a vocabulary graph for natural language modeling.
The proposed model combines convolutional neural networks (CNN) on graphs to
identify spatial structures and RNN to find dynamic patterns. We study two
possible architectures of GCRN, and apply the models to two practical problems:
predicting moving MNIST data, and modeling natural language with the Penn
Treebank dataset. Experiments show that simultaneously exploiting the graph
spatial structure and the dynamics of the data can improve both precision and
learning speed.
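To make the architecture concrete, below is a minimal, hedged sketch of the kind of model the abstract describes: a recurrent cell whose dense matrix products are replaced by graph convolutions. The paper combines Chebyshev-polynomial graph convolutions with an LSTM; this sketch swaps in a simpler first-order graph convolution inside a GRU-style cell for brevity, and all shapes, names, and the identity adjacency are illustrative, not the authors' implementation.

```python
# Sketch: GRU-style recurrent cell with graph convolutions replacing the
# dense layers (a simplified stand-in for the GCRN "Model 2" idea).
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """First-order graph convolution: H' = (A_hat @ H) W."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        # a_hat: (N, N) normalized adjacency; h: (N, in_dim) node features
        return self.weight(a_hat @ h)


class GCRNCell(nn.Module):
    """GRU update in which every dense map is a graph convolution."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gates = GraphConv(in_dim + hid_dim, 2 * hid_dim)  # update/reset
        self.cand = GraphConv(in_dim + hid_dim, hid_dim)       # candidate state

    def forward(self, a_hat, x, h):
        zr = torch.sigmoid(self.gates(a_hat, torch.cat([x, h], dim=-1)))
        z, r = zr.chunk(2, dim=-1)
        h_tilde = torch.tanh(self.cand(a_hat, torch.cat([x, r * h], dim=-1)))
        return (1 - z) * h + z * h_tilde


# Usage: 10 graph nodes, 4 input features, 8 hidden units, 5 time steps.
N, F, H, T = 10, 4, 8, 5
a_hat = torch.eye(N)  # stand-in for a normalized graph adjacency matrix
cell = GCRNCell(F, H)
h = torch.zeros(N, H)
for t in range(T):
    h = cell(a_hat, torch.randn(N, F), h)
print(h.shape)  # torch.Size([10, 8])
```

In a real application, a_hat would be the (renormalized) adjacency of the sensor network or vocabulary graph, and the cell would be unrolled over the observed sequence exactly as a conventional GRU.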
The Power of Localization for Efficiently Learning Linear Separators with Noise
We introduce a new approach for designing computationally efficient learning
algorithms that are tolerant to noise, and demonstrate its effectiveness by
designing algorithms with improved noise tolerance guarantees for learning
linear separators.
We consider both the malicious noise model and the adversarial label noise
model. For malicious noise, where the adversary can corrupt both the label and
the features, we provide a polynomial-time algorithm for learning linear
separators in $\mathbb{R}^d$ under isotropic log-concave distributions that can
tolerate a nearly information-theoretically optimal noise rate of
$\eta = \Omega(\epsilon)$, where $\epsilon$ is the target error. For the
adversarial label noise model, where the distribution over the feature vectors
is unchanged and the overall probability of a noisy label is constrained to be
at most $\eta$, we also give a polynomial-time algorithm for learning linear
separators in $\mathbb{R}^d$ under isotropic log-concave distributions that can
handle a noise rate of $\eta = \Omega(\epsilon)$.
We show that, in the active learning model, our algorithms achieve a label
complexity whose dependence on the error parameter $\epsilon$ is
polylogarithmic. This provides the first polynomial-time active learning
algorithm for learning linear separators in the presence of malicious noise or
adversarial label noise.
Comment: Contains improved label complexity analysis communicated to us by
Steve Hanneke.
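The core idea, localization, is to query labels only inside a shrinking band around the current hypothesis's decision boundary. Below is a hedged, simplified numpy illustration of that loop on noiseless synthetic data; it is not the paper's algorithm (which minimizes hinge loss over the band and handles noise), and the least-squares refit, band schedule, and all constants are illustrative assumptions.

```python
# Sketch: margin-based localization for active learning of a halfspace.
# Query labels only near the current boundary, refit, halve the band.
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 5, 20000, 500
w_star = np.zeros(d); w_star[0] = 1.0           # hidden target separator
X = rng.normal(size=(n, d))                     # isotropic Gaussian (log-concave)

def query(x):
    return np.sign(x @ w_star)                  # label oracle (noiseless here)

w = rng.normal(size=d); w /= np.linalg.norm(w)  # initial hypothesis
band, labels_used = 1.0, 0
for _ in range(6):
    # Localize: request at most m labels, all within the margin band.
    idx = np.flatnonzero(np.abs(X @ w) <= band)[:m]
    if idx.size == 0:
        break
    Xb, yb = X[idx], query(X[idx])
    labels_used += idx.size
    # Refit on the banded sample; least squares is a crude surrogate for the
    # hinge-loss minimization analyzed in the paper.
    w = np.linalg.lstsq(Xb, yb, rcond=None)[0]
    w /= np.linalg.norm(w)
    band /= 2                                   # shrink the localization band

err = np.mean(np.sign(X @ w) != query(X))
print(f"final error ~ {err:.4f} using {labels_used} label queries")
```

Because each round spends a fixed label budget and the band halves every round, the total number of label queries grows only with the number of rounds, i.e. logarithmically in 1/band, which is the intuition behind the polylogarithmic label complexity claimed above.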
A novel prestack sparse azimuthal AVO inversion
In this paper we demonstrate a new algorithm for sparse prestack azimuthal
AVO inversion. A novel Euclidean prior model is developed that simultaneously
respects sparseness in the layered earth and smoothness in the model of
reflectivity.
Recognizing that methods of artificial intelligence and Bayesian computation
are finding an ever-increasing role in augmenting the process of
interpretation and analysis of geophysical data, we derive a generalized
matrix-variate model of reflectivity in terms of orthogonal basis functions,
subject to sparse constraints. This supports a direct application of machine
learning methods, in a way that can be mapped back onto the physical principles
known to govern reflection seismology. As a demonstration we present an
application of these methods to the Marcellus shale. Attributes extracted using
the azimuthal inversion are clustered using an unsupervised learning algorithm.
Interpretation of the clusters is performed in the context of the Ruger model
of azimuthal AVO.
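Since the abstract outlines a two-stage workflow (a sparse inversion over orthogonal basis functions, then unsupervised clustering of the extracted attributes), here is a hedged, schematic sketch of that pipeline on synthetic data. The Lasso penalty stands in for the paper's Euclidean prior, KMeans for its unsupervised learner, and all data, attribute names, and parameters are illustrative assumptions rather than the authors' method.

```python
# Sketch: (1) sparse recovery of a reflectivity series in an orthogonal basis,
# (2) unsupervised clustering of per-location attribute vectors.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_t = 200                                             # time samples per trace
basis = np.linalg.qr(rng.normal(size=(n_t, n_t)))[0]  # orthogonal basis

coef_true = np.zeros(n_t)                             # sparse layered-earth model
spikes = rng.choice(n_t, 5, replace=False)
coef_true[spikes] = rng.choice([-1.0, 1.0], 5) * rng.uniform(0.5, 1.5, 5)
trace = basis @ coef_true + 0.01 * rng.normal(size=n_t)   # noisy observed trace

# Stage 1: sparse inversion of the trace onto the basis; the L1 penalty is a
# stand-in for the sparse constraint in the paper's prior model.
coef_hat = Lasso(alpha=0.001).fit(basis, trace).coef_
print("spikes recovered:", np.sum(np.abs(coef_hat) > 1e-3), "of", len(spikes))

# Stage 2: cluster per-location attribute vectors (synthetic stand-ins for
# azimuthal AVO attributes such as Ruger-style anisotropic gradients).
attrs = rng.normal(size=(500, 3))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(attrs)
print("cluster sizes:", np.bincount(labels))
```

In the workflow the abstract describes, the clustered vectors would be the attributes extracted by the azimuthal inversion itself, and the resulting clusters would then be interpreted against the Ruger model.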