Dependency Parsing with Dilated Iterated Graph CNNs
Dependency parses are an effective way to inject linguistic knowledge into many downstream tasks, and many practitioners wish to efficiently parse sentences at scale. While recent advances in GPU hardware have enabled neural networks to achieve significant gains over the previous best models, these models still fail to leverage GPUs' capability for massive parallelism because they require sequential processing of the sentence. In response, we propose Dilated Iterated Graph Convolutional Neural Networks (DIG-CNNs) for graph-based dependency parsing, a graph convolutional architecture that allows for efficient end-to-end GPU parsing. In experiments on the English Penn TreeBank benchmark, we show that DIG-CNNs perform on par with some of the best neural network parsers.
Comment: 2nd Workshop on Structured Prediction for Natural Language Processing (at EMNLP '17)
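The abstract gives no implementation details, so the following is only a minimal sketch of the general approach it names: dilated 1-D convolutions encode all tokens in parallel, and a pairwise (graph-based) scorer rates every head-dependent arc. The layer sizes, the bilinear arc scorer, and all module names are illustrative assumptions, not the authors' exact DIG-CNN architecture.

```python
# Hypothetical sketch: dilated convolutions over the token sequence feeding an
# all-pairs arc scorer (graph-based parsing). All dimensions are illustrative.
import torch
import torch.nn as nn

class DilatedEncoder(nn.Module):
    def __init__(self, dim=128, dilations=(1, 2, 4, 8)):
        super().__init__()
        # Stacked dilated 1-D convolutions: every token sees an exponentially
        # growing context, with no sequential processing over the sentence.
        self.convs = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size=3, dilation=d, padding=d)
            for d in dilations
        ])

    def forward(self, x):               # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)           # -> (batch, dim, seq_len) for Conv1d
        for conv in self.convs:
            h = torch.relu(conv(h))
        return h.transpose(1, 2)        # back to (batch, seq_len, dim)

class ArcScorer(nn.Module):
    """Scores every (head, dependent) pair, as in graph-based parsing."""
    def __init__(self, dim=128):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)

    def forward(self, h):               # h: (batch, seq_len, dim)
        # score[b, i, j] = h[b, i] @ W @ h[b, j]  (head i, dependent j)
        return torch.einsum('bid,de,bje->bij', h, self.W, h)

if __name__ == "__main__":
    tokens = torch.randn(2, 10, 128)            # pretend embeddings, 10 tokens
    scores = ArcScorer()(DilatedEncoder()(tokens))
    print(scores.shape)                          # torch.Size([2, 10, 10])
```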
Top-k Route Search through Submodularity Modeling of Recurrent POI Features
We consider a practical top-k route search problem: given a collection of points of interest (POIs) with rated features and traveling costs between POIs, a user wants to find k routes from a source to a destination, within a cost budget, that maximally match her needs on feature preferences. One challenge is dealing with the personalized diversity requirement, where users have varying trade-offs between quantity (the number of POIs with a specified feature) and variety (the coverage of specified features). Another challenge is the large scale of the POI map and the vast number of alternative routes to search. We model the personalized diversity requirement by the whole class of submodular functions, and present an optimal solution to the top-k route search problem through indices for retrieving relevant POIs in both feature and route spaces, together with various strategies for pruning the search space using user preferences and constraints. We also present promising heuristic solutions and evaluate all the solutions on real-life data.
Comment: 11 pages, 7 figures, 2 tables
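As a rough illustration of how a submodular function can express the quantity/variety trade-off mentioned above, the sketch below scores a route by summing, over the user's preferred features, a concave (square-root) function of how many POIs on the route carry that feature: extra POIs with the same feature still help, but with diminishing returns, while covering a new preferred feature always helps. The scoring form, names, and data are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch of a submodular route score; sqrt-of-count and all data
# below are illustrative assumptions only.
import math

def route_score(route_pois, preferences, poi_features):
    """Score a route by summing, per preferred feature, a concave function of
    how many POIs on the route carry that feature.

    route_pois:   list of POI ids on the route
    preferences:  {feature: weight} supplied by the user
    poi_features: {poi_id: set of features}
    """
    score = 0.0
    for feature, weight in preferences.items():
        count = sum(1 for p in route_pois if feature in poi_features[p])
        # Concave in the count -> diminishing returns (submodularity):
        # the first POI with a feature helps more than the fifth.
        score += weight * math.sqrt(count)
    return score

# Tiny illustrative example (all data made up):
poi_features = {"p1": {"coffee"}, "p2": {"coffee", "art"}, "p3": {"art"}}
prefs = {"coffee": 1.0, "art": 0.5}
print(route_score(["p1", "p2"], prefs, poi_features))   # rewards quantity and variety
```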
Probabilistic learning on graphs via contextual architectures
We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian networks learns different distributions of a vertex's neighbourhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures than typical graph convolutional neural networks, with benefits in terms of context spreading between vertices. First, the model learns from graphs via maximum likelihood estimation without using target labels. Then, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large-scale data sets. By studying how depth affects the performance of our model, we find that a broader context generally improves performance. In turn, this leads to a critical analysis of some benchmarks used in the literature.
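The following is a loose, non-probabilistic sketch of the layer-wise construction described above: each layer assigns every vertex a discrete state derived from the histogram of its neighbours' states at the previous layer, so deeper layers see a wider context, and the per-layer states can feed a supervised readout. Using k-means in place of Bayesian-network training, and all names and sizes, are simplifying assumptions for illustration only.

```python
# Loose sketch of layer-wise, unsupervised construction of vertex states from
# neighbourhood information; not the paper's probabilistic model.
import numpy as np
from sklearn.cluster import KMeans

def layerwise_states(adj, n_states=2, n_layers=3, seed=0):
    """adj: (n, n) 0/1 adjacency matrix. Returns an (n, n_layers) matrix of
    discrete vertex states, one column per layer."""
    n = adj.shape[0]
    states = np.zeros(n, dtype=int)          # layer 0: a single uniform state
    per_layer = []
    for layer in range(n_layers):
        # Histogram of neighbour states from the previous layer, per vertex.
        hist = np.zeros((n, n_states))
        for v in range(n):
            for u in np.nonzero(adj[v])[0]:
                hist[v, states[u]] += 1
        # Stand-in for fitting a distribution over the neighbourhood:
        # cluster vertices by their neighbourhood histograms.
        states = KMeans(n_clusters=n_states, n_init=10,
                        random_state=seed + layer).fit_predict(hist)
        per_layer.append(states.copy())
    return np.stack(per_layer, axis=1)       # deeper layers see wider context

# Tiny example on a 6-vertex tree (made-up data):
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (4, 5)]
adj = np.zeros((6, 6), dtype=int)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1
print(layerwise_states(adj))
```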