Learning flexible representations of stochastic processes on graphs
Graph convolutional networks adapt the architecture of convolutional neural
networks to learn rich representations of data supported on arbitrary graphs,
replacing the convolution operations with graph-dependent linear operations.
However, these graph-dependent linear operations have been developed for
scalar functions supported on undirected graphs.
We propose a class of linear operations for stochastic (time-varying) processes
on directed (or undirected) graphs to be used in graph convolutional networks.
We propose a parameterization of such linear operations using functional
calculus to achieve arbitrarily low learning complexity. The proposed approach
is shown to model richer behaviors and display greater flexibility in learning
representations than product graph methods.
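A common way to build such graph-dependent linear operations is as a polynomial in a graph shift operator, which works unchanged for directed graphs. The sketch below is illustrative only; the shift operator, tap coefficients, and shapes are assumptions, not the parameterization proposed in the abstract.

```python
import numpy as np

def graph_filter(S, coeffs, X):
    """Apply H(S) = sum_k coeffs[k] * S^k to each time slice of X.

    S      : (n, n) graph shift operator (adjacency/Laplacian; may be directed).
    coeffs : filter taps h_0, ..., h_K.
    X      : (n, T) time-varying signal on the graph, one column per time step.
    """
    out = np.zeros_like(X, dtype=float)
    Sk = np.eye(S.shape[0])          # S^0
    for h in coeffs:
        out += h * (Sk @ X)          # add h_k * S^k x_t for every time step
        Sk = Sk @ S                  # next power of the shift operator
    return out

# Directed 3-cycle, two time steps
S = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
Y = graph_filter(S, [0.5, 0.5], X)   # 0.5*X + 0.5*S@X
```

Because the filter is a function of S alone, the same K+1 coefficients define the operation on any graph, which is the kind of low, graph-size-independent learning complexity the abstract refers to.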
Semi-blind sparse channel estimation with constant modulus symbols
We propose two methods for the estimation of sparse communication channels. In the first method, we consider the problem of channel estimation based on training symbols, and formulate it as an optimization problem. In this formulation, we combine the objective of fidelity to the received data with a non-quadratic constraint reflecting the prior information about the sparsity of the channel. This approach leads to accurate channel estimates with much shorter training sequences than conventional methods. The second method we propose is aimed at taking advantage of any available training-based data, as well as any "blind" data based on unknown, constant modulus symbols. We propose a semi-blind optimization framework making use of these two types of data, and enforcing the sparsity of the channel, as well as the constant modulus property of the symbols. This approach improves upon the channel estimates based only on training sequences, and also produces accurate estimates for the unknown symbols.
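A standard instance of "data fidelity plus a non-quadratic sparsity penalty" is l1-regularized least squares, which can be solved by iterative soft thresholding (ISTA). This is a generic sketch of that formulation, not the paper's algorithm; the matrix, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def ista_sparse_channel(A, y, lam=0.1, n_iter=200):
    """Estimate a sparse channel h from y ~= A h by solving
    min_h 0.5*||y - A h||^2 + lam*||h||_1 with iterative soft thresholding."""
    h = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        g = h - (A.T @ (A @ h - y)) / L      # gradient step on the fidelity term
        h = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return h

# Toy example: with an identity "training" matrix the estimate reduces to a
# soft-thresholded observation, so small entries are driven exactly to zero.
y = np.array([3.0, 0.05, 0.0, 2.0])
h_hat = ista_sparse_channel(np.eye(4), y, lam=0.1)
```

The soft-threshold step is what produces exactly-zero taps, which is why such penalties need far fewer training symbols than an unregularized least-squares fit.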
Testing the Structure of a Gaussian Graphical Model with Reduced Transmissions in a Distributed Setting
Testing a covariance matrix that follows a Gaussian graphical model (GGM) is
considered in this paper, based on observations made at a set of distributed
sensors grouped into clusters. Ordered transmissions are proposed that achieve
the same Bayes risk as the optimum centralized, energy-unconstrained approach,
but with fewer transmissions and in a completely distributed manner. In this
approach, we represent the Bayes optimum test statistic as a sum of local test
statistics which can be calculated by only utilizing the observations available
at one cluster. In each cluster, we select one sensor to be the cluster head
(CH), which collects and summarizes the observed data; intracluster
communications are assumed to be inexpensive. The CHs with more informative observations transmit
their data to the fusion center (FC) first. By halting before all transmissions
have taken place, transmissions can be saved without performance loss. It is
shown that this ordering approach guarantees a lower bound on the average
number of transmissions saved for any given GGM, and that the lower bound
approaches approximately half the number of clusters when the minimum eigenvalue
of the covariance matrix under the alternative hypothesis in each cluster
becomes sufficiently large.
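The halting idea can be sketched as follows. This is an illustration, not the paper's exact stopping rule: here each untransmitted statistic is bounded in magnitude by the last one received (valid because transmissions are ordered by decreasing magnitude), and the fusion center stops once the undecided clusters can no longer flip the comparison with the threshold.

```python
def ordered_transmission_test(local_stats, threshold):
    """Illustrative ordered-transmission test: CHs send local test statistics
    in decreasing order of magnitude; the fusion center halts early when the
    remaining (untransmitted) statistics cannot change the decision."""
    order = sorted(range(len(local_stats)), key=lambda i: -abs(local_stats[i]))
    partial, n_tx = 0.0, 0
    for i in order:
        partial += local_stats[i]
        n_tx += 1
        # Every remaining statistic is no larger in magnitude than this one.
        slack = (len(local_stats) - n_tx) * abs(local_stats[i])
        if partial - slack > threshold:
            return True, n_tx    # decide H1 early
        if partial + slack < threshold:
            return False, n_tx   # decide H0 early
    return partial > threshold, n_tx

# One dominant cluster resolves the test after only two transmissions.
decision, n_tx = ordered_transmission_test([10.0, 0.1, -0.2, 0.05], threshold=5.0)
```

The early-exit conditions never change the outcome relative to summing all statistics, which is how the scheme saves transmissions without any loss in Bayes risk.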
Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning
Given a text description, most existing semantic parsers synthesize a program
in one shot. However, it is quite challenging to produce a correct program
solely based on the description, which in reality is often ambiguous or
incomplete. In this paper, we investigate interactive semantic parsing, where
the agent can ask the user clarification questions to resolve ambiguities via a
multi-turn dialogue, on an important type of programs called "If-Then recipes."
We develop a hierarchical reinforcement learning (HRL) based agent that
significantly improves the parsing performance with minimal questions to the
user. Results under both simulation and human evaluation show that our agent
substantially outperforms non-interactive semantic parsers and rule-based
agents.
Comment: 13 pages, 2 figures, accepted by AAAI 201
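The interactive loop can be sketched in its simplest form: predict each recipe slot, and ask the user only when the prediction is uncertain. This ask-when-uncertain pattern is a hypothetical stand-in, not the paper's HRL agent; the slot names and the `parse_slot`/`ask_user` callables are assumptions for illustration.

```python
def interactive_parse(description, parse_slot, ask_user, conf_threshold=0.8):
    """Fill the four If-Then slots one at a time, querying the user only
    for slots whose top candidate falls below a confidence threshold."""
    program = {}
    for slot in ("trigger_channel", "trigger_function",
                 "action_channel", "action_function"):
        candidates = parse_slot(description, slot, program)  # [(value, conf), ...]
        value, conf = max(candidates, key=lambda c: c[1])
        if conf < conf_threshold:
            # Ambiguous slot: ask a clarification question instead of guessing.
            value = ask_user(slot, [v for v, _ in candidates])
        program[slot] = value
    return program

# Stub parser/user to exercise the loop: only one slot is ambiguous.
beliefs = {
    "trigger_channel":  [("Weather", 0.95)],
    "trigger_function": [("tomorrow_forecast", 0.6), ("current_condition", 0.4)],
    "action_channel":   [("SMS", 0.9)],
    "action_function":  [("send_message", 0.85)],
}
questions = []
def parse_slot(desc, slot, prog):
    return beliefs[slot]
def ask_user(slot, options):
    questions.append(slot)
    return options[0]     # the (simulated) user picks the first option
program = interactive_parse("text me the weather", parse_slot, ask_user)
```

Minimizing the number of such questions while keeping parsing accuracy high is exactly the trade-off the paper's HRL agent is trained to balance.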
Mining Entity Synonyms with Efficient Neural Set Generation
Mining entity synonym sets (i.e., sets of terms referring to the same entity)
is an important task for many entity-leveraging applications. Previous work
either ranks terms based on their similarity to a given query term or treats
the problem as a two-phase task (i.e., detecting synonymy pairs, followed by
organizing these pairs into synonym sets). However, these approaches fail to
model the holistic semantics of a set and suffer from the error propagation
issue. Here we propose a new framework, named SynSetMine, that efficiently
generates entity synonym sets from a given vocabulary, using example sets from
external knowledge bases as distant supervision. SynSetMine consists of two
novel modules: (1) a set-instance classifier that jointly learns how to
represent a permutation invariant synonym set and whether to include a new
instance (i.e., a term) into the set, and (2) a set generation algorithm that
enumerates the vocabulary only once and applies the learned set-instance
classifier to detect all entity synonym sets in it. Experiments on three real
datasets from different domains demonstrate both effectiveness and efficiency
of SynSetMine for mining entity synonym sets.
Comment: AAAI 2019 camera-ready version
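The single-pass set generation step described above can be sketched as follows. The greedy assignment rule and the toy scoring function are illustrative assumptions; in SynSetMine the score would come from the learned set-instance classifier.

```python
def generate_synonym_sets(vocab, set_instance_prob, threshold=0.5):
    """Scan the vocabulary once: add each term to the existing set the
    set-instance classifier scores highest (if above threshold),
    otherwise start a new singleton set."""
    sets = []
    for term in vocab:
        best, best_p = None, threshold
        for s in sets:
            p = set_instance_prob(s, term)   # P(term belongs in set s)
            if p > best_p:
                best, best_p = s, p
        if best is not None:
            best.append(term)
        else:
            sets.append([term])
    return sets

# Toy stand-in "classifier": high score iff the term shares its first letter
# with every current member of the set (a real model would be learned).
def toy_prob(s, term):
    return 0.9 if all(t[0] == term[0] for t in s) else 0.1

sets = generate_synonym_sets(["apple", "ant", "bee", "bear"], toy_prob)
```

Because each term is scored against whole candidate sets rather than individual terms, the procedure captures set-level semantics while still visiting the vocabulary only once.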
