Factor Graph Neural Networks
In recent years, we have witnessed a surge of Graph Neural Networks (GNNs),
most of which can learn powerful representations in an end-to-end fashion with
great success in many real-world applications. They bear a resemblance to
Probabilistic Graphical Models (PGMs) but break free from some of their
limitations. By aiming to provide expressive methods for representation learning
instead of computing marginals or most likely configurations, GNNs provide
flexibility in the choice of information-flow rules while maintaining good
performance. Despite these successes, however, GNNs lack efficient ways to
represent and learn higher-order relations among variables/nodes. More
expressive higher-order GNNs that operate on k-tuples of nodes require
increased computational resources to process higher-order tensors. We propose
Factor Graph Neural Networks (FGNNs) to effectively capture higher-order
relations for inference and learning. To do so, we first derive an efficient
approximate Sum-Product loopy belief propagation inference algorithm for
discrete higher-order PGMs. We then neuralize the novel message passing scheme
into a Factor Graph Neural Network (FGNN) module by allowing richer
representations of the message update rules; this facilitates both efficient
inference and powerful end-to-end learning. We further show that with a
suitable choice of message aggregation operators, our FGNN is also able to
represent Max-Product belief propagation, providing a single family of
architecture that can represent both Max and Sum-Product loopy belief
propagation. Our extensive experimental evaluation on synthetic as well as real
datasets demonstrates the potential of the proposed model.
Comment: Accepted by JML
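The message passing that FGNN neuralizes can be illustrated with the standard (non-neural) routine. Below is a minimal NumPy sketch of generic Sum-Product loopy belief propagation on a discrete factor graph; it is a textbook baseline, not the paper's efficient approximation for higher-order PGMs, and the function names and toy factors are illustrative only.

```python
import numpy as np

def sum_product_bp(factors, n_vars, card, n_iters=20):
    """factors: list of (var_ids, table); each table has one axis per variable."""
    # Initialize all factor->variable and variable->factor messages to uniform.
    msg_f2v = {(fi, v): np.ones(card) / card
               for fi, (vs, _) in enumerate(factors) for v in vs}
    msg_v2f = {(v, fi): np.ones(card) / card
               for fi, (vs, _) in enumerate(factors) for v in vs}
    for _ in range(n_iters):
        # Variable -> factor: product of incoming factor messages, excluding target.
        for (v, fi) in msg_v2f:
            m = np.ones(card)
            for (fj, u), msg in msg_f2v.items():
                if u == v and fj != fi:
                    m = m * msg
            msg_v2f[(v, fi)] = m / m.sum()
        # Factor -> variable: weight the table by incoming messages, sum out the rest.
        for (fi, v) in msg_f2v:
            vs, table = factors[fi]
            t = table.astype(float)
            for ax, u in enumerate(vs):
                if u != v:
                    shape = [1] * t.ndim
                    shape[ax] = card
                    t = t * msg_v2f[(u, fi)].reshape(shape)
            axes = tuple(ax for ax, u in enumerate(vs) if u != v)
            m = t.sum(axis=axes)
            msg_f2v[(fi, v)] = m / m.sum()
    # Beliefs: normalized product of all incoming factor messages per variable.
    beliefs = np.ones((n_vars, card))
    for (fi, v), msg in msg_f2v.items():
        beliefs[v] *= msg
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Toy higher-order example: one ternary factor plus a unary prior.
rng = np.random.default_rng(0)
factors = [((0, 1, 2), rng.random((2, 2, 2))), ((1,), np.array([0.9, 0.1]))]
print(sum_product_bp(factors, n_vars=3, card=2))
```

Replacing the sum in the factor-to-variable update with a max yields Max-Product, mirroring the aggregation-operator choice the abstract highlights.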
Community detection and stochastic block models: recent developments
The stochastic block model (SBM) is a random graph model with planted
clusters. It is widely employed as a canonical model to study clustering and
community detection, and it generally provides fertile ground for studying the
statistical and computational tradeoffs that arise in network and data
sciences.
This note surveys the recent developments that establish the fundamental
limits for community detection in the SBM, both with respect to
information-theoretic and computational thresholds, and for various recovery
requirements such as exact, partial, and weak recovery (a.k.a. detection). The
main results discussed are the phase transitions for exact recovery at the
Chernoff-Hellinger threshold, the phase transition for weak recovery at the
Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial
recovery, the learning of the SBM parameters and the gap between
information-theoretic and computational thresholds.
The note also covers some of the algorithms developed in the quest of
achieving the limits, in particular two-round algorithms via graph-splitting,
semi-definite programming, linearized belief propagation, and classical and
nonbacktracking spectral methods. A few open problems are also discussed.
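For concreteness, here is a small, hedged NumPy sketch of the object under study: sampling a two-community SBM in the sparse regime and checking the Kesten-Stigum condition for weak recovery. The parameters a and b are the usual intra- and inter-community coefficients; the values chosen are illustrative, not taken from the note.

```python
import numpy as np

def sample_sbm(n, a, b, rng):
    """Symmetric two-community SBM: intra-community edge prob a/n, inter b/n."""
    labels = rng.integers(0, 2, size=n)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, a / n, b / n)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample each pair once
    return upper | upper.T, labels

n, a, b = 2000, 6.0, 1.0
adj, labels = sample_sbm(n, a, b, np.random.default_rng(1))
print("edges:", int(adj.sum()) // 2)

# Kesten-Stigum condition for k = 2 communities: weak recovery is efficiently
# solvable when (a - b)^2 > 2 (a + b), i.e. when the SNR below exceeds 1.
snr = (a - b) ** 2 / (2 * (a + b))
print(f"SNR = {snr:.2f} ({'above' if snr > 1 else 'below'} the KS threshold)")
```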
Gunrock: GPU Graph Analytics
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs, have presented two
significant challenges to developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high-performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We characterize the performance of
various optimization strategies and evaluate Gunrock's overall performance on
different GPU architectures on a wide range of graph primitives that span from
traversal-based algorithms and ranking algorithms, to triangle counting and
bipartite-graph-based algorithms. The results show that on a single GPU,
Gunrock has on average at least an order of magnitude speedup over Boost and
PowerGraph, comparable performance to the fastest GPU hardwired primitives and
CPU shared-memory graph libraries such as Ligra and Galois, and better
performance than any other GPU high-level graph library.
Comment: 52 pages, invited paper to ACM Transactions on Parallel Computing (TOPC), an extended version of the PPoPP'16 paper "Gunrock: A High-Performance Graph Processing Library on the GPU"
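As an illustration only (Gunrock itself is a CUDA/C++ library, and this is not its API), the sketch below mimics the frontier-centric, bulk-synchronous model in plain Python: each BFS step is an "advance" that expands the current frontier along edges, followed by a "filter" that drops already-visited vertices.

```python
def bfs_frontier(adj, source):
    """adj: dict vertex -> list of neighbors. Returns BFS depth per vertex."""
    depth = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        # "Advance": gather all neighbors of the current frontier in bulk.
        expanded = [v for u in frontier for v in adj[u]]
        # "Filter": keep only vertices that have not been visited yet.
        frontier = []
        for v in expanded:
            if v not in depth:
                depth[v] = level
                frontier.append(v)
    return depth

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_frontier(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

On a GPU, both phases run as bulk data-parallel operations over the frontier arrays, which is the balance of performance and programmability the abstract describes.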
Query-Driven Sampling for Collective Entity Resolution
Probabilistic databases play a preeminent role in the processing and
management of uncertain data. Recently, many database research efforts have
integrated probabilistic models into databases to support tasks such as
information extraction and labeling. Many of these efforts are based on
batch-oriented inference, which inhibits real-time workflows. One important task is
entity resolution (ER). ER is the process of determining records (mentions) in
a database that correspond to the same real-world entity. Traditional pairwise
ER methods can lead to inconsistencies and low accuracy due to localized
decisions. Leading ER systems solve this problem by collectively resolving all
records using a probabilistic graphical model and Markov chain Monte Carlo
(MCMC) inference. However, for large datasets this is an extremely expensive
process. One key observation is that such an exhaustive ER process incurs a huge
up-front cost, which is wasteful in practice because most users are interested
in only a small subset of entities. In this paper, we advocate pay-as-you-go
entity resolution by developing a number of query-driven collective ER
techniques. We introduce two classes of SQL queries that involve ER operators:
selection-driven ER and join-driven ER. We implement novel variations of
the Metropolis-Hastings MCMC algorithm to generate biased samples and
selectivity-based scheduling algorithms to support the two classes of ER
queries. Finally, we show that query-driven ER algorithms can converge and
return results within minutes over a database populated with extractions
from a newswire dataset containing 71 million mentions.
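The flavor of query-driven sampling can be sketched as follows. This is a heavily simplified, hypothetical Metropolis-Hastings sampler over entity assignments whose proposals are biased toward query-relevant mentions; the toy token-overlap score stands in for the paper's probabilistic model, and all data and parameters are placeholders.

```python
import math
import random

def score(mentions, assign, i):
    """Toy compatibility: reward co-clustered mentions that share a name token."""
    s = 0.0
    for j, m in enumerate(mentions):
        if j != i and assign[j] == assign[i]:
            s += 1.0 if set(m.split()) & set(mentions[i].split()) else -1.0
    return s

def query_driven_mh(mentions, relevant, n_steps=5000, bias=0.9, seed=0):
    rng = random.Random(seed)
    assign = list(range(len(mentions)))   # start with singleton entities
    rel = [i for i, m in enumerate(mentions) if relevant(m)]
    for _ in range(n_steps):
        # Biased proposal: usually pick a query-relevant mention to move.
        if rel and rng.random() < bias:
            i = rng.choice(rel)
        else:
            i = rng.randrange(len(mentions))
        old = assign[i]
        delta = -score(mentions, assign, i)
        assign[i] = rng.choice(assign)    # propose moving i to an existing entity
        delta += score(mentions, assign, i)
        if rng.random() >= math.exp(min(0.0, delta)):  # Metropolis accept test
            assign[i] = old               # rejected: revert the move
    return assign

mentions = ["George Bush", "G. Bush", "Bush", "Al Gore", "Gore"]
print(query_driven_mh(mentions, relevant=lambda m: "Bush" in m))
```

The bias parameter concentrates the chain's budget on entities the query touches, which is the pay-as-you-go idea in miniature.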
Compressive Sensing and Recovery of Structured Sparse Signals
In recent years, numerous disciplines, including telecommunications, medical imaging, computational biology, and neuroscience, have benefited from the increasing availability of high-dimensional datasets. This calls for efficient ways of capturing and processing data. Compressive sensing (CS), introduced as an efficient sampling (data-capturing) method, addresses this need. It is well known that signals belonging to an ambient high-dimensional space often have much smaller dimensionality in an appropriate domain. CS taps into this principle and dramatically reduces the number of samples that must be captured to avoid any distortion in the information content of the data. This reduction in the required number of samples enables many new applications that were previously infeasible using classical sampling techniques.
Most CS-based approaches take advantage of the inherent low-dimensionality of many datasets. They try to determine a sparse representation of the data in an appropriately chosen basis using only a few significant elements, and they make no extra assumptions regarding possible relationships among the significant elements of that basis. In this dissertation, knowledge about such relationships is incorporated into both the data-sampling and data-processing schemes. We first consider the recovery of temporally correlated sparse signals and show that, using the time-correlation model, the recovery performance can be significantly improved. Next, we modify the sampling process of sparse signals to incorporate the signal structure more efficiently. In an image-processing application, we show that exploiting the structure information in both signal sampling and signal recovery improves the efficiency of the algorithm. In addition, we show that region-of-interest information can be included in the CS sampling and recovery steps to provide much better quality for the region of interest compared to the rest of the image or video.
In spectrum-sensing applications, CS can dramatically improve sensing efficiency by facilitating coordination among spectrum sensors. A cluster-based spectrum-sensing scheme with coordination among spectrum sensors is proposed for geographically dispersed cognitive radio networks. Further, CS is exploited in this problem for simultaneous sensing and localization. Access to this information dramatically facilitates the implementation of advanced communication technologies, as required by 5G communication networks.
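As background for the unstructured baseline that the dissertation builds on, the following sketch recovers a sparse signal from a small number of random Gaussian measurements via ISTA (iterative soft thresholding for the LASSO). The structured variants discussed above would modify the penalty or the sampling matrix; all dimensions and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 80, 8                            # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                                  # compressed samples

def ista(A, y, lam=0.02, n_iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x - (A.T @ (A @ x - y)) / L         # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```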
A Comparative Study of Modern Inference Techniques for Structured Discrete Energy Minimization Problems
Szeliski et al. published an influential study in 2006 on energy minimization methods for Markov Random Fields (MRFs). This study provided valuable insights into choosing the best optimization technique for certain classes of problems. While these insights remain generally useful today, the phenomenal success of random field models means that the kinds of inference problems that have to be solved have changed significantly. Specifically, the models today often include higher-order interactions, flexible connectivity structures, large label spaces of different cardinalities, or learned energy tables. To reflect these changes, we provide a modernized and enlarged study. We present an empirical comparison of more than 27 state-of-the-art optimization techniques on a corpus of 2,453 energy minimization instances from diverse applications in computer vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2 framework and report extensive results regarding runtime and solution quality. Key insights from our study agree with the results of Szeliski et al. for the types of models they studied. However, on new and challenging types of models our findings disagree and suggest that polyhedral methods and integer programming solvers are competitive in terms of runtime and solution quality over a large range of model types.
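To make the benchmarked objects concrete, here is a toy sketch of a pairwise MRF energy on a 4-connected grid, minimized with ICM (iterated conditional modes), one of the simplest baselines included in such comparisons. The energies and grid size are placeholders, not instances from the study's corpus.

```python
import numpy as np

def icm(unary, pairwise, n_sweeps=10):
    """unary: (H, W, L) label costs; pairwise: (L, L) cost for 4-neighbor pairs."""
    H, W, L = unary.shape
    x = unary.argmin(axis=2)                    # independent initialization
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        costs += pairwise[:, x[ni, nj]]
                x[i, j] = costs.argmin()        # greedy coordinate-wise update
    return x

rng = np.random.default_rng(3)
L = 3
unary = rng.random((8, 8, L))
pairwise = 0.5 * (1 - np.eye(L))                # Potts smoothing term
print(icm(unary, pairwise))
```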