
    Learning Loosely Connected Markov Random Fields

    We consider the structure learning problem for graphical models that we call loosely connected Markov random fields, in which the number of short paths between any pair of nodes is small, and present a new algorithm, based on conditional independence tests, for learning the underlying graph structure. The novel maximization step in our algorithm ensures that the true edges are detected correctly even when there are short cycles in the graph. The number of samples required by our algorithm is C log p, where p is the size of the graph and the constant C depends on the parameters of the model. We show that several previously studied models are examples of loosely connected Markov random fields, and our algorithm achieves computational complexity that is the same as or lower than that of the algorithms previously designed for the individual cases. We also obtain new results for more general graphical models; in particular, our algorithm learns general Ising models on the Erdős–Rényi random graph G(p, c/p) correctly with running time O(np^5).

    Comment: 45 pages, minor revision
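    The abstract only names the test; the sketch below illustrates the max/min structure such an edge test can take. It is a minimal sketch under illustrative assumptions that are not taken from the paper itself: binary variables, empirical conditional mutual information as the dependence measure, a fixed threshold eps, and set-size bounds D1 and D2 (matching the D_1, D_2 of the thesis abstract below).

```python
# Hedged sketch of a conditional-independence-test-based edge detector.
# The statistic, threshold, and set sizes are illustrative assumptions.
from collections import defaultdict
from itertools import combinations
import numpy as np

def cond_mutual_info(X, i, j, S):
    """Empirical conditional mutual information I(X_i; X_j | X_S)
    for binary data X of shape (n_samples, p)."""
    n = X.shape[0]
    groups = defaultdict(list)
    for t in range(n):                      # group samples by the X_S config
        groups[tuple(X[t, v] for v in S)].append(t)
    mi = 0.0
    for idx in groups.values():
        w = len(idx) / n                    # weight of this configuration
        xi, xj = X[idx, i], X[idx, j]
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((xi == a) & (xj == b))
                p_a, p_b = np.mean(xi == a), np.mean(xj == b)
                if p_ab > 0:
                    mi += w * p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def is_edge(X, i, j, D1, D2, eps):
    """Declare (i, j) an edge if some small set A (the maximization step,
    which handles short cycles) makes X_i and X_j look dependent no matter
    which small set B is additionally conditioned on."""
    rest = [v for v in range(X.shape[1]) if v not in (i, j)]
    for A in combinations(rest, D2):
        others = [v for v in rest if v not in A]
        if all(cond_mutual_info(X, i, j, A + B) > eps
               for B in combinations(others, D1)):
            return True
    return False
```

    Testing one pair this way enumerates O(p^{D1+D2}) conditioning sets at O(n) work each, so running it over all p^2 pairs is consistent with the O(np^{D_1+D_2+2}) running time quoted in the thesis abstract below.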

    Learning network structure from node behavior

    Understanding the network structure connecting a group of entities is of interest in applications such as predicting stock prices and making recommendations to customers. The network structure is usually not directly observable. However, due to improvements in technology and the ever-increasing use of the Internet, large amounts of data about individual node behavior are becoming more easily available. Thus, an interesting problem is to devise algorithms that infer network structure from node behavior data. Since network sizes are enormous in typical applications, the learning problem is not tractable for general network topologies. In this thesis, we focus on three models with simplifying assumptions on the underlying network.

    The first model represents the network as a Markov random field, where each node in the network is viewed as a random variable and the conditional independence relations among them are encoded by a graph. The simplifying assumption is that the underlying graph is loosely connected: the number of short paths between any pair of nodes is small. We point out that many previously studied models are examples of this family. Given i.i.d. samples from the joint distribution, we present a natural low-complexity algorithm for learning the structure of loosely connected Markov random fields. In particular, our algorithm learns the graph correctly with high probability using n = O(log p) samples, where p is the size of the graph. If there are at most D_1 short paths between non-neighbor nodes and D_2 non-direct short paths between neighboring nodes, the running time of our algorithm is O(np^{D_1+D_2+2}).

    The second model arises from recommender systems, where users give ratings to items. We assume that both the users and the items form clusters, and that users in the same cluster give the same binary rating to items in the same cluster. The goal is to recover the user and item clusters by observing only a small fraction of noisy entries. We first derive a lower bound on the minimum number of observations needed for exact cluster recovery. Then we study three algorithms with different running times and compare the number of observations each needs for successful cluster recovery. Our analytical results show smooth time-data trade-offs: one can gradually reduce the computational complexity when increasingly more observations are available.

    The third model considers a scenario similar to the previous one: instead of giving binary ratings, users give pairwise comparisons of items. We assume the users form clusters; users in the same cluster share the same score vector for the items, and the pairwise comparisons obtained from each user are generated according to the Bradley-Terry model with his or her score vector. We propose a two-step algorithm for estimating the score vectors: first cluster the users using projected comparison vectors, and then estimate a score vector separately for each cluster by maximum likelihood estimation under the classical Bradley-Terry model. The key observation is that, though each user is represented by a high-dimensional comparison vector, the corresponding expected comparison vector is determined by only a small number of parameters and lies close to a low-dimensional linear subspace. Projecting the comparison vectors onto this subspace significantly reduces the noise and improves the clustering performance. Moreover, we show that the maximum likelihood estimation is robust to clustering errors.
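    For the second model, the thesis compares three specific algorithms; the sketch below is not one of them but a generic spectral co-clustering baseline that conveys the idea under illustrative assumptions: +1/-1 ratings, unobserved entries imputed with 0, and the numbers of user and item clusters known in advance.

```python
# Hedged sketch: spectral co-clustering of a sparsely observed rating
# matrix. A generic baseline, not one of the thesis's three algorithms.
import numpy as np
from sklearn.cluster import KMeans

def spectral_cocluster(R, k_users, k_items, rank):
    """R: entries in {+1, -1} where observed, 0 where missing. The expected
    rating matrix is block-constant and hence low rank, so the top singular
    vectors separate the clusters once enough entries are observed."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    user_emb = U[:, :rank] * s[:rank]       # embed users by left factors
    item_emb = Vt[:rank].T * s[:rank]       # embed items by right factors
    users = KMeans(n_clusters=k_users, n_init=10).fit_predict(user_emb)
    items = KMeans(n_clusters=k_items, n_init=10).fit_predict(item_emb)
    return users, items
```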
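    The third model's two-step procedure is concrete enough to sketch as well. Below, the projection step is approximated by PCA on the stacked comparison vectors, an illustrative stand-in for the subspace construction in the thesis, and the per-cluster maximum likelihood step uses the classical minorization-maximization iteration for the Bradley-Terry model; the function names and parameters are hypothetical.

```python
# Hedged sketch of the two-step cluster-then-estimate idea: PCA projection +
# k-means, followed by a per-cluster Bradley-Terry MLE via MM iterations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def bt_mle(wins, counts, iters=200):
    """MM iteration for Bradley-Terry scores. wins[i, j] = times item i beat
    item j; counts[i, j] = comparisons between i and j (symmetric)."""
    s = np.ones(wins.shape[0])
    w = wins.sum(axis=1)                          # total wins per item
    for _ in range(iters):
        denom = (counts / (s[:, None] + s[None, :] + 1e-12)).sum(axis=1)
        s = w / np.maximum(denom, 1e-12)
        s /= s.sum()                              # fix the arbitrary scale
    return s

def cluster_then_estimate(comparisons, k, rank):
    """comparisons: (n_users, m, m) array of per-user win counts.
    Returns user cluster labels and one score vector per cluster."""
    n, m, _ = comparisons.shape
    flat = comparisons.reshape(n, m * m)
    emb = PCA(n_components=rank).fit_transform(flat)   # denoising projection
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(emb)
    scores = []
    for c in range(k):
        wins = comparisons[labels == c].sum(axis=0)    # pool the cluster
        scores.append(bt_mle(wins, wins + wins.T))
    return labels, scores
```

    Pooling each cluster's comparisons before the MLE step is what makes the estimate tolerant of a small fraction of misclustered users, in the spirit of the robustness claim above.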