10 research outputs found

    The Grow-Shrink strategy for learning Markov network structures constrained by context-specific independences

    Markov networks are models for compactly representing complex probability distributions. They are composed of a structure and a set of numerical weights. The structure qualitatively describes independences in the distribution, which can be exploited to factorize the distribution into a set of compact functions. A key application of learning structures from data is to automatically discover knowledge. In practice, structure learning algorithms focused on "knowledge discovery" have a limitation: they use a coarse-grained representation of the structure, which cannot describe context-specific independences. Very recently, an algorithm called CSPC was designed to overcome this limitation, but it has a high computational complexity. This work mitigates that downside by presenting CSGS, an algorithm that uses the Grow-Shrink strategy to avoid unnecessary computations. In an empirical evaluation, the structures learned by CSGS achieve competitive accuracy and lower computational cost than those obtained by CSPC. Comment: 12 pages and 8 figures. This work was presented at IBERAMIA 201
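
    The Grow-Shrink strategy referred to above is, in its classic form, a two-phase Markov-blanket recovery procedure. The sketch below is a minimal illustration of that generic strategy rather than CSGS itself, and it assumes a hypothetical independence test indep(x, y, cond) supplied by the caller (for example, a chi-squared test over data).

    def grow_shrink_blanket(target, variables, indep):
        """Estimate the Markov blanket of `target` with the two-phase GS strategy."""
        blanket = set()

        # Grow phase: add any variable that is still dependent on the target
        # given the current blanket.
        changed = True
        while changed:
            changed = False
            for v in variables:
                if v != target and v not in blanket and not indep(target, v, blanket):
                    blanket.add(v)
                    changed = True

        # Shrink phase: drop false positives that became independent of the
        # target once the rest of the blanket was included.
        for v in list(blanket):
            if indep(target, v, blanket - {v}):
                blanket.discard(v)

        return blanket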

    Learning Markov networks with context-specific independences

    Learning the Markov network structure from data is a problem that has received considerable attention in machine learning and in many other application fields. This work focuses on a particular approach for this purpose called independence-based learning. This approach guarantees learning the correct structure efficiently whenever the data are sufficient for representing the underlying distribution. However, an important limitation of this approach is that the learned structures are encoded in an undirected graph. The problem with graphs is that they cannot encode some types of independence relations, such as context-specific independences (CSIs). These are a particular case of conditional independences that hold only for certain assignments of the conditioning set, in contrast to conditional independences, which must hold for all assignments. In this work we present CSPC, an independence-based algorithm that learns structures encoding context-specific independences and represents them in a log-linear model instead of a graph. The central idea of CSPC is to combine the theoretical guarantees provided by the independence-based approach with the benefits of representing complex structures using features in a log-linear model. We present experiments on synthetic cases showing that CSPC is more accurate than state-of-the-art independence-based algorithms when the underlying distribution contains CSIs. Comment: 8 pages, 6 figures
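
    To make the notion of a context-specific independence concrete, the toy check below tests whether X and Y are independent in a single context Z = z of an exhaustive joint table p[x, y, z]. The array p and the routine are purely illustrative; CSPC itself relies on statistical independence tests over data rather than access to the true distribution.

    import numpy as np

    def holds_csi(p, z):
        """True if X and Y are independent in the single context Z = z."""
        joint = p[:, :, z]
        joint = joint / joint.sum()               # P(X, Y | Z = z)
        px = joint.sum(axis=1, keepdims=True)     # P(X | Z = z)
        py = joint.sum(axis=0, keepdims=True)     # P(Y | Z = z)
        return np.allclose(joint, px * py)

    # A full conditional independence X _|_ Y | Z requires holds_csi(p, z) for
    # every value z; a context-specific independence only requires it to hold
    # for some particular context z.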

    Learning Bayesian Networks with Thousands of Variables

    We present a method for learning Bayesian networks from data sets containing thousands of variables without the need for structure constraints. Our approach consists of two parts. The first is a novel algorithm that effectively explores the space of possible parent sets of a node. It guides the exploration towards the most promising parent sets on the basis of an approximate score function that is computed in constant time. The second part is an improvement of an existing ordering-based algorithm for structure optimization. The new algorithm provably achieves a higher score than its original formulation. Our novel approach consistently outperforms the state of the art on very large data sets.
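
    As a point of reference for the ordering-based part of the method, the sketch below shows the plain, non-approximated version of that idea: given a fixed variable ordering, each node independently selects its best-scoring parent set among its predecessors. The score function (e.g. BIC or BDeu) and the exhaustive enumeration up to max_parents are assumptions of this illustration, not the paper's algorithm.

    from itertools import combinations

    def best_parents_given_order(order, data, score, max_parents=3):
        structure = {}
        for i, node in enumerate(order):
            candidates = order[:i]                      # only predecessors may be parents
            best, best_score = (), score(node, (), data)
            for k in range(1, min(max_parents, len(candidates)) + 1):
                for parents in combinations(candidates, k):
                    s = score(node, parents, data)
                    if s > best_score:
                        best, best_score = parents, s
            structure[node] = best
        return structure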

    A Comparative Study of Markov Network Structure Learning Methods Over Data Streams

    Markov networks are a widely used graphical representation of data in applications such as natural language processing and computational biology. This undirected graph consists of nodes and edges representing attributes and their dependencies, respectively. One major challenge in a learning task involving Markov networks is to learn the structure, i.e. the attribute dependencies, from data. This has been the subject of various studies in the recent past, which use heuristics to estimate dependencies from data. In this paper, we highlight the challenges of Markov network structure learning and review existing methods addressing these challenges. In particular, we study the scalability of these heuristics over streaming data, where data instances are assumed to arrive continuously. Furthermore, we propose a new heuristic based on clustering of features (attribute dependencies) that can seamlessly update the model structure as new data arrive in a stream. This clustering technique effectively reduces the search space and uses fewer features to generate a single model. Weight learning and inference are performed at the end of each data chunk, consisting of the data instances arriving within a fixed time frame. We empirically evaluate the proposed heuristic by comparing the CMLL score on various datasets (both streaming and non-streaming) with other state-of-the-art methods.
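
    The following is a rough, self-contained illustration of the chunk-wise idea described above: pairwise statistics are updated incrementally as each chunk arrives, and attributes are greedily grouped into clusters that limit the feature search space. The correlation-threshold grouping used here is invented for illustration and is not the paper's actual heuristic.

    import numpy as np

    def stream_clusters(chunks, threshold=0.3):
        n = chunks[0].shape[1]
        count, sums, sq, cross = 0, np.zeros(n), np.zeros(n), np.zeros((n, n))
        for chunk in chunks:                          # each chunk: (instances, attributes)
            count += chunk.shape[0]
            sums += chunk.sum(axis=0)
            sq += (chunk ** 2).sum(axis=0)
            cross += chunk.T @ chunk
            mean = sums / count
            cov = cross / count - np.outer(mean, mean)
            std = np.sqrt(np.maximum(sq / count - mean ** 2, 1e-12))
            corr = cov / np.outer(std, std)
            clusters, assigned = [], set()            # greedy grouping of correlated attributes
            for i in range(n):
                if i in assigned:
                    continue
                group = {i} | {j for j in range(n) if abs(corr[i, j]) > threshold}
                clusters.append(group - assigned)
                assigned |= group
            yield clusters                            # the structure can be refreshed per chunk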

    Lifted graphical models: a survey

    Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work in learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data that is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.
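
    A par-factor can be thought of as a template factor: a set of logical variables, a constraint on their admissible substitutions, a list of parameterized atoms, and a potential over joint assignments of those atoms. The minimal data structure below is only illustrative; the field names and the example are not taken from any particular library.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class ParFactor:
        logvars: List[str]                                # e.g. ["X", "Y"]
        constraint: Callable[[Dict[str, str]], bool]      # admissible substitutions of the logvars
        atoms: List[str]                                  # e.g. ["Smokes(X)", "Friends(X,Y)"]
        potential: Callable[[Tuple[int, ...]], float]     # weight of a joint assignment of the atoms

    # Grounding the par-factor for every substitution allowed by `constraint`
    # yields an ordinary factor graph over the instantiated atoms.
    smokers = ParFactor(
        logvars=["X", "Y"],
        constraint=lambda sub: sub["X"] != sub["Y"],
        atoms=["Smokes(X)", "Friends(X,Y)", "Smokes(Y)"],
        potential=lambda assignment: 2.0 if assignment == (1, 1, 1) else 1.0,
    )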

    Learning Markov Network Structure with Decision Trees

    Traditional Markov network structure learning algorithms perform a search for globally useful features. However, these algorithms are often slow and prone to finding local optima due to the large space of possible structures. Ravikumar et al. [1] recently proposed the alternative idea of applying L1 logistic regression to learn a set of pairwise features for each variable, which are then combined into a global model. This paper presents the DTSL algorithm, which uses probabilistic decision trees as the local model. Our approach has two significant advantages: it is more efficient, and it is able to discover features that capture more complex interactions among the variables. Our approach can also be seen as a method for converting a dependency network into a consistent probabilistic model. In an extensive empirical evaluation on 13 datasets, our algorithm obtains accuracy comparable to three standard structure learning algorithms while running 1-4 orders of magnitude faster. Keywords: Markov networks; structure learning; decision trees; probabilistic methods
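
    A rough sketch of the local-model idea, under simplifying assumptions (binary data, a fixed depth limit, and scikit-learn's generic decision tree rather than the paper's probabilistic trees): learn one tree per variable predicting it from all the others, then turn each root-to-leaf path into a conjunctive feature for a log-linear model.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def tree_paths_to_features(data, target_idx, max_depth=3):
        X = np.delete(data, target_idx, axis=1)       # predictors: every other variable
        y = data[:, target_idx]
        other = [j for j in range(data.shape[1]) if j != target_idx]

        tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
        t = tree.tree_
        features = []

        def walk(node, conds):
            if t.children_left[node] == -1:           # leaf: the path becomes one conjunctive feature
                features.append(conds)
                return
            var = other[t.feature[node]]              # map the split back to the original column
            walk(t.children_left[node], conds + [(var, 0)])   # left branch: value 0
            walk(t.children_right[node], conds + [(var, 1)])  # right branch: value 1

        walk(0, [])
        return features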