
    High-Dimensional Screening Using Multiple Grouping of Variables

    Screening is the problem of finding a superset of the set of non-zero entries in an unknown $p$-dimensional vector $\beta^*$ given $n$ noisy observations. Naturally, we want this superset to be as small as possible. We propose a novel framework for screening, which we refer to as Multiple Grouping (MuG), that groups variables, performs variable selection over the groups, and repeats this process multiple times to estimate a sequence of sets, each of which contains the non-zero entries in $\beta^*$. Screening is done by taking the intersection of all these estimated sets. The MuG framework can be used in conjunction with any group-based variable selection algorithm. In the high-dimensional setting, where $p \gg n$, we show that when MuG is used with the group Lasso estimator, screening can be consistently performed without using any tuning parameter. Our numerical simulations clearly show the merits of using the MuG framework in practice.
    Comment: This paper will appear in the IEEE Transactions on Signal Processing. See http://www.ima.umn.edu/~dvats/MuGScreening.html for more details.
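    The MuG loop described above lends itself to a short sketch. The Python code below is an illustration under stated assumptions, not the authors' implementation: `group_select` stands in for any group-based selector (the paper uses the group Lasso), and `toy_group_select` is a deliberately simple hypothetical placeholder included only so the sketch runs end to end.

```python
import numpy as np

def mug_screen(X, y, group_select, n_repeats=10, n_groups=20, seed=0):
    """Sketch of Multiple Grouping (MuG) screening.

    X            : (n, p) design matrix
    y            : (n,) response vector
    group_select : callable (X, y, groups) -> iterable of selected column
                   indices; a stand-in for any group-based selector.
    Returns the intersection of the supports selected across repetitions.
    """
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    kept = set(range(p))                           # start with every variable
    for _ in range(n_repeats):
        # Randomly partition the p variables into (at most) n_groups groups.
        labels = rng.integers(0, n_groups, size=p)
        groups = [np.flatnonzero(labels == g) for g in range(n_groups)]
        groups = [g for g in groups if g.size > 0]
        selected = group_select(X, y, groups)      # candidate superset of the support
        kept &= set(selected)                      # screening step: intersect
    return sorted(kept)

# Hypothetical placeholder selector (NOT the group Lasso): keep the variables
# in the top_k groups whose aggregate correlation with y is largest.
def toy_group_select(X, y, groups, top_k=5):
    scores = [np.linalg.norm(X[:, g].T @ y) / np.sqrt(len(g)) for g in groups]
    best = np.argsort(scores)[-top_k:]
    return {int(j) for b in best for j in groups[b]}
```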

    Finding Non-overlapping Clusters for Generalized Inference Over Graphical Models

    Graphical models use graphs to compactly capture stochastic dependencies amongst a collection of random variables. Inference over graphical models corresponds to finding marginal probability distributions given a joint probability distribution. In general, this is computationally intractable, which has led to a quest for efficient approximate inference algorithms. We propose a framework for generalized inference over graphical models that can be used as a wrapper for improving the estimates of approximate inference algorithms. Instead of applying an inference algorithm to the original graph, we apply it to a block-graph, defined as a graph in which the nodes are non-overlapping clusters of nodes from the original graph. This results in marginal estimates for each cluster of nodes, which we further marginalize to get the marginal estimate of each individual node. Our proposed block-graph construction algorithm is simple, efficient, and motivated by the observation that approximate inference is more accurate on graphs with longer cycles. We present extensive numerical simulations that illustrate our block-graph framework with a variety of inference algorithms (e.g., those in the libDAI software package). These simulations show the improvements provided by our framework.
    Comment: Extended the previous version to include extensive numerical simulations. See http://www.ima.umn.edu/~dvats/GeneralizedInference.html for code and data.
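    The final step described above, marginalizing each cluster estimate down to per-node marginals, is simple enough to sketch. The snippet below assumes discrete variables and a cluster marginal stored as a NumPy array with one axis per variable; it is an illustration, not code from the paper or from libDAI.

```python
import numpy as np

def node_marginals_from_cluster(cluster_vars, cluster_marginal):
    """Marginalize a cluster's joint table down to per-node marginals.

    cluster_vars     : list of variable ids, one per axis of cluster_marginal
    cluster_marginal : array whose entry [x1, ..., xk] is proportional to the
                       (approximate) probability of the cluster taking (x1, ..., xk)
    Returns {variable id: 1-D marginal distribution}.
    """
    marginals = {}
    for axis, var in enumerate(cluster_vars):
        other_axes = tuple(a for a in range(cluster_marginal.ndim) if a != axis)
        m = cluster_marginal.sum(axis=other_axes)   # sum out the other variables
        marginals[var] = m / m.sum()                # renormalize the estimate
    return marginals

# Example: a cluster {A, B} of two binary variables.
joint = np.array([[0.4, 0.1],
                  [0.2, 0.3]])
print(node_marginals_from_cluster(["A", "B"], joint))
```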

    Telescoping Recursive Representations and Estimation of Gauss-Markov Random Fields

    We present \emph{telescoping} recursive representations for both continuous- and discrete-indexed noncausal Gauss-Markov random fields. Our recursions start at the boundary (a hypersurface in $\mathbb{R}^d$, $d \ge 1$) and telescope inwards. For example, for images, the telescoping representations reduce recursions from $d = 2$ to $d = 1$, i.e., to recursions on a single dimension. Under appropriate conditions, the recursions for the random field are linear stochastic differential/difference equations driven by white noise, for which we derive recursive estimation algorithms that extend standard algorithms, such as the Kalman-Bucy filter and the Rauch-Tung-Striebel smoother, to noncausal Markov random fields.
    Comment: To appear in the Transactions on Information Theory.
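    The telescoping recursions themselves are not reproduced here. For reference, the sketch below implements the standard one-dimensional building blocks the abstract names, a Kalman filter followed by a Rauch-Tung-Striebel smoother for a linear-Gaussian state-space model; this is the classical causal algorithm, not the paper's extension to noncausal random fields, and the model matrices are assumed given.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Forward pass for x_{t+1} = A x_t + w_t, y_t = C x_t + v_t."""
    xf, Pf, xp, Pp = [], [], [], []
    x, P = x0, P0
    for t, yt in enumerate(y):
        # Predict (at t = 0 the prior is used directly).
        x_pred = A @ x if t > 0 else x0
        P_pred = A @ P @ A.T + Q if t > 0 else P0
        # Update with the measurement y_t.
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x = x_pred + K @ (yt - C @ x_pred)
        P = P_pred - K @ C @ P_pred
        xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)
    return xf, Pf, xp, Pp

def rts_smoother(xf, Pf, xp, Pp, A):
    """Rauch-Tung-Striebel backward pass over the filtered estimates."""
    n = len(xf)
    xs, Ps = [None] * n, [None] * n
    xs[-1], Ps[-1] = xf[-1], Pf[-1]
    for t in range(n - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return xs, Ps
```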

    Active Learning for Undirected Graphical Model Selection

    This paper studies graphical model selection, i.e., the problem of estimating a graph of statistical relationships among a collection of random variables. Conventional graphical model selection algorithms are passive, i.e., they require all the measurements to have been collected before processing begins. We propose an active learning algorithm that uses junction tree representations to adapt future measurements based on the information gathered from prior measurements. We prove that, under certain conditions, our active learning algorithm requires fewer scalar measurements than any passive algorithm to reliably estimate a graph. A range of numerical results validates our theory and demonstrates the benefits of active learning.
    Comment: AISTATS 201

    A Junction Tree Framework for Undirected Graphical Model Selection

    An undirected graphical model is a joint probability distribution defined on an undirected graph $G^*$, where the vertices in the graph index a collection of random variables and the edges encode conditional independence relationships amongst random variables. The undirected graphical model selection (UGMS) problem is to estimate the graph $G^*$ given observations drawn from the undirected graphical model. This paper proposes a framework for decomposing the UGMS problem into multiple subproblems over clusters and subsets of the separators in a junction tree. The junction tree is constructed using a graph that contains a superset of the edges in $G^*$. We highlight three main properties of using junction trees for UGMS. First, different regularization parameters or different UGMS algorithms can be used to learn different parts of the graph. This is possible since the subproblems we identify can be solved independently of each other. Second, under certain conditions, a junction tree based UGMS algorithm can produce consistent results with exponentially fewer observations than the usual requirements of existing algorithms. Third, both our theoretical and experimental results show that the junction tree framework does a significantly better job at finding the weakest edges in a graph than existing methods. This property is a consequence of both the first and second properties. Finally, we note that our framework is independent of the choice of the UGMS algorithm and can be used as a wrapper around standard UGMS algorithms for more accurate graph estimation.
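    A minimal sketch, assuming NetworkX, of the junction-tree construction step implied above: triangulate a graph assumed to contain the edges of $G^*$, take its maximal cliques as clusters, and connect them by a maximum-weight spanning tree whose edge weights are separator sizes. The subproblem identification and region handling in the paper are more involved; this only illustrates the data structure the framework decomposes over.

```python
import itertools
import networkx as nx

def junction_tree_clusters(H):
    """Build a junction tree from a graph H assumed to contain all edges of G*.

    Returns (tree, separators): tree is a graph whose nodes are frozensets of
    variables (the clusters); separators maps each tree edge to the
    intersection of its two clusters.
    """
    chordal_H, _ = nx.complete_to_chordal_graph(H)       # triangulate H
    cliques = list(nx.chordal_graph_cliques(chordal_H))  # maximal cliques = clusters

    # Clique graph: connect overlapping cliques, weighted by separator size.
    clique_graph = nx.Graph()
    clique_graph.add_nodes_from(cliques)
    for c1, c2 in itertools.combinations(cliques, 2):
        sep = c1 & c2
        if sep:
            clique_graph.add_edge(c1, c2, weight=len(sep))

    # A maximum-weight spanning tree of the clique graph is a junction tree.
    tree = nx.maximum_spanning_tree(clique_graph)
    separators = {(c1, c2): c1 & c2 for c1, c2 in tree.edges}
    return tree, separators

# Example: a small graph containing the true edges.
H = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])
tree, seps = junction_tree_clusters(H)
print(list(tree.nodes), seps)
```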