
    Setting Parameters by Example

    We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming. Comment: 13 pages, 3 figures. To be presented at the 40th IEEE Symp. Foundations of Computer Science (FOCS '99).
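    A minimal sketch of the inverse-MST flavour of this problem, assuming the edge weights themselves are the parameters to be set: given a graph and a desired spanning tree, raise the weight of any non-tree edge that is cheaper than the bottleneck of the tree path it closes, so that the tree satisfies the cycle optimality condition. The helper adjust_weights_for_mst and the networkx usage are illustrative, not the paper's algorithm.

```python
# Sketch: perturb edge weights so a desired spanning tree becomes a minimum
# spanning tree. Assumes G is an undirected networkx graph with "weight"
# attributes; adjust_weights_for_mst is a hypothetical helper, not from the paper.
import networkx as nx

def adjust_weights_for_mst(G: nx.Graph, tree_edges):
    """Return the minimal weight increases that certify `tree_edges` as an MST."""
    T = nx.Graph()
    T.add_edges_from((u, v, {"weight": G[u][v]["weight"]}) for u, v in tree_edges)
    new_weights = {}
    for u, v, data in G.edges(data=True):
        if T.has_edge(u, v):
            continue
        path = nx.shortest_path(T, u, v)  # the unique tree path closed by (u, v)
        bottleneck = max(T[a][b]["weight"] for a, b in zip(path, path[1:]))
        if data["weight"] < bottleneck:
            # cycle optimality: a non-tree edge must not undercut the path bottleneck
            new_weights[(u, v)] = bottleneck
    return new_weights
```

    Lowering tree-edge weights instead would work just as well; which perturbation counts as best depends on the norm being minimized, which is part of what the inverse parametric formulation makes precise.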

    Pairwise MRF Calibration by Perturbation of the Bethe Reference Point

    We investigate different ways of generating approximate solutions to the pairwise Markov random field (MRF) selection problem. We focus mainly on the inverse Ising problem, but also discuss the related inverse Gaussian problem, because both types of MRF are suitable for inference tasks with the belief propagation algorithm (BP) under certain conditions. Our approach consists in taking a Bethe mean-field solution obtained with a maximum spanning tree (MST) of pairwise mutual information, referred to as the \emph{Bethe reference point}, as the starting point for further perturbation procedures. We consider three different ways of following this idea: in the first, we iteratively select and calibrate the optimal links to be added, starting from the Bethe reference point; the second is based on the observation that the natural gradient can be computed analytically at the Bethe point; in the third, assuming no local field and using a low-temperature expansion, we develop a dual-loop joint model based on a well-chosen fundamental cycle basis. We identify a subclass of planar models, which we refer to as \emph{Bethe-dual graph models}, having possibly many loops but characterized by a singly connected dual factor graph, for which the partition function and the linear response can be computed exactly in O(N) and O(N^2) operations respectively, thanks to a dual weight propagation (DWP) message-passing procedure that we set up. When restricted to this subclass of models, the inverse Ising problem is convex and becomes tractable at any temperature. Experimental tests on various datasets with refined L_0 or L_1 regularization procedures indicate that these approaches may be competitive and useful alternatives to existing ones. Comment: 54 pages, 8 figures. Section 5 and refs added in V
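    A small sketch of how the Bethe reference point described above could be constructed, assuming binary samples: estimate pairwise mutual information empirically and take a maximum spanning tree over it, Chow-Liu style. The helpers mutual_information and bethe_reference_tree are illustrative names, and the subsequent perturbation steps are not shown.

```python
# Sketch: build a maximum spanning tree over empirical pairwise mutual
# information as a starting ("Bethe reference") point. Helper names are
# illustrative; samples are assumed to be binary spins.
import numpy as np
import networkx as nx

def mutual_information(x, y):
    """Empirical mutual information between two binary sample vectors."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[int(a > 0), int(b > 0)] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return sum(joint[i, j] * np.log(joint[i, j] / (px[i] * py[j]))
               for i in range(2) for j in range(2) if joint[i, j] > 0)

def bethe_reference_tree(samples):
    """samples: (n_samples, n_variables) array; returns the MST edge list."""
    n = samples.shape[1]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=-mutual_information(samples[:, i], samples[:, j]))
    # minimum spanning tree of negated MI == maximum spanning tree of MI
    return list(nx.minimum_spanning_tree(G).edges())
```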

    On Correcting Inputs: Inverse Optimization for Online Structured Prediction

    Algorithm designers typically assume that the input data is correct, and then proceed to find "optimal" or "sub-optimal" solutions using this input data. However, this assumption of correct data does not always hold in practice, especially in the context of online learning systems where the objective is to learn appropriate feature weights given some training samples. Such scenarios necessitate the study of inverse optimization problems, where one is given an input instance as well as a desired output and the task is to adjust the input data so that the given output is indeed optimal. Motivated by learning structured prediction models, in this paper we consider inverse optimization with a margin, i.e., we require the given output to be better than all other feasible outputs by a desired margin. We consider such inverse optimization problems for maximum weight matroid basis, matroid intersection, perfect matchings, minimum cost maximum flows, and shortest paths, and derive the first known results for such problems with a non-zero margin. The effectiveness of these algorithmic approaches to online learning for structured prediction is also discussed. Comment: Conference version to appear in FSTTCS, 201
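    A hedged sketch of the margin requirement in a structured-perceptron style, using the maximization convention (higher score is better): keep correcting the weights until the desired output beats its strongest competitor by the requested margin. The oracle best_alternative, the feature map phi, and the step size are assumptions made for illustration; the paper's combinatorial algorithms for matroids, matchings, flows, and shortest paths are different.

```python
# Sketch of a margin-enforcing correction loop. `best_alternative` is an
# assumed argmax oracle over feasible outputs (excluding the desired one),
# and `phi` maps an output to its feature vector; both are hypothetical.
import numpy as np

def margin_correction(w, phi, desired, best_alternative, margin=1.0, lr=0.1, steps=100):
    """Adjust weights w until `desired` scores above every alternative by `margin`."""
    for _ in range(steps):
        y_hat = best_alternative(w, exclude=desired)    # most violating competitor
        gap = w @ phi(desired) - w @ phi(y_hat)
        if gap >= margin:                               # desired output is margin-optimal
            break
        w = w + lr * (phi(desired) - phi(y_hat))        # move toward the desired features
    return w
```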

    Learning Gaussian Graphical Models with Observed or Latent FVSs

    Gaussian Graphical Models (GGMs) or Gauss-Markov random fields are widely used in many applications, and the trade-off between modeling capacity and the efficiency of learning and inference has been an important research problem. In this paper, we study the family of GGMs with small feedback vertex sets (FVSs), where an FVS is a set of nodes whose removal breaks all the cycles. Exact inference, such as computing the marginal distributions and the partition function, has complexity O(k^2 n) using message-passing algorithms, where k is the size of the FVS and n is the total number of nodes. We propose efficient structure learning algorithms for two cases: 1) All nodes are observed, which is useful in modeling social or flight networks where the FVS nodes often correspond to a small number of high-degree nodes, or hubs, while the rest of the network is modeled by a tree. Regardless of the maximum degree, without knowing the full graph structure, we can exactly compute the maximum likelihood estimate in O(kn^2 + n^2 \log n) if the FVS is known, or in polynomial time if the FVS is unknown but has bounded size. 2) The FVS nodes are latent variables, where structure learning is equivalent to decomposing an inverse covariance matrix (exactly or approximately) into the sum of a tree-structured matrix and a low-rank matrix. By incorporating efficient inference into the learning steps, we obtain a learning algorithm using alternating low-rank corrections with complexity O(kn^2 + n^2 \log n) per iteration. We also perform experiments using both synthetic data and real flight-delay data to demonstrate the modeling capacity with FVSs of various sizes.
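    The conditioning idea behind the FVS approach can be illustrated with dense linear algebra, assuming the model is specified by a precision matrix J and potential vector h: eliminate the tree block, solve a small k-by-k system on the FVS block, and back-substitute. The helper fvs_means is only a sketch of this decomposition; the paper's O(k^2 n) algorithm replaces the dense tree solves with Gaussian belief propagation on the cycle-free part.

```python
# Sketch: Gaussian means via FVS conditioning. J is the precision matrix,
# h the potential vector (so the mean solves J mu = h); `fvs` is the set of
# feedback vertices. Dense solves stand in for tree-structured Gaussian BP.
import numpy as np

def fvs_means(J, h, fvs):
    n = J.shape[0]
    F = np.array(sorted(fvs))
    T = np.array([i for i in range(n) if i not in set(fvs)])
    J_TT, J_TF, J_FF = J[np.ix_(T, T)], J[np.ix_(T, F)], J[np.ix_(F, F)]
    # Schur complement onto the small FVS block: a k x k system
    S = J_FF - J_TF.T @ np.linalg.solve(J_TT, J_TF)
    rhs = h[F] - J_TF.T @ np.linalg.solve(J_TT, h[T])
    mu = np.zeros(n)
    mu[F] = np.linalg.solve(S, rhs)
    mu[T] = np.linalg.solve(J_TT, h[T] - J_TF @ mu[F])  # back-substitute tree means
    return mu
```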

    Markov Network Structure Learning via Ensemble-of-Forests Models

    Real-world systems typically feature a variety of different dependency types and topologies that complicate model selection for probabilistic graphical models. We introduce the ensemble-of-forests model, a generalization of the ensemble-of-trees model. Our model enables structure learning of Markov random fields (MRFs) with multiple connected components and arbitrary potentials. We present two approximate inference techniques for this model and demonstrate their performance on synthetic data. Our results suggest that the ensemble-of-forests approach can accurately recover sparse, possibly disconnected MRF topologies, even in the presence of non-Gaussian dependencies and/or low sample size. We applied the ensemble-of-forests model to learn the structure of perturbed signaling networks of immune cells and found that these frequently exhibit non-Gaussian dependencies with disconnected MRF topologies. In summary, we expect that the ensemble-of-forests model will enable MRF structure learning in other high-dimensional real-world settings that are governed by non-trivial dependencies. Comment: 13 pages, 6 figures.
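    As a rough illustration of how a sparse, possibly disconnected structure estimate can arise (this is not the paper's ensemble or approximate-inference machinery), one can score edges with a rank-based dependence measure, drop weak edges, and take a maximum spanning forest of what remains. The Spearman score and the cutoff below are assumptions.

```python
# Sketch: a possibly disconnected forest skeleton from rank correlations.
# The dependence measure (Spearman) and cutoff are illustrative choices,
# not the ensemble-of-forests procedure itself.
import networkx as nx
from scipy.stats import spearmanr

def forest_skeleton(data, cutoff=0.2):
    """data: (n_samples, n_variables) array; returns edges of a spanning forest."""
    n = data.shape[1]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            rho, _ = spearmanr(data[:, i], data[:, j])  # robust to non-Gaussian margins
            if abs(rho) >= cutoff:                      # weak dependencies are dropped
                G.add_edge(i, j, weight=abs(rho))
    # maximum spanning tree of each remaining component -> a forest overall
    return list(nx.maximum_spanning_tree(G).edges())
```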

    Abelian gauge fields coupled to simplicial quantum gravity

    We study the coupling of Abelian gauge theories to four-dimensional simplicial quantum gravity. The gauge fields live on dual links. This is the correct formulation if we want to compare the effect of gauge fields on geometry with similar effects studied so far for scalar fields. It shows that gauge fields couple to geometry as weakly as scalar fields do, and it offers an understanding of the relation between measure factors and Abelian gauge fields observed so far. Comment: 20 pages.

    Forest Density Estimation

    We study graph estimation and density estimation in high dimensions, using a family of density estimators based on forest-structured undirected graphical models. For density estimation, we do not assume the true distribution corresponds to a forest; rather, we form kernel density estimates of the bivariate and univariate marginals and apply Kruskal's algorithm to estimate the optimal forest on held-out data. We prove an oracle inequality on the excess risk of the resulting estimator relative to the risk of the best forest. For graph estimation, we consider the problem of estimating forests with restricted tree sizes. We prove that finding a maximum weight spanning forest with restricted tree size is NP-hard, and develop an approximation algorithm for this problem. Viewing the tree size as a complexity parameter, we then select a forest using data splitting, and prove bounds on the excess risk and structure selection consistency of the procedure. Experiments with simulated data and microarray data indicate that the methods are a practical alternative to Gaussian graphical models. Comment: Extended version of an earlier paper titled "Tree density estimation".
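    A hedged sketch of the two-stage procedure described above, assuming held-out mutual-information gain as the edge score: kernel density estimates of the univariate and bivariate marginals are fit on a training split, and Kruskal's algorithm is run on the held-out scores, stopping once the gain turns non-positive so that the result may be a forest rather than a tree. The helper names are illustrative and the paper's estimator differs in detail.

```python
# Sketch: forest density estimation with KDE marginals and held-out Kruskal.
# heldout_mi_gain and forest_density_structure are illustrative helpers.
import numpy as np
from scipy.stats import gaussian_kde

def heldout_mi_gain(train, heldout, i, j):
    """Held-out estimate of I(X_i; X_j) using KDEs fit on the training split."""
    kde_ij = gaussian_kde(train[:, [i, j]].T)
    kde_i, kde_j = gaussian_kde(train[:, i]), gaussian_kde(train[:, j])
    return np.mean(np.log(kde_ij(heldout[:, [i, j]].T))
                   - np.log(kde_i(heldout[:, i])) - np.log(kde_j(heldout[:, j])))

def forest_density_structure(train, heldout):
    n = train.shape[1]
    scored = sorted(((heldout_mi_gain(train, heldout, i, j), i, j)
                     for i in range(n) for j in range(i + 1, n)), reverse=True)
    parent = list(range(n))
    def find(a):                        # union-find for Kruskal's cycle check
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    forest = []
    for gain, i, j in scored:
        if gain <= 0:
            break                       # non-positive held-out gain: stop, keep a forest
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            forest.append((i, j))
    return forest
```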