
    Glosarium Matematika (Mathematics Glossary)

    273 p.; 24 cm


    Information Geometry

    This Special Issue of the journal Entropy, titled “Information Geometry I”, contains a collection of 17 papers on the foundations and applications of information geometry. Based on a geometrical interpretation of probability, information geometry has become a rich mathematical field employing the methods of differential geometry, with numerous applications to data science, physics, and neuroscience. Presenting original research, yet written in an accessible, tutorial style, this collection of papers will be useful for scientists who are new to the field, while providing an excellent reference for more experienced researchers. Several papers are written by authorities in the field, and the topics cover the foundations of information geometry as well as applications to statistics, Bayesian inference, machine learning, complex systems, physics, and neuroscience.
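    As a brief illustrative aside (not drawn from the special issue itself), the geometrical interpretation of probability mentioned above can be made concrete through the Fisher information, which equips a parametric family of distributions with a Riemannian metric; for the Bernoulli family it has a simple closed form:

    g(\theta) = \mathbb{E}_{x \sim p_\theta}\left[\left(\frac{\partial}{\partial\theta}\log p(x;\theta)\right)^{2}\right],
    \qquad p(x;\theta) = \theta^{x}(1-\theta)^{1-x} \quad\Rightarrow\quad g(\theta) = \frac{1}{\theta(1-\theta)}.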

    Convex relaxation methods for graphical models: Lagrangian and maximum entropy approaches

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 241-257).

    Graphical models provide compact representations of complex probability distributions over many random variables through a collection of potential functions defined on small subsets of these variables. This representation is defined with respect to a graph in which nodes represent random variables and edges represent the interactions among those random variables. Graphical models provide a powerful and flexible approach to many problems in science and engineering, but they also present serious challenges owing to the intractability of optimal inference and estimation over general graphs. In this thesis, we consider convex optimization methods to address two central problems that commonly arise for graphical models.

    First, we consider the problem of determining the most probable configuration, also known as the maximum a posteriori (MAP) estimate, of all variables in a graphical model, conditioned on (possibly noisy) measurements of some variables. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. This involves using the Lagrangian decomposition technique to break up an intractable graph into tractable subgraphs, such as small "blocks" of nodes, embedded trees, or thin subgraphs. We develop a distributed, iterative algorithm that minimizes the Lagrangian dual function by block coordinate descent. This results in an iterative marginal-matching procedure that enforces consistency among the subgraphs using an adaptation of the well-known iterative scaling algorithm. This approach is developed for both discrete-variable and Gaussian graphical models. In discrete models, we also introduce a deterministic annealing procedure, which uses a temperature parameter to define a smoothed dual function and then gradually reduces the temperature to recover the (non-differentiable) Lagrangian dual. When strong duality holds, we recover the optimal MAP estimate. We show that this occurs for a broad class of "convex decomposable" Gaussian graphical models, which generalizes the "pairwise normalizable" condition known to be important for iterative estimation in Gaussian models. In certain "frustrated" discrete models a duality gap can occur using simple versions of our approach. We consider methods that adaptively enhance the dual formulation, by including more complex subgraphs, so as to reduce the duality gap. In many cases we are able to eliminate the duality gap and obtain the optimal MAP estimate in a tractable manner. We also propose a heuristic method to obtain approximate solutions in cases where there is a duality gap.

    Second, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We propose the maximum entropy relaxation (MER) method, which is the convex optimization problem of selecting the least informative (maximum entropy) model over an exponential family of graphical models, subject to constraints that small subsets of variables should have marginal distributions close to the distribution of the sample data. We use relative entropy to measure the divergence between marginal probability distributions. We find that MER leads naturally to the selection of sparse graphical models. To identify this sparse graph efficiently, we use a "bootstrap" method that constructs the MER solution by solving a sequence of tractable subproblems defined over thin graphs, adding new edges at each step to correct for large marginal divergences that violate the MER constraint. The MER problem on each of these subgraphs is solved efficiently using a primal-dual interior point method (implemented so as to take advantage of efficient inference methods for thin graphical models). We also consider a dual formulation of MER that minimizes a convex function of the potentials of the graphical model. This MER dual problem can be interpreted as a robust version of maximum-likelihood parameter estimation, where the MER constraints specify the uncertainty in the sufficient statistics of the model. It also corresponds to a regularized maximum-likelihood approach, in which an information-geometric regularization term favors the selection of sparse potential representations. We develop a relaxed version of the iterative scaling method to solve this MER dual problem.

    by Jason K. Johnson. Ph.D.
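    To make the Lagrangian decomposition idea above concrete, here is a small sketch (an illustrative toy, not the thesis implementation: the three-node cycle, the random potentials, and the plain projected-subgradient update are assumptions, whereas the thesis minimizes the dual by block coordinate descent with iterative scaling). It splits a binary MAP problem into per-edge subproblems whose node copies are coupled through Lagrange multipliers:

    import itertools
    import numpy as np

    # Toy MAP problem: three binary variables on a cycle with pairwise potentials.
    # theta[(i, j)][a, b] is the potential of the configuration (x_i = a, x_j = b).
    edges = [(0, 1), (1, 2), (0, 2)]
    rng = np.random.default_rng(0)
    theta = {e: rng.normal(size=(2, 2)) for e in edges}

    def exact_map():
        # Brute force over the 2^3 configurations (feasible only because the toy is tiny).
        return max(itertools.product([0, 1], repeat=3),
                   key=lambda x: sum(theta[i, j][x[i], x[j]] for (i, j) in edges))

    # Lagrangian (dual) decomposition: each edge is an independent subproblem, and the
    # multipliers lam[(edge, node)] push the edge copies of every node toward agreement.
    lam = {(e, i): np.zeros(2) for e in edges for i in e}

    def solve_edge(e):
        i, j = e
        scores = theta[e] + lam[e, i][:, None] + lam[e, j][None, :]
        return np.unravel_index(np.argmax(scores), scores.shape)

    for t in range(200):
        assign = {e: solve_edge(e) for e in edges}
        step = 1.0 / (1.0 + t)
        for i in range(3):
            # Indicator vectors of node i's state in every edge subproblem containing it.
            copies = [np.eye(2)[assign[e][e.index(i)]] for e in edges if i in e]
            mean = np.mean(copies, axis=0)
            for e in edges:
                if i in e:
                    # Projected subgradient step; the multipliers for each node stay zero-sum.
                    lam[e, i] -= step * (np.eye(2)[assign[e][e.index(i)]] - mean)

    print("exact MAP:", exact_map())
    print("edge subproblem assignments after dual updates:", assign)

    When the multipliers bring all edge copies into agreement, the combined assignment is the exact MAP estimate; any remaining disagreement corresponds to the duality gap discussed above.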

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program: in particular, it contains the scientific program both in overview and in full detail, together with information on the social program, the venue, special meetings, and more.

    Activity Report 2022


    The 4th Conference of PhD Students in Computer Science


    Learning Probabilistic Graphical Models for Image Segmentation

    Probabilistic graphical models provide a powerful framework for representing image structures, and many inference and learning algorithms have been developed on top of them. However, both problem classes are NP-hard combinatorial problems in the general case. As a consequence, relaxation methods were developed that approximate the original problems while remaining computationally efficient. In this work we consider the learning problem for binary graphical models and their relaxations. We propose two novel methods for determining the model parameters of discrete energy functions from training data; learning the parameters overcomes the need to set them heuristically.

    Motivated by common learning methods that minimize the training error measured by a loss function, we develop a new learning method similar in spirit to structured SVM, but computationally more efficient. We term this method the linearized approach (LA), as it is restricted to linearly parameterized potentials. The linearity of LA is crucial for obtaining a tight convex relaxation, which allows off-the-shelf inference solvers to be used for the subproblems that emerge when solving the overall problem. However, learning methods of this type almost never yield optimal solutions or perfect performance on the training set. So what happens if the learned graphical model were to produce the exact ground-truth segmentation on the training data? Would this be beneficial for prediction? Motivated by the idea of inverse optimization, we use inverse linear programming to develop a learning approach, referred to as the inverse linear programming approach (invLPA). It further refines the graphical models trained with the previously introduced methods and is able to predict the ground truth on the training data perfectly. The empirical results of invLPA answer the questions posed above. LA can learn unary and pairwise potentials jointly, while with invLPA this is not possible due to the representation we use. On the other hand, invLPA does not rely on a particular form for the potentials and is thus flexible in the choice of the fitting method. Although the potentials corrected by invLPA always reproduce the ground-truth segmentation of the training data, invLPA is only able to find corrections on the foreground segments; owing to the relaxed problem formulation, this does not affect the final segmentation result. Moreover, as long as invLPA is initialized with parameters from a learning method that performs sufficiently well, this drawback does not significantly affect the final prediction.

    The proposed learning methods are evaluated on both synthetic and real-world datasets. We demonstrate that LA is competitive with other parameter learning methods that use loss functions based on Maximum a Posteriori Marginal (MPM) and Maximum Likelihood Estimation (MLE), and we illustrate the benefits of learning with inverse linear programming. In a further experiment we demonstrate the versatility of our learning methods by applying LA to motion segmentation in video sequences and comparing it to state-of-the-art segmentation algorithms.
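    As a hedged illustration of structured-SVM-style learning with linearly parameterized potentials, the sketch below (a toy, not the thesis code: the 1-D chain model, the three-parameter energy, and the plain subgradient update are illustrative assumptions) learns unary and pairwise weights for a binary chain segmentation using exact Viterbi inference with loss-augmented updates:

    import numpy as np

    # A 1-D "image": pixel intensities and the binary ground-truth segmentation.
    x = np.array([0.1, 0.2, 0.15, 0.9, 0.85, 0.95, 0.2, 0.1])
    y_true = np.array([0, 0, 0, 1, 1, 1, 0, 0])

    def features(x, y):
        # Joint feature vector of the linear model: unary (intensity, bias) terms for
        # label 1 and a pairwise count of neighboring label disagreements.
        return np.array([np.dot(x, y), y.sum(), np.sum(y[:-1] != y[1:])], dtype=float)

    def viterbi(x, w, y_true=None):
        # Exact MAP labeling of the chain by dynamic programming; when y_true is given,
        # a Hamming term is added so that the result is the loss-augmented labeling.
        n = len(x)
        node = np.stack([np.zeros(n), w[0] * x + w[1]], axis=1)   # scores of labels 0/1
        if y_true is not None:
            node = node + (np.arange(2)[None, :] != y_true[:, None])
        edge = np.array([[0.0, w[2]], [w[2], 0.0]])               # label-disagreement score
        score, back = node[0].copy(), []
        for i in range(1, n):
            cand = score[:, None] + edge + node[i][None, :]
            back.append(cand.argmax(axis=0))
            score = cand.max(axis=0)
        y = [int(score.argmax())]
        for b in reversed(back):
            y.append(int(b[y[-1]]))
        return np.array(y[::-1])

    # Structured-SVM-style subgradient training of the three linear parameters.
    w = np.zeros(3)
    for t in range(100):
        y_hat = viterbi(x, w, y_true)            # most violating (loss-augmented) labeling
        w -= 0.1 * (features(x, y_hat) - features(x, y_true))

    print("learned w:", w)
    print("prediction:", viterbi(x, w), "ground truth:", y_true)

    With intensities as well separated as in this toy, the loss-augmented updates typically drive the structured hinge loss to zero, so the learned weights reproduce the training segmentation; on real images the same scheme would be applied to (relaxed) grid-structured models.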

    Inference And Learning: Computational Difficulty And Efficiency

    In this thesis, we mainly investigate two collections of problems: statistical network inference and model selection in regression. The common feature shared by these two types of problems is that they typically exhibit an interesting phenomenon in terms of computational difficulty and efficiency.

    For statistical network inference, our goal is to infer the network structure based on a noisy observation of the network. Statistically, we model the network as generated from the structural information in the presence of noise, for example by the planted submatrix model (for bipartite weighted graphs), the stochastic block model, and the Watts-Strogatz model. As the relative amount of "signal-to-noise" varies, the problems exhibit different stages of computational difficulty. On the theoretical side, we investigate these stages by characterizing the transition thresholds on the "signal-to-noise" ratio for the aforementioned models. On the methodological side, we provide new computationally efficient procedures to reconstruct the network structure for each model.

    For model selection in regression, our goal is to learn a "good" model from a certain model class based on the observed data sequences (feature and response pairs), when the model can be misspecified. More concretely, we study two model selection problems: learning from general classes of functions based on i.i.d. data with minimal assumptions, and selecting from the sparse linear model class based on possibly adversarially chosen data in a sequential fashion. We develop new theoretical and algorithmic tools beyond empirical risk minimization to study these problems from a learning-theory point of view.
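    As a small, hedged illustration of a computationally efficient reconstruction procedure for one of the models mentioned above (a generic spectral method for the two-community stochastic block model; the parameter values are illustrative and this is not the specific procedure developed in the thesis), the sign pattern of the second eigenvector of the adjacency matrix recovers the planted communities when the signal-to-noise ratio is large enough:

    import numpy as np

    # Two-community stochastic block model: within-community edge probability p,
    # between-community probability q < p (all values here are illustrative).
    rng = np.random.default_rng(1)
    n, p, q = 200, 0.30, 0.05
    labels = np.repeat([0, 1], n // 2)
    prob = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    A = (upper | upper.T).astype(float)        # symmetric adjacency matrix, no self-loops

    # Spectral recovery: the eigenvector of the second-largest eigenvalue of A
    # approximately separates the two planted communities by its sign pattern.
    vals, vecs = np.linalg.eigh(A)             # eigenvalues returned in ascending order
    pred = (vecs[:, -2] > 0).astype(int)

    # Agreement with the planted labels, up to swapping the two community names.
    acc = max(np.mean(pred == labels), np.mean(pred != labels))
    print(f"community recovery accuracy: {acc:.2f}")

    Shrinking the gap between p and q lowers the signal-to-noise ratio, and below the relevant threshold such spectral estimates degrade toward chance; transitions of this kind are what the thesis characterizes theoretically.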