3,507 research outputs found

    Equilibrium Computation and Robust Optimization in Zero Sum Games with Submodular Structure

    We define a class of zero-sum games with combinatorial structure, where the best response problem of one player is to maximize a submodular function. For example, this class includes security games played on networks, as well as the problem of robustly optimizing a submodular function over the worst case from a set of scenarios. The challenge in computing equilibria is that both players' strategy spaces can be exponentially large. Accordingly, previous algorithms have worst-case exponential runtime and indeed fail to scale up on practical instances. We provide a pseudopolynomial-time algorithm which obtains a guaranteed (1 - 1/e)^2-approximate mixed strategy for the maximizing player. Our algorithm only requires access to a weakened version of a best response oracle for the minimizing player which runs in polynomial time. Experimental results for network security games and a robust budget allocation problem confirm that our algorithm delivers near-optimal solutions and scales to much larger instances than was previously possible. Comment: 20 pages, 8 figures. A shorter version of this paper appears at AAAI 201
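    The abstract does not spell out the algorithm, but approximate best responses for monotone submodular objectives are typically built on the classic greedy rule, which attains the (1 - 1/e) factor under a cardinality constraint. Below is a minimal Python sketch of that building block; the function names and the toy coverage objective are illustrative assumptions, not the paper's implementation.

        def greedy_best_response(ground_set, f, k):
            """Greedy (1 - 1/e)-approximate maximizer of a monotone
            submodular set function f under a cardinality constraint k."""
            chosen = set()
            for _ in range(k):
                best, best_gain = None, 0.0
                for e in ground_set - chosen:
                    gain = f(chosen | {e}) - f(chosen)   # marginal gain of e
                    if gain > best_gain:
                        best, best_gain = e, gain
                if best is None:        # no element improves f
                    break
                chosen.add(best)
            return chosen

        # Toy coverage objective: f(S) = number of targets covered by S.
        coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
        f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0
        print(greedy_best_response(set(coverage), f, 2))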

    Infinite factorization of multiple non-parametric views

    Combined analysis of multiple data sources is of increasing interest in applications, in particular for distinguishing shared and source-specific aspects. We extend this rationale of classical canonical correlation analysis to a flexible, generative and non-parametric clustering setting by introducing a novel non-parametric hierarchical mixture model. The lower level of the model describes each source with a flexible non-parametric mixture, and the top level combines these to describe commonalities of the sources. The lower-level clusters arise from hierarchical Dirichlet processes, inducing an infinite-dimensional contingency table between the views. The commonalities between the sources are modeled by an infinite block model of the contingency table, interpretable as non-negative factorization of infinite matrices, or as a prior for infinite contingency tables. With Gaussian mixture components plugged in for continuous measurements, the model is applied to two views of genes, mRNA expression and abundance of the produced proteins, to expose groups of genes that are co-regulated in either or both of the views. Cluster analysis of co-expression is a standard simple way of screening for co-regulation, and the two-view analysis extends the approach to distinguishing between pre- and post-translational regulation.
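    At the lower level, the "infinite" number of clusters comes from the Dirichlet process. A minimal sketch of its Chinese restaurant process representation is below, for a single view only; the paper's hierarchical construction couples such processes across views, which this toy does not attempt, and the parameter names are assumptions for illustration.

        import random

        def crp_assignments(n, alpha, rng=random.Random(0)):
            """Draw cluster labels for n items from a Chinese restaurant
            process with concentration alpha; the number of clusters is
            unbounded and grows with the data."""
            counts, labels = [], []
            for i in range(n):
                # Join existing cluster k with prob counts[k] / (i + alpha),
                # or open a new cluster with prob alpha / (i + alpha).
                weights = counts + [alpha]
                k = rng.choices(range(len(weights)), weights=weights)[0]
                if k == len(counts):
                    counts.append(1)
                else:
                    counts[k] += 1
                labels.append(k)
            return labels

        print(crp_assignments(20, alpha=1.0))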

    Efficient algorithms for tensor scaling, quantum marginals and moment polytopes

    We present a polynomial time algorithm to approximately scale tensors of any format to arbitrary prescribed marginals (whenever possible). This unifies and generalizes a sequence of past works on matrix, operator and tensor scaling. Our algorithm provides an efficient weak membership oracle for the associated moment polytopes, an important family of implicitly-defined convex polytopes with exponentially many facets and a wide range of applications. These include the entanglement polytopes from quantum information theory (in particular, we obtain an efficient solution to the notorious one-body quantum marginal problem) and the Kronecker polytopes from representation theory (which capture the asymptotic support of Kronecker coefficients). Our algorithm can be applied to succinct descriptions of the input tensor whenever the marginals can be efficiently computed, as in the important case of matrix product states or tensor-train decompositions, widely used in computational physics and numerical mathematics. We strengthen and generalize the alternating minimization approach of previous papers by introducing the theory of highest weight vectors from representation theory into the numerical optimization framework. We show that highest weight vectors are natural potential functions for scaling algorithms and prove new bounds on their evaluations to obtain polynomial-time convergence. Our techniques are general and we believe that they will be instrumental in obtaining efficient algorithms for moment polytopes beyond the ones considered here and, more broadly, for other optimization problems possessing natural symmetries.
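    For the matrix special case mentioned in the abstract, alternating scaling to prescribed marginals is the classical Sinkhorn iteration. The sketch below shows only that special case, as a point of reference; the paper's contribution is the generalization to tensors of any format, which this snippet does not implement.

        import numpy as np

        def scale_to_marginals(A, r, c, iters=500, tol=1e-9):
            """Alternately rescale rows and columns of a positive matrix A
            so its row sums approach r and its column sums approach c
            (requires r.sum() == c.sum())."""
            A = A.astype(float).copy()
            for _ in range(iters):
                A *= (r / A.sum(axis=1))[:, None]   # match row marginals
                A *= (c / A.sum(axis=0))[None, :]   # match column marginals
                if np.abs(A.sum(axis=1) - r).max() < tol:
                    break
            return A

        A = np.array([[1.0, 2.0], [3.0, 4.0]])
        r = np.array([1.0, 1.0])    # target row sums
        c = np.array([0.5, 1.5])    # target column sums, same total mass
        S = scale_to_marginals(A, r, c)
        print(S.sum(axis=1), S.sum(axis=0))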

    Bayesian learning of noisy Markov decision processes

    We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking, a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. The sampler includes a parameter-expansion step, which is shown to be essential for good convergence properties of the MCMC sampler. As an illustration, the method is applied to learning a human controller.
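    The abstract does not give the sampler's details, so the following is only a generic random-walk Metropolis step on a toy one-dimensional posterior, to illustrate the MCMC machinery the paper builds on; the paper's actual sampler, including its parameter-expansion step, is more elaborate.

        import math, random

        def metropolis(log_post, x0, n, step=0.5, rng=random.Random(1)):
            """Random-walk Metropolis sampler for a 1-D target given by
            its log-density log_post."""
            x, samples = x0, []
            for _ in range(n):
                prop = x + rng.gauss(0.0, step)
                # Accept with probability min(1, post(prop) / post(x)).
                if math.log(rng.random()) < log_post(prop) - log_post(x):
                    x = prop
                samples.append(x)
            return samples

        # Toy target: standard normal log-density (up to a constant).
        draws = metropolis(lambda t: -0.5 * t * t, 0.0, 5000)
        print(sum(draws) / len(draws))   # should be close to 0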

    Aggregation functions: an approach using copulae

    In this paper we extend the copula approach to aggregation functions. We focus on a class of aggregation functions and present them in multilinear form with marginal copulae. Moreover, we define the joint aggregation density function.
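    As background (a standard fact, not the paper's own construction), Sklar's theorem is what makes marginal copulae available: any joint distribution function H with marginals F_1, ..., F_n factors as

        H(x_1, \dots, x_n) = C\bigl(F_1(x_1), \dots, F_n(x_n)\bigr)

    for some copula C : [0,1]^n -> [0,1]. The paper's multilinear representation of aggregation functions uses marginal copulae of this kind.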

    The Limits of Diversification When Losses May Be Large

    Recent results in value at risk analysis show that, for extremely heavy-tailed risks with unbounded distribution support, diversification may increase value at risk, and that, generally, it is difficult to construct an appropriate risk measure for such distributions. We further analyze the limitations of diversification for heavy-tailed risks. We provide additional insight in two ways. First, we show that similar nondiversification results are valid for a large class of risks with bounded support, as long as the risks are concentrated on a sufficiently large interval. The required length of the support depends on the number of risks available and on the degree of heavy-tailedness. Second, we relate the value at risk approach to more general risk frameworks. We argue that in financial markets where the number of assets is limited compared with the (bounded) distributional support of the risks, unbounded heavy-tailed risks may provide a reasonable approximation. We suggest that this type of analysis may have a role in explaining various types of market failures in markets for assets with possibly large negative outcomes.
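    A minimal Monte Carlo sketch of the nondiversification effect, assuming numpy and illustrative choices (Pareto losses with tail index 0.8, so the mean is infinite, and the 99% level): with such extremely heavy tails, spreading a position over ten independent assets typically raises the empirical value at risk rather than lowering it.

        import numpy as np

        rng = np.random.default_rng(0)

        def var(losses, level=0.99):
            """Empirical value at risk: the level-quantile of the losses."""
            return np.quantile(losses, level)

        # Extremely heavy-tailed losses: Pareto with tail index 0.8.
        n_sims, n_assets, alpha = 200_000, 10, 0.8
        single = rng.pareto(alpha, n_sims)
        portfolio = rng.pareto(alpha, (n_sims, n_assets)).mean(axis=1)

        print("VaR, one asset:            ", var(single))
        print("VaR, spread over 10 assets:", var(portfolio))   # typically larger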