
    Non-Smooth, H\"older-Smooth, and Robust Submodular Maximization

    We study the problem of maximizing a continuous DR-submodular function that is not necessarily smooth. We prove that the continuous greedy algorithm achieves a (1 - 1/e)·OPT - ε guarantee when the function is monotone and Hölder-smooth, meaning that it admits a Hölder-continuous gradient. For functions that are non-differentiable or non-smooth, we propose a variant of the mirror-prox algorithm that attains a (1/2)·OPT - ε guarantee. We apply our algorithmic frameworks to robust submodular maximization and to distributionally robust submodular maximization under Wasserstein ambiguity. In particular, the mirror-prox method applied to robust submodular maximization yields a single feasible solution whose value is at least (1/2)·OPT - ε. For distributionally robust maximization under Wasserstein ambiguity, we derive and work with a submodular-convex maximin reformulation whose objective function is Hölder-smooth, to which both the continuous greedy and the mirror-prox algorithms can be applied.
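    As a rough illustration of the continuous greedy scheme mentioned in this abstract, the sketch below runs a Frank-Wolfe-style loop on a toy monotone DR-submodular objective under a cardinality constraint. The objective F(x) = Σ_i w_i(1 - exp(-x_i)), the constraint polytope, and the linear maximization oracle are illustrative assumptions, not the paper's setting or its non-smooth analysis.

```python
import numpy as np

def continuous_greedy(grad_F, lmo, n, T=100):
    """Frank-Wolfe-style continuous greedy for monotone DR-submodular maximization.

    grad_F: callable returning the gradient of F at x
    lmo:    linear maximization oracle over the (down-closed) constraint polytope P
    """
    x = np.zeros(n)
    for _ in range(T):
        v = lmo(grad_F(x))   # best feasible direction w.r.t. the current gradient
        x = x + v / T        # small step toward v; after T steps x lies in P
    return x

# Hypothetical instance: F(x) = sum_i w_i * (1 - exp(-x_i)), which is monotone
# DR-submodular, maximized over {0 <= x <= 1, sum(x) <= k}.
rng = np.random.default_rng(0)
n, k = 10, 3
w = rng.uniform(1.0, 2.0, size=n)

grad_F = lambda x: w * np.exp(-x)

def lmo(g):
    # maximize <g, v> over the cardinality polytope: put weight 1 on the k largest coordinates
    v = np.zeros_like(g)
    v[np.argsort(-g)[:k]] = 1.0
    return v

x_hat = continuous_greedy(grad_F, lmo, n)
print("objective value:", np.sum(w * (1 - np.exp(-x_hat))))
```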

    Melding the Data-Decisions Pipeline: Decision-Focused Learning for Combinatorial Optimization

    Creating impact in real-world settings requires artificial intelligence techniques to span the full pipeline from data, to predictive models, to decisions. These components are typically approached separately: a machine learning model is first trained via a measure of predictive accuracy, and then its predictions are used as input to an optimization algorithm which produces a decision. However, the loss function used to train the model may easily be misaligned with the end goal, which is to make the best decisions possible. Hand-tuning the loss function to align with optimization is a difficult and error-prone process (which is often skipped entirely). We focus on combinatorial optimization problems and introduce a general framework for decision-focused learning, where the machine learning model is trained directly in conjunction with the optimization algorithm to produce high-quality decisions. Technically, our contribution is a means of integrating common classes of discrete optimization problems into deep learning or other predictive models, which are typically trained via gradient descent. The main idea is to use a continuous relaxation of the discrete problem to propagate gradients through the optimization procedure. We instantiate this framework for two broad classes of combinatorial problems: linear programs and submodular maximization. Experimental results across a variety of domains show that decision-focused learning often leads to improved optimization performance compared to traditional methods. We find that standard measures of accuracy are not a reliable proxy for a predictive model's utility in optimization, and our method's ability to specify the true goal as the model's training objective yields substantial dividends across a range of decision problems. Comment: Full version of paper accepted at AAAI 2019.
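    To make the idea of propagating gradients through a continuous relaxation concrete, here is a minimal, generic sketch in Python (PyTorch) that trains a predictive model directly on decision quality using a soft top-k selection. The linear model, synthetic data, and softmax relaxation are assumptions chosen for brevity; they are not the QP or multilinear relaxations used by the paper itself.

```python
import torch

# Hypothetical setup: each item has features, a linear model predicts its value,
# and the downstream decision selects k items to maximize total true value.
torch.manual_seed(0)
n_items, n_feats, k = 20, 5, 4
X = torch.randn(n_items, n_feats)
true_w = torch.randn(n_feats)
true_vals = X @ true_w                        # known on training instances only

model = torch.nn.Linear(n_feats, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for step in range(200):
    pred_vals = model(X).squeeze(-1)
    # Continuous relaxation of "pick the top-k items": a temperature-scaled softmax
    # gives differentiable selection weights, so gradients flow back into the model.
    sel = k * torch.softmax(pred_vals / 0.1, dim=0)
    decision_value = (sel * true_vals).sum()  # decision quality under the true values
    loss = -decision_value                    # train directly on decision quality
    opt.zero_grad()
    loss.backward()
    opt.step()

# Hard decision at evaluation time: take the actual top-k by predicted value.
topk = torch.topk(model(X).squeeze(-1), k).indices
print("realized decision value:", true_vals[topk].sum().item())
```

    A standard two-stage baseline would instead fit the model with a squared-error loss on the item values and only then optimize; the sketch above differs only in that the training loss is the (relaxed) decision objective itself.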