    Dynamic Programming on Nominal Graphs

    Many optimization problems can be naturally represented as (hyper)graphs, where vertices correspond to variables and edges to tasks whose cost depends on the values of the adjacent variables. Capitalizing on the structure of the graph, suitable dynamic programming strategies can select orders of evaluation of the variables which guarantee both an optimal solution and a minimal size of the tables computed in the optimization process. In this paper we introduce a simple algebraic specification with parallel composition and restriction whose terms, up to structural axioms, are the graphs mentioned above. In addition, free (unrestricted) vertices are labelled with variables, and the specification includes operations of name permutation with finite support. We show a correspondence between the well-known tree decompositions of graphs and our terms. If an axiom of scope extension is dropped, several (hierarchical) terms may correspond to the same graph, and a suitable graphical structure can be found corresponding to every hierarchical term. Evaluating such a graphical structure in some target algebra yields a dynamic programming strategy. If the target algebra satisfies the scope extension axiom, then the result does not depend on the particular structure but only on the original graph. We apply our approach to the parking optimization problem developed in the ASCENS e-mobility case study, in collaboration with Volkswagen. Dynamic programming evaluations are particularly interesting for autonomic systems, where actual behavior often consists of propagating local knowledge to obtain global knowledge and feeding it back for local decisions.
    Comment: In Proceedings GaM 2015, arXiv:1504.0244
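    The table-based strategy the abstract alludes to is essentially variable elimination over a cost (hyper)graph: eliminating a variable collapses the tables that mention it into one new table over its neighbours, and a good elimination order (for example, one read off a tree decomposition) keeps those tables small. Below is a minimal Python sketch of that underlying idea with invented toy data; it does not reproduce the paper's algebraic specification or nominal-graph machinery.

```python
import itertools

# A cost term is (scope, table): scope is a tuple of variable names and
# table maps an assignment of those variables (a tuple of values) to a cost.

def eliminate(terms, var, domain):
    """Eliminate `var` by minimization, producing one new table over its neighbours."""
    touching = [t for t in terms if var in t[0]]
    rest = [t for t in terms if var not in t[0]]
    new_scope = tuple(sorted({v for scope, _ in touching for v in scope} - {var}))
    new_table = {}
    for assignment in itertools.product(*(domain[v] for v in new_scope)):
        env = dict(zip(new_scope, assignment))
        new_table[assignment] = min(
            sum(table[tuple({**env, var: val}[v] for v in scope)]
                for scope, table in touching)
            for val in domain[var]
        )
    return rest + [(new_scope, new_table)]

# Toy instance: minimize c1(x, y) + c2(y, z) over binary variables.
domain = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
terms = [(("x", "y"), {(0, 0): 1, (0, 1): 3, (1, 0): 0, (1, 1): 2}),
         (("y", "z"), {(0, 0): 2, (0, 1): 0, (1, 0): 1, (1, 1): 4})]

for v in ("x", "z", "y"):          # an elimination order, e.g. read off a tree decomposition
    terms = eliminate(terms, v, domain)
print(terms[0][1][()])             # optimal total cost: 0
```

    Eliminating x, z and finally y leaves a single empty-scope table holding the optimal total cost; the sizes of the intermediate tables are exactly what a good evaluation order keeps small.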

    Constraint-based protocols for distributed problem solving

    Distributed Problem Solving (DPS) approaches decompose problems into subproblems to be solved by interacting, cooperative software agents. DPS is therefore well suited to problems characterized by many interdependencies among subproblems in the context of parallel and distributed architectures. Concurrent Constraint Programming (CCP) provides a powerful execution framework for DPS, in which constraints declaratively define both local problem solving and the exchange of information among agents. To optimize DPS, the protocol for constraint communication must be tuned to the specific kind of DPS problem and to the characteristics of the underlying system architecture. In this paper we provide a formal framework for modeling different problems and show how the framework applies to simple yet generalizable examples.
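    As a rough illustration only (not the paper's formal framework), the CCP style of coordination can be pictured as agents posting ("telling") constraints on shared variables into a common store and later "asking" the store for the accumulated information before taking local decisions. The sketch below uses simple interval constraints and invented names.

```python
# Minimal CCP-flavoured sketch: a shared store of interval constraints.
class Store:
    def __init__(self):
        self.bounds = {}                                   # var -> (lo, hi)

    def tell(self, var, lo, hi):
        """Add (conjoin) an interval constraint: conjunction = interval intersection."""
        cur_lo, cur_hi = self.bounds.get(var, (float("-inf"), float("inf")))
        self.bounds[var] = (max(cur_lo, lo), min(cur_hi, hi))

    def ask(self, var):
        """Read the information accumulated so far about `var`."""
        return self.bounds.get(var, (float("-inf"), float("inf")))

store = Store()
store.tell("x", 0, 10)     # agent A's local subproblem restricts x
store.tell("x", 5, 20)     # agent B's subproblem restricts it too
print(store.ask("x"))      # (5, 10): combined knowledge both agents can now act on
```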

    Learning with Clustering Structure

    We study supervised learning problems using clustering constraints to impose structure on either features or samples, seeking to help both prediction and interpretation. The problem of clustering features arises naturally in text classification, for instance, to reduce dimensionality by grouping words together and identifying synonyms. The sample clustering problem, on the other hand, applies to multiclass problems where we are allowed to make multiple predictions and the performance of the best answer is recorded. We derive a unified optimization formulation highlighting the common structure of these problems and produce algorithms whose core iteration complexity amounts to a k-means clustering step, which can be approximated efficiently. We extend these results to combine sparsity and clustering constraints, and develop a new projection algorithm onto the set of clustered sparse vectors. We prove convergence of our algorithms on random instances, based on a union-of-subspaces interpretation of the clustering structure. Finally, we test the robustness of our methods on artificial data sets as well as real data extracted from movie reviews.
    Comment: Completely rewritten. New convergence proofs in the clustered and sparse clustered case. New projection algorithm on sparse clustered vectors
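    One way to picture the clustering-only projection step (ignoring sparsity) is as forcing a vector to take at most k distinct coordinate values, which reduces to a one-dimensional k-means problem over its entries. The sketch below illustrates that reading using scikit-learn's KMeans; it is not the authors' projection algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def project_clustered(w, k, seed=0):
    """Approximate projection of w onto vectors with at most k distinct entries:
    cluster the coordinates (1-D k-means) and replace each one by its cluster mean."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(w.reshape(-1, 1))
    return km.cluster_centers_[km.labels_].ravel()

w = np.array([0.11, 0.09, 0.52, 0.48, 0.95])
print(project_clustered(w, k=3))   # coordinates collapse onto 3 shared values
```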

    Detection of permutation symmetries in numerical constraint satisfaction problems

    Technical report. This technical report summarizes the work done by Ieva Dauzickaite during her traineeship at IRI from 2016-10-03 to 2017-03-31. The main objectives of the stay at IRI were to study the detection of variable symmetries in numerical constraint satisfaction problems and to implement a method that detects such symmetries.
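    The report's method for numerical CSPs is not reproduced here, but the basic notion of a variable (permutation) symmetry can be illustrated directly: two variables are interchangeable when swapping them maps the constraint set onto itself. A small sketch with SymPy and an invented toy CSP:

```python
from itertools import combinations
from sympy import symbols

x, y, z = symbols("x y z")

# Toy numerical CSP: each expression is required to be <= 0.
constraints = {x**2 + y**2 - 1, x + y - z, z - 2}

def interchangeable(a, b, constraints):
    """True if swapping variables a and b maps the constraint set onto itself."""
    swapped = {c.subs({a: b, b: a}, simultaneous=True) for c in constraints}
    return swapped == constraints

for a, b in combinations([x, y, z], 2):
    if interchangeable(a, b, constraints):
        print(f"{a} and {b} are interchangeable")   # prints: x and y are interchangeable
```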

    BQ-NCO: Bisimulation Quotienting for Efficient Neural Combinatorial Optimization

    Despite the success of neural-based combinatorial optimization methods for end-to-end heuristic learning, out-of-distribution generalization remains a challenge. In this paper, we present a novel formulation of Combinatorial Optimization Problems (COPs) as Markov Decision Processes (MDPs) that effectively leverages common symmetries of COPs to improve out-of-distribution robustness. Starting from a direct MDP formulation of a constructive method, we introduce a generic way to reduce the state space, based on Bisimulation Quotienting (BQ) in MDPs. Then, for COPs with a recursive nature, we specialize the bisimulation and show how the reduced state exploits the symmetries of these problems and facilitates MDP solving. Our approach is principled, and we prove that an optimal policy for the proposed BQ-MDP actually solves the associated COPs. We illustrate our approach on five classical problems: the Euclidean and Asymmetric Traveling Salesman, Capacitated Vehicle Routing, Orienteering and Knapsack Problems. Furthermore, for each problem, we introduce a simple attention-based policy network for the BQ-MDPs, which we train by imitation of (near-)optimal solutions of small instances from a single distribution. We obtain new state-of-the-art results for the five COPs on both synthetic and realistic benchmarks. Notably, in contrast to most existing neural approaches, our learned policies show excellent generalization to much larger instances than those seen during training, without any additional search procedure.
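    For the travelling salesman case, the intuition behind the quotiented state can be seen in a plain dynamic program: after any partial tour from a fixed start, the optimal completion depends only on the last visited city and the set of cities still to visit, not on the order of the prefix. The sketch below (an exact Held-Karp-style recursion on a toy instance, not the paper's learned policy) makes that reduced state explicit.

```python
from functools import lru_cache
import math

cities = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]   # toy Euclidean instance

def dist(i, j):
    return math.dist(cities[i], cities[j])

# Naive constructive state: the full partial tour (prefix of visited cities).
# Quotiented state: only what the future cost depends on -- the last visited
# city and the frozenset of cities still to visit (start fixed at city 0).

@lru_cache(maxsize=None)
def best_completion(last, remaining):
    """Optimal cost to visit all of `remaining` and return to city 0, starting from `last`."""
    if not remaining:
        return dist(last, 0)
    return min(dist(last, nxt) + best_completion(nxt, remaining - frozenset({nxt}))
               for nxt in remaining)

print(best_completion(0, frozenset(range(1, len(cities)))))  # optimal tour length
```

    The paper's contribution is to learn a policy over such reduced states rather than to enumerate them exactly; the recursion above only shows why the reduction loses no information.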

    Decomposition, Reformulation, and Diving in University Course Timetabling

    In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource. Often, each component is associated with a different set of soft constraints, and so with a different measure of soft constraint violation. The goal is then to minimise a linear combination of such measures. This paper studies an approach to such problems that can be thought of as multiphase exploitation of multiple objective-/value-restricted submodels. In this approach, only one computationally difficult component of the problem and the associated subset of objectives are considered at first. This produces partial solutions, which define interesting neighbourhoods in the search space of the complete problem. Often, it is possible to pick the initial component so that variable aggregation can be performed at the first stage, and the neighbourhoods to be explored next are guaranteed to contain feasible solutions. Using integer programming, it is then easy to implement heuristics producing solutions with bounds on their quality. Our study is performed on a university course timetabling problem used in the 2007 International Timetabling Competition, also known as the Udine Course Timetabling Problem. In the proposed heuristic, an objective-restricted neighbourhood generator produces assignments of periods to events, with decreasing numbers of violations of two period-related soft constraints. These assignments are relaxed into assignments of events to days, which define neighbourhoods that are easier to search with respect to all four soft constraints. Integer programming formulations for all subproblems are given and evaluated using ILOG CPLEX 11. The wider applicability of this approach is analysed and discussed.
    Comment: 45 pages, 7 figures. Improved typesetting of figures and tables
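    A drastically simplified sketch of the first-stage, objective-restricted submodel idea: assign events to periods under hard conflicts while counting only a single period-related soft penalty, then read off the implied event-to-day assignment that would seed the later neighbourhood search. PuLP with CBC stands in for the paper's CPLEX 11 formulations, and all events, conflicts and penalties below are invented for illustration.

```python
import pulp

events = ["e1", "e2", "e3", "e4"]
periods = range(8)                           # toy horizon: 2 days x 4 periods per day
conflicts = [("e1", "e2"), ("e3", "e4")]     # pairs that must not share a period

x = pulp.LpVariable.dicts("x", (events, periods), cat="Binary")
model = pulp.LpProblem("period_stage", pulp.LpMinimize)

# Objective restricted to one period-related soft penalty: later periods cost more.
model += pulp.lpSum(p * x[e][p] for e in events for p in periods)

for e in events:                             # every event is scheduled exactly once
    model += pulp.lpSum(x[e][p] for p in periods) == 1
for p in periods:                            # hard conflicts: no shared period
    for a, b in conflicts:
        model += x[a][p] + x[b][p] <= 1

model.solve(pulp.PULP_CBC_CMD(msg=False))
period_of = {e: next(p for p in periods if x[e][p].value() > 0.5) for e in events}
day_of = {e: p // 4 for e, p in period_of.items()}   # relaxation to days seeds the next stage
print(period_of, day_of)
```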

    A Scalable Algorithm For Sparse Portfolio Selection

    The sparse portfolio selection problem is one of the most famous and frequently studied problems in the optimization and financial economics literatures. In a universe of risky assets, the goal is to construct a portfolio with maximal expected return and minimum variance, subject to an upper bound on the number of positions, linear inequalities, and minimum investment constraints. Existing certifiably optimal approaches to this problem do not converge within a practical amount of time at real-world problem sizes with more than 400 securities. In this paper, we propose a more scalable approach. By imposing a ridge regularization term, we reformulate the problem as a convex binary optimization problem, which is solvable via an efficient outer-approximation procedure. We propose various techniques for improving the performance of the procedure, including a heuristic that supplies high-quality warm starts, a preprocessing technique for decreasing the gap at the root node, and an analytic technique for strengthening our cuts. We also study the problem's Boolean relaxation, establish that it is second-order-cone representable, and supply a sufficient condition for its tightness. In numerical experiments, we establish that the outer-approximation procedure gives rise to dramatic speedups for sparse portfolio selection problems.
    Comment: Submitted to INFORMS Journal on Computing
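    Read literally, the abstract describes a cardinality-constrained mean-variance problem with a ridge term. One plausible way to write it (the notation, the risk-aversion weight lambda, the ridge weight gamma and the exact placement of the regularizer are assumptions here, not taken from the paper) is:

```latex
\max_{w \in \mathbb{R}^n} \;\; \mu^{\top} w \;-\; \lambda\, w^{\top} \Sigma w \;-\; \frac{1}{2\gamma}\,\lVert w \rVert_2^2
\quad \text{s.t.} \quad \mathbf{1}^{\top} w = 1, \quad A w \le b, \quad
\lVert w \rVert_0 \le k, \quad w_i \in \{0\} \cup [\ell_i, u_i] \;\; \forall i
```

    A standard route to the binary reformulation the abstract mentions is to add indicators z_i in {0,1} with l_i z_i <= w_i <= u_i z_i and sum_i z_i <= k, and run the outer-approximation procedure over z; whether this matches the paper's exact construction is not claimed here.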