    Budget-restricted utility games with ordered strategic decisions

    We introduce the concept of budget games. Players choose a set of tasks, and each task has a certain demand on every resource in the game. Each resource has a budget. If the budget does not suffice to satisfy the sum of all demands, it has to be shared between the tasks. We study strategic budget games, where the budget is shared proportionally. We also consider a variant in which the order of the strategic decisions influences the distribution of the budgets. The complexity of the optimal solution as well as the existence, complexity and quality of equilibria are analyzed. Finally, we show that the time an ordered budget game needs to converge to an equilibrium may be exponential.
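
    A minimal sketch (not from the paper) of the proportional sharing rule described above: when the total demand on a resource exceeds its budget, each task receives a share proportional to its demand. The function name and setup are illustrative.

        def allocate(budget, demands):
            # Proportionally share a resource's budget among task demands.
            # If the budget covers the total demand, every task is fully
            # served; otherwise task i receives budget * demands[i] / total.
            total = sum(demands)
            if total <= budget:
                return list(demands)
            return [budget * d / total for d in demands]

        # Example: budget 10, demands [4, 6, 10] -> shares [2.0, 3.0, 5.0]
        print(allocate(10, [4, 6, 10]))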

    Stochastic make-to-stock inventory deployment problem: an endosymbiotic psychoclonal algorithm based approach

    Integrated steel manufacturers (ISMs) have no specific product; they produce finished products directly from ore. This increases the uncertainty prevailing in an ISM regarding the nature of the finished product and the demand from customers. At present, low-cost mini-mills are giving firm competition to ISMs in terms of cost, and this has compelled the ISM industry to target customers who want exotic products and fast, reliable deliveries. To meet this objective, ISMs are exploring the option of satisfying part of their demand by converting strategically placed semi-finished products, which helps increase the variety of products the ISM can produce within a short lead time. In this paper the authors propose a new hybrid evolutionary algorithm named endosymbiotic-psychoclonal (ESPC) to decide what and how much to stock as semi-product inventory. In the proposed approach, the ability of the previously proposed psychoclonal algorithm to exploit the search space is increased by making antibodies and antigens more cooperative, interacting species. The efficacy of the proposed algorithm is tested on randomly generated datasets and the results are compared with other evolutionary algorithms such as genetic algorithms (GA) and simulated annealing (SA). The comparison of ESPC with GA and SA demonstrates the superiority of the proposed algorithm both in terms of the quality of the solution obtained and the convergence time required to reach the optimal/near-optimal solution value.
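
    For orientation, a generic clonal-selection skeleton (CLONALG-style) of the immune-algorithm family that psychoclonal approaches build on; this is an illustrative sketch only, and the paper's endosymbiotic and psychoclonal operators are not reproduced here.

        import random

        def clonal_selection(fitness, dim, pop=20, clones=5, gens=100, seed=0):
            # Keep a population of antibodies, clone the fitter ones and
            # mutate the clones; worse-ranked parents mutate more strongly.
            rng = random.Random(seed)
            ab = [[rng.random() for _ in range(dim)] for _ in range(pop)]
            for _ in range(gens):
                ab.sort(key=fitness)  # lower fitness value is better
                new = []
                for rank, a in enumerate(ab[:pop // 2]):
                    for _ in range(clones):
                        step = 0.1 * (rank + 1)
                        new.append([x + rng.gauss(0, step) for x in a])
                ab = sorted(ab + new, key=fitness)[:pop]
            return ab[0]

        # Toy usage: minimise the sphere function in three dimensions
        print(clonal_selection(lambda v: sum(x * x for x in v), dim=3))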

    ForestHash: Semantic Hashing With Shallow Random Forests and Tiny Convolutional Networks

    Hash codes are efficient data representations for coping with the ever-growing amounts of data. In this paper, we introduce a random forest semantic hashing scheme that embeds tiny convolutional neural networks (CNN) into shallow random forests, with near-optimal information-theoretic code aggregation among trees. We start with a simple hashing scheme, where random trees in a forest act as hashing functions by setting `1' for the visited tree leaf, and `0' for the rest. We show that traditional random forests fail to generate hashes that preserve the underlying similarity between the trees, rendering the random forests approach to hashing challenging. To address this, we propose to first randomly group arriving classes at each tree split node into two groups, obtaining a significantly simplified two-class classification problem, which can be handled using a light-weight CNN weak learner. Such a random class grouping scheme enables code uniqueness by enforcing each class to share its code with different classes in different trees. A non-conventional low-rank loss is further adopted for the CNN weak learners to encourage code consistency, by minimizing intra-class variations and maximizing inter-class distance for the two random class groups. Finally, we introduce an information-theoretic approach for aggregating codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. The proposed approach significantly outperforms state-of-the-art hashing methods for image retrieval tasks on large-scale public datasets, while performing at the level of other state-of-the-art image classification techniques and using a more compact and efficiently scalable representation. This work proposes a principled and robust procedure to train and deploy in parallel an ensemble of light-weight CNNs, instead of simply going deeper.
    Comment: Accepted to ECCV 201
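
    A toy sketch (illustrative only, not the authors' code) of the simple baseline scheme described above: each tree hashes a sample by setting `1' for the leaf it reaches and `0' for the rest, and the per-tree codes are concatenated.

        def leaf_onehot_hash(sample, trees, leaves_per_tree):
            # `trees` is a list of functions mapping a sample to a leaf
            # index; real random forests would play this role in practice.
            code = []
            for tree in trees:
                leaf = tree(sample)
                code += [1 if i == leaf else 0 for i in range(leaves_per_tree)]
            return code

        # Two stub "trees" with 4 leaves each, routing by simple thresholds
        trees = [lambda x: 0 if x < 0.5 else 3, lambda x: 1 if x < 0.2 else 2]
        print(leaf_onehot_hash(0.1, trees, 4))  # [1,0,0,0, 0,1,0,0]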

    Satisfiability Modulo Transcendental Functions via Incremental Linearization

    In this paper we present an abstraction-refinement approach to Satisfiability Modulo the theory of transcendental functions, such as exponentiation and trigonometric functions. The transcendental functions are represented as uninterpreted in the abstract space, which is described in terms of the combined theory of linear arithmetic on the rationals with uninterpreted functions, and are incrementally axiomatized by means of upper- and lower-bounding piecewise-linear functions. Suitable numerical techniques are used to ensure that the abstractions of the transcendental functions are sound even in the presence of irrationals. Our experimental evaluation on benchmarks from verification and mathematics demonstrates the potential of our approach, showing that it compares favorably with delta-satisfiability/interval propagation and methods based on theorem proving.
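
    A minimal sketch of the piecewise-linear bounding idea (generic, not the authors' implementation): for a convex function such as exp, tangent lines are sound lower bounds and secants are sound upper bounds on an interval, and refinement adds new bounds at the points where the abstract model was found to be spurious.

        import math

        def tangent_lb(c):
            # exp is convex, so its tangent at c lies below the curve.
            return lambda x: math.exp(c) + math.exp(c) * (x - c)

        def secant_ub(a, b):
            # The secant over [a, b] lies above a convex curve there.
            slope = (math.exp(b) - math.exp(a)) / (b - a)
            return lambda x: math.exp(a) + slope * (x - a)

        # If the abstract model claims exp(0.5) = 2.0, the secant over
        # [0, 1] already refutes it: ub(0.5) ~ 1.859 < 2.0.
        lb, ub = tangent_lb(0.5), secant_ub(0.0, 1.0)
        print(lb(0.5) <= math.exp(0.5) <= ub(0.5))  # True: sound bounds
        print(ub(0.5) < 2.0)                        # True: claim refuted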

    On the equivalence of strong formulations for capacitated multi-level lot sizing problems with setup times

    Several mixed integer programming formulations have been proposed for modeling capacitated multi-level lot sizing problems with setup times. These formulations include the so-called facility location formulation, the shortest route formulation, and the inventory and lot sizing formulation with (l,S) inequalities. In this paper, we demonstrate the equivalence of these formulations when the integrality requirement is relaxed for any subset of binary setup decision variables. This equivalence has significant implications for decomposition-based methods, since the same optimal solution values are obtained no matter which formulation is used. In particular, we discuss the relax-and-fix method, a decomposition-based heuristic used for the efficient solution of hard lot sizing problems. Computational tests allow us to compare the effectiveness of different formulations using benchmark problems. The choice of formulation directly affects the required computational effort, and our results therefore provide guidelines on choosing an effective formulation during the development of heuristic-based solution procedures.
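
    For reference, a sketch of the (l,S) inequalities in their standard single-item, uncapacitated form (the paper's capacitated multi-level setting generalizes this), where x_t is production, y_t the binary setup indicator in period t, and d_{t,l} the cumulative demand of periods t through l:

        \sum_{t \in S} x_t \;+\; \sum_{t \in \{1,\dots,l\} \setminus S} d_{t,l}\, y_t \;\ge\; d_{1,l}
        \qquad \forall\, l,\ \forall\, S \subseteq \{1,\dots,l\},
        \quad \text{with } d_{t,l} = \sum_{u=t}^{l} d_u .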

    Exact Ground States of Large Two-Dimensional Planar Ising Spin Glasses

    Studying spin-glass physics by analyzing ground-state properties has a long history. Although there exist polynomial-time algorithms for the two-dimensional planar case, where the problem of finding ground states is transformed into a minimum-weight perfect matching problem, the reachable system sizes have been limited both by the needed CPU time and by memory requirements. In this work, we present an algorithm for the calculation of exact ground states for two-dimensional Ising spin glasses with free boundary conditions in at least one direction. The algorithmic foundations of the method date back to the work of Kasteleyn from the 1960s for computing the complete partition function of the Ising model. Using Kasteleyn cities, we calculate exact ground states for huge two-dimensional planar Ising spin-glass lattices (up to 3000x3000 spins) within reasonable time. To our knowledge, these are the largest sizes currently available. Kasteleyn cities were recently also used by Thomas and Middleton in the context of extended ground states on the torus. Moreover, they show that the method can also be used for computing ground states of planar graphs. Furthermore, we point out that the correctness of heuristically computed ground states can easily be verified. Finally, we evaluate the solution quality of heuristic variants of the Bieche et al. approach.
    Comment: 11 pages, 5 figures; shortened introduction, extended results; to appear in Physical Review E 7
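
    A small illustrative helper (not from the paper): evaluating the Ising energy of a candidate spin configuration on a 2D lattice with free boundaries, which is all that is needed to compare a heuristic state against an exact ground state.

        def ising_energy(spins, Jh, Jv):
            # H = -sum over nearest-neighbour bonds of J_ij * s_i * s_j on an
            # L x L grid with free boundaries; Jh[i][j] couples (i,j)-(i,j+1)
            # and Jv[i][j] couples (i,j)-(i+1,j). Spins are +1 or -1.
            L = len(spins)
            E = 0.0
            for i in range(L):
                for j in range(L):
                    if j + 1 < L:
                        E -= Jh[i][j] * spins[i][j] * spins[i][j + 1]
                    if i + 1 < L:
                        E -= Jv[i][j] * spins[i][j] * spins[i + 1][j]
            return E

        # 2x2 example, unit ferromagnetic couplings: aligned spins give -4
        ones = [[1.0, 1.0], [1.0, 1.0]]
        print(ising_energy([[1, 1], [1, 1]], ones, ones))  # -4.0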

    Decomposition, Reformulation, and Diving in University Course Timetabling

    In many real-life optimisation problems, there are multiple interacting components in a solution. For example, different components might specify assignments to different kinds of resource. Often, each component is associated with different sets of soft constraints, and so with different measures of soft constraint violation. The goal is then to minimise a linear combination of such measures. This paper studies an approach to such problems, which can be thought of as multiphase exploitation of multiple objective-/value-restricted submodels. In this approach, only one computationally difficult component of a problem and the associated subset of objectives is considered at first. This produces partial solutions, which define interesting neighbourhoods in the search space of the complete problem. Often, it is possible to pick the initial component so that variable aggregation can be performed at the first stage, and the neighbourhoods to be explored next are guaranteed to contain feasible solutions. Using integer programming, it is then easy to implement heuristics producing solutions with bounds on their quality. Our study is performed on a university course timetabling problem used in the 2007 International Timetabling Competition, also known as the Udine Course Timetabling Problem. In the proposed heuristic, an objective-restricted neighbourhood generator produces assignments of periods to events, with decreasing numbers of violations of two period-related soft constraints. These are relaxed into assignments of events to days, which define neighbourhoods that are easier to search with respect to all four soft constraints. Integer programming formulations for all subproblems are given and evaluated using ILOG CPLEX 11. The wider applicability of this approach is analysed and discussed.
    Comment: 45 pages, 7 figures. Improved typesetting of figures and table
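
    An illustrative fragment (not the paper's code) of the aggregation step described above: period-level assignments are relaxed to day-level assignments, and a fixed event-to-day assignment then defines the neighbourhood of period-level solutions to search. The constant below is an assumption for illustration; the competition instances vary.

        PERIODS_PER_DAY = 4  # illustrative value

        def day_of(period):
            # Variable aggregation: collapse a period index into its day.
            return period // PERIODS_PER_DAY

        def neighbourhood(period_assignment):
            # Relax each event's period to its day, then allow any period
            # within that day (the neighbourhood explored in a later phase).
            return {event: [day_of(p) * PERIODS_PER_DAY + s
                            for s in range(PERIODS_PER_DAY)]
                    for event, p in period_assignment.items()}

        # An event currently in period 9 (day 2) may move within periods 8..11
        print(neighbourhood({"event_A": 9}))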

    Advanced brain dopamine transporter imaging in mice using small-animal SPECT/CT

    The stable marriage problem has recently been studied in its general setting, where both ties and incomplete lists are allowed. It is NP-hard to find a stable matching of maximum size, while any stable matching is a maximal matching and thus trivially a factor-two approximation. In this paper, we give the first nontrivial result for approximation with a factor less than two. Our algorithm achieves an approximation ratio of 2/(1+L^-2) for instances in which only men have ties of length at most L. When both men and women are allowed to have ties, we show a ratio of 13/7 (< 1.858) for the case when ties are of length two. We also improve the lower bound on the approximation ratio to 2
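
    For context, a compact sketch of the classical proposal (deferred acceptance) algorithm for the strict, complete-list setting; any stable matching there is maximal, which is the source of the trivial factor-two bound mentioned above. The paper's algorithms for ties and incomplete lists are substantially more involved and are not reproduced here.

        def gale_shapley(men_prefs, women_prefs):
            # Strict, complete preference lists; returns {woman: man}.
            rank = {w: {m: r for r, m in enumerate(p)}
                    for w, p in women_prefs.items()}
            free = list(men_prefs)           # men yet to be matched
            nxt = {m: 0 for m in men_prefs}  # next proposal index per man
            match = {}
            while free:
                m = free.pop()
                w = men_prefs[m][nxt[m]]
                nxt[m] += 1
                if w not in match:
                    match[w] = m
                elif rank[w][m] < rank[w][match[w]]:
                    free.append(match[w])    # w trades up; old partner freed
                    match[w] = m
                else:
                    free.append(m)           # w rejects m; he proposes again
            return match

        print(gale_shapley({"a": ["x", "y"], "b": ["x", "y"]},
                           {"x": ["b", "a"], "y": ["b", "a"]}))
        # -> {'x': 'b', 'y': 'a'}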

    The Air Traffic Flow Management Problem with Enroute Capacities


    Statistical mechanics of the vertex-cover problem

    We review recent progress in the study of the vertex-cover problem (VC). VC belongs to the class of NP-complete graph-theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping VC to a hard-core lattice gas, and then applying techniques like the replica trick or the cavity approach. Using these methods, the phase diagram of VC could be obtained exactly for connectivities c < e, where VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for VC. Finally, we describe recent results for VC when studied on other ensembles of finite- and infinite-dimensional graphs.
    Comment: review article, 26 pages, 9 figures, to appear in J. Phys. A: Math. Ge
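
    A minimal complete algorithm of the kind whose typical running time such analyses address (illustrative, not from the review): branch on an uncovered edge, since any vertex cover must contain at least one of its endpoints.

        def min_vertex_cover(edges, cover=frozenset()):
            # Exact branching search: find an uncovered edge (u, v) and
            # recurse on the two ways of covering it; keep the smaller cover.
            for u, v in edges:
                if u not in cover and v not in cover:
                    a = min_vertex_cover(edges, cover | {u})
                    b = min_vertex_cover(edges, cover | {v})
                    return a if len(a) <= len(b) else b
            return cover  # every edge is covered

        # Path 1-2-3: the single vertex {2} covers both edges
        print(sorted(min_vertex_cover([(1, 2), (2, 3)])))  # [2]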