    Robust long-term production planning

    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints and then use a minor-embedding algorithm to produce a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than with global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate 25 times faster than a leading SAT-based sampling method. Further, we apply the decomposition algorithms to find min-cardinality faults for circuits up to 5 times larger than can be solved directly on current hardware. (22 pages, 4 figures)
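
    The chaining idea above can be illustrated with a small sketch. The Python snippet below is an illustration only, not the paper's algorithm: the AND-gate penalty coefficients, the `chain` helper, and the brute-force check are assumptions of this sketch. It builds a standard Ising penalty whose ground states encode a single AND constraint, then adds a ferromagnetic coupling so two physical copies of a shared logical variable agree in low-energy states.

```python
from itertools import product


def and_constraint_penalty(q_a, q_b, q_out):
    """Ising penalty (h, J) whose ground states satisfy q_out = q_a AND q_b.

    Spins are +/-1 with the usual mapping x = (1 + s) / 2; the coefficients
    are a standard AND-gate penalty, used here purely for illustration.
    """
    h = {q_a: -0.5, q_b: -0.5, q_out: 1.0}
    J = {(q_a, q_b): 0.5, (q_a, q_out): -1.0, (q_b, q_out): -1.0}
    return h, J


def chain(J, u, v, strength=2.0):
    """Ferromagnetic chain: a negative coupling that penalizes disagreement
    between two physical copies (u, v) of the same logical variable."""
    J[(u, v)] = J.get((u, v), 0.0) - strength
    return J


def energy(h, J, s):
    """E(s) = sum_i h_i s_i + sum_(i,j) J_ij s_i s_j."""
    return (sum(h_i * s[i] for i, h_i in h.items())
            + sum(J_ij * s[i] * s[j] for (i, j), J_ij in J.items()))


# Two constraints share logical variable "a"; each gets its own block of
# "hardware" qubits, and the two copies a1, a2 are chained together.
h1, J1 = and_constraint_penalty("a1", "b", "x")
h2, J2 = and_constraint_penalty("a2", "c", "y")
h = {**h1, **h2}
J = chain({**J1, **J2}, "a1", "a2")

# Brute-force check: every minimum-energy state keeps the chained copies
# aligned and satisfies both AND constraints.
names = sorted(h)
states = [dict(zip(names, spins)) for spins in product((-1, 1), repeat=len(names))]
ground = min(energy(h, J, s) for s in states)
bit = lambda s: (1 + s) // 2
for s in states:
    if energy(h, J, s) == ground:
        assert s["a1"] == s["a2"]
        assert bit(s["x"]) == bit(s["a1"]) & bit(s["b"])
        assert bit(s["y"]) == bit(s["a2"]) & bit(s["c"])
```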

    A general framework of multi-population methods with clustering in undetectable dynamic environments

    To solve dynamic optimization problems, multi-population methods are used to enhance population diversity, with the aim of maintaining multiple populations in different sub-areas of the fitness landscape. Many experimental studies have shown that locating and tracking multiple relatively good optima, rather than a single global optimum, is an effective strategy in dynamic environments. However, several challenges need to be addressed when multi-population methods are applied, e.g., how to create multiple populations, how to maintain them in different sub-areas, and how to deal with situations where changes cannot be detected or predicted. To address these issues, this paper investigates a hierarchical clustering method to locate and track multiple optima for dynamic optimization problems. To deal with undetectable dynamic environments, the paper applies the random immigrants method without change detection, based on a mechanism that automatically reduces redundant individuals in the search space throughout the run. These methods are implemented in several metaheuristics, including particle swarm optimization, genetic algorithms, and differential evolution. An experimental study based on the moving peaks benchmark compares the performance against several other algorithms from the literature. The results show the efficiency of the clustering method for locating and tracking multiple optima in comparison with other multi-population algorithms on the moving peaks benchmark.
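
    As a rough illustration of the clustering step described above, the sketch below is not the paper's exact procedure: the single-linkage criterion, the distance threshold, and the redundancy radius are assumptions of this sketch. It splits a population into sub-populations with hierarchical clustering, drops near-duplicate individuals as a crude stand-in for the redundancy-reduction mechanism, and injects random immigrants without any change detection.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def cluster_population(positions, distance_threshold):
    """Group individuals into sub-populations via single-linkage clustering."""
    Z = linkage(positions, method="single")
    labels = fcluster(Z, t=distance_threshold, criterion="distance")
    return {c: np.where(labels == c)[0] for c in np.unique(labels)}


def remove_redundant(positions, radius):
    """Drop individuals within `radius` of an already-kept one (a crude
    stand-in for the redundancy-reduction mechanism described above)."""
    keep = []
    for i, x in enumerate(positions):
        if all(np.linalg.norm(x - positions[j]) > radius for j in keep):
            keep.append(i)
    return positions[keep]


def add_random_immigrants(positions, n_immigrants, bounds, rng):
    """Append uniformly random individuals each generation, with no change detection."""
    low, high = bounds
    new = rng.uniform(low, high, size=(n_immigrants, positions.shape[1]))
    return np.vstack([positions, new])


# Toy usage on a 2-D landscape.
rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 100.0, size=(60, 2))
pop = remove_redundant(pop, radius=1.0)
pop = add_random_immigrants(pop, n_immigrants=10, bounds=(0.0, 100.0), rng=rng)
subpops = cluster_population(pop, distance_threshold=15.0)
```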

    Overcommitment in Cloud Services -- Bin packing with Chance Constraints

    This paper considers a traditional problem of resource allocation: scheduling jobs on machines. One recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements and need to be immediately scheduled on physical machines in data centers. It is often observed that the requested capacities are not fully utilized, offering an opportunity to employ an overcommitment policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can yield a significant cost reduction for the cloud provider while incurring only a very low risk of violating capacity constraints. We introduce and study a model that quantifies the value of overcommitment by formulating the problem as bin packing with chance constraints. We then propose an alternative formulation that transforms each chance constraint into a submodular function. We show that our model captures the risk-pooling effect and can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are intuitive, easy to implement, and provide a constant-factor guarantee relative to the optimum. Finally, we calibrate our model using realistic workload data and test our approach in a practical setting. Our analysis and experiments illustrate the benefit of overcommitment in cloud services, and suggest a cost reduction of 1.5% to 17% depending on the provider's risk tolerance.
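
    To make the chance-constrained packing idea concrete, here is a small sketch. It is an assumption-laden illustration, not the paper's submodular reformulation or its constant-factor online algorithms: jobs are modeled by a mean and variance of demand, a normal approximation estimates the probability that a machine's capacity is exceeded, and a first-fit rule places each arriving job only where that probability stays below the provider's risk tolerance `epsilon`.

```python
import math
from dataclasses import dataclass


@dataclass
class Machine:
    capacity: float
    mean_load: float = 0.0
    var_load: float = 0.0   # job demands assumed independent, so variances add


def overflow_probability(machine, job_mean, job_var):
    """P(total demand > capacity) under a normal approximation."""
    mu = machine.mean_load + job_mean
    sigma = math.sqrt(machine.var_load + job_var)
    if sigma == 0.0:
        return 0.0 if mu <= machine.capacity else 1.0
    z = (machine.capacity - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))


def schedule(jobs, capacity, epsilon):
    """First-fit with a chance constraint: place each arriving job on the
    first machine whose overflow probability stays below epsilon; open a
    new machine otherwise."""
    machines = []
    for job_mean, job_var in jobs:
        for m in machines:
            if overflow_probability(m, job_mean, job_var) <= epsilon:
                m.mean_load += job_mean
                m.var_load += job_var
                break
        else:
            machines.append(Machine(capacity, job_mean, job_var))
    return machines


# Toy usage: (mean, variance) of requested-but-partly-used capacity per job.
placed = schedule([(2.0, 0.5), (3.0, 1.0), (4.0, 2.0)], capacity=8.0, epsilon=0.05)
print(len(placed), "machines opened")
```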

    Scheduling for Timely Passenger Delivery in a Large Scale Ride Sharing System

    Taxi ride sharing is one of the most promising solutions to urban transportation issues such as traffic congestion, gas insufficiency, air pollution, limited parking space and unaffordable parking charges, and taxi shortages in peak hours. Despite the enormous demand for such services and their exciting social benefits, successful automated operation of ride sharing systems remains rare around the world. Two of the bottlenecks are: (1) on-time delivery is not guaranteed; (2) matching and scheduling drivers and passengers is an NP-hard problem, and optimization-based models do not support real-time scheduling of large-scale systems. This thesis tackles the challenge of timely delivery of passengers in a large-scale ride sharing system in which hundreds or even thousands of passengers and drivers must be matched and scheduled. We first formulate it as a mixed integer linear programming problem, which obtains the theoretical optimum but at an unacceptable runtime cost even for a small system. We then introduce a greedy agglomeration and Monte Carlo simulation based algorithm. The effectiveness and efficiency of the new algorithm are fully evaluated: (1) comparison with the optimization model on small ride sharing cases shows that the greedy agglomerative algorithm always achieves the same optimal solutions the optimization model offers, but is three orders of magnitude faster; (2) case studies on large-scale systems validate its performance; (3) the proposed greedy algorithm is straightforward to parallelize in order to utilize distributed computing resources; (4) two important details are discussed: selection of the number of Monte Carlo simulations and proper calculation of delays in the greedy agglomeration step. We find from experiments that the number of simulations sufficient to achieve a “sufficiently optimal solution” is linearly related to the product of the number of vehicles and the number of passengers. Experiments also show that enabling margins and counting early delivery as negative delay leads to more accurate solutions than counting delay only.
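
    The greedy agglomeration and Monte Carlo pieces described above can be sketched as follows. This is a toy illustration under assumed data structures: the lognormal travel-time noise, the 200-sample default, and the assignment rule are assumptions of this sketch, not the thesis's formulation. Each passenger is attached to the vehicle whose schedule absorbs the extra legs with the smallest simulated expected delay, with early arrival counted as negative delay.

```python
import random


def simulated_delay(planned_leg_times, deadline, n_sims=200, noise=0.2, rng=None):
    """Average signed delay over Monte Carlo draws of noisy leg travel times;
    early arrival counts as negative delay."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_sims):
        arrival = sum(t * rng.lognormvariate(0.0, noise) for t in planned_leg_times)
        total += arrival - deadline
    return total / n_sims


def greedy_assign(passengers, vehicles):
    """Attach each passenger to the vehicle whose schedule absorbs the new
    legs with the smallest expected delay (a toy agglomeration step)."""
    assignment = {}
    for p_id, (leg_times, deadline) in passengers.items():
        best_v = min(vehicles,
                     key=lambda v: simulated_delay(vehicles[v] + leg_times, deadline))
        assignment[p_id] = best_v
        vehicles[best_v] = vehicles[best_v] + leg_times
    return assignment


# Tiny example: leg travel times in minutes and a delivery deadline per passenger.
passengers = {"p1": ([7.0, 5.0], 20.0), "p2": ([10.0, 4.0], 25.0)}
vehicles = {"v1": [3.0], "v2": [12.0]}
print(greedy_assign(passengers, vehicles))
```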