394 research outputs found

    Distributed Domain Propagation

    Portfolio parallelization is an approach that runs several solver instances in parallel and terminates when one of them succeeds in solving the problem. Despite its simplicity, portfolio parallelization has been shown to perform well for modern mixed-integer programming (MIP) and boolean satisfiability (SAT) solvers. Domain propagation is likewise a simple technique in modern MIP and SAT solvers that effectively finds additional domain reductions after the domain of a variable has been reduced. In this paper we introduce distributed domain propagation, a technique that shares bound tightenings across solvers to trigger further domain propagations, and we investigate its impact in modern MIP solvers that employ portfolio parallelization. Computational experiments were conducted for two implementations of this parallelization approach. While both share global variable bounds and solutions, they communicate differently: in one implementation the communication is performed only at designated points in the solving process, and in the other it is performed completely asynchronously. The experiments show a positive performance impact of communicating global variable bounds and provide valuable insights into communication strategies for parallel solvers.
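    The bound-sharing idea can be illustrated with a toy interval-propagation step on a single linear constraint. The sketch below (plain Python; the constraint, bounds, and function name are illustrative assumptions, not code from the paper) shows how a bound tightening shared by another solver instance in the portfolio triggers a further domain reduction:

```python
def propagate_leq(coeffs, rhs, lb, ub):
    """One round of domain propagation on sum(c_i * x_i) <= rhs.

    For each variable, computes the tightest upper bound implied by the
    constraint and the other variables' lower bounds. Assumes all
    coefficients are positive (enough for this illustration).
    """
    new_ub = dict(ub)
    for var, c in coeffs.items():
        # Minimal activity contributed by the other variables.
        min_rest = sum(cj * lb[vj] for vj, cj in coeffs.items() if vj != var)
        implied = (rhs - min_rest) / c
        new_ub[var] = min(new_ub[var], implied)
    return new_ub

# Constraint x + y <= 8 with x, y in [0, 10].
coeffs = {"x": 1.0, "y": 1.0}
lb = {"x": 0.0, "y": 0.0}
ub = {"x": 10.0, "y": 10.0}

ub = propagate_leq(coeffs, 8.0, lb, ub)  # local propagation: x <= 8, y <= 8

# Another solver instance in the portfolio shares the tightening y >= 3;
# re-running propagation with the shared global bound reduces x further.
lb["y"] = 3.0
ub = propagate_leq(coeffs, 8.0, lb, ub)  # now x <= 5
```

    Here the shared lower bound on y lets propagation tighten x from 8 down to 5, a reduction the receiving instance could not derive from its local information alone.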

    Distributed Domain Propagation

    This is the final version, available on open access from the publisher via the DOI in this record. 16th International Symposium on Experimental Algorithms (SEA 2017), 21-23 June 2017, London, UK. Funded by the German Federal Ministry of Education and Research.

    Unified Offloading Decision Making and Resource Allocation in ME-RAN

    To support communication and computation cooperation, we propose a mobile edge cloud-radio access network (ME-RAN) architecture, which consists of a mobile edge cloud (ME) as the computation provision platform and a radio access network (RAN) as the communication interface. A cooperative offloading framework is proposed to achieve two tasks: first, to increase the computing capacity of user equipments (UEs) by triggering offloading, especially for UEs that cannot complete the computation locally; second, to reduce the energy consumption of all UEs under limited computing and communication resources. Based on these objectives, we formulate the energy consumption minimization problem, which is shown to be a non-convex mixed-integer program. First, a decentralized local decision algorithm is proposed in which each UE estimates its possible local resource consumption and decides whether offloading is in its interest; this reduces overhead and signalling in the later stages. Then, a centralized decision and resource allocation algorithm (CAR) is proposed to conduct the decision making and resource allocation in the ME-RAN. Moreover, two low-complexity algorithms are proposed: UE with largest saved energy consumption accepted first (CAR-E) and UE with smallest required data rate accepted first (CAR-D). Simulations show that the performance of the proposed algorithms is very close to that of exhaustive search, but with much lower complexity.
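    The greedy acceptance rule of CAR-E can be sketched in a few lines (illustrative Python; the UE names, energy savings, data rates, and the single shared rate budget are made-up placeholders, not the paper's system model):

```python
def car_e(ues, rate_budget):
    """UE with largest saved energy accepted first (CAR-E-style greedy).

    ues: list of (name, saved_energy, required_rate) tuples.
    Accepts offloading requests in decreasing order of energy saved,
    as long as the shared communication rate budget is not exhausted.
    """
    accepted = []
    for name, saved, rate in sorted(ues, key=lambda u: -u[1]):
        if rate <= rate_budget:
            accepted.append(name)
            rate_budget -= rate
    return accepted

# Placeholder UEs: (name, energy saved by offloading, data rate needed).
ues = [("ue1", 5.0, 4.0), ("ue2", 3.0, 1.0), ("ue3", 4.0, 3.0)]
result = car_e(ues, rate_budget=5.0)  # ue1 first (largest saving), then ue2
```

    CAR-D would be the same loop with the sort key replaced by the required data rate in increasing order.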

    Evolving Solutions for Design and Management Tasks on Computers

    The uninitiated may find it strange that artificial evolution resides among a class of problem solving methods belonging to a field named computational intelligence. Some people still believe that nature's trial-and-error way of adapting subsystems to their environment is a prodigal game at dice that has led to admirable results only due to vast resources of time and space. A rather simple gedankenexperiment, however, reveals that all 10^80 most elementary particles in the universe, together with the 10^60 tiniest time steps since the beginning of time, cannot explain the development of even the simplest bacterial genomes by pure random sampling. Organic evolution must have found a more efficient way to develop clever individuals and manage complex systems. For about forty years now, scientists have tried to mimic this process, and they have learned to exploit some tricks of life for solving an amazing variety of design and management tasks. This paper tries to give an overview of some recent applications as well as a summary of what we know about the general behavior of evolutionary algorithms.
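    The trial-and-error principle described above can be condensed into a (1+1) evolution strategy, one of the simplest evolutionary algorithms (a minimal sketch on a toy fitness function, not code from the paper):

```python
import random

def one_plus_one_es(f, x, sigma=0.5, iters=2000, seed=1):
    """Minimal (1+1) evolution strategy: mutate one parent with Gaussian
    noise, keep the better of parent and offspring. Minimizes f."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:  # selection: offspring replaces parent if no worse
            x, fx = child, fc
    return x, fx

# Toy fitness landscape: the sphere function, with its minimum at the origin.
sphere = lambda v: sum(vi * vi for vi in v)
best, value = one_plus_one_es(sphere, [5.0, -3.0])
```

    Even this stripped-down variant steadily improves the solution, which is the point of the gedankenexperiment: selection plus variation is far more efficient than pure random sampling.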

    Assessing Quantum Computing Performance for Energy Optimization in a Prosumer Community

    The efficient management of energy communities relies on the solution of the "prosumer problem", i.e., the problem of scheduling the household loads on the basis of the user needs, the electricity prices, and the availability of local renewable energy, with the aim of reducing costs and energy waste. Quantum computers can offer a significant breakthrough in treating this problem thanks to the intrinsically parallel nature of quantum operations. The most promising approach is to devise variational hybrid algorithms, in which quantum computation is driven by parameters that are optimized classically, in a cycle that aims at finding the best solution with a significant speed-up with respect to classical approaches. This paper provides a reformulation of the prosumer problem that allows it to be addressed with a hybrid quantum algorithm, namely the Quantum Approximate Optimization Algorithm (QAOA), and with a recent variant, the Recursive QAOA. We report on an extensive set of experiments, on simulators and on real quantum hardware, for different problem sizes. Results are encouraging in that Recursive QAOA is able, for problems involving up to 10 qubits, to provide optimal and admissible solutions with good probabilities, while the computation time is nearly independent of the system size. (14 pages, 13 figures; IEEE Transactions on Smart Grid, 2023.)
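    The key reformulation step is casting the scheduling decisions as a quadratic unconstrained binary optimization (QUBO) problem, whose cost function QAOA then samples on quantum hardware. The toy instance below (illustrative Python; the matrix entries are made-up placeholders, not the paper's model) brute-forces such a cost function classically, which is feasible at this size:

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of bitstring x under QUBO matrix Q: sum_ij Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Toy prosumer instance (placeholder numbers): x_i = 1 means load i runs in
# the cheap-tariff slot. Diagonal terms encode per-load cost savings,
# off-diagonal terms penalize scheduling conflicting loads together.
Q = [
    [-2.0, 3.0, 0.0],
    [0.0, -2.0, 3.0],
    [0.0, 0.0, -1.0],
]

# QAOA samples low-energy bitstrings of this cost function on quantum
# hardware; classically we can brute-force the optimum for 3 variables.
best = min(product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
```

    On real hardware the same Q matrix would be mapped to an Ising cost Hamiltonian, and the variational loop would tune the circuit parameters so that low-energy bitstrings are measured with high probability.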

    An exact extended formulation for the unrelated parallel machine total weighted completion time problem

    The plethora of research on NP-hard parallel machine scheduling problems is focused on heuristics due to the theoretically and practically challenging nature of these problems. Only a handful of exact approaches are available in the literature, and most of these suffer from scalability issues. Moreover, the majority of the papers on the subject are restricted to the identical parallel machine scheduling environment. In this context, the main contribution of this work is to recognize and prove that a particular preemptive relaxation for the problem of minimizing the total weighted completion time (TWCT) on a set of unrelated parallel machines naturally admits a non-preemptive optimal solution and gives rise to an exact mixed integer linear programming formulation of the problem. Furthermore, we exploit the structural properties of TWCT and attain a very fast and scalable exact Benders decomposition-based algorithm for solving this formulation. Computationally, our approach holds great promise and may even be embedded into iterative algorithms for more complex shop scheduling problems: instances with up to 1000 jobs and 8 machines are solved to optimality within a few seconds.
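    For intuition, evaluating the TWCT objective for a fixed job-to-machine assignment is straightforward once each machine sequences its jobs by Smith's weighted-shortest-processing-time (WSPT) rule, which is optimal for a single machine. A minimal sketch follows (illustrative Python; the data and names are placeholders, and this is not the paper's preemptive relaxation or Benders algorithm):

```python
def twct(assignment, p, w):
    """Total weighted completion time for a fixed job-to-machine assignment.

    assignment: dict machine -> list of jobs assigned to it.
    p[m][j]: processing time of job j on machine m (unrelated machines).
    w[j]: weight of job j.
    Each machine sequences its jobs by Smith's WSPT rule (increasing
    p[m][j]/w[j]), which is optimal on a single machine.
    """
    total = 0.0
    for m, jobs in assignment.items():
        order = sorted(jobs, key=lambda j: p[m][j] / w[j])  # WSPT
        t = 0.0
        for j in order:
            t += p[m][j]          # completion time of job j on machine m
            total += w[j] * t
    return total

# Two unrelated machines: each job is faster on a different machine.
p = {0: {"a": 2.0, "b": 4.0}, 1: {"a": 3.0, "b": 1.0}}
w = {"a": 1.0, "b": 2.0}
cost = twct({0: ["a"], 1: ["b"]}, p, w)  # assign each job to its faster machine
```

    The exact methods in the paper search over such assignments (and sequences) implicitly; the hard part is proving optimality, not evaluating a given schedule.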

    Synchromodal logistics: An overview of critical success factors, enabling technologies, and open research issues

    As supply chain management is becoming demand driven, logistics service providers need to use real-time information efficiently and integrate new technologies into their business. Synchromodal logistics has emerged recently to improve flexibility in supply chains, cooperation among stakeholders, and utilization of resources. We survey the existing scientific literature and real-life developments on synchromodality. We focus on the critical success factors of synchromodality and six categories of enabling technologies. We identify open research issues and propose the introduction of a new stakeholder, which takes on the role of orchestrator to coordinate and provide services through a technology-based platform.

    Self-Tuning Service Provisioning for Decentralised Cloud Applications

    Cloud computing has revolutionized service delivery by providing on-demand invocation and elasticity. To reap these benefits, computation has been displaced from client devices into data centers. This partial centralization is undesirable for applications that have stringent locality requirements, e.g., low latency. The problem could be addressed with large numbers of smaller cloud resources closer to users. However, as cloud computing diffuses from within data centers into the network, there will be a need for cloud resource allocation algorithms that operate on resource-constrained computational units serving localized subsets of customers. In this paper, we present a mechanism for service provisioning in distributed clouds where applications compete for resources. The mechanism operates by enabling execution zones to assign resources based on Vickrey auctions and provides high-quality probabilistic models that applications can use to predict the outcomes of such auctions. This allows applications to use knowledge of the locality distribution of their clients to accurately select the number of bids to be sent to each execution zone and their value. The proposed mechanism is highly scalable, efficient, and validated by extensive simulations.
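    The auction primitive at the core of the mechanism is the Vickrey (second-price) auction, sketched below for a single item (illustrative Python; the paper's execution zones auction computational resources, and the bidder names and values here are placeholders):

```python
def vickrey(bids):
    """Single-item Vickrey auction: the highest bidder wins but pays the
    second-highest bid, making truthful bidding a dominant strategy.

    bids: dict bidder -> bid value (assumed non-empty).
    """
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Three applications bidding for a resource unit in one execution zone.
winner, price = vickrey({"app1": 7.0, "app2": 5.0, "app3": 9.0})
```

    Because charging the second-highest bid makes truthful bidding a dominant strategy, bids reveal true valuations, which is what lets applications build reliable probabilistic models of auction outcomes.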