
    Formula partitioning revisited

    Dividing a Boolean formula into smaller independent sub-formulae can be a useful technique for accelerating the solution of Boolean problems, including SAT and #SAT. Nevertheless, and despite promising early results, formula partitioning is hardly used in state-of-the-art solvers. In this paper, we show that this is rooted in the inconsistent usefulness of formula partitioning techniques. In particular, we evaluate two existing and one novel partitioning model, coupled with two existing and two novel partitioning algorithms, on a wide range of benchmark instances. Our results show that there is no one-size-fits-all solution: for different formula types, different partitioning models and algorithms are the most suitable. While these results might seem negative, they improve our understanding of formula partitioning; moreover, the findings give guidance as to which method to use for which kinds of formulae.
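    To make the idea of independent sub-formulae concrete, the sketch below splits a CNF formula into variable-disjoint connected components; this is only a baseline illustration of formula partitioning (the union-find splitter and the DIMACS-style clause lists are my own choices, not the models or algorithms evaluated in the paper). For #SAT, the model count of the whole formula is the product of the counts of such components.

```python
from collections import defaultdict

def split_into_components(clauses):
    """Split a CNF (list of clauses, DIMACS-style integer literals) into
    variable-disjoint sub-formulae that can be solved or counted independently."""
    parent = {}

    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Variables that occur in the same clause belong to the same component.
    for clause in clauses:
        variables = [abs(lit) for lit in clause]
        for v in variables[1:]:
            union(variables[0], v)

    components = defaultdict(list)
    for clause in clauses:
        components[find(abs(clause[0]))].append(clause)
    return list(components.values())

# (x1 v x2) & (~x2 v x3) & (x4 v ~x5) splits into two independent sub-formulae.
print(split_into_components([[1, 2], [-2, 3], [4, -5]]))
```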

    Conflict-Free Routing of Mobile Robots

    Recent advances in perception have enabled the development of more autonomous mobile robots, in the sense that they can operate in dynamic environments where obstacles surrounding the robot emerge, disappear, and move. The increased perception of Autonomous Mobile Robots (AMRs) allows them to plan detailed on-line trajectories that avoid previously unforeseen obstacles, making AMRs useful in dynamic environments where humans, traditional fork-lifts, and other mobile robots operate. These abilities have contributed to increased automation in logistics applications. This thesis discusses how to efficiently operate a fleet of AMRs and ensure that all tasks are successfully completed. Assigning robots to specific delivery tasks and deciding the routes they have to travel can be modelled as a variant of the classical Vehicle Routing Problem (VRP), the combinatorial optimization problem of designing routes for vehicles. In related research, the VRP has been extended to scheduling routes for vehicles so that customers are served according to predetermined specifications, such as the arrival time at a customer and the amount of goods to deliver. In this thesis we consider scheduling a fleet of robots such that congested areas are avoided, delivery time-windows are met, and the robots' need to recharge is taken into account, while the robots retain the freedom to use alternative paths to handle changes in the environment. This particular version of the VRP, called the CF-EVRP (Conflict-Free Electric Vehicle Routing Problem), is motivated by an industrial need. In this work we investigate the use of optimizing general-purpose solvers, in particular MILP and SMT solvers. We run extensive computational analyses on well-known combinatorial optimization problems, such as job-shop scheduling and bin-packing, to evaluate modeling techniques and the relative performance of state-of-the-art MILP and SMT solvers. We propose a monolithic model for the CF-EVRP as well as a compositional approach that decomposes the problem into sub-problems and formulates them as either MILP or SMT problems, depending on what fits each sub-problem best. The performance of the two approaches is evaluated on a set of CF-EVRP benchmark problems, showing the feasibility of using a compositional approach for solving practical fleet-scheduling problems.
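    As a flavour of how an SMT solver can express the conflict-freedom and time-window requirements mentioned above, here is a minimal disjunctive-scheduling sketch using z3's Python API. The single shared corridor, the crossing duration, and the release/deadline values are invented for illustration; this is not the thesis's CF-EVRP model or its benchmark data.

```python
from z3 import Int, Optimize, Or, sat

# Three robots must each cross one shared corridor that holds a single robot at a time.
# enter[i] is when robot i enters; a crossing takes `dur` time units and must finish
# inside the robot's delivery window [release[i], deadline[i]].
robots = range(3)
dur = 4
release = [0, 2, 5]
deadline = [20, 18, 25]

opt = Optimize()
enter = [Int(f"enter_{i}") for i in robots]

for i in robots:
    opt.add(enter[i] >= release[i], enter[i] + dur <= deadline[i])

# Conflict-freedom: crossings of the shared corridor may not overlap in time.
for i in robots:
    for j in robots:
        if i < j:
            opt.add(Or(enter[i] + dur <= enter[j], enter[j] + dur <= enter[i]))

# Minimise the completion time of the last crossing (makespan).
makespan = Int("makespan")
for i in robots:
    opt.add(makespan >= enter[i] + dur)
opt.minimize(makespan)

if opt.check() == sat:
    m = opt.model()
    print("entry times:", [m[e] for e in enter], "makespan:", m[makespan])
```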

    The Effect of Network Topology on Credit Network Throughput

    Credit networks rely on decentralized, pairwise trust relationships (channels) to exchange money or goods. Credit networks arise naturally in many financial systems, including the recent construct of payment channel networks in blockchain systems. An important performance metric for these networks is their transaction throughput. However, predicting the throughput of a credit network is nontrivial. Unlike traditional communication channels, credit channels can become imbalanced; they are unable to support more transactions in a given direction once the credit limit has been reached. This potential for imbalance creates a complex dependency between a network's throughput and its topology, path choices, and the credit balances (state) on every channel. Even worse, certain combinations of these factors can lead the credit network to deadlocked states where no transactions can make progress. In this paper, we study the relationship between the throughput of a credit network and its topology and credit state. We show that the presence of deadlocks completely characterizes a network's throughput sensitivity to different credit states. Although we show that identifying deadlocks in an arbitrary topology is NP-hard, we propose a peeling algorithm, inspired by decoding algorithms for erasure codes, that upper-bounds the severity of the deadlock. We use the peeling algorithm as a tool to compare the performance of different topologies as well as to aid in the synthesis of topologies robust to deadlocks.
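    The imbalance phenomenon described above is easy to reproduce in a few lines. The toy class below models one credit channel with a balance in each direction; it is only an illustration of why skewed traffic exhausts a direction and stalls further payments, not the paper's peeling algorithm or throughput analysis.

```python
class CreditChannel:
    """A bidirectional credit channel between two parties, a and b.

    Each direction can forward payments only while that side still has balance;
    once one direction is exhausted, the channel is imbalanced and payments that
    way fail until flow in the opposite direction replenishes it.
    """

    def __init__(self, balance_a: int, balance_b: int):
        self.balance = {"a": balance_a, "b": balance_b}

    def pay(self, sender: str, amount: int) -> bool:
        receiver = "b" if sender == "a" else "a"
        if self.balance[sender] < amount:
            return False  # direction exhausted: the payment is dropped
        self.balance[sender] -= amount
        self.balance[receiver] += amount
        return True

# Skewed traffic drives the channel into an imbalanced state.
ch = CreditChannel(balance_a=5, balance_b=5)
results = [ch.pay("a", 2) for _ in range(4)]  # a pays b repeatedly
print(results, ch.balance)  # later payments fail once a's remaining balance is too small
```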

    Model-based symbolic design space exploration at the electronic system level: a systematic approach

    In this thesis, a novel, fully systematic approach is proposed that addresses automated design space exploration at the electronic system level. The problem is formulated as a multi-objective optimization problem and is encoded symbolically using Answer Set Programming (ASP). Several specialized solvers are tightly coupled as background theories with the foreground ASP solver under the ASP modulo Theories (ASPmT) paradigm. By utilizing the ASPmT paradigm, the search is executed entirely systematically and the disparate synthesis steps can be coupled to explore the search space effectively.
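    For readers unfamiliar with ASP, the following toy encoding (written against the clingo Python API) maps tasks to resources and minimises the total mapping cost; it is a minimal sketch of symbolic encoding plus optimisation, not the thesis's ASPmT formulation, and the task/resource/cost facts are invented.

```python
import clingo

# Toy design-space-exploration flavour: map each task to exactly one resource
# and minimise the summed mapping cost.
PROGRAM = """
task(t1;t2;t3).
resource(r1;r2).
cost(t1,r1,3). cost(t1,r2,5).
cost(t2,r1,4). cost(t2,r2,2).
cost(t3,r1,6). cost(t3,r2,1).

% every task is mapped to exactly one resource
1 { map(T,R) : resource(R) } 1 :- task(T).

% minimise total mapping cost
#minimize { C,T : map(T,R), cost(T,R,C) }.
#show map/2.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
# clingo prints improving models during optimisation; the last one is optimal.
ctl.solve(on_model=lambda m: print("model:", m))
```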

    Preprocessing and Stochastic Local Search in Maximum Satisfiability

    Problems that ask for an optimal solution to their instances are called optimization problems. The maximum satisfiability (MaxSAT) problem is a well-studied combinatorial optimization problem with many applications in domains such as cancer therapy design, electronic markets, hardware debugging, and routing. Many problems, including the aforementioned ones, can be encoded in MaxSAT. MaxSAT thus serves as a general optimization paradigm, and advances in MaxSAT algorithms therefore translate to advances in solving other problems. In this thesis, we analyze the effects of MaxSAT preprocessing, the process of reformulating the input instance prior to solving, on the perceived costs of solutions during search. We show that after preprocessing most MaxSAT solvers may misinterpret the costs of non-optimal solutions. Many MaxSAT algorithms use the non-optimal solutions found during search to guide it, so the misinterpretation of costs may misguide the search. To remedy this issue, we introduce and study the concept of locally minimal solutions. We show that for some of the central preprocessing techniques for MaxSAT, the perceived cost of a locally minimal solution to a preprocessed instance equals the cost of the corresponding reconstructed solution to the original instance. We develop a stochastic local search algorithm for MaxSAT, called LMS-SLS, that is prepended with a preprocessor and that searches over locally minimal solutions. We implement LMS-SLS and analyze the performance of its different components, particularly the effects of preprocessing and of computing locally minimal solutions, and also compare LMS-SLS with SATLike, the state-of-the-art SLS solver for MaxSAT.
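    The notion of a solution's cost that the thesis builds on is the total weight of soft clauses an assignment leaves unsatisfied. The snippet below computes that cost for a weighted (partial) MaxSAT instance; the clause encoding and example instance are illustrative, and it does not model preprocessing or locally minimal solutions.

```python
def maxsat_cost(assignment, soft_clauses):
    """Cost of a truth assignment on a weighted (partial) MaxSAT instance.

    assignment:   dict mapping variable -> bool
    soft_clauses: list of (weight, clause) pairs, clauses as DIMACS-style literals
    The cost is the total weight of soft clauses the assignment falsifies;
    hard clauses are assumed satisfied and are not passed in.
    """
    def satisfied(clause):
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

    return sum(w for w, clause in soft_clauses if not satisfied(clause))

# (x1) weight 3, (~x1 v x2) weight 2, (~x2) weight 1
soft = [(3, [1]), (2, [-1, 2]), (1, [-2])]
print(maxsat_cost({1: True, 2: True}, soft))  # -> 1 (only the unit clause ~x2 is falsified)
```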

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers, presented together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    A Representation for Serial Robotic Tasks

    The representation for serial robotic tasks proposed in this thesis is a language of temporal constraints derived directly from a model of the space of serial plans. It was specifically designed to encompass problems that include disjunctive ordering constraints. This guarantees that the proposed language can completely and, to a certain extent, compactly represent all possible serial robotic tasks. The generality of this language carries a penalty: reasoning in the proposed language of temporal constraints is NP-complete. Specific methods have been demonstrated for normalizing constraints posed in this language in order to make subsequent sequencing and analysis more tractable. Using this language, the planner can specify necessary and alternative orderings to control undesirable interactions between steps of a plan. For purposes of analysis, the planner can factor a plan into strategies and decompose those strategies into essential components. Using properly normalized constraint expressions, the sequencer can derive admissible sequences and admissible next operations. Using these facilities, a robot can be given the specification of a task and adapt its sequence of operations according to run-time events and the constraints on the operations to be performed.
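    A small sketch of what disjunctive ordering constraints and admissible sequences look like in practice is given below; the encoding (each constraint as a disjunction of before/after pairs) is my own simplification, not the thesis's temporal-constraint language or its normalization machinery.

```python
def admissible(sequence, constraints):
    """Check a candidate operation sequence against disjunctive ordering constraints.

    sequence:    list of operation names in execution order
    constraints: list of disjunctions; each disjunction is a list of (before, after)
                 pairs and is satisfied if at least one of its orderings holds.
    """
    pos = {op: i for i, op in enumerate(sequence)}
    return all(
        any(pos[before] < pos[after] for before, after in options)
        for options in constraints
    )

# "drill before bolt" AND ("paint before drill" OR "paint after bolt")
constraints = [
    [("drill", "bolt")],
    [("paint", "drill"), ("bolt", "paint")],
]
print(admissible(["drill", "bolt", "paint"], constraints))  # True
print(admissible(["paint", "bolt", "drill"], constraints))  # False (bolt precedes drill)
```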

    Design of vehicle routing problem domains for a hyper-heuristic framework

    The branch of algorithms that uses adaptive methods to select or tune heuristics, known as hyper-heuristics, has seen a large amount of interest and development in recent years. With the aim of developing techniques that can deliver results on multiple problem domains and multiple instances, this work is getting ever closer to mirroring the complex situations that arise in the corporate world. However, the capability of a hyper-heuristic is closely tied to the representation of the problem it is trying to solve and the tools that are available to do so. This thesis considers the design of such problem domains for hyper-heuristics. In particular, this work proposes that by providing high-quality data and tools to a hyper-heuristic, improved results can be achieved. A definition is given which describes the components of a problem domain for hyper-heuristics. Building on this definition, a domain for the Vehicle Routing Problem with Time Windows is presented. Through this domain, examples are given of how a hyper-heuristic can be provided with extra information with which to make intelligent search decisions. One of these pieces of information is a measure of distance between solutions which, when used to aid the selection of mutation heuristics, is shown to improve the results of an Iterative Local Search hyper-heuristic. A further example of the advantages of providing extra information is given in the form of a set of tools for the Vehicle Routing Problem domain to promote and measure 'fairness' between routes. By offering these extra features at the domain level, it is shown how a hyper-heuristic can drive toward a fairer solution while maintaining a high level of performance.
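    To illustrate the kind of extra domain-level information discussed above, the sketch below implements a plausible solution-distance measure (fraction of unshared arcs) and a simple 'fairness' measure (spread of per-route demand) for a VRP solution; both definitions are stand-ins chosen for illustration and are not necessarily the measures used in the thesis.

```python
from statistics import pstdev

def route_arcs(solution):
    """All customer-to-customer arcs used by a VRP solution (a list of routes)."""
    return {(r[i], r[i + 1]) for r in solution for i in range(len(r) - 1)}

def solution_distance(sol_a, sol_b):
    """Fraction of arcs the two solutions do NOT share (0.0 = identical routing)."""
    a, b = route_arcs(sol_a), route_arcs(sol_b)
    return 1 - len(a & b) / len(a | b) if a | b else 0.0

def unfairness(solution, demand):
    """Spread of workload across routes: population std. dev. of per-route demand."""
    return pstdev(sum(demand[c] for c in r) for r in solution)

demand = {"c1": 4, "c2": 2, "c3": 5, "c4": 3}
sol_x = [["c1", "c2"], ["c3", "c4"]]
sol_y = [["c1", "c3"], ["c2", "c4"]]
print(solution_distance(sol_x, sol_y))               # 1.0 - no arcs in common
print(unfairness(sol_x, demand), unfairness(sol_y, demand))
```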

    Offline Learning for Sequence-based Selection Hyper-heuristics

    This thesis is concerned with finding solutions to discrete NP-hard problems. Such problems occur in a wide range of real-world applications, such as bin packing, industrial flow shop problems, determining Boolean satisfiability, the traveling salesman and vehicle routing problems, course timetabling, personnel scheduling, and the optimisation of water distribution networks. They are typically represented as optimisation problems where the goal is to find a "best" solution from a given space of feasible solutions. As no known polynomial-time algorithmic solution exists for NP-hard problems, they are usually solved by applying heuristic methods. Selection hyper-heuristics are algorithms that organise and combine a number of individual low level heuristics into a higher level framework with the objective of improving optimisation performance. Many selection hyper-heuristics employ learning algorithms in order to enhance optimisation performance by improving the selection of single heuristics, and this learning may be classified as either online or offline. This thesis presents a novel statistical framework for the offline learning of subsequences of low level heuristics in order to improve the optimisation performance of sequence-based selection hyper-heuristics. A selection hyper-heuristic is used to optimise the HyFlex set of discrete benchmark problems. The resulting sequences of low level heuristic selections and objective function values are used to generate an offline learning database of heuristic selections. The sequences in the database are broken down into subsequences, and the mathematical concept of a logarithmic return is used to discriminate between "effective" subsequences, which tend to lead to improvements in optimisation performance, and "disruptive" subsequences, which tend to lead to worsening performance. Effective subsequences are used to improve hyper-heuristic performance directly, by embedding them in a simple hyper-heuristic design, and indirectly as the inputs to an appropriate hyper-heuristic learning algorithm. Furthermore, by comparing effective subsequences across different problem domains it is possible to investigate the potential for cross-domain learning. The results presented here demonstrate that the use of well chosen subsequences of heuristics can lead to small, but statistically significant, improvements in optimisation performance.
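    A simplified version of the logarithmic-return idea is sketched below: objective values recorded before and after each heuristic selection are turned into log returns, and fixed-length subsequences are scored by their mean log return (negative means improving, i.e. "effective"; positive means "disruptive"). The subsequence length, the aggregation by mean, and the example data are assumptions for illustration, not the thesis's exact statistical framework.

```python
import math
from collections import defaultdict

def subsequence_scores(selections, objectives, length=2):
    """Score fixed-length subsequences of heuristic selections by mean log return.

    selections: list of low-level heuristic names, one per step
    objectives: objective values (> 0, minimisation), with objectives[i] recorded
                before step i and objectives[-1] after the last step
    """
    returns = defaultdict(list)
    for i in range(len(selections) - length + 1):
        subseq = tuple(selections[i:i + length])
        log_ret = math.log(objectives[i + length] / objectives[i])
        returns[subseq].append(log_ret)
    # Negative mean -> tended to improve ("effective"); positive -> "disruptive".
    return {s: sum(r) / len(r) for s, r in returns.items()}

selections = ["mutate", "local_search", "mutate", "local_search", "crossover"]
objectives = [100.0, 104.0, 90.0, 93.0, 82.0, 85.0]  # value before/after each step
print(subsequence_scores(selections, objectives))
```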