
    Learning dynamic algorithm portfolios

    Algorithm selection can be performed using a model of runtime distribution, learned during a preliminary training phase. There is a trade-off between the performance of model-based algorithm selection and the cost of learning the model. In this paper, we treat this trade-off in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark, and with a set of solvers for the Auction Winner Determination problem.
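    The mixing scheme lends itself to a compact simulation. The sketch below is a minimal Python rendition under stated assumptions: each algorithm is reduced to a hidden per-instance runtime, the learned runtime distribution is collapsed to an empirical mean, and the uniform weight decays on a simple 1/(1+t) schedule; none of these names or schedules come from the paper.

```python
import random

# Minimal sketch: k algorithms run in parallel time slices on each instance
# of a stream; a "model" (each algorithm's mean observed runtime) proposes
# shares, and a uniform share is mixed in with a weight eps that decays as
# more instances are seen. Names and the eps schedule are illustrative.

def uniform_share(k):
    return [1.0 / k] * k

def model_share(history):
    # The paper learns full runtime distributions; this sketch collapses
    # each algorithm's history to a mean and favours the fastest on average.
    means = [sum(h) / len(h) if h else 1.0 for h in history]
    inv = [1.0 / m for m in means]
    total = sum(inv)
    return [v / total for v in inv]

def solve_stream(instances, k, step=0.1):
    history = [[] for _ in range(k)]          # observed runtimes per algorithm
    for t, runtimes in enumerate(instances):  # runtimes[i]: hidden cost of algorithm i
        eps = 1.0 / (1 + t)                   # uniform weight decays as the model improves
        spent = [0.0] * k
        solved = False
        while not solved:
            shares = [eps * u + (1 - eps) * m
                      for u, m in zip(uniform_share(k), model_share(history))]
            for i in range(k):                # one parallel time slice for all algorithms
                spent[i] += step * shares[i]
                if spent[i] >= runtimes[i]:   # algorithm i solved the instance
                    history[i].append(runtimes[i])  # its runtime is now observed
                    solved = True
                    break
    return history

random.seed(0)
stream = [[random.uniform(1, 10) for _ in range(3)] for _ in range(20)]
print([len(h) for h in solve_stream(stream, k=3)])  # wins per algorithm
```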

    Don't Be Strict in Local Search!

    Local Search is one of the fundamental approaches to combinatorial optimization and it is used throughout AI. Several local search algorithms are based on searching the k-exchange neighborhood. This is the set of solutions that can be obtained from the current solution by exchanging at most k elements. As a rule of thumb, the larger k is, the better are the chances of finding an improved solution. However, for inputs of size n, a naïve brute-force search of the k-exchange neighborhood requires n^{O(k)} time, which is not practical even for very small values of k. Fellows et al. (IJCAI 2009) studied whether this brute-force search is avoidable and gave positive and negative answers for several combinatorial problems. They used the notion of local search in a strict sense: an improved solution needs to be found in the k-exchange neighborhood even if a global optimum can be found efficiently. In this paper we consider a natural relaxation of local search, called permissive local search (Marx and Schlotter, IWPEC 2009), and investigate whether it enlarges the domain of tractable inputs. We exemplify this approach on a fundamental combinatorial problem, Vertex Cover. More precisely, we show that for a class of inputs, finding an optimum is hard, strict local search is hard, but permissive local search is tractable. We carry out this investigation in the framework of parameterized complexity.
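    To make the n^{O(k)} figure concrete, here is a hypothetical brute-force scan of the k-exchange neighborhood for Vertex Cover, the paper's running example. The graph encoding and function names are illustrative assumptions, not the paper's code.

```python
from itertools import combinations

# Brute-force k-exchange local search for Vertex Cover. `graph` is a dict of
# adjacency sets; `cover` is a set of vertices. Scanning all exchanges of at
# most k vertices enumerates n^{O(k)} candidates, which is exactly the cost
# the abstract calls impractical.

def is_cover(graph, cover):
    return all(u in cover or v in cover for u in graph for v in graph[u])

def k_exchange_improve(graph, cover, k):
    """Return a strictly smaller vertex cover whose symmetric difference
    with `cover` has size at most k, or None if none exists."""
    outside = set(graph) - cover
    for out_size in range(1, k + 1):
        # For a smaller cover, add back fewer vertices than were removed,
        # keeping the total number of exchanged vertices at most k.
        for in_size in range(min(out_size, k - out_size + 1)):
            for removed in combinations(cover, out_size):
                base = cover - set(removed)
                for added in combinations(outside, in_size):
                    candidate = base | set(added)
                    if is_cover(graph, candidate):
                        return candidate
    return None

g = {1: {2, 3}, 2: {1}, 3: {1}, 4: set()}
print(k_exchange_improve(g, {2, 3}, k=3))  # -> {1}, a smaller cover
```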

    Finding cores of random 2-SAT formulae via Poisson cloning

    For the random 2-SAT formula $F(n,p)$, let $F_C(n,p)$ be the formula left after the pure literal algorithm applied to $F(n,p)$ stops. Using the recently developed Poisson cloning model together with the cut-off line algorithm (COLA), we completely analyze the structure of $F_C(n,p)$. In particular, it is shown that, for $\lambda := p(2n-1) = 1+\sigma$ with $\sigma \gg n^{-1/3}$, the core of $F(n,p)$ has $\theta_\lambda^2 n + O((\theta_\lambda n)^{1/2})$ variables and $\theta_\lambda^2 \lambda n + O((\theta_\lambda n)^{1/2})$ clauses, with high probability, where $\theta_\lambda$ is the larger solution of the equation $\theta - (1-e^{-\theta\lambda}) = 0$. We also estimate the probability of $F(n,p)$ being satisfiable to obtain
    $$\Pr\big[F_2(n, \tfrac{\lambda}{2n-1}) \text{ is satisfiable}\big] = \begin{cases} 1 - \frac{1+o(1)}{16\sigma^3 n} & \text{if } \lambda = 1-\sigma \text{ with } \sigma \gg n^{-1/3}, \\ e^{-\Theta(\sigma^3 n)} & \text{if } \lambda = 1+\sigma \text{ with } \sigma \gg n^{-1/3}, \end{cases}$$
    where $o(1)$ goes to $0$ as $\sigma$ goes to $0$. This improves the bounds of Bollobás et al. [BBCKW].
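    For readers unfamiliar with the pure literal algorithm that produces $F_C(n,p)$, a minimal sketch follows; the DIMACS-style clause encoding (tuples of nonzero ints, negative meaning negated) is an assumption for illustration.

```python
from collections import Counter

# Pure literal algorithm: repeatedly delete every clause containing a pure
# literal, i.e. one whose negation occurs in no remaining clause. What
# survives when no pure literal is left is the core F_C(n,p).

def core(clauses):
    clauses = set(clauses)
    while True:
        occurs = Counter(lit for c in clauses for lit in c)
        pure = {lit for lit in occurs if occurs[-lit] == 0}
        if not pure:
            # Core: every remaining literal's negation also still occurs.
            return clauses
        clauses = {c for c in clauses if not any(l in pure for l in c)}

print(core({(1, 2), (-1, 2), (1, -2), (-1, -2), (3, 1)}))
# -> {(1, 2), (-1, 2), (1, -2), (-1, -2)}; literal 3 was pure, so (3, 1) fell away
```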

    SAT and CP: Parallelisation and Applications

    This thesis is concerned with the parallelisation of solvers which search for either an arbitrary, or an optimal, solution to a problem stated in some formal way. We discuss the parallelisation of two solvers, and their applications, in three chapters. In the first chapter, we consider SAT, the decision problem of propositional logic, and algorithms for showing the satisfiability or unsatisfiability of propositional formulas. We sketch some proof-theoretic foundations which are related to the strength of different algorithmic approaches. Furthermore, we discuss details of the implementations of SAT solvers, and show how to improve upon existing sequential solvers. Lastly, we discuss the parallelisation of these solvers with a focus on clause exchange, the communication of intermediate results within a parallel solver. The second chapter is concerned with Constraint Programming (CP) with learning. Contrary to classical Constraint Programming techniques, this incorporates learning mechanisms as they are used in the field of SAT solving. We present results from parallelising CHUFFED, a learning CP solver. As this is both a kind of CP and SAT solver, it is not clear which parallelisation approaches work best here. In the final chapter, we discuss sorting networks, which are data-oblivious sorting algorithms, i.e., the comparisons they perform do not depend on the input data. This independence lends them to parallel implementation. We consider the question of how many parallel sorting steps are needed to sort some inputs, and present both lower and upper bounds for several cases.
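    As an illustration of the final chapter's object of study, the sketch below builds the classical odd-even transposition network: a data-oblivious sorter whose comparator layers fire the same way on every input, which makes "how many parallel steps are needed" a well-defined question. This is a textbook construction, not the thesis's implementation.

```python
# Odd-even transposition network on n channels: n comparator layers suffice
# to sort any input. Each layer is a list of (i, j) channel pairs that can
# fire in parallel because they touch disjoint channels.

def odd_even_transposition_layers(n):
    return [[(i, i + 1) for i in range(layer % 2, n - 1, 2)]
            for layer in range(n)]

def apply_network(layers, values):
    v = list(values)
    for layer in layers:
        for i, j in layer:              # oblivious compare-and-swap
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
    return v

layers = odd_even_transposition_layers(6)
print(len(layers), apply_network(layers, [5, 3, 0, 4, 1, 2]))
# -> 6 [0, 1, 2, 3, 4, 5]: six parallel steps, regardless of the input
```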

    CONJURE: automatic generation of constraint models from problem specifications

    Funding: Engineering and Physical Sciences Research Council (EP/V027182/1, EP/P015638/1), Royal Society (URF/R/180015). When solving a combinatorial problem, the formulation or model of the problem is critical to the efficiency of the solver. Automating the modelling process has long been of interest because of the expertise and time required to produce an effective model of a given problem. We describe a method to automatically produce constraint models from a problem specification written in the abstract constraint specification language Essence. Our approach is to incrementally refine the specification into a concrete model by applying a chosen refinement rule at each step. Any nontrivial specification may be refined in multiple ways, creating a space of models to choose from. The handling of symmetries is a particularly important aspect of automated modelling. Many combinatorial optimisation problems contain symmetry, which can lead to redundant search. If a partial assignment is shown to be invalid, we are wasting time if we ever consider a symmetric equivalent of it. A particularly important class of symmetries are those introduced by the constraint modelling process: modelling symmetries. We show how modelling symmetries may be broken automatically as they enter a model during refinement, obviating the need for an expensive symmetry detection step following model formulation. Our approach is implemented in a system called Conjure. We compare the models produced by Conjure to constraint models from the literature that are known to be effective. Our empirical results confirm that Conjure can successfully reproduce the kernels of the constraint models of 42 benchmark problems found in the literature.
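    The permutation symmetry that refinement introduces can be seen in a toy example. The sketch below uses the z3 solver as a stand-in concrete back end, which is purely an assumption for illustration: Conjure refines Essence specifications into constraint models, not z3 code. Refining an abstract variable such as `find S : set (size 3) of int(1..5)` into an explicit array x[0..2] makes every set correspond to 3! = 6 assignments; an ordering constraint added during refinement breaks that symmetry.

```python
from z3 import And, Distinct, Ints, Or, Solver, sat

x = Ints("x0 x1 x2")
s = Solver()
s.add([And(1 <= v, v <= 5) for v in x])  # domain int(1..5)
s.add(Distinct(*x))                      # explicit-array refinement of the set
s.add(x[0] < x[1], x[1] < x[2])          # symmetry-breaking ordering constraint

count = 0
while s.check() == sat:
    m = s.model()
    count += 1
    s.add(Or([v != m[v] for v in x]))    # block this solution and continue
print(count)  # 10 = C(5,3): one assignment per set; without the ordering, 60
```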

    Access Control Synthesis for Physical Spaces

    Access-control requirements for physical spaces, like office buildings and airports, are best formulated from a global viewpoint in terms of system-wide requirements. For example, "there is an authorized path to exit the building from every room." In contrast, individual access-control components, such as doors and turnstiles, can only enforce local policies, specifying when the component may open. In practice, the gap between the system-wide, global requirements and the many local policies is bridged manually, which is tedious, error-prone, and scales poorly. We propose a framework to automatically synthesize local access control policies from a set of global requirements for physical spaces. Our framework consists of an expressive language to specify both global requirements and physical spaces, and an algorithm for synthesizing local, attribute-based policies from the global specification. We empirically demonstrate the framework's effectiveness on three substantial case studies. The studies demonstrate that access control synthesis is practical even for complex physical spaces, such as airports, with many interrelated security requirements.
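    The quoted global requirement amounts to a reachability check once the building is modelled as a directed graph of rooms and doors. The sketch below checks that requirement against a given set of local policies; the building model and names are illustrative assumptions, and the paper's contribution is synthesizing the policies rather than checking them.

```python
from collections import deque

# Toy building model: rooms are nodes, `doors` maps a directed
# (from_room, to_room) pair to a door id, and the local policy `may_open`
# says whether that door admits our subject.

def rooms_without_exit_path(rooms, doors, exits, may_open):
    adj = {r: [] for r in rooms}
    for (a, b), door_id in doors.items():
        if may_open(door_id):
            adj[a].append(b)                 # passable in the a -> b direction
    stuck = []
    for start in rooms:
        seen, queue = {start}, deque([start])
        while queue:                         # BFS over passable doors
            room = queue.popleft()
            for nxt in adj[room]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if not (seen & exits):
            stuck.append(start)              # violates the global requirement
    return stuck

rooms = {"lobby", "office", "vault"}
doors = {("office", "lobby"): "d1", ("vault", "office"): "d2"}
print(rooms_without_exit_path(rooms, doors, exits={"lobby"},
                              may_open=lambda d: d != "d2"))  # -> ['vault']
```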

    Planning while Believing to Know

    Over the last few years, the concept of Artificial Intelligence (AI) has become essential in our daily life and in several working scenarios. Among the various branches of AI, automated planning and the study of multi-agent systems are central research fields. This thesis focuses on a combination of these two areas: a specialized kind of planning known as Multi-agent Epistemic Planning. This field of research concentrates on scenarios where agents, reasoning in the space of knowledge/beliefs, try to find a plan to reach a desirable state from a starting one. This requires agents able to reason about their own and others' knowledge/beliefs and, therefore, capable of performing epistemic reasoning. Being aware of the information flows and of others' states of mind is, in fact, a key aspect in several planning situations. That is why developing autonomous agents that can reason considering the perspectives of their peers is paramount to model a variety of real-world domains. The objective of our work is to formalize an environment where a complete characterization of the agents' knowledge/belief interactions and updates is possible. In particular, we achieved this goal by defining a new action-based language for Multi-agent Epistemic Planning and implementing epistemic planners based on it. These solvers, flexible enough to reason about various domains and different nuances of knowledge/belief update, can provide a solid base for further research on epistemic reasoning and for real-world applications. This dissertation also proposes the design of a more general epistemic planning architecture. This architecture, following well-known cognitive theories, tries to emulate some characteristics of the human decision-making process. In particular, we envisioned a system composed of several solving processes, each with its own trade-off between efficiency and correctness, arbitrated by a meta-cognitive module.
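    A toy fragment conveys the flavour of the belief update involved, under heavy simplifying assumptions: a state pairs the true world with each agent's set of worlds considered possible, and a truthful public announcement shrinks those sets. The thesis defines a full action-based language; this two-operator sketch is only an illustration.

```python
# Worlds are plain values; beliefs map each agent to the frozenset of worlds
# that agent considers possible. Announcing a true fact discards, for every
# agent, the worlds where the fact fails.

def announce(state, fact):
    world, beliefs = state
    if not fact(world):
        return None                          # only true facts may be announced
    return (world, {agent: frozenset(w for w in worlds if fact(w))
                    for agent, worlds in beliefs.items()})

def knows(state, agent, fact):
    _, beliefs = state
    return all(fact(w) for w in beliefs[agent])

# Worlds encode a secret bit; agent "a" initially considers both possible.
s0 = (1, {"a": frozenset({0, 1})})
s1 = announce(s0, lambda w: w == 1)
print(knows(s0, "a", lambda w: w == 1))  # False: both worlds still possible
print(knows(s1, "a", lambda w: w == 1))  # True: the announcement pruned world 0
```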