
    Preprocessing and Stochastic Local Search in Maximum Satisfiability

    Problems that ask for an optimal solution to their instances are called optimization problems. The maximum satisfiability (MaxSAT) problem is a well-studied combinatorial optimization problem with applications in domains such as cancer therapy design, electronic markets, hardware debugging and routing. Many problems, including the aforementioned ones, can be encoded in MaxSAT. MaxSAT thus serves as a general optimization paradigm, and advances in MaxSAT algorithms translate into advances in solving other problems. In this thesis, we analyze the effects of MaxSAT preprocessing, the process of reformulating the input instance prior to solving, on the perceived costs of solutions during search. We show that after preprocessing most MaxSAT solvers may misinterpret the costs of non-optimal solutions. Many MaxSAT algorithms use the non-optimal solutions found during search to guide it, so the misinterpretation of costs may misguide the search. Towards remedying this issue, we introduce and study the concept of locally minimal solutions. We show that for some of the central preprocessing techniques for MaxSAT, the perceived cost of a locally minimal solution to a preprocessed instance equals the cost of the corresponding reconstructed solution to the original instance. We develop a stochastic local search (SLS) algorithm for MaxSAT, called LMS-SLS, which is prepended with a preprocessor and searches over locally minimal solutions. We implement LMS-SLS, analyze the performance of its different components, particularly the effects of preprocessing and of computing locally minimal solutions, and compare LMS-SLS with the state-of-the-art SLS solver SATLike.
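To make the notion of "cost of a solution" concrete, the following is a minimal sketch (the instance is invented for illustration, not taken from the thesis): in unweighted MaxSAT the cost of an assignment is the number of clauses it falsifies, and an optimal solution minimises that count.

```python
from itertools import product

# A toy (unweighted) MaxSAT instance: clauses over Boolean variables 1 and 2,
# where a positive integer denotes a variable and a negative one its negation.
clauses = [[1], [-1], [1, 2], [-2]]

def cost(assignment, clauses):
    """Number of clauses falsified by `assignment` (a dict: var -> bool)."""
    satisfied = lambda c: any(assignment[abs(l)] == (l > 0) for l in c)
    return sum(not satisfied(c) for c in clauses)

# Exhaustive search for an optimal (minimum-cost) assignment.
best = min(
    (dict(zip([1, 2], bits)) for bits in product([False, True], repeat=2)),
    key=lambda a: cost(a, clauses),
)
print(cost(best, clauses))  # -> 1: clauses [1] and [-1] cannot both hold
```

Real solvers of course avoid exhaustive enumeration; the point here is only the cost function that preprocessing can cause a solver to misperceive.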

    SAT-based approaches for constraint optimization

    Constraint optimization has been successfully used to solve problems in many real-world (industrial) domains. This thesis focuses on logic-based approaches, in particular on Maximum Satisfiability (MaxSAT), the optimization version of the Satisfiability (SAT) problem. Many problems have been solved efficiently through MaxSAT. Instance families from the majority of these problems have been submitted to the international MaxSAT Evaluation (MSE), creating a publicly available collection of benchmark instances. In recent editions of the MSE, SAT-based algorithms have been the best-performing single-algorithm approaches on industrial instances. This thesis focuses on improving SAT-based algorithms. Our work has contributed to closing several open instances and to dramatically reducing the solving time on many others. In addition, we found, surprisingly, that reformulating and solving the MaxSAT problem through Integer Linear Programming (ILP) was extremely well suited to some instance families. Finally, we developed the first highly efficient MaxSAT portfolio, which has dominated all categories of the MSE since 2013.
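The ILP reformulation mentioned above follows a standard pattern; the sketch below shows it on an invented weighted instance (the clause set and weights are illustrative). Each Boolean variable becomes a 0/1 ILP variable, each soft clause gets a 0/1 relaxation variable b_j, a clause such as (x1 OR NOT x2) becomes x1 + (1 - x2) + b_j >= 1, and the objective minimises the total weight of relaxed clauses. For self-containment the tiny ILP is solved here by brute force rather than by an ILP solver.

```python
from itertools import product

# (clause, weight) pairs; positive literal = variable, negative = negation.
soft = [([1], 3), ([-1, 2], 2), ([-2], 1)]
nvars = 2

def ilp_objective(b):
    # sum_j w_j * b_j : total weight of relaxed (falsified) soft clauses
    return sum(w * bj for (_, w), bj in zip(soft, b))

def feasible(x, b):
    # One ILP constraint per clause: satisfied-literal sum + b_j >= 1.
    def lit(l):  # value of literal l under the 0/1 assignment x
        return x[abs(l) - 1] if l > 0 else 1 - x[abs(l) - 1]
    return all(sum(lit(l) for l in c) + bj >= 1 for (c, _), bj in zip(soft, b))

best = min(
    ilp_objective(b)
    for x in product([0, 1], repeat=nvars)
    for b in product([0, 1], repeat=len(soft))
    if feasible(x, b)
)
print(best)  # minimum total weight of falsified soft clauses
```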

    CONJURE: automatic generation of constraint models from problem specifications

    Funding: Engineering and Physical Sciences Research Council (EP/V027182/1, EP/P015638/1), Royal Society (URF/R/180015). When solving a combinatorial problem, the formulation or model of the problem is critical to the efficiency of the solver. Automating the modelling process has long been of interest because of the expertise and time required to produce an effective model of a given problem. We describe a method to automatically produce constraint models from a problem specification written in the abstract constraint specification language Essence. Our approach is to incrementally refine the specification into a concrete model by applying a chosen refinement rule at each step. Any nontrivial specification may be refined in multiple ways, creating a space of models to choose from. The handling of symmetries is a particularly important aspect of automated modelling. Many combinatorial optimisation problems contain symmetry, which can lead to redundant search: if a partial assignment is shown to be invalid, we waste time if we ever consider a symmetric equivalent of it. A particularly important class of symmetries are those introduced by the constraint modelling process: modelling symmetries. We show how modelling symmetries may be broken automatically as they enter a model during refinement, obviating the need for an expensive symmetry detection step after model formulation. Our approach is implemented in a system called Conjure. We compare the models produced by Conjure to constraint models from the literature that are known to be effective. Our empirical results confirm that Conjure can successfully reproduce the kernels of the constraint models of 42 benchmark problems found in the literature.
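The symmetry-breaking idea can be illustrated on a toy model (invented here, not one of the benchmark problems): when two decision variables are interchangeable, each solution has a symmetric twin, and a lex-leader style ordering constraint keeps exactly one representative per symmetry class.

```python
from itertools import product

# Toy model: pick two distinct items out of four, represented by two decision
# variables a and b.  The variables are interchangeable, so every solution
# (a, b) has a symmetric twin (b, a) that causes redundant search.
solutions = [(a, b) for a, b in product(range(4), repeat=2) if a != b]

# A lex-leader style symmetry-breaking constraint keeps one representative
# per symmetry class: require a < b.
broken = [(a, b) for a, b in solutions if a < b]

print(len(solutions), len(broken))  # -> 12 6 : half the space is pruned
```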

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Quantum Algorithm Implementations for Beginners

    As quantum computers become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers. While currently available quantum computers have fewer than 100 qubits, quantum computing hardware is widely expected to grow in terms of qubit count, quality, and connectivity. This review aims to explain the principles of quantum programming, which are quite different from those of classical programming, with straightforward algebra that makes understanding the underlying, fascinating quantum mechanical principles optional. We give an introduction to quantum computing algorithms and their implementation on real quantum hardware. We survey 20 different quantum algorithms, attempting to describe each in a succinct and self-contained fashion. We show how these algorithms can be implemented on IBM's quantum computer, and in each case we discuss the results of the implementation with respect to differences between the simulator and the actual hardware runs. This article introduces computer scientists, physicists, and engineers to quantum algorithms and provides a blueprint for their implementations.
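The "straightforward algebra" view of quantum programming can be sketched in a few lines: a program is a sequence of linear maps applied to a vector of complex amplitudes. The example below (a standard Bell-state circuit, written here from scratch rather than taken from the review) prepares (|00> + |11>)/sqrt(2) with a Hadamard followed by a CNOT.

```python
import math

# State over 2 qubits as a list of 4 amplitudes, in basis order
# |00>, |01>, |10>, |11> (qubit 0 is the most significant bit here).
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def apply_h_q0(s):
    """Hadamard on qubit 0: mixes the |0x> and |1x> amplitudes."""
    r = 1 / math.sqrt(2)
    return [r * (s[0] + s[2]), r * (s[1] + s[3]),
            r * (s[0] - s[2]), r * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT with qubit 0 as control, qubit 1 as target: swaps |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

bell = apply_cnot(apply_h_q0(state))
probs = [round(abs(a) ** 2, 3) for a in bell]  # measurement probabilities
print(probs)  # -> [0.5, 0.0, 0.0, 0.5]
```

Measuring this state yields 00 or 11 with equal probability and never 01 or 10, the hallmark of entanglement; on real hardware the 01/10 outcomes appear with small probability due to noise.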

    Streamlined constraint reasoning : an automated approach from high level constraint specifications

    Constraint Programming (CP) is a powerful technique for solving large-scale combinatorial (optimisation) problems. Solving a problem proceeds in two distinct phases: modelling and solving. Effective modelling has a huge impact on the performance of the solving process. Even with the advance of modern automated modelling tools, the search spaces involved can be so vast that problems remain difficult to solve. A more aggressive step that can be taken to further constrain the model is the addition of streamliner constraints, which are not guaranteed to be sound but are designed to focus effort on a highly restricted yet promising portion of the search space. Previously, producing effective streamlined models was a manual, difficult and time-consuming task. This thesis presents a completely automated process for the generation, search and selection of streamliner portfolios, producing a substantial reduction in search effort across a diverse range of problems. First, we propose a method for automatically generating and evaluating streamliner conjectures from the type structure present in an Essence specification. Second, the possible streamliner combinations are structured into a lattice, and a multi-objective method for searching this lattice and building a portfolio of streamliner combinations is defined. Third, the problem of "streamliner selection" is introduced, which deals with selecting from the portfolio an effective streamliner for an unseen instance. The work is evaluated through two sets of experiments on a variety of problem classes. Lastly, we explore the effect of model selection in the context of streamlined specifications and discuss the process of streamlining for constrained optimization problems. Funding: EPSRC award EP/N509759/1.
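A minimal sketch of the streamliner idea, on an invented toy problem rather than one from the thesis: a speculative constraint may discard solutions (it is not sound in general), but it shrinks the search space sharply while, if well chosen, still leaving solutions to find.

```python
from itertools import permutations

# Toy problem: find permutations of 0..7 whose adjacent absolute differences
# are all distinct (a "graceful permutation"-style condition).
def ok(p):
    diffs = [abs(a - b) for a, b in zip(p, p[1:])]
    return len(set(diffs)) == len(diffs)

# Full search space: all 8! = 40320 permutations.
full = sum(1 for p in permutations(range(8)) if ok(p))

# Streamlined search: speculatively require p[0] == 0.  This streamliner cuts
# the space by a factor of 8 and is unsound (it discards valid solutions,
# e.g. ones starting with 7), yet it still retains solutions to find.
streamlined = sum(1 for p in permutations(range(8)) if p[0] == 0 and ok(p))

print(full > 0, streamlined > 0, streamlined < full)
```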

    Advances in estimation of distribution algorithms: alternatives in the learning and representation of problems

    Evolutionary computation is a discipline characterised by creating a set of possible solutions and making it evolve generation after generation in order to solve optimization problems. Examples of broadly known evolutionary computation paradigms are Genetic Algorithms (GAs) and Estimation of Distribution Algorithms (EDAs). The main difference between these models is the evolution process: in GAs it is based on crossover and mutation operators, without explicitly expressing the characteristics of the selected individuals within the population. EDAs take these explicit characteristics into account by considering the interdependencies between the variables that form an individual and by learning a probabilistic graphical model that represents them. Even if EDAs show good results in many optimization problems, there are many aspects open to improvement. One such aspect is to take into consideration the fitness of each individual, computed by the fitness function, which assigns each individual a value expressing how close it is to the optimum. EDAs use this value to select the individuals that will be considered for creating the probabilistic graphical model. However, EDAs only consider the values of the predictor variables of the selected individuals, treating them all as equally valid when learning the model used to generate the new population. This step is essential, since the probabilistic graphical model is how EDAs represent the dependencies between variables. Secondly, it is important to analyse the most appropriate way to represent a problem for its solving by EDAs. Even if this aspect depends fully on each particular optimization problem, the best representation is the one that best exposes the differences between possible solutions, and the choice of representation has an important influence on the performance of EDAs. Furthermore, each representation requires a corresponding fitness function, which leads to the need to choose the most appropriate fitness function for better and faster convergence. This thesis analyses the importance of these two aspects in the overall performance. With the aim of adding fitness information to the learning process, we have developed a new optimization method that works similarly to EDAs but uses Bayesian classifiers in the learning step. Individuals are classified according to their fitness, and this information is used to build a probabilistic graphical model in the form of a Bayesian classifier. We call this new method Evolutionary Bayesian Classifier-based Optimization Algorithms (EBCOA). Finally, in order to study the importance of a concrete individual representation for EDA performance, we choose the satisfiability problem (SAT) as a concrete optimization problem. This NP-complete problem is broadly known in computational theory, since it provides a generic model through which a great number of practical cases in different research fields can be formalised, mainly decision problems such as the design, synthesis and verification of integrated circuits, optimization, and task planning. Based on published work using evolutionary computation techniques for SAT, different representations are presented and analysed, seeking in each case the appropriate fitness function and studying the behaviour of EDAs, in order to investigate which problem representation and fitness function best improve their performance.
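The EDA loop described above, sample from a model, select the fittest, re-estimate the model, can be sketched with the simplest member of the family, a UMDA-style algorithm on the OneMax problem (this is a generic illustration, not the thesis' EBCOA method): the "probabilistic graphical model" degenerates to independent per-bit marginal probabilities.

```python
import random

# A minimal UMDA-style EDA on OneMax: maximise the number of ones in a
# bit-string.  The model is the simplest possible graphical model:
# independent per-bit marginals, re-estimated from the selected individuals.
random.seed(0)
N, POP, SELECT, GENS = 20, 60, 30, 40

def sample(probs):
    return [1 if random.random() < p else 0 for p in probs]

probs = [0.5] * N                       # start from the uniform model
best_fit = 0
for _ in range(GENS):
    pop = sorted((sample(probs) for _ in range(POP)), key=sum, reverse=True)
    best_fit = max(best_fit, sum(pop[0]))
    elite = pop[:SELECT]                # truncation selection on fitness
    # Learning step: re-estimate each marginal from the selected individuals.
    probs = [sum(ind[i] for ind in elite) / SELECT for i in range(N)]

print(best_fit)
```

Note that, exactly as the abstract observes, the selected individuals all contribute equally to the re-estimated marginals, regardless of their fitness differences; EBCOA's change is to exploit those fitness differences via a Bayesian classifier.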

    IST Austria Thesis

    In this thesis we present a computer-aided programming approach to concurrency. Our approach helps the programmer by automatically fixing concurrency-related bugs, i.e. bugs that occur when the program is executed using an aggressive preemptive scheduler, but not when using a non-preemptive (cooperative) scheduler. Bugs are program behaviours that are incorrect w.r.t. a specification. We consider both user-provided explicit specifications, in the form of assertion statements in the code, and an implicit specification. The implicit specification is inferred from the non-preemptive behaviour: considering the sequences of calls that the program makes to an external interface, it requires that any such sequence produced under a preemptive scheduler be included in the set of sequences produced under a non-preemptive scheduler. We consider several semantics-preserving fixes that go beyond the atomic sections typically explored in the synchronisation synthesis literature. Our synthesis is able to place locks, barriers and wait-signal statements and, last but not least, to reorder independent statements. The latter may be useful if a thread is released too early, e.g. before some initialisation is completed. We guarantee that our synthesis does not introduce deadlocks and that the inserted synchronisation is optimal w.r.t. a given objective function. We dub our solution trace-based synchronisation synthesis; it is loosely based on counterexample-guided inductive synthesis (CEGIS). The synthesis works by discovering a trace that is incorrect w.r.t. the specification and identifying the ordering constraints crucial to triggering the specification violation. Synchronisation may be placed immediately (greedy approach) or delayed until all incorrect traces are found (non-greedy approach). For the non-greedy approach we construct a set of global constraints over synchronisation placements. Each model of the global constraint set corresponds to a correctness-ensuring synchronisation placement, and the placement that is optimal w.r.t. the given objective function is chosen as the synchronisation solution. We evaluate our approach on a number of realistic (albeit simplified) Linux device-driver benchmarks. The benchmarks are versions of drivers with known concurrency-related bugs. For the experiments with an explicit specification we added assertions that detect the bugs. Device drivers lend themselves to implicit specifications, where the device and the operating system are the external interfaces. Our experiments demonstrate that our synthesis method is precise and efficient. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronisation placements are produced for our experiments, favouring e.g. a minimal number of synchronisation operations or maximum concurrency.
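A minimal sketch of the kind of bug and fix involved (a generic illustration, not one of the thesis' driver benchmarks): an unsynchronised read-modify-write that is correct under a cooperative scheduler becomes racy under a preemptive one, and the simplest repair in the synthesis' repertoire is to guard the critical section with a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:        # the synthesised synchronisation: without it, two
            counter += 1  # threads can interleave the read and the write

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 40000 with the lock; without it, updates can be lost
```

Richer fixes, such as wait-signal placement or reordering a too-early release after initialisation, follow the same pattern: they restrict the preemptive schedules to those whose external behaviour matches the cooperative ones.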