
    The Performance Optimization of ASP Solving Based on Encoding Rewriting and Encoding Selection

    Answer set programming (ASP) has long been used for modeling and solving hard search problems. A problem is modeled in ASP as an encoding, a collection of rules that declaratively describe the logic of the problem without explicitly listing how to solve it. It is common for the same problem to have several different but equivalent encodings in ASP. Experience shows that the performance of these encodings may vary greatly from instance to instance when processed by current state-of-the-art ASP grounder/solver systems. In particular, it is rarely the case that one encoding outperforms all others. Moreover, running an ASP system on one encoding for a specific instance may “take forever,” while running it on another encoding for the same instance may yield a solution in a fraction of a second. Selecting a “good” encoding for each instance is therefore crucial to the performance of ASP solving. In this dissertation, I propose methods to improve the performance of ASP solving that exploit these observations. First, I designed and implemented methods that, given an encoding for a problem, rewrite it in several ways into new, different but equivalent encodings. Second, I designed and implemented a system that, given a set of input encodings of a problem, a set of problem instances, and an ASP grounder/solver system, automatically generates equivalent encodings and builds a performance model for each selected encoding. The model predicts, for any instance, the execution time that the grounder/solver system takes to process the instance under the corresponding encoding. These performance models are then used to improve solving efficiency: whenever a new instance arrives, the system selects the encoding predicted to perform best on that instance and invokes the grounder/solver. The system also supports scheduled and interleaved execution of encodings, which are complementary to the machine learning techniques. Third, I implemented algorithms that generate hard structured instances for several combinatorial problems selected for the experimental study of the efficacy of the methods I developed. Hard instances can serve as benchmarks for evaluating the hardness of specific problems and contribute training data to the platform I created to help build encoding selection models. The process can also provide meaningful insights into finding hard instances of other combinatorial problems.
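    The instance-wise selection step described in the abstract can be pictured with a short sketch. This is a minimal illustration, not the system from the dissertation: it assumes per-encoding runtime models with a scikit-learn-style predict method, a hypothetical extract_features function, and the clingo grounder/solver as the back end.

        # Minimal sketch of per-instance encoding selection (assumptions noted above).
        import subprocess

        def extract_features(instance_path):
            # Hypothetical placeholder: real features might count facts,
            # rule occurrences, or structural statistics of the instance.
            text = open(instance_path).read()
            return [text.count("."), text.count(":-"), len(text.splitlines())]

        def select_and_solve(instance_path, encodings, models, time_limit=600):
            # encodings: name -> path of an equivalent encoding
            # models:    name -> regression model predicting runtime from features
            feats = extract_features(instance_path)
            best = min(encodings, key=lambda name: models[name].predict([feats])[0])
            # Run the grounder/solver on the encoding predicted to be fastest.
            return subprocess.run(
                ["clingo", encodings[best], instance_path, f"--time-limit={time_limit}"],
                capture_output=True, text=True)

    A scheduled or interleaved execution, also mentioned in the abstract, would instead allot time slices to several encodings rather than committing to a single prediction.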

    Méthodes d'Inférence Symbolique pour les Bases de Données (Symbolic Inference Methods for Databases)

    This dissertation is a summary of a line of research, in which I was actively involved, on learning in databases from examples. This research focused on traditional as well as novel database models and languages for querying, transforming, and describing the schema of a database. In the case of schemas, our contributions include proposing original languages for the emerging data models of Unordered XML and RDF. We have studied learning from examples of schemas for Unordered XML, schemas for RDF, twig queries for XML, join queries for relational databases, and XML transformations defined with a novel model of tree-to-word transducers. Investigating the learnability of the proposed languages required us to examine closely a number of their fundamental properties, often of independent interest, including normal forms, minimization, containment and equivalence, consistency of a set of examples, and finite characterizability. A good understanding of these properties allowed us to devise learning algorithms that explore a possibly large search space with the help of a carefully designed set of generalization operations in search of an appropriate solution. Learning (or inference) is a problem with two parameters: the precise class of languages we wish to infer and the type of input that the user can provide. We focused on the setting where the user input consists of positive examples, i.e., elements that belong to the goal language, and negative examples, i.e., elements that do not belong to the goal language. In general, using both negative and positive examples allows richer classes of goal languages to be learned than using positive examples alone. However, using negative examples is often difficult because, together with positive examples, they may cause the search space to take a very complex shape, and its exploration may turn out to be computationally challenging.
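    To make the positive/negative example setting concrete, here is a toy illustration, unrelated to the specific query and schema languages studied in the dissertation: learning a conjunctive "keyword query" (a set of words that must all occur in a document) by generalizing from positive examples and checking consistency against negative ones. All names in the sketch are hypothetical.

        # Toy learner: the most specific query consistent with the positives is the
        # intersection of their word sets; negatives only serve the consistency check.
        def learn_keyword_query(positives, negatives):
            query = set.intersection(*(set(doc.split()) for doc in positives))
            for doc in negatives:
                if query <= set(doc.split()):
                    return None  # no conjunctive keyword query separates the examples
            return query

        # learn_keyword_query(["select name from users", "select id from users"],
        #                     ["delete from users"])
        # -> {"select", "from", "users"}

    In this toy, the negative examples do not guide the generalization directly; they only constrain which generalizations are admissible. The abstract's point is that, for richer classes of languages, this constraint can make the search space very complex to explore.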