
    Acquiring local preferences of Weighted Partial MaxSAT

    Many real-life problems can be formulated as Boolean satisfiability (SAT). In many of these problems there are hard clauses that must be satisfied, but also soft clauses that may remain unsatisfied at some cost; such problems are referred to as Weighted Partial Maximum Satisfiability (WPMS), and the challenge in solving them is to find a solution that minimizes the total cost of the unsatisfied clauses. Configuration problems, which involve customizing products according to a user's specific requirements, are real-life examples. The literature contains many efficient techniques for finding solutions of minimum total cost; however, less attention has been paid to the fact that in many real-life problems the weights associated with soft clauses can be unknown. An example of such a situation is when users cannot provide local preferences but instead express global preferences over complete assignments. In these cases, the acquisition of preferences can be the key to finding the best solution. In this paper, we propose a method to formalize the acquisition of local preferences. The process involves solving the associated system of linear equations for a set of complete assignments and their costs. Furthermore, we formalize the characteristics and number of complete assignments required to acquire all local weights. We present a heuristic algorithm that searches for such assignments and performs promisingly on many benchmarks from the literature.
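    As a rough illustration of the acquisition step (a sketch of the idea, not the paper's algorithm), the following Python snippet recovers soft-clause weights from complete assignments and their observed global costs by solving the induced linear system. The clauses, assignments, and costs below are hypothetical.

```python
# Sketch: each observed cost equals the sum of weights of the soft
# clauses that the corresponding complete assignment leaves unsatisfied,
# so the unknown weights solve a linear system A @ w = costs.
import numpy as np

soft_clauses = [[1, 2], [-1, 3], [-2, -3]]   # hypothetical soft clauses

def unsatisfied(clause, assignment):
    """True if no literal of the clause holds under the assignment."""
    return not any(assignment[abs(l)] == (l > 0) for l in clause)

# Complete assignments paired with their observed global costs.
assignments = [
    {1: True,  2: True,  3: True},
    {1: False, 2: False, 3: False},
    {1: True,  2: False, 3: False},
]
costs = np.array([2.0, 1.0, 3.0])            # hypothetical observed costs

# Row i, column j is 1 iff soft clause j is unsatisfied by assignment i.
A = np.array([[1.0 if unsatisfied(c, a) else 0.0 for c in soft_clauses]
              for a in assignments])
weights, *_ = np.linalg.lstsq(A, costs, rcond=None)
print(weights)   # recovered local weights, one per soft clause
```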

    Integer programming methods for large-scale practical classroom assignment problems

    In this paper we present an integer programming method for solving the Classroom Assignment Problem in University Course Timetabling. We introduce a novel formulation of the problem which generalises existing models and remains tractable even for large instances. The model is validated through computational results based on our experiences at the University of Auckland and on instances from the 2007 International Timetabling Competition. We also expand upon existing results on the computational difficulty of room assignment problems.
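    The paper's formulation is not reproduced here, but a minimal sketch of a basic room-assignment ILP conveys the flavour of such models: each event with a fixed timeslot must receive one sufficiently large room, and a room hosts at most one event per slot. The event and room data are hypothetical; the sketch uses the PuLP package.

```python
import pulp

events = {"E1": (0, 40), "E2": (0, 90), "E3": (1, 25)}  # event: (slot, size)
rooms = {"R1": 50, "R2": 100}                            # room: capacity

prob = pulp.LpProblem("room_assignment", pulp.LpMinimize)
# Binary variable x[e, r] = 1 iff event e is placed in (feasible) room r.
x = {(e, r): pulp.LpVariable(f"x_{e}_{r}", cat="Binary")
     for e in events for r in rooms if rooms[r] >= events[e][1]}

# Objective: minimise wasted seats across all assignments.
prob += pulp.lpSum((rooms[r] - events[e][1]) * x[e, r] for (e, r) in x)

# Each event is assigned exactly one feasible room.
for e in events:
    prob += pulp.lpSum(x[e, r] for r in rooms if (e, r) in x) == 1

# A room hosts at most one event in any timeslot.
for r in rooms:
    for slot in {t for (t, _) in events.values()}:
        prob += pulp.lpSum(x[e, r] for e in events
                           if (e, r) in x and events[e][0] == slot) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (e, r), var in x.items():
    if var.value() == 1:
        print(e, "->", r)
```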

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Foundations of implementations for formal argumentation

    We survey the current state of the art in general techniques, as well as specific software systems, for solving tasks in abstract argumentation frameworks and structured argumentation frameworks, together with approaches for visualizing and analysing argumentation. Furthermore, we discuss challenges and promising techniques such as parallel processing and approximation approaches. Finally, we address the issue of evaluating software systems empirically, with links to the International Competition on Computational Models of Argumentation.
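    As a concrete taste of the reasoning tasks such systems solve, the sketch below computes the grounded extension of an abstract argumentation framework by iterating the characteristic function; the three-argument framework is a hypothetical example, not drawn from the survey.

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of F(S) = {a | every attacker of a is attacked by S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        # An argument is defended if each of its attackers is counter-attacked.
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

# a attacks b, b attacks c: the grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```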

    Emergency response, the built environment and GPS signal quality: simulation and analysis of urban canyons in Quebec City

    The general objective of this investigation is to synthesize the most pertinent information currently available on developing an emergency 9-1-1 system for cellular phones in the North American context. The specific objective of the project is to propose a methodology for determining the average obstruction by buildings that affects GPS satellite signal quality. A statistical model of GPS signal quality, based on a field measurement campaign in the urban districts of Quebec City (Canada), was used to simulate this obstruction effect. The measurements demonstrated spatial variation in signal quality according to the building obstruction of the local sky: an increase in the percentage of obstructed sky led to an increase in the probability of losing GPS signal lock. Continuous maps of GPS signal-loss probability were created for sheets of the Quebec topographic database at the 1:20,000 scale using the Inverse Distance Weighting (IDW) technique of spatial interpolation.
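    For readers unfamiliar with the interpolation technique named above, here is a minimal sketch of IDW: an unsampled point's value is the weighted mean of sampled values, with weights proportional to 1/d**p. The sample locations and signal-loss probabilities are hypothetical.

```python
import math

def idw(x, y, samples, p=2):
    """samples: list of (xi, yi, value); returns interpolated value at (x, y)."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v                    # exactly on a sample point
        w = 1.0 / d ** p                # closer samples weigh more
        num += w * v
        den += w
    return num / den

# Hypothetical signal-loss probabilities measured at three locations.
obs = [(0.0, 0.0, 0.10), (1.0, 0.0, 0.40), (0.0, 1.0, 0.25)]
print(idw(0.5, 0.5, obs))
```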

    Information extraction from unstructured and semi-structured data sources

    Thesis objective: in the context of recently developed large-scale knowledge sources (general ontologies), to investigate possible new approaches to major areas of Information Extraction (IE) and related fields. The thesis overviews the field of Information Extraction and focuses on the task of entity recognition in natural-language texts, a required step for any IE system. Given the availability of large knowledge resources in the form of semantic graphs, an approach that treats the sub-tasks of Word Sense Disambiguation and Named Entity Recognition in a unified manner becomes possible. The first system implemented in this thesis recognizes entities (words, both common and proper nouns) in free text and assigns them ontological classes, effectively disambiguating them. A second implemented system, inspired by the semantic information contained in the ontologies, also attempts a new approach to the classic problem of text classification, showing good results.
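    The following toy sketch illustrates the unified idea in a Lesk-style way (my own simplification, not the thesis's system): detect mentions against a small gazetteer, then disambiguate each candidate ontology entity by word overlap between its gloss and the sentence context. The miniature ontology is a hypothetical illustration.

```python
ontology = {
    "bank": [("bank/finance", {"money", "account", "loan"}),
             ("bank/river", {"water", "shore", "river"})],
    "paris": [("Paris/city", {"france", "capital", "city"})],
}

def link_entities(text):
    tokens = [t.lower().strip(".,") for t in text.split()]
    context = set(tokens)
    scored = []
    for tok in tokens:
        for entity, gloss in ontology.get(tok, []):
            # Score a candidate sense by context/gloss word overlap.
            scored.append((tok, entity, len(context & gloss)))
    # Keep the best-scoring sense for each detected mention.
    best = {}
    for tok, entity, score in scored:
        if score >= best.get(tok, ("", -1))[1]:
            best[tok] = (entity, score)
    return {tok: ent for tok, (ent, _) in best.items()}

print(link_entities("The bank approved the loan for the Paris account."))
```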

    Automated Service Negotiation Between Autonomous Computational Agents

    Multi-agent systems are a new computational approach for solving real-world, dynamic, and open system problems. Problems are conceptualized as a collection of decentralised autonomous agents that collaborate to reach the overall solution. Because of the agents' autonomy, their limited rationality, and the distributed nature of most real-world problems, the key issue in multi-agent system research is how to model interactions between agents. Negotiation models have emerged as suitable candidates to solve this interaction problem due to their decentralised nature, their emphasis on mutual selection of an action, and the prevalence of negotiation in real social systems. The central problem addressed in this thesis is the design and engineering of a negotiation model that autonomous agents can use to share tasks and/or resources. To solve this problem, a negotiation protocol and a set of deliberation mechanisms are presented which together coordinate the actions of a multi-agent system. In more detail, the negotiation protocol constrains the agents' action selection through normative rules of interaction. These rules temporally order communication utterances according to the agents' roles, specifying both who can say what and when. Specifically, the presented protocol is a repeated, sequential model in which offers are iteratively exchanged. Under this protocol, agents are assumed to be fully committed to their utterances, and utterances are private between the two agents. The protocol is distributed and symmetric, and supports both bilateral and multilateral negotiation, as well as distributive and integrative negotiation. In addition to coordinating agent interactions through normative rules, a set of mechanisms is presented that coordinates the agents' deliberation during an ongoing negotiation. Whereas the protocol normatively describes the ordering of actions, the mechanisms describe the possible agent strategies for using the protocol. These strategies are captured by a negotiation architecture composed of responsive and deliberative decision mechanisms. Decision making with the former is based on a linear combination of simple functions, called tactics, which manipulate the utility of deals. The latter mechanisms are subdivided into trade-off and issue-manipulation mechanisms. The trade-off mechanism generates offers that manipulate the value, rather than the overall utility, of an offer. The issue-manipulation mechanism aims to increase the likelihood of an agreement by adding issues to, and removing issues from, the negotiation set. Taken together, these mechanisms represent a continuum of possible decision-making capabilities: ranging from behaviours that are more aware of environmental resources and less of solution quality, to behaviours that attempt to achieve a given solution quality independently of resource consumption. The protocol and mechanisms are empirically evaluated and have been applied to real-world task distribution problems in the domains of business process management and telecommunication management.
The main contributions and novelty of this research are: i) a domain-independent computational model of negotiation that agents can use to support a wide variety of decision-making strategies; ii) an empirical evaluation of the negotiation model for a given agent architecture in a number of different negotiation environments; and iii) the application of the developed model to a number of target domains. An enlarged strategy set is needed because the developed protocol is less restrictive and less constrained than traditional ones, and thus supports strategic interaction models better suited to open systems. Furthermore, because of the large number of environmental possibilities combined with the size of the set of possible strategies, the model has been investigated empirically to evaluate the success of strategies in different environments. These experiments have yielded general guidelines for designers interested in developing strategic negotiating agents. The developed model is grounded in requirements drawn from both the business process management and telecommunication application domains, and it has also been successfully applied to five other real-world scenarios.
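    To make the responsive mechanism described above concrete, here is a minimal sketch of one time-dependent concession tactic of the general family the abstract mentions: the agent's offered value for a single issue moves from its best value toward its reservation value as the deadline approaches. The parameterization (beta, k) and all values are illustrative assumptions, not the thesis's exact definitions.

```python
def time_dependent_offer(t, t_max, best, reservation, beta=1.0, k=0.0):
    """beta < 1 concedes late (Boulware-like), beta > 1 concedes early."""
    alpha = k + (1.0 - k) * (t / t_max) ** (1.0 / beta)
    return best + alpha * (reservation - best)

# A buyer conceding from a price of 10 toward 100 over 10 rounds.
for t in range(0, 11, 2):
    print(t, round(time_dependent_offer(t, 10, 10.0, 100.0, beta=0.5), 2))
```

    A responsive agent would compute one such value per tactic and combine them linearly, with the weights of the combination defining its negotiation behaviour.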

    Efficient Reasoning with Constrained Goal Models

    Goal models have been widely used in computer science to represent software requirements, business objectives, and design qualities. Existing goal modelling techniques, however, have shown limitations of expressiveness and/or tractability in coping with complex real-world problems. In this work, we exploit advances in automated reasoning technologies, notably Satisfiability and Optimization Modulo Theories (SMT/OMT), and we propose and formalize: (i) an extended modelling language for goals, the Constrained Goal Model (CGM), which makes explicit the notions of goal refinement and domain assumption, allows preferences to be expressed between goals and refinements, and allows numerical attributes to be associated with goals and refinements for defining constraints and optimization goals over multiple objective functions; (ii) a novel set of automated reasoning functionalities over CGMs that automatically generate suitable realizations of input CGMs, under user-specified assumptions and constraints, while maximizing preferences and optimizing given objective functions. We are also interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionalities or performance enhancements to cope with a growing user population (requirements evolution), or vendors of a system may want to minimize the costs of implementing requirements changes (evolution requirements). We propose to use CGMs to represent the requirements of a system and to capture requirements changes as incremental operations on a goal model. Evolution requirements are then represented as optimization goals over implementation costs or customer value. We can then exploit the reasoning techniques to derive optimal new specifications for an evolving software system. We have implemented these modelling and reasoning functionalities in a tool, named CGM-Tool, using the OMT solver OptiMathSAT as the automated reasoning backend. Moreover, we have conducted an experimental evaluation on large CGMs to support the claim that our proposal scales well for goal models with thousands of elements. To assess the framework's usability, we have carried out a user-oriented evaluation based on an enquiry method.
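    The following sketch illustrates the kind of OMT encoding a CGM realization amounts to, using Z3's optimizer as a stand-in for the OptiMathSAT backend used by CGM-Tool. The goal model (one root goal with two alternative refinements of different costs) is hypothetical.

```python
from z3 import Bool, Int, Implies, Or, If, Optimize, is_true, sat

root, r1, r2 = Bool("root"), Bool("refine_cheap"), Bool("refine_fast")
cost = Int("cost")

opt = Optimize()
opt.add(root)                          # the root goal must be realized
opt.add(Implies(root, Or(r1, r2)))     # via at least one refinement
# Numerical attribute: each chosen refinement contributes its cost.
opt.add(cost == If(r1, 10, 0) + If(r2, 25, 0))
opt.minimize(cost)                     # optimization goal over the attribute

if opt.check() == sat:
    m = opt.model()
    chosen = [g for g in (r1, r2) if is_true(m[g])]
    print("realization:", chosen, "cost =", m[cost])
```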

    Probabilistic Models for Scalable Knowledge Graph Construction

    In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes, and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data or errors made by the extraction system, together with the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes, and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and I quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
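    A toy sketch of why hinge-loss Markov random fields keep inference tractable: soft-logic rules become convex hinge potentials over variables in [0, 1], so MAP inference is a convex optimization problem. The two rules and their weights below are invented for illustration and are not from the dissertation's models.

```python
from scipy.optimize import minimize

# Hypothetical evidence: extractor confidence that "obama" is a Person.
extractor_person = 0.9

def objective(x):
    person, country = x
    # Rule 1 (weight 1.0): EXTRACTED_PERSON -> Person
    #   squared hinge: max(0, extractor_person - person)**2
    # Rule 2 (weight 2.0): Person -> !Country  (ontological exclusion)
    #   squared hinge: max(0, person + country - 1)**2
    return (1.0 * max(0.0, extractor_person - person) ** 2
            + 2.0 * max(0.0, person + country - 1.0) ** 2)

# Convexity lets an off-the-shelf solver find the MAP truth values.
res = minimize(objective, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print("Person =", round(res.x[0], 2), "Country =", round(res.x[1], 2))
```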