    Special issue on logics and artificial intelligence

    There is a significant range of ongoing challenges in artificial intelligence (AI) dealing with reasoning, planning, learning, perception and cognition, among others. In this scenario, many-valued logics emerge as a recurring topic in many of the solutions to these AI problems. This special issue presents a brief introduction to the relation between logics and AI and collects recent research work on logic-based approaches in AI.

    On primal-dual schema for the minimum satisfiability problem

    The satisfiability problem was the first problem shown to be NP-complete [8, 28]. In this thesis, we have studied the minimization version of the satisfiability problem, called MinSAT. Given a set of Boolean variables and a set of clauses, each a disjunction of literals, the goal is to find truth values for the variables so that the number of satisfied clauses is minimized. We have used linear programming and the primal-dual method to study the problem. We have constructed the linear program of MinSAT and of its restricted version, and we have proposed two combinatorial methods to solve the dual of the restricted primal of MinSAT. These two algorithms also obtain an integral solution to the dual of the MinSAT problem. Lastly, we have compared our proposed algorithms with the simplex method.
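
    For orientation, the block below gives the textbook 0-1 program whose objective counts satisfied clauses; the LP relaxation is obtained by dropping integrality. The notation is ours, and the thesis's exact formulation may differ.

```latex
% 0-1 program for MinSAT: z_j indicates whether clause C_j is satisfied
\begin{align*}
\min\ \sum_{j=1}^{n} z_j \quad \text{s.t.}\quad
  & z_j \ge x_i     && \text{for every positive literal } x_i \in C_j,\\
  & z_j \ge 1 - x_i && \text{for every negative literal } \lnot x_i \in C_j,\\
  & x_i, z_j \in \{0,1\} && \text{(relax to } 0 \le x_i, z_j \le 1 \text{ for the LP).}
\end{align*}
```

    Whenever some literal of clause C_j is true, the constraints force z_j = 1, so minimizing the sum of the z_j minimizes the number of satisfied clauses.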

    Flow-augmentation III: Complexity dichotomy for Boolean CSPs parameterized by the number of unsatisfied constraints

    We study the parameterized problem of satisfying "almost all" constraints of a given formula F over a fixed, finite Boolean constraint language Γ, with or without weights. More precisely, for each finite Boolean constraint language Γ, we consider the following two problems. In Min SAT(Γ), the input is a formula F over Γ and an integer k, and the task is to find an assignment α: V(F) → {0,1} that satisfies all but at most k constraints of F, or determine that no such assignment exists. In Weighted Min SAT(Γ), the input additionally contains a weight function w: F → ℤ+ and an integer W, and the task is to find an assignment α such that (1) α satisfies all but at most k constraints of F, and (2) the total weight of the violated constraints is at most W. We give a complete dichotomy for the fixed-parameter tractability of these problems: we show that for every Boolean constraint language Γ, either Weighted Min SAT(Γ) is FPT; or Weighted Min SAT(Γ) is W[1]-hard but Min SAT(Γ) is FPT; or Min SAT(Γ) is W[1]-hard. This generalizes recent work of Kim et al. (SODA 2021), which did not consider weighted problems and only considered languages Γ that cannot express implications (u → v) (as used, e.g., to model digraph cut problems). Our result generalizes and subsumes multiple previous results, including the FPT algorithms for Weighted Almost 2-SAT, weighted and unweighted ℓ-Chain SAT, and Coupled Min-Cut, as well as weighted and directed versions of the latter. The main tool used in our algorithms is the recently developed method of directed flow-augmentation (Kim et al., STOC 2022).
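
    To make the implication constraints concrete, here is a tiny brute-force illustration (not the paper's algorithm): every arc of a small digraph becomes a constraint (u → v), the source is pinned to 1 and the sink to 0, and the minimum number of violated implications equals a minimum s-t cut. The toy graph, the names, and the hard-wiring of the terminals are our own simplifications; in the Min SAT(Γ) setting the terminals would be expressed through constraints in Γ or through weights.

```python
from itertools import product

def min_violated_implications(nodes, arcs, source, sink):
    """Brute force over all 0/1 assignments with source = 1 and sink = 0;
    an arc (u, v) is violated when u = 1 and v = 0 (i.e. u -> v fails).
    The minimum count equals a minimum source-sink cut of the digraph."""
    free = [n for n in nodes if n not in (source, sink)]
    best = float("inf")
    for bits in product([0, 1], repeat=len(free)):
        alpha = {source: 1, sink: 0, **dict(zip(free, bits))}
        violated = sum(1 for u, v in arcs if alpha[u] == 1 and alpha[v] == 0)
        best = min(best, violated)
    return best

# toy digraph: s -> a, s -> b, a -> t, b -> t; the minimum s-t cut has size 2
nodes = ["s", "a", "b", "t"]
arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
print(min_violated_implications(nodes, arcs, "s", "t"))  # -> 2
```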

    Clustering of process instances based on conformance checking techniques

    As event data becomes a ubiquitous source of information, data science techniques represent an unprecedented opportunity to analyze and react to the processes that generate this data. Process Mining is an emerging field that bridges the gap between traditional data analysis techniques, like Data Mining, and Business Process Management. One core value of Process Mining is the discovery of formal process models, like Petri nets or BPMN models, which attempt to make sense of the events recorded in logs. Due to the complexity of event data, automated process discovery algorithms tend to create dense process models that are hard for humans to interpret. Fortunately, Conformance Checking, a sub-field of Process Mining, enables relating observed and modeled behavior, so that humans can map these two pieces of process information. Conformance checking relies on alignment artefacts, which associate process models and event logs. Different types of alignment artefacts exist, namely alignments, multi-alignments and anti-alignments. Currently, only alignments are deeply addressed in the literature; an alignment relates the process model to a given process instance. However, because many behaviors exist in logs, identifying one alignment per process instance hinders the readability of the log-to-model relationships.

    The present thesis proposes to exploit conformance checking artefacts for clustering the process executions recorded in event logs, thereby extracting a restricted number of modeled representatives. Data clustering is a common method for extracting information from dense and complex data. By grouping objects into clusters by similarity, data clustering makes it possible to mine simpler datasets that capture the similarities and the differences contained in the data. Using conformance checking artefacts in a clustering approach allows a reliable process model to serve as a baseline for grouping process instances. Hence, the discovered clusters are associated with modeled artefacts, which we call model-based trace variants, and which provide convenient log-to-model explanations.

    From this motivation, we have elaborated a set of methods for computing conformance checking artefacts. The first contribution is the computation of a unique modeled behavior that represents a set of process instances, namely the multi-alignment. Then, we propose several alignment-based clustering approaches that provide clusters of process instances associated with a modeled artefact. Finally, we highlight the interest of anti-alignments for extracting deviations of process models with respect to the log. This latter artefact makes it possible to estimate model precision, and we show its impact on model-based clustering. We provide SAT encodings for all the proposed techniques. Heuristic algorithms are then added to cope with the computing capacity of today's computers, at the expense of losing optimality.
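
    The thesis computes its artefacts through SAT encodings; as a loose, purely illustrative sketch of the underlying idea of grouping traces by behavioral similarity around representatives, the snippet below runs a tiny k-medoids loop over activity sequences using plain edit distance. The trace data, the distance choice, and the function names are our own assumptions, not the thesis's alignment-based method.

```python
from typing import Dict, List, Tuple

Trace = Tuple[str, ...]

def edit_distance(a: Trace, b: Trace) -> int:
    """Classic Levenshtein distance between two activity sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def k_medoids(traces: List[Trace], k: int, iters: int = 20) -> Dict[Trace, List[Trace]]:
    """Greedy k-medoids: the medoids act as cluster representatives."""
    medoids = traces[:k]  # naive initialization
    for _ in range(iters):
        clusters = {m: [t for t in traces
                        if min(medoids, key=lambda c: edit_distance(t, c)) == m]
                    for m in medoids}
        new_medoids = [min(members, key=lambda c: sum(edit_distance(c, t) for t in members))
                       if members else m
                       for m, members in clusters.items()]
        if new_medoids == medoids:  # converged
            return clusters
        medoids = new_medoids
    return {m: [t for t in traces
                if min(medoids, key=lambda c: edit_distance(t, c)) == m]
            for m in medoids}

log = [("a", "b", "c"), ("a", "b", "b", "c"), ("a", "d", "c"), ("a", "d", "d", "c")]
for representative, members in k_medoids(log, k=2).items():
    print(representative, "->", members)
```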

    Solving the 2-MAXSAT Problem in Polynomial Time: A Proof of P = NP

    In the MAXSAT problem, we are given a set V of m variables and a collection C of n clauses over V. We seek a truth assignment that maximizes the number of satisfied clauses. This problem is NP-hard even for its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we discuss a polynomial time algorithm to solve this problem. Its time complexity is bounded by O(n²m³). Hence, we provide a proof of P = NP.
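
    For readers meeting the problem for the first time, here is a tiny brute-force illustration of 2-MAXSAT (exponential in the number of variables, and emphatically not the paper's claimed polynomial-time algorithm). The clause encoding and names are ours.

```python
from itertools import product

def max_2sat_bruteforce(num_vars, clauses):
    """clauses: tuples of literals, where literal +i means variable i is true
    and -i means variable i is false, with variables numbered 1..num_vars."""
    def satisfied(clause, assignment):
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)
    best = 0
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        best = max(best, sum(satisfied(c, assignment) for c in clauses))
    return best

# (x1 v x2), (x1 v -x2), (-x1 v x2), (-x1 v -x2): every assignment satisfies exactly 3
print(max_2sat_bruteforce(2, [(1, 2), (1, -2), (-1, 2), (-1, -2)]))  # -> 3
```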

    Modeling, Learning and Reasoning about Preference Trees over Combinatorial Domains

    In my Ph.D. dissertation, I have studied problems arising in various aspects of preferences: preference modeling, preference learning, and preference reasoning, when preferences concern outcomes ranging over combinatorial domains. Preferences are a major research topic in artificial intelligence (AI) and decision theory, and are closely related to social choice theory as studied by economists and political scientists. In my dissertation, I have exploited emerging connections between preferences in AI and social choice theory. Most of my research is on qualitative preference representations that extend and combine existing formalisms such as conditional preference networks (CP-nets), lexicographic preference trees, answer-set optimization programs, and possibilistic logic; on learning problems that aim at discovering qualitative preference models and predictive preference information from practical data; and on preference reasoning problems centered around qualitative preference optimization and aggregation methods. Applications of my research include recommender systems, decision support tools, multi-agent systems, and Internet trading and marketing platforms.
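
    As a minimal, self-contained illustration of one of the formalisms named above, the sketch below compares two outcomes under a plain (unconditional) lexicographic preference model: attributes are ranked by importance, values within each attribute are ordered, and the first attribute on which the outcomes differ decides. The attribute names and orderings are invented for the example; the dissertation's lexicographic preference trees are richer, with conditional importance and conditional value orders.

```python
def lex_prefers(outcome_a, outcome_b, importance, value_order):
    """True iff outcome_a is strictly preferred to outcome_b lexicographically.
    importance: attributes from most to least important.
    value_order: per attribute, its values from best to worst."""
    for attr in importance:
        ranking = value_order[attr]
        ra, rb = ranking.index(outcome_a[attr]), ranking.index(outcome_b[attr])
        if ra != rb:
            return ra < rb      # earlier in the ranking means more preferred
    return False                # equal on every attribute: no strict preference

# hypothetical dinner-choice domain
importance = ["main", "wine", "dessert"]
value_order = {"main": ["fish", "meat"], "wine": ["white", "red"], "dessert": ["cake", "fruit"]}
a = {"main": "fish", "wine": "red", "dessert": "fruit"}
b = {"main": "fish", "wine": "white", "dessert": "cake"}
print(lex_prefers(a, b, importance, value_order))  # False: same main course, but b has the better wine
print(lex_prefers(b, a, importance, value_order))  # True
```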