249 research outputs found

    Do Hard SAT-Related Reasoning Tasks Become Easier in the Krom Fragment?

    Many reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when the structure of the formulas is restricted in certain ways, the situation is more opaque for more involved decision problems. We consider here the CardMinSat problem, which asks, given a propositional formula φ and an atom x, whether x is true in some cardinality-minimal model of φ. This problem is easy for the Horn fragment but, as we show in this paper, remains Θ₂-complete (and thus NP-hard) for the Krom fragment (given by formulas in CNF where clauses have at most two literals). We make use of this fact to study the complexity of reasoning tasks in belief revision and logic-based abduction, and show that, while in some cases the restriction to Krom formulas leads to a decrease in complexity, in others it does not. We therefore also consider the CardMinSat problem under additional restrictions on Krom formulas, towards a better understanding of the tractability frontier of such problems.
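Since CardMinSat is Θ₂-hard even for Krom formulas, no efficient algorithm is expected; for intuition only, a brute-force sketch of the problem's definition (the encoding and function names are ours, not the paper's):

```python
from itertools import product

def models(clauses, n):
    """All satisfying assignments of a CNF over variables 1..n;
    a clause is a tuple of nonzero ints, negative = negated."""
    for bits in product([False, True], repeat=n):
        a = {i + 1: bits[i] for i in range(n)}
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield a

def card_min_sat(clauses, n, x):
    """Is atom x true in some cardinality-minimal model?"""
    ms = list(models(clauses, n))
    if not ms:
        return False
    min_card = min(sum(m.values()) for m in ms)
    return any(m[x] for m in ms if sum(m.values()) == min_card)

# Krom instance (x1 ∨ x2) ∧ (¬x1 ∨ x2): the unique minimal model sets only x2
krom = [(1, 2), (-1, 2)]
print(card_min_sat(krom, 2, 2), card_min_sat(krom, 2, 1))  # → True False
```

The exponential enumeration is exactly what the Θ₂-completeness result says cannot, in general, be avoided for Krom inputs.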

    Complexity of Non-Monotonic Logics

    Over the past few decades, non-monotonic reasoning has developed into one of the most important topics in computational logic and artificial intelligence. Different ways to introduce non-monotonic aspects into classical logic have been considered, e.g., extension with default rules, extension with modal belief operators, or modification of the semantics. In this survey we consider a logical formalism from each of these possibilities, namely Reiter's default logic, Moore's autoepistemic logic and McCarthy's circumscription. Additionally, we consider abduction, where one is not interested in inferences from a given knowledge base but in computing possible explanations for an observation with respect to a given knowledge base. Complexity results for different reasoning tasks for propositional variants of these logics were already studied in the nineties. In recent years, however, a renewed interest in complexity issues can be observed. One current focal approach is to consider parameterized problems and identify reasonable parameters that allow for FPT algorithms. In another approach, the emphasis lies on identifying fragments, i.e., restrictions of the logical language, that allow more efficient algorithms for the most important reasoning tasks. In this survey we focus on this second aspect. We describe complexity results for fragments of logical languages obtained either by restricting the allowed set of operators (e.g., by forbidding negations one might consider only monotone formulae) or by considering only formulae in conjunctive normal form but with generalized clause types. The algorithmic problems we consider are suitable variants of satisfiability and implication in each of the logics, but also counting problems, where one is not only interested in the existence of certain objects (e.g., models of a formula) but asks for their number. (To appear in the Bulletin of the EATCS.)
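The counting problems mentioned at the end can be illustrated by the simplest of them, propositional model counting (#SAT); a minimal brute-force sketch (illustrative only — the fragments studied in the survey admit much faster counting):

```python
from itertools import product

def count_models(clauses, n):
    """Brute-force #SAT: count the satisfying assignments of a CNF
    given as clauses of nonzero ints (negative = negated variable)."""
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n))

# Monotone 2-CNF (x1 ∨ x2) ∧ (x2 ∨ x3)
print(count_models([(1, 2), (2, 3)], 3))  # → 5
```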

    Spatio-Temporal Reasoning About Agent Behavior

    There are many applications where we wish to reason about spatio-temporal aspects of an agent's behavior. This dissertation examines several facets of this type of reasoning. First, given a model of past agent behavior, we wish to reason about the probability that an agent takes a given action at a certain time. Previous work combining temporal and probabilistic reasoning has made either independence or Markov assumptions. This work introduces Annotated Probabilistic Temporal (APT) logic, which makes neither assumption. Statements in APT logic consist of rules of the form "formula G becomes true with a probability in [L,U] within T time units after formula F becomes true" and can be written by experts or extracted automatically. We explore the problem of entailment: finding the probability that an agent performs a given action at a certain time based on such a model. We study this problem's complexity and develop a sound but incomplete fixpoint operator as a heuristic, implementing it and testing it on models automatically extracted from several datasets. Second, agent behavior often results in "observations" at geospatial locations that imply the existence of other, unobserved locations we wish to find ("partners"). In this dissertation, we formalize this notion with "geospatial abduction problems" (GAPs). GAPs try to infer a set of partner locations for a set of observations, given a model representing the relationship between observations and partners for a given agent. This dissertation presents exact and approximate algorithms for solving GAPs, as well as an implemented software package for addressing these problems called SCARE (the Spatio-Cultural Abductive Reasoning Engine). We tested SCARE on counter-insurgency data from Iraq and obtained good results.
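The APT rules quoted above can be read as frequency statements over a trace of world states; a minimal sketch of such a check, assuming a trace given as a list of sets of atoms (the representation and names are hypothetical, not the dissertation's implementation):

```python
def rule_frequency(trace, f, g, t):
    """Fraction of occurrences of atom f that are followed by atom g
    within t time steps; a trace is a list of sets of atoms."""
    starts = [i for i, state in enumerate(trace) if f in state]
    if not starts:
        return None  # frequency undefined: f never occurs
    hits = sum(
        1 for i in starts
        if any(g in trace[j] for j in range(i + 1, min(i + t + 1, len(trace)))))
    return hits / len(starts)

# F occurs at times 0, 2, 5; G follows within 2 steps for the first two
trace = [{"F"}, {"G"}, {"F"}, set(), {"G"}, {"F"}]
print(rule_frequency(trace, "F", "G", 2))  # ≈ 0.667
```

An extracted frequency like this would then be widened into a probability interval [L,U] annotating the rule.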
We then provide an adversarial extension to GAPs as follows: given a fixed set of observations, if an adversary has probabilistic knowledge of how an agent would find a corresponding set of partners, he would place the partners in locations that minimize the expected number of partners found by the agent. We examine this problem, along with its complement, by studying their computational complexity, developing algorithms, and implementing approximation schemes. We also introduce a class of problems called geospatial optimization problems (GOPs). Here the agent has a set of actions that modify attributes of a geospatial region, and he wishes to select a limited number of such actions (with respect to some budget and other constraints) in a manner that maximizes a benefit function. We study the complexity of this problem, develop exact methods, and then develop an approximation algorithm with a guarantee. For some real-world applications, such as epidemiology, there is an underlying diffusion process that also affects geospatial properties. We address this with social network optimization problems (SNOPs): given a weighted, labeled, directed graph, we seek a set of vertices that, if given some initial property, optimizes an aggregate objective with respect to the diffusion. We develop and implement a heuristic that obtains a guarantee for a large class of such problems.
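When the model relating observations to partners is a distance band [α, β] around each observation, geospatial abduction resembles set cover, and a greedy heuristic applies; a sketch under that assumption (not the dissertation's algorithms, and the candidate grid is invented):

```python
import math

def greedy_gap(observations, candidates, alpha, beta):
    """Greedy set-cover heuristic for a geospatial abduction problem:
    every observation needs a partner at distance in [alpha, beta]."""
    def feasible(obs, p):
        return alpha <= math.dist(obs, p) <= beta  # Python 3.8+

    uncovered = set(range(len(observations)))
    partners = []
    while uncovered:
        # pick the candidate explaining the most still-uncovered observations
        best = max(candidates,
                   key=lambda p: sum(1 for i in uncovered
                                     if feasible(observations[i], p)))
        hit = {i for i in uncovered if feasible(observations[i], best)}
        if not hit:
            return None  # infeasible instance
        partners.append(best)
        uncovered -= hit
    return partners

obs = [(0, 0), (4, 0)]
cands = [(2, 0), (0, 3), (9, 9)]
print(greedy_gap(obs, cands, 1, 3))  # → [(2, 0)]
```

The greedy choice inherits the usual logarithmic set-cover approximation guarantee under this reduction.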

    Automating the analysis of stateful feature models

    Thesis downloaded from a University of Seville web page: http://www.lsi.us.es/~trinidad/docs/tesis.pdf
Modeling variability is a major task in developing Software Product Lines (SPLs). Feature Models (FMs) are the most widely used model for this purpose. An FM represents, as a hierarchy of features, the set of decisions that users can take to configure their products.
To date, these decisions are limited to selecting and removing features, preventing decisions on other important elements such as cardinalities and attributes. Moreover, the automated extraction of information from FMs, a.k.a. Automated Analysis of Feature Models (AAFM), is a thriving topic that has caught the attention of researchers for the last twenty years. The AAFM offers a wide range of analysis operations for different purposes. The general approach to solving these analysis operations is to give an operational semantics in terms of declarative languages that allow the extraction of information by means of logic solvers. Following this approach, over 30 analysis operations have been proposed to date. A subset of these operations, so-called explanatory operations, offers the possibility of providing explanations for the relationships that cause certain errors, or for the conflicting user decisions that must be repaired in a configuration. However, of all the explanatory operations proposed to date, only a subset has a formal semantics. In this scenario there are three problems that this thesis faces: first, FMs are not fully configurable, since they do not allow decisions on all of their elements. Second, it is necessary to endow all the explanatory operations with a formal semantics. Third, there is a large number of analysis operations, and the inability of some of them to work with fully configurable FMs raises the need to propose a new formal framework to support them. In this work we start from two conjectures: that there is a correlation between certain explanatory and non-explanatory operations, and that it is possible to interpret both types of operations as Deductive and Abductive Problems (DAPs).
Relying on these conjectures, in this thesis we present three main contributions in order to solve the problems raised: (i) we propose Stateful Feature Models (SFMs) as fully configurable models that enable users to make decisions about all of their elements; (ii) the use of SFMs and their interpretation as DAPs allows us to give a formal semantics for explanatory analysis in a compact manner, interpreting all the operations proposed to date as special cases of two explanatory analysis operations; (iii) as we propose a new model for analysis, we see the opportunity to review the entire catalogue of AAFM operations, proposing a simplified catalogue of operations and a set of composition mechanisms that give flexibility when defining new analysis operations. With these contributions, we believe that this work sets the basis for the Automated Analysis of Stateful Feature Models (AASFM), which solves the limitations identified in this work for the AAFM and simplifies the formalisation, implementation, and testing of the analysis engines.
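The flavour of an AAFM analysis operation can be conveyed by the classic "all products" operation, sketched by brute force over a toy feature model (the model, features, and constraints are invented for illustration; real AAFM engines translate the FM to a logic solver):

```python
from itertools import product

# Toy feature model: root with mandatory child 'gui', optional 'touch'
# requiring 'gui', and an alternative (xor) group {'basic', 'pro'}.
features = ["root", "gui", "touch", "basic", "pro"]

def valid(cfg):
    s = set(cfg)
    return ("root" in s
            and "gui" in s                         # mandatory child of root
            and ("touch" not in s or "gui" in s)   # requires-constraint
            and ("basic" in s) != ("pro" in s))    # xor (alternative) group

def all_products():
    """The classic AAFM operation 'all products', by brute force."""
    for bits in product([0, 1], repeat=len(features)):
        cfg = [f for f, b in zip(features, bits) if b]
        if valid(cfg):
            yield cfg

print(len(list(all_products())))  # → 4
```

Stateful feature models extend configurations like `cfg` with decisions on cardinalities and attributes as well, which is exactly what plain feature selection cannot express.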

    Parametrised enumeration

    In this thesis, we develop a framework of parametrised enumeration complexity. At first, we provide the reader with preliminary notions such as machine models and complexity classes, and argue that these choices are well-founded. Then, we study the interplay and the landscape of these classes and present connections to classical enumeration classes. Afterwards, we translate the fundamental methods of kernelisation and self-reducibility into equivalent techniques in the setting of parametrised enumeration. Subsequently, we illustrate the introduced classes by investigating the parametrised enumeration complexity of Max-Ones-SAT and strong backdoor sets, and sharpen the first result by presenting a dichotomy theorem for Max-Ones-SAT. After this, we extend the definitions of parametrised enumeration algorithms by allowing orders on the solution space. In this context, we study the relations "order by size" and "lexicographic order" for graph modification problems and observe a trade-off between the enumeration delay and the space requirements of enumeration algorithms. These results then yield an enumeration technique for generalised modification problems, which we illustrate by applying it to the problems Closest String, weak and strong backdoor sets, and weighted satisfiability. Eventually, we consider the enumeration of satisfying teams of formulas of poor man's propositional dependence logic. There, we present an enumeration algorithm with FPT delay and exponential space, one of the first enumeration complexity results for a problem in a team logic. Finally, we show how this algorithm can be modified so that only polynomial space is required, at the cost of increasing the delay to incremental FPT time.
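The self-reducibility technique mentioned above can be illustrated by a generator that emits models of a CNF one at a time, extending a partial assignment variable by variable and pruning falsified branches (a sketch of the flavour only, not one of the thesis's algorithms; bounding the delay would additionally require propagation to avoid dead ends):

```python
def falsified(clauses, partial):
    """A clause is falsified iff all its literals are assigned and false."""
    assign = {i + 1: b for i, b in enumerate(partial)}
    return any(
        all(abs(l) in assign and assign[abs(l)] != (l > 0) for l in c)
        for c in clauses)

def enumerate_models(clauses, n, partial=()):
    """Enumerate satisfying assignments of a CNF over variables 1..n by
    self-reducibility: fix one variable per level, prune dead branches."""
    if len(partial) == n:
        yield partial
        return
    for v in (False, True):
        cand = partial + (v,)
        if not falsified(clauses, cand):
            yield from enumerate_models(clauses, n, cand)

# (x1 ∨ x2): the models appear one at a time, not as a batch
print(list(enumerate_models([(1, 2)], 2)))
```

The time between two consecutive `yield`s is the delay that enumeration complexity classes measure.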

    Parameterized aspects of team-based formalisms and logical inference

    Parameterized complexity is an interesting subfield of complexity theory that has received a lot of attention in recent years. Such an analysis characterizes the complexity of (classically) intractable problems by pinpointing the computational hardness to some structural aspects of the input. In this thesis, we study the parameterized complexity of various problems from the area of team-based formalisms as well as logical inference. In the context of team-based formalisms, we consider propositional dependence logic (PDL). The problems of interest are model checking (MC) and satisfiability (SAT). Peter Lohmann studied the classical complexity of these problems as part of his Ph.D. thesis, proving that both MC and SAT are NP-complete for PDL. This thesis addresses the parameterized complexity of these problems with respect to a wealth of different parameterizations. Interestingly, SAT for PDL boils down to the satisfiability of propositional logic, as implied by the downward closure of PDL-formulas. We propose an interesting satisfiability variant (mSAT) asking for a satisfiable team of size m. The problem mSAT restores the 'team semantic' nature of satisfiability for PDL-formulas. We propose another problem (MaxSubTeam) asking for a maximal satisfiable team if a given team does not satisfy the input formula. From the area of logical inference, we consider (logic-based) abduction and argumentation. The problem of interest in abduction (ABD) is to determine whether there is an explanation for a manifestation in a knowledge base (KB). Following Pfandler et al., we also consider two of its variants obtained by imposing additional restrictions on the size of an explanation (ABD≤ and ABD=). In argumentation, our focus is on the argument existence (ARG), relevance (ARG-Rel) and verification (ARG-Check) problems.
The complexity of these problems has been explored already in the classical setting, and each of them is known to be complete for the second level of the polynomial hierarchy (except for ARG-Check, which is DP-complete) for propositional logic. Moreover, the work by Nordh and Zanuttini (resp., Creignou et al.) explores the complexity of these problems with respect to various restrictions on the allowed KBs for ABD (resp., ARG). In this thesis, we carry out a two-dimensional complexity analysis for these problems. The first dimension is the restriction of the KB in Schaefer's framework (the same direction as Nordh and Zanuttini, and Creignou et al.). What differentiates the work in this thesis from the existing research on these problems is that we add another dimension: the parameterization. The results obtained in this thesis are interesting for two reasons. First (from a theoretical point of view), the ideas used in our reductions can help in developing further reductions and proving (in)tractability results for related problems. Second (from a practical point of view), the obtained tractability results might help an agent designing an instance of a problem to come up with one for which the problem is tractable.
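The abduction problem ABD described above can be stated as a brute-force check under the standard logic-based definition: a set E of hypotheses explains a manifestation m if KB ∪ E is consistent and entails m. A sketch (exponential, as the second-level hardness predicts; the CNF encoding is ours):

```python
from itertools import product, combinations

def models(clauses, n):
    """All satisfying assignments of a CNF over variables 1..n."""
    for bits in product([False, True], repeat=n):
        a = {i + 1: bits[i] for i in range(n)}
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield a

def is_explanation(kb, n, m, e):
    """E explains m iff KB ∪ E is consistent and entails m."""
    aug = kb + [(h,) for h in e]  # assert each hypothesis as a unit clause
    ms = list(models(aug, n))
    return bool(ms) and all(a[m] for a in ms)

def abd(kb, n, hyps, m):
    """ABD: find a cardinality-minimal explanation for m, if any."""
    for k in range(len(hyps) + 1):
        for e in combinations(hyps, k):
            if is_explanation(kb, n, m, e):
                return set(e)
    return None

# KB: x1 -> x3 and x2 -> x3; hypotheses {x1, x2}; manifestation x3
kb = [(-1, 3), (-2, 3)]
print(abd(kb, 3, [1, 2], 3))  # → {1}
```

The size-restricted variants ABD≤ and ABD= correspond to bounding or fixing the `k` explored in the outer loop.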

    Motive-Directed Meter

    This dissertation isolates, defines, and explores the phenomenon of Motive-Directed Meter (MDM), which has hitherto received little scholarly attention. MDM is a listening experience evoked by music that is temporally regular enough to encourage metric listening and prediction, but irregular enough to frustrate these behaviors. MDM arises when recurring musical motives suggest parallel metric hearings, but shifting durational spans make metrical parallelism difficult to achieve. Listeners are therefore caught in a state of expectational limbo, urged to continually revise predictions that are recurrently thwarted. To approach this phenomenon, Chapter 1 describes the model of musical meter that undergirds this project, in which meter is viewed as an experiential process of temporal orientation taking place in the mind and body of a listener. Central to this dissertation is the notion that, like temporal orientation itself, the category “metric music” is not binary but graded, permitting degrees of inclusion; this removes the need to determine whether MDM can be considered “metric.” In order to accommodate this fluid conception, a flexible model of meter is introduced, which assesses the entrained listening experience according to four continua: timepoint specificity, pulse periodicity, hierarchic depth, and motivic saturation. These criteria are combined to create the multidimensional Flexible Metric Space, which accommodates all metric experiences, including Motive-Directed Meter, traditionally deep meter, and any other listening experience arising from synchronization with felt pulsation. This graded approach to membership in “metric music” allows analysts to compare and contrast musics from diverse repertoires. After Chapter 1 defines Motive-Directed Meter and the model of meter in which it is situated, Chapter 2 introduces five analytic tools appropriate to MDM. 
Some of these are adapted, some are newly developed, and each captures a different aspect of real-time listening. First, motive maps provide visual representations that summarize and highlight relationships between motives and durational spans, providing an overview of the interplay between these domains. Second, the variability index ranks categories of meter according to entrainment difficulty in isolation. Taken together, these two methods provide a rough picture of the shifting levels of unpredictability across a given passage of MDM. Third, Mark Gotham’s metric relations describe the relative difficulty and quality of connections between adjacent meters, further refining the processual approach undertaken here. Fourth, the metric displacement technique assesses the degree of mismatch between a listener’s expectations and realized musical events, comparing the expected metric depth—roughly, the metric strength—of certain important musical events with the “actual,” realized metric depth of those moments. This technique thereby describes the magnitude of the entrainment shift a listener must undertake in order to adjust to musical events at unexpected temporal positions. Fifth and finally, three expectation-generation methods are used to produce hypothetical sets of predictions intended to roughly approximate listener expectations at various stages of the learning process; these are local inertia, motivic inertia, and prototype methods. The utility of these analytic techniques is highlighted by way of a diverse series of analyses. Chapters 2 and 3 focus on the music of Igor Stravinsky: Chapter 2 analyzes brief passages from the Rite of Spring, the Soldier’s Tale, and Petrushka, while Chapter 3 delves deeply into three large works: the “Sacrificial Dance” and “Glorification of the Chosen One” from the Rite of Spring, and the “Feast at the Emperor’s Palace” from the Song of the Nightingale. 
Chapter 4 then moves beyond Stravinsky to explore the music of a large number of late twentieth- and early twenty-first-century composers and popular music artists working in diverse styles and genres. The artists studied in this chapter include the composers Meredith Monk and Julia Wolfe, and the groups Rolo Tomassi and Mayors of Miyazaki. The analyses comprising this dissertation employ an experiential perspective, combining the techniques outlined above in order to better understand how we as listeners may work to orient ourselves to these pieces of music. In contrast to traditional structuralist approaches, all of the analyses presented in Chapters 2-4, as well as the tools supporting them, are directed at the listening experience. Indeed, this dissertation, from its conceptions about meter and the tools it introduces to the analyses that stem from both, is driven by a belief that the experience of the listener must lie at the heart of the analytic process. Central to all of the analyses is thus this aim: to illustrate how Motive-Directed Meter arises and to elucidate what it feels like to listen to it. It is hoped that this experience-driven approach may serve as a starting point for others seeking to similarly represent musical meter.