
    The Complexity of Repairing, Adjusting, and Aggregating of Extensions in Abstract Argumentation

    We study the computational complexity of problems that arise in abstract argumentation in the context of dynamic argumentation, minimal change, and aggregation. In particular, we consider the following problems, where in each case an argumentation framework F and a small positive integer k are given.
    - The Repair problem asks whether a given set of arguments can be modified into an extension by at most k elementary changes (i.e., whether some extension is at distance at most k from the given set).
    - The Adjust problem asks whether a given extension can be modified by at most k elementary changes into an extension that contains a specified argument.
    - The Center problem asks whether, given two extensions at distance k, there is a "center" extension at distance at most k-1 from both.
    We study these problems in the framework of parameterized complexity, taking the distance k as the parameter. Our results cover several different semantics, including the admissible, complete, preferred, semi-stable, and stable semantics.
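
    As a minimal illustration of the distance notion, the sketch below assumes that an elementary change is a single addition or removal of an argument (so distance is the size of the symmetric difference) and brute-forces the Repair question under the admissible semantics; the helper names (is_admissible, repair) are hypothetical and not taken from the paper.

        # Minimal sketch, not the paper's algorithm: distance as symmetric
        # difference, plus a brute-force Repair check under admissible semantics.
        from itertools import combinations

        def is_admissible(args, attacks, S):
            """S is conflict-free and defends each of its members."""
            S = frozenset(S)
            if any((a, b) in attacks for a in S for b in S):      # conflict-free
                return False
            for a in S:                                           # self-defence
                for b in args:
                    if (b, a) in attacks and not any((c, b) in attacks for c in S):
                        return False
            return True

        def distance(S, T):
            """Number of elementary changes (additions/removals) between two sets."""
            return len(set(S) ^ set(T))

        def repair(args, attacks, S, k):
            """Is some admissible set within distance at most k of S? (brute force)"""
            S = set(S)
            for d in range(k + 1):
                for changed in combinations(args, d):
                    if is_admissible(args, attacks, S ^ set(changed)):
                        return True
            return False

        # Tiny framework: a and b attack each other, b attacks c.
        args = {"a", "b", "c"}
        attacks = {("a", "b"), ("b", "a"), ("b", "c")}
        print(repair(args, attacks, {"b", "c"}, 1))  # True: dropping c yields the admissible set {b}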

    From 'scientific revolution' to 'unscientific revolution': an analysis of approaches to the history of generative linguistics

    This paper is devoted to the challenge that generative linguistics poses for linguistic historiography. As a first step, it presents a systematic overview of 19 approaches to the history of generative linguistics. Second, it analyzes these approaches by asking and answering the following questions: (a) To what extent and how are the views at issue biased? (b) What central topics do the approaches discuss, how successfully do they tackle them, and how do the various standpoints converge and diverge? (c) How do the approaches relate to general trends in the philosophy and history of science? The concluding step summarizes our findings with respect to Chomsky’s impact on linguistic historiography.

    Pareto Optimality and Strategy Proofness in Group Argument Evaluation (Extended Version)

    An inconsistent knowledge base can be abstracted as a set of arguments and a defeat relation among them. There can be more than one consistent way to evaluate such an argumentation graph. Collective argument evaluation is the problem of aggregating the opinions of multiple agents on how a given set of arguments should be evaluated. It is crucial not only to ensure that the outcome is logically consistent, but also that it satisfies measures of social optimality and immunity to strategic manipulation, because agents have their individual preferences about what the outcome ought to be. In this paper, we analyze three previously introduced argument-based aggregation operators with respect to Pareto optimality and strategy proofness under different general classes of agent preferences. We highlight fundamental trade-offs between strategic manipulability and social optimality on the one hand, and classical logical criteria on the other. Our results motivate further investigation into the relationship between social choice and argumentation theory. They are also relevant for choosing an appropriate aggregation operator, given which criteria are considered more important and the nature of agents' preferences.
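
    As a minimal sketch of the aggregation setting (not one of the three operators analyzed in the paper), the example below has agents submit in/out/undec labellings of the same framework and takes a naive argument-wise majority, which can fail to be logically consistent; the framework and labellings are invented for illustration.

        # Each agent submits an in/out/undec labelling; a naive argument-wise
        # majority is taken.  This is only meant to show why consistency of the
        # aggregate is a non-trivial requirement.
        from collections import Counter

        def majority_aggregate(labellings):
            """labellings: list of dicts mapping each argument to 'in'/'out'/'undec'."""
            result = {}
            for arg in labellings[0]:
                votes = Counter(lab[arg] for lab in labellings)
                result[arg] = votes.most_common(1)[0][0]   # ties broken arbitrarily
            return result

        # Framework: a <-> b, c <-> d, and both b and d attack e.
        # Each agent's labelling below is a legal complete labelling of that framework.
        agent1 = {"a": "in",  "b": "out", "c": "in",  "d": "out", "e": "in"}
        agent2 = {"a": "in",  "b": "out", "c": "out", "d": "in",  "e": "out"}
        agent3 = {"a": "out", "b": "in",  "c": "in",  "d": "out", "e": "out"}

        print(majority_aggregate([agent1, agent2, agent3]))
        # {'a': 'in', 'b': 'out', 'c': 'in', 'd': 'out', 'e': 'out'}
        # Here e is 'out' although neither of its attackers (b, d) is 'in', so the
        # majority outcome is not itself a legal labelling -- the consistency
        # problem that motivates dedicated aggregation operators.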

    Empirical Evaluation of Abstract Argumentation: Supporting the Need for Bipolar and Probabilistic Approaches

    In dialogical argumentation it is often assumed that the involved parties always correctly identify the intended statements posited by each other, realize all of the associated relations, conform to the three acceptability states (accepted, rejected, undecided), adjust their views when new and correct information comes in, and that a framework handling only attack relations is sufficient to represent their opinions. Although it is natural to make these assumptions as a starting point for further research, removing them, or even acknowledging that such removal should happen, is more challenging for some of these concepts than for others. Probabilistic argumentation is one of the approaches that can be harnessed for more accurate user modelling. The epistemic approach allows us to represent how much a given argument is believed by a given person, offering us the possibility to express more than just three agreement states. It is equipped with a wide range of postulates, including those that do not make any restrictions concerning how initial arguments should be viewed, thus potentially being more adequate than Dung's semantics for handling the beliefs of people who have not fully disclosed their opinions. The constellation approach can be used to represent the views of different people concerning the structure of the framework we are dealing with, including cases in which not all relations are acknowledged or when they are seen differently than intended. Finally, bipolar argumentation frameworks can be used to express both positive and negative relations between arguments. In this paper we describe the results of an experiment in which participants judged dialogues in terms of agreement and structure. We compare our findings with the aforementioned assumptions as well as with the constellation and epistemic approaches to probabilistic argumentation and with bipolar argumentation.
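
    The epistemic approach mentioned above can be sketched as follows: each argument receives a degree of belief in [0, 1] rather than one of three labels, and postulates constrain how beliefs interact with attacks. The check below implements one commonly discussed coherence-style postulate (belief in an argument and in its attacker should not sum to more than 1); the dialogue and belief values are invented for illustration.

        # Minimal sketch of the epistemic approach: graded beliefs per argument
        # and a check of one coherence-style postulate over the attack relation.

        def is_coherent(attacks, belief):
            """belief: dict argument -> degree in [0, 1]; attacks: set of (attacker, target)."""
            return all(belief[a] + belief[b] <= 1.0 for (a, b) in attacks)

        # Dialogue with three arguments: b attacks a, c attacks b.
        attacks = {("b", "a"), ("c", "b")}

        participant_1 = {"a": 0.8, "b": 0.1, "c": 0.9}   # strongly believes a and c
        participant_2 = {"a": 0.6, "b": 0.7, "c": 0.2}   # believes both a and its attacker b

        print(is_coherent(attacks, participant_1))  # True
        print(is_coherent(attacks, participant_2))  # False: beliefs in a and b sum to more than 1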

    Pragmatic constraints on (adverbial) (temporal) quantification

    Even if we can generate a logical form, principles of use may limit the ways in which we can use it. In this paper, I motivate one such principle of use and explore its effects. Much of the discussion involves kinds of sentences that have received attention in the literature on "individual-level predicates".

    Variable types for meaning assembly: a logical syntax for generic noun phrases introduced by most

    This paper proposes a way to compute the meanings associated with sentences containing generic noun phrases corresponding to the generalized quantifier most. We call these generics specimens; they resemble stereotypes or prototypes in lexical semantics. The meanings are viewed as logical formulae that can thereafter be interpreted in your favourite models. To do so, we depart significantly from the dominant Fregean view with a single untyped universe. Indeed, our proposal adopts type theory with some hints from the Hilbert ε-calculus (Hilbert, 1922; Avigad and Zach, 2008) and from medieval philosophy, see e.g. de Libera (1993, 1996). Our type-theoretic analysis bears some resemblance to ongoing work in lexical semantics (Asher 2011; Bassac et al. 2010; Moot, Prévot and Retoré 2011). Our model also applies to classical examples involving a class, or a generic element of this class, which is not uttered but provided by the context. An outcome of this study is that, in the minimalism-contextualism debate (see Conrad 2011), if one adopts a type-theoretical view, terms encode the purely semantic meaning component while their typing is pragmatically determined.
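
    For contrast, the sketch below gives the standard single-universe, generalized-quantifier truth conditions for "most" that the paper departs from; this is textbook background, not the paper's type-theoretic proposal.

        % "Most A are B" in the usual set-theoretic (single-universe) reading:
        % it holds iff more A's are B than are not.
        \[
          \mathrm{most}(A)(B) \;\Longleftrightarrow\; |A \cap B| > |A \setminus B|
        \]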