
    Influence of Context on Decision Making during Requirements Elicitation

    Requirements engineers should strive to gain better insight into decision-making processes. During the elicitation of requirements, decision making influences how stakeholders communicate with engineers, thereby affecting the engineers' understanding of the requirements for the future information system. Empirical studies from Artificial Intelligence offer an adequate groundwork for understanding how decision making is influenced by particular contextual factors. However, no research has validated these empirical studies in the process of collecting the needs of the future system's users. In response, this paper empirically studies factors, initially identified in the AI literature, that influence decision making and communication during requirements elicitation. We argue that the structure of the decision's context should be considered a cornerstone for adequately studying how stakeholders decide whether or not to communicate a requirement. The paper proposes a context framework that categorizes these factors into specific families and supports engineers during the elicitation process.
    Comment: appears in Proceedings of the 4th International Workshop on Acquisition, Representation and Reasoning with Contextualized Knowledge (ARCOE), 2012, Montpellier, France, held at the European Conference on Artificial Intelligence (ECAI-12).

    Defeasible Logic Programming: An Argumentative Approach

    The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results from Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, together with a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP, an argumentation formalism is used to decide between contradictory goals. Queries are supported by arguments that may be defeated by other arguments. A query q succeeds when there is an argument A for q that is warranted, i.e., the argument A that supports q is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP makes it possible to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing agents' knowledge and for providing an argumentation-based reasoning mechanism to agents.
    Comment: 43 pages, to appear in the journal "Theory and Practice of Logic Programming".
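
    To make the dialectical analysis above more concrete, the following Python sketch shows one naive reading of a warrant check: an argument counts as defeated if at least one of its defeaters is itself undefeated, and a query is warranted if some supporting argument survives. The defeaters and supporting maps, and the guard against cyclic attack chains, are illustrative assumptions; DeLP's actual warrant procedure imposes further acceptability conditions on argumentation lines.

        # Minimal sketch of a dialectical warrant check (an assumption-laden
        # illustration, not DeLP's actual procedure). `defeaters` maps each
        # argument to the arguments that defeat it.

        def is_defeated(argument, defeaters, line=()):
            """An argument is defeated if at least one of its defeaters is undefeated."""
            for attacker in defeaters.get(argument, []):
                if attacker in line:  # crude guard against cyclic attack chains
                    continue
                if not is_defeated(attacker, defeaters, line + (argument,)):
                    return True
            return False

        def warranted(query, supporting, defeaters):
            """A query is warranted if some argument supporting it is undefeated."""
            return any(not is_defeated(a, defeaters) for a in supporting.get(query, []))

        # Hypothetical example: A1 supports q, B1 defeats A1, C1 defeats B1.
        defeaters = {"A1": ["B1"], "B1": ["C1"]}
        supporting = {"q": ["A1"]}
        print(warranted("q", supporting, defeaters))  # True: C1 defeats B1, so A1 stands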

    Coping with the Limitations of Rational Inference in the Framework of Possibility Theory

    Possibility theory offers a framework in which both Lehmann's "preferential inference" and the more productive (but less cautious) "rational closure inference" can be represented. However, there are situations where the latter inference does not provide the expected results, either because it cannot produce them or because it yields counter-intuitive conclusions. This state of affairs is not due to the principle of selecting a unique ordering of interpretations (which can be encoded by one possibility distribution), but rather to the absence of constraints expressing pieces of knowledge we implicitly have in mind. This paper advocates that constraints induced by independence information can help find the right ordering of interpretations. In particular, independence constraints can be systematically assumed with respect to formulas composed of literals that do not appear in the conditional knowledge base, or for default rules with respect to situations that are "normal" according to the other default rules in the base. The notion of independence that is used can easily be expressed in the qualitative setting of possibility theory. Moreover, when a counter-intuitive plausible conclusion of a set of defaults is in its rational closure but not in its preferential closure, it is always possible to repair the set of defaults so as to produce the desired conclusion.
    Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
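
    As background for the encoding this abstract relies on (the standard possibilistic semantics of defaults, not the paper's specific repair technique), a default rule "if p then normally q" constrains a possibility distribution pi over interpretations roughly as follows:

        % Standard possibilistic reading of a default "if p then normally q":
        % p-and-q must be strictly more possible than p-and-not-q, where the
        % possibility of a formula is that of its most plausible model.
        \[
          p \rightsquigarrow q
          \quad\Longleftrightarrow\quad
          \Pi(p \wedge q) > \Pi(p \wedge \neg q),
          \qquad
          \Pi(\varphi) = \max_{\omega \models \varphi} \pi(\omega).
        \]

    Selecting one such distribution pi amounts to selecting the unique ordering of interpretations mentioned above; the independence constraints discussed in the abstract further restrict which pi is chosen.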