
    Explaining Explanation

    It is not a particularly hard thing to want or seek explanations. In fact, explanations seem to be a large and natural part of our cognitive lives. Children ask why and how questions very early in development and seem genuinely to want some sort of answer, despite our often being poorly equipped to provide them at the appropriate level of sophistication and detail. We seek and receive explanations in every sphere of our adult lives, whether it be to understand why a friendship has foundered, why a car will not start, or why ice expands when it freezes. Moreover, correctly or incorrectly, most of the time we think we know when we have or have not received a good explanation. There is a sense both that a given, successful explanation satisfies a cognitive need, and that a questionable or dubious explanation does not. There are also compelling intuitions about what makes good explanations in terms of their form, that is, a sense of when they are structured correctly.

    The Structured Process Modeling Theory (SPMT): a cognitive view on why and how modelers benefit from structuring the process of process modeling

    After observing various inexperienced modelers constructing a business process model from the same textual case description, we noted great differences in the quality of the produced models. The impression arose that certain quality issues originated from cognitive failures during the modeling process. We therefore developed an explanatory theory that describes the cognitive mechanisms affecting the effectiveness and efficiency of process model construction: the Structured Process Modeling Theory (SPMT). The theory states that modeling accuracy and speed are higher when the modeler adopts an (i) individually fitting, (ii) structured, and (iii) serialized process modeling approach. The SPMT is evaluated against six theory quality criteria.

    Visualising Discourse Coherence in Non-Linear Documents

    To produce coherent linear documents, Natural Language Generation systems have traditionally exploited the structuring role of textual discourse markers such as relational and referential phrases. These coherence markers of the traditional notion of text, however, do not carry over to non-linear documents: a new set of graphical devices is needed, together with formation rules to govern their usage, supported by sound theoretical frameworks. While in linear documents graphical devices such as layout and formatting merely complement textual devices in the expression of discourse coherence, in non-linear documents they play a more important role. In this paper we present our theoretical and empirical work in progress, which explores new possibilities for expressing coherence in the generation of hypertext documents.

    Causality in concurrent systems

    Concurrent systems are systems, whether software, hardware, or even biological, that are characterized by sets of independent actions that can be executed in any order or simultaneously. Computer scientists resort to a causal terminology to describe and analyse the relations between the actions in these systems. However, a thorough discussion of the meaning of causality in such a context has not yet been developed. This paper aims to fill the gap. First, the paper analyses the notion of causation in concurrent systems and attempts to build bridges with the existing philosophical literature, highlighting similarities and divergences between them. Second, the paper analyses the use of counterfactual reasoning in ex-post analysis of concurrent systems (i.e. execution trace analysis).
    Comment: This is an interdisciplinary paper. It addresses a class of causal models developed in computer science from an epistemic perspective, namely in terms of philosophy of causality.
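    As a concrete illustration of the causal vocabulary the paper examines (a minimal sketch, not code from the paper; the action names are invented), an execution trace can be modeled as a partial order: one action causally precedes another if a chain of dependencies links them, and two actions are concurrent exactly when neither precedes the other.

    # Minimal sketch: causal dependence between actions of a concurrent
    # system as a partial order over an execution trace. Action names
    # ("write_x" etc.) are hypothetical.
    from collections import defaultdict

    class Trace:
        def __init__(self):
            self.succ = defaultdict(set)  # action -> actions it directly enables

        def depends(self, a, b):
            """Record that action b causally depends on action a."""
            self.succ[a].add(b)

        def precedes(self, a, b):
            """True iff a causally precedes b (reachability over dependencies)."""
            stack, seen = list(self.succ[a]), set()
            while stack:
                x = stack.pop()
                if x == b:
                    return True
                if x not in seen:
                    seen.add(x)
                    stack.extend(self.succ[x])
            return False

        def concurrent(self, a, b):
            """Independent actions: causally unordered in both directions."""
            return a != b and not self.precedes(a, b) and not self.precedes(b, a)

    t = Trace()
    t.depends("write_x", "read_xy")            # the read needs both writes
    t.depends("write_y", "read_xy")
    print(t.concurrent("write_x", "write_y"))  # True: may run in any order
    print(t.precedes("write_x", "read_xy"))    # True: causal dependence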

    Dynamic Influence Networks for Rule-based Models

    We introduce the Dynamic Influence Network (DIN), a novel visual analytics technique for representing and analyzing rule-based models of protein-protein interaction networks. Rule-based modeling has proved instrumental in developing biological models that are concise, comprehensible, easily extensible, and that mitigate the combinatorial complexity of multi-state and multi-component biological molecules. Our technique visualizes the dynamics of these rules as they evolve over time. Using the data produced by KaSim, an open source stochastic simulator of rule-based models written in the Kappa language, DINs provide a node-link diagram that represents the influence that each rule has on the other rules. That is, rather than representing individual biological components or types, we instead represent the rules about them (as nodes) and the current influence of these rules (as links). Using our interactive DIN-Viz software tool, researchers are able to query this dynamic network to find meaningful patterns about biological processes, and to identify salient aspects of complex rule-based models. To evaluate the effectiveness of our approach, we investigate a simulation of a circadian clock model that illustrates the oscillatory behavior of the KaiC protein phosphorylation cycle.
    Comment: Accepted to TVCG, in press.
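    As a rough sketch of the data structure the abstract describes (hypothetical code, not DIN-Viz; the rule names and the snapshot layout are assumptions), a DIN can be represented as a sequence of time-stamped weighted directed graphs whose nodes are rules and whose edge weights record how strongly one rule currently influences another:

    # Minimal sketch of a dynamic influence network: one weighted
    # directed graph per time window, with rules as nodes.
    from dataclasses import dataclass, field

    @dataclass
    class DINSnapshot:
        time: float
        influence: dict = field(default_factory=dict)  # (src, dst) -> weight

        def add(self, src, dst, weight):
            """Accumulate the influence of rule src on rule dst."""
            self.influence[(src, dst)] = self.influence.get((src, dst), 0.0) + weight

    def strongest_influences(snapshots, rule, k=3):
        """Top-k rules most influenced by `rule`, aggregated over time."""
        total = {}
        for snap in snapshots:
            for (src, dst), w in snap.influence.items():
                if src == rule:
                    total[dst] = total.get(dst, 0.0) + abs(w)
        return sorted(total.items(), key=lambda kv: -kv[1])[:k]

    # Toy example with invented phosphorylation-cycle rule names.
    s0 = DINSnapshot(time=0.0)
    s0.add("KaiC.phos", "KaiC.dephos", +0.8)
    s0.add("KaiC.phos", "KaiA.bind", -0.2)
    s1 = DINSnapshot(time=1.0)
    s1.add("KaiC.phos", "KaiC.dephos", +0.5)
    print(strongest_influences([s0, s1], "KaiC.phos"))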

    The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.
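    One standard way to build the kind of interpretative or approximation model discussed here is a global surrogate: an interpretable model trained to mimic a black-box model's predictions. The sketch below is illustrative, not from the paper; the synthetic data and the choice of a shallow decision tree as surrogate are assumptions. Note that fidelity is measured against the black box's outputs, not the true labels, since the surrogate is meant to support understanding of the model rather than of the world.

    # Minimal global-surrogate sketch (illustrative; synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Train the surrogate on the black box's *predictions*, not on y.
    y_hat = black_box.predict(X)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_hat)

    # Fidelity: how closely the surrogate tracks the black box.
    print("fidelity:", accuracy_score(y_hat, surrogate.predict(X)))
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))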