
    The relation between prior knowledge and students' collaborative discovery learning processes

    In this study we investigate how prior knowledge influences knowledge development during collaborative discovery learning. Fifteen dyads of students (pre-university education, 15-16 years old) worked on a discovery learning task in the physics field of kinematics. The (face-to-face) communication between students was recorded and the interaction with the environment was logged. Based on students' individual judgments of the truth-value and testability of a series of domain-specific propositions, a detailed description of the knowledge configuration for each dyad was created before they entered the learning environment. Qualitative analyses of two dialogues illustrated that prior knowledge influences the discovery learning processes and knowledge development within a pair of students. Assessments of student and dyad definitional (domain-specific) knowledge, generic (mathematical and graph) knowledge, and generic (discovery) skills were related to the students' dialogue in different discovery learning processes. Results show that a high level of definitional prior knowledge is positively related to the proportion of communication regarding the interpretation of results. Heterogeneity with respect to generic prior knowledge was positively related to the number of utterances made in the discovery process categories of hypothesis generation and experimentation. Results of the qualitative analyses indicated that collaboration between extremely heterogeneous dyads is difficult when the high achiever is not willing to scaffold information and work in the low achiever's zone of proximal development.

    Descartes, corpuscles and reductionism: mechanism and systems in Descartes' physiology

    I argue that Descartes explains physiology in terms of whole systems, and not in terms of the size, shape and motion of tiny corpuscles (corpuscular mechanics). It is a standard, entrenched view that Descartes's proper means of explanation in the natural world is through strict reduction to corpuscular mechanics. This view is bolstered by a handful of corpuscular-mechanical explanations in Descartes's physics, which have been taken to be representative of his treatment of all natural phenomena. However, Descartes's explanations of the 'principal parts' of physiology do not follow the corpuscular-mechanical pattern. Des Chene (2001) has identified systems in Descartes's account of physiology, but takes them ultimately to reduce down to the corpuscle level. I argue that they do not. Rather, Descartes maintains entire systems, with components selected from multiple levels of organisation, in order to construct more complete explanations than corpuscular mechanics alone would allow.

    Reasoning by analogy in the generation of domain acceptable ontology refinements

    Refinements generated for a knowledge base often involve the learning of new knowledge to be added to, or to replace, existing parts of the knowledge base. However, the justifiability of a refinement in the context of the domain (domain acceptability) is often overlooked. The work reported in this paper describes an approach to the generation of domain acceptable refinements for incomplete and incorrect ontology individuals through reasoning by analogy using existing domain knowledge. To illustrate this approach, individuals for refinement are identified during the application of a knowledge-based system, EIRA; when EIRA fails in its task, areas of its domain ontology are identified as requiring refinement. Refinements are subsequently generated by identifying and reasoning with similar individuals from the domain ontology. To evaluate this approach, EIRA has been applied to the Intensive Care Unit (ICU) domain. An evaluation (by a domain expert) of the refinements generated by EIRA indicated that this approach successfully produces domain acceptable refinements.
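    As a rough illustration of the analogy step described above (and not EIRA's actual mechanism), the following Python sketch fills a missing property of an ontology individual by majority vote among its most similar, fully specified neighbours. The dictionary-based ontology representation, the similarity measure, and the toy ICU-flavoured values are all assumptions made purely for this example.

    ```python
    # Hypothetical sketch of analogy-based refinement; not EIRA's implementation.
    from collections import Counter

    def similarity(a, b, props):
        """Fraction of shared properties on which individuals a and b agree."""
        shared = [p for p in props if p in a and p in b]
        if not shared:
            return 0.0
        return sum(a[p] == b[p] for p in shared) / len(shared)

    def propose_refinement(incomplete, ontology, missing_prop, k=3):
        """Fill `missing_prop` of `incomplete` by majority vote among the k
        individuals most similar to it that do have a value for that property."""
        props = {p for ind in ontology for p in ind}
        candidates = [ind for ind in ontology if missing_prop in ind]
        candidates.sort(key=lambda ind: similarity(incomplete, ind, props),
                        reverse=True)
        votes = Counter(ind[missing_prop] for ind in candidates[:k])
        return votes.most_common(1)[0][0] if votes else None

    # Toy, invented ICU-style individuals for illustration only.
    ontology = [
        {"drug": "dopamine",      "route": "iv", "effect": "raises_blood_pressure"},
        {"drug": "noradrenaline", "route": "iv", "effect": "raises_blood_pressure"},
        {"drug": "furosemide",    "route": "iv", "effect": "increases_urine_output"},
    ]
    incomplete = {"drug": "adrenaline", "route": "iv"}   # 'effect' is unknown
    print(propose_refinement(incomplete, ontology, "effect"))
    # -> 'raises_blood_pressure' (the value shared by the most similar neighbours)
    ```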

    The Grammar of Interactive Explanatory Model Analysis

    The growing need for in-depth analysis of predictive models has led to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, the majority of methods developed for explainable machine learning focus on a single aspect of the model's behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper presents how different Explanatory Model Analysis (EMA) methods complement each other and why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe potential human-model dialogues. IEMA is implemented in a human-centered framework that adopts interactivity, customizability and automation as its main traits. Combined, these methods enhance the responsible approach to predictive modeling. Comment: 17 pages, 10 figures, 3 tables.
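    To make the idea of juxtaposing complementary explanations concrete, here is a minimal sketch assuming only scikit-learn: it chains a global explanation (permutation importance) with a local what-if (ceteris paribus) profile for a single observation. The dataset, model and two-step sequence are illustrative assumptions and do not reproduce the framework described in the paper.

    ```python
    # Illustrative two-step explanatory sequence (global view, then local view).
    # A sketch of the juxtaposition idea only, not the paper's own framework.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Step 1 (global): which features drive the model overall?
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:3]
    print("Most important features:", list(X.columns[top]))

    # Step 2 (local): for one observation, vary the top feature while keeping
    # everything else fixed (a ceteris paribus / what-if profile).
    obs = X.iloc[[0]].copy()
    feature = X.columns[top[0]]
    for v in np.linspace(X[feature].min(), X[feature].max(), 6):
        what_if = obs.copy()
        what_if[feature] = v
        p = model.predict_proba(what_if)[0, 1]
        print(f"{feature} = {v:8.2f}  ->  P(y=1) = {p:.3f}")
    ```

    Seeing both views side by side is the point: the global ranking tells the analyst where to look, and the local profile shows whether that feature actually moves the prediction for the case at hand.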

    Local Rule-Based Explanations of Black Box Decision Systems

    Recent years have witnessed the rise of accurate but obscure decision systems, which hide the logic of their internal decision processes from users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveal the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. It then derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.
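    For intuition, the following sketch (assuming scikit-learn) mimics this outcome-explanation pipeline at a very high level: it labels a synthetic neighborhood of one instance with a black-box model, fits an interpretable surrogate tree, and reads off a decision rule for that instance. Simple Gaussian perturbation stands in for LORE's genetic neighborhood generation, and counterfactual-rule extraction is omitted, so this is only an approximation of the method.

    ```python
    # LORE-style sketch: explain one black-box prediction with a local surrogate
    # tree. Gaussian perturbation replaces LORE's genetic neighborhood generation,
    # and counterfactual rules are not extracted here.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    x = X[0]                              # the instance to explain
    rng = np.random.default_rng(0)
    # Perturb widely enough that the neighborhood crosses the decision boundary.
    Z = x + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))
    Zy = black_box.predict(Z)             # label the neighborhood with the black box

    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Z, Zy)

    # Read the root-to-leaf path the surrogate follows for x as a decision rule.
    tree = surrogate.tree_
    conditions = []
    for nid in surrogate.decision_path(x.reshape(1, -1)).indices:
        if tree.children_left[nid] == tree.children_right[nid]:   # leaf node
            continue
        feat, thr = tree.feature[nid], tree.threshold[nid]
        op = "<=" if x[feat] <= thr else ">"
        conditions.append(f"x[{feat}] {op} {thr:.2f}")
    print("IF " + " AND ".join(conditions) +
          f" THEN class = {surrogate.predict(x.reshape(1, -1))[0]}")
    ```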