4 research outputs found

    Arguments using ontological and causal knowledge (FoIKS 2014)

    Get PDF
    We explore an approach to reasoning about causes via argumentation. We consider a causal model for a physical system and look for arguments about facts. Some arguments are meant to provide explanations of facts, whereas others challenge these explanations, and so on. At the root of the argumentation are causal links ({A_1, ..., A_n} causes B) and ontological links (c_1 is_a c_2). We introduce a logical approach that provides candidate explanations ({A_1, ..., A_n} explains {B_1, ..., B_m}) by resorting to an underlying causal link substantiated with appropriate ontological links. Argumentation is then at work on these explanation links. A case study is developed: the severe storm Xynthia, which devastated a region of France in 2010 with an unaccountably high number of casualties.
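    To make the explanation mechanism concrete, here is a minimal Python sketch of how causal links and is_a ontological links might combine into candidate explanations. The link tables, the storm-flavored fact names, and the matching strategy are illustrative assumptions, not the paper's actual formalism.

    ```python
    # A minimal sketch of explanation-finding from causal and is_a links.
    # Link tables and fact names are illustrative assumptions, not the
    # paper's formalism.

    # Causal links: frozenset of causes -> effect  ({A_1, ..., A_n} causes B)
    CAUSAL = {
        frozenset({"strong_winds", "high_tide"}): "flooding",
    }

    # Ontological links: c1 is_a c2 (followed transitively)
    IS_A = {
        "storm_winds": "strong_winds",
        "spring_tide": "high_tide",
    }

    def is_a(c1, c2):
        """Return True if c1 is_a c2, chaining is_a links transitively."""
        while c1 != c2 and c1 in IS_A:
            c1 = IS_A[c1]
        return c1 == c2

    def candidate_explanations(facts, goal):
        """Yield {required cause: observed fact} maps that explain `goal`
        via a causal link whose causes are substantiated by is_a links."""
        for causes, effect in CAUSAL.items():
            if effect != goal:
                continue
            support = {}
            for cause in causes:
                fact = next((f for f in facts if is_a(f, cause)), None)
                if fact is None:
                    break  # one required cause is unsubstantiated
                support[cause] = fact
            else:
                yield support

    facts = {"storm_winds", "spring_tide"}
    for expl in candidate_explanations(facts, "flooding"):
        print(expl)  # e.g. {'strong_winds': 'storm_winds', 'high_tide': 'spring_tide'}
    ```

    Counterarguments in the paper's sense would then attack such explanation links, for instance by challenging one of the substantiating is_a links.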

    Giving Arguments, Justifications, and Explanations: An Analysis of Giving-Reasons Stories in Child Discourse

    Get PDF
    This paper aims to characterize the activities of arguing, justifying, and explaining in young children through the analysis of natural interactions during play and circle time in kindergarten. First, empirical studies on the three speech acts, drawn mainly from psychology and the educational sciences, are reviewed, and the role assigned to each act is analyzed. Second, fundamental theoretical distinctions around the family of giving-reasons speech acts are drawn, informed by contemporary argumentation theory, the philosophy of action, and discourse analysis. We conclude that empirical studies do not always conceptually distinguish the terms explaining, justifying, and arguing. The theoretical contributions further suggest that reconstructing the wider discourse context is essential for identifying which giving-reasons activity is taking place. Finally, the analysis of authentic interactions highlights the importance of considering the type of cognitive conflict that gives rise to the giving-reasons speech act used by the speaker.

    Requirements engineering for explainable systems

    Get PDF
    Information systems are ubiquitous in modern life and are powered by ever more complex algorithms that are often difficult to understand. Since such systems are part of almost every aspect of human life, the quality of interaction and communication between humans and machines has become increasingly important. Explainability is therefore an essential element of human-machine communication, and it has become an important quality requirement for modern information systems. However, dealing with quality requirements has never been a trivial task. To develop quality systems, software professionals have to understand how to transform abstract quality goals into real-world information system solutions. Requirements engineering provides a structured approach that helps software professionals comprehend, evaluate, and operationalize quality requirements. Explainability has recently regained prominence and been acknowledged as a quality requirement; however, there are currently no requirements engineering recommendations specifically focused on explainable systems. To fill this gap, this thesis investigates explainability as a quality requirement and how it relates to the information systems context, with an emphasis on requirements engineering. The thesis proposes two theories that delineate the role of explainability and establish guidelines for the requirements engineering process of explainable systems. These theories are modeled and shaped through five artifacts. Together, they should help software professionals 1) communicate and achieve a shared understanding of the concept of explainability; 2) comprehend how explainability affects system quality and what role it plays; 3) translate abstract quality goals into design and evaluation strategies; and 4) shape the software development process for explainable systems. The theories and artifacts were built and evaluated through literature studies, workshops, interviews, and a case study. The findings show that the knowledge made available helps practitioners better understand the idea of explainability, facilitating the creation of explainable systems. These results suggest that the proposed theories and artifacts are plausible and practical, and serve as a strong starting point for further extensions and improvements in the search for high-quality explainable systems.
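    As a rough illustration of what translating an abstract explainability goal into design and evaluation strategies could look like, here is a small Python sketch. The data structure and field names are assumptions made for illustration; they do not reproduce the thesis's five artifacts.

    ```python
    # A hypothetical sketch of refining an abstract explainability goal into
    # concrete design and evaluation strategies; names are illustrative
    # assumptions, not the thesis's artifacts.
    from dataclasses import dataclass, field

    @dataclass
    class ExplainabilityRequirement:
        goal: str                                   # abstract quality goal
        audience: str                               # who the explanation is for
        design_strategies: list[str] = field(default_factory=list)
        evaluation_strategies: list[str] = field(default_factory=list)

    req = ExplainabilityRequirement(
        goal="Users understand why a recommendation was made",
        audience="end users",
    )
    req.design_strategies.append("show the main factors behind each recommendation")
    req.evaluation_strategies.append("user study: can participants restate the reason?")
    print(req)
    ```

    The point of such a structure is that each abstract goal carries its own operationalization, so design and evaluation stay traceable to the original requirement.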

    From Requirements to Privacy Explanations: A User-Centered Approach to Usable Privacy

    Get PDF
    In the era of ongoing digitalization, where technology increasingly permeates our society, fundamental human values such as ethics, fairness, privacy, and trust have taken center stage. Digital systems have seamlessly penetrated both personal and professional spheres, offering users swift connectivity, information access, and assistance with their daily routines. In exchange, users willingly share copious amounts of personal data with these systems. This data collection means that users' privacy is increasingly at stake, so educating users about the information being collected and its subsequent processing is key to protecting their privacy. Legislation has established privacy policies as a means of communicating data practices. Unfortunately, these documents often prove fruitless for end users due to their extensive, vague, and jargon-laden nature, replete with legal terminology that often requires specialized knowledge. The result is a lack of user-centric solutions for communicating privacy information transparently and understandably.
    To bridge this gap, this thesis explores the concept of explainability as a crucial quality aspect for improving communication between systems and users concerning data practices in a clear, understandable, and comprehensible manner. To this end, the thesis proposes an approach consisting of three theories, supported by seven artifacts, that outline the role of explainability in the context of privacy and provide guidelines for communicating privacy information. These theories and artifacts are intended to help software professionals (a) identify privacy-relevant aspects, (b) communicate them to users in a contextually relevant and understandable way, and (c) design privacy-aware systems. To validate the efficacy of the proposed approach, evaluations were conducted, including literature reviews, workshops, and user studies. The results endorse the suitability of the developed theories and artifacts, offering a promising foundation for developing privacy-aware, fair, and transparent systems.
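    As a rough illustration of the kind of user-centered communication the thesis argues for, the following Python sketch renders a data practice as a short, contextual privacy explanation. The structure and wording are illustrative assumptions, not the thesis's seven artifacts.

    ```python
    # A hypothetical sketch of a contextual privacy explanation; field names
    # and wording are illustrative assumptions, not the thesis's artifacts.
    from dataclasses import dataclass

    @dataclass
    class DataPractice:
        data: str        # what is collected
        purpose: str     # why it is collected
        recipient: str   # who it is shared with

    def privacy_explanation(p: DataPractice) -> str:
        """Render a data practice as a short, plain-language notice shown
        at the moment the data is actually used."""
        return (f"We use your {p.data} to {p.purpose}, "
                f"and share it only with {p.recipient}.")

    print(privacy_explanation(DataPractice(
        data="location",
        purpose="show nearby results",
        recipient="our map provider",
    )))
    ```

    Compared with a monolithic privacy policy, such per-practice notices can be surfaced in context, which is the usability gap the thesis targets.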