168 research outputs found

    Towards an uncausal practice of visual communication

    Get PDF
    This practice-based PhD introduces the concept of uncausality as both method and methodology to uncover potentialities for action and thought beyond habitual patterns of causality and experience. The concept derives from an investigation of asemic writing’s paradoxical dynamic, also referred to as the ‘asemic effect’. Asemic writing’s formal and gestural resemblance to conventional writing evokes expectations of legibility and semantic meaning. At the same time, any effort to retrieve meaning remains unsuccessful. The asemic effect is detached from its immediate context and explored to offer a dynamic that diverges from the ‘causal pleasure’ of human-computer interaction. The direct and predictable causality between human action and computer reaction not only appeals to, but also consolidates, the human being in their position as the all-knowing agent in the face of an increasingly complex world. This thesis critiques the emphasis on pleasure, power and control that confines human thought and action to the comfortable, protected realm of the already known, hindering any venture into the unknown. The concept of uncausality taps into the potential of an encounter with the unknown, the nonsensical and the dissonant. The contemporary condition that asks humans to re-evaluate their habitual ways of being underlines the urgency of such an exploration. While this research originates from a practice of visual communication with a focus on interactive type design, it follows a transdisciplinary methodology, after Guattari, to weave a heterogeneous net of connections across disciplines and modes of research. It draws on the philosophical explorations of Deleuze and Guattari, their own sources and the thinkers who followed them. This research engages in a practice and process of programming visually abstract real-time human-computer interfaces to explore, test and expand on the concept of uncausality. The iterative nature of the programming process becomes an entry point to create, and encounter, a continuous mutation of the relation between cause and effect, action and reaction. The practice, conscious of the symbiotic relationship between culture and technology, explores an approach to interactivity that maintains human action and thought in a state of physical and intellectual tension. By introducing the concept of uncausality, this research hopes to invigorate practices that keep the human mind elastic in its confrontation with a changing world.

    Building bridges for better machines : from machine ethics to machine explainability and back

    Get PDF
    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and explainability prove to be particularly effective only in symbiosis. In this context, this thesis will demonstrate how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we will outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we will also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we will outline what, exactly, machine explainability research aims to accomplish. Finally, we will use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework using these criteria shows that it is a promising approach and promises to outperform many other explainability approaches that have been developed so far. Funding: DFG CRC 248 (Center for Perspicuous Computing); VolkswagenStiftung (Explainable Intelligent Systems).
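    The abstract above mentions an argumentation-based approach to decision making as a foundation for machine explanations, but gives no implementation details. As a rough, hypothetical illustration (not the framework from the thesis), the sketch below computes the grounded extension of an abstract Dung-style argumentation framework and uses the accepted arguments to justify a decision; all argument names and the grounded_extension helper are invented for this example.

```python
# Minimal sketch of argumentation-based decision making (hypothetical, not
# the thesis framework): arguments attack one another, and the grounded
# extension yields the set of collectively acceptable arguments.

def grounded_extension(arguments, attacks):
    """arguments: iterable of argument labels
       attacks:   set of (attacker, target) pairs"""
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # 'a' is defended if every attacker is itself attacked by an accepted argument
            if all(any((d, x) in attacks for d in accepted) for x in attackers):
                accepted.add(a)
                changed = True
    return accepted

# Toy hiring scenario: 'hire' is attacked by 'biased_feature',
# which in turn is attacked by 'feature_audited'.
arguments = {"hire", "biased_feature", "feature_audited"}
attacks = {("biased_feature", "hire"), ("feature_audited", "biased_feature")}

accepted = grounded_extension(arguments, attacks)
print(accepted)                                          # {'feature_audited', 'hire'}
print("decision: hire" if "hire" in accepted else "decision: reject")
```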

    A multi-level explainability framework for BDI Multi-Agent Systems

    Get PDF
    As software systems become more complex and the level of abstraction increases, programming and understanding behaviour become more difficult. This is particularly evident in autonomous systems that need to be resilient to change and adapt to possibly unexpected problems, as mature tools for managing this understanding are not yet available. A complete understanding of the system is indispensable at every stage of software development, starting with the initial requirements analysis by experts in the field, through to development, implementation, debugging, testing, and product validation. A common and valid approach to increasing understandability in the field of Explainable AI is to provide explanations that can convey the decision-making processes and the motivations behind the choices made by the system. Because explanations serve different use cases and different classes of target users, each with their own requirements and goals, the generated explanations need to be offered at different levels of abstraction. This thesis introduces the idea of multi-level explainability as a way to generate different explanations for the same system at different levels of detail. A low-level explanation tied to the detailed code could help developers in the debugging and testing phases, while a high-level explanation could support domain experts and designers or contribute to the validation phase by aligning the system with the requirements. The model taken as a reference for the automatic generation of explanations is the BDI (Belief-Desire-Intention) model, as it is easier for humans to understand a mentalistic explanation of a system that behaves rationally given its desires and current beliefs. In this work we have prototyped an explainability tool for BDI agents and multi-agent systems that supports multiple levels of abstraction and can be used for different purposes by different classes of users.
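    As a purely illustrative sketch (not the prototype tool described in the thesis), the snippet below shows how one and the same decision of a BDI agent might be explained at two levels of abstraction: a high-level mentalistic explanation in terms of beliefs, desires and intentions for domain experts, and a low-level plan-trace explanation for developers. The agent fields and plan steps are hypothetical.

```python
# Illustrative multi-level explanation of a BDI agent decision
# (hypothetical example, not the thesis prototype).

from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: set
    desires: set
    intention: str                                  # goal currently committed to
    plan_trace: list = field(default_factory=list)  # executed plan steps

    def explain(self, level: str) -> str:
        if level == "high":
            # Mentalistic explanation aimed at domain experts and designers
            return (f"I intend to '{self.intention}' because I desire "
                    f"{sorted(self.desires)} and I believe {sorted(self.beliefs)}.")
        if level == "low":
            # Code-level explanation aimed at developers: the executed plan steps
            steps = "; ".join(self.plan_trace) or "<no steps executed yet>"
            return f"Intention '{self.intention}' ran plan steps: {steps}"
        raise ValueError(f"unknown explanation level: {level!r}")

agent = BDIAgent(
    beliefs={"patient_fell"},
    desires={"keep_patient_safe"},
    intention="call_nurse",
    plan_trace=["check_vitals()", "open_channel('nurse_station')", "send_alert()"],
)
print(agent.explain("high"))
print(agent.explain("low"))
```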

    Understanding Quantum Technologies 2022

    Full text link
    Understanding Quantum Technologies 2022 is a creative-commons ebook that provides a unique 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction and quantum computing energetics), quantum computing hardware (all qubit types, including the quantum annealing and quantum simulation paradigms; history, science, research, implementation and vendors), quantum enabling technologies (cryogenics, control electronics, photonics, component fabs, raw materials), quantum computing algorithms, software development tools and use cases, unconventional computing (potential alternatives to quantum and classical computing), quantum telecommunications and cryptography, quantum sensing, quantum technologies around the world, the societal impact of quantum technologies, and even quantum fake sciences. The main audience is computer science engineers, developers and IT specialists, as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, and particularly quantum computing. This version is an extensive update to the 2021 edition published in October 2021. Comment: 1132 pages, 920 figures, Letter format

    How to Be a God

    Get PDF
    When it comes to questions concerning the nature of Reality, Philosophers and Theologians have the answers. Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong. Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice. That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer. The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves? How should we be gods?

    Using contextual knowledge in interactive fault localization

    Get PDF
    Tool support for automated fault localization in program debugging is limited because state-of-the-art algorithms often fail to provide efficient help to the user. They usually offer a ranked list of suspicious code elements, but the fault is not guaranteed to be found among the highest ranks. In Spectrum-Based Fault Localization (SBFL) – which uses code coverage information of test cases and their execution outcomes to calculate the ranks – the developer has to investigate several locations before finding the faulty code element. Yet, none of the knowledge the developer has a priori or acquires during this process is reused by the SBFL tool. There are existing approaches in which the developer interacts with the SBFL algorithm by giving feedback on the elements of the prioritized list. We propose a new approach called iFL which extends interactive approaches by exploiting the user’s contextual knowledge about the next item in the ranked list (e.g., a statement), with which larger code entities (e.g., a whole function) can be repositioned in the suspiciousness ranking. We also implemented a closely related algorithm proposed by Gong et al., called Talk. First, we evaluated iFL using simulated users, and compared the results to SBFL and Talk. Next, we introduced two types of imperfections into the simulation: the user’s knowledge and confidence levels. On SIR and Defects4J, the results showed notable improvements in fault localization efficiency, even with strong user imperfections. We then empirically evaluated the effectiveness of the approach with real users in two sets of experiments: a quantitative evaluation of the success of using iFL, and a qualitative evaluation of practical uses of the approach with experienced developers in think-aloud sessions.
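    The abstract refers to SBFL ranks computed from per-test coverage and execution outcomes. The sketch below illustrates only that baseline step, using the well-known Ochiai formula to rank code elements by suspiciousness; it is not the iFL algorithm itself, and the element and test identifiers are made up.

```python
# Minimal SBFL sketch (baseline illustration only, not the iFL approach):
# rank code elements by Ochiai suspiciousness computed from per-test
# coverage and pass/fail outcomes.

from math import sqrt

def ochiai_ranking(coverage, outcomes):
    """coverage: dict test_id -> set of covered element ids
       outcomes: dict test_id -> True if the test failed, False if it passed"""
    total_failed = sum(outcomes.values())
    elements = set().union(*coverage.values())
    scores = {}
    for e in elements:
        failed_cov = sum(1 for t, cov in coverage.items() if e in cov and outcomes[t])
        passed_cov = sum(1 for t, cov in coverage.items() if e in cov and not outcomes[t])
        denom = sqrt(total_failed * (failed_cov + passed_cov))
        scores[e] = failed_cov / denom if denom else 0.0
    # Highest suspiciousness first: the list a developer would inspect
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

coverage = {
    "t1": {"stmt1", "stmt2"},   # fails, covers the (hypothetically) buggy stmt2
    "t2": {"stmt1", "stmt3"},   # passes
    "t3": {"stmt2", "stmt3"},   # fails
}
outcomes = {"t1": True, "t2": False, "t3": True}
for element, score in ochiai_ranking(coverage, outcomes):
    print(f"{element}: {score:.2f}")   # stmt2 ranks first with score 1.00
```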

    Beyond Narrative: Exploring Narrative Liminality and Its Cultural Work

    Get PDF
    This book calls for an investigation of the 'borderlands of narrativity' - the complex and culturally productive area where the symbolic form of narrative meets other symbolic logics, such as data(base), play, spectacle, or ritual. It opens up a conversation about the 'beyond' of narrative, about the myriad constellations in which narrativity interlaces with, rubs against, or morphs into the principles of other forms. To conceptualize these borderlands, the book introduces the notion of 'narrative liminality', which the 16 articles utilize to engage literature, popular culture, digital technology, historical artifacts, and other kinds of texts from a time span of close to 200 years.

    Lecture Notes on Interactive Storytelling

    Get PDF
    These lecture notes collect the material used in the advanced course 'Interactive Storytelling' organized biannually at the Department of Future Technologies, University of Turku, Finland. Its aim is to present the key concepts behind interactive digital storytelling (IDS) as well as to review proposed and existing IDS systems. The course focuses on the four partakers of IDS: the platform, the designer, the interactor, and the storyworld. When constructing a platform, the problem is to select an appropriate approach on the spectrum from tightly controlled to emergent storytelling. On this platform, the designer is then responsible for creating the content (e.g., characters, props, scenes and events) for the storyworld, which is then experienced and influenced by the interactor. The structure of, and relationships between, these partakers are explained from a theoretical perspective as well as by using existing IDS systems as examples.
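    As a hypothetical illustration (not taken from the lecture notes), the sketch below models the four partakers as simple data structures: a designer populates the storyworld, the platform hosts it at a chosen position on the controlled-to-emergent spectrum, and the interactor's actions feed back into the storyworld.

```python
# Hypothetical sketch of the four IDS partakers (illustrative only,
# not from the lecture notes).

from dataclasses import dataclass, field
from typing import List

@dataclass
class Storyworld:
    characters: List[str] = field(default_factory=list)
    scenes: List[str] = field(default_factory=list)
    events: List[str] = field(default_factory=list)

@dataclass
class Platform:
    control: str            # position on the spectrum: "controlled" ... "emergent"
    storyworld: Storyworld

class Designer:
    def populate(self, world: Storyworld) -> None:
        # The designer authors the content of the storyworld
        world.characters += ["guide", "rival"]
        world.scenes += ["harbour", "lighthouse"]
        world.events += ["storm_warning"]

class Interactor:
    def act(self, platform: Platform, action: str) -> List[str]:
        # The interactor's action influences the storyworld's event history
        platform.storyworld.events.append(action)
        return platform.storyworld.events

world = Storyworld()
Designer().populate(world)
platform = Platform(control="emergent", storyworld=world)
print(Interactor().act(platform, "player_lights_beacon"))
# ['storm_warning', 'player_lights_beacon']
```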