Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback
The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. Amid the ever-increasing emergence of technology which quantifies or interprets physiological processes, we must also remain conscious of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides details not only on novel interaction, but also on how technology influences existing understanding of the body. I worked with vocalists to understand how they use their voice through abstract representations, use mental imagery to adapt to altered auditory feedback, and teach fundamental practice to others. Vocalists use multi-modal imagery, for instance understanding physical sensations through auditory sensations. The understanding of the voice exists in a pre-linguistic representation which draws on embodied knowledge and lived experience from outside contexts. I developed a novel vocal interaction method which uses measurement of laryngeal muscular activations through surface electromyography.
Biofeedback was presented to vocalists through sonification. Acting as an indicator of vocal activity for both conscious and unconscious gestures, this feedback allowed vocalists to explore their movement through sound. This formed new perceptions but also called existing understanding of the body into question. The thesis also uncovers ways in which vocalists are in control of and controlled by their bodies, work with and against them, and feel at one with them at times and totally separate from them at others. I conclude this thesis by demonstrating a nuanced account of human interaction and perception of the body through vocal practice, as an example of how technological intervention enables exploration of, and influence over, embodied understanding. This further highlights the need to centre the human experience in embodied interaction, rather than relying solely on digital interpretation, when introducing technology into these relationships.
Examining the Impact of Provenance-Enabled Media on Trust and Accuracy Perceptions
In recent years, industry leaders and researchers have proposed to use
technical provenance standards to address visual misinformation spread through
digitally altered media. By adding immutable and secure provenance information
such as authorship and edit date to media metadata, social media users could
potentially better assess the validity of the media they encounter. However, it
is unclear how end users would respond to provenance information, or how to
best design provenance indicators to be understandable to laypeople. We
conducted an online experiment with 595 participants from the US and UK to
investigate how provenance information altered users' accuracy perceptions and
trust in visual content shared on social media. We found that provenance
information often lowered trust and caused users to doubt deceptive media,
particularly when it revealed that the media was composited. We additionally
tested conditions where the provenance information itself was shown to be
incomplete or invalid, and found that these states have a significant impact on
participants' accuracy perceptions and trust in media, leading them, in some
cases, to disbelieve honest media. Our findings show that provenance, although
enlightening, is still not a concept well-understood by users, who confuse
media credibility with the orthogonal (albeit related) concept of provenance
credibility. We discuss how design choices may contribute to provenance
(mis)understanding, and conclude with implications for usable provenance
systems, including clearer interfaces and user education. Comment: Accepted to CSCW 202
Proceedings of the 33rd Annual Workshop of the Psychology of Programming Interest Group
This is the Proceedings of the 33rd Annual Workshop of the Psychology of Programming Interest Group (PPIG). This was the first PPIG workshop to be held in person since 2019, following the two online-only PPIGs in 2020 and 2021 during the COVID-19 pandemic. It was also the first PPIG conference to be designed specifically for hybrid attendance. Reflecting the workshop’s theme, it was hosted by the Music Computing Lab at the Open University in Milton Keynes.
A competencies framework of visual impairments for enabling shared understanding in design
Existing work in Human Computer Interaction and accessibility research has long sought to investigate the experiences of people with visual impairments in order to address their needs through technology design and integrate their participation into different stages of the design process. Yet challenges remain regarding how disabilities are framed in technology design and the extent of involvement of disabled people within it. Furthermore, accessibility is often considered a specialised job, and misunderstandings or assumptions about visually impaired people’s experiences and needs occur outside dedicated fields. This thesis presents an ethnomethodology-informed design critique for supporting awareness and shared understanding of visual impairments and accessibility that centres on their experiences, abilities, and participation in early-stage design. This work is rooted in an in-depth empirical investigation of the interactional competencies that people with visual impairments exhibit through their use of technology, which informs and shapes the concept of a Competencies Framework of Visual Impairments. Although past research has established stances for considering the individual abilities of disabled people and other social and relational factors in technology design, by drawing on ethnomethodology and its interest in situated competence this thesis employs an interactional perspective to investigate the practical accomplishments of visually impaired people. Thus, this thesis frames visual impairments in terms of competencies to be considered in the design process, rather than as deficiencies or problems to be fixed through technology. Accordingly, this work favours supporting awareness and reflection rather than the design of particular solutions; such awareness and reflection are also strongly needed to advance accessible design at large.
This PhD thesis comprises two main empirical studies branched into three different investigations. The first and second investigations are based on a four-month ethnographic study with visually impaired participants examining their everyday technology practices. The third investigation comprises the design and implementation of a workshop study developed to include people with and without visual impairments in collaborative reflections about technology and accessibility. As such, each investigation informed the ones that followed, revisiting and refining concepts and design materials throughout the thesis. Although ethnomethodology is the overarching approach running through this PhD project, each investigation has a different focus of enquiry:
• The first is focused on analysing participants’ technology practices and unearthing the interactional competencies enabling them.
• The second is focused on analysing technology demonstrations, which were a pervasive phenomenon recorded during fieldwork, and the work of demonstrating as exhibited by visually impaired participants.
• Lastly, the third investigation defines a workshop approach employing video demonstrations and a deck of reflective design cards as building blocks for enabling shared understanding among people with and without visual impairments from different technology backgrounds; that is, users, technologists, designers, and researchers.
Overall, this thesis makes several contributions to audiences within and outside academia, such as the detailed accounts of some of the main technology practices of people with visual impairments and the methodological analysis of demonstrations in empirical Human Computer Interaction and accessibility research. Moreover, the main contribution lies in the conceptualisation of a Competencies Framework of Visual Impairments from the empirical analysis of interactional competencies and their practical exhibition through demonstrations, as well as the creation and use of a deck of cards that encapsulates the competencies and external elements involved in the everyday interactional accomplishments of people with visual impairments. Finally, all these contributions are brought together in the implementation of the workshop approach, which enabled participants to interact with and learn from each other. Thus, this thesis builds upon and advances contemporary strands of work in Human Computer Interaction that call for re-orienting how visual impairments and, overall, disabilities are framed in technology design, and ultimately for re-shaping the design practice itself.
Geographic information extraction from texts
A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although substantial progress has been made in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data, to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, but also to identify research gaps in geographic information extraction.
An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users
Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. For this, an understanding is needed of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the system's decisions.
This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it is examined how the content, type, and interface of explanations of DNN (black-box) and rule-based systems (white-box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work regarding the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology.
Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation.
To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics.
Steps three to five deal with the design of XAI for concrete applications. Different levels of interactive XAI are presented and investigated in experiments with end-users, using two rule-based systems (i.e., white-box) and four systems based on DNN (i.e., black-box).
These systems are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies were conducted, differing in the interactivity of the XAI system used.
The results show that end-users' trust in and mental models of AI depend strongly on the context of use and the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies further show that end-users across different application contexts of XAI desire interactive explanations.
The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics for integrating explanations into everyday AI systems and thus enabling the comprehensible handling of AI for all people.
Proceedings of the Weizenbaum Conference 2023: AI, Big Data, Social Media, and People on the Move
The conference focused on topics that arise from artificial intelligence (AI) and Big Data deployed on and used by 'people on the move'. We understand the term 'people on the move' in a broad sense: individuals and groups who - by volition or necessity - are changing their lives and/or their structural position in societies. This encompasses the role of automated systems or AI in different forms of geographical and social change, including migration and labour mobility, algorithmic uses of 'location', as well as discourses of and about people on the move.
The Fifteenth Marcel Grossmann Meeting
The three volumes of the proceedings of MG15 give a broad view of all aspects of gravitational physics and astrophysics, from mathematical issues to recent observations and experiments. The scientific program of the meeting included 40 morning plenary talks over 6 days, 5 evening popular talks and nearly 100 parallel sessions on 71 topics spread over 4 afternoons. These proceedings are a representative sample of the very many oral and poster presentations made at the meeting. Part A contains plenary and review articles and the contributions from some parallel sessions, while Parts B and C consist of those from the remaining parallel sessions. The contents range from the mathematical foundations of classical and quantum gravitational theories including recent developments in string theory, to precision tests of general relativity including progress towards the detection of gravitational waves, and from supernova cosmology to relativistic astrophysics, including topics such as gamma ray bursts, black hole physics both in our galaxy and in active galactic nuclei in other galaxies, and neutron star, pulsar and white dwarf astrophysics.
Parallel sessions touch on dark matter, neutrinos, X-ray sources, astrophysical black holes, neutron stars, white dwarfs, binary systems, radiative transfer, accretion disks, quasars, gamma ray bursts, supernovas, alternative gravitational theories, perturbations of collapsed objects, analog models, black hole thermodynamics, numerical relativity, gravitational lensing, large scale structure, observational cosmology, early universe models and cosmic microwave background anisotropies, inhomogeneous cosmology, inflation, global structure, singularities, chaos, Einstein-Maxwell systems, wormholes, exact solutions of Einstein's equations, gravitational waves, gravitational wave detectors and data analysis, precision gravitational measurements, quantum gravity and loop quantum gravity, quantum cosmology, strings and branes, self-gravitating systems, gamma ray astronomy, cosmic rays and the history of general relativity.
Explain, Adapt and Retrain: How to improve the accuracy of a PPM classifier through different explanation styles
Recent papers have introduced a novel approach to explain why a Predictive
Process Monitoring (PPM) model for outcome-oriented predictions provides wrong
predictions. Moreover, they have shown how to exploit the explanations,
obtained using state-of-the-art post-hoc explainers, to identify the most
common features that induce a predictor to make mistakes in a semi-automated
way, and, in turn, to reduce the impact of those features and increase the
accuracy of the predictive model. This work starts from the assumption that
frequent control flow patterns in event logs may represent important features
that characterize, and therefore explain, a certain prediction. Accordingly, in
this paper, we (i) employ a novel encoding able to leverage DECLARE constraints
in Predictive Process Monitoring and compare the effectiveness of this encoding
with state-of-the-art Predictive Process Monitoring encodings, in particular
for the task of outcome-oriented predictions; (ii) introduce a completely
automated pipeline for the identification of the most common features inducing
a predictor to make mistakes; and (iii) show the effectiveness of the proposed
pipeline in increasing the accuracy of the predictive model by validating it on
different real-life datasets.