10 research outputs found

    Folk Theories, Recommender Systems, and Human-Centered Explainable Artificial Intelligence (HCXAI)

    This study uses folk theories to enhance human-centered “explainable AI” (HCXAI). The complexity and opacity of machine learning have compelled the need for explainability. Consumer services like Amazon, Facebook, TikTok, and Spotify have resulted in machine learning becoming ubiquitous in the everyday lives of the non-expert, lay public. The following research questions inform this study: What are the folk theories of users that explain how a recommender system works? Is there a relationship between the folk theories of users and the principles of HCXAI that would facilitate the development of more transparent and explainable recommender systems? Using the Spotify music recommendation system as an example, 19 Spotify users were surveyed and interviewed to elicit their folk theories of how personalized recommendations work in a machine learning system. Seven folk theories emerged: complies, dialogues, decides, surveils, withholds and conceals, empathizes, and exploits. These folk theories support, challenge, and augment the principles of HCXAI. Taken collectively, the folk theories encourage HCXAI to take a broader view of XAI. The objective of HCXAI is to move towards a more user-centered, less technically focused XAI. The elicited folk theories indicate that this will require adopting principles that include policy implications, consumer protection issues, and concerns about intention and the possibility of manipulation. As a window into the complex user beliefs that inform their interactions with Spotify, the folk theories offer insights into how HCXAI systems can more effectively provide machine learning explainability to the non-expert, lay public.

    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. For this, an understanding is needed of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the systems' decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, the dissertation examines how the content, type, and interface of explanations of DNN (black box) and rule-based systems (white box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work regarding the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of the dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented and, based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users, using two rule-based systems (i.e., white-box) and four systems based on DNN (i.e., black-box).
    These are applied for three purposes: cooperation and collaboration, education, and medical decision support. Six user studies were conducted for this purpose, which differed in the interactivity of the XAI system used. The results show that end-users' trust in and mental models of AI depend strongly on the context of use and the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies also show that end-users in different application contexts of XAI desire interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics to integrate explanations into everyday AI systems and thus enable the comprehensible handling of AI for all people.
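    To make the white-box versus black-box distinction above concrete, the following minimal sketch (not taken from the dissertation; the dataset, models, and libraries are illustrative assumptions) contrasts an intrinsically interpretable rule-based model, whose learned decision rules can be printed directly, with a neural network that requires a post-hoc explanation such as permutation feature importance:

# Minimal sketch: intrinsic (white-box) vs. post-hoc (black-box) explanations.
# Dataset and model choices are illustrative, not those used in the dissertation.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# White box: the learned rules themselves are the explanation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Black box: the network's weights are not human-readable, so an explanation
# is computed post hoc, here as permutation feature importance on held-out data.
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                                  random_state=0))
mlp.fit(X_train, y_train)
result = permutation_importance(mlp, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")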

    Human-Centered Explainable Artificial Intelligence for Anomaly Detection in Quality Inspection: A Collaborative Approach to Bridge the Gap Between Humans and AI

    In the quality inspection industry, the use of Artificial Intelligence (AI) continues to advance, producing safer and faster autonomous systems that can perceive, learn, decide, and act independently. As observed by the researcher interacting with the local energy company over a one-year period, these AI systems' performance is limited by the machines' current inability to explain their decisions and actions to human users. Especially in energy companies, eXplainable AI (XAI) is critical to achieving speed, reliability, and trustworthiness with human inspection workers. Placing humans alongside AI will establish a sense of trust that augments the individual's capabilities at the workplace. To achieve such an XAI system centered around humans, it is necessary to design and develop more explainable AI models. Incorporating XAI systems centered around human workers in the inspection industry brings a significant shift to how visual inspections are conducted. Adding this explainability factor to intelligent AI inspection systems makes the decision-making process more sustainable and trustworthy by introducing a collaborative approach. Currently, there is a lack of trust between inspection workers and AI, creating uncertainty among inspection workers about the use of the existing AI models. To address this gap, the purpose of this qualitative research study was to explore and understand the need for human-centered XAI systems to detect anomalies in quality inspection in energy industries.
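    As a hedged illustration of the kind of anomaly detection with human-readable explanations that such a system would need, the sketch below flags unusual inspection measurements and ranks which features drove each flag; the feature names, synthetic data, and z-score-based explanation are assumptions for demonstration, not the energy company's system:

# Illustrative sketch: flag anomalous inspection measurements and attach a
# simple, human-readable explanation for each flag. Feature names and data
# are assumptions for demonstration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = ["weld_temperature", "seam_width_mm", "vibration_rms"]
normal = pd.DataFrame(rng.normal(size=(500, 3)), columns=features)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

def explain_flag(sample: pd.Series, reference: pd.DataFrame) -> str:
    """Rank features by how far the sample deviates from the reference data
    (z-scores), so an inspector can see why an item was flagged."""
    z = ((sample - reference.mean()) / reference.std()).abs()
    z = z.sort_values(ascending=False)
    return ", ".join(f"{name}: {score:.1f} std devs" for name, score in z.items())

item = pd.Series([4.2, 0.1, -3.8], index=features)  # hypothetical measurement
if model.predict(item.to_frame().T)[0] == -1:       # -1 means anomaly
    print("Anomaly detected ->", explain_flag(item, normal))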

    Diagnosis and Prognosis of Occupational disorders based on Machine Learning Techniques applied to Occupational Profiles

    Work-related disorders have a global influence on people's well-being and quality of life and are a financial burden for organizations because they reduce productivity, increase absenteeism, and promote early retirement. Work-related musculoskeletal disorders, in particular, represent a significant fraction of the total in all occupational contexts. In automotive and industrial settings where workers are exposed to work-related musculoskeletal disorder risk factors, occupational physicians are responsible for monitoring workers' health protection profiles. Occupational technicians consult the Occupational Health Protection Profiles database to understand which exposures to work-related musculoskeletal disorder risk factors must be managed for a given worker. Occupational Health Protection Profiles databases describe what the occupational physician states and which exposures the physician considers necessary to ensure the worker's health protection in terms of their functional work ability. The application of Human-Centered explainable artificial intelligence can support decision making by going from a worker's Functional Work Ability to explanations, integrating explainability into medical restrictions and supporting two decision contexts: prognosis and diagnosis of individual, work-related, and organizational risk conditions. Although previous machine learning approaches provided good predictions, their application in an actual occupational setting is limited because their predictions are difficult to interpret and hence not actionable. This thesis targets the injured body parts for which ability changed in a worker's functional work ability status. On the one hand, artificial intelligence algorithms can help technical teams, occupational physicians, and ergonomists determine a worker's workplace risk via the diagnosis and prognosis of body part injuries; on the other hand, these approaches can help prevent work-related musculoskeletal disorders by identifying which processes are lacking in working condition improvement and which workplaces better match the workers' remaining functional work abilities. A sample of 2025 Occupational Health Protection Profiles was used for the prognosis part (from 2019 to 2020) and 7857 for the diagnosis part, based on Functional Work Ability textual reports in Portuguese from an automotive industry factory. Machine learning-based Natural Language Processing methods were implemented to extract standardized information. The prognosis and diagnosis of Occupational Health Protection Profiles factors were developed into a reliable, trustworthy Human-Centered explainable artificial intelligence system (entitled the Industrial microErgo application). The most suitable regression models to predict the next medical appointment for the injured body regions were based on CatBoost regression, with an R-squared of 0.84 and an RMSLE of 1.23 weeks, respectively. In parallel, CatBoost also provided the best regression models for predicting the next injured body part for most body regions.
    This information can help technical industrial teams understand potential risk factors for Occupational Health Protection Profiles and identify warning signs of the early stages of musculoskeletal disorders.
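    A minimal sketch of the kind of regression and evaluation described above, using the CatBoost library and the RMSLE and R-squared metrics the thesis reports; the feature names and synthetic data are assumptions, not the Occupational Health Protection Profiles dataset:

# Sketch: CatBoost regression predicting weeks until the next medical
# appointment, evaluated with RMSLE and R-squared. Features and data are
# illustrative assumptions only.
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.metrics import mean_squared_log_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "body_part": rng.choice(["shoulder", "wrist", "lumbar_spine"], n),
    "restriction_level": rng.integers(0, 4, n),
})
weeks_to_next_appointment = rng.gamma(shape=2.0, scale=4.0, size=n)  # target

X_train, X_test, y_train, y_test = train_test_split(
    X, weeks_to_next_appointment, random_state=0)

model = CatBoostRegressor(iterations=300, depth=6, verbose=False,
                          cat_features=["body_part"])
model.fit(X_train, y_train)

pred = np.clip(model.predict(X_test), 0, None)  # RMSLE needs non-negative values
rmsle = np.sqrt(mean_squared_log_error(y_test, pred))
print(f"RMSLE: {rmsle:.2f} weeks, R^2: {r2_score(y_test, pred):.2f}")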

    Working in contexts for which transparency is important: A recordkeeping view of Explainable Artificial Intelligence (XAI)

    This paper introduces the topic of Explainable Artificial Intelligence (XAI) and reports on the outcomes of an interdisciplinary workshop exploring it. It reflects on XAI through the frame and concerns of the recordkeeping profession. The paper takes a reflective approach: the origins of XAI are outlined as a way of exploring how it can be viewed and how it is currently taking shape, the workshop and its outcomes are briefly described, and reflections on the process of investigating and taking part in conversations about XAI are offered. The article reinforces the value of undertaking interdisciplinary and exploratory conversations with others. It offers new perspectives on XAI and suggests ways in which recordkeeping can productively engage with it, both as a disruptive force on its thinking and as a set of newly emerging record forms to be created and managed. The value of this paper lies in the way its introduction allows recordkeepers to gain a sense of what XAI is and of the different ways in which they are already engaging, and can continue to engage, with it.

    Automotive Occupational Health Protection Profiles in Prevention of Musculoskeletal Symptoms

    This work was partly supported by the science and technology foundation (FCT), under projects OPERATOR (ref. 04/SI/2019) and PREVOCUPAI (DSAIPA/AI/0105/2019), and Ph.D. grants PD/BDE/142816/2018 and PD/BDE/142973/2018. In automotive and industrial settings, occupational physicians are responsible for monitoring workers' health protection profiles. Workers' Functional Work Ability (FWA) status is used to create Occupational Health Protection Profiles (OHPP). This is a novel longitudinal study in comparison with previous research, which has predominantly relied on the causality and explainability of human-understandable models for industrial technical teams such as ergonomists. The application of artificial intelligence can support decision making by going from a worker's Functional Work Ability to explanations, integrating explainability into medical restrictions and supporting contexts of individual, work-related, and organizational risk conditions. A sample of 7857 OHPP records for the prognosis part, based on Functional Work Ability reports in Portuguese from the automotive industry, was taken from 2019 to 2021. The most suitable regression models to predict the next medical appointment for the protection of workers' body parts were based on CatBoost regression, with an R-squared of 0.84 and an RMSLE of 1.23 weeks (mean error), respectively. The CatBoost algorithm is also used to predict the severity for the next body part in the OHPP. This information can improve our understanding of potential risk factors for OHPP and help identify warning signs of the early stages of musculoskeletal symptoms and work-related absenteeism.
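    For reference, the RMSLE reported above (in weeks) is conventionally defined as below, with y_i the observed and ŷ_i the predicted time to the next appointment; the paper is assumed to use this standard definition:

\[
\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\log(1+\hat{y}_i)-\log(1+y_i)\bigr)^2}
\]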

    A New Coastal Crawler Prototype to Expand the Ecological Monitoring Radius of OBSEA Cabled Observatory

    The use of marine cabled video observatories with multiparametric environmental data collection capability is becoming relevant for ecological monitoring strategies. Their ecosystem surveying can be performed in real time, remotely, and continuously, over consecutive days, seasons, and even years. Unfortunately, as most observatories perform such monitoring with fixed cameras, the ecological value of their data is limited to a narrow field of view, possibly not representative of the local habitat heterogeneity. Docked mobile robotic platforms could be used to extend data collection to larger, and hence more ecologically representative, areas. Among the various state-of-the-art underwater robotic platforms available, benthic crawlers are excellent candidates to perform ecological monitoring tasks in combination with cabled observatories. Although they are normally used in the deep sea, their high positioning stability, low acoustic signature, and low energy consumption, especially during stationary phases, make them suitable for coastal operations. In this paper, we present the integration of a benthic crawler into a coastal cabled observatory (OBSEA) to extend its monitoring radius and collect more ecologically representative data. The extension of the monitoring radius was obtained by remotely operating the crawler to perform back-and-forth drives along specific transects while recording videos with the onboard cameras. The ecological relevance of the monitoring-radius extension was demonstrated by performing a visual census of the species observed with the crawler’s cameras in comparison to the observatory’s fixed cameras, revealing non-negligible differences. Additionally, the videos recorded from the crawler’s cameras during the transects were used to demonstrate an automated photo-mosaic of the seabed for the first time on this class of vehicles. In the present work, the crawler travelled up to 40 m away from the OBSEA, extending the monitoring field of view (FOV) and covering an area approximately 230 times larger than that covered by OBSEA’s fixed camera. The analysis of the videos obtained from the crawler’s and the observatory’s cameras revealed differences in the species observed. Future implementation scenarios are also discussed in relation to mission autonomy to perform imaging across spatial heterogeneity gradients around the OBSEA.
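    As a hedged sketch of how an automated seabed photo-mosaic could be assembled from the crawler's transect videos, the snippet below samples frames from a video and stitches them with OpenCV's high-level stitching API in SCANS mode; the file name, sampling interval, and use of OpenCV are assumptions, not the processing pipeline actually used in the paper:

# Sketch: build a seabed photo-mosaic from frames sampled out of a transect
# video, using OpenCV's high-level stitcher in SCANS mode (suited to planar,
# downward-looking imagery). File name and sampling rate are assumptions.
import cv2

def sample_frames(video_path: str, every_n: int = 30):
    """Grab every n-th frame from the transect video."""
    frames, cap, i = [], cv2.VideoCapture(video_path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

frames = sample_frames("crawler_transect.mp4", every_n=30)
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode for mosaics
status, mosaic = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("seabed_mosaic.png", mosaic)
else:
    print("Stitching failed with status", status)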

    Building trust in AI Systems

    Artificial Intelligence has become integrated into people's daily lives, while at the same time AI-enabled services and applications are widely considered distrustful. In addition, when using such services or applications, it can be impossible to verify whether the user is interacting with a human or a machine, and what the information the user receives, such as instructions or suggestions, is based on. Because the majority of users are not experts in Machine Learning, let alone Deep Learning, it is important to create trustworthy AI services that understand humans and also explain themselves in an easily understandable way. This type of approach to Artificial Intelligence is called explainable human-centered thinking, and it has been proposed as a solution to the distrust problem in human-AI interaction. This research is a qualitative study of the user experience (UX) of different AI-based applications and services used in daily-life activities such as navigation or checking grammar mistakes. The goal is to find UX elements that affect the user's trust perception of a service or application and to create a unified list of these elements based on previous literature. This list can be used for designing better, explainable, and human-centered AI, and it also fulfills its purpose by gathering together and validating research in the field. The literature review introduces the central concepts of the study, such as AI, trust, and user experience, and collects the UX elements already identified in previous research as affecting perceived trust, for example in web design. The study itself proceeded in three phases: first, AI-based applications were listed based on type definitions from the literature and on estimates of user numbers; second, the selected applications and services were ranked from most to least trusted on the basis of a short survey; finally, in-depth interviews based on the critical incident technique were conducted with the survey respondents, using open questions about an event in which the user felt trust or distrust while using the selected AI-based application or service. The results were analyzed by thematizing the observed UX elements that increase trust or reduce distrust and comparing them with previous findings on trust in the field. The results showed that even in the most strongly trusted services and applications, users notice problems such as privacy issues or missing explainability. However, many of the commonly used services provide added value for their users and are relatively better than other similar services, which is why participants kept using them despite concerns about, for example, their own privacy or the system's unclear data usage. Based on these results, this study also critically discusses whether implementing HAI (Human-Centered Artificial Intelligence) is merely a UX-design problem or rather part of sharing knowledge about trustworthy AI, so that users do not accept non-transparent functions and data usage but instead demand trustworthy and open practices that are explained to them through different interface elements.

    Human-Centered Explainable Artificial Intelligence for Marine Autonomous Surface Vehicles

    Explainable Artificial Intelligence (XAI) for Autonomous Surface Vehicles (ASVs) addresses developers’ needs for model interpretation, understandability, and trust. As ASVs approach wide-scale deployment, these needs are expanded to include end user interactions in real-world contexts. Despite recent successes of technology-centered XAI for enhancing the explainability of AI techniques to expert users, these approaches do not necessarily carry over to non-expert end users. Passengers, other vessels, and remote operators will have XAI needs distinct from those of expert users targeted in a traditional technology-centered approach. We formulate a concept called ‘human-centered XAI’ to address emerging end user interaction needs for ASVs. To structure the concept, we adopt a model-based reasoning method for concept formation consisting of three processes: analogy, visualization, and mental simulation, drawing from examples of recent ASV research at the Norwegian University of Science and Technology (NTNU). The examples show how current research activities point to novel ways of addressing XAI needs for distinct end user interactions and underpin the human-centered XAI approach. Findings show how representations of (1) usability, (2) trust, and (3) safety make up the main processes in human-centered XAI. The contribution is the formation of human-centered XAI to help advance the research community’s efforts to expand the agenda of interpretability, understandability, and trust to include end user ASV interactions
