A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness
People increasingly use videos on the Web as a source for learning. To
support this way of learning, researchers and developers are continuously
developing tools, proposing guidelines, analyzing data, and conducting
experiments. However, it is still not clear what characteristics a video should
have to be an effective learning medium. In this paper, we present a
comprehensive review of 257 articles on video-based learning for the period
from 2016 to 2021. One of the aims of the review is to identify the video
characteristics that have been explored by previous work. Based on our
analysis, we suggest a taxonomy which organizes the video characteristics and
contextual aspects into eight categories: (1) audio features, (2) visual
features, (3) textual features, (4) instructor behavior, (5) learner
activities, (6) interactive features (quizzes, etc.), (7) production style, and
(8) instructional design. Also, we identify four representative research
directions: (1) proposals of tools to support video-based learning, (2) studies
with controlled experiments, (3) data analysis studies, and (4) proposals of
design guidelines for learning videos. We find that the most explored
characteristics are textual features followed by visual features, learner
activities, and interactive features. Text of transcripts, video frames, and
images (figures and illustrations) are most frequently used by tools that
support learning through videos. The learner activity is heavily explored
through log files in data analysis studies, and interactive features have been
frequently scrutinized in controlled experiments. We complement our review by
contrasting research findings that investigate the impact of video
characteristics on learning effectiveness, report on tasks and technologies
used to develop tools that support learning, and summarize trends in design
guidelines for producing learning videos.
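The review's eight-category taxonomy lends itself to a simple lookup structure. A minimal sketch in Python (the example characteristics listed under each category are illustrative assumptions, not the paper's full coding scheme):

```python
from typing import Optional

# Illustrative encoding of the review's eight-category taxonomy; the example
# members under each category are assumptions, not the paper's coding scheme.
VIDEO_CHARACTERISTICS = {
    "audio_features": ["narration pace", "background music"],
    "visual_features": ["video frames", "figures and illustrations"],
    "textual_features": ["transcripts", "captions"],
    "instructor_behavior": ["gaze", "gestures"],
    "learner_activities": ["pausing", "seeking", "log events"],
    "interactive_features": ["quizzes", "in-video questions"],
    "production_style": ["screencast", "talking head"],
    "instructional_design": ["segmenting", "signaling"],
}

def category_of(characteristic: str) -> Optional[str]:
    """Return the taxonomy category that contains a given characteristic."""
    for category, members in VIDEO_CHARACTERISTICS.items():
        if characteristic in members:
            return category
    return None
```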
AI and the Writer: How Language Models Support Creative Writers
Writing underlies a vast landscape of cultural artifacts, from poetry to journalism to scientific papers. While technology has been used to reduce the cognitive load of writing with accurate next word prediction, recent developments in natural language generation may prove able to go beyond predicting what we were going to write anyway, and give us new ideas relevant to a particular writing task. This proposal, of computers giving writers valuable ideas, is quite new in the history of writing tools, and has so far proven illusory.
Evaluations of existing story-continuation systems, which present writers with options for the next sentence in their story, have continually found that suggested sentences are nonsensical, inconsistent with what's already written, or a deviation from the writer's intended direction. Thus, it's not understood if---and if so, how---generative language technologies can support writers with complex writing tasks. I address this challenge by focusing on more specific goals than story continuation, and demonstrate that the methods I develop generate coherent, cogent suggestions that writers are able to use in a variety of settings and writing tasks.
In this thesis, I consider writing tasks that are constrained by some external expectation, such as the logic of a metaphor or the details of a technical topic, but also require creativity to write a sentence or paragraph that is novel, surprising, and engaging to read. I introduce a design space, based on the cognitive process model of writing, that reveals how constrained, creative writing tasks are not supported by current writing support tools. I then present methods, embedded in systems, to support two challenging constrained, creative writing tasks.
With `Metaphoria', I present a method to aid in metaphor writing by generating metaphorical connections between two concepts. With `Sparks', I present a method to aid in science writing by generating sentences that make a connection between a technical topic and typical reader interests. These systems demonstrate that computation has the power to support constrained, creative tasks, and outline how they aid in inspiration, translation, and perspective.
Finally, through a qualitative study with a range of creative writers, I uncover the social dynamics that modulate how writers respond to such generative writing support. Collectively, this work demonstrates new methods for using technology to support creative writers, and presents theoretical results that describe both how and why writers make use of such technologies.
An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users
Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: Their complexity makes their decisions no longer comprehensible to humans - they are black-boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsic explainable AI approaches, so-called white-boxes (e.g., rule-based systems), were dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities in the design of AI interfaces. For this, an understanding is needed of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the system's decisions.
This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it is examined how the content, type, and interface of explanations of DNN (black-box) and rule-based systems (white-box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work regarding the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology.
Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation.
To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics.
Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users. For this purpose, two rule-based systems (i.e., white-box) and four systems based on DNN (i.e., black-box) are used.
These are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies were conducted, differing in the interactivity of the XAI system used.
The results show that end-users' trust in and mental models of AI depend strongly on the context of use and the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies also show that end-users across different application contexts of XAI desire interactive explanations.
The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics to integrate explanations into everyday AI systems and thus enable the comprehensible handling of AI for all people.
Developing Persona Analytics Towards Persona Science
Much of the reported work on personas suffers from a lack of empirical evidence. To address this issue, we introduce Persona Analytics (PA), a system that tracks how users interact with data-driven personas. PA captures users’ mouse and gaze behavior to measure their interaction with algorithmically generated personas and their use of the features of an interactive persona system. Measuring these activities yields an understanding of persona users’ behavior, which is required for the quantitative measurement of persona use and for obtaining scientifically valid evidence. Conducting a study with 144 participants, we demonstrate how PA can be deployed for remote user studies during exceptional times when physical user studies are difficult, if not impossible.
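The interaction telemetry PA records (mouse, gaze, feature use) can be pictured as a stream of timestamped events per session. A minimal sketch, assuming a simplified schema that is not PA's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionEvent:
    """One tracked user action; the fields are illustrative, not PA's schema."""
    timestamp: float   # seconds since session start
    kind: str          # e.g. "mouse_move", "gaze_fixation", "feature_use"
    persona_id: str    # which generated persona was on screen
    x: float = 0.0     # screen coordinates, where applicable
    y: float = 0.0

@dataclass
class Session:
    """All events recorded for one participant in a remote study."""
    user_id: str
    events: List[InteractionEvent] = field(default_factory=list)

    def log(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def events_on_persona(self, persona_id: str) -> int:
        """How many tracked interactions targeted a given persona."""
        return sum(1 for e in self.events if e.persona_id == persona_id)
```

Aggregating such per-persona counts is the kind of quantitative measure of persona use that the authors argue is needed for scientifically valid evidence.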
User Experience for Elephants: Researching Interactive Enrichment through Design and Craft
This thesis explores the challenge for humans of designing and crafting interactive enrichment systems for elephants housed in captivity.
Captive elephants may have limited opportunity to express a full range of natural behaviours and therefore benefit from well-designed environmental enrichment. We asked whether technology could support the design and development of novel enrichment for elephants and investigated what kinds of technology-enabled systems would hold their interest. Crucially, these systems were designed to provide the elephants with opportunities to make and enact choices – giving them more control over what happened in their environment.
After researching wild elephant lifestyle and characteristics, our fieldwork started with an ethnographic study of captive elephants. We then followed an exploratory approach: Research through Design and Craft. Over several years, a range of interactive systems were crafted for elephants. Each device included embedded technology that enabled elephant interactions to be captured and mapped to associated system outputs. Elephants and their keepers were involved in this cyclical process, and the elephants’ reactions to the devices were noted and interpreted, giving rise to insights that informed the subsequent designs.
Analysis of the design and development of the enrichment systems revealed important interface attributes and design considerations that we describe in this document. Finally, we offer five contributions for the ACI community: (i) Research through Design and Craft methodology, which was developed and tested over several years; (ii) ZooJam workshops, which were organised with colleagues over three years; (iii) six key principles of interaction design for ACI development – consistency, differentiation, graduation, specificity, multiplicity and affordance; (iv) an exploration of More than Human Aesthetics focusing on performative aesthetics; (v) a prototype deck of Concept Craft Cards that share theoretical and practical topics with other designers and developers.
Scalable and Quality-Aware Training Data Acquisition for Conversational Cognitive Services
Dialog Systems (or simply bots) have recently become a popular human-computer interface for performing users' tasks by invoking the appropriate back-end APIs (Application Programming Interfaces) based on the user's request in natural language. Building task-oriented bots, which aim at performing real-world tasks (e.g., booking flights), has become feasible with the continuous advances in Natural Language Processing (NLP), Artificial Intelligence (AI), and the countless number of devices that allow third-party software systems to invoke their back-end APIs.
Nonetheless, bot development technologies are still in their preliminary stages, with several unsolved theoretical and technical challenges stemming from the ambiguous nature of human languages. Given the richness of natural language, supervised models require a large number of user utterances paired with their corresponding tasks -- called intents.
To build a bot, developers need to manually translate APIs to utterances (called canonical utterances) and paraphrase them to obtain a diverse set of utterances. Crowdsourcing has been widely used to obtain such datasets,
by paraphrasing the initial utterances generated by the bot developers for each task. However, there are several unsolved issues. First, generating canonical utterances requires manual efforts, making bot development both expensive and hard to scale. Second, since crowd workers may be anonymous and are asked to provide open-ended text (paraphrases), crowdsourced paraphrases may be noisy and incorrect (not conveying the same intent as the given task).
This thesis first surveys the state-of-the-art approaches for collecting large sets of training utterances for task-oriented bots. Next, we conduct an empirical study to identify quality issues of crowdsourced utterances (e.g., grammatical errors, semantic completeness). Moreover, we propose novel approaches for identifying unqualified crowd workers and eliminating malicious workers from crowdsourcing tasks. In particular, we propose a novel technique to promote the diversity of crowdsourced paraphrases by dynamically generating word suggestions while crowd workers are paraphrasing a particular utterance. Moreover, we propose a novel technique to automatically translate APIs to canonical utterances. Finally, we present our platform to automatically generate bots out of API specifications. We also conduct thorough experiments to validate the proposed techniques and models.
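The diversity-promoting suggestion idea can be illustrated simply: while a worker types a paraphrase, surface candidate words that are rare in the paraphrases collected so far. A toy sketch of that ranking (the thesis's actual technique generates suggestions dynamically and is more sophisticated):

```python
from collections import Counter

def suggest_words(collected_paraphrases, vocabulary, k=5):
    """Rank vocabulary words by how rarely they appear in the paraphrases
    collected so far, nudging the next worker toward more diverse wording.

    Illustrative only; a candidate `vocabulary` would come from a language
    resource in practice."""
    counts = Counter(
        word
        for paraphrase in collected_paraphrases
        for word in paraphrase.lower().split()
    )
    # Least-used words first; ties broken alphabetically.
    return sorted(vocabulary, key=lambda w: (counts[w.lower()], w))[:k]

suggest_words(
    ["book a flight to paris", "reserve a flight to paris"],
    ["book", "reserve", "schedule", "arrange", "flight"],
    k=2,
)  # → ['arrange', 'schedule']
```

Words already saturated in the collected set ("book", "flight") sink to the bottom, so suggestions steer workers toward unused phrasings.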
Enhancing explainability and scrutability of recommender systems
Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm’s behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Besides, in the event of receiving undesirable content, explanations can contain valuable information as to how the system’s behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
• We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users’ profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
• We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible for users, because they present subsets of the user’s prior actions responsible for the received recommendations.
PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.
• We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability and, in turn, the recommendation models by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
We evaluate all proposed models and methods with real user studies and demonstrate their benefits in achieving explainability and scrutability in recommender systems.
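PRINCE's counterfactual setup asks for a small subset of the user's past actions whose removal changes the top recommendation. A naive greedy illustration of that idea (PRINCE itself uses a polynomial-time algorithm over personalized PageRank, which this sketch does not reproduce; `recommend` is an assumed black box):

```python
def greedy_counterfactual(actions, recommend):
    """Greedily remove past actions until the top recommendation flips.

    `recommend(actions)` is an assumed black box returning the top item for a
    user with the given action history. Returns the removed actions as a
    candidate (not necessarily minimal) counterfactual explanation, or None
    if no removal changes the recommendation."""
    original = recommend(actions)
    remaining = list(actions)
    removed = []
    for action in list(actions):  # simplified left-to-right removal order
        remaining.remove(action)
        removed.append(action)
        if recommend(remaining) != original:
            return removed
    return None

# Toy recommender: suggest "A" unless "likeB" actions outnumber "likeA" actions.
recommend = lambda acts: "A" if acts.count("likeA") >= acts.count("likeB") else "B"
greedy_counterfactual(["likeA", "likeA", "likeB"], recommend)  # → ['likeA', 'likeA']
```

The returned subset ("these likes of A caused the recommendation of A") is exactly the user-comprehensible form of explanation the abstract describes, though PRINCE guarantees minimality where this sketch does not.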
Impact and key challenges of insider threats on organizations and critical businesses
The insider threat has consistently been identified as a key threat to organizations and governments. Understanding the nature of insider threats and the related threat landscape can help in forming mitigation strategies, including non-technical means. In this paper, we survey and highlight challenges associated with the identification and detection of insider threats in both public and private sector organizations, especially those that are part of a nation’s critical infrastructure. We explore the utility of the cyber kill chain to understand insider threats, as well as understanding the underpinning human behavior and psychological factors. The existing defense techniques are discussed and critically analyzed, and improvements are suggested, in line with the current state-of-the-art cyber security requirements. Finally, open problems related to the insider threat are identified and future research directions are discussed.
Human-computer co-creativity: designing and evaluating co-creative poetry-writing systems and modelling the co-creative process
Human-computer co-creativity examines creative collaboration between humans and artificially intelligent computational agents. Human-computer co-creativity researchers assume that instead of using computational systems to merely automate creative tasks, computational creativity methods can be leveraged to design computational collaborators capable of sharing creative responsibility with a human collaborator. This has potential for extending both human and computational creative capability. This thesis focuses on the case of one human and one computational collaborator. More specifically this thesis studies how children collaborate with a computational collaborator called the Poetry Machine in the linguistically creative task of writing poems.
This thesis investigates three topics related to human-computer co-creativity: the design of human-computer co-creative systems, their evaluation, and the modelling of human-computer co-creative processes. These topics are approached from two perspectives: an interaction design perspective and a computational creativity perspective. The interaction design perspective provides practical methods for the design and evaluation of interactive systems as well as methodological frameworks for analysing design practices in the field. The computational creativity perspective, in turn, provides a theoretical view of the evaluation and modelling of human-computer co-creativity. The thesis itself consists of five papers.
This thesis starts with an analysis of the interaction design process for computational collaborators. The design process is examined through a review of case studies and a thorough description of the design process of the Poetry Machine system described in Paper I. The review shows that several researchers in the field have assumed a user-centered design approach, but some good design practices, including the reporting of design decisions, iterative design, and early testing with users, are not yet consistently followed.
After illustrating the general design process, this thesis examines different approaches to the evaluation of human-computer co-creativity. Two case studies are conducted to evaluate the usability of and user experiences with the Poetry Machine system. The first evaluations are described in Paper II. They produced useful feedback for developing the system further. The second evaluation, described in Papers III and IV, investigates specific metrics for evaluating the co-creative writing experience in more detail. To promote the accumulation of design knowledge, special care is taken to report practical issues related to evaluating co-creative systems. These include, for example, issues related to formulating suitable evaluation tasks.
Finally, the thesis considers modelling human-computer co-creativity. Paper V approaches modelling from a computationally creative perspective, by extending the creativity-as-a-search paradigm into co-creative systems. The new model highlights specific issues for interaction designers to be aware of when designing new computational collaborators.
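The creativity-as-a-search paradigm frames generation as search through a space of artifacts for ones that score well on both domain fit and aesthetic value. A toy single-agent version of that search loop (the value and neighbourhood functions are placeholders; Paper V's co-creative extension adds a second, human collaborator and is not reproduced here):

```python
import random

def search_for_artifact(seed, neighbours, value, steps=200, rng=None):
    """Hill-climb through an artifact space, keeping the best-valued artifact.

    `neighbours(artifact)` returns candidate variations and `value(artifact)`
    scores an artifact; both are placeholders standing in for real notions of
    domain fit and aesthetic quality."""
    rng = rng or random.Random(0)  # fixed seed for reproducible exploration
    best = seed
    for _ in range(steps):
        candidate = rng.choice(neighbours(best))
        if value(candidate) > value(best):  # keep only improvements
            best = candidate
    return best

# Toy artifact space: integers, with 10 as the "ideal" artifact.
search_for_artifact(0, lambda x: [x - 1, x + 1], lambda x: -abs(x - 10))
```

In a co-creative setting, the interesting design issues arise when the human's and the system's value functions disagree, which is the kind of conflict the extended model asks interaction designers to anticipate.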