8 research outputs found

    Multimodal Accessibility of Documents


    Mobile Learning Content Authoring Tools (MLCATs): A Systematic Review

    Mobile learning is currently receiving a lot of attention within the education arena, particularly within electronic learning. This is attributed to increasing mobile penetration rates and the subsequent increases in university student enrolments. Mobile learning environments are supported by a number of crucial services, such as content creation, which require an authoring tool. The last decade has witnessed increased attention to tools for authoring mobile learning content for education, as can be seen from the vast number of conference and journal publications devoted to the topic. The goal of this paper is therefore to review the published work, suggest a new classification framework, and explore each of the classification features. The paper is based on a systematic review of mobile learning content authoring tools (MLCATs) from 2000 to 2009. The framework is developed along a number of dimensions, such as system type, development context, tools and technologies used, tool availability, ICTD relation, support for standards, learning-style support, media supported, and tool purpose. The paper provides a means for researchers to extract assertions and several important lessons for the choice and implementation of MLCATs.
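The classification dimensions the abstract lists can be pictured as a simple record type. A minimal sketch in Python follows; the field names and example values are only guesses at the paper's framework, not its actual schema, and "ExampleAuthor" is a hypothetical tool:

```python
from dataclasses import dataclass, field

@dataclass
class MLCATProfile:
    """One authoring tool classified along the review's dimensions.

    Field names are illustrative assumptions, not the paper's schema.
    """
    name: str
    system_type: str                 # e.g. "web-based", "desktop"
    development_context: str         # e.g. "academic", "commercial"
    technologies: list = field(default_factory=list)
    availability: str = "unknown"    # open source / proprietary
    ictd_related: bool = False       # relation to ICT-for-development
    standards: list = field(default_factory=list)       # e.g. SCORM, IMS
    learning_styles: list = field(default_factory=list)
    media: list = field(default_factory=list)           # text, audio, video
    purpose: str = ""

# Hypothetical tool classified along the dimensions above.
tool = MLCATProfile(
    name="ExampleAuthor",
    system_type="web-based",
    development_context="academic",
    technologies=["Java"],
    availability="open source",
    standards=["SCORM"],
    media=["text", "video"],
)
```

A record like this makes the review's comparison across tools mechanical: filtering by any dimension is a list comprehension over profiles.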

    An Evaluation Framework for Business Intelligence Visualization

    Nowadays, data visualization is becoming an essential part of data analysis. Business Intelligence Visualization (BIV) is a powerful tool that helps modern businesses operate faster and more smoothly than ever before. However, studies on BIV evaluation are severely lacking; most evaluation studies for BIV are guided by general principles of usability, which cover only limited aspects of customers' needs. The purpose of this research is to develop a framework that evaluates BIV, including the decision-making experience. First, we conducted a literature review to gain a good understanding of research progress in related fields, and established a conceptual framework. Second, we performed a user study that implemented this framework with a set of questionnaires to demonstrate how the framework can be used in real business. Our results showed that the framework can capture differences among different BIV designs from the users' standpoint. This can help in designing BIV and promote better decision-making in business affairs.

    Automated Learner Classification Through Interface Event Stream And Summary Statistics Analysis

    Reading comprehension is predominantly measured through multiple-choice examinations. Yet, as we discuss in this thesis, such exams are often criticized for their inaccuracies. With the advent of big data and the rise of Intelligent Tutoring Systems (ITS), increasing focus will be placed on finding dynamic, automated ways of measuring students' aptitude and progress. This work takes a first step towards automated learner classification based on the application of graphic organizers. We address the following specific problem experimentally: How effectively can we measure task comprehension via human translation of written text into a visual representation on a computer? Can an algorithm employ data from user interface (UI) interaction during the problem-solving process to classify the user's abilities? Specifically, from the data we show machine learning predictions of what a human expert would say about: 1. the integrity of the visual representation produced; 2. the level of logical problem-solving strategy the user applies to the exercise; 3. the level of effort the user gives to the exercise. The core of the experiment is a software system that allows a human subject to read a preselected text and then draw a diagram by manipulating icons on a grid canvas using standard transforms.
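As a rough illustration of classifying a learner from UI interaction data, the sketch below reduces a hypothetical event stream to summary statistics and assigns a label with a nearest-centroid rule. The features, labels, and centroid values are all invented for illustration; the thesis' actual event schema and trained models will differ:

```python
from collections import Counter
from statistics import mean

def summarize(events):
    """Reduce a raw UI event stream of (event_type, timestamp) pairs
    to summary-statistic features: total event count, number of
    distinct event types, and mean inter-event gap in seconds."""
    times = [t for _, t in events]
    gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
    return (len(events), len(Counter(e for e, _ in events)), mean(gaps))

def nearest_centroid(features, centroids):
    """Assign the label whose centroid is closest in squared
    Euclidean distance; a stand-in for whatever classifier the
    thesis actually trains on expert-rated data."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

# Toy centroids that expert effort ratings might induce (invented numbers):
# many events, varied types, short gaps -> high effort, and vice versa.
centroids = {
    "high_effort": (40.0, 5.0, 2.0),
    "low_effort": (8.0, 2.0, 10.0),
}
events = [("drag", 0.0), ("drop", 1.5), ("drag", 3.0), ("drop", 4.0)]
label = nearest_centroid(summarize(events), centroids)  # -> "low_effort"
```

The point of the sketch is the pipeline shape: event stream in, fixed-length feature vector out, then an ordinary classifier over expert-labelled examples.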

    User-created personas: a four-case multi-ethnic study of persona artefact co-design in pastoral and urban Namibia with ovaHerero, Ovambo, ovaHimba and San communities

    A persona is an artefact widely used in technology design to aid communication between designers, users and other stakeholders involved in projects. The persona originated in the Global North as an interpretative portrayal of a group of users with commonalities. Personas lack empirical research in the Global South, while the projects that do appear in the literature are often framed under the philosophy of User-Centred Design, indicating that they are anchored in Western epistemologies. This thesis postulates that persona depictions are expected to differ across locales, and that studying differences and similarities in such representations is imperative to avoid misrepresentations that can in turn lead to designerly miscommunications and, ultimately, to unsuitable technology designs. The importance of this problem is demonstrated through four exploratory case studies on persona artefacts co-designed with communities from four Namibian ethnicities, namely ovaHerero, ovaHimba, Ovambo and San. Findings reveal diverse self-representations whereby the results for each ethnicity materialise in different ways, recounts and storylines: romanticised persona archetypes versus reality with ovaHerero; collective persona representations with ovaHimba; individualised personas with Ovambo, although embedded in narratives of collectivism and interrelatedness with other personas; and renderings of two contradictory personas of their selves with a group of San youth, depending on whether they are on their own (i.e. inspiring and aspirational) or mixed with other ethnic groups (i.e. ostracised). 
This thesis advocates for User-Created Personas (UCP) as a potentially valid tactic and methodology for iteratively pursuing conceptualisations of persona artefacts that are capable of communicating localised nuances critical to designing useful and adequate technologies across locales. Methodologies to enable laypeople to co-design persona self-representations, together with the results and appraisals provided, are this thesis' main contribution to knowledge.

    Gesture Based Interface for Asynchronous Video Communication for Deaf People in South Africa

    The preferred method of communication amongst Deaf people is sign language. There are problems with video quality when using the real-time video communication available on mobile phones. The alternative is text-based communication on mobile phones; however, findings from other research studies show that Deaf people prefer using sign language to communicate with each other rather than text. This dissertation looks at implementing a gesture-based interface for asynchronous video communication for Deaf people. The gesture interface was implemented on a store-and-forward video architecture, since this preserves video quality even when bandwidth is low. Three gesture-based video communication prototypes were designed and implemented using a user-centred design approach, on both computers and mobile devices. The first prototype was computer-based, and its evaluation showed that the gesture-based interface improved the usability of sign language video communication. The second prototype ran on mobile devices; it was tested on several handsets, but device limitations made it impossible to support all the features needed for the video communication. The different problems experienced on dissimilar devices made implementing the prototypes on the mobile platform challenging, and the prototype was revised several times before it was tested on a different mobile phone. The final prototype used both a mobile phone and a computer. The computer served to simulate a mobile device with greater processing power, standing in for a more powerful future mobile device capable of running the gesture-based interface. The computer was used for video processing, but to the user it appeared as if the whole system was running on the mobile phone. 
The evaluation was conducted with ten Deaf users to determine the efficiency and usability of the prototype. The results showed that the majority of the users were satisfied with the quality of the video communication. The evaluation also revealed usability problems, but the benefits of communicating in sign language outweighed the usability difficulties. Furthermore, the users were more interested in the video communication on mobile devices than on the computer, as the phone was a much more familiar technology and offered the convenience of mobility.
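The store-and-forward idea behind this dissertation can be sketched as a minimal message queue: each recorded sign-language clip is stored whole and delivered when the link allows, so low bandwidth costs latency rather than frame quality. A sketch, with all class and method names invented for illustration:

```python
import queue

class StoreAndForwardChannel:
    """Minimal sketch of a store-and-forward video channel.

    Clips are stored at full quality on the sender's side and
    forwarded in order once the recipient polls; nothing is
    down-sampled to fit the link, unlike real-time streaming.
    """

    def __init__(self):
        self._store = queue.Queue()

    def record(self, video_bytes):
        # Store the complete, full-quality clip.
        self._store.put(video_bytes)

    def forward(self):
        # Deliver all stored clips in recording order.
        delivered = []
        while not self._store.empty():
            delivered.append(self._store.get())
        return delivered

channel = StoreAndForwardChannel()
channel.record(b"clip-1-full-quality")
channel.record(b"clip-2-full-quality")
received = channel.forward()
```

The design trade-off the abstract describes falls out directly: the conversation becomes asynchronous, but every delivered clip is the clip that was recorded.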

    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of the technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans; they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. For this, an understanding is needed of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are affected by the systems' decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it examines how the content, type, and interface of explanations of DNN (black box) and rule-based systems (white box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is the centre of the investigation. 
At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work on the design and investigation of XAI for users is presented. This serves as the basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of the dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented and, based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes towards and expectations of AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users. For this purpose, two rule-based systems (white box) and four systems based on DNN (black box) are used. These are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies were conducted, differing in the interactivity of the XAI system used. The results show that end-users' trust in AI and their mental models of it depend strongly on the context of use and on the design of the explanation itself. 
For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies further show that end-users in different application contexts of XAI desire interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics for integrating explanations into everyday AI systems and thus enabling the comprehensible handling of AI for all people.