29 research outputs found

    Geographic information extraction from texts

    A large volume of unstructured texts containing valuable geographic information is available online. This information, provided implicitly or explicitly, is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although considerable progress has been made in geographic information extraction from texts, unsolved challenges and issues remain, ranging from methods, systems, and data to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, but also to identify research gaps in geographic information extraction.
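    To make the task concrete, here is a minimal, hedged Python sketch of the two classic steps, toponym recognition and toponym resolution. It illustrates the kind of extraction the workshop addresses rather than any method from the workshop itself; the spaCy model name and the Nominatim user agent are assumptions.

```python
# Minimal geoparsing sketch: recognize place names, then geocode them.
# Assumes spaCy, geopy, and the "en_core_web_sm" model are installed.
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")                        # pretrained pipeline with NER
geolocator = Nominatim(user_agent="geo-extraction-demo")  # hypothetical user agent

text = "The 12th GIScience conference was held in Leeds, UK, not far from York."

# Step 1: toponym recognition - keep entities tagged as geopolitical entities or locations.
doc = nlp(text)
toponyms = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]

# Step 2: toponym resolution - map each recognized place name to coordinates.
for name in toponyms:
    place = geolocator.geocode(name)
    if place is not None:
        print(f"{name}: ({place.latitude:.3f}, {place.longitude:.3f})")
```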

    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. This requires an understanding of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the system's decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it is examined how the content, type, and interface of explanations of DNNs (black box) and rule-based systems (white box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work on the design and investigation of XAI for users is presented. This serves as the basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users. For this purpose, two rule-based systems (i.e., white box) and four systems based on DNNs (i.e., black box) are used.
    These are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies were conducted for this purpose, which differed in the interactivity of the XAI system used. The results show that end-users' trust in and mental models of AI depend strongly on the context of use and on the design of the explanation itself. For example, explanations mediated by a virtual agent were shown to promote trust. The content and type of explanations are also perceived differently by users. The studies also show that end-users in different application contexts of XAI desire interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics for integrating explanations into everyday AI systems and thus enabling the comprehensible handling of AI for all people.
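    To illustrate the white-box notion used above (a toy sketch, not one of the systems evaluated in the dissertation), a rule-based classifier can return the rule it fired as a human-readable explanation, which is exactly the kind of intrinsic explanation a DNN cannot provide on its own; the triage rules below are invented for illustration only.

```python
# Illustrative white-box classifier: the fired rule doubles as the explanation.
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Rule:
    description: str                   # shown to the end-user as the explanation
    condition: Callable[[dict], bool]  # predicate over the input features
    decision: str

RULES = [
    Rule("Temperature above 39 °C indicates urgent care.",
         lambda x: x["temperature"] > 39.0, "urgent"),
    Rule("Mild symptoms and normal temperature indicate self-care.",
         lambda x: x["temperature"] <= 37.5 and x["symptom_severity"] == "mild", "self-care"),
]

def classify(features: dict) -> Tuple[str, str]:
    """Return (decision, explanation); the explanation is the matching rule itself."""
    for rule in RULES:
        if rule.condition(features):
            return rule.decision, rule.description
    return "consult a doctor", "No rule matched, so the default recommendation applies."

decision, explanation = classify({"temperature": 39.4, "symptom_severity": "severe"})
print(decision, "-", explanation)
```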

    LIPIcs, Volume 277, GIScience 2023, Complete Volume

    LIPIcs, Volume 277, GIScience 2023, Complete Volume

    12th International Conference on Geographic Information Science: GIScience 2023, September 12–15, 2023, Leeds, UK

    No abstract available

    Energy Data Analytics for Smart Meter Data

    The principal advantage of smart electricity meters is their ability to transfer digitized electricity consumption data to remote processing systems. The data collected by these devices make the realization of many novel use cases possible, providing benefits to electricity providers and customers alike. This book includes 14 research articles that explore and exploit the information content of smart meter data, and provides insights into the realization of new digital solutions and services that support the transition towards a sustainable energy system. This volume has been edited by Andreas Reinhardt, head of the Energy Informatics research group at Technische Universität Clausthal, Germany, and Lucas Pereira, research fellow at Técnico Lisboa, Portugal
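    As a small, self-contained illustration of the kind of analysis such data enables (not an example taken from the book's chapters), the pandas sketch below derives an average daily load profile from synthetic 15-minute smart meter readings; the sampling interval and the synthetic consumption values are assumptions.

```python
# Average daily load profile from 15-minute smart meter readings (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
index = pd.date_range("2023-01-01", periods=4 * 24 * 28, freq="15min")  # four weeks of readings
consumption_kwh = rng.gamma(shape=2.0, scale=0.05, size=len(index))     # stand-in meter values

meter = pd.Series(consumption_kwh, index=index, name="consumption_kwh")

# Aggregate to hourly energy use, then average over the hour of day - a typical
# use case for load profiling and tariff design.
hourly = meter.resample("1h").sum()
daily_profile = hourly.groupby(hourly.index.hour).mean()

print(daily_profile.round(3))  # average kWh consumed in each hour of the day
```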

    Artificial Intelligence for Online Review Platforms - Data Understanding, Enhanced Approaches and Explanations in Recommender Systems and Aspect-based Sentiment Analysis

    Epoch-making and ever faster technological progress provokes disruptive changes and poses pivotal challenges for individuals and organizations. In particular, artificial intelligence (AI) is a disruptive technology that offers tremendous potential for many fields, such as information systems and electronic commerce. This dissertation therefore contributes to AI for online review platforms, aiming to unveil the potential of AI for consumers, businesses, and platforms. To achieve this goal, the dissertation investigates six major research questions embedded in the triad of data understanding of online consumer reviews, enhanced approaches and explanations in recommender systems and aspect-based sentiment analysis.
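    To give a flavour of one pillar of that triad (a toy lexicon-based sketch, not the enhanced approaches developed in the dissertation), aspect-based sentiment analysis assigns a polarity to each aspect mentioned in a review rather than to the review as a whole; the aspect terms and the sentiment lexicon below are assumptions.

```python
# Toy aspect-based sentiment analysis: score each aspect by nearby opinion words.
ASPECTS = {"battery", "screen", "price", "camera"}
LEXICON = {"great": 1, "excellent": 1, "cheap": 1, "poor": -1, "terrible": -1, "overpriced": -1}

def aspect_sentiments(review: str, window: int = 2) -> dict:
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    scores = {}
    for i, token in enumerate(tokens):
        if token in ASPECTS:
            # Sum opinion-word polarities within a small window around the aspect term.
            context = tokens[max(0, i - window): i + window + 1]
            scores[token] = sum(LEXICON.get(w, 0) for w in context)
    return scores

print(aspect_sentiments("The battery is excellent but the screen looks poor."))
# -> {'battery': 1, 'screen': -1}
```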

    The Challenges of Big Data - Contributions in the Field of Data Quality and Artificial Intelligence Applications

    The term "big data" has been characterized by challenges regarding data volume, velocity, variety, and veracity. Solving these challenges requires research effort that fits the needs of big data. Therefore, this cumulative dissertation contains five papers aiming at developing and applying AI approaches within the field of big data as well as at managing data quality in big data.

    4th. International Conference on Advanced Research Methods and Analytics (CARMA 2022)

    Research methods in economics and the social sciences are evolving with the increasing availability of Internet and Big Data sources of information. As these sources, methods, and applications become more interdisciplinary, the 4th International Conference on Advanced Research Methods and Analytics (CARMA) is a forum for researchers and practitioners to exchange ideas and advances on how emerging research methods and sources are applied to different fields of social sciences, as well as to discuss current and future challenges. Due to the COVID-19 pandemic, CARMA 2022 is planned as a simultaneously virtual and face-to-face conference.
    Doménech I De Soria, J.; Vicente Cuervo, MR. (2022). 4th. International Conference on Advanced Research Methods and Analytics (CARMA 2022). Editorial Universitat Politècnica de València. https://doi.org/10.4995/CARMA2022.2022.1595

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. Thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. Programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model in theoretical and practical dimensions and domains is also a result in itself. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
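    A compressed sketch of the model-driven idea described above (the classes and fields are invented for illustration and are not the thesis's actual meta-model): dashboard components are captured as data, a concrete model is an instance of those classes, and a generator transforms the instance into code, here simplified HTML.

```python
# Minimal model-driven generation sketch: a dashboard model instance is turned into HTML.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Visualization:
    title: str
    chart_type: str        # e.g. "bar", "line", "map"
    data_source: str       # identifier of the dataset to bind at runtime

@dataclass
class Dashboard:
    name: str
    visualizations: List[Visualization] = field(default_factory=list)

def generate_html(model: Dashboard) -> str:
    """Transform a dashboard model instance into a (very simplified) HTML artifact."""
    panels = "\n".join(
        f'  <section class="panel" data-chart="{v.chart_type}" data-source="{v.data_source}">'
        f"<h2>{v.title}</h2></section>"
        for v in model.visualizations
    )
    return f'<main id="{model.name}">\n{panels}\n</main>'

phd_dashboard = Dashboard(
    name="phd-programme",
    visualizations=[
        Visualization("Theses per year", "bar", "theses_by_year"),
        Visualization("Supervision network", "map", "supervision_graph"),
    ],
)
print(generate_html(phd_dashboard))
```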

    Designing Data Spaces

    This open access book provides a comprehensive view on data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, “Foundations and Contexts”, provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, “Data Space Technologies”, subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various “Use Cases and Data Ecosystems” from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV offers an overview of several “Solutions and Applications”, including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay, and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces that embrace trust and data sovereignty.