Generating Reliable and Responsive Observational Evidence: Reducing Pre-analysis Bias
A growing body of evidence generated from observational data has demonstrated the potential to influence decision-making and improve patient outcomes. For observational evidence to be actionable, however, it must be generated reliably and in a timely manner. Large distributed observational data networks enable research on diverse patient populations at scale and support the development of sound new methods that improve the reproducibility and robustness of real-world evidence. Nevertheless, the problems of generalizability, portability, and scalability persist and compound. Because analytical methods only partially address bias, reliable observational research (especially in networks) must address bias at the design stage (i.e., pre-analysis bias), including the strategies for identifying patients of interest and defining comparators.
This thesis synthesizes and enumerates a set of challenges to addressing pre-analysis bias in observational studies and presents mixed-methods approaches and informatics solutions for overcoming a number of those obstacles. We develop frameworks, methods, and tools for scalable and reliable phenotyping, including data source granularity estimation, comprehensive concept set selection, index date specification, and structured-data-based patient review for phenotype evaluation. We then cover research on potential bias in the unexposed comparator definition, including systematic background rate estimation and interpretation, and the definition and evaluation of the unexposed comparator.
We propose that the use of standardized approaches and methods as described in this thesis not only improves the reliability but also increases the responsiveness of observational evidence. To test this hypothesis, we designed and piloted a Data Consult Service - a service that generates new on-demand evidence at the bedside. We demonstrate that it is feasible to generate reliable evidence addressing clinicians' information needs in a robust and timely fashion, and we provide our analysis of the current limitations and the future steps needed to scale such a service.
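Background rate estimation, one of the comparator-related methods the abstract mentions, reduces at its core to events per unit of person-time. A minimal sketch of that quantity, crude and stratified; the cohort, age strata, and all numbers are illustrative assumptions, not data from the thesis:

```python
# Illustrative only: crude and stratified background incidence rates
# (events per 1,000 person-years), the basic quantity behind systematic
# background rate estimation in observational cohorts.

def incidence_rate_per_1000(events: int, person_years: float) -> float:
    """Crude incidence rate per 1,000 person-years."""
    if person_years <= 0:
        raise ValueError("person-years must be positive")
    return 1000.0 * events / person_years

# Hypothetical cohort, stratified by age group: (events, person-years).
strata = {"18-44": (12, 6000.0), "45-64": (30, 5000.0), "65+": (42, 3000.0)}

for age, (events, py) in strata.items():
    print(f"{age}: {incidence_rate_per_1000(events, py):.1f} per 1,000 PY")
```

Stratifying before comparing, as in the sketch, is one design-stage guard against the pre-analysis bias the thesis targets: a crude rate pooled over age groups can mislead when the exposed and unexposed cohorts have different age mixes.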
An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users
Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of the technology. Today, machine learning methods such as Deep Neural Networks (DNNs) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new: in the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. This requires an understanding of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the systems' decisions.
This dissertation investigates the impact of different levels of interactive XAI, for white- and black-box AI systems, on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it examines how the content, type, and interface of explanations of DNNs (black box) and rule-based systems (white box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work regarding the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology.
Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation.
To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics.
Steps three to five deal with the design of XAI for concrete applications. Different levels of interactive XAI are presented and investigated in experiments with end-users, using two rule-based (i.e., white-box) systems and four DNN-based (i.e., black-box) systems.
These are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies, which differed in the interactivity of the XAI system used, were conducted for these purposes.
The results show that end-users' trust in and mental models of AI depend strongly on the context of use and on the design of the explanation itself. For example, explanations mediated by a virtual agent were shown to promote trust. The content and type of explanations are likewise perceived differently by users. The studies also show that end-users in different application contexts of XAI desire interactive explanations.
The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics for integrating explanations into everyday AI systems and thus enabling comprehensible handling of AI for all people.
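The white-box systems in the studies above are rule-based, and their defining property is that a prediction can be explained by the very rule that produced it. A minimal sketch of that property; the medical toy domain, rule texts, and thresholds are invented for illustration and are not the dissertation's systems:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str               # surfaced to the end-user as the explanation
    condition: Callable[[dict], bool]
    label: str

# Invented toy rule base -- NOT one of the dissertation's systems.
RULES = [
    Rule("temperature > 38 and cough present -> flu suspected",
         lambda x: x["temp"] > 38 and x["cough"], "flu suspected"),
    Rule("default -> no flu suspected",
         lambda x: True, "no flu suspected"),
]

def predict_with_explanation(features: dict) -> tuple[str, str]:
    """Return (label, explanation): the first rule that fires decides both."""
    for rule in RULES:
        if rule.condition(features):
            return rule.label, rule.description
    raise RuntimeError("no rule fired")

label, why = predict_with_explanation({"temp": 39.2, "cough": True})
print(label)   # flu suspected
print(why)
```

Because the explanation is the decision procedure itself rather than a post-hoc approximation, such systems sidestep the faithfulness questions that arise when explaining a DNN, which is precisely why the dissertation contrasts the two.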
Collected Papers (on Neutrosophics, Plithogenics, Hypersoft Set, Hypergraphs, and other topics), Volume X
This tenth volume of Collected Papers includes 86 papers in English and Spanish comprising 972 pages, written between 2014 and 2022 by the author alone or in collaboration with the following 105 co-authors (alphabetically ordered) from 26 countries: Abu Sufian, Ali Hassan, Ali Safaa Sadiq, Anirudha Ghosh, Assia Bakali, Atiqe Ur Rahman, Laura Bogdan, Willem K.M. Brauers, Erick González Caballero, Fausto Cavallaro, Gavrilă Calefariu, T. Chalapathi, Victor Christianto, Mihaela Colhon, Sergiu Boris Cononovici, Mamoni Dhar, Irfan Deli, Rebeca Escobar-Jara, Alexandru Gal, N. Gandotra, Sudipta Gayen, Vassilis C. Gerogiannis, Noel Batista Hernández, Hongnian Yu, Hongbo Wang, Mihaiela Iliescu, F. Nirmala Irudayam, Sripati Jha, Darjan Karabašević, T. Katican, Bakhtawar Ali Khan, Hina Khan, Volodymyr Krasnoholovets, R. Kiran Kumar, Manoranjan Kumar Singh, Ranjan Kumar, M. Lathamaheswari, Yasar Mahmood, Nivetha Martin, Adrian Mărgean, Octavian Melinte, Mingcong Deng, Marcel Migdalovici, Monika Moga, Sana Moin, Mohamed Abdel-Basset, Mohamed Elhoseny, Rehab Mohamed, Mohamed Talea, Kalyan Mondal, Muhammad Aslam, Muhammad Aslam Malik, Muhammad Ihsan, Muhammad Naveed Jafar, Muhammad Rayees Ahmad, Muhammad Saeed, Muhammad Saqlain, Muhammad Shabir, Mujahid Abbas, Mumtaz Ali, Radu I. Munteanu, Ghulam Murtaza, Munazza Naz, Tahsin Oner, Gabrijela Popović, Surapati Pramanik, R. Priya, S.P. Priyadharshini, Midha Qayyum, Quang-Thinh Bui, Shazia Rana, Akbara Rezaei, Jesús Estupiñán Ricardo, Rıdvan Sahin, Saeeda Mirvakili, Said Broumi, A. A. Salama, Flavius Aurelian Sârbu, Ganeshsree Selvachandran, Javid Shabbir, Shio Gai Quek, Son Hoang Le, Florentin Smarandache, Dragiša Stanujkić, S. Sudha, Taha Yasin Ozturk, Zaigham Tahir, The Houw Iong, Ayse Topal, Alptekin Ulutaş, Maikel Yelandi Leyva Vázquez, Rizha Vitania, Luige Vlădăreanu, Victor Vlădăreanu, Ștefan Vlăduțescu, J. Vimala, Dan Valeriu Voinea, Adem Yolcu, Yongfei Feng, Abd El-Nasser H. Zaied, Edmundas Kazimieras Zavadskas.
Modern Socio-Technical Perspectives on Privacy
This open access book provides researchers and professionals with a foundational understanding of online privacy as well as insight into the socio-technical privacy issues that are most pertinent to modern information systems, covering several modern topics (e.g., privacy in social media, IoT) and underexplored areas (e.g., privacy accessibility, privacy for vulnerable populations, cross-cultural privacy). The book is structured in four parts, which follow after an introduction to privacy on both a technical and social level: Privacy Theory and Methods covers a range of theoretical lenses through which one can view the concept of privacy. The chapters in this part relate to modern privacy phenomena, thus emphasizing its relevance to our digital, networked lives. Next, Domains covers a number of areas in which privacy concerns and implications are particularly salient, including among others social media, healthcare, smart cities, wearable IT, and trackers. The Audiences section then highlights audiences that have traditionally been ignored when creating privacy-preserving experiences: people from other (non-Western) cultures, people with accessibility needs, adolescents, and people who are underrepresented in terms of their race, class, gender or sexual identity, religion or some combination. Finally, the chapters in Moving Forward outline approaches to privacy that move beyond one-size-fits-all solutions, explore ethical considerations, and describe the regulatory landscape that governs privacy through laws and policies. Perhaps even more so than the other chapters in this book, these chapters are forward-looking by using current personalized, ethical and legal approaches as a starting point for re-conceptualizations of privacy to serve the modern technological landscape. 
The book's primary goal is to inform IT students, researchers, and professionals about both the fundamentals of online privacy and the issues that are most pertinent to modern information systems. Lecturers or teachers can assign (parts of) the book for a “professional issues” course. IT professionals may select chapters covering domains and audiences relevant to their field of work, as well as the Moving Forward chapters that cover ethical and legal aspects. Academics who are interested in studying privacy or privacy-related topics will find a broad introduction to both the technical and social aspects.
EG-ICE 2021 Workshop on Intelligent Computing in Engineering
The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, searching multi-dimensional solution spaces, accommodating uncertainty, incorporating specialist domain knowledge, performing sensor-data interpretation, and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.
Runtime Audit of Neural Sequence Models for NLP
Neural network sequence models have become a fundamental building block for natural language processing (NLP) applications. However, with the increasing performance and widespread adoption of these models, the social effects caused by errors in these models' outputs are also amplified. This thesis aims to mitigate such adverse effects by studying different methods that generate user-interpretable auxiliary signals along with model predictions, thus enabling efficient audits of the model output at runtime.
We will look at two different types of auxiliary signals, generated respectively for the input and the output of the model. The first type explains which input tokens are important for a certain prediction (Chapters 3 and 4), while the second estimates the quality of each output token (Chapters 5 and 6). For model explanations, our focus is to establish a comprehensive and quantitative evaluation framework, thus enabling a systematic comparison of different model explanation methods across a diverse set of architectures and configurations. For quality estimation, because a solid evaluation framework is already in place, we instead focus on improving the state of the art by introducing an end-task-oriented pre-training step based on a non-autoregressive neural machine translation architecture. Overall, we show that it is possible to generate high-quality auxiliary signals with little to no human supervision, and we provide guidance on best practices for future applications of these methods in NLP, such as conducting comprehensive quantitative evaluations of the auxiliary signals before deployment and selecting the evaluation metric that best suits the user's goal.
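One simple member of the first family of signals (input-token importance) is occlusion, or leave-one-out attribution: re-score the input with each token removed and report the score drop. A sketch under stated assumptions; `model_score` is a toy stand-in, not one of the neural models the thesis audits:

```python
# Toy stand-in "model": scores a sentence by its fraction of positive words.
# Both the scoring function and the word list are illustrative assumptions.
def model_score(tokens: list[str]) -> float:
    positive = {"good", "great", "excellent"}
    return sum(t in positive for t in tokens) / max(len(tokens), 1)

def token_importance(tokens: list[str]) -> list[tuple[str, float]]:
    """Leave-one-out attribution: score drop when each token is occluded."""
    base = model_score(tokens)
    return [(tokens[i], base - model_score(tokens[:i] + tokens[i + 1:]))
            for i in range(len(tokens))]

attributions = token_importance(["the", "movie", "was", "great"])
for tok, imp in attributions:
    print(f"{tok}: {imp:+.3f}")
```

Because the only requirement is the ability to re-score perturbed inputs, this method applies uniformly across architectures, which is what makes the thesis's cross-architecture comparison of explanation methods possible in the first place; its cost, one forward pass per token, is the price of that generality.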
Representation Challenges
Augmented Reality (AR) and Artificial Intelligence (AI) are technological domains that closely interact with space at the architectural and urban scale, in the broader ambits of cultural heritage and innovative design. Growing interest is perceptible in many fields of knowledge, supported by the rapid development and advancement of theory and application, software and devices, fueling a pervasive phenomenon within our daily lives. These technologies prove to be best exploited when their application and other information and communication technology (ICT) advancements achieve a continuum. In particular, AR defines an alternative path to observe, analyze and communicate space and artifacts, while AI opens future scenarios in data processing, redefining the relationship between human and computer. In the last few years, the expansion of AR/AI and the relationship between them have prompted deep transdisciplinary speculation, and research experiences have shown many cross-relations in the Architecture and Design domains. Representation studies can host an international debate as a place of convergence for multidisciplinary theoretical and applied contributions related to architecture, the city, the environment, and tangible and intangible Cultural Heritage. This book collects 66 papers and identifies eight lines of research that may guide future developments.
Electric Vehicle Efficient Power and Propulsion Systems
Vehicle electrification has been identified as one of the main technology trends of this second decade of the 21st century. Nearly 10% of global car sales in 2021 were electric, and this figure is projected to reach 50% by 2030 to reduce oil-import dependency and transport emissions in line with countries' climate goals. This book addresses the efficient power and propulsion systems that cover essential topics for research and development on EVs, HEVs and fuel cell electric vehicles (FCEVs), including: energy storage systems (battery, fuel cell, supercapacitors, and their hybrid systems); power electronics devices and converters; electric machine drive control, optimization, and design; and advanced energy-system management methods. Primarily intended for professionals and advanced students who are working on EV/HEV/FCEV power and propulsion systems, this edited book surveys state-of-the-art control/optimization techniques for the individual components as well as for the vehicle as a whole system. New readers may also find valuable information on the structure and methodologies in such an interdisciplinary field. Contributed by experienced authors from different research laboratories around the world, these 11 chapters provide balanced material, from theoretical background to methodologies and practical implementation, to deal with the various issues of this challenging technology. This reprint encourages researchers working in this field to stay current with the latest developments on electric vehicle efficient power and propulsion systems, for road and rail, both manned and unmanned vehicles.
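As a flavor of the energy-management material, Coulomb counting is the textbook baseline for battery state-of-charge (SoC) estimation: integrate the current drawn over time as a fraction of pack capacity. A hedged sketch with made-up pack parameters, not figures from the book:

```python
# Made-up pack parameters; positive current = discharge, in amps.
def update_soc(soc: float, current_a: float, dt_s: float,
               capacity_ah: float) -> float:
    """One Coulomb-counting step: subtract the charge moved this step,
    expressed as a fraction of pack capacity, and clamp to [0, 1]."""
    delta = current_a * dt_s / 3600.0 / capacity_ah
    return min(1.0, max(0.0, soc - delta))

soc = 0.8                                  # start at 80% state of charge
for _ in range(60):                        # one hour in 60 one-minute steps
    soc = update_soc(soc, current_a=25.0, dt_s=60.0, capacity_ah=50.0)
print(f"SoC after 1 h at 25 A: {soc:.2f}")   # ~0.30
```

In practice, open-loop integration like this drifts with current-sensor error and temperature, which is why the advanced management methods surveyed in such books typically fuse it with model-based observers such as Kalman filters.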
Human-Computer Interaction
In this book the reader will find a collection of 31 papers presenting different facets of Human-Computer Interaction: the results of research projects and experiments as well as new approaches to the design of user interfaces. The book is organized according to the following main topics, in sequential order: new interaction paradigms, multimodality, usability studies on several interaction mechanisms, human factors, universal design, and development methodologies and tools.