60,154 research outputs found

    Identifying and addressing adaptability and information system requirements for tactical management


    Best Practices for Evaluating Flight Deck Interfaces for Transport Category Aircraft with Particular Relevance to Issues of Attention, Awareness, and Understanding CAST SE-210 Output 2 Report 6 of 6

    Attention, awareness, and understanding of the flight crew are critical contributors to safety, and the flight deck plays a critical role in supporting these cognitive functions. Changes to the flight deck need to be evaluated for whether the changed device provides adequate support for these functions. This report describes a set of diverse evaluation methods. It recommends designing the interface evaluation to span the phases of device development, from early to late, and provides methods appropriate to each phase. It describes the various ways in which an interface or interface component can fail to support awareness, framing these as potential issues to be assessed in evaluation, and it summarizes the methods appropriate for evaluating each issue throughout the phases of development.

    Designing attention-aware business intelligence and analytics dashboards

    The design of a user interface is known to influence users' attention while they interact with applications such as Business Intelligence and Analytics (BI&A) dashboards. BI&A dashboards are considered critical because they present highly compressed information that managers have only little time to process. Managers therefore need to manage their visual attention carefully, given the risks of inattentional blindness and change blindness. We propose to investigate the design of BI&A dashboards that are sensitive to the users' attention. Such attention-aware BI&A dashboards are of utmost importance in the field of BI&A systems, since attention is known to play a major role in decision making. We motivate our research project and present the initial design of attention-aware BI&A dashboards. In particular, the inclusion of eye-tracking technology is an important aspect of our proposed design, as sketched below.
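
    A minimal sketch of one building block such a design might rely on: a dispersion-based fixation detector (I-DT) over raw gaze samples, the kind of primitive an attention-aware dashboard could use to infer which element currently holds the user's visual attention. This is an illustration under our own assumptions, not the paper's implementation; all names and thresholds are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Fixation:
        x: float            # centroid x in screen pixels
        y: float            # centroid y in screen pixels
        duration_ms: float  # how long the gaze dwelled here

    def detect_fixations(samples, max_dispersion=35.0, min_duration_ms=100.0):
        """samples: time-ordered (timestamp_ms, x, y) gaze points."""
        fixations = []

        def flush(win):
            # Keep a window as a fixation only if the gaze dwelled long enough.
            if win and win[-1][0] - win[0][0] >= min_duration_ms:
                fixations.append(Fixation(
                    x=sum(p[1] for p in win) / len(win),
                    y=sum(p[2] for p in win) / len(win),
                    duration_ms=win[-1][0] - win[0][0]))

        window = []
        for t, x, y in samples:
            window.append((t, x, y))
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            # I-DT dispersion: (max_x - min_x) + (max_y - min_y).
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                flush(window[:-1])    # close the fixation ending before this sample
                window = [(t, x, y)]  # the outlier starts a new candidate window
        flush(window)
        return fixations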

    Opinion mining and sentiment analysis in marketing communications: a science mapping analysis in Web of Science (1998–2018)

    Opinion mining and sentiment analysis have become ubiquitous in our society, with applications in online search, computer vision, image understanding, artificial intelligence and marketing communications (MarCom). Within this context, opinion mining and sentiment analysis in marketing communications (OMSAMC) plays a strong role in the development of the field by allowing us to understand whether people are satisfied or dissatisfied with a service or product, and subsequently to analyze the strengths and weaknesses of those consumer experiences. To the best of our knowledge, there is no science mapping analysis covering the research on opinion mining and sentiment analysis in the MarCom ecosystem. In this study, we perform a science mapping analysis of OMSAMC research in order to provide an overview of the scientific work in this interdisciplinary area during the last two decades and to show trends that could be the basis for future developments in the field. The study was carried out using VOSviewer, CitNetExplorer and InCites, based on results from Web of Science (WoS). The results of this analysis show the evolution of the field by highlighting the most notable authors, institutions, keywords, publications, countries, categories and journals. The research was funded by Programa Operativo FEDER Andalucía 2014‐2020, grant number "La reputación de las organizaciones en una sociedad digital. Elaboración de una Plataforma Inteligente para la Localización, Identificación y Clasificación de Influenciadores en los Medios Sociales Digitales (UMA18‐FEDERJA‐148)"; the APC was funded by the same research grant.
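
    As a hedged illustration of the kind of computation behind such science-mapping tools (not the authors' pipeline): VOSviewer-style keyword maps start from a keyword co-occurrence count over bibliographic records. The sketch below assumes Web of Science-style records where the "DE" field holds semicolon-separated author keywords; the records shown are invented examples.

    from collections import Counter
    from itertools import combinations

    def keyword_cooccurrence(records, field="DE"):
        """records: iterable of dicts; `field` holds ';'-separated keywords."""
        pairs = Counter()
        for rec in records:
            kws = sorted({k.strip().lower()
                          for k in rec.get(field, "").split(";") if k.strip()})
            pairs.update(combinations(kws, 2))  # each unordered pair once per record
        return pairs

    # Invented toy records, only to show the shape of the computation:
    records = [
        {"DE": "sentiment analysis; opinion mining; marketing communications"},
        {"DE": "opinion mining; sentiment analysis; social media"},
    ]
    for (a, b), n in keyword_cooccurrence(records).most_common(3):
        print(f"{a} -- {b}: {n}")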

    Designing Attention-aware Business Intelligence and Analytics Dashboards to Support Task Resumption

    External interruptions are a common phenomenon in today's working environments. In particular, attentional shifts lead to task resumption failures, i.e., the improper resumption of a primary task after an interruption, which negatively influences the individual performance of employees. Business Intelligence & Analytics (BI&A) systems are well recognized as an essential means to support employees' decision making. One important and frequently used BI&A system component is the dashboard. BI&A dashboards collect, summarize, and present business information from different sources to decision makers. When working with BI&A dashboards, interruptions and the resulting task resumption failures have negative consequences for decision-making processes. This research-in-progress paper addresses this problem and provides design knowledge for attention-aware BI&A dashboards that support users during task resumption. We follow a Design Science Research (DSR) approach and derive theory-grounded design principles for task resumption support on BI&A dashboards. Moreover, to evaluate the suggested principles, an instantiation is realized. In our instantiation, real-time tracking of eye-movement data is used to capture users' visual attention and to provide visual feedback after task resumption, as sketched below. We introduce testable hypotheses and present preliminary results of a pre-test lab experiment.
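
    One plausible reading of such an instantiation, sketched under our own assumptions (this is not the authors' code): a monitor remembers the last dashboard widget the user fixated before gaze left the screen and, once gaze returns after a sufficiently long absence, nominates that widget for visual highlighting. Widget geometry, thresholds, and names are hypothetical.

    import time

    class ResumptionMonitor:
        """Tracks gaze over dashboard widgets and flags where to resume."""

        def __init__(self, widgets, away_threshold_s=2.0):
            self.widgets = widgets                  # {name: (x0, y0, x1, y1)} rects
            self.away_threshold_s = away_threshold_s
            self.last_widget = None                 # last fixated widget before interruption
            self.away_since = None                  # when gaze left the screen, if it did

        def _widget_at(self, x, y):
            for name, (x0, y0, x1, y1) in self.widgets.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    return name
            return None

        def on_gaze(self, x, y, on_screen=True):
            """Feed one gaze sample; returns a widget name to highlight, or None."""
            now = time.monotonic()
            if not on_screen:
                if self.away_since is None:
                    self.away_since = now           # interruption begins
                return None
            resumed = (self.away_since is not None
                       and now - self.away_since >= self.away_threshold_s)
            self.away_since = None
            if resumed and self.last_widget:
                return self.last_widget             # cue: highlight interrupted widget
            hit = self._widget_at(x, y)
            if hit:
                self.last_widget = hit
            return None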

    Visual Representation of Explainable Artificial Intelligence Methods: Design and Empirical Studies

    Explainability is increasingly considered a critical component of artificial intelligence (AI) systems, especially in high-stakes domains where AI systems' decisions can significantly impact individuals. As a result, there has been a surge of interest in explainable artificial intelligence (XAI), which aims to increase the transparency of AI systems by explaining their decisions to end-users. In particular, extensive research has focused on developing "local model-agnostic" explanation methods that generate explanations of individual predictions for any predictive model. While these explanations can support end-users in the use of AI systems through increased transparency, three significant challenges have hindered their design, implementation, and large-scale adoption in real applications. First, there is a lack of understanding of how end-users evaluate explanations; many critics argue that explanations are based on researchers' intuition instead of end-users' needs, and there is insufficient evidence on whether end-users understand these explanations or trust XAI systems. Second, it is unclear what effect explanations have on trust when they disclose biases in AI systems' decisions. Prior research investigating biased decisions has found conflicting evidence: explanations can either increase trust through perceived transparency or decrease trust as end-users come to perceive the system as biased, and it is unclear how contingency factors influence these opposing effects. Third, most XAI methods deliver static explanations that offer end-users limited information, resulting in an insufficient understanding of how AI systems make decisions and, in turn, lower trust; end-users also perceive static explanations as not transparent enough, since they do not allow investigating the factors that influence a given decision.

    This dissertation addresses these challenges across three studies, focusing on the overarching research question of how to design visual representations of local model-agnostic XAI methods to increase end-users' understanding and trust. The first challenge is addressed through an iterative design process that refines the representations of explanations from four well-established model-agnostic XAI methods, followed by an evaluation with end-users using eye-tracking technology and interviews. The second challenge is addressed by a study that takes a psychological contract violation (PCV) theory and social identity theory perspective to investigate the contingency factors behind the opposing effects of explanations on end-users' trust; specifically, it examines how end-users evaluate explanations of a gender-biased AI system while controlling for their awareness of gender discrimination in society. The third challenge is addressed through a design science research project that designs an interactive XAI system intended to increase end-users' understanding and trust.

    This dissertation makes several contributions to the ongoing research on improving the transparency of AI systems by explicitly emphasizing the end-user perspective on XAI. It contributes to practice by providing insights that help improve the design of explanations of AI systems' decisions. It also makes theoretical contributions by contextualizing PCV theory to gender-biased XAI systems and identifying the contingency factors that determine whether end-users experience a PCV, and it provides insights into how end-users cognitively evaluate explanations, extending the current understanding of the impact of explanations on trust. Finally, it contributes to the design knowledge of XAI systems by proposing guidelines for interactive XAI systems that give end-users more control over the information they receive, helping them better understand how AI systems make decisions.
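
    To make "local model-agnostic" concrete, here is a minimal from-scratch sketch of one well-known representative of this family, a LIME-style local surrogate (the dissertation studies four such established methods; this code illustrates the family, it is not the dissertation's method): perturb the instance, weight the perturbations by proximity, and read local feature effects off a weighted linear model.

    import numpy as np
    from sklearn.linear_model import Ridge

    def local_explanation(predict_fn, x, n_samples=500, scale=0.3, seed=0):
        """predict_fn maps an (n, d) array to class probabilities in [0, 1];
        returns one coefficient per feature: its local effect around x."""
        rng = np.random.default_rng(seed)
        X = x + rng.normal(0.0, scale, size=(n_samples, x.size))    # perturb around x
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * scale**2))  # proximity kernel
        surrogate = Ridge(alpha=1.0)               # interpretable linear stand-in
        surrogate.fit(X, predict_fn(X), sample_weight=w)
        return surrogate.coef_

    # Toy black-box model (assumed, for illustration only): a logistic scorer.
    black_box = lambda X: 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))
    print(local_explanation(black_box, np.array([0.5, -0.2])))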