57 research outputs found

    Designing Attentive Information Dashboards with Eye Tracking Technology


    Designing Attention-aware Business Intelligence and Analytics Dashboards to Support Task Resumption

    External interruptions are a common phenomenon in today’s working environments. Attentional shifts caused by such interruptions lead to task resumption failures, i.e., improperly resuming a primary task after an interruption, which negatively influence employees’ individual performance. Business Intelligence & Analytics (BI&A) systems are well recognized as essential support for employees’ decision making, and dashboards are one of their most important and frequently used components: BI&A dashboards collect, summarize, and present business information from different sources to decision makers. When working with BI&A dashboards, interruptions and the resulting task resumption failures have negative consequences for decision-making processes. This research-in-progress paper addresses this problem and provides design knowledge for attention-aware BI&A dashboards that support users during task resumption. We follow a Design Science Research (DSR) approach and derive theory-grounded design principles for task resumption support on BI&A dashboards. To evaluate the suggested principles, we realize an instantiation that tracks eye-movement data in real time to capture users’ visual attention and provide visual feedback after task resumption. We introduce testable hypotheses and present preliminary results of a pre-test lab experiment.
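    The paper does not include its implementation, but the mechanism it describes (remember where the user last looked, then cue that location after an interruption) can be illustrated with a minimal sketch. Everything below is assumed for illustration: the gaze-sample format, the region names, and the 5-second interruption threshold are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): remember the last-fixated
# dashboard region before an interruption and highlight it on resumption.
# Region names, thresholds, and the gaze-sample format are assumptions.
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float          # screen coordinates in pixels
    y: float
    timestamp: float  # seconds

# Hypothetical dashboard layout: region name -> bounding box (x0, y0, x1, y1)
REGIONS = {
    "kpi_panel":   (0,   0,   400, 300),
    "trend_chart": (400, 0,   900, 300),
    "data_table":  (0,   300, 900, 700),
}

INTERRUPTION_GAP_S = 5.0  # no gaze data for this long => assume an interruption

def region_of(sample: GazeSample) -> str | None:
    """Map a gaze sample to the dashboard region it falls into, if any."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= sample.x < x1 and y0 <= sample.y < y1:
            return name
    return None

class ResumptionSupport:
    """Remembers the last attended region and flags it after an interruption."""

    def __init__(self) -> None:
        self.last_region: str | None = None
        self.last_seen: float | None = None

    def on_gaze(self, sample: GazeSample) -> str | None:
        """Return a region to highlight when the user resumes after a gap."""
        highlight = None
        if self.last_seen is not None and sample.timestamp - self.last_seen > INTERRUPTION_GAP_S:
            highlight = self.last_region  # visual cue: where the user left off
        region = region_of(sample)
        if region is not None:
            self.last_region = region
            self.last_seen = sample.timestamp
        return highlight
```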

    Designing Attentive Information Dashboards

    Information dashboards are a critical capability in contemporary business intelligence and analytics systems. Despite their strong potential to support better decision making, the massive amount of information they provide challenges users performing data exploration tasks. Accordingly, dashboard users face difficulties in managing their limited attentional resources when processing the information presented on dashboards. Studies have also shown that the amount of concentrated time humans can spend on a task has decreased dramatically in recent years; thus, there is a need to design user interfaces that support users’ attention management. In this design science research project, we propose attentive information dashboards that provide individualized visual attention feedback (VAF) as an innovative artifact to solve this problem. We articulate theoretically grounded design principles and instantiate a software artifact that leverages users’ eye-movement data in real time to provide individualized VAF. We evaluated the instantiated artifact in a controlled lab experiment with 92 participants. The results from analyzing users’ eye movements after receiving individualized VAF reveal that our proposed design has a positive effect on users’ attentional resource allocation, attention shift rate, and attentional resource management. We contribute a system architecture for attentive information dashboards that support data exploration and two theoretically grounded design principles that provide prescriptive knowledge on how to provide individualized VAF. Practitioners can leverage this prescriptive knowledge to design innovative systems that support users’ information processing by managing their limited attentional resources.
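    As a rough illustration of individualized VAF (not the published artifact), the sketch below aggregates fixation time per dashboard element and flags elements that received little attention, which a dashboard could then highlight as feedback. The element names, the fixation format, and the 10% threshold are assumptions.

```python
# Minimal sketch, assuming fixations arrive as (element_id, duration_ms) pairs:
# aggregate dwell time per dashboard element and flag under-attended elements
# so the UI can render visual attention feedback (VAF).
from collections import defaultdict

def attention_feedback(fixations, min_share=0.10):
    """Return (dwell time per element, elements below `min_share` of total dwell)."""
    dwell = defaultdict(float)
    for element_id, duration_ms in fixations:
        dwell[element_id] += duration_ms

    total = sum(dwell.values()) or 1.0
    under_attended = {e for e, d in dwell.items() if d / total < min_share}
    return dict(dwell), under_attended

# Example with made-up fixation data:
fixations = [("kpi_panel", 1200.0), ("trend_chart", 4300.0),
             ("trend_chart", 2100.0), ("data_table", 300.0)]
dwell, neglected = attention_feedback(fixations)
print(dwell)      # {'kpi_panel': 1200.0, 'trend_chart': 6400.0, 'data_table': 300.0}
print(neglected)  # {'data_table'}
```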

    Rethinking Productivity in Software Engineering

    Get the most out of this foundational reference and improve the productivity of your software teams. This open access book collects the wisdom of the 2017 Dagstuhl seminar on productivity in software engineering, a meeting of community leaders who came together with the goal of rethinking traditional definitions and measures of productivity. The result of their work, Rethinking Productivity in Software Engineering, includes chapters covering definitions and core concepts related to productivity, guidelines for measuring productivity in specific contexts, best practices and pitfalls, and theories and open questions on productivity. You'll benefit from the many short chapters, each offering a focused discussion on one aspect of productivity in software engineering. Readers in many fields and industries will benefit from the collected work: developers wanting to improve their personal productivity will learn effective strategies for overcoming common issues that interfere with progress; organizations thinking about building internal programs for measuring the productivity of programmers and teams will learn best practices from industry and research; and researchers can leverage the conceptual frameworks and rich body of literature in the book to effectively pursue new research directions. What you'll learn: review the definitions and dimensions of software productivity; see how time management can have the opposite of its intended effect; develop valuable dashboards; understand the impact of sensors on productivity; avoid software development waste; work with human-centered methods to measure productivity; look at the intersection of neuroscience and productivity; and manage interruptions and context-switching. Who this book is for: industry developers and those responsible for seminar-style courses that include a segment on software developer productivity. Chapters are written for a generalist audience, without excessive use of technical terminology. The book collects the wisdom of software engineering thought leaders in a form digestible for any developer, shares hard-won best practices and pitfalls to avoid, and offers an up-to-date look at current practices in software engineering productivity.


    Decomposing responses to mobile notifications

    Notifications from mobile devices frequently prompt us with information, either to merely inform us or to elicit a reaction. This has led to increasing research interest in considering an individual’s interruptibility prior to issuing notifications, so that they are positively received. To achieve this, predictive models need to be built from previous response behaviour where the individual’s interruptibility is known. However, there are several degrees of freedom in achieving this, from different definitions of what it means to be interruptible and for a notification to be successful, to various methods for collecting data and building predictive models. The primary focus of this thesis is to improve upon the typical convention used for labelling interruptibility, an area which has had limited direct attention. This includes the proposal of a flexible framework, called the decision-on-information-gain model, which passively observes response behaviour in order to support various interruptibility definitions. In contrast, previous studies have largely focused on investigating influential contextual factors for predicting interruptibility, using a broad labelling convention that relies on notifications being responded to fully and potentially a survey being completed. The approach is supported through two in-the-wild studies of Android notifications, one with 11,000 notifications across 90 users, and another with 32,000,000 across 3000 users. Analysis of these datasets shows that: a) responding to notifications is a decision-making process, whereby individuals can be reachable but not receptive to a notification’s content, supporting the premise of the approach; b) the approach is implementable on typical Android devices and capable of adapting to different notification designs and user preferences; and c) the different labels produced by the model are predictable using data sources that do not require invasive permissions or persistent background monitoring; however, there are notable performance differences between machine learning strategies for training and evaluation.
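    To make the reachable-versus-receptive distinction concrete, the sketch below labels a passively observed notification response with separate reachability and receptivity labels instead of a single responded/ignored label. It is inspired by the thesis's framing, not a reproduction of the decision-on-information-gain model; the field names and the 10-minute window are assumptions.

```python
# Minimal sketch of stage-based labelling from passively observed response
# behaviour (illustrative only; not the thesis's actual model or data schema).
from dataclasses import dataclass
from typing import Optional

@dataclass
class NotificationEvent:
    posted_at: float                    # seconds since epoch
    seen_at: Optional[float] = None     # notification drawer opened / glanced at
    opened_at: Optional[float] = None   # notification tapped
    dismissed_at: Optional[float] = None

REACHABLE_WINDOW_S = 10 * 60  # assumed: seen within 10 minutes => user was reachable

def label(event: NotificationEvent) -> dict:
    """Derive separate 'reachable' and 'receptive' labels, so a user can be
    reachable (saw the notification promptly) but not receptive (never opened it)."""
    reachable = (event.seen_at is not None
                 and event.seen_at - event.posted_at <= REACHABLE_WINDOW_S)
    receptive = reachable and event.opened_at is not None
    return {"reachable": reachable, "receptive": receptive}

# Example: the user saw the notification quickly but dismissed it without opening.
evt = NotificationEvent(posted_at=0.0, seen_at=90.0, dismissed_at=120.0)
print(label(evt))  # {'reachable': True, 'receptive': False}
```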

    Post-pandemic Recommendations: COVID-19 Continuity of Court Operations During a Public Health Emergency Workgroup

    In this report, the COVID-19 Continuity of Court Operations During a Public Health Emergency Workgroup (Plan B Workgroup) makes recommendations about best practices and technologies that should be retained or adapted post-pandemic. The recommendations in this final Plan B Workgroup whitepaper are based on experience and feedback from Arizona’s courts addressing pandemic and post-pandemic practices. Although the original report, issued on June 2, 2021, included a May 2021 Survey of Arizona’s Courts, this updated report also includes information from a July 2021 State Bar of Arizona Survey and a September 2021 State of Arizona Public Opinion Survey addressing those practices. The workgroup’s findings and recommendations, which remain unchanged, can be summarized in five major categories: (1) Increasing Access to Justice, (2) Expanding Use of Technology, (3) Jury and Trial Management, (4) Communication Strategies and Disaster Preparedness, and (5) Health, Safety, and Security Protocols.

    Can head teacher autonomy mitigate the effects of COVID-19 school closures in India?

    This paper uses data from India to examine how school leaders have reacted to school closures due to COVID-19. We consider how differences in the decision-making autonomy of school leaders affect their confidence and coping strategies, and explore how this may help mitigate the otherwise unequalising effects of the pandemic.
    • …