
    Human-computer collaboration for skin cancer recognition

    The rapid increase in telemedicine, coupled with recent advances in diagnostic artificial intelligence (AI), creates the imperative to consider the opportunities and risks of inserting AI-based support into new paradigms of care. Here we build on recent achievements in the accuracy of image-based AI for skin cancer diagnosis to address the effects of varied representations of AI-based support across different levels of clinical expertise and multiple clinical workflows. We find that good-quality AI-based support of clinical decision-making improves diagnostic accuracy over that of either AI or physicians alone, and that the least experienced clinicians gain the most from AI-based support. We further find that AI-based multiclass probabilities outperformed content-based image retrieval (CBIR) representations of AI in the mobile technology environment, and that AI-based support had utility in simulations of second opinions and of telemedicine triage. In addition to demonstrating the potential benefits associated with good-quality AI in the hands of non-expert clinicians, we find that faulty AI can mislead the entire spectrum of clinicians, including experts. Lastly, we show that insights derived from AI class-activation maps can inform improvements in human diagnosis. Together, our approach and findings offer a framework for future studies across the spectrum of image-based diagnostics to improve human-computer collaboration in clinical practice.
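    As a rough illustration of one of the support representations mentioned above (AI-based multiclass probabilities), the sketch below ranks a hypothetical probability vector over lesion classes into a short list of suggestions a clinician might review. The class names, probability values, and top-k cutoff are placeholder assumptions for illustration, not details taken from the study.

        # Illustrative sketch only: surfacing a classifier's multiclass
        # probabilities as ranked diagnostic suggestions. Class names,
        # probabilities, and the cutoff are hypothetical placeholders.
        from typing import List, Tuple

        DIAGNOSES = ["melanoma", "nevus", "basal cell carcinoma", "actinic keratosis",
                     "benign keratosis", "dermatofibroma", "vascular lesion"]

        def top_k_support(probabilities: List[float], k: int = 3) -> List[Tuple[str, float]]:
            """Return the k most probable diagnoses with their probabilities."""
            ranked = sorted(zip(DIAGNOSES, probabilities), key=lambda p: p[1], reverse=True)
            return ranked[:k]

        if __name__ == "__main__":
            probs = [0.55, 0.20, 0.10, 0.05, 0.05, 0.03, 0.02]  # hypothetical model output
            for name, p in top_k_support(probs):
                print(f"{name}: {p:.0%}")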

    The poem will resemble you: A human-computer collaboration

    Senior Project submitted to The Division of Languages and Literature of Bard College

    Human-Computer Collaboration for Visual Analytics: an Agent-based Framework

    The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, each is specific to a particular aspect of the visual analytic process. Furthermore, with an ever-expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent-based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed-initiative, guided, and collaborative analysis. Furthermore, it allows us to characterize analysts, visual analytic settings, and guidance through the lenses of human agents, environments, and artificial agents, respectively.
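    One way to read the agent-based framing described above is the standard agent/environment loop from the AI literature: the analyst acts as a human agent, the visual analytic system as the environment, and guidance as an artificial agent. The sketch below is a speculative illustration of that reading; the class names and interfaces are assumptions, not the authors' formalism.

        # Speculative sketch of the agent/environment framing: a human agent
        # (analyst) and an artificial agent (guidance) both act on a shared
        # visual analytic environment. Names and methods are illustrative.
        from abc import ABC, abstractmethod

        class Agent(ABC):
            @abstractmethod
            def act(self, observation: dict) -> dict:
                """Choose an action given the current observation."""

        class HumanAnalyst(Agent):
            def act(self, observation: dict) -> dict:
                return {"interaction": "filter", "focus": "outliers"}  # stand-in for user input

        class GuidanceAgent(Agent):
            def act(self, observation: dict) -> dict:
                return {"suggestion": "show-correlation-view"}  # stand-in for guidance

        class VisualAnalyticEnvironment:
            def __init__(self) -> None:
                self.state: dict = {"view": "overview"}

            def step(self, action: dict) -> dict:
                self.state.update(action)  # apply the action, return the new observation
                return dict(self.state)

        if __name__ == "__main__":
            env = VisualAnalyticEnvironment()
            obs = dict(env.state)
            for agent in (HumanAnalyst(), GuidanceAgent()):
                obs = env.step(agent.act(obs))
            print(obs)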

    Design Principles for Human-Computer Collaboration


    An Affordance-Based Framework for Human Computation and Human-Computer Collaboration

    Visual Analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [70]. The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state of the art. Our analysis has uncovered key patterns of design hinging on human- and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.

    Conveying intentions through haptics in human-computer collaboration

    Haptics has been used as a natural way for humans to communicate with computers in collaborative virtual environments. Human-computer collaboration is typically achieved by sharing control of the task between a human and a computer operator. An important research challenge in the field addresses the need to realize intention recognition and response, which involves a decision-making process between the partners. In an earlier study, we implemented a dynamic role exchange mechanism, which realizes decision making by means of trading the parties' control levels on the task. This mechanism proved to show promise of a more intuitive and comfortable communication. Here, we extend our earlier work to further investigate the utility of a role exchange mechanism in dynamic collaboration tasks. An experiment with 30 participants was conducted to compare the utility of a role exchange mechanism with that of a shared control scheme where the human and the computer share control equally at all times. A no-guidance condition is considered as a base case to present the benefits of these two guidance schemes more clearly. Our experiment shows that the role exchange scheme maximizes the efficiency of the user, which is the ratio of the work done by the user within the task to the energy spent by her. Furthermore, we explored the added benefits of explicitly displaying the control state by embedding visual and vibrotactile sensory cues on top of the role exchange scheme. We observed that such cues decrease performance slightly, probably because they introduce an extra cognitive load, yet they improve the users' sense of collaboration and interaction with the computer. These cues also create a stronger sense of trust for the user towards her partner's control over the task.
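    As a toy illustration of the shared-control idea and the efficiency measure described above, the sketch below blends human and computer force commands with a control-sharing weight and computes efficiency as the user's task-relevant work divided by the energy the user spends. The signals, weights, and numbers are made-up assumptions, not the study's actual controller.

        # Toy sketch: shared control as a weighted blend of human and computer
        # commands, plus the user-efficiency ratio described above. All values
        # are illustrative placeholders.
        def blended_command(human_force: float, computer_force: float, alpha: float) -> float:
            """alpha = 1.0 gives the human full control; alpha = 0.5 is equal sharing."""
            return alpha * human_force + (1.0 - alpha) * computer_force

        def user_efficiency(work_done_by_user: float, energy_spent_by_user: float) -> float:
            """Ratio of the user's task-relevant work to the energy the user spent."""
            return work_done_by_user / energy_spent_by_user if energy_spent_by_user else 0.0

        if __name__ == "__main__":
            # A role exchange could be modeled as alpha shifting over time.
            for alpha in (0.5, 0.7, 0.9):  # computer gradually cedes control to the human
                print(alpha, blended_command(human_force=2.0, computer_force=1.0, alpha=alpha))
            print("efficiency:", user_efficiency(work_done_by_user=12.0, energy_spent_by_user=20.0))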

    Human Computer Collaboration to Improve Annotations in Semantic Wikis

    Semantic wikis are very promising tools for producing structured and unstructured data. However, they suffer from a lack of user-provided semantic annotations, resulting in a loss of efficiency despite their high potential. This paper focuses on an original way to encourage users to semantically annotate pages. We propose a system that suggests automatically computed annotations to users. Users thus only have to validate, complete, modify, refuse or ignore these suggested annotations. We assume that as the annotation task becomes easier, more users will provide annotations. The system we propose is based on collaborative filtering recommender systems; it does not exploit the content of the pages but the usage users make of these pages: annotations are deduced from the usage of the pages and the annotations previously provided. The resulting semantic wikis contain several kinds of annotations that are differentiated by their status: human-provided annotations, computer-provided annotations (suggested by the system), validated annotations (suggested by the system and validated by the users) and refused annotations (suggested by the system and refused by the user). Navigation and (semantic) search will thus be facilitated and more efficient.
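    A minimal sketch of the usage-based suggestion idea is given below: pages browsed by the same users are treated as related, and annotations already attached to related pages are suggested for the target page, for the user to validate or refuse. The co-occurrence scoring, example usage log, and annotations are illustrative assumptions, not the recommender actually described in the paper.

        # Minimal sketch of usage-based annotation suggestion via co-occurrence.
        # Data and similarity measure are hypothetical placeholders.
        from collections import Counter

        # Hypothetical usage log: which users visited which wiki pages.
        visits = {
            "alice": {"PageA", "PageB"},
            "bob":   {"PageA", "PageB", "PageC"},
            "carol": {"PageB", "PageC"},
        }
        # Hypothetical human-provided annotations per page.
        annotations = {
            "PageA": {"topic:wiki"},
            "PageB": {"topic:semantic-web", "lang:en"},
            "PageC": {"topic:semantic-web"},
        }

        def suggest(target: str, top_n: int = 2):
            """Suggest annotations for `target` from pages co-visited with it."""
            scores = Counter()
            for pages in visits.values():
                if target in pages:
                    for other in pages - {target}:
                        for tag in annotations.get(other, ()):
                            scores[tag] += 1
            # Skip annotations the page already has; users may validate or refuse the rest.
            existing = annotations.get(target, set())
            return [tag for tag, _ in scores.most_common() if tag not in existing][:top_n]

        if __name__ == "__main__":
            print(suggest("PageA"))  # e.g. ['topic:semantic-web', 'lang:en']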

    Balancing Human and Machine Contributions in Human Computation Systems

    Many interesting and successful human computation systems leverage the complementary computational strengths of both humans and machines to solve problems that neither could solve well alone. In this chapter, we examine Human Computation as a type of Human-Computer Collaboration: collaboration involving at least one human and at least one computational agent. We discuss recent advances in the open area of function allocation, and explore how to balance the contributions of humans and machines in computational systems. We then explore how human-computer collaborative strategies can be used to solve problems that are difficult or computationally infeasible for computers or humans alone.
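    One common function-allocation strategy, sketched below purely as an illustration (it is not a method taken from the chapter), routes each task to the machine when its confidence is high and falls back to a human otherwise. The threshold, task format, and stand-in agents are assumptions.

        # Illustrative sketch of confidence-based function allocation between a
        # machine agent and a human agent. Threshold and agents are placeholders.
        from typing import Callable, Tuple

        def allocate(task: str,
                     machine: Callable[[str], Tuple[str, float]],
                     human: Callable[[str], str],
                     threshold: float = 0.9) -> str:
            """Use the machine's answer if it is confident enough, else ask the human."""
            answer, confidence = machine(task)
            return answer if confidence >= threshold else human(task)

        if __name__ == "__main__":
            # Stand-in agents: a classifier-like machine and a human oracle.
            machine = lambda task: ("cat", 0.65) if "blurry" in task else ("cat", 0.97)
            human = lambda task: "dog"
            print(allocate("label the blurry photo", machine, human))  # -> 'dog'
            print(allocate("label the sharp photo", machine, human))   # -> 'cat'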

    Dynamic human-computer collaboration in real-time unmanned vehicle scheduling

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010, by Andrew S. Clare. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 123-127).
    Advances in autonomy have made it possible to invert the operator-to-vehicle ratio so that a single operator can control multiple heterogeneous Unmanned Vehicles (UVs). This autonomy will reduce the need for the operator to manually control each vehicle, enabling the operator to focus on higher-level goal setting and decision-making. Computer optimization algorithms that can be used in UV path-planning and task allocation usually have an a priori coded objective function that only takes into account pre-determined variables with set weightings. Due to the complex, time-critical, and dynamic nature of command and control missions, brittleness due to a static objective function could cause higher workload as the operator manages the automation. Increased workload during critical decision-making could lead to lower system performance which, in turn, could result in a mission- or life-critical failure. This research proposes a method of collaborative multiple UV control that enables operators to dynamically modify the weightings within the objective function of an automated planner during a mission. After a review of function allocation literature, an appropriate taxonomy was used to evaluate the likely impact of human interaction with a dynamic objective function. This analysis revealed a potential reduction in the number of cognitive steps required to evaluate and select a plan, by aligning the objectives of the operator with the automated planner. A multiple UV simulation testbed was modified to provide two types of dynamic objective functions: the operator could either choose one quantity or choose any combination of equally weighted quantities for the automated planner to use in evaluating mission plans. To compare the performance and workload of operators using these dynamic objective functions against operators using a static objective function, an experiment was conducted in which 30 participants performed UV missions in a synthetic environment. Two scenarios were designed, one in which the Rules of Engagement (ROEs) remained the same throughout the scenario and one in which the ROEs changed. The experimental results showed that operators rated their performance and confidence highest when using the dynamic objective function with multiple objectives. Allowing the operator to choose multiple objectives resulted in fewer modifications to the objective function, enhanced situational awareness (SA), and increased spare mental capacity. Limiting the operator to choosing a single objective for the automated planner led to superior performance for individual mission goals, such as finding new targets, while also causing some violations of ROEs, such as destroying a target without permission. Although there were no significant differences in system performance or workload between the dynamic and static objective functions, operators had superior performance and higher SA during the mission with changing ROEs. While these results suggest that a dynamic objective function could be beneficial, further research is required to explore the impact of dynamic objective functions and changing mission goals on human performance and workload in multiple UV control.
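    A small sketch of the dynamic-objective-function idea is given below; it is not the thesis implementation. The planner scores candidate plans as an equally weighted sum over whichever quantities the operator currently selects, so re-selecting objectives mid-mission changes which plan the planner prefers. The quantity names and plan scores are made-up placeholders.

        # Illustrative sketch of a planner objective whose terms the operator can
        # re-select during a mission. Metrics and values are hypothetical.
        def plan_score(plan_metrics: dict, selected_objectives: set) -> float:
            """Equally weighted sum over the objectives the operator currently selects."""
            if not selected_objectives:
                return 0.0
            weight = 1.0 / len(selected_objectives)
            return sum(weight * plan_metrics.get(obj, 0.0) for obj in selected_objectives)

        if __name__ == "__main__":
            candidate_plans = {
                "plan1": {"targets_found": 0.8, "area_covered": 0.4, "fuel_remaining": 0.7},
                "plan2": {"targets_found": 0.5, "area_covered": 0.9, "fuel_remaining": 0.6},
            }
            # Mid-mission, the operator switches the planner's focus.
            for objectives in ({"targets_found"}, {"targets_found", "area_covered"}):
                best = max(candidate_plans, key=lambda p: plan_score(candidate_plans[p], objectives))
                print(objectives, "->", best)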