
    Effects of Sensemaking Translucence on Distributed Collaborative Analysis

    Collaborative sensemaking requires that analysts share their information and insights with each other, but this process of sharing runs the risk of prematurely focusing the investigation on specific suspects. To address this tension, we propose and test an interface for collaborative crime analysis that aims to make analysts more aware of their sensemaking processes. We compare our sensemaking translucence interface to a standard interface without special sensemaking features in a controlled laboratory study. We found that the sensemaking translucence interface significantly improved clue finding and crime solving performance, but that analysts rated the interface lower on subjective measures than the standard interface. We conclude that designing for distributed sensemaking requires balancing task performance vs. user experience and real-time information sharing vs. data accuracy. Comment: ACM SIGCHI CSCW 201

    INVESTIGATING THE IMPACT OF ONLINE HUMAN COLLABORATION IN EXPLANATION OF AI SYSTEMS

    An important subdomain in research on Human-Artificial Intelligence interaction is Explainable AI (XAI). XAI aims to improve human understanding of and trust in machine intelligence and automation by providing users with visualizations and other information that explain the AI’s decisions, actions, or plans, thereby establishing justified trust and reliance. XAI systems have primarily used algorithmic approaches that generate explanations automatically to help users understand the information underlying decisions and establish justified trust and reliance, but an alternative that may augment these systems is to take advantage of the fact that user understanding of AI systems often develops through self-explanation (Mueller et al., 2021). Users attempt to piece together different sources of information and develop a clearer understanding, but these self-explanations are often lost if not shared with others. This thesis research demonstrated how such self-explanations can be shared collaboratively via a system called Collaborative XAI (CXAI). It is akin to a Social Q&A platform (Oh, 2018) such as StackExchange. A web-based system was built and evaluated both formatively and via user studies. The formative evaluation shows how explanations in an XAI system, especially collaborative explanations, can be assessed against ‘goodness criteria’ (Mueller et al., 2019). This thesis also investigated how users performed with the explanations from this type of XAI system. Lastly, the research investigated whether users of the CXAI system are satisfied with the human-generated explanations it produces and whether they can trust this type of explanation.
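
    To make the Social Q&A-style idea above more concrete, here is a minimal, purely illustrative sketch of a data model for collecting and ranking user-contributed explanations of an AI decision. It is an assumption-laden toy, not the thesis's actual CXAI implementation; all class and field names are hypothetical.

        # Illustrative sketch only: a minimal data model for sharing user-generated
        # explanations of an AI system's decisions, in the spirit of a Social Q&A
        # platform. Names are hypothetical and not taken from the CXAI system.
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List, Optional

        @dataclass
        class Explanation:
            author: str
            text: str                       # a user's self-explanation of the AI decision
            created: datetime = field(default_factory=datetime.utcnow)
            votes: int = 0                  # community assessment of "goodness"

        @dataclass
        class ExplanationThread:
            ai_decision_id: str             # the AI output being explained
            question: str                   # e.g., "Why did the model reject this input?"
            explanations: List[Explanation] = field(default_factory=list)

            def add(self, author: str, text: str) -> Explanation:
                exp = Explanation(author, text)
                self.explanations.append(exp)
                return exp

            def best(self) -> Optional[Explanation]:
                # Surface the explanation the community found most helpful, if any.
                return max(self.explanations, key=lambda e: e.votes, default=None)

    A real system would add persistence, moderation, and assessments against the ‘goodness criteria’ discussed above.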

    ENHANCING EXPRESSIVITY OF DOCUMENT-CENTERED COLLABORATION WITH MULTIMODAL ANNOTATIONS

    As knowledge work moves online, digital documents have become a staple of human collaboration. To communicate beyond the constraints of time and space, remote and asynchronous collaborators create digital annotations over documents, substituting face-to-face meetings with online conversations. However, existing document annotation interfaces depend primarily on text commenting, which is not as expressive or nuanced as in-person communication, where interlocutors can speak and gesture over physical documents. To expand the communicative capacity of digital documents, we need to enrich annotation interfaces with face-to-face-like multimodal expressions (e.g., talking and pointing over texts). This thesis makes three major contributions toward multimodal annotation interfaces for enriching collaboration around digital documents. The first contribution is a set of design requirements for multimodal annotations drawn from our user studies and exploratory literature surveys. We found that the major challenges were to support lightweight access to recorded voice, to control visual occlusion from graphically rich audio interfaces, and to reduce speech anxiety in voice comment production. Second, to address these challenges, we present RichReview, a novel multimodal annotation system. RichReview is designed to capture natural communicative expressions in face-to-face document descriptions as a combination of multimodal user inputs (e.g., speech, pen-writing, and deictic pen-hovering). To balance the consumption and production of speech comments, the system employs (1) cross-modal indexing interfaces for faster audio navigation, (2) fluid document-annotation layout for reduced visual clutter, and (3) voice-synthesis-based speech editing for reduced speech anxiety. The third contribution is a series of evaluations that examine the effectiveness of our design solutions. Results of our lab studies show that RichReview successfully addresses the above-mentioned interface problems of multimodal annotations. A subsequent series of field deployment studies tested the real-world efficacy of RichReview by deploying the system for document-centered conversation activities in classrooms, such as instructor feedback on student assignments and peer discussions about course material. The results suggest that using rich annotations helps students better understand the instructor’s comments and makes them feel more personally valued. From the results of the peer-discussion study, we learned that retaining the richness of the original speech is key to the success of speech commenting. What follows is a discussion of the benefits, challenges, and future of multimodal annotation interfaces, and of the technical innovations required to realize this vision.
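
    As an illustration of the cross-modal indexing idea described above, the following sketch shows one plausible way to link a time-aligned transcript of a voice comment to a document anchor so an interface could jump to the matching point in the audio. It is a hypothetical toy under assumed names and data formats, not RichReview's actual data model or API.

        # Illustrative sketch only: cross-modal indexing for a multimodal annotation,
        # mapping time-stamped transcript words to a document region so that selecting
        # a word in the transcript can seek the recorded audio.
        from dataclasses import dataclass
        from typing import List, Optional, Tuple

        @dataclass
        class SpokenWord:
            text: str
            start_sec: float      # when the word begins in the recorded comment
            end_sec: float        # when the word ends

        @dataclass
        class MultimodalAnnotation:
            page: int
            anchor_rect: Tuple[float, float, float, float]  # (x, y, w, h) region commented on
            audio_file: str
            words: List[SpokenWord]          # time-aligned transcript of the voice comment

            def seek_for_word(self, query: str) -> Optional[float]:
                # Return the playback position of the first transcript word matching
                # the query, so the UI can jump straight to that part of the audio.
                for w in self.words:
                    if w.text.lower() == query.lower():
                        return w.start_sec
                return None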

    Facilitating Asynchronous Collaboration in Scientific Workflow Composition Using Provenance

    Recent advances in various domains have led to a data explosion, which has created many significant scientific discovery opportunities. Researchers therefore need systems that allow them to analyze data efficiently. Scientific Workflow Management Systems (SWfMS) such as Galaxy, Taverna, Kepler, and VizSciFlow are popular among researchers for data-intensive experiments. These advances have also led to increasingly complex experiments and a growing demand for collaboration between scientists. Many scientific experiments require scientists from different domains to work collaboratively toward addressing a problem. Only a few existing SWfMSs, such as ProveDB, SciWorCS, and Workspace, support collaboration, and in many cases their methods are not efficient. In existing collaborative data analysis systems, researchers share their work by having all collaborators work on a single version of the workflow, which increases the chance of interference as the number of collaborators grows. Furthermore, when collaborators join an experiment, they need information about the project’s status, such as its change history and current problems, to contribute effectively. Existing SWfMSs neither offer this insight nor provide group awareness in an asynchronous setting. The first contribution of this work is a set of tools that facilitate collaborative workflow composition in SWfMSs. To this end, we adapted standard concepts from version control systems (VCS, e.g., GitHub), such as branching and versioning, to SWfMSs. As a proof of concept of these collaborative features, we developed an API that captures provenance information and manages workflow branches and versions. As the second contribution, we propose a set of visualizations and reports that provide the information collaborators need when joining a project or continuing to collaborate more efficiently. We capture the system event log, also known as provenance information, during the workflow composition and execution phases and use these data to generate the visualizations and reports. Before implementing the visualizations, we created a demo of our work and surveyed potential users to discover how much our proposed visualizations could contribute to group awareness. Moreover, we asked to what extent the proposed version control system could help address shortcomings in collaborative experiments. We invited programmers and researchers who had experience using SWfMSs, as well as domain specialists from associated areas, to participate in our study. We selected these roles because of the relevance of their experience to our research topic. Twelve individuals participated in the survey. They provided valuable feedback on improving the proposed collaborative tools and on what other kinds of visualizations they would need as potential users. Seventy percent of the participants found the proposed tools beneficial for collaborative workflow composition.
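
    The branching and versioning concepts described above can be illustrated with a small, hypothetical sketch of a workflow repository that records provenance events alongside commits and branches. This is a sketch under assumed names, not the thesis API.

        # Illustrative sketch only: a minimal Git-like branching/versioning model for
        # workflow specifications, with a provenance log of composition events that
        # visualizations and reports could be built on.
        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import Dict, List, Optional

        @dataclass
        class WorkflowVersion:
            version_id: int
            spec: str                        # serialized workflow definition
            parent: Optional[int]            # previous version on this branch, if any
            author: str
            created: datetime = field(default_factory=datetime.utcnow)

        @dataclass
        class WorkflowRepo:
            branches: Dict[str, List[WorkflowVersion]] = field(default_factory=dict)
            provenance: List[dict] = field(default_factory=list)   # event log for reports

            def commit(self, branch: str, spec: str, author: str) -> WorkflowVersion:
                # Append a new version to the branch and record the event as provenance.
                history = self.branches.setdefault(branch, [])
                parent = history[-1].version_id if history else None
                version = WorkflowVersion(len(history) + 1, spec, parent, author)
                history.append(version)
                self.provenance.append({"event": "commit", "branch": branch,
                                        "version": version.version_id, "author": author,
                                        "time": version.created.isoformat()})
                return version

            def branch(self, source: str, new_branch: str) -> None:
                # Start a new line of work from an existing branch so collaborators
                # can edit without interfering with each other's versions.
                self.branches[new_branch] = list(self.branches.get(source, []))
                self.provenance.append({"event": "branch", "from": source, "to": new_branch})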

    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers of entry to live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows a broader population to take part in various creative practices. PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd

    Making Maps Available for Play: Analyzing the Design of Game Cartography Interfaces.

    Maps in video games have grown into complex interactive systems alongside video games themselves. What map systems have done and currently do has not been cataloged or evaluated. We trace the history of game map interfaces from their paper-based inspiration to their current smart-phone-like appearance. Read-only map interfaces enable players to consume maps, which is sufficient for wayfinding. Game cartography interfaces enable players to persistently modify maps, expanding the range of activity to support planning and coordination. We employ thematic analysis on game cartography interfaces, contributing a near-exhaustive catalog of games featuring such interfaces, a set of properties to describe and design such interfaces, a collection of play activities that relate to cartography, and a framework to identify which properties promote those activities. We expect these contributions will enable designers to promote desired play experiences through game map interface design.

    Parallel architectures and runtime systems co-design for task-based programming models

    The increasing parallelism of modern computing systems has underscored the need for a holistic vision when designing multiprocessor architectures, one that takes into account the needs of programming models and applications. Nowadays, system design consists of several layers stacked on top of each other, from the architecture up to the application software. Although this design allows a separation of concerns, where each layer can be changed independently thanks to well-defined interfaces between them, it hampers future system design as Moore's Law reaches its end. Current performance improvements in computer architecture are driven by shrinking the transistor channel width, which allows faster and more power-efficient chips to be made. However, technology is reaching physical limits where the transistor size can no longer be reduced, requiring a change of paradigm in system design. This thesis proposes to break this layered design and advocates for a system in which the architecture and the programming model's runtime system can exchange information toward a common goal: improving performance and reducing power consumption. By making the architecture aware of runtime information, such as the Task Dependency Graph (TDG) in dataflow task-based programming models, power consumption can be reduced by exploiting the critical path of the graph. Moreover, the architecture can provide hardware support for creating such a graph, reducing runtime overheads and making the execution of fine-grained tasks possible, which increases the available parallelism. Finally, the current status of inter-node communication primitives can be exposed to the runtime system to enable more efficient communication scheduling, creating new opportunities for computation-communication overlap that were not possible before. An evaluation of the proposals introduced in this thesis is provided, along with a methodology to simulate and characterize application behavior.
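
    As a toy illustration of the critical-path idea above, the following sketch computes the critical path of a task dependency graph from per-task durations and dependency edges; a runtime could, for instance, run tasks off that path at a lower frequency to save power. The function name and input format are assumptions for illustration, not the architecture or runtime described in the thesis.

        # Illustrative sketch only: longest (critical) path through an acyclic task
        # dependency graph, given per-task durations and (predecessor, successor) edges.
        from collections import defaultdict
        from typing import Dict, List, Tuple

        def critical_path(durations: Dict[str, float],
                          deps: List[Tuple[str, str]]) -> Tuple[float, List[str]]:
            """Return (total length, ordered task list) of the critical path."""
            succs = defaultdict(list)
            indeg = {t: 0 for t in durations}
            for a, b in deps:
                succs[a].append(b)
                indeg[b] += 1

            # Latest finish time reachable at each task, plus back-pointers.
            finish = {t: durations[t] for t in durations}
            prev = {t: None for t in durations}
            ready = [t for t, d in indeg.items() if d == 0]
            while ready:                       # Kahn's algorithm: topological order
                t = ready.pop()
                for s in succs[t]:
                    if finish[t] + durations[s] > finish[s]:
                        finish[s] = finish[t] + durations[s]
                        prev[s] = t
                    indeg[s] -= 1
                    if indeg[s] == 0:
                        ready.append(s)

            end = max(finish, key=finish.get)  # task where the longest chain ends
            total = finish[end]
            path = []
            while end is not None:
                path.append(end)
                end = prev[end]
            return total, list(reversed(path))

    For example, critical_path({"a": 2, "b": 3, "c": 1}, [("a", "b"), ("a", "c")]) would return (5.0, ["a", "b"]), marking task "c" as off the critical path.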

    Beyond Orality and Literacy: Letters and Organizational Communication

    We draw on communication theories to study organizational communication from a literacy perspective. We suggest that the current debate over the capability of new media to foster the sharing and development of ideas and to allow the expression of emotions, which presupposes face-to-face communication as the ideal form of communication, disappears once we switch the focus from the medium to the modality – written versus oral communication. An analysis of personal and organizational letters illustrates the role played by written communication throughout human history in exchanging ideas and supporting emotional expression.
    Keywords: Orality and Literacy; Online Interactions; Communicative Practices; Letters; Organizational Communication

    Factors Influencing Faculty Use of Screencasting for Feedback

    This study explored faculty concerns about using screencasting to give feedback, why they choose to adopt it, and what training and support would benefit them in adopting such a method. This is a single embedded case study using a Stages of Concern questionnaire, semi-structured and open-ended interviews, and reviews of media comments as data collection methods. Twenty-one professors from a southwestern private university participated in the research, drawn from 51 potential participants who had been exposed to screencasting for feedback through software ownership, training, or coaching. After completing the questionnaire, 16 participants were interviewed in depth, and five of them provided examples of their media feedback. One finding was that screencasting holds promise for giving feedback in a residential university setting, as it can enrich the cognitive and affective content of feedback. Faculty members were concerned mostly with the personal aspects of using screencasting feedback, such as the time demand. Another finding was that professors make sophisticated choices when deciding which modalities to use for feedback; such choices depend on class size, the nature of the content, the rules they follow, and the division of labor. Recommendations include greater use by faculty and improved training by faculty developers to assist faculty in using screencasting to give feedback.