
    Understanding Collaborative Sensemaking for System Design — An Investigation of Musicians' Practice

    There is surprisingly little in the information science and technology literature about the design of tools to support the collaboration of creators. Understanding collaborative sensemaking through the use of language has traditionally been applied to non-work domains, but the method is also well suited to informing hypotheses about the design of collaborative systems. The presence of ubiquitous mobile technology and the development of multi-user virtual spaces invite investigation of design based on naturalistic, real-world creative group behaviors, including the collaborative work of musicians. This thesis considers the co-construction of new (musical) knowledge by small groups. Co-construction of new knowledge is critical to the definition of an information system because it emphasizes coordination and resource sharing among group members (versus individual members independently doing their own tasks and only coming together to collate their contributions as a final product). This work situates the locus of creativity in the process itself, rather than in the output (the musical result) or the individuals (members of the band). The thesis describes a way to apply quantitative observations to inform qualitative assessment of the characteristics of collaborative sensemaking in groups. Conversational data were obtained from nine face-to-face collaborative composing sessions involving three separate bands, producing 18 hours of recorded interactions. Topical characteristics of the discussion (objects, plans, properties and performance), as well as emergent patterns of generative, evaluative, revision and management conversational acts within the group, were taken as indicative of knowledge construction. The findings report the use of collaborative pathways: iterative cycles of generation, evaluation and revision of temporary solutions used to move the collaboration forward. In addition, bracketing of temporary solutions helped collaborators reuse content and offload attentional resources. Ambiguity in language, evaluation criteria, goal formation and group awareness meant that existing knowledge representations were insufficient for making sense of incoming data and necessitated reformulating those representations. Further, strategic use of affective language was found to be instrumental in bridging knowledge gaps. Based on these findings, features of a collaborative system are proposed to help facilitate sensemaking routines at various stages of a creative task. This research contributes to the theoretical understanding of collaborative sensemaking during non-work, creative activities in order to inform the design of systems for supporting these activities. By studying an environment which forms a potential microcosm of virtual interaction between groups, it provides a framework for understanding and automating collaborative discussion content in terms of the features of dialogue.
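    For illustration only, here is a minimal Python sketch (not the thesis's actual coding tool) of how utterances tagged with the four conversational acts above might be scanned for the generate, evaluate, revise cycles the author calls collaborative pathways; the act labels come from the abstract, everything else is assumed:

```python
# Hypothetical sketch: detect collaborative-pathway iterations in a sequence
# of utterances coded with the four conversational acts from the abstract.
ACTS = {"generative", "evaluative", "revision", "management"}

def find_pathways(tagged_utterances):
    """tagged_utterances: list of (speaker, act) pairs in conversation order."""
    pathways, cycle = [], []
    for speaker, act in tagged_utterances:
        assert act in ACTS
        if act == "generative":
            cycle = [(speaker, act)]      # a new temporary solution opens a cycle
        elif act in ("evaluative", "revision") and cycle:
            cycle.append((speaker, act))
            if act == "revision":         # generate -> evaluate -> revise closes one iteration
                pathways.append(cycle)
                cycle = []
    return pathways

# Invented example session between band members.
session = [("guitar", "generative"), ("drums", "evaluative"),
           ("bass", "revision"), ("drums", "management")]
print(len(find_pathways(session)), "pathway iteration(s) found")
```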

    Supporting Human-AI Collaboration in Auditing LLMs with LLMs

    Large language models are becoming increasingly pervasive in society via deployment in sociotechnical systems. Yet these language models, whether used for classification or generation, have been shown to be biased and to behave irresponsibly, causing harm to people at scale. It is crucial to audit these language models rigorously. Existing auditing tools leverage humans, AI, or both to find failures. In this work, we draw upon literature in human-AI collaboration and sensemaking, and conduct interviews with research experts in safe and fair AI, to build upon AdaTest (Ribeiro and Lundberg, 2022), an auditing tool powered by a generative large language model (LLM). Through the design process we highlight the importance of sensemaking and human-AI communication in leveraging the complementary strengths of humans and generative models in collaborative auditing. To evaluate the effectiveness of the augmented tool, AdaTest++, we conduct user studies with participants auditing two commercial language models: OpenAI's GPT-3 and Azure's sentiment analysis model. Qualitative analysis shows that AdaTest++ effectively leverages human strengths such as schematization, hypothesis formation and testing. Further, with our tool, participants identified a variety of failure modes, covering 26 different topics over 2 tasks, including failures shown before in formal audits as well as previously under-reported ones.
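    As a rough sketch of the human-AI auditing loop described above, under stated assumptions: `llm_propose_tests`, `target_model`, and `human_review` are hypothetical stand-in callables, not AdaTest++'s real API.

```python
# Hedged sketch of a collaborative audit loop: an LLM drafts candidate test
# inputs, a human marks failures, and confirmed failures seed the next round.
def audit_loop(seed_tests, llm_propose_tests, target_model, human_review, rounds=3):
    failures = []
    tests = list(seed_tests)
    for _ in range(rounds):
        for test in tests:
            output = target_model(test)
            if human_review(test, output):   # the human judges this output a failure
                failures.append((test, output))
        # the generative model drafts new tests resembling confirmed failures;
        # if nothing failed yet, it varies the current test set instead
        tests = llm_propose_tests([t for t, _ in failures] or tests)
    return failures
```

    The division of labour mirrors the paper's framing: the model supplies generative breadth, while the human supplies schematization and judgement at the review step.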

    Elaborating the frames of data-frame theory

    As an explanation of sensemaking, data-frame theory has proven popular, influential and useful. Despite its strengths, however, we identify some weaknesses in the way the concept of a ‘frame’ can be interpreted. The weaknesses relate to the need to clearly contrast what we refer to as ‘generic’ versus ‘situation-specific’ belief structures, and to the idea that multiple generic belief structures may be utilized in the construction of embedded situation-specific beliefs. Neither weakness is insurmountable, and we propose a model of sensemaking based on spreading activation through associative networks as a concept that provides a solution. We explore the application of this idea, using the notion of activation to differentiate generic from situation-specific beliefs.
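    To make the proposed mechanism concrete, here is a minimal sketch of spreading activation over a toy associative network; the network, decay and threshold values are invented for illustration, not taken from the paper.

```python
# Minimal spreading-activation sketch: activation flows from source nodes
# along weighted edges, attenuated by a decay factor, until it falls below
# a firing threshold. All parameters here are illustrative assumptions.
def spread_activation(graph, sources, decay=0.5, threshold=0.1, iterations=3):
    """graph: {node: {neighbour: edge_weight}}; sources: initially active nodes."""
    activation = {node: 0.0 for node in graph}
    for node in sources:
        activation[node] = 1.0
    for _ in range(iterations):
        delta = {node: 0.0 for node in graph}
        for node, level in activation.items():
            if level < threshold:            # only sufficiently active nodes fire
                continue
            for neighbour, weight in graph[node].items():
                delta[neighbour] += level * weight * decay
        for node, extra in delta.items():
            activation[node] = min(1.0, activation[node] + extra)
    return activation

# A toy associative network: a generic "restaurant" belief structure linked
# to situation-specific cues. Activating "waiter" raises related concepts.
network = {
    "waiter": {"restaurant": 0.9, "menu": 0.6},
    "restaurant": {"waiter": 0.9, "menu": 0.8, "bill": 0.7},
    "menu": {"restaurant": 0.8, "waiter": 0.6},
    "bill": {"restaurant": 0.7},
}
print(spread_activation(network, sources=["waiter"]))
```

    In this reading, highly and persistently activated clusters play the role of generic belief structures, while transiently activated nodes correspond to situation-specific beliefs.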

    Cohere: Towards Web 2.0 Argumentation

    Students, researchers and professional analysts lack effective tools for making personal and collective sense of problems while working in distributed teams. Central to this work is the process of sharing, and contesting, interpretations via different forms of argument. How does the 'Web 2.0' paradigm challenge us to deliver useful, usable tools for online argumentation? This paper reviews the current state of the art in Web Argumentation, describes key features of the Web 2.0 orientation, and identifies some of the tensions that must be negotiated in bringing these worlds together. It then describes how these design principles are interpreted in Cohere, a web tool for social bookmarking, idea-linking and argument visualization.
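    A minimal sketch of the kind of typed idea-linking data model such a tool implies; the node and link types here are illustrative assumptions, not Cohere's actual schema.

```python
# Illustrative data model: ideas connected by typed links such as
# "supports" or "challenges", the basic unit of web argumentation.
from dataclasses import dataclass, field

@dataclass
class Idea:
    text: str
    links: list = field(default_factory=list)   # (link_type, target Idea) pairs

    def link(self, link_type, other):
        self.links.append((link_type, other))

claim = Idea("Web 2.0 lowers the barrier to public argumentation")
counter = Idea("Unstructured commenting drowns out argument structure")
counter.link("challenges", claim)
evidence = Idea("Usage data from social bookmarking tools")
evidence.link("supports", claim)

for idea in (counter, evidence):
    for link_type, target in idea.links:
        print(f'"{idea.text}" --{link_type}--> "{target.text}"')
```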

    Contested Collective Intelligence: rationale, technologies, and a human-machine annotation study

    We propose the concept of Contested Collective Intelligence (CCI) as a distinctive subset of the broader Collective Intelligence design space. CCI is relevant to the many organizational contexts in which it is important to work with contested knowledge, for instance due to different intellectual traditions, competing organizational objectives, information overload or ambiguous environmental signals. The CCI challenge is to design sociotechnical infrastructures to augment such organizational capability. Since documents are often the starting points for contested discourse, and discourse markers provide a powerful cue to the presence of claims, contrasting ideas and argumentation, discourse and rhetoric provide an annotation focus in our approach to CCI. Research in sensemaking, computer-supported discourse and rhetorical text analysis motivates a conceptual framework for the combined human and machine annotation of texts with this specific focus. This conception is explored through two tools: a social-semantic web application for human annotation and knowledge mapping (Cohere), and the discourse analysis component in a textual analysis software tool (Xerox Incremental Parser: XIP). As a step towards an integrated platform, we report a case study in which a document corpus underwent independent human and machine analysis, providing quantitative and qualitative insight into their respective contributions. A promising finding is that significant contributions were signalled by authors via explicit rhetorical moves, which both human analysts and XIP could readily identify. Since working with contested knowledge is at the heart of CCI, the evidence that automatic detection of contrasting ideas in texts is possible through rhetorical discourse analysis is progress towards the effective use of automatic discourse analysis in the CCI framework.
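    As an illustration of cue-based rhetorical analysis in this spirit, a naive sketch follows; the marker lists and categories are invented examples, not XIP's actual rule set.

```python
# Naive rhetorical-marker matcher: flag sentences containing discourse cues
# that suggest contrast, novelty, or emphasis. Categories are assumptions.
import re

MARKERS = {
    "contrast": ["however", "in contrast", "on the other hand", "although"],
    "novelty": ["we propose", "for the first time", "a novel"],
    "emphasis": ["importantly", "significantly", "it should be noted"],
}

def tag_sentences(text):
    """Return (sentence, matched categories) pairs for sentences with markers."""
    tagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        hits = [cat for cat, cues in MARKERS.items()
                if any(cue in sentence.lower() for cue in cues)]
        if hits:
            tagged.append((sentence, hits))
    return tagged

sample = ("Prior tools treat annotation as a solo task. However, we propose "
          "a combined human-machine pipeline. Importantly, rhetorical cues "
          "make contested claims easier to surface.")
for sentence, cats in tag_sentences(sample):
    print(cats, "->", sentence)
```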

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Halfway through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies based on the WWW as a basic infrastructure.

    The technological context of AKT changed for the better in the short period between the development of the proposal and the beginning of the project itself, with the emergence of the Semantic Web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, for example, more intelligent retrieval put AKT at the centre of innovation in information technology and knowledge management services; the AKT skill set would clearly be central to exploiting those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT aims to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. merging pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
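    As a toy illustration of the ontology-mapping problem raised above, the following sketch reconciles two small ontologies whose vocabularies diverge; all triples and the equivalence table are invented for illustration.

```python
# Toy ontology merge: two vocabularies name the same concept differently,
# and an equivalence mapping rewrites references before the union is taken.
ont_a = {("ex:Person", "rdf:type", "owl:Class"),
         ("ex:authorOf", "rdfs:domain", "ex:Person")}
ont_b = {("lib:Author", "rdf:type", "owl:Class"),
         ("lib:Author", "rdfs:subClassOf", "lib:Agent")}

# Hand-built equivalence table: the output an ontology-mapping service
# would produce automatically in the AKT vision.
mapping = {"lib:Author": "ex:Person"}

def canonicalise(triple, mapping):
    """Rewrite every term in the triple to its canonical name, if mapped."""
    return tuple(mapping.get(term, term) for term in triple)

merged = ont_a | {canonicalise(t, mapping) for t in ont_b}
for triple in sorted(merged):
    print(triple)
```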

    Evaluation methodology for visual analytics software

    The challenge of Visual Analytics (VA) is to produce visualizations that help users focus on the most relevant or most interesting aspects of the data presented. Today's society faces a rapidly growing volume of data, so information users in every domain end up with more information than they can handle. VA software must support intuitive interactions so that analysts can concentrate on the information they are manipulating rather than on the manipulation technique itself. VA environments should seek to minimize the overall cognitive workload of their users: if we have to think less about the interactions themselves, we have more time to think about the analysis proper. Given the benefits that VA applications can bring, and the confusion that still exists in identifying such applications on the market, this work proposes a new heuristics-based evaluation methodology. Our methodology is intended to evaluate applications through usability tests that consider the functionality and characteristics desirable in VA systems. Owing to its quantitative nature, however, it can naturally be used for other purposes, such as comparing VA applications from the same context in order to decide between them. Moreover, its criteria can serve as a source of information for designers and developers making appropriate choices during the design and development of VA systems.
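    A minimal sketch of the kind of weighted heuristic scorecard such a quantitative methodology implies; the heuristics, weights and ratings below are invented, not the paper's actual criteria.

```python
# Illustrative weighted scorecard: evaluators rate each heuristic on a 0-5
# scale and weighted totals make two VA tools quantitatively comparable.
HEURISTICS = {                 # invented heuristics; weights sum to 1.0
    "intuitive interaction": 0.30,
    "focus on relevant data": 0.30,
    "low cognitive load": 0.25,
    "responsiveness at scale": 0.15,
}

def score_tool(ratings):
    """ratings: {heuristic: rating in 0..5}; returns a weighted score in 0..5."""
    return sum(HEURISTICS[h] * ratings[h] for h in HEURISTICS)

tool_a = {"intuitive interaction": 4, "focus on relevant data": 3,
          "low cognitive load": 4, "responsiveness at scale": 2}
tool_b = {"intuitive interaction": 3, "focus on relevant data": 5,
          "low cognitive load": 3, "responsiveness at scale": 4}
print(f"Tool A: {score_tool(tool_a):.2f}  Tool B: {score_tool(tool_b):.2f}")
```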

    Making Sense of Document Collections with Map-Based Visualizations

    As map-based visualizations of documents become more ubiquitous, there is a greater need for them to support intellectual and creative high-level cognitive activities with collections of non-cartographic materials: documents. This dissertation concerns the conceptualization of map-based visualizations as tools for sensemaking and collection understanding. As such, map-based visualizations would help people use georeferenced documents to develop understanding, gain insight, discover knowledge, and construct meaning. The dissertation explores the role of graphical representations (such as maps, Kohonen maps, pie charts, and others) and interactions with them in developing map-based visualizations capable of facilitating sensemaking activities such as collection understanding. While graphical representations make document collections more perceptually and cognitively accessible, interactions allow users to adapt representations to their contextual needs. By interacting with representations of documents or collections, and by being able to construct representations of their own, people are better able to make sense of information, comprehend complex structures, and integrate new information into their existing mental models. In sum, representations and interactions may reduce cognitive load and consequently expedite the overall time necessary for completing sensemaking activities, which typically take much time to accomplish. The dissertation proceeds in three phases. The first phase develops a conceptual framework for translating ontological properties of collections into representations and for supporting visual tasks by means of graphical representations. The second phase concerns the cognitive benefits of interaction; it conceptualizes how interactions can help people during complex sensemaking activities. Although the interactions are explained using the example of a prototype built with Google Maps, they are independent of Google Maps and applicable to various other technologies. The third phase evaluates the utility, analytical capabilities and usability of the additional representations when users interact with a visualization prototype, VIsual COLlection EXplorer. The findings suggest that additional representations can enhance understanding of map-based visualizations of library collections: specifically, they can allow users to see trends, gaps, and patterns in ontological properties of collections.
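    As a hedged sketch of the core idea, the following plots georeferenced documents on an interactive map; the dissertation's prototype was built with Google Maps, whereas this analogous example uses the folium library, and the sample records are invented.

```python
# Sketch: place georeferenced documents on an interactive map so that
# spatial trends, gaps and clusters in a collection become visible.
import folium

documents = [  # invented sample records with document coordinates
    {"title": "Harbour survey, 1901", "lat": 53.35, "lon": -6.26},
    {"title": "Coastal trade ledger", "lat": 51.90, "lon": -8.47},
]

doc_map = folium.Map(location=[52.6, -7.4], zoom_start=6)
for doc in documents:
    folium.Marker([doc["lat"], doc["lon"]], popup=doc["title"]).add_to(doc_map)
doc_map.save("collection_map.html")   # open in a browser to explore the collection
```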
