
    End-users publishing structured information on the web: an observational study of what, why, and how

    End-users are accustomed to filtering and browsing styled collections of data on professional web sites, but they have few ways to create and publish such information architectures for themselves. This paper presents a full-lifecycle analysis of the Exhibit framework - an end-user tool which provides such functionality - to understand the needs, capabilities, and practices of this class of users. We include interviews, as well as analysis of over 1,800 visualizations and 200,000 web interactions with these visualizations. Our analysis reveals important findings about this user population which generalize to the task of providing better end-user structured content publication tools. (Supported by the Intel Science & Technology Center for Big Data.)
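
    To make the publication model concrete, here is a minimal sketch, assuming Exhibit's JSON data format (a top-level "items" array whose per-record keys become browsable facets); the records and file name are illustrative, not taken from the study.

```python
import json

# Hedged sketch of an Exhibit-style dataset: a JSON document whose
# top-level "items" array holds one record per object. "label" and
# "type" are the core properties; other keys become filterable facets.
# The records and the output file name below are illustrative.
items = [
    {"label": "Lighthouse Survey", "type": "Dataset", "year": "2012"},
    {"label": "Harbor Census", "type": "Dataset", "year": "2013"},
]

with open("exhibit-data.json", "w") as f:
    json.dump({"items": items}, f, indent=2)
```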

    Prompting for Discovery: Flexible Sense-Making for AI Art-Making with Dreamsheets

    Design space exploration (DSE) for Text-to-Image (TTI) models entails navigating a vast, opaque space of possible image outputs through a commensurately vast input space of hyperparameters and prompt text. Minor adjustments to prompt input can surface unexpectedly disparate images. How can interfaces support end-users in reliably steering prompt-space explorations towards interesting results? Our design probe, DreamSheets, supports exploration strategies with LLM-based functions for assisted prompt construction and simultaneous display of generated results, hosted in a spreadsheet interface. The flexible layout and novel generative functions enable experimentation with user-defined workflows. Two studies, a preliminary lab study and a longitudinal study with five expert artists, revealed a set of strategies participants use to tackle the challenges of TTI design space exploration, and the interface features required to support them - like using text-generation to define local "axes" of exploration. We distill these insights into a UI mockup to guide future interfaces. (Comment: 13 pages, 14 figures, currently under review.)
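
    A minimal sketch of the "local axes" idea, assuming a spreadsheet-like layout: cross a base prompt with two axes of prompt fragments to get one candidate TTI prompt per cell. In a DreamSheets-style workflow the axis values would come from LLM-based functions; the hard-coded values here are stand-ins.

```python
def prompt_grid(base: str, axis_a: list[str], axis_b: list[str]) -> list[list[str]]:
    """Cross two 'axes' of prompt fragments with a base prompt,
    mirroring a spreadsheet: one axis per dimension, one cell per
    candidate text-to-image prompt."""
    return [[f"{base}, {a}, {b}" for b in axis_b] for a in axis_a]

# Hard-coded stand-ins for axis values an LLM function would generate.
styles = ["oil painting", "pixel art", "cyanotype"]
moods = ["serene", "ominous"]

for row in prompt_grid("a lighthouse at dusk", styles, moods):
    print(row)
```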

    The Impact of Visibility in Innovation Tournaments: Evidence From Field Experiments

    Contests have a long history of driving innovation, and web-based information technology has opened up new possibilities for managing tournaments. One such possibility is the visibility of entries – some web-based platforms now allow participants to observe others’ submissions while the contest is live. Seeing other entries could broaden or limit idea exploration, redirect or anchor searches, or inspire or stifle creativity. Using a unique data set from a series of field experiments, we examine whether entry visibility helps or hurts innovation contest outcomes and, in the process, also address the common problem of how to deal with opt-in participation. Our eight contests resulted in 665 contest entries, for which we have 11,380 quality ratings. Based on analysis of this data set and additional observational data, we provide evidence that entry visibility influences the outcome of tournaments via two pathways: (1) changing the likelihood of entry from an agent and (2) shifting the quality characteristics of entries. For the first, we show that entry visibility generates more entries by increasing the number of participants. For the second, we find that the effect of entry visibility depends on the setting. Seeing other entries results in more similar submissions early in a contest. For single-entry participants, entry quality “ratchets up” with the best entry previously submitted by other contestants if that entry is visible, while moving in the opposite direction if it is not. However, for participants who submit more than once, those with better prior submissions improve more when they cannot see the work of others. The variance in quality of entries also increases when entries are not visible, which is usually a desirable property of tournament submissions.

    AI-Augmented Brainwriting: Investigating the use of LLMs in group ideation

    The growing availability of generative AI technologies such as large language models (LLMs) has significant implications for creative work. This paper explores two aspects of integrating LLMs into the creative process: the divergence stage of idea generation, and the convergence stage of evaluation and selection of ideas. We devised a collaborative group-AI Brainwriting ideation framework, which incorporated an LLM as an enhancement to the group ideation process, and evaluated both the idea generation process and the resulting solution space. To assess the potential of using LLMs in the idea evaluation process, we designed an evaluation engine and compared it to idea ratings assigned by three expert and six novice evaluators. Our findings suggest that integrating LLMs into Brainwriting can enhance both the ideation process and its outcome. We also provide evidence that LLMs can support idea evaluation. We conclude by discussing implications for HCI education and practice. (Comment: Conditionally accepted to CHI24. 27 pages.)
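
    A hedged sketch of what such an LLM-based evaluation engine might look like: prompt a model to rate each idea against a rubric, sample several times to damp stochasticity, and average. The llm() stub and the rubric wording are assumptions for illustration, not the paper's implementation.

```python
import re
from statistics import mean

RUBRIC = ("Rate this idea from 1 (poor) to 5 (excellent) for "
          "originality. Reply with a single number.")

def llm(prompt: str) -> str:
    """Stand-in for a chat-model call; swap in any LLM client."""
    raise NotImplementedError  # assumption: some LLM backend exists

def score_idea(idea: str) -> float:
    """Ask the model for a 1-5 rating and parse the first digit."""
    reply = llm(f"{RUBRIC}\n\nIdea: {idea}")
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"unparseable rating: {reply!r}")
    return float(match.group())

def score_ideas(ideas: list[str], samples: int = 3) -> list[float]:
    # Average several samples per idea to damp sampling noise; the
    # averages can then be correlated against expert ratings.
    return [mean(score_idea(idea) for _ in range(samples)) for idea in ideas]
```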

    Toward collaborative ideation at scale: Leveraging ideas from others to generate more creative and diverse ideas

    Get PDF
    A growing number of large collaborative idea generation platforms promise that by generating ideas together, people can create better ideas than any would have alone. But how might these platforms best leverage the number and diversity of contributors to help each contributor generate even better ideas? Prior research suggests that seeing particularly creative or diverse ideas from others can inspire you, but few scalable mechanisms exist to assess diversity. We contribute a new scalable crowd-powered method for evaluating the diversity of sets of ideas. The method relies on similarity comparisons (is idea A more similar to B or C?) generated by non-experts to create an abstract spatial idea map. Our validation study reveals that human raters agree with the estimates of dissimilarity derived from our idea map as much as or more than they agree with each other. People who saw diverse sets of examples from our idea map generated more diverse ideas than those who saw randomly selected examples. Our results also corroborate findings from prior research showing that people presented with creative examples generated more creative ideas than those who saw a set of random examples. We see this work as a step toward building more effective online systems for supporting large-scale collective ideation.
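
    One plausible way to turn such an idea map into diverse example sets is greedy farthest-point sampling over the embedded ideas, sketched below. The 2-D embedding is assumed to be given (e.g., recovered from triplet comparisons by a method such as t-STE); the selection heuristic is illustrative, not necessarily the paper's method.

```python
import numpy as np

def diverse_subset(points: np.ndarray, k: int, seed: int = 0) -> list[int]:
    """Greedy farthest-point sampling: repeatedly add the idea farthest
    (in the idea-map embedding) from everything chosen so far."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    for _ in range(k - 1):
        # Distance from every idea to its nearest already-chosen idea.
        d = np.min(
            np.linalg.norm(points[:, None, :] - points[None, chosen, :], axis=-1),
            axis=1,
        )
        d[chosen] = -np.inf  # never re-pick an already chosen idea
        chosen.append(int(np.argmax(d)))
    return chosen

# Toy idea map: 50 ideas embedded in 2-D; pick 5 diverse exemplars.
pts = np.random.default_rng(1).random((50, 2))
print(diverse_subset(pts, 5))
```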