
    Crea.Blender: A Neural Network-Based Image Generation Game to Assess Creativity

    We present a pilot study on crea.blender, a novel co-creative game designed for large-scale, systematic assessment of distinct constructs of human creativity. Co-creative systems are systems in which humans and computers (often with Machine Learning) collaborate on a creative task. This human-computer collaboration raises questions about the relevance and level of human creativity and involvement in the process. We expand on and explore aspects of these questions in this pilot study. We observe participants play through three different play modes in crea.blender, each aligned with established creativity assessment methods. In these modes, players "blend" existing images into new images under varying constraints. Our study indicates that crea.blender provides a playful experience, affords players a sense of control over the interface, and elicits different types of player behavior, supporting further study of the tool for use in a scalable, playful creativity assessment.
    Comment: 4 pages, 6 figures, CHI Play

    Measuring Cognitive Abilities in the Wild: Validating a Population-Scale Game-Based Cognitive Assessment

    Rapid individual cognitive phenotyping holds the potential to revolutionize domains as wide-ranging as personalized learning, employment practices, and precision psychiatry. Going beyond limitations imposed by traditional lab-based experiments, new efforts have been underway towards greater ecological validity and participant diversity to capture the full range of individual differences in cognitive abilities and behaviors across the general population. Building on this, we developed Skill Lab, a novel game-based tool that simultaneously assesses a broad suite of cognitive abilities while providing an engaging narrative. Skill Lab consists of six mini-games as well as 14 established cognitive ability tasks. Using a popular citizen science platform (N = 10725), we conducted a comprehensive in-the-wild validation of a game-based cognitive assessment suite. Based on the game and validation task data, we constructed reliable models to simultaneously predict eight cognitive abilities from the users’ in-game behavior. Follow-up validation tests revealed that the models can discriminate nuances contained within each separate cognitive ability as well as capture a shared main factor of generalized cognitive ability. Our game-based measures are five times faster to complete than the equivalent task-based measures and replicate previous findings on the decline of certain cognitive abilities with age in our large cross-sectional population sample (N = 6369). Taken together, our results demonstrate the feasibility of rapid in-the-wild systematic assessment of cognitive abilities as a promising first step towards population-scale benchmarking and individualized mental health diagnostics.
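    The abstract reports that models were trained to predict eight cognitive abilities simultaneously from in-game behavior, but it does not describe the modelling pipeline. The sketch below is a minimal, hypothetical illustration of that idea, assuming a multi-output ridge regression fit on synthetic placeholder features; the model class, feature counts, and data are illustrative assumptions, not the authors' method.

```python
# Hypothetical illustration only: the abstract does not specify the model class,
# features, or data. This sketch assumes a multi-output ridge regression that
# maps in-game behavioural features to eight cognitive ability scores.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic placeholder data: 1,000 players, 40 in-game features,
# 8 ability scores (stand-ins for the validated task measures).
X = rng.normal(size=(1000, 40))
Y = X[:, :8] + rng.normal(scale=0.5, size=(1000, 8))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Ridge accepts a 2-D target, so a single fitted model predicts all
# eight abilities simultaneously from the same feature vector.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X_train, Y_train)
print("Mean held-out R^2 across the eight abilities:", model.score(X_test, Y_test))
```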