    How Does Embodiment Affect the Human Perception of Computational Creativity? An Experimental Study Framework

    Which factors influence the human assessment of creativity exhibited by a computational system is a core question of computational creativity (CC) research. Recently, the system’s embodiment has been put forward as such a factor, but empirical studies of its effect are lacking. To this end, we propose an experimental framework which isolates the effect of embodiment on the perception of creativity from its effect on creativity per se. We manipulate not only the system’s embodiment but also the human perception of creativity, which we factorise into the assessment of creativity, and the perceptual evidence that feeds into that assessment. We motivate the core framework with embodiment and perceptual evidence as independent and the creative process as a controlled variable, and we provide recommendations on measuring the assessment of creativity as a dependent variable. We propose three types of perceptual evidence with respect to the creative system, the creative process and the creative artefact, borrowing from the popular four perspectives on creativity. We hope the framework will inspire and guide others to study the human perception of embodied CC in a principled manner. Peer reviewed.
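
    The factorial structure the abstract describes can be enumerated directly: embodiment and perceptual evidence are crossed as independent variables while the creative process is held constant. A minimal sketch, assuming two illustrative embodiment levels (the paper does not prescribe these; the three evidence types are from the abstract):

    ```python
    # Sketch of the crossed experimental conditions implied by the framework.
    # Embodiment levels here are illustrative assumptions; the evidence types
    # (system, process, artefact) come from the abstract.
    from itertools import product

    embodiment = ["embodied", "non-embodied"]        # independent variable (assumed levels)
    evidence = ["system", "process", "artefact"]     # perceptual-evidence types
    conditions = list(product(embodiment, evidence)) # creative process held fixed

    print(len(conditions))  # 6 experimental conditions
    ```

    Each tuple in `conditions` would correspond to one cell of the between- or within-subjects design, with assessed creativity measured as the dependent variable.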

    Crowd score: a method for the evaluation of jokes using Large Language Model AI voters as judges

    This paper presents the Crowd Score, a novel method to assess the funniness of jokes using large language models (LLMs) as AI judges. Our method relies on inducing different personalities into the LLM and aggregating the votes of the AI judges into a single score to rate jokes. We validate the votes using an auditing technique that checks, via the LLM, whether the explanation for a particular vote is reasonable. We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating. Our results show that few-shot prompting leads to better results than zero-shot for the voting question. Personality induction showed that aggressive and self-defeating voters are significantly more inclined than the affiliative and self-enhancing voters to find jokes funny within a set of aggressive/self-defeating jokes. The Crowd Score follows the same trend as human judges by assigning higher scores to jokes that human judges also consider funnier. We believe that our methodology could be applied to other creative domains such as stories, poetry and slogans. It could help the CC community adopt a flexible and accurate standard approach for comparing different work under a common metric, and, by minimising human participation in assessing creative artefacts, it could accelerate the prototyping of creative artefacts and reduce the cost of hiring human participants to rate them.
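
    The aggregation step described above can be sketched in a few lines: each persona-conditioned judge casts a binary funny/not-funny vote, and the votes are combined into a single score. This is a hypothetical illustration, not the paper's implementation; `vote` is a stand-in for a real LLM call, and the toy voter below only mimics the reported tendency of aggressive/self-defeating personas to be more permissive:

    ```python
    # Hypothetical sketch of Crowd Score aggregation. `vote` stands in for a
    # persona-conditioned LLM judge; names and signatures are illustrative.
    from typing import Callable

    PERSONAS = ["affiliative", "self-enhancing", "aggressive", "self-defeating"]

    def crowd_score(joke: str, vote: Callable[[str, str], bool]) -> float:
        """Fraction of persona-conditioned judges that vote the joke funny."""
        votes = [vote(persona, joke) for persona in PERSONAS]
        return sum(votes) / len(votes)

    # Toy stand-in voter: aggressive/self-defeating judges vote funny more often.
    def toy_vote(persona: str, joke: str) -> bool:
        return persona in ("aggressive", "self-defeating") or "pun" in joke

    print(crowd_score("A pun about ducks", toy_vote))    # 1.0
    print(crowd_score("A plain observation", toy_vote))  # 0.5
    ```

    In a real setup, `vote` would prompt the LLM with the persona description (few-shot, per the paper's finding) and parse the binary answer; the auditing step would then ask the LLM to justify each vote.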

    Co-creativity and perceptions of computational agents in co-creativity

    How are computers typically perceived in co-creativity scenarios? And how does this affect how we evaluate computational creativity research systems that use co-creativity? Recent research within computational creativity considers how to attribute creativity to computational agents within co-creative scenarios. Human evaluation forms a key part of such attribution or evaluation of creative contribution. The use of human opinion to evaluate computational creativity, however, runs the risk of being distorted by conscious or subconscious bias. The case study in this paper shows that people are significantly less confident at evaluating the creativity of a whole co-creative system involving computational and human participants, compared to the (already tricky) task of evaluating individual creative agents in isolation. To progress co-creativity research, we should combine the use of co-creative computational models with the findings of computational creativity evaluation research into what contributes to software creativity.

    TwitSong: A current events computer poet and the thorny problem of assessment.

    This thesis is driven by the question of how computers can generate poetry, and how that poetry can be evaluated. We survey existing work on computer-generated poetry and interdisciplinary work on how to evaluate this type of computer-generated creative product. We perform experiments illuminating issues in evaluation which are specific to poetry. Finally, we produce and evaluate three versions of our own generative poetry system, TwitSong, which generates poetry based on the news, evaluates the desired qualities of the lines that it chooses, and, in its final form, can make targeted and goal-directed edits to its own work. While TwitSong does not turn out to produce poetry comparable to that of a human, it represents an advancement on the state of the art in its genre of computer-generated poetry, particularly in its ability to edit for qualities like topicality and emotion.
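
    The targeted, goal-directed editing described above amounts to a score-and-replace loop over lines. The sketch below is a hypothetical illustration in that spirit, not the thesis's actual system: the scoring function is a toy stand-in for TwitSong's quality evaluation, and the candidate alternatives are assumed to come from the generator:

    ```python
    # Hypothetical sketch of a score-and-edit loop: each line of a draft poem is
    # compared against candidate alternatives, and the higher-scoring line wins.
    # The score function is a toy stand-in, not the thesis's quality model.

    def edit_poem(lines, alternatives, score):
        """Greedy targeted editing: keep the highest-scoring line at each position."""
        edited = []
        for line, candidates in zip(lines, alternatives):
            best = max([line, *candidates], key=score)
            edited.append(best)
        return edited

    # Toy score rewarding a topical news word (here: "storm").
    score = lambda line: line.count("storm")

    draft = ["the quiet evening falls", "lights across the bay"]
    alts = [["the storm-bent evening falls"], ["storm lights across the bay"]]
    print(edit_poem(draft, alts, score))
    ```

    A real scorer would measure qualities like topicality and emotion rather than a single keyword, but the control flow of accepting an edit only when it improves the target score is the same.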