
    A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative

    Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Evaluation, however, has not kept pace with system development: the literature shows an evident lack of systematic evaluation of the creativity of these systems. This is partly due to the difficulty of defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving tests based on these statements, and performing those tests. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. The SPECS methodology is then demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
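    As a rough illustration of how the three SPECS steps might be operationalised, the sketch below models them as a small data structure: statements of what creativity means for a given system, tests derived from each statement, and a run over a candidate system. The class name, test signature, and the `novelty_score` / `mean_listener_rating` interfaces are assumptions made for this example, not the authors' implementation.
```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch only: SPECS is described in prose in the paper; the names
# and scoring scheme below are assumptions, not the authors' code.

@dataclass
class SpecsEvaluation:
    # Step 1: statements of what creativity means for this particular system,
    # e.g. drawn from the paper's empirically identified components of creativity.
    statements: List[str]
    # Step 2: one or more tests derived from each statement.
    tests: Dict[str, List[Callable[[object], float]]] = field(default_factory=dict)

    def add_test(self, statement: str, test: Callable[[object], float]) -> None:
        self.tests.setdefault(statement, []).append(test)

    # Step 3: run the derived tests against a creative system and report scores.
    def evaluate(self, system: object) -> Dict[str, float]:
        results = {}
        for statement, tests in self.tests.items():
            scores = [test(system) for test in tests]
            results[statement] = sum(scores) / len(scores) if scores else float("nan")
        return results

# Example: setting up an evaluation for (hypothetical) music improvisation systems.
specs = SpecsEvaluation(statements=["Produces novel melodic material",
                                    "Output is valued by listeners"])
specs.add_test("Produces novel melodic material",
               lambda system: system.novelty_score())         # assumed interface
specs.add_test("Output is valued by listeners",
               lambda system: system.mean_listener_rating())  # assumed interface
```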

    Information and communication in a networked infosphere: a review of concepts and application in social branding

    This paper aims to contribute to a comprehensive review of the impact of information and communication, and their supporting technologies, on the current transformation of human life in the infosphere. The paper also offers an example of the power of new social approaches to the use of information and communication technologies to foster new working models in organizations, by presenting the main outcomes of a research project on social branding. A discussion of some trends in the future impact of new information and communication technologies on the infosphere is also included.

    Design Considerations for Real-Time Collaboration with Creative Artificial Intelligence

    Machines incorporating techniques from artificial intelligence and machine learning can work with human users on a moment-to-moment, real-time basis to generate creative outcomes, performances and artefacts. We define such systems as collaborative, creative AI systems, and in this article we consider the theoretical and practical issues in their design that are needed to support improvisation, performance and co-creation through real-time, sustained, moment-to-moment interaction. We begin by providing an overview of creative AI systems, examining strengths, opportunities and criticisms in order to draw out the key considerations when designing AI for human creative collaboration. We argue that the artistic goals and creative process should be first and foremost in any design. We then draw from a range of research on human collaboration and teamwork to examine features that support trust, cooperation, shared awareness and a shared information space. We highlight the importance of understanding the scope and perception of two-way communication between human and machine agents in order to support reflection on conflict, error, evaluation and flow. We conclude with a summary of the design challenges involved in building such systems to provoke, challenge and enhance human creative activity through their creative agency.
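    One possible shape for the moment-to-moment interaction and shared information space described above is sketched below. The article discusses these as design considerations in prose and does not prescribe an architecture; the loop structure, queue-based messaging, and the placeholder `respond` function here are assumptions for illustration only.
```python
import queue
import threading

# Minimal sketch of a real-time co-creation loop under assumed interfaces.

def respond(event, shared_state):
    # Placeholder generative step: echo a variation of the human's contribution.
    return {"source": "ai", "material": f"variation of {event['material']}"}

def collaboration_loop(human_events: "queue.Queue", output: "queue.Queue",
                       stop: threading.Event) -> None:
    shared_state = []  # shared information space visible to both agents
    while not stop.is_set():
        try:
            event = human_events.get(timeout=0.1)   # moment-to-moment human input
        except queue.Empty:
            continue
        shared_state.append(event)
        reply = respond(event, shared_state)        # AI's creative contribution
        shared_state.append(reply)
        output.put(reply)                           # two-way communication back to the human
```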

    Evaluating Design Solutions Using Crowds

    Crowds can be used to generate and evaluate design solutions. To increase a crowdsourcing system's effectiveness, we propose and compare two evaluation methods: one using a five-point Likert-scale rating and the other prediction voting. Our results indicate that although the two evaluation methods correlate, they serve different goals: prediction voting focuses evaluators on identifying the very best solutions, whereas rating focuses evaluators on the entire range of solutions. Thus, prediction voting is appropriate when there are many poor-quality solutions that need to be filtered out, and rating is appropriate when all ideas are reasonable and distinctions need to be made across all solutions. The crowd also prefers participating in prediction voting. These results have pragmatic implications, suggesting that evaluation methods should be chosen in relation to the distribution of quality present at each stage of crowdsourcing.
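    To make the comparison between the two evaluation methods concrete, the sketch below aggregates hypothetical Likert ratings and prediction votes for a handful of solutions and measures their rank agreement with a Spearman correlation. The data, the vote format, and the choice of correlation measure are assumptions for demonstration; the paper itself reports the comparison in prose.
```python
from statistics import mean

# Illustrative sketch only: made-up crowd data for four candidate solutions.
likert = {            # solution id -> list of 1..5 ratings from the crowd
    "A": [5, 4, 5], "B": [3, 3, 4], "C": [2, 1, 2], "D": [4, 5, 4],
}
prediction_votes = {  # solution id -> number of evaluators predicting it will win
    "A": 7, "B": 1, "C": 0, "D": 5,
}

def ranks(scores):
    # Rank solutions from best (1) to worst, highest score first.
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {sol: r for r, sol in enumerate(ordered, start=1)}

rating_rank = ranks({s: mean(v) for s, v in likert.items()})
voting_rank = ranks(prediction_votes)

# Spearman rank correlation between the two evaluation methods (no ties assumed).
n = len(rating_rank)
d2 = sum((rating_rank[s] - voting_rank[s]) ** 2 for s in rating_rank)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(f"rank agreement between rating and prediction voting: rho = {rho:.2f}")
```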