
    Between Sense and Sensibility: Declarative narrativisation of mental models as a basis and benchmark for visuo-spatial cognition and computation focussed collaborative cognitive systems

    What lies between 'sensing' and 'sensibility'? In other words, what kinds of cognitive processes mediate sensing capability and the formation of sensible impressions (e.g., abstractions, analogies, hypotheses and theory formation, beliefs and their revision, argument formation) in domain-specific problem solving, or in the regular activities of everyday living, working, and simply going around in the environment? How can knowledge and reasoning about such capabilities, as exhibited by humans in particular problem contexts, be used as a model and benchmark for the development of collaborative cognitive (interaction) systems concerned with human assistance, assurance, and empowerment? We pose these questions in the context of a range of assistive technologies concerned with visuo-spatial perception and cognition tasks, encompassing aspects such as commonsense, creativity, and the application of specialist domain knowledge and problem-solving thought processes. The assistive technologies considered include: (a) human activity interpretation; (b) high-level cognitive robotics; (c) people-centred creative design in domains such as architecture and digital media creation; and (d) qualitative analysis in geographic information systems. Computational narratives not only provide a rich cognitive basis, but also serve as a benchmark of functional performance in our development of computational cognitive assistance systems. We posit that computational narrativisation pertaining to space, actions, and change provides a useful model of visual and spatio-temporal thinking within a wide range of problem-solving tasks and application areas where collaborative cognitive systems could serve an assistive and empowering function.

    Comment: 5 pages, research statement summarising recent publication

    A Standardised Procedure for Evaluating Creative Systems: Computational Creativity Evaluation Based on What it is to be Creative

    Computational creativity is a flourishing research area, with a variety of creative systems being produced and developed. Creativity evaluation, however, has not kept pace with system development: systematic evaluation of the creativity of these systems is evidently lacking in the literature. This is partially due to difficulties in defining what it means for a computer to be creative; indeed, there is no consensus on this for human creativity, let alone its computational equivalent. This paper proposes a Standardised Procedure for Evaluating Creative Systems (SPECS). SPECS is a three-step process: stating what it means for a particular computational system to be creative, deriving standards from these statements, and testing the system against those standards. To assist this process, the paper offers a collection of key components of creativity, identified empirically from discussions of human and computational creativity. Using this approach, the SPECS methodology is demonstrated through a comparative case study evaluating computational creativity systems that improvise music.
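    The three-step structure of the procedure could be sketched, purely illustratively, as follows; the class, method names, and the idea of one test per definitional statement are assumptions made for the sake of example, not the paper's own formalism:

    ```python
    # Hypothetical sketch of a SPECS-style evaluation; names and structure
    # are illustrative assumptions, not taken from the SPECS paper itself.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class SpecsEvaluation:
        # Step 1: state what it means for this particular system to be
        # creative, e.g. as a set of key components of creativity.
        creativity_definition: List[str]
        # Step 2: derive a concrete test for each statement in the definition.
        tests: Dict[str, Callable] = field(default_factory=dict)

        def derive_test(self, statement: str, test_fn: Callable) -> None:
            # Each test must trace back to a stated component of creativity.
            assert statement in self.creativity_definition
            self.tests[statement] = test_fn

        # Step 3: perform the tests against the system under evaluation.
        def evaluate(self, system) -> Dict[str, bool]:
            return {stmt: fn(system) for stmt, fn in self.tests.items()}
    ```

    An evaluator would instantiate this with the components chosen in step 1, attach one test per component, and read off per-component results rather than a single overall creativity score.
    
    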

    The longer term value of creativity judgements in computational creativity

    During research to develop the Standardised Procedure for Evaluating Creative Systems (SPECS) methodology for evaluating the creativity of 'creative' systems, an evaluation case study was carried out in 2011. The case study investigated how we can make a 'snapshot' decision, in a short space of time, on the creativity of systems in various domains. The systems to be evaluated were presented at the International Computational Creativity Conference in 2011. Evaluation was performed by people whose domain expertise ranged from expert to novice, depending on the system. The SPECS methodology was used for evaluation, and was compared to two other creativity evaluation methods (Ritchie's criteria and Colton's Creative Tripod) and to results from surveying people's opinions on the creativity of the systems under investigation. Here, we revisit those results, considering them in the context of what these systems have contributed to computational creativity development. Five years on, we now have data on how influential these systems were within computational creativity, and to what extent the work in these systems has influenced further developments in computational creativity research. This paper investigates whether the evaluations of creativity of these systems have been helpful in predicting which systems would be more influential in computational creativity (as measured by paper citations and further development within later computational systems). While a direct correlation between evaluative results and longer-term impact is not discovered (and is perhaps too simplistic an aim, given the factors at play in determining research impact), some interesting alignments are noted between the 2011 results and the impact of papers five years on.
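    The question of whether snapshot creativity judgements predict later influence can be illustrated with a rank correlation between ratings and citation counts. All numbers below are invented, and Spearman's rho is just one plausible choice of measure, not the analysis the paper performs:

    ```python
    # Illustrative only: invented 2011 creativity ratings and later citation
    # counts for five hypothetical systems; Spearman rank correlation is one
    # simple way to ask whether early judgements track long-term influence.

    def ranks(values):
        # Rank positions (1 = smallest); assumes no ties, as in the toy data.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r

    def spearman(xs, ys):
        # Spearman's rho via the classic sum-of-squared-rank-differences formula.
        rx, ry = ranks(xs), ranks(ys)
        n = len(xs)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    ratings_2011 = [4.2, 3.1, 3.8, 2.5, 4.0]  # invented snapshot creativity scores
    citations_2016 = [120, 45, 30, 60, 80]    # invented five-year citation counts
    rho = spearman(ratings_2011, citations_2016)  # 0.6 for this toy data
    ```

    A moderate rho like this would correspond to the "interesting alignments but no direct correlation" outcome the abstract describes, though real research impact depends on many more factors than a single rating.
    
    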