17 research outputs found

    Artificial intelligence approaches for the generation and assessment of believable human-like behaviour in virtual characters

    Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem today. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell humans and non-human agents apart. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game where both human players and non-player characters interact in complex virtual environments. Based on our past experience both in the BotPrize competition and in other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA-CRANIUM and SOAR cognitive architectures, and another based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition [1], and have shown a significant improvement in the humanness ratio. Additionally, we have subjected all these bots to both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour. MICINN - Ministerio de Ciencia e Innovación (FCT-13-7848
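
    As an illustration of the humanness ratio mentioned in the abstract, the sketch below computes, for each agent, the fraction of judgements in which it was classified as human. The data structures and the split into first-person and third-person verdicts are hypothetical, intended only to mirror the idea of a BotPrize-style judging protocol, not the competition's actual scoring code.

```python
from collections import defaultdict

def humanness_ratio(verdicts):
    """Fraction of judgements in which each agent was classified as human.

    `verdicts` is a hypothetical list of (agent_id, judged_as_human) pairs
    collected from the judges of a BotPrize-style test.
    """
    counts = defaultdict(lambda: [0, 0])  # agent_id -> [human votes, total votes]
    for agent_id, judged_as_human in verdicts:
        counts[agent_id][0] += int(judged_as_human)
        counts[agent_id][1] += 1
    return {agent: human / total for agent, (human, total) in counts.items()}

# Toy example: first-person judges interact with the agents, third-person
# judges only observe recorded matches (the split is an assumption here).
first_person = [("CCBot3", True), ("CCBot3", False), ("ADANN-bot", True)]
third_person = [("CCBot3", True), ("ADANN-bot", True), ("ADANN-bot", True)]

print("first-person:", humanness_ratio(first_person))
print("third-person:", humanness_ratio(third_person))
```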

    Ethics of brain emulations


    Validating the Creature Believability Scale for Videogames

    We present the validation of a scale to assess creature believability in videogames. We define creatures as all zoomorphic entities not qualifying as fundamentally human-like, whether or not they possess anthropomorphic features. The scale, derived from previous research, contains 26 items across 4 dimensions: Biological/Social Plausibility, Relationship with the Environment, Adaptation, and Expression. A Confirmatory Factor Analysis with 19 subjects yielded a model with 4 factors, a CFI of 0.795, and an RMSEA of 0.111. While not a good fit, this is very close to a mediocre fit, which is a potentially promising result. Further validation with more subjects is needed in the future.
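
    The fit indices quoted above (CFI and RMSEA) follow standard formulas based on the model and baseline chi-square statistics, so a small sketch of how they are derived may help; the numbers used below are placeholders for illustration, not the values from this study.

```python
import math

def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative Fit Index from model and baseline (null) chi-square statistics."""
    d_model = max(chi2_model - df_model, 0.0)
    d_baseline = max(chi2_baseline - df_baseline, 0.0)
    denom = max(d_model, d_baseline)
    return 1.0 if denom == 0.0 else 1.0 - d_model / denom

def rmsea(chi2_model, df_model, n):
    """Root Mean Square Error of Approximation for a sample of size n.

    One common convention uses (n - 1) in the denominator; some software uses n.
    """
    return math.sqrt(max(chi2_model / df_model - 1.0, 0.0) / (n - 1))

# Placeholder statistics (not taken from the paper), just to show the call pattern.
print(cfi(chi2_model=180.0, df_model=98, chi2_baseline=520.0, df_baseline=120))
print(rmsea(chi2_model=180.0, df_model=98, n=19))
```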

    Modality in the MGLAIR Architecture

    The MGLAIR cognitive agent architecture includes a general model of modality and support for concurrent multimodal perception and action. It provides afferent and efferent modalities as instantiable objects used in agent implementations. Each modality is defined by a set of properties that govern its use and its integration with reasoning and acting. This paper presents the MGLAIR model of modalities and mechanisms for their use in computational cognitive agents.
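
    The abstract describes modalities as instantiable objects whose properties govern their use and their integration with reasoning and acting. The sketch below shows one way such a modality object could look; the class, its attribute names, and the buffering behaviour are assumptions made for illustration, not taken from the actual MGLAIR implementation.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Modality:
    """Hypothetical modality object in the spirit of MGLAIR's model.

    Each modality is defined by properties that govern its use; the names
    below (direction, capacity, priority) are illustrative assumptions,
    not MGLAIR's actual property set.
    """
    name: str
    direction: str          # "afferent" (perception) or "efferent" (action)
    capacity: int = 8       # maximum number of buffered percepts/acts
    priority: int = 0       # relative importance when reasoning must arbitrate
    buffer: Queue = field(default_factory=Queue)

    def push(self, item):
        """Queue a percept (afferent) or an act (efferent) for processing."""
        if self.buffer.qsize() >= self.capacity:
            self.buffer.get()          # drop the oldest entry when full
        self.buffer.put(item)

    def pop(self):
        """Hand the next buffered item to the reasoning/acting layer."""
        return None if self.buffer.empty() else self.buffer.get()

# An agent could instantiate several concurrent modalities, for example:
vision = Modality(name="vision", direction="afferent", priority=2)
speech = Modality(name="speech", direction="efferent", priority=1)
vision.push({"object": "door", "distance": 3.0})
print(vision.pop())
```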