
    The Challenge of Believability in Video Games: Definitions, Agents Models and Imitation Learning

    In this paper, we address the problem of creating believable agents (virtual characters) in video games. We consider only one meaning of believability, "giving the feeling of being controlled by a player", and outline the problem of its evaluation. We present several models for agents in games which can produce believable behaviours, both from industry and research. For a high level of believability, learning, and especially imitation learning, seems to be the way to go. We give a quick overview of different approaches to making video game agents learn from players. To conclude, we propose a two-step method to develop new models for believable agents. First, we must find the criteria for believability for our application and define an evaluation method. Then the model and the learning algorithm can be designed.
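
    As a rough illustration of the imitation-learning direction mentioned in this abstract, the sketch below implements a trivial nearest-neighbour policy that repeats whatever a recorded player did in the most similar game state. It is not taken from the paper; the state features, actions, and traces are invented for the example.

        # Toy imitation of a player from recorded traces: the agent repeats the
        # action the player took in the most similar recorded state.
        # State features, actions, and traces are invented for this illustration.

        # Recorded (state_features, player_action) pairs, e.g. from play sessions;
        # a state here is (distance_to_enemy, own_health), both normalised to [0, 1].
        player_traces = [
            ((0.1, 0.9), "attack"),
            ((0.2, 0.2), "flee"),
            ((0.9, 0.8), "patrol"),
        ]

        def imitate(state):
            """Return the action the player took in the closest recorded state."""
            def squared_distance(recorded):
                return sum((a - b) ** 2 for a, b in zip(recorded, state))
            _, action = min(player_traces, key=lambda pair: squared_distance(pair[0]))
            return action

        print(imitate((0.15, 0.85)))  # -> "attack"

    A learned policy (for instance a small model trained on the same traces) would generalise better, but the contract is the same one the paper discusses: player traces in, an action-selection function out.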

    An evolutionary behavioral model for decision making

    For autonomous agents, the problem of deciding what to do next becomes increasingly complex when acting in unpredictable and dynamic environments while pursuing multiple and possibly conflicting goals. One of the most relevant behavior-based models that tries to deal with this problem is the one proposed by Maes, the Behavior Network model. This model proposes a set of behaviors as purposive perception-action units which are linked in a non-hierarchical network, and whose behavior selection process is orchestrated by spreading activation dynamics. In spite of being an adaptive model (in the sense of self-regulating its own behavior selection process), and despite the fact that several extensions have been proposed in order to improve the original model's adaptability, there is not yet a robust model that can adaptively self-modify both the topological structure and the functional purpose of the network as a result of the interaction between the agent and its environment. Thus, this work proffers an innovative hybrid model driven by gene expression programming, which makes two main contributions: (1) given an initial set of meaningless and unconnected units, the evolutionary mechanism is able to build well-defined and robust behavior networks which are adapted and specialized to the agent's concrete internal needs and goals; and (2) the same evolutionary mechanism is able to assemble quite complex structures such as deliberative plans (which operate in the long term) and problem-solving strategies.
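
    To make the spreading-activation idea concrete, here is a deliberately simplified sketch: activation flows into behaviors from the current state and from the goals, decays over time, and the most active executable behavior is selected. It omits the successor/predecessor/conflictor links of Maes' full model, and every name and constant below is invented for the example.

        # Simplified spreading-activation behavior selection, in the spirit of
        # Maes' Behavior Network model; all names and constants are illustrative.

        class Behavior:
            def __init__(self, name, preconditions, effects):
                self.name = name
                self.preconditions = set(preconditions)  # facts required for execution
                self.effects = set(effects)              # facts this behavior makes true
                self.activation = 0.0

        def spread_activation(behaviors, state, goals, from_state=0.2, from_goals=0.3, decay=0.9):
            """One update step: the state and the goals inject activation; old activation decays."""
            for b in behaviors:
                b.activation = decay * b.activation
                b.activation += from_state * len(b.preconditions & state)  # relevance to the situation
                b.activation += from_goals * len(b.effects & goals)        # relevance to the goals

        def select_behavior(behaviors, state, threshold=0.25):
            """Pick the most active behavior whose preconditions hold in the current state."""
            executable = [b for b in behaviors if b.preconditions <= state and b.activation >= threshold]
            return max(executable, key=lambda b: b.activation, default=None)

        # Tiny example: an agent that must become satiated but sees no food yet.
        behaviors = [
            Behavior("eat", preconditions={"food_visible"}, effects={"satiated"}),
            Behavior("explore", preconditions={"free_to_move"}, effects={"food_visible"}),
        ]
        state, goals = {"free_to_move"}, {"satiated"}
        for _ in range(5):
            spread_activation(behaviors, state, goals)
        chosen = select_behavior(behaviors, state)
        print(chosen.name if chosen else "no behavior above threshold")  # -> "explore"

    In Maes' full model, activation also flows between behaviors along successor, predecessor, and conflictor links derived from overlapping preconditions and effects; the simplified update above keeps only the injections from the state and the goals.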

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem: the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static obstacles (walls, stopped robots) and dynamic obstacles (moving robots). Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including cooperative reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution of cooperating robot teams.

    Logic programming for deliberative robotic task planning

    Over the last decade, the use of robots in production and daily life has increased. With increasingly complex tasks and interaction in different environments, including with humans, robots require a higher level of autonomy for efficient deliberation. Task planning is a key element of deliberation: it combines elementary operations into a structured plan that satisfies a prescribed goal, given specifications of the robot and the environment. In this manuscript, we present a survey of recent advances in the application of logic programming to the problem of task planning. Logic programming offers several advantages compared to other approaches, including greater expressivity and interpretability, which may aid in the development of safe and reliable robots. We analyze different planners and their suitability for specific robotic applications, based on expressivity of domain representation, computational efficiency, and software implementation. In this way, we support the robotic designer in choosing the best tool for their application.
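
    The kind of task-planning problem the survey targets can be illustrated with a toy precondition/effect domain and a plain search for a plan. The surveyed planners encode such domains as logic programs (for instance in ASP or Prolog); the Python sketch below only mimics that declarative style, and the domain, facts, and action names are invented for the example.

        # Toy declarative planning domain: each action is a rule with preconditions,
        # facts it adds, and facts it deletes. A plan is any action sequence that
        # reaches the goal; here it is found by breadth-first search.
        from collections import deque

        ACTIONS = {
            "pick_up":   {"pre": {"at_table", "hand_empty"}, "add": {"holding"},              "del": {"hand_empty"}},
            "go_to_box": {"pre": {"holding"},                "add": {"at_box"},               "del": {"at_table"}},
            "drop":      {"pre": {"holding", "at_box"},      "add": {"in_box", "hand_empty"}, "del": {"holding"}},
        }

        def plan(initial, goal):
            """Return a shortest action sequence from the initial state to the goal, or None."""
            frontier = deque([(frozenset(initial), [])])
            visited = {frozenset(initial)}
            while frontier:
                state, steps = frontier.popleft()
                if goal <= state:
                    return steps
                for name, action in ACTIONS.items():
                    if action["pre"] <= state:
                        successor = frozenset((state - action["del"]) | action["add"])
                        if successor not in visited:
                            visited.add(successor)
                            frontier.append((successor, steps + [name]))
            return None

        print(plan({"at_table", "hand_empty"}, {"in_box"}))
        # -> ['pick_up', 'go_to_box', 'drop']

    A logic-programming encoding would state the same preconditions and effects as rules and leave the search to the solver, which is where the expressivity and interpretability advantages discussed in the survey come from.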

    Higher Education Exchange: 2009

    This annual publication serves as a forum for new ideas and dialogue between scholars and the larger public. Essays explore ways that students, administrators, and faculty can initiate and sustain an ongoing conversation about the public life they share. The Higher Education Exchange is founded on a thought articulated by Thomas Jefferson in 1820: "I know no safe depository of the ultimate powers of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education." In the tradition of Jefferson, the Higher Education Exchange agrees that a central goal of higher education is to help make democracy possible by preparing citizens for public life. The Higher Education Exchange is part of a movement to strengthen higher education's democratic mission and foster a more democratic culture throughout American society. Working in this tradition, the Higher Education Exchange publishes interviews, case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies.

    Deliberative Evolution in Multi-Agent Systems

    Evolution of automated systems, in particular evolution of automated agents based on agent deliberation, is the topic of this paper. Evolution is not a merely material process; it requires interaction within and between individuals, their environments, and societies of agents. An architecture for an individual agent capable of (1) deliberation about the creation of new agents, and (2) (run-time) creation of a new agent on that basis, is presented. The agent architecture is based on an existing generic agent model, and includes explicit formal conceptual representations of both design structures of agents and (behavioural) properties of agents. The process of deliberation is based on an existing generic reasoning model of design. The architecture has been designed using the compositional development method DESIRE, and has been tested in a prototype implementation.