13 research outputs found

    Credit assignment in multiple goal embodied visuomotor behavior

    The intrinsic complexity of the brain can lead one to set aside issues related to its relationships with the body, but the field of embodied cognition emphasizes that understanding brain function at the system level requires one to address the role of the brain-body interface. It has only recently been appreciated that this interface performs huge amounts of computation that does not have to be repeated by the brain, and thus affords the brain great simplifications in its representations. In effect, the brain’s abstract states can refer to coded representations of the world created by the body. But even if the brain can communicate with the world through abstractions, the severe speed limitations of its neural circuitry mean that vast amounts of indexing must be performed during development so that appropriate behavioral responses can be accessed rapidly. One way this could happen would be for the brain to use a decomposition whereby behavioral primitives can be quickly accessed and combined. This realization motivates our study of independent sensorimotor task solvers, which we call modules, in directing behavior. The issue we focus on herein is how an embodied agent can learn to calibrate such individual visuomotor modules while pursuing multiple goals. The biologically plausible standard for module programming is reinforcement given during exploration of the environment. However, this formulation poses a substantial issue when sensorimotor modules are used in combination: the credit for their overall performance must be divided amongst them. We show that this problem can be solved and that diverse task combinations are beneficial to learning, not a complication, as usually assumed. Our simulations show that fast algorithms are available that allot credit correctly and are insensitive to measurement noise.
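    To make the credit-division problem concrete, here is a minimal sketch (an assumed illustration, not the authors' algorithm) of one plausible scheme: each module keeps a running estimate of the reward it earns, and on every trial the residual between the observed composite reward and the sum of the active modules' estimates is split equally among those modules. All names, the learning rate, and the noise level are illustrative assumptions.

        import random

        # Assumed sketch, not from the paper: apportioning a single composite
        # reward among whichever modules were active on a given trial.

        def update_estimates(estimates, active, total_reward, lr=0.1):
            # Split the prediction residual equally among the active modules.
            residual = total_reward - sum(estimates[i] for i in active)
            for i in active:
                estimates[i] += lr * residual / len(active)

        # Toy demonstration with three modules and hidden per-module rewards.
        true_rewards = [1.0, 2.0, 0.5]
        estimates = [0.0, 0.0, 0.0]
        rng = random.Random(0)

        for _ in range(5000):
            # Diverse task combinations: a random non-empty subset is active.
            active = [i for i in range(3) if rng.random() < 0.5] or [rng.randrange(3)]
            noise = rng.gauss(0.0, 0.1)  # measurement noise on the reward
            observed = sum(true_rewards[i] for i in active) + noise
            update_estimates(estimates, active, observed)

        print([round(e, 2) for e in estimates])  # settles near [1.0, 2.0, 0.5]

    Because different trials activate different subsets of modules, the per-module contributions become identifiable from the composite rewards alone, which matches the abstract's point that diverse task combinations help rather than hinder learning.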

    A Methodology for Requirements Analysis of AI Architecture Authoring Tools

    Authoring embodied, highly interactive virtual agents (IVAs) for robust experiences is an extremely difficult task. Current architectures for creating those agents are so complex that it takes enormous amounts of effort to craft even short experiences, with lengthier, polished experiences (e.g., Facade, Ada and Grace) often requiring person-years of effort by expert authors. However, each architecture is challenging in vastly different ways; it is impossible to propose a universal authoring solution without being too general to provide significant leverage. Instead, we present our analysis of the System-Specific Step (SSS) in the IVA authoring process, encapsulated in the case studies of three different architectures tackling a simple scenario. The case studies revealed distinctly different behaviors by each team in their SSS, resulting in the need for different authoring solutions. We iteratively proposed and discussed each team’s SSS Components and potential authoring support strategies to identify actionable software improvements. Our expectation is that other teams can perform similar analyses of their own systems’ SSS and make authoring improvements where they are most needed. Further, our case-study approach provides a methodology for detailed comparison of the authoring affordances of different IVA architectures, providing a lens for understanding the similarities, differences, and tradeoffs between architectures.

    Extended ramp goal module: Low-cost behaviour arbitration for real-time controllers based on biological models of dopamine cells


    Governance of Autonomous Agents on the Web: Challenges and Opportunities

    The study of autonomous agents has a long tradition in the Multiagent Systems and Semantic Web communities, with applications ranging from automating business processes to personal assistants. More recently, the Web of Things (WoT), which is an extension of the Internet of Things (IoT) with metadata expressed in Web standards, and its community provide further motivation for pushing the autonomous agents research agenda forward. Although representing and reasoning about norms, policies and preferences is crucial to ensuring that autonomous agents act in a manner that satisfies stakeholder requirements, normative concepts, policies and preferences have yet to be considered as first-class abstractions in Web-based multiagent systems. Towards this end, this paper motivates the need for alignment and joint research across the Multiagent Systems, Semantic Web, and WoT communities, introduces a conceptual framework for governance of autonomous agents on the Web, and identifies several research challenges and opportunities.

    Facilitating the Creation of Advanced Agents within NetLogo by Allowing Specification and Control Using the Behaviour Oriented Design Methodology

    NetLogo (Wilensky, 1999) is a very popular agent-based modelling platform that is commonly used in a wide range of scientific fields. Behaviour Oriented Design (Bryson, 2003a) is a development methodology for creating complex agents; it uses a form of action selection known as POSH (Parallel-Rooted, Ordered Slip-Stack Hierarchical) as an arbitrator to control the ‘external’ actions of an agent. This project aims to allow the creation of BOD agents within NetLogo by implementing POSH for NetLogo and providing an example of the design methodology. The final product of the project is BODNetLogo, a program which successfully allows the specification of BOD agents that can then be run inside a NetLogo simulation.
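    As a rough illustration of the arbitration style POSH embodies, the sketch below implements plain priority-ordered action selection: drives are scanned in priority order, and the first whose trigger fires gets to act. This is an assumed simplification that omits POSH's slip-stack, scheduling, and competence/action-pattern elements; it is not the BODNetLogo code, and all names are invented.

        from dataclasses import dataclass
        from typing import Callable

        # Assumed simplification of POSH-style arbitration: a flat, priority-
        # ordered list of drives, each with a trigger and an external action.

        @dataclass
        class Drive:
            name: str
            trigger: Callable[[dict], bool]  # releaser: when may this drive run?
            action: Callable[[dict], None]   # the 'external' action it performs

        def select_and_act(drives, state):
            # Scan in priority order; run the first drive whose trigger holds.
            for drive in drives:
                if drive.trigger(state):
                    drive.action(state)
                    return drive.name
            return "idle"

        # Example agent: fleeing outranks eating, which outranks wandering.
        drives = [
            Drive("flee", lambda s: s["predator_near"], lambda s: s.update(pos="safe")),
            Drive("eat", lambda s: s["hungry"], lambda s: s.update(hungry=False)),
            Drive("wander", lambda s: True, lambda s: None),
        ]

        state = {"predator_near": False, "hungry": True, "pos": "field"}
        print(select_and_act(drives, state))  # -> "eat"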

    Embodied object schemas for grounding language use

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 139-146).

    This thesis presents the Object Schema Model (OSM) for grounded language interaction. Dynamic representations of objects are used as the central point of coordination between actions, sensations, planning, and language use. Objects are modeled as object schemas -- sets of multimodal, object-directed behavior processes -- each of which can make predictions, take actions, and collate sensations in the modalities of touch, vision, and motor control. This process-centered view allows the system to respond continuously to real-world activity, while still viewing objects as stabilized representations for planning and speech interaction. The model can be described from four perspectives, each organizing and manipulating behavior processes in a different way. The first perspective views behavior processes like thread objects, running concurrently to carry out their respective functions. The second perspective organizes the behavior processes into object schemas. The third perspective organizes the behavior processes into plan hierarchies to coordinate actions. The fourth perspective creates new behavior processes in response to language input. Results from interactions with objects are used to update the object schemas, which then influence subsequent plans and actions. A continuous planning algorithm examines the current object schemas to choose between candidate processes according to a set of primary motivations, such as responding to collisions, exploring objects, and interacting with the human. An instance of the model has been implemented using a physical robotic manipulator. The implemented system is able to interpret basic speech acts that relate to perception of, and actions upon, objects in the robot's physical environment.

    by Kai-yuh Hsiao, Ph.D.
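    The process-centered organization described above can be gestured at in a few lines. The sketch below is an assumed illustration, not the OSM implementation: an object schema is a set of behavior processes, and one planning step picks the candidate process that serves the currently most urgent motivation. All class names, fields, and values are invented for the example.

        from dataclasses import dataclass, field

        # Assumed illustration of the object-schema idea; names are invented.

        @dataclass
        class BehaviorProcess:
            name: str
            modality: str    # e.g. "vision", "touch", "motor"
            motivation: str  # the primary motivation this process serves

            def step(self, percept):
                # A real process would predict, act, and collate sensations.
                print(f"{self.name} ({self.modality}) handles {percept}")

        @dataclass
        class ObjectSchema:
            label: str
            processes: list = field(default_factory=list)

        def plan_step(schemas, motivations, percept):
            # Choose the candidate process serving the most urgent motivation.
            candidates = [p for s in schemas for p in s.processes]
            best = max(candidates, key=lambda p: motivations.get(p.motivation, 0.0))
            best.step(percept)
            return best.name

        cup = ObjectSchema("cup", [
            BehaviorProcess("track-cup", "vision", "explore"),
            BehaviorProcess("grasp-cup", "motor", "interact"),
        ])
        motivations = {"collision": 0.0, "explore": 0.3, "interact": 0.8}
        print(plan_step([cup], motivations, {"cup_visible": True}))  # grasp-cup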