
    Computational intelligence based architecture for cognitive agents

    We discuss some limitations of reflexive agents to motivate the need for cognitive agents, and we propose a hierarchical, layered architecture for cognitive agents. Our examples often involve cognitive agents in highway traffic models. A cognitive agent is an agent capable of performing cognitive acts, i.e., a sequence of the following activities: “Perceiving” information from the environment and from other agents; “Reasoning” about this information using existing knowledge; “Judging” the obtained information using existing knowledge; “Responding” to other cognitive agents or to the external environment, as may be required; and “Learning”, i.e., changing (and, hopefully, augmenting) the existing knowledge if the newly acquired information allows it. We describe how computational intelligence techniques (e.g., fuzzy logic, neural networks, genetic algorithms) make it possible to mimic, to a certain extent, the cognitive acts performed by human beings. The order in which the cognitive acts take place is important, and so is the order in which the various computational intelligence techniques are applied. We believe that a hierarchical, layered model should be defined for generic cognitive agents, in a style akin to the hierarchical OSI 7-layer model used in data communication. We outline such a reference model in broad terms.
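
    The five cognitive acts and their fixed ordering suggest a layered control loop. The Python sketch below is a minimal reading of that loop; the CognitiveAgent class, its layer methods, and the 0.5 judging threshold and 0.1 learning increment are illustrative assumptions, not the paper's implementation, which only prescribes the sequence of the acts.

```python
class CognitiveAgent:
    def __init__(self):
        self.knowledge = {}  # existing knowledge: percept label -> confidence

    def perceive(self, environment):
        # Gather information from the environment and from other agents.
        return environment.get("signals", [])

    def reason(self, percepts):
        # Interpret percepts against existing knowledge (the paper suggests
        # techniques such as fuzzy logic or neural networks at this layer).
        return [(p, self.knowledge.get(p, 0.0)) for p in percepts]

    def judge(self, interpretations):
        # Keep only interpretations judged sufficiently reliable.
        return [p for p, confidence in interpretations if confidence >= 0.5]

    def respond(self, judged):
        # Act on the environment or message other cognitive agents.
        return {"actions": judged}

    def learn(self, percepts):
        # Update (and hopefully augment) knowledge with the new information.
        for p in percepts:
            self.knowledge[p] = min(1.0, self.knowledge.get(p, 0.0) + 0.1)

    def cognitive_act(self, environment):
        # The fixed ordering of the five activities is what the paper stresses.
        percepts = self.perceive(environment)
        response = self.respond(self.judge(self.reason(percepts)))
        self.learn(percepts)
        return response

agent = CognitiveAgent()
for _ in range(6):  # repeated exposure builds confidence in the percept
    result = agent.cognitive_act({"signals": ["brake_lights"]})
print(result)  # {'actions': ['brake_lights']} once confidence reaches 0.5
```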

    From Manifesta to Krypta: The Relevance of Categories for Trusting Others

    In this paper we consider the special abilities agents need to assess trust based on inference and reasoning. We analyze the case in which it is possible to infer trust towards unknown counterparts by reasoning over abstract classes or categories of agents shaped in a concrete application domain. We present a scenario of interacting agents that provides a computational model implementing different strategies to assess trust. Assuming a medical domain, categories, covering both competencies and dispositions of possible trustees, are exploited to infer trust towards possibly unknown counterparts. The proposed approach to the cognitive assessment of trust relies on agents' abilities to analyze heterogeneous information sources along different dimensions. Trust is inferred from specific observable properties (Manifesta), namely explicitly readable signals indicating the internal features (Krypta) that regulate agents' behavior and effectiveness on specific tasks. Simulation experiments evaluate the performance of trusting agents adopting different strategies to delegate tasks to possibly unknown trustees, and the experimental results show the relevance of this kind of cognitive ability in open multi-agent systems.
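
    As a toy illustration of the Manifesta/Krypta idea, the sketch below infers trust towards a stranger by mapping readable signals to a category and reading off the category's expected competence for a task. The category profiles, task names, and default value are invented for the example; the paper's medical-domain model is considerably richer.

```python
# Hypothetical category profiles: expected competence per task. Categories,
# tasks, and numbers are invented; the paper models both competencies and
# dispositions of possible trustees.
CATEGORIES = {
    "cardiologist": {"diagnose_heart": 0.9, "prescribe": 0.7},
    "nurse": {"diagnose_heart": 0.4, "prescribe": 0.5},
}

def infer_category(manifesta):
    # Map explicitly readable signals (Manifesta) to a known category.
    for signal in manifesta:
        if signal in CATEGORIES:
            return signal
    return None

def trust(manifesta, task, default=0.1):
    # Trust towards an unknown trustee is read off category knowledge
    # (a proxy for its Krypta) instead of direct experience or reputation.
    category = infer_category(manifesta)
    if category is None:
        return default
    return CATEGORIES[category].get(task, default)

print(trust(["white_coat", "cardiologist"], "diagnose_heart"))  # 0.9
print(trust(["white_coat"], "diagnose_heart"))                  # 0.1
```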

    Reasoning with Categories for Trusting Strangers: a Cognitive Architecture

    A crucial issue for agents in open systems is the ability to filter information sources in order to build an image of their counterparts, upon which a subjective evaluation of trust, as a promoter of interactions, can be based. While typical solutions discern relevant information sources by relying on previous experience or reputational images, this work presents an alternative approach based on the cognitive ability to: (i) analyze heterogeneous information sources along different dimensions; (ii) ascribe qualities to unknown counterparts by reasoning over abstract classes or categories; and (iii) learn a series of emergent relationships between particular properties observable on other agents and their effective abilities to fulfill tasks. A computational architecture is presented that allows cognitive agents to dynamically assess trust based on a limited set of observable properties, namely explicitly readable signals (Manifesta) from which it is possible to infer hidden properties and capabilities (Krypta), which in turn regulate agents' behavior in concrete work environments. An experimental evaluation discusses the effectiveness of trustor agents adopting different strategies to delegate tasks based on categorization.
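
    Point (iii), learning the relationship between observable properties and task performance, could be approximated as simply as keeping a running success rate per observed signal, as the sketch below does. This stand-in (the CategoryLearner class, its 0.5 prior for unseen signals, and the averaging rule) is our assumption, not the architecture the paper presents.

```python
from collections import defaultdict

class CategoryLearner:
    """Running success statistics per observable signal (illustrative)."""

    def __init__(self):
        self.stats = defaultdict(lambda: [0, 0])  # signal -> [successes, trials]

    def update(self, manifesta, succeeded):
        # After each delegated task, credit or blame every observed signal.
        for signal in manifesta:
            self.stats[signal][0] += int(succeeded)
            self.stats[signal][1] += 1

    def predict(self, manifesta):
        # Predicted ability of a stranger: mean success rate of its signals.
        rates = []
        for signal in manifesta:
            successes, trials = self.stats[signal]
            if trials:
                rates.append(successes / trials)
        return sum(rates) / len(rates) if rates else 0.5  # prior for unknowns

learner = CategoryLearner()
learner.update(["board_certified", "young"], succeeded=True)
learner.update(["board_certified"], succeeded=True)
learner.update(["young"], succeeded=False)
print(learner.predict(["board_certified", "young"]))  # 0.75
```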

    Signatures of the neurocognitive basis of culture wars found in moral psychology data

    Moral Foundations Theory (MFT) states that groups of different observers may rely on partially dissimilar sets of moral foundations, thereby reaching different moral valuations on a subset of issues. With the introduction of functional imaging techniques, a wealth of new data on neurocognitive processes has rapidly accumulated, and it has become increasingly evident that this type of data should provide an adequate basis for modeling social systems. In particular, it has been shown that there is a spectrum of cognitive styles with respect to the differential handling of novel or corroborating information, and that this spectrum is correlated with political affiliation. Here we use methods of statistical mechanics to characterize the collective behavior of an agent-based model society whose inter-individual interactions, due to information exchange in the form of opinions, are in qualitative agreement with neurocognitive and psychological data. The main conclusion derived from the model is that diversity in cognitive strategies yields different statistics for the sets of moral foundations, and that these arise from the cognitive interactions of the agents. Thus a simple interacting-agent model, whose interactions accord with empirical data about moral dynamics, presents statistical signatures consistent with those that characterize the opinions of conservatives and liberals. The greater the difference in the treatment of novel and corroborating information, the more closely agents correspond to liberals.
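
    A toy version of the mechanism can make the claim concrete: agents repeatedly exchange opinions and weight disagreeing (novel) information differently from agreeing (corroborating) information, with the weight gap standing in for cognitive style. The update rule, the delta parameter, and all constants below are our illustrative choices, not the statistical-mechanics model of the paper.

```python
import random

def step(opinions, delta, lr=0.1):
    # delta > 0: novel (disagreeing) information weighted more, as reported
    # for liberal-like styles; delta < 0: corroborating weighted more.
    i, j = random.sample(range(len(opinions)), 2)
    agree = opinions[i] * opinions[j] > 0
    weight = lr * (1 - delta if agree else 1 + delta)
    opinions[i] += weight * (opinions[j] - opinions[i])

random.seed(0)
for delta, label in [(0.8, "novelty-seeking"), (-0.8, "novelty-averse")]:
    opinions = [random.uniform(-1, 1) for _ in range(100)]
    for _ in range(20000):
        step(opinions, delta)
    # Novelty-seeking populations converge faster; novelty-averse ones
    # retain a wider spread of opinions for longer.
    print(f"{label}: final opinion spread = {max(opinions) - min(opinions):.2f}")
```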

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

    On a Valid Computational Method for Updating Cognitive Environments

    Because our cognitive abilities are not limitless, we cannot search for every relevant piece of information when acquiring the right cognitive environment. Cognitive agents must nevertheless build proper cognitive environments from partial information structures. To achieve this, we need to keep updating our cognitive environments through experience in the real world. This paper proposes a valid algorithm for updating cognitive environments on the basis of an estimated degree of belief, and discusses the qualitative features of our innate cognitive ability.
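
    One minimal reading of such an update algorithm: keep a degree of belief for each proposition in the cognitive environment, shift it towards 1 or 0 as new evidence confirms or refutes it, and drop propositions whose belief falls below a relevance threshold. The update rate, threshold, and starting prior in the sketch below are assumptions of this illustration; the paper's actual algorithm may differ.

```python
def update_environment(env, evidence, rate=0.3, threshold=0.2):
    """env: proposition -> degree of belief in [0, 1].
    evidence: proposition -> True (confirmed) or False (refuted)."""
    for prop, confirmed in evidence.items():
        prior = env.get(prop, 0.5)  # unseen propositions start undecided
        target = 1.0 if confirmed else 0.0
        env[prop] = prior + rate * (target - prior)  # move belief toward target
    # Keep only propositions still believed strongly enough to stay relevant.
    return {p: b for p, b in env.items() if b >= threshold}

env = {"road_is_dry": 0.9}
env = update_environment(env, {"road_is_dry": False, "rain_started": True})
print(env)  # road_is_dry decays toward 0 (0.63); rain_started enters at 0.65
```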

    A cognitive hierarchy model of learning in networks

    This paper proposes a method for estimating a hierarchical model of bounded rationality in games of learning in networks. A cognitive hierarchy comprises a set of cognitive types whose behavior ranges from random to substantively rational. Specifically, each cognitive type in the model corresponds to the number of periods in which economic agents process new information. Using experimental data, we estimate type distributions in a variety of task environments and show how estimated distributions depend on the structural properties of the environments. The estimation results identify significant levels of behavioral heterogeneity in the experimental data and overall confirm comparative static conjectures on type distributions across task environments. Surprisingly, the model replicates the aggregate patterns of the behavior in the data quite well. Finally, we found that the dominant type in the data is closely related to Bayes-rational behavior.
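
    The type definition, a type-k agent processes new information for k periods, can be illustrated with a simple signal-extraction task in which higher types approach Bayes-rational behavior. The binary-state setup, the 0.7 signal accuracy, and the tie-breaking rule below are this sketch's assumptions, not the experimental design of the paper.

```python
import math
import random

def log_posterior_odds(signals, accuracy=0.7):
    # Bayesian log-odds that the hidden state is 1, given processed signals.
    llr = math.log(accuracy / (1 - accuracy))
    return sum(llr if s == 1 else -llr for s in signals)

def type_k_guess(signals, k):
    # A type-k agent processes only the first k periods of information;
    # type 0 behaves randomly, high types approach Bayes-rational play.
    if k == 0:
        return random.choice([0, 1])
    return 1 if log_posterior_odds(signals[:k]) > 0 else 0

random.seed(1)
trials, periods, accuracy = 5000, 8, 0.7
for k in [0, 1, 4, 8]:
    correct = 0
    for _ in range(trials):
        state = random.choice([0, 1])
        # Each period's signal matches the hidden state with prob. 0.7.
        signals = [s if random.random() < accuracy else 1 - s
                   for s in [state] * periods]
        correct += type_k_guess(signals, k) == state
    print(f"type {k}: accuracy {correct / trials:.2f}")  # rises with k
```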