
    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to understanding: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

    A framework for Thinking about Distributed Cognition

    As is often the case when scientific or engineering fields emerge, new concepts are forged or old ones are adapted. When this happens, various arguments rage over what ultimately turn out to be conceptual misunderstandings. At that critical time, there is a need for an explicit reflection on the meaning of the concepts that define the field. In this position paper, we aim to provide a reasoned framework in which to think about various issues in the field of distributed cognition. We argue that both relevant concepts, distribution and cognition, must be understood as continuous. As it is used in the context of distributed cognition, the concept of distribution is essentially fuzzy, and we will link it to the notion of emergence of system-level properties. The concept of cognition must also be seen as fuzzy, but for a different reason: owing to its origin as an anthropocentric concept, no one has a clear handle on its meaning in a distributed setting. As the proposed framework forms a space, we then explore its geography and (re)visit famous landmarks.

    A Machine Learning Based Analytical Framework for Semantic Annotation Requirements

    The Semantic Web is an extension of the current web in which information is given well-defined meaning. The aim of the Semantic Web is to improve the quality and intelligence of the current web by turning its content into a machine-understandable form. Semantic-level information is therefore one of the cornerstones of the Semantic Web. The process of adding semantic metadata to web resources is called Semantic Annotation. There are many obstacles to Semantic Annotation, such as multilinguality, scalability, and issues related to the diversity and inconsistency of content across web pages. Because of the wide range of domains and the dynamic environments in which Semantic Annotation systems must operate, automating the annotation process is one of the significant challenges in this field. To overcome this problem, different machine learning approaches such as supervised learning, unsupervised learning, and more recent ones such as semi-supervised learning and active learning have been utilized. In this paper we present an inclusive layered classification of Semantic Annotation challenges and discuss the most important issues in this field. We also review and analyze machine learning applications for solving semantic annotation problems. To this end, the article closely studies and categorizes related research in order to arrive at a framework that maps machine learning techniques onto Semantic Annotation challenges and requirements.
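    As an illustration of how one of the surveyed learning approaches could look in practice, the following is a minimal, hypothetical sketch of pool-based active learning with uncertainty sampling for annotation, using scikit-learn; the toy texts, labels, and model choice are assumptions made for this example, not material from the paper.

```python
# Hypothetical sketch of pool-based active learning with uncertainty sampling
# for semantic annotation; texts, labels, and model are toy assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small labelled seed set and an unlabelled pool of web snippets.
seed_texts = ["Berlin is a city", "Python is a language",
              "Paris is a city", "Rust is a language"]
seed_labels = ["Place", "Technology", "Place", "Technology"]
pool_texts = ["Tokyo is a city", "Go is a language", "Madrid is a city"]

vec = TfidfVectorizer()
X_seed = vec.fit_transform(seed_texts)
X_pool = vec.transform(pool_texts)

clf = LogisticRegression(max_iter=1000).fit(X_seed, seed_labels)

# Uncertainty sampling: route the snippet the model is least sure about
# to a human annotator, then retrain with the new label.
probs = np.sort(clf.predict_proba(X_pool), axis=1)
margins = probs[:, -1] - probs[:, -2]
query_idx = int(np.argmin(margins))
print("Annotate next:", pool_texts[query_idx])
```

    Looping this query-annotate-retrain cycle is what lets an annotation system cover a large, heterogeneous pool of pages with relatively few manual labels.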

    Mapping Design Contributions in Information Systems Research: The Design Research Activity Framework

    Despite growing interest in design science research in information systems, our understanding about what constitutes a design contribution and the range of research activities that can produce design contributions remains limited. We propose the design research activity (DRA) framework for classifying design contributions based on the type of statements researchers use to express knowledge contributions and the researcher role with respect to the artifact. These dimensions combine to produce a DRA framework that contains four quadrants: construction, manipulation, deployment, and elucidation. We use the framework in two ways. First, we classify design contributions that the Journal of the Association for Information Systems (JAIS) published from 2007 to 2019 and show that the journal published a broad range of design research across all four quadrants. Second, we show how one can use our framework to analyze the maturity of design-oriented knowledge in a specific field as reflected in the degree of activity across the different quadrants. The DRA framework contributes by showing that design research encompasses both design science research and design-oriented behavioral research. The framework can help authors and reviewers assess research with design implications and help researchers position and understand design research as a journey through the four quadrants.

    Macro action selection with deep reinforcement learning in StarCraft

    StarCraft (SC) is one of the most popular and successful Real Time Strategy (RTS) games. In recent years, SC has also been widely accepted as a challenging testbed for AI research because of its enormous state space, partially observed information, multi-agent collaboration, and so on. With the help of the annual AIIDE and CIG competitions, a growing number of SC bots have been proposed and continuously improved. However, a large gap remains between the top-level bots and professional human players. One vital reason is that current SC bots mainly rely on predefined rules to select macro actions during their games. These rules are not scalable and efficient enough to cope with the enormous yet partially observed state space of the game. In this paper, we propose a deep reinforcement learning (DRL) framework to improve the selection of macro actions. Our framework combines Ape-X DQN with a Long Short-Term Memory (LSTM) network. We use this framework to build our bot, named LastOrder. Our evaluation, based on training against all bots from the AIIDE 2017 StarCraft AI competition set, shows that LastOrder achieves an 83% win rate, outperforming 26 of the 28 entrants.
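    To make the general recipe concrete, below is a minimal, hypothetical sketch of a recurrent Q-network for macro-action selection, combining a DQN-style value head with an LSTM over partially observed game features; the module names, dimensions, and PyTorch implementation are assumptions for illustration, not the authors' LastOrder code, and the distributed Ape-X training machinery is not shown.

```python
# Hypothetical recurrent Q-network for macro-action selection (PyTorch);
# dimensions and names are made up for this sketch.
import torch
import torch.nn as nn

class MacroQNet(nn.Module):
    def __init__(self, obs_dim: int, n_macro_actions: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_macro_actions)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) sequence of observed game features.
        z = self.encoder(obs_seq)
        out, state = self.lstm(z, state)
        return self.q_head(out), state  # Q-value per macro action at each step

net = MacroQNet(obs_dim=64, n_macro_actions=10)
q_values, _ = net(torch.randn(1, 8, 64))
action = q_values[0, -1].argmax().item()  # greedy macro action for the latest step
```

    In a full Ape-X-style setup, this greedy choice would be replaced by an exploration policy on distributed actors, with the network trained from prioritized replayed experience.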

    The implications of embodiment for behavior and cognition: animal and robotic case studies

    In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded in the physical system and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations. (Comment: book chapter in W. Tschacher & C. Bergomi, eds., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5.)
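    As a rough illustration of the 'forward model' idea mentioned above, here is a minimal, hypothetical PyTorch sketch that predicts the next sensory state from the current sensory state and motor command; the network shape and dimensions are assumptions for the example, not models from the chapter.

```python
# Hypothetical forward model: predict the next sensory state from the current
# sensory state and motor command; sizes and architecture are illustrative only.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    def __init__(self, sensory_dim: int, motor_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensory_dim + motor_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, sensory_dim),
        )

    def forward(self, sensory, motor):
        return self.net(torch.cat([sensory, motor], dim=-1))

model = ForwardModel(sensory_dim=12, motor_dim=3)
predicted_next = model(torch.randn(1, 12), torch.randn(1, 3))
# Training minimizes the error between predicted_next and the sensory state
# actually observed after executing the motor command.
```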

    Distinguishing Emergent and Sequential Processes by Learning Emergent Second-Order Features

    Emergent processes can roughly be defined as processes that self-arise from interactions without centralized control. People hold many robust misconceptions when explaining emergent process concepts such as natural selection and diffusion. This is because they lack a proper categorical representation of emergent processes and often misclassify these processes into the sequential-processes category with which they are more familiar. The two kinds of processes can be distinguished by their second-order features, which describe how one interaction relates to another interaction. This study investigated whether teaching emergent second-order features can help people more correctly categorize new processes; it also compared different instructional methods for teaching emergent second-order features. The prediction was that learning emergent features should help more than learning sequential features, because what most people lack is the representation of emergent processes. Results confirmed this by showing that participants who generated emergent features and received correct features as feedback were better at distinguishing the two kinds of processes than participants who rewrote second-order sequential features. Another finding was that participants who generated emergent features followed by reading correct features as feedback did better in distinguishing the processes than participants who only attempted to generate the emergent features without feedback. Finally, switching the order of instruction by teaching emergent features and then asking participants to explain the difference between emergent and sequential features resulted in an equivalent learning gain to the experimental group that received feedback. These results show that teaching emergent second-order features helps people categorize processes and demonstrate the most efficient way to teach them. (Masters Thesis, Psychology.)
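    For readers unfamiliar with the category, a toy simulation can make 'emergent' concrete: in the hypothetical sketch below, diffusion-like spreading arises from many uncoordinated random moves with no central control; the particle count and step rule are arbitrary choices for illustration and are not part of the study.

```python
# Hypothetical toy illustration of an emergent process: diffusion-like spreading
# arising from many independent random moves with no central control.
import random

positions = [0] * 1000            # 1000 particles start at the same spot
for _ in range(200):              # at each step, every particle moves on its own
    positions = [p + random.choice((-1, 1)) for p in positions]

spread = max(positions) - min(positions)
print("particles spread over a range of", spread)
# The macro-level spreading is not scripted anywhere; it emerges from the
# uncoordinated local moves, unlike a sequential process with ordered stages.
```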

    A Survey on Reinforcement Learning Security with Application to Autonomous Driving

    Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted either to prevent the reinforcement learning algorithm from learning an effective and reliable policy, or to induce the trained agent to make a wrong decision. The literature on the security of reinforcement learning is growing rapidly, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by considering a different perspective, but also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
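    As a concrete example of the test-time attacks such surveys categorize, the following is a minimal, hypothetical FGSM-style observation perturbation against a trained Q-network; the stand-in network, epsilon, and dimensions are assumptions for this sketch and do not come from the paper.

```python
# Hypothetical FGSM-style perturbation of an agent's observation at test time;
# the stand-in Q-network, dimensions, and epsilon are made-up for the sketch.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in policy
obs = torch.randn(1, 4, requires_grad=True)

q_values = q_net(obs)
clean_action = q_values.argmax(dim=1)

# Move the observation in the direction that lowers the value of the action the
# agent would have taken, nudging it toward a different (possibly unsafe) choice.
loss = q_values.gather(1, clean_action.unsqueeze(1)).sum()
loss.backward()
epsilon = 0.1
adversarial_obs = obs - epsilon * obs.grad.sign()

print("clean action:", clean_action.item(),
      "attacked action:", q_net(adversarial_obs).argmax(dim=1).item())
```

    Defenses such as adversarial training or input smoothing aim to keep the selected action stable under exactly this kind of bounded perturbation, which is why categorizing attacks by the system component they target matters for choosing a defense.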