6,228 research outputs found

    Evolutionary robotics and neuroscience

    No description supplied.

    Neuroethology, Computational

    Over the past decade, a number of neural network researchers have used the term computational neuroethology to describe a specific approach to neuroethology. Neuroethology is the study of the neural mechanisms underlying the generation of behavior in animals, and hence it lies at the intersection of neuroscience (the study of nervous systems) and ethology (the study of animal behavior); for an introduction to neuroethology, see Simmons and Young (1999). The definition of computational neuroethology is very similar, but is not quite so dependent on studying animals: animals just happen to be biological autonomous agents. But there are also non-biological autonomous agents, such as some types of robots and some types of simulated embodied agents operating in virtual worlds. In this context, autonomous agents are self-governing entities capable of operating (i.e., coordinating perception and action) for extended periods of time in environments that are complex, uncertain, and dynamic. Thus, computational neuroethology can be characterised as the attempt to analyze the computational principles underlying the generation of behavior in animals and in artificial autonomous agents.
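
    As an illustration of such a perception-action loop, here is a minimal sketch (purely illustrative, not drawn from any of the works listed here) of a simulated agent that steers toward a light source using two sensors and a direct sensor-to-motor mapping, in the spirit of Braitenberg's vehicles:

        import math

        def light(x, y):
            # Light intensity from a source at (5, 5), falling off with distance.
            return 1.0 / (1.0 + (x - 5.0) ** 2 + (y - 5.0) ** 2)

        class Agent:
            def __init__(self):
                self.x, self.y, self.h = 0.0, 0.0, 0.0  # position and heading

            def step(self, dt=0.1):
                # Sense: sample the light to the left and right of the body axis.
                s_l = light(self.x + 0.5 * math.cos(self.h + 0.6),
                            self.y + 0.5 * math.sin(self.h + 0.6))
                s_r = light(self.x + 0.5 * math.cos(self.h - 0.6),
                            self.y + 0.5 * math.sin(self.h - 0.6))
                # Act: turn toward the brighter side, slow down near the source.
                self.h += 20.0 * (s_l - s_r) * dt
                v = max(0.0, 1.0 - 2.0 * (s_l + s_r))
                self.x += v * math.cos(self.h) * dt
                self.y += v * math.sin(self.h) * dt

        agent = Agent()
        for _ in range(3000):
            agent.step()
        print(round(agent.x, 1), round(agent.y, 1))  # settles near the source at (5, 5)

    There is no internal world model here: coherent light-seeking behavior emerges from the coupling of body, sensors, and environment, which is precisely the kind of mechanism computational neuroethology studies.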

    Robotic control based on the human nervous system

    This article presents a model of a robotic control system inspired by the human neuroregulatory system. The model allows functional and organizational principles of biological systems to be applied to robotic systems. It also proposes appropriate technologies for implementing the model, in particular service-oriented technologies. To illustrate the proposal, we implemented a control system for mobile robots in dynamic open environments, demonstrating the viability of both the model and the technologies chosen for its implementation.
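
    The abstract leaves the service interfaces unspecified; the following minimal sketch (hypothetical names and values) shows the general idea of a service-oriented control decomposition, with a higher-level "regulatory" service modulating a low-level reflex service, much as higher neural centres modulate spinal reflexes:

        class ReflexService:
            # Low-level service: immediate obstacle avoidance, always available.
            def act(self, obstacle_distance, gain=1.0):
                if obstacle_distance < 0.5:  # metres
                    return {"turn_rate": gain * 1.0, "speed": 0.1}
                return {"turn_rate": 0.0, "speed": 0.5}

        class RegulatoryService:
            # Higher-level service: tunes reflex gains according to context,
            # analogous to descending modulation in the nervous system.
            def reflex_gain(self, context):
                return 2.0 if context == "crowded" else 1.0

        reflex, regulator = ReflexService(), RegulatoryService()
        gain = regulator.reflex_gain(context="crowded")
        print(reflex.act(obstacle_distance=0.3, gain=gain))
        # {'turn_rate': 2.0, 'speed': 0.1}

    Because each function is an independent service, the reflex keeps working even if the regulatory layer is slow or unavailable, mirroring the layered robustness of biological control.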

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approaches to them are diametrically opposed: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.
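
    One of the paradoxes named above can be made concrete with a toy calculation (illustrative numbers, not from the paper): a single joint driven by three tendons is over-actuated, so the choice of tendon tensions for a desired torque is under-determined, and a whole null space of tension patterns produces identical movement:

        import numpy as np

        R = np.array([[0.010, -0.008, 0.012]])  # tendon moment arms (m); signs give pull direction
        tau = np.array([0.05])                  # desired net joint torque (N*m)

        # Minimum-norm tension pattern achieving tau (one of infinitely many).
        t_min, *_ = np.linalg.lstsq(R, tau, rcond=None)

        # The null space of R: tension changes that produce zero net torque.
        null_basis = np.linalg.svd(R)[2][1:].T  # shape (3, 2)

        for a in (np.zeros(2), np.array([2.0, 1.0])):
            t = t_min + null_basis @ a
            print(np.round(t, 3), "-> torque", np.round(R @ t, 3))
        # Different tension patterns, identical torque. Note that the minimum-norm
        # solution asks tendon 2 to "push"; real tendons can only pull (t >= 0),
        # an extra constraint that shrinks but does not eliminate the redundancy.

    The mechanics alone therefore cannot dictate the solution; something else, whether optimization, synergies, or habit, must resolve the redundancy, which is exactly the tension between prescriptive and descriptive accounts discussed in the paper.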

    Can Science Explain Consciousness?

    For diverse reasons, the problem of phenomenal consciousness is persistently challenging. Mental terms are characteristically ambiguous, researchers have philosophical biases, secondary qualities are excluded from objective description, and philosophers love to argue. Adhering to a regime of efficient causes and third-person descriptions, science as it has been defined has no place for subjectivity or teleology. A solution to the “hard problem” of consciousness will require a radical approach: to take the point of view of the cognitive system itself. To facilitate this approach, a concept of agency is introduced along with a different understanding of intentionality. Following this approach reveals that the autopoietic cognitive system constructs phenomenality through acts of fiat, which underlie perceptual completion effects and “filling in” and, by implication, phenomenology in general. It creates phenomenality much as we create meaning in language, through the use of symbols that it assigns meaning in the context of an embodied evolutionary history that is the source of valuation upon which meaning depends. Phenomenality is a virtual representation to itself by an executive agent (the conscious self) tasked with monitoring the state of the organism and its environment, planning future action, and coordinating various sub-agencies. Consciousness is not epiphenomenal, but serves a function for higher organisms that is distinct from that of unconscious processing. While a strictly scientific solution to the hard problem is not possible for a science that excludes the subjectivity it seeks to explain, there is hope to at least psychologically bridge the explanatory gulf between mind and matter, and perhaps hope for a broader definition of science.

    Scientific requirements for an engineered model of consciousness

    The building of a non-natural conscious system requires more than the design of physical or virtual machines with intuitively conceived abilities, philosophically elucidated architecture, or hardware homologous to an animal’s brain. Human society might one day treat a type of robot or computing system as an artificial person. Yet that would not answer scientific questions about the machine’s consciousness or otherwise. Indeed, empirical tests for consciousness are impossible because no such entity is denoted within the theoretical structure of the science of mind, i.e. psychology. However, contemporary experimental psychology can identify whether a specific mental process is conscious in particular circumstances, by theory-based interpretation of the overt performance of human beings. Thus, if we are to build a conscious machine, the artificial systems must be used as a test-bed for theory developed from the existing science that distinguishes conscious from non-conscious causation in natural systems. Only such a rich and realistic account of the hypothetical processes underlying observed input/output relationships can establish whether or not an engineered system is a model of consciousness. It follows that any research project on machine consciousness needs a programme of psychological experiments on the demonstration systems, and that the programme should be designed to deliver a fully detailed scientific theory of the type of artificial mind being developed: a Psychology of that Machine.

    The dynamic neural field approach to cognitive robotics

    This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding, and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner’s action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping–placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work.
    European Commission fp6-IST2, project no. 00374
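
    For readers new to the framework: each such field evolves under the Amari equation, tau * du(x,t)/dt = -u(x,t) + h + S(x,t) + integral of w(x - x') f(u(x',t)) dx', with resting level h, external input S, interaction kernel w, and firing-rate function f. A minimal one-dimensional sketch (illustrative parameters, not those of the tutorial) shows the memory property such architectures exploit: a localized input creates an activation peak that sustains itself after the input is removed:

        import numpy as np

        n, tau_u, h, dt = 100, 10.0, -1.5, 1.0
        x = np.linspace(-10.0, 10.0, n)
        dx = x[1] - x[0]

        def w(d):
            # Lateral excitation with surround inhibition ("Mexican hat").
            return 3.0 * np.exp(-d ** 2 / 2.0) - 1.0 * np.exp(-d ** 2 / 8.0)

        W = w(x[:, None] - x[None, :])
        f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))  # sigmoidal firing rate

        u = np.full(n, h)                    # field starts at its resting level
        S = 4.0 * np.exp(-(x - 2.0) ** 2)    # localized transient input at x = 2

        for step in range(400):
            inp = S if step < 200 else 0.0   # input removed halfway through
            u += (dt / tau_u) * (-u + h + inp + (W @ f(u)) * dx)

        print(round(x[np.argmax(u)], 1))     # activation peak persists near x = 2

    The self-sustained peak acts as a working memory for the represented value (here a location), which is what lets the architecture keep track of inferred action goals after the evidence disappears.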

    SEAI: Social Emotional Artificial Intelligence Based on Damasio's Theory of Mind

    A socially intelligent robot must be capable of extracting meaningful information in real time from the social environment and reacting accordingly with coherent human-like behaviour. Moreover, it should be able to internalise this information, reason on it at a higher, more abstract level, build its own opinions independently, and then automatically bias its decision-making according to its unique experience. In recent decades, neuroscience research has highlighted the link between the evolution of such complex behaviour and the evolution of a certain level of consciousness, which cannot be separated from a body that feels emotions as discriminants and prompters. In order to develop cognitive systems for social robotics with greater human-likeness, we used an "understanding by building" approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The presented system is named SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modelling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm, in which a knowledge-based expert system handles the high-level symbolic reasoning while a more conventional reactive layer is responsible for low-level processing and control. The SEAI system is also enriched by a model that simulates Damasio's theory of consciousness and the theory of somatic markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations of the SEAI framework and their computational formalisation. Then a deeper technical description of the architecture is given, underlining the numerous parallels with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot's beliefs and decisions, is tested on a physical humanoid involved in Human-Robot Interaction (HRI).
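
    As a toy illustration of a somatic-marker mechanism inside a deliberative/reactive split (hypothetical structure and values; SEAI's actual rule base and interfaces are not given in the abstract):

        somatic_markers = {}  # option -> running average of past emotional outcomes

        def reactive_layer(percept):
            # Fast path: hard-wired responses bypass deliberation entirely.
            return "startle_and_orient" if percept == "sudden_loud_noise" else None

        def deliberative_layer(options):
            # Slow path: pick the option whose value, biased by its somatic
            # marker, is highest -- emotion as a discriminant for decisions.
            return max(options, key=lambda o: options[o] + somatic_markers.get(o, 0.0))

        def feel(option, outcome, rate=0.5):
            # Internalise the emotional outcome of an action, biasing future choices.
            prev = somatic_markers.get(option, 0.0)
            somatic_markers[option] = prev + rate * (outcome - prev)

        for _ in range(3):
            action = reactive_layer("person_waving") or deliberative_layer(
                {"greet": 0.5, "ignore": 0.6})
            feel(action, outcome=1.0 if action == "greet" else -1.0)
            print(action, somatic_markers)
        # "ignore" wins at first on raw value; its negative marker soon biases
        # the choice toward "greet" -- accumulated experience reshapes deliberation.
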

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.
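
    As a toy illustration of the compositionality at stake in challenges 1, 2, and 4 (all names hypothetical): action sequences are built from reusable motor primitives, and a lexicon grounds words in those primitives, so new instructions can be enacted by recombination:

        primitives = {
            "reach": ["extend_arm"],
            "grasp": ["close_hand"],
            "lift":  ["raise_arm"],
        }

        # A lexicon grounds each word in a schema over action primitives.
        lexicon = {"pick": ["reach", "grasp", "lift"], "touch": ["reach"]}

        def enact(word):
            # A composite action is the concatenation of its primitives' motor steps.
            return [step for prim in lexicon[word] for step in primitives[prim]]

        print(enact("pick"))   # ['extend_arm', 'close_hand', 'raise_arm']
        print(enact("touch"))  # ['extend_arm']
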