
    Information and Meaning in Life, Humans and Robots

    Information and meaning exist around us and within ourselves, and the same information can correspond to different meanings. This is true for humans and animals, and is becoming true for robots. We propose here an overview of this subject using a systemic tool for meaning generation that has already been published (C. Menant, Entropy 2003). The Meaning Generator System (MGS) is a system submitted to a constraint that generates meaningful information when it receives incident information that relates to the constraint. The content of the meaningful information is made explicit, and its function is to trigger an action that will be used to satisfy the constraint of the system. The MGS was introduced for the case of basic life submitted to a "stay alive" constraint. We then consider how the use of the MGS can be extended to more complex living systems, to humans, and to robots by introducing new types of constraints and integrating the MGS into higher-level systems. The application of the MGS to humans is partly based on a scenario for the evolution of body self-awareness toward self-consciousness that has already been presented (C. Menant, Biosemiotics 2003, and TSC 2004). The application of the MGS to robots is based on the definition of the MGS applied to robot functionality, taking into account the origins of the constraints. We conclude with a summary of this overview and with themes that can be linked to this systemic approach to meaning generation.
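
    To make the MGS notion concrete, here is a minimal, purely illustrative Python sketch (not from the cited papers): a system under a constraint receives incident information and, when that information relates to the constraint, generates meaningful information whose function is to trigger a constraint-satisfying action. All names (Constraint, MeaningGeneratorSystem, the example stimuli) are hypothetical.

        from dataclasses import dataclass
        from typing import Callable, Optional

        @dataclass
        class Constraint:
            name: str                            # e.g. "stay alive"
            is_relevant: Callable[[str], bool]   # does incident information relate to the constraint?
            action_for: Callable[[str], str]     # action expected to satisfy the constraint

        class MeaningGeneratorSystem:
            def __init__(self, constraint: Constraint):
                self.constraint = constraint

            def receive(self, incident_information: str) -> Optional[str]:
                """Return meaningful information (here, a triggered action) or None."""
                if self.constraint.is_relevant(incident_information):
                    # The meaning links the incident information to the constraint;
                    # its function is to trigger a constraint-satisfying action.
                    return self.constraint.action_for(incident_information)
                return None  # information unrelated to the constraint generates no meaning here

        # Hypothetical example: basic life under a "stay alive" constraint.
        stay_alive = Constraint(
            name="stay alive",
            is_relevant=lambda info: info in {"toxic gradient", "food gradient"},
            action_for=lambda info: "move away" if info == "toxic gradient" else "move toward",
        )
        mgs = MeaningGeneratorSystem(stay_alive)
        print(mgs.receive("toxic gradient"))  # -> move away
        print(mgs.receive("ambient light"))   # -> None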

    Evolution of Prehension Ability in an Anthropomorphic Neurorobotic Arm

    In this paper we show how a simulated anthropomorphic robotic arm controlled by an artificial neural network can develop effective reaching and grasping behaviour through a trial-and-error process in which the free parameters encode the control rules that regulate the fine-grained interaction between the robot and the environment, and variations of these parameters are retained or discarded on the basis of their effects on the global behaviour exhibited by the robot situated in the environment. The results demonstrate how the proposed methodology allows the robot to produce effective behaviours by exploiting the morphological properties of its body (i.e. its anthropomorphic shape, the elastic properties of its muscle-like actuators, and the compliance of its actuated joints) and the properties that arise from the physical interaction between the robot and the environment, mediated by appropriate control rules.
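
    As a rough illustration of the trial-and-error scheme summarized above (not the authors' implementation), the sketch below applies a simple retain-or-discard variation loop to the free parameters of a controller, scoring each variant with a fitness function; evaluate_reach_and_grasp is a hypothetical stand-in for a physics simulation of the arm.

        import numpy as np

        rng = np.random.default_rng(0)

        def evaluate_reach_and_grasp(weights: np.ndarray) -> float:
            """Hypothetical fitness: run the simulated arm with this controller
            and score how well it reaches and grasps the target (higher is better)."""
            target = np.linspace(-1.0, 1.0, weights.size)  # placeholder for a real simulation
            return -float(np.sum((weights - target) ** 2))

        n_params = 64                                         # free parameters of the neural controller
        parent = rng.normal(0.0, 0.1, n_params)
        parent_fitness = evaluate_reach_and_grasp(parent)

        for generation in range(1000):
            child = parent + rng.normal(0.0, 0.05, n_params)  # random variation of the parameters
            child_fitness = evaluate_reach_and_grasp(child)
            if child_fitness >= parent_fitness:               # retain variations that improve global behaviour
                parent, parent_fitness = child, child_fitness # ...and discard the rest

        print(f"best fitness found: {parent_fitness:.4f}")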

    Fast, invariant representation for human action in the visual system

    Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the brain regions involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discounts changes in 3D viewpoint relative to when it initially discriminates between actions. We measure the latency difference between invariant and non-invariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. Our results show no difference in decoding latency or temporal profile between invariant and non-invariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time, and that both form and motion information are crucial for fast, invariant action recognition.
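
    A schematic sketch of time-resolved decoding of the kind described above (simplified, with random placeholder data instead of real MEG recordings): a linear classifier is trained at each time point, and a crude onset latency is compared between within-viewpoint ("non-invariant") and across-viewpoint ("invariant") action decoding. The dimensions, labels, and onset criterion are all assumptions for illustration.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials, n_sensors, n_times = 100, 64, 60           # hypothetical MEG dimensions
        X = rng.normal(size=(n_trials, n_sensors, n_times))  # placeholder sensor data
        y_action = rng.integers(0, 5, n_trials)              # run / walk / jump / eat / drink
        viewpoint = rng.integers(0, 2, n_trials)             # two camera viewpoints

        def timecourse(X, y):
            """Cross-validated accuracy of a linear classifier at each time point."""
            return np.array([
                cross_val_score(LinearSVC(), X[:, :, t], y, cv=5).mean()
                for t in range(X.shape[2])
            ])

        def invariant_timecourse(X, y, view):
            """Train on one viewpoint, test on the other, at each time point."""
            train, test = view == 0, view == 1
            return np.array([
                LinearSVC().fit(X[train, :, t], y[train]).score(X[test, :, t], y[test])
                for t in range(X.shape[2])
            ])

        def onset(acc, chance=0.2):
            """Very crude onset estimate: first time point above chance."""
            above = np.flatnonzero(acc > chance)
            return int(above[0]) if above.size else None

        print("non-invariant onset:", onset(timecourse(X, y_action)))
        print("invariant onset:", onset(invariant_timecourse(X, y_action, viewpoint)))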

    Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

    Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Finally, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
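
    The Spectral Relevance Analysis step can be pictured with the following condensed sketch (not the released code): explanation heatmaps, e.g. from layer-wise relevance propagation, are computed for many predictions and then spectrally clustered, so that atypical decision strategies ("Clever Hans" predictors) surface as separate clusters. The heatmaps here are random placeholders with an artificial suspicious subgroup.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(2)

        # Placeholder: 300 downsized relevance heatmaps of 32x32 pixels each.
        heatmaps = rng.normal(size=(300, 32, 32))
        # Simulate a subgroup whose relevance concentrates on a border artefact.
        heatmaps[:60, :, :3] += 3.0

        flat = heatmaps.reshape(len(heatmaps), -1)
        labels = SpectralClustering(
            n_clusters=4, affinity="nearest_neighbors", n_neighbors=10, random_state=0
        ).fit_predict(flat)

        # Small or unusual clusters are candidates for manual inspection.
        for cluster_id, size in zip(*np.unique(labels, return_counts=True)):
            print(f"cluster {cluster_id}: {size} heatmaps")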

    'Obsessed with goals': functions and mechanisms of teleological interpretation of actions in humans

    Humans show a strong and early inclination to interpret observed behaviours of others as goal-directed actions. We identify two main epistemic functions that this ‘teleological obsession’ serves: on-line prediction and social learning. We show how teleological action interpretations can serve these functions by drawing on two kinds of inference (‘action-to-goal’ or ‘goal-to-action’), and argue that both types of teleological inference constitute inverse problems that can only be solved by further assumptions. We pinpoint the assumptions that the three currently proposed mechanisms of goal attribution (action-effect associations, simulation procedures, and teleological reasoning) imply, and contrast them with the functions they are supposed to fulfil. We argue that while action-effect associations and simulation procedures are generally well suited to serve on-line action monitoring and prediction, social learning of new means actions and artefact functions requires the inferential productivity of teleological reasoning.
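
    A toy numerical illustration (not from the paper) of why 'action-to-goal' interpretation is an inverse problem: several goals can explain the same observed movement, so an additional assumption, here that agents act efficiently, is needed to rank them. The goals, costs, and rationality parameter are invented for illustration.

        import numpy as np

        goals = ["reach cup", "reach phone", "stretch arm"]
        # Hypothetical cost of the observed movement under each candidate goal
        # (lower cost = the movement is a more efficient means to that goal).
        cost_of_observed_action = np.array([1.0, 2.5, 4.0])

        beta = 2.0                                            # assumed degree of rationality
        likelihood = np.exp(-beta * cost_of_observed_action)  # efficient actions are more probable
        prior = np.ones(len(goals)) / len(goals)              # uniform prior over goals
        posterior = likelihood * prior
        posterior /= posterior.sum()

        for goal, p in zip(goals, posterior):
            print(f"P({goal} | observed action) = {p:.2f}")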

    The Compositional Nature of Verb and Argument Representations in the Human Brain

    How does the human brain represent simple compositions of objects, actors, and actions? We had subjects view action sequence videos during neuroimaging (fMRI) sessions and identified lexical descriptions of those videos by decoding (SVM) the brain representations based only on their fMRI activation patterns. As a precursor to this result, we had demonstrated that we could reliably and with high probability decode action labels corresponding to one of six action videos (dig, walk, etc.), again while subjects viewed the action sequence during scanning (fMRI). This result was replicated at two different brain imaging sites with common protocols but different subjects, showing common brain areas, including areas known for episodic memory (PHG, MTL, high-level visual pathways, etc., i.e. the 'what' and 'where' systems, and TPJ, i.e. 'theory of mind'). Given these results, we were also able to successfully show a key aspect of language compositionality based on simultaneous decoding of object class and actor identity. Finally, combining these novel steps in 'brain reading' allowed us to accurately estimate brain representations supporting compositional decoding of a complex event composed of an actor, a verb, a direction, and an object.
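
    A minimal sketch of this style of multivariate decoding (random placeholder data, not the study's pipeline): separate linear classifiers read out different components of an event, here action label and object class, from the same simulated activation patterns, which is the sense of "compositional decoding" used above.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_trials, n_voxels = 240, 500              # hypothetical dimensions
        X = rng.normal(size=(n_trials, n_voxels))  # placeholder fMRI activation patterns
        y_action = rng.integers(0, 6, n_trials)    # e.g. dig, walk, carry, ...
        y_object = rng.integers(0, 3, n_trials)    # e.g. shovel, ball, none

        for name, labels in [("action", y_action), ("object", y_object)]:
            acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
            chance = 1.0 / len(np.unique(labels))
            print(f"{name} decoding accuracy: {acc:.2f} (chance = {chance:.2f})")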

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.