
    The RobotCub Approach to the Development of Cognition

    This paper elaborates on the workplan of RobotCub, an initiative in embodied cognition. Our goal here is to provide background and to motivate our long-term plan of empirical research, spanning brain and robotic sciences, that follows the principles of epigenetic robotics.

    Robust visual servoing in 3d reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by strong robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments demonstrate the applicability of the approach in real 3-D tasks.
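
    A minimal sketch of the kind of image-space correction loop the abstract describes, assuming pixel positions for the end-effector and target are already available (in the paper these come from real-time binocular optical flow; all names and the gain value here are illustrative assumptions, not the paper's controller):

```python
import numpy as np

def servo_step(ee_px, target_px, gain=0.5):
    # ee_px, target_px: pixel positions of the end-effector and the
    # target in the left and right images, shape (2, 2). In the paper
    # the end-effector is tracked via binocular optical flow; here it
    # is simply given.
    error = target_px - ee_px        # image-space error in both views
    # Proportional correction: command an image-plane velocity that
    # shrinks the error, with no explicit camera calibration required.
    return gain * error

# One simulated correction step with made-up pixel coordinates.
ee = np.array([[120.0, 80.0], [118.0, 80.0]])
tgt = np.array([[200.0, 150.0], [195.0, 150.0]])
print(servo_step(ee, tgt))
```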

    Young children do not integrate visual and haptic information

    Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability. To date, no studies have investigated when this capacity for cross-modal integration develops. Here we show that prior to eight years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions where the dominant sense is far less precise than the other (as assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, while for orientation discrimination vision dominates. By eight to ten years of age, integration becomes statistically optimal, as in adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important; using one sense to calibrate the other precludes useful combination of the two sources.
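
    For reference, the statistically optimal (maximum-likelihood) fusion rule against which the children are compared weights each cue by its inverse variance. A minimal sketch, with illustrative variable names not taken from the paper:

```python
import numpy as np

def mle_combine(est_v, sigma_v, est_h, sigma_h):
    # Reliability (inverse-variance) weighting of a visual and a
    # haptic estimate, the standard optimal-integration model.
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_h**2)
    fused = w_v * est_v + (1 - w_v) * est_h
    # Predicted standard deviation of the fused estimate: never worse
    # than the more reliable single cue.
    sigma_fused = np.sqrt((sigma_v**2 * sigma_h**2)
                          / (sigma_v**2 + sigma_h**2))
    return fused, sigma_fused

# Example: a precise haptic cue should dominate an imprecise visual one.
print(mle_combine(est_v=10.0, sigma_v=2.0, est_h=8.0, sigma_h=0.5))
```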

    Prospection in cognition: the case for joint episodic-procedural memory in cognitive robotics

    Prospection lies at the core of cognition: it is the means by which an agent – a person or a cognitive robot – shifts its perspective from immediate sensory experience to anticipate future events, be they the actions of other agents or the outcome of its own actions. Prospection, accomplished by internal simulation, requires mechanisms for both perceptual imagery and motor imagery. While it is known that these two forms of imagery are tightly entwined in the mirror neuron system, we do not yet have an effective model of the mentalizing network which would provide a framework to integrate declarative episodic and procedural memory systems and to combine experiential knowledge with skillful know-how. Such a framework would be founded on joint perceptuo-motor representations. In this paper, we examine the case for this form of representation, contrasting sensory-motor theory with ideo-motor theory, and we discuss how such a framework could be realized by joint episodic-procedural memory. We argue that such a representation framework has several advantages for cognitive robots. Since episodic memory operates by recombining imperfectly recalled past experience, this allows it to simulate new or unexpected events. Furthermore, by virtue of its associative nature, joint episodic-procedural memory allows the internal simulation to be conditioned by current context, semantic memory, and the agent’s value system. Context and semantics constrain the combinatorial explosion of potential perception-action associations and allow effective action selection in the pursuit of goals, while the value system provides the motives that underpin the agent’s autonomy and cognitive development. This joint episodic-procedural memory framework is neutral regarding the final implementation of these episodic and procedural memories, which can be configured sub-symbolically as associative networks or symbolically as content-addressable image databases and databases of motor-control scripts.
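
    Since the paper deliberately leaves the implementation open, the toy sketch below is purely a hypothetical illustration of context-conditioned joint episodic-procedural recall; the structure and all names are assumptions, not the authors' design:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    percept: tuple      # snapshot of perceptual state (illustrative)
    action: str         # id of an associated motor-control script
    context: frozenset  # tags that condition retrieval

@dataclass
class JointMemory:
    episodes: list = field(default_factory=list)

    def recall(self, cue, context):
        # Keep only episodes whose context tags cover the current
        # context, pruning the combinatorial explosion of
        # perception-action associations, then return the episode
        # whose percept is closest to the cue.
        candidates = [e for e in self.episodes if context <= e.context]
        return min(candidates,
                   key=lambda e: sum((a - b) ** 2
                                     for a, b in zip(e.percept, cue)),
                   default=None)

mem = JointMemory()
mem.episodes.append(Episode((0.1, 0.9), "reach-left", frozenset({"table"})))
mem.episodes.append(Episode((0.8, 0.2), "reach-right", frozenset({"table"})))
print(mem.recall((0.2, 0.8), frozenset({"table"})).action)  # reach-left
```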

    Cross-modal facilitation of visual and tactile motion

    Robust and versatile perception of the world is augmented considerably when information from our five separate sensory systems is combined. Much recent evidence has demonstrated near-optimal integration across senses, but it remains unclear at what level the integration occurs: at a "sensory" or at a "decisional" level. Here we show that non-informative "pedestal" motion stimuli in one sensory modality (vision or touch) selectively lower thresholds in the other, to the same degree as pedestals in the same modality: strong evidence for functionally important cross-sensory integration at early levels of sensory processing.

    Multi-subject/daily-life activity EMG-based control of mechanical hands

    Background: Forearm surface electromyography (EMG) has been in use since the 1960s for feed-forward control of active hand prostheses, in ever more refined ways. Recent research shows that it can be used to control even a dexterous polyarticulate hand prosthesis such as Touch Bionics's i-LIMB, as well as a multifingered, multi-degree-of-freedom mechanical hand such as the DLR II. In this paper we extend previous work and investigate the robustness of such fine control in two ways: firstly, we analyse data obtained from 10 healthy subjects to assess the general applicability of the technique; secondly, we compare the baseline controlled condition (arm relaxed and still on a table) with a "Daily-Life Activity" (DLA) condition in which subjects walk, raise their hands and arms, sit down and stand up, etc., as an experimental proxy of what a patient would do in real life. We also propose a cross-subject model analysis, i.e., training a model on one subject and testing it on another; pre-trained models could shorten the time a subject/patient needs to become proficient in using the hand.
    Results: A standard machine learning technique achieved a real-time grip posture classification rate of about 97% in the baseline condition and 95% in the DLA condition, and an average correlation to the target of about 0.93 (0.90) when reconstructing the required force. The cross-subject analysis is encouraging, although not definitive in its present state.
    Conclusion: The performance figures obtained here are of the same order of magnitude as those obtained in previous work on healthy subjects in controlled conditions and/or amputees, which lets us claim that the technique can be used by essentially any subject, and in DLA situations. The use of previously trained models is not fully assessed here, but more recent work indicates it is a promising way forward.
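
    The abstract names only "a standard machine learning technique". The sketch below uses an SVM over per-electrode RMS features as one plausible stand-in; the feature choice, window size, electrode count, and class count are all assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def rms_features(emg_window):
    # Root-mean-square amplitude per electrode, a common EMG feature.
    # emg_window: (n_samples, n_electrodes) array of raw EMG.
    return np.sqrt((emg_window ** 2).mean(axis=0))

# Synthetic stand-in data: 100 windows of 200 samples from 8 electrodes,
# labelled with 4 hypothetical grip postures.
rng = np.random.default_rng(0)
X = np.array([rms_features(rng.standard_normal((200, 8)))
              for _ in range(100)])
y = rng.integers(0, 4, size=100)

# Standardize features, then classify grip posture with an RBF SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:5]))
```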

    A Vision-Based Learning Method for Pushing Manipulation

    We describe an unsupervised, on-line method for learning manipulative actions that allows a robot to push an object, connected to it by a rotational point contact, to a desired point in image-space. By observing the effects of its actions on the object's orientation in image-space, the system forms a predictive empirical forward model. This acquired model is used on-line for manipulation planning and control even as it improves. Rather than explicitly inverting the forward model to achieve trajectory control, a stochastic action selection technique [Moore, 1990] is used to select the most informative and promising actions, thereby integrating active perception and learning by combining on-line improvement, task-directed exploration, and model exploitation. Simulation and experimental results of the approach are presented.
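
    A toy version of the learn-while-acting loop the abstract describes: an empirical forward model improved on-line, with stochastic selection trading off promising pushes against informative ones. The nearest-neighbour model, the scoring rule, and all names are illustrative assumptions, not the paper's method:

```python
import numpy as np

class ForwardModel:
    """Toy empirical forward model: push action -> observed change in
    the object's image-space orientation."""
    def __init__(self):
        self.data = []                      # (action, observed_delta)

    def update(self, action, delta):
        self.data.append((action, delta))

    def predict(self, action):
        if not self.data:
            return 0.0, np.inf              # no experience yet
        # Nearest-neighbour prediction; distance to the nearest stored
        # action serves as a crude uncertainty estimate.
        a, d = min(self.data, key=lambda ad: abs(ad[0] - action))
        return d, abs(a - action)

def select_action(model, goal_delta, candidates, explore=0.1):
    # Score candidates by how promising they are (predicted to reach
    # the goal) plus a bonus for being informative (uncertain), then
    # sample stochastically rather than always taking the best.
    scores = [-abs(model.predict(a)[0] - goal_delta)
              + explore * min(model.predict(a)[1], 10.0)
              for a in candidates]
    p = np.exp(scores - np.max(scores))
    return np.random.choice(candidates, p=p / p.sum())

model = ForwardModel()
model.update(action=0.2, delta=5.0)
model.update(action=0.8, delta=-3.0)
print(select_action(model, goal_delta=4.0, candidates=[0.1, 0.5, 0.9]))
```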

    Gaze Tracking for Human Robot Interaction


    Cross-Sensory Facilitation Reveals Neural Interactions between Visual and Tactile Motion in Humans

    Many recent studies show that the human brain integrates information across the different senses and that stimuli of one sensory modality can enhance the perception of other modalities. Here we study the processes that mediate cross-modal facilitation and summation between visual and tactile motion. We find that while summation produced a generic, non-specific improvement of thresholds, probably reflecting higher-order interaction of decision signals, facilitation revealed a strong, direction-specific interaction, which we believe reflects sensory interactions. We measured visual and tactile velocity discrimination thresholds over a wide range of base velocities and conditions. Thresholds for both visual and tactile stimuli showed the characteristic “dipper function,” with the minimum thresholds occurring at a given “pedestal speed.” When visual and tactile coherent stimuli were combined (summation condition), the thresholds for these multisensory stimuli also showed a “dipper function,” with the minimum thresholds occurring in a similar range to that for unisensory signals. However, the improvement of multisensory thresholds was weak and not directionally specific, and was well predicted by the maximum-likelihood estimation model (in agreement with previous research). A different technique (facilitation) did, however, reveal direction-specific enhancement: adding a non-informative “pedestal” motion stimulus in one sensory modality (vision or touch) selectively lowered thresholds in the other, by the same amount as pedestals in the same modality. Facilitation occurred neither for neutral stimuli such as sounds (which would also have reduced temporal uncertainty) nor for motion in the opposite direction, even in blocked trials where subjects knew that the motion would be in the opposite direction, showing that the facilitation was not under subject control. Cross-sensory facilitation is strong evidence for functionally relevant cross-sensory integration at early levels of sensory processing.
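
    The “dipper function” can be illustrated with a standard transducer-nonlinearity model from psychophysics. This is a generic sketch with assumed parameter values, not the paper's model: discrimination thresholds first fall as a small pedestal is added, then rise at larger pedestals.

```python
import numpy as np

def transducer(s, p=2.4, q=2.0, z=1.0):
    # Legge-Foley-style response nonlinearity, a textbook psychophysics
    # model (not taken from this paper) that produces a dipper.
    return s**p / (s**q + z)

def increment_threshold(pedestal, criterion=0.05):
    # Smallest added speed whose response change reaches the criterion.
    deltas = np.linspace(1e-4, 2.0, 20000)
    r0 = transducer(pedestal)
    hit = transducer(pedestal + deltas) - r0 >= criterion
    return deltas[np.argmax(hit)]

# Thresholds dip near a small pedestal, then rise again: the "dipper".
for ped in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"pedestal={ped}: threshold={increment_threshold(ped):.3f}")
```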