
    Glasgow's Stereo Image Database of Garments

    To provide insight into cloth perception and manipulation with an active binocular robotic vision system, we have compiled and released a database of 80 stereo-pair colour images with corresponding horizontal and vertical disparity maps and mask annotations for 3D garment point cloud rendering. The stereo-image garment database is part of research conducted under the EU-FP7 Clothes Perception and Manipulation (CloPeMa) project and belongs to a wider database collection released through CloPeMa (www.clopema.eu). The database is based on 16 different off-the-shelf garments. Each garment has been imaged in five different pose configurations on the project's binocular robot head. A full copy of the database is made available for scientific research only at https://sites.google.com/site/ugstereodatabase/. Comment: 7 pages, 6 figures, image database
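
    As a sketch of how a database like this might be consumed, the snippet below back-projects one stereo pair's horizontal disparity map into a garment point cloud using the standard rectified-stereo relation. The file names, calibration values (focal length, baseline) and array formats are illustrative assumptions, not the database's actual conventions.

```python
import numpy as np
import cv2  # OpenCV, used only for image I/O here

# Hypothetical file names and calibration values; the database's real formats
# and camera parameters are documented on its project page.
left = cv2.imread("garment_left.png")                       # left colour image (BGR)
disp_h = np.load("garment_disparity_h.npy")                 # horizontal disparity map (pixels)
mask = cv2.imread("garment_mask.png", cv2.IMREAD_GRAYSCALE) > 0

f, baseline = 1200.0, 0.1                                   # focal length (px) and baseline (m), assumed
cx, cy = left.shape[1] / 2.0, left.shape[0] / 2.0           # principal point assumed at image centre

v, u = np.nonzero(mask & (disp_h > 0))                      # keep masked pixels with valid disparity
Z = f * baseline / disp_h[v, u]                             # depth from the rectified-stereo relation
X = (u - cx) * Z / f
Y = (v - cy) * Z / f

points = np.stack([X, Y, Z], axis=1)                        # N x 3 garment point cloud (metres)
colours = left[v, u, ::-1] / 255.0                          # per-point RGB in [0, 1]
```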

    How active perception and attractor dynamics shape perceptual categorization: A computational model

    We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists in anticipated patterns of agent–environment interactions that can be elicited through overt or covert (simulated) eye movements, object manipulation, etc. This information is first encoded when category information is acquired, and then re-enacted during perceptual categorization. Perceptual categorization consists in a dynamic competition between attractors that encode the sensorimotor patterns typical of each category; action prediction success counts as “evidence” for a given category and contributes to falling into the corresponding attractor. The evidence accumulation process is guided by an active perception loop, and the active exploration of objects (e.g., visual exploration) aims at eliciting expected sensorimotor patterns that count as evidence for the object category. We present a computational model incorporating these elements and describing action prediction, active perception, and attractor dynamics as key elements of perceptual categorization. We test the model in three simulated perceptual categorization tasks, and we discuss its relevance for grounded and sensorimotor theories of cognition. Peer reviewed.
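
    The decision dynamics described above can be illustrated with a leaky-competing-accumulator sketch: each category unit accumulates "evidence" (a stand-in for action-prediction success) while inhibiting its competitors, and the first unit to reach threshold wins. The model structure and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def categorize(evidence_rates, leak=0.2, inhibition=0.4, noise=0.05,
               threshold=1.0, dt=0.01, max_steps=5000):
    """Leaky competing accumulators: each unit gathers evidence for one
    category while inhibiting the others; the first unit to reach
    threshold is the chosen category. Values are illustrative only."""
    x = np.zeros(len(evidence_rates))
    for step in range(max_steps):
        mutual = inhibition * (x.sum() - x)                  # inhibition from competitors
        dx = evidence_rates - leak * x - mutual
        dx = dx + noise * rng.standard_normal(len(x))        # perceptual noise
        x = np.clip(x + dt * dx, 0.0, None)                  # activities stay non-negative
        if x.max() >= threshold:
            return int(x.argmax()), step * dt                # winning category, decision time
    return int(x.argmax()), max_steps * dt

# Example: category 1 yields slightly better action predictions than 0 and 2.
print(categorize(np.array([0.8, 1.0, 0.6])))
```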

    The effect of object contact on pre-reaching infants’ causal perception.

    The Sticky Mittens (SM) paradigm is an object manipulation task that provides infants with the opportunity to explore objects through active experience before they have the necessary motor skills to do so on their own. Positive cognitive outcomes like increased attention to objects, object engagement, object exploration, and causal perception have been shown to result from active SM experience (Libertus & Needham, 2010; Rakison & Krogh, 2012). Researchers are interested in understanding which aspects of SM training are important for infant learning. Although there have been many SM studies looking at different variables, such as active vs. passive experience and parent encouragement, the role of infant contact with the toys has received little focus. The present study investigates the role of infant contact with toys during the SM experience. Holt (2016) investigated the effects of active vs. passive SM experience and parent encouragement vs. no parent encouragement on infants’ learning and found that infants in the active, no parent encouragement group exhibited causal perception whereas infants in the active, parent encouragement condition did not. The present study includes a secondary analysis of Holt (2016), comparing infants’ physical contact during the active SM sessions. I hypothesized that infants in the active, no parent encouragement condition exhibited causal perception due to a longer duration of physical contact with the toys. Videos from Holt’s (2016) active, parent encouragement and active, no parent encouragement conditions were coded to compare the overall proportion of object contact across conditions. No difference was found between the two conditions for proportion of object contact, suggesting that other factors in the SM training led to infants’ learning.

    Towards sensor-based manipulation of flexible objects

    This paper presents the FLEXBOT project, a joint LIRMM-QUT effort to develop (in the near future) novel methodologies for robotic manipulation of flexible and deformable objects. To tackle this problem, and based on our past experiences, we propose to merge vision and force for manipulation control, and to rely on Model Predictive Control (MPC) and constrained optimization to program the object's future shape. Index Terms: control for object manipulation, learning from human demonstration, sensor fusion based on tactile, force and vision feedback. This abstract does not present experimental results, but aims at giving some preliminary hints on how flexible robot manipulation should be realized in the near future, particularly in the context of the FLEXBOT project, jointly submitted to the PHC FASIC Program by LIRMM and QUT researchers. The objective of FLEXBOT is to solve one of the most challenging open problems in robotics: developing novel methodologies enabling robotic manipulation of flexible and deformable objects. The motivation comes from numerous applications, including the domestic, industrial, and medical examples shown in Fig. 1. Many difficulties emerge when dealing with flexible manipulation. In the first place, the object deformation model (involving elasticity or plasticity) must be known in order to derive the robot control inputs required for reconfiguring its shape. Ideally, this model should be derived online, while manipulating, with a simultaneous estimation and control approach, as commonly done in active perception and visual servoing. Hence perception, particularly from vision and force, will be indispensable. This leads to a second major difficulty: deformable object visual tracking. In fact, most current visual object tracking algorithms rely on rigidity, an assumption that is not valid here. A third challenge will consist in generating control inputs that comply with the shape the object is expected to have in the near future. In the next section, we provide a brief survey of the state of the art on flexible object manipulation. We then conclude by proposing some novel methodologies for addressing the problem.
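
    A minimal sketch of the kind of constrained optimisation proposed above is given below: assuming a locally linear deformation model s_next = s + J u, with the interaction matrix J estimated online while manipulating, each receding-horizon step solves a bounded, regularised least-squares problem for the robot command. All names and values are illustrative assumptions; this is not the FLEXBOT controller.

```python
import numpy as np
from scipy.optimize import lsq_linear

def shape_mpc_step(J, s_now, s_target, u_max=0.01, reg=1e-3):
    """One receding-horizon step towards a desired object shape, assuming a
    locally linear deformation model s_next = s_now + J @ u. The matrix J
    would be estimated online while manipulating; here it is simply an
    input. All names and values are illustrative."""
    n_u = J.shape[1]
    # Regularised least squares:  min ||J u - (s_target - s_now)||^2 + reg ||u||^2,
    # with box bounds on u so the commanded motion stays small and safe.
    A = np.vstack([J, np.sqrt(reg) * np.eye(n_u)])
    b = np.concatenate([s_target - s_now, np.zeros(n_u)])
    return lsq_linear(A, b, bounds=(-u_max, u_max)).x

# Toy example: 4 tracked contour points (8 coordinates), 3 control inputs.
rng = np.random.default_rng(1)
J = rng.standard_normal((8, 3))
s = rng.standard_normal(8)
print(shape_mpc_step(J, s, s + 0.05))
```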

    Learning to recognize objects through curiosity-driven manipulation with the iCub humanoid robot

    In this paper we address the problem of learning to recognize objects by manipulation in a developmental robotics scenario. In a life-long learning perspective, a humanoid robot should be capable of improving its knowledge of objects through active perception. Our approach stems from the cognitive development of infants, exploiting active curiosity-driven manipulation to improve perceptual learning of objects. These functionalities are implemented as perception, control and active exploration modules as part of the Cognitive Architecture of the MACSi project. In this paper we integrate these functionalities into an active perception system which learns to recognise objects through manipulation. Our work integrates a bottom-up vision system, a control system for a complex robot, and a top-down interactive exploration method which actively chooses an exploration strategy to collect data and decides whether interacting with humans is profitable or not. Experimental results show that the humanoid robot iCub can learn to recognize 3D objects by manipulation and in interaction with teachers, by choosing the adequate exploration strategy to enhance competence progress and by focusing its efforts on the most complex tasks. Thus the learner can learn interactively with humans by actively self-regulating its requests for help.
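
    The exploration-strategy choice described above can be sketched as a competence-progress heuristic: each candidate action keeps a short history of recognition performance, and the strategy whose performance is improving fastest is preferred. The class below is a toy illustration of that idea under assumed strategy names, not the MACSi architecture itself.

```python
import random
from collections import deque

class ProgressBasedChooser:
    """Toy competence-progress chooser: each exploration strategy (push the
    object, rotate it, ask a human teacher, ...) keeps a short history of
    recognition performance, and the strategy improving fastest is
    preferred. Illustrative only."""

    def __init__(self, strategies, window=10, epsilon=0.2):
        self.histories = {s: deque(maxlen=window) for s in strategies}
        self.epsilon = epsilon            # small chance of a random pick

    def progress(self, strategy):
        h = list(self.histories[strategy])
        if len(h) < 2:
            return float("inf")           # unexplored strategies look maximally promising
        half = len(h) // 2
        return sum(h[half:]) / (len(h) - half) - sum(h[:half]) / half

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.histories))
        return max(self.histories, key=self.progress)

    def update(self, strategy, score):
        # 'score' could be recognition accuracy measured after the action.
        self.histories[strategy].append(score)

# Example usage with hypothetical strategy names:
chooser = ProgressBasedChooser(["push", "rotate", "ask_human"])
chooser.update("push", 0.4)
print(chooser.choose())
```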

    User control and task authenticity for spatial learning in 3D environments

    This paper describes two empirical studies which investigated the importance for spatial learning of view control and object manipulation within 3D environments. A 3D virtual chemistry laboratory was used as the research instrument. Subjects, who were university undergraduate students (34 in the first study and 80 in the second study), undertook tasks in the virtual laboratory and were tested on their spatial knowledge through written tests. The results of the study indicate that view control and object manipulation enhance spatial learning, but only if the learner undertakes authentic tasks that require this learning. These results have implications for educational designers making a choice between video or animation and interactive 3D technologies. The results are discussed within the framework of Piaget’s theories on active learning and Gibson’s ecological theory of perception and action.

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch suffers from a vicious cycle of immature sensor technology: industry demand stays low, so there is little incentive to make the sensors in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities’ attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. We then investigate making the tactile modality, which is local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
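
    As an illustration of a global shape descriptor built from tactile probing alone, the sketch below histograms the pairwise distances between sparse contact points (a "shape distribution") and matches the result against a library by nearest neighbour. The thesis's actual descriptor is not specified in the abstract, so this construction is an assumption.

```python
import numpy as np

def contact_shape_descriptor(contacts, bins=16, max_dist=0.3):
    """Global descriptor for a sparse cloud of tactile contact points: the
    normalised histogram of pairwise contact distances. Assumed construction
    for illustration; not the descriptor from the thesis."""
    c = np.asarray(contacts, dtype=float)          # N x 3 contact positions (metres)
    dists = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    iu = np.triu_indices(len(c), k=1)              # each pair counted once
    hist, _ = np.histogram(dists[iu], bins=bins, range=(0.0, max_dist))
    return hist / max(hist.sum(), 1)               # normalise so objects are comparable

def recognise(descriptor, library):
    """Nearest-neighbour match against stored per-object descriptors."""
    names = list(library)
    gaps = [np.linalg.norm(descriptor - library[n]) for n in names]
    return names[int(np.argmin(gaps))]
```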

    Mechanisms responsible for the development of causal perception in infancy.

    The aim of the current dissertation was to investigate the mechanisms that contribute to the emergence of causal perception in infancy. Previous research suggests that the experience of self-produced causal action may be necessary to promote the development of causal perception (Rakison & Krogh, 2012). The goal of the current study was two-fold: (1) to further explore the roles of self-produced action, haptic, proprioceptive and visual information, and parental interaction on young infants’ understanding of causality. To assess the impact of these factors on infants’ causal learning, 4½-month-olds were randomly assigned to one of four conditions. Three of the conditions (Active with Parent Interaction, Active Without Parent Interaction, and Passive with Parent Interaction) provided infants with object-manipulation training in which infants wore “sticky mittens” that allowed them to manipulate Velcro-covered toys. The fourth condition was a no-training control condition. Following training, infants’ ability to perceive the difference between causal and non-causal versions of simple collision events (one ball colliding with another) was tested. It was hypothesized that both of the active training conditions would facilitate infants’ causal perception, while passive training would produce no effects relative to the control condition. Results demonstrated that 4½-month-old infants who received no training, and same-aged infants who received passive training that controlled for perceptual aspects of self-produced causal action experience (haptic, proprioceptive, and visual information), did not show evidence of causal perception. As hypothesized, active training experience facilitated causal perception in 4½-month-olds. However, surprisingly, active training only facilitated learning in the condition in which parents were instructed not to interact with their infants. Comparisons of the two active training groups (with and without parent interaction) revealed that the groups did not differ on a number of infant characteristics and behaviors. The results of this study suggest that: (1) self-produced causal actions constitute a mechanism by which causal perception arises in infancy, and (2) parental interactions during infants’ object explorations may interfere with learning.