
    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on the approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations. Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics.
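    A minimal sketch of the sample-and-rank pattern that recurs throughout the surveyed work, written in Python under illustrative assumptions: candidate grasps are drawn around observed surface points and ordered by a quality score. The names (GraspCandidate, score_grasp) and the trivial scoring rule are hypothetical placeholders, not code from the survey or from any particular system; a data-driven method would replace score_grasp with a learned model.

        import random
        from dataclasses import dataclass

        @dataclass
        class GraspCandidate:
            # Hypothetical grasp parameterisation: approach point, gripper orientation, opening width.
            position: tuple       # (x, y, z) point on the observed object surface
            orientation: tuple    # (roll, pitch, yaw) of the gripper
            width: float          # gripper opening in metres

        def sample_candidates(object_points, n=100):
            # Draw candidate grasps around observed surface points (placeholder heuristic).
            return [GraspCandidate(position=random.choice(object_points),
                                   orientation=(0.0, random.uniform(-3.14, 3.14), 0.0),
                                   width=random.uniform(0.02, 0.08))
                    for _ in range(n)]

        def score_grasp(candidate):
            # Stand-in for a learned or analytic grasp-quality score (higher is better).
            return -candidate.width  # trivial placeholder: prefer narrower grasps

        def best_grasp(object_points, n=100):
            # Rank the sampled candidates and return the top-scoring one.
            return max(sample_candidates(object_points, n), key=score_grasp)

        # Example usage with a toy "point cloud".
        print(best_grasp([(0.0, 0.0, 0.1), (0.05, 0.0, 0.1), (0.0, 0.05, 0.12)]))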

    Putting craving into context: effects of perceived smoking opportunity on the neural response to cigarette cue exposure

    Recent years have seen the emergence of research applying functional neuroimaging to the study of cue-elicited drug craving. This research has begun to identify a distributed system of brain activity during drug craving. Functional magnetic resonance imaging (fMRI) was used to examine the effects of smoking expectancy on the neural response to neutral (e.g., a roll of tape) and smoking-related (e.g., holding a cigarette) stimuli in male cigarette smokers deprived of nicotine for 8 hours. As predicted, several brain regions exhibited differential activation during cigarette versus neutral cue exposure. Moreover, instructions about smoking opportunity affected cue-elicited activation in several regions. These results highlight the importance of perceived drug availability in the neurobiological response to drug cues.

    The evolution of a visual-to-auditory sensory substitution device using interactive genetic algorithms

    Sensory Substitution is a promising technique for mitigating the loss of a sensory modality. Sensory Substitution Devices (SSDs) work by converting information from the impaired sense (e.g. vision) into another, intact sense (e.g. audition). However, there is a potentially infinite number of ways of converting images into sounds, and it is important that the conversion takes into account the limits of human perception and other user-related factors (e.g. whether the sounds are pleasant to listen to). The device explored here is termed “polyglot” because it generates a very large set of solutions. Specifically, we adapt a procedure that has been in widespread use in the design of technology but has rarely been used as a tool to explore perception – namely Interactive Genetic Algorithms. In this procedure, a very large range of potential sensory substitution devices can be explored by creating a set of ‘genes’ with different allelic variants (e.g. different ways of translating luminance into loudness). The most successful devices are then ‘bred’ together and we statistically explore the characteristics of the selected-for traits after multiple generations. The aim of the present study is to produce design guidelines for a better SSD. In three experiments we vary the way that the fitness of the device is computed: by asking the user to rate the auditory aesthetics of different devices (Experiment 1), by measuring the ability of participants to match sounds to images (Experiment 2), and by measuring the ability to perceptually discriminate between two sounds derived from similar images (Experiment 3). In each case the traits selected for by the genetic algorithm represent the ideal SSD for that task. Taken together, these traits can guide the design of a better SSD.
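    The selection-and-breeding loop described above can be illustrated with a minimal interactive genetic algorithm in Python. This is a sketch under simplified, assumed genes (a handful of image-to-sound mapping rules) and an automatic stand-in for the user-supplied fitness; the gene names and variants are hypothetical and do not reproduce the actual polyglot device.

        import random

        # Each "gene" selects one allelic variant of an image-to-sound mapping rule,
        # e.g. how luminance maps to loudness or how vertical position maps to pitch.
        GENE_VARIANTS = {
            "luminance_to_loudness": ["linear", "logarithmic", "inverse"],
            "y_to_pitch": ["low_is_low", "low_is_high"],
            "scan_direction": ["left_to_right", "centre_out"],
        }

        def random_device():
            return {gene: random.choice(variants) for gene, variants in GENE_VARIANTS.items()}

        def crossover(a, b):
            # Each child gene comes from one of the two parent devices.
            return {gene: random.choice([a[gene], b[gene]]) for gene in GENE_VARIANTS}

        def mutate(device, rate=0.1):
            # Occasionally replace an allele with a random variant.
            return {gene: random.choice(GENE_VARIANTS[gene]) if random.random() < rate else allele
                    for gene, allele in device.items()}

        def evolve(rate_fn, population_size=10, generations=5):
            # rate_fn(device) -> fitness; in the interactive setting this would be a
            # user's aesthetic rating or task score rather than an automatic measure.
            population = [random_device() for _ in range(population_size)]
            for _ in range(generations):
                ranked = sorted(population, key=rate_fn, reverse=True)
                parents = ranked[:population_size // 2]   # keep the fittest half
                children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(population_size - len(parents))]
                population = parents + children
            return max(population, key=rate_fn)

        # Stand-in fitness: pretend the user prefers a logarithmic loudness mapping.
        print(evolve(lambda d: 1.0 if d["luminance_to_loudness"] == "logarithmic" else 0.0))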

    The multisensory attentional consequences of tool use : a functional magnetic resonance imaging study

    Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants’ brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants’ behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.

    Can't touch this: the first-person perspective provides privileged access to predictions of sensory action outcomes.

    RCUK Open Access funded (ESRC ES/J019178/1). Previous studies have shown that viewing others in pain activates cortical somatosensory processing areas and facilitates the detection of tactile targets. It has been suggested that such shared representations have evolved to enable us to better understand the actions and intentions of others. If this is the case, the effects of observing others in pain should be obtained from a range of viewing perspectives. Therefore, the current study examined the behavioral effects of observed grasps of painful and nonpainful objects from both a first- and third-person perspective. In the first-person perspective, participants were faster to detect a tactile target delivered to their own hand when viewing painful grasping actions, compared with all nonpainful actions. However, this effect was not revealed in the third-person perspective. The combination of action and object information to predict the painful consequences of another person's actions when viewed from the first-person perspective, but not the third-person perspective, argues against a mechanism ostensibly evolved to understand the actions of others.

    Stochastic Resonance Can Drive Adaptive Physiological Processes

    Stochastic resonance (SR) is a concept from the physics and engineering communities that has applicability to both systems physiology and other living systems. In this paper, it will be argued that stochastic resonance plays a role in driving behavior in neuromechanical systems. The theory of stochastic resonance will be discussed, followed by a series of expected outcomes, and two tests of stochastic resonance in an experimental setting. These tests are exploratory in nature, and provide a means to parameterize systems that couple biological and mechanical components. Finally, the potential role of stochastic resonance in adaptive physiological systems will be discussed.
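    The core effect can be demonstrated with a short Python example, assuming NumPy is available: a subthreshold sine signal is passed through a simple threshold detector, and the correlation between the signal and the detector output typically peaks at an intermediate, non-zero noise level. The parameter values are illustrative and are not taken from the paper's experiments.

        import numpy as np

        def detection_quality(noise_sd, threshold=1.0, signal_amp=0.8, n=20000, seed=0):
            # Correlation between a subthreshold sine signal and the output of a
            # simple threshold detector, for a given noise level.
            rng = np.random.default_rng(seed)
            t = np.linspace(0.0, 20.0, n)
            signal = signal_amp * np.sin(2 * np.pi * t)    # never crosses the threshold on its own
            noisy = signal + rng.normal(0.0, noise_sd, n)
            output = (noisy > threshold).astype(float)     # 1 whenever the detector fires
            return np.corrcoef(signal, output)[0, 1]

        # Sweep noise levels: detection is poor with too little or too much noise,
        # and best somewhere in between, the signature of stochastic resonance.
        for sd in [0.1, 0.3, 0.6, 1.0, 2.0]:
            print(f"noise sd = {sd:.1f}: signal/output correlation = {detection_quality(sd):.3f}")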

    The impact of active visualisation on high school students' ability to memorise a verbal definition

    The era of visual communication influences the cognitive strategies of the individual. Education, too, must adjust to these changes, which raises questions regarding the use of visualisation in teaching. In the present study, we examine the impact of visualisation on the ability of high school students to memorise text. In the theoretical part of the research, we first clarify the concept of visualisation. We define the concept of active visualisation and visualisation as a means of acquiring and conveying knowledge, and we describe the different kinds of visualisation (appearance-based analogies and form-based analogies), specifically defining appearance-based schemata visualisations (where imagery is articulated in a typical culturally trained manner). In the empirical part of the research, we perform an experiment in which we evaluate the effects of visualisation on students’ ability to memorise a difficult written definition. Based on the theoretical findings, we establish two hypotheses. In the first, we assume that the majority of the visualisations that students form will be appearance-based schemata visualisations. This hypothesis is based on the assumption that, in visualisation, people spontaneously use analogies based on imagery and schemas that are typical of their society. In the second hypothesis, we assume that active visualisation will contribute to the students’ ability to memorise text in a statistically significant way. This hypothesis is based on the assumption that the combination of verbal and visual experiences enhances cognitive learning. Both hypotheses were confirmed in the research. As our study only dealt with the impact of the most spontaneous type of appearance-based schemata visualisations, we see further possibilities in researching the influence of formally more complex visualisations. (DIPF/Orig.)