    Integrating visual and tactile information in the perirhinal cortex

    By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual–tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual–tactile) than unimodal (visual–visual or tactile–tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual–tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices.

    Manual Matching Of Perceived Surface Orientation Is Affected By Arm Posture: Evidence Of Calibration Between Proprioception And Visual Experience In Near Space

    Proprioception of hand orientation (orientation production using the hand) is compared with manual matching of visual orientation (visual surface matching using the hand) in two experiments. In experiment 1, using self-selected arm postures, the proportions of wrist and elbow flexion spontaneously used to orient the pitch of the hand (20% and 80%, respectively) are relatively similar across both manual matching tasks and manual orientation production tasks for most participants. Proprioceptive error closely matched perceptual biases previously reported for visual orientation perception, suggesting calibration of proprioception to visual biases. A minority of participants, who attempted to use primarily wrist flexion while holding the forearm horizontal, performed poorly at the manual matching task, consistent with proprioceptive error caused by biomechanical constraints of their self-selected posture. In experiment 2, postural choices were constrained to primarily wrist or elbow flexion without imposing biomechanical constraints (using a raised forearm). Identical relative offsets were found between the two constraint groups in manual matching and manual orientation production. The results support two claims: (1) manual orientation matching to visual surfaces is based on manual proprioception and (2) calibration between visual and proprioceptive experiences guarantees relatively accurate manual matching for surfaces within reach, despite systematic visual biases in perceived surface orientation.

    A Framework for Symmetric Part Detection in Cluttered Scenes

    The role of symmetry in computer vision has waxed and waned in importance during the evolution of the field from its earliest days. At first figuring prominently in support of bottom-up indexing, it fell out of favor as shape gave way to appearance and recognition gave way to detection. With a strong prior in the form of a target object, the role of the weaker priors offered by perceptual grouping was greatly diminished. However, as the field returns to the problem of recognition from a large database, the bottom-up recovery of the parts that make up the objects in a cluttered scene is critical for their recognition. The medial axis community has long exploited the ubiquitous regularity of symmetry as a basis for the decomposition of a closed contour into medial parts. However, today's recognition systems are faced with cluttered scenes, and the assumption that a closed contour exists, i.e. that figure-ground segmentation has been solved, renders much of the medial axis community's work inapplicable. In this article, we review a computational framework, previously reported in Lee et al. (2013) and Levinshtein et al. (2009, 2013), that bridges the representational power of the medial axis and the need to recover and group an object's parts in a cluttered scene. Our framework is rooted in the idea that a maximally inscribed disc, the building block of a medial axis, can be modeled as a compact superpixel in the image. We evaluate the method on images of cluttered scenes.
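
    A minimal sketch of the core idea described above (a maximally inscribed medial disc approximated by a compact superpixel), assuming scikit-image is available. The learned grouping of superpixels into symmetric parts from Lee et al. and Levinshtein et al. is not reproduced here, and the input filename is hypothetical:

    import numpy as np
    from skimage import io
    from skimage.segmentation import slic
    from skimage.measure import regionprops

    def disc_like_superpixels(image, n_segments=400, compactness=20.0):
        """Over-segment the image into compact SLIC superpixels and summarise
        each one as a disc (centroid plus equal-area radius), serving as a
        stand-in for a maximally inscribed medial disc."""
        labels = slic(image, n_segments=n_segments, compactness=compactness,
                      start_label=1)
        discs = []
        for region in regionprops(labels):
            cy, cx = region.centroid               # centroid in (row, col) order
            radius = np.sqrt(region.area / np.pi)  # radius of an equal-area disc
            discs.append((cx, cy, radius))
        return labels, discs

    image = io.imread("cluttered_scene.png")  # hypothetical input image
    labels, discs = disc_like_superpixels(image)
    print(len(discs), "disc-like superpixels extracted")

    A higher compactness value pushes the superpixels toward the round, blob-like shape that makes the disc analogy reasonable; the pipeline in the reviewed work would then group these disc samples into medial branches rather than stop at extraction.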

    Immediate and Reflective Senses

    This paper argues that there are two distinct kinds of senses, immediate senses and reflective senses. Immediate senses are what we are immediately aware of when we are in an intentional mental state, while reflective senses are what we understand of an intentional mental state's (putative) referent upon reflection. I suggest an account of immediate and reflective senses that is based on the phenomenal intentionality theory, a theory of intentionality in terms of phenomenal consciousness. My focus is on the immediate and reflective senses of thoughts and the concepts they involve, but the account also applies to other mental instances of intentionality.

    VITON: An Image-based Virtual Try-on Network

    We present VITON, an image-based Virtual Try-On Network that, without using 3D information in any form, seamlessly transfers a desired clothing item onto the corresponding region of a person using a coarse-to-fine strategy. Conditioned upon a new clothing-agnostic yet descriptive person representation, our framework first generates a coarse synthesized image with the target clothing item overlaid on that same person in the same pose. We further enhance the initial blurry clothing area with a refinement network. The network is trained to learn how much detail to utilize from the target clothing item, and where to apply it on the person, in order to synthesize a photo-realistic image in which the target item deforms naturally with clear visual patterns. Experiments on our newly collected Zalando dataset demonstrate its promise in the image-based virtual try-on task over state-of-the-art generative models.
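
    A schematic PyTorch sketch of the coarse-to-fine strategy the abstract describes, not the authors' released implementation: an encoder-decoder produces a coarse try-on image from the person representation plus the target clothing item, and a small refinement network predicts a composition mask that decides how much clothing detail to paste back. All channel counts, layer sizes, and the 22-channel person representation are illustrative assumptions:

    import torch
    import torch.nn as nn

    class CoarseGenerator(nn.Module):
        """Encoder-decoder: person representation + clothing image -> coarse try-on."""
        def __init__(self, person_channels=22, cloth_channels=3):
            super().__init__()
            c = person_channels + cloth_channels
            self.net = nn.Sequential(
                nn.Conv2d(c, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, person_repr, cloth):
            return self.net(torch.cat([person_repr, cloth], dim=1))

    class RefinementNet(nn.Module):
        """Predicts a per-pixel mask deciding how much clothing detail
        to composite over the coarse synthesis."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, coarse, cloth_detail):
            mask = self.net(torch.cat([coarse, cloth_detail], dim=1))
            return mask * cloth_detail + (1 - mask) * coarse

    # Illustrative usage with assumed 256x192 inputs and random tensors.
    person_repr = torch.randn(1, 22, 256, 192)   # assumed pose/shape/identity encoding
    cloth = torch.randn(1, 3, 256, 192)          # target clothing product image
    coarse = CoarseGenerator()(person_repr, cloth)
    cloth_detail = torch.randn(1, 3, 256, 192)   # stand-in for a warped clothing image
    refined = RefinementNet()(coarse, cloth_detail)

    The mask-based composition mirrors the abstract's claim that the network learns how much detail to take from the clothing item and where to apply it, rather than regenerating the whole image at the refinement stage.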

    Specificity and coherence of body representations

    Bodily illusions differently affect body representations underlying perception and action. We investigated whether this task dependence reflects two distinct dimensions of embodiment: the sense of agency and the sense of the body as a coherent whole. In experiment 1, the sense of agency was manipulated by comparing active versus passive movements during the induction phase in a video rubber hand illusion (vRHI) setup. After induction, proprioceptive biases were measured both by perceptual judgments of hand position, as well as by measuring end-point accuracy of subjects' active pointing movements to an external object with the affected hand. The results showed, first, that the vRHI is largely perceptual: passive perceptual localisation judgments were altered, but end-point accuracy of active pointing responses with the affected hand to an external object was unaffected. Second, within the perceptual judgments, there was a novel congruence effect, such that perceptual biases were larger following passive induction of vRHI than following active induction. There was a trend for the converse effect for pointing responses, with larger pointing bias following active induction. In experiment 2, we used the traditional RHI to investigate the coherence of body representation by synchronous stimulation of either matching or mismatching fingers on the rubber hand and the participant's own hand. Stimulation of matching fingers induced a local proprioceptive bias for only the stimulated finger, but did not affect the perceived shape of the hand as a whole. In contrast, stimulation of spatially mismatching fingers eliminated the RHI entirely. The present results show that (i) the sense of agency during illusion induction has specific effects, depending on whether we represent our body for perception or to guide action, and (ii) representations of specific body parts can be altered without affecting perception of the spatial configuration of the body as a whole.

    Training methods for facial image comparison: a literature review

    This literature review was commissioned to explore the psychological literature relating to facial image comparison, with a particular emphasis on whether individuals can be trained to improve performance on this task. Surprisingly few studies have addressed this question directly. As a consequence, this review has been extended to cover training of face recognition and training of different kinds of perceptual comparisons where we are of the opinion that the methodologies or findings of such studies are informative. The majority of studies of face processing have examined face recognition, which relies heavily on memory. This may be memory for a face that was learned recently (e.g. minutes or hours previously) or for a face learned longer ago, perhaps after many exposures (e.g. friends, family members, celebrities). Successful face recognition, irrespective of the type of face, relies on the ability to retrieve the to-be-recognised face from long-term memory. This memory is then compared to the physically present image to reach a recognition decision. In contrast, in a face matching task two physical representations of a face (live, photographs, movies) are compared, so long-term memory is not involved. Because the comparison is between two present stimuli rather than between a present stimulus and a memory, one might expect that face matching, even if not an easy task, would be easier to do and easier to learn than face recognition. In support of this, there is evidence that judgment tasks where a presented stimulus must be judged against a remembered standard are generally more cognitively demanding than judgments that require comparing two presented stimuli (Davies & Parasuraman, 1982; Parasuraman & Davies, 1977; Warm & Dember, 1998). Is there enough overlap between face recognition and face matching that it is useful to look at the recognition literature? No study has directly compared face recognition and face matching, so we turn to research in which people decided whether two non-face stimuli were the same or different. In these studies, accuracy of comparison is not always better when the comparator is present than when it is remembered. Further, all perceptual factors that were found to affect comparisons of simultaneously presented objects also affected comparisons of successively presented objects in qualitatively the same way. Those studies involved judgments about colour (Newhall, Burnham & Clark, 1957; Romero, Hita & Del Barco, 1986) and shape (Larsen, McIlhagga & Bundesen, 1999; Lawson, Bülthoff & Dumbell, 2003; Quinlan, 1995). Although one must be cautious in generalising from studies of object processing to studies of face processing (see, e.g., the section comparing face processing to object processing), from these kinds of studies there is no evidence to suggest that there are qualitative differences in the perceptual aspects of how recognition and matching are done. As a result, this review will include studies of face recognition skill as well as face matching skill. The distinction between face recognition involving memory and face matching not involving memory is clouded in many recognition studies which require observers to decide which of many presented faces matches a remembered face (e.g., eyewitness studies). And of course there are other forensic face-matching tasks that will require comparison to both presented and remembered comparators (e.g., deciding whether any person in a video showing a crowd is the target person). For this reason, too, we choose to include studies of face recognition as well as face matching in our review.