
    "What was Molyneux's Question A Question About?"

    Molyneux asked whether a newly sighted person could distinguish a sphere from a cube by sight alone, given that she was antecedently able to do so by touch. This, we contend, is a question about general ideas. To answer it, we must ask (a) whether spatial locations identified by touch can be identified also by sight, and (b) whether the integration of spatial locations into an idea of shape persists through changes of modality. Posed this way, Molyneux’s Question goes substantially beyond question (a), about spatial locations, alone; for a positive answer to (a) leaves open whether a perceiver might cross-identify locations, but not be able to identify the shapes that collections of locations comprise. We further emphasize that MQ targets general ideas so as to distinguish it from corresponding questions about experiences of shape and about the property of tangible (vs. visual) shape. After proposing a generalized formulation of MQ, we extend earlier work (“Many Molyneux Questions,” Australasian Journal of Philosophy 2020) by showing that MQ does not admit a single answer across the board. Some integrative data-processes transfer across modalities; others do not. Seeing where and how such transfer succeeds and fails in individual cases has much to offer to our understanding of perception and its modalities.

    Molyneux's Question Within and Across the Senses

    This chapter explores how our understanding of Molyneux’s question, and of the possibility of an experimental resolution to it, should be affected by recognizing the complexity that is involved in reidentifying shapes and other spatial properties across differing sensory manifestations of them. I will argue that while philosophers today usually treat the question as concerning ‘the relations between perceptions of shape in different sensory modalities’ (Campbell 1995, 301), in fact this is only part of the question’s real interest, and that the answer to the question also turns on how shape is perceived within each of sight and touch individually.

    Low-level Modality Specific and Higher-order Amodal Processing in the Haptic and Visual Domains

    The aim of the current study is to further investigate cross- and multimodal object processing, with the intent of increasing our understanding of the differential contributions of modal and amodal object processing in the visual and haptic domains. The project is an identification and information-extraction study. The main factors are modality (vision or haptics), stimulus type (tools or animals), and level (naming or size comparison). Each participant went through four trial types: visual naming, visual size, haptic naming, and haptic size. Naming consisted of verbally naming the item; size comparison consisted of verbally indicating whether the current item is larger or smaller than a reference object. Stimuli consisted of plastic animals and tools, all readily recognizable and easily manipulated with one hand. The actual figurines and tools were used for haptic trials, and digital photographs were used for visual trials (Appendices 1 and 2). The results suggest a strong effect of modality, with visual object recognition being faster than haptic object recognition, a modality-specific (visual-haptic) effect. Tools were also processed faster than animals regardless of modality. An interaction between the factors was reported, supporting the notion that once naming is accomplished, similar reaction times for subsequent size processing in the visual and haptic domains indicate non-modality-specific, or amodal, processing. Thus, using animal and tool figurines, we investigated modal and amodal processing in the visual and haptic domains.
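
    To make the design concrete, the following is a minimal Python sketch of the modality (visual/haptic) × stimulus type (tool/animal) × level (naming/size) comparison the abstract describes. All reaction times, and the helper cell_mean, are invented for illustration; they are not the study's data.

    ```python
    # Sketch of the 2 (modality) x 2 (stimulus type) x 2 (level) design.
    # All reaction times (seconds) are hypothetical placeholders.
    from statistics import mean

    rts = {
        ("visual", "tool",   "naming"): [1.1, 1.0, 1.2],
        ("visual", "animal", "naming"): [1.3, 1.2, 1.4],
        ("haptic", "tool",   "naming"): [2.0, 2.1, 1.9],
        ("haptic", "animal", "naming"): [2.4, 2.3, 2.5],
        ("visual", "tool",   "size"):   [1.0, 0.9, 1.1],
        ("visual", "animal", "size"):   [1.1, 1.0, 1.2],
        ("haptic", "tool",   "size"):   [1.1, 1.2, 1.0],
        ("haptic", "animal", "size"):   [1.2, 1.1, 1.3],
    }

    def cell_mean(modality, level):
        """Mean RT for one modality x level cell, collapsed over stimulus type."""
        return mean(rt
                    for (m, _, lvl), cell in rts.items()
                    if m == modality and lvl == level
                    for rt in cell)

    # Naming shows a modality-specific cost (haptic slower than visual) ...
    naming_gap = cell_mean("haptic", "naming") - cell_mean("visual", "naming")
    # ... while size comparison after naming is roughly modality-independent,
    # the abstract's signature of amodal processing.
    size_gap = cell_mean("haptic", "size") - cell_mean("visual", "size")

    print(f"naming RT gap (haptic - visual): {naming_gap:+.2f} s")
    print(f"size RT gap   (haptic - visual): {size_gap:+.2f} s")
    ```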

    Haptic Concepts


    The Effect of Experience Upon the Visual and Haptic Discrimination of 3-D Object Shape

    Both our sense of touch and our sense of vision allow us to perceive common object properties such as size, shape, and texture. The extent of this functional overlap has been studied in relation to infant perception (Bushnell & Weinberger, 1987; Gibson & Walker, 1984; Streri, 1987; Streri & Gentaz, 2003), overlap in brain regions (Amedi, Malach, Hendler, Peled, & Zohary, 2001; Deibert, Kraut, Kermen, & Hart, 1999; James, Humphrey, Gati, Menon, & Goodale, 2002), and adult perception (Gibson, 1962, 1963, 1966; Klatzky, Lederman, & Reed, 1987; Lakatos & Marks, 1999; Norman, Norman, Clayton, Lianekhammy, & Zielke, 2004). The current experiment extended the findings of Norman et al. (2004) by examining the effect of experience upon the visual and haptic discrimination of 3-D object shape, as well as examining differences in how long visual and haptic shape representations can be held in short-term memory. Participants compared the shapes of two objects either within a single sensory modality (both objects presented visually or haptically) or across the sensory modalities (one object presented visually, the other haptically) for 120 trials. Their task was to judge whether the objects possessed the same or different 3-D shapes. The objects were presented for a duration of 3 seconds each, with a 3-, 9-, or 15-second interstimulus interval (ISI) between them. Both the unimodal (visual-visual and haptic-haptic) and cross-modal (visual-haptic and haptic-visual) conditions exhibited a linear pattern of learning and were unaffected by the various ISIs used. However, different levels of discrimination accuracy were observed across the groups, with the highest accuracy occurring for the visual-visual group (M = 78.65% correct) and the lowest for the haptic-visual group (M = 65.31% correct). Different patterns of errors for same versus different trials were observed for the unimodal and cross-modal conditions. Taken together, the results of the current experiment give us a better understanding of the similarities and differences between the visual and haptic sensory modalities' representations of 3-D object shape.
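
    A short sketch of the accuracy summary the abstract reports: percent correct per presentation condition. The trial records below are fabricated placeholders; only the four conditions, the ISI manipulation, and the same/different judgment come from the abstract.

    ```python
    # Percent-correct summary per condition (visual-visual, haptic-haptic,
    # visual-haptic, haptic-visual). Trial outcomes are fabricated placeholders.
    from collections import defaultdict

    # (condition, ISI in seconds, response was correct)
    trials = [
        ("visual-visual", 3, True),  ("visual-visual", 9, True),  ("visual-visual", 15, False),
        ("haptic-haptic", 3, True),  ("haptic-haptic", 9, False), ("haptic-haptic", 15, True),
        ("visual-haptic", 3, True),  ("visual-haptic", 9, False), ("visual-haptic", 15, False),
        ("haptic-visual", 3, False), ("haptic-visual", 9, True),  ("haptic-visual", 15, False),
    ]

    totals = defaultdict(lambda: [0, 0])  # condition -> [n_correct, n_total]
    for condition, _isi, correct in trials:
        totals[condition][0] += int(correct)
        totals[condition][1] += 1

    for condition, (n_correct, n_total) in sorted(totals.items()):
        print(f"{condition}: {100 * n_correct / n_total:.1f}% correct")
    ```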

    Size-sensitive perceptual representations underlie visual and haptic object recognition.

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching, and there was no interaction between the cost of size changes and direction of transfer. Together, the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations.
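
    The size-change cost logic can be sketched as below; the accuracy values are invented, and "no interaction" corresponds to the cost being roughly constant across matching directions.

    ```python
    # Size-change cost per matching direction: accuracy on same-size trials minus
    # accuracy on different-size trials. All accuracy values are invented.
    from statistics import mean

    acc = {
        "visual-to-visual": {"same_size": [0.92, 0.90], "diff_size": [0.81, 0.79]},
        "haptic-to-haptic": {"same_size": [0.88, 0.86], "diff_size": [0.77, 0.75]},
        "visual-to-haptic": {"same_size": [0.84, 0.82], "diff_size": [0.73, 0.71]},
        "haptic-to-visual": {"same_size": [0.83, 0.81], "diff_size": [0.72, 0.70]},
    }

    for direction, cells in acc.items():
        cost = mean(cells["same_size"]) - mean(cells["diff_size"])
        print(f"{direction}: size-change cost = {cost:.2f}")
    # A roughly constant cost across directions (no cost x direction interaction)
    # is the pattern taken as evidence for shared, size-specific representations.
    ```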

    Tactile information improves visual object discrimination in kea, Nestor notabilis, and capuchin monkeys, Sapajus spp.

    In comparative visual cognition research, the influence of information acquired by nonvisual senses has received little attention. Systematic studies focusing on how the integration of information from sight and touch can affect animal perception are sparse. Here, we investigated whether tactile input improves the visual discrimination ability of a bird, the kea, and of capuchin monkeys, two species with acute vision that are known for their tendency to handle objects. To this end, we assessed whether, at the attainment of a criterion, accuracy and/or learning speed in the visual modality were enhanced by haptic (i.e. active tactile) exploration of an object. Subjects were trained to select the positive stimulus between two cylinders of the same shape and size, but with different surface structures. In the Sight condition, one pair of cylinders was inserted into transparent Plexiglas tubes. This prevented animals from haptically perceiving the objects' surfaces. In the Sight and Touch condition, one pair of cylinders was not inserted into transparent Plexiglas tubes. This allowed the subjects to perceive the objects' surfaces both visually and haptically. We found that both kea and capuchins (1) showed comparable levels of accuracy at the attainment of the learning criterion in both conditions, but (2) required fewer trials to achieve the criterion in the Sight and Touch condition. Moreover, this study showed that both kea and capuchins can integrate information acquired by the visual and tactile modalities. To our knowledge, this represents the first evidence of visuotactile integration in a bird species. Overall, our findings demonstrate that the acquisition of tactile information while manipulating objects facilitates visual discrimination of objects in two phylogenetically distant species.
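
    The learning-speed comparison can be made concrete with a small sketch: accuracy at criterion is matched across conditions, so the quantity of interest is trials to criterion. The counts below are invented placeholders, not the study's data.

    ```python
    # Mean trials to criterion per condition and species. Counts are invented.
    from statistics import mean

    trials_to_criterion = {
        "Sight":           {"kea": [220, 260, 240], "capuchin": [200, 180, 210]},
        "Sight and Touch": {"kea": [150, 170, 160], "capuchin": [130, 140, 120]},
    }

    for condition, by_species in trials_to_criterion.items():
        for species, counts in by_species.items():
            print(f"{condition:>15} / {species}: {mean(counts):.0f} trials to criterion")
    ```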