
    Vision, Action, and Make-Perceive

    In this paper, I critically assess the enactive account of visual perception recently defended by Alva NoĂ« (2004). I argue inter alia that the enactive account falsely identifies an object’s apparent shape with its 2D perspectival shape; that it mistakenly assimilates visual shape perception and volumetric object recognition; and that it seriously misrepresents the constitutive role of bodily action in visual awareness. I argue further that noticing an object’s perspectival shape involves a hybrid experience combining both perceptual and imaginative elements – an act of what I call ‘make-perceive’.

    Auditory environmental context affects visual distance perception

    In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets as farther away than subjects assigned to the anechoic chamber. We also found a positive correlation between the maximum perceived distance and the auditorily perceived room size. In a second experiment, the subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved their responses from the previous experiment provided these were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
    Authors: Etchemendy, Pablo Esteban; AbregĂș, Ezequiel Lucas; Calcagno, Esteban; Eguia, Manuel Camilo; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro Oscar (Universidad Nacional de Quilmes, Laboratorio de AcĂșstica y PercepciĂłn Sonora; Consejo Nacional de Investigaciones CientĂ­ficas y TĂ©cnicas; Argentina)

    Perceiving and Knowing as Activities

    According to the tradition of most empiricists, perception is the basis for all our knowledge (at least of the world). The tradition also assumes that perception by humans is a passive process resulting in some static states pertaining to perception and belief, which are then, in some versions, modified by the mind before being passed on to memory and knowledge. Following the work of J. J. Gibson, we argue that perceiving involves many activities and actions. This is true of both visual and olfactory-taste perception. The main moral of this paper is that perceiving and knowing are best thought of not as involving static states, but rather as ongoing temporal activities involving change. This presumably means giving up a frozen ontology of states and moving towards something like a dynamic ontology as a basis.

    Perceiving pictures

    I aim to give a new account of picture perception: of the way our visual system functions when we see something in a picture. My argument relies on the functional distinction between the ventral and dorsal visual subsystems. I propose that it is constitutive of picture perception that our ventral subsystem attributes properties to the depicted scene, whereas our dorsal subsystem attributes properties to the picture surface. This duality elucidates Richard Wollheim’s concept of the “twofoldness” of our experience of pictures: the “visual awareness not only of what is represented but also of the surface qualities of the representation.” I argue for the following four claims: (a) the depicted scene is represented by ventral perception, (b) the depicted scene is not represented by dorsal perception, (c) the picture surface is represented by dorsal perception, and (d) the picture surface is not necessarily represented by ventral perception.

    How do neural networks see depth in single images?

    Deep neural networks have led to a breakthrough in depth estimation from single images. Recent work often focuses on the accuracy of the depth map, where an evaluation on a publicly available test set such as the KITTI vision benchmark is often the main result of the article. While such an evaluation shows how well neural networks can estimate depth, it does not show how they do so. To the best of our knowledge, no work currently exists that analyzes what these networks have learned. In this work we take the MonoDepth network by Godard et al. and investigate what visual cues it exploits for depth estimation. We find that the network ignores the apparent size of known obstacles in favor of their vertical position in the image. Using the vertical position requires the camera pose to be known; however, we find that MonoDepth only partially corrects for changes in camera pitch and roll, and that these influence the estimated depth towards obstacles. We further show that MonoDepth's use of the vertical image position allows it to estimate the distance towards arbitrary obstacles, even those not appearing in the training set, but that it requires a strong edge at the ground contact point of the object to do so. In future work we will investigate whether these observations also apply to other neural networks for monocular depth estimation.
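    The vertical-position cue discussed in this abstract follows from flat-ground pinhole geometry: for a camera at height h above a planar ground, a ground contact point imaged dy pixels below the horizon lies at depth Z = f·h/dy. A minimal sketch of that relation (the function name and parameters are illustrative, not taken from MonoDepth):

    ```python
    def depth_from_vertical_position(y_px, horizon_y_px, focal_px, cam_height_m):
        """Depth (metres) to a ground contact point at image row y_px,
        assuming a pinhole camera over a flat ground plane."""
        dy = y_px - horizon_y_px  # pixels below the horizon line
        if dy <= 0:
            raise ValueError("ground contact point must lie below the horizon")
        return focal_px * cam_height_m / dy
    ```

    Points nearer the horizon (small dy) map to much larger depths, which illustrates why an uncorrected shift of the horizon under camera pitch changes would bias the estimated distance to obstacles.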

    Traditional and new principles of perceptual grouping

    Perceptual grouping refers to the process of determining which regions and parts of the visual scene belong together as parts of higher-order perceptual units such as objects or patterns. In the early 20th century, Gestalt psychologists identified a set of classic grouping principles which specified how some image features lead to grouping between elements, given that all other factors are held constant. Modern vision scientists have expanded this list to cover a wide range of image features and have also emphasized the importance of learning and other non-image factors. Unlike early Gestalt accounts, which were based largely on visual demonstrations, modern theories are often explicitly quantitative and involve detailed models of how various image features modulate grouping. Work has also been done to understand the rules by which different grouping principles integrate to form a final percept. This chapter gives an overview of the classic principles, modern developments in understanding them, and new principles and the evidence for them. There is also discussion of some of the larger theoretical issues about grouping, such as at what stage of visual processing it occurs and what types of neural mechanisms may implement grouping principles.
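    As a toy illustration of how a grouping principle can be made explicitly quantitative, the classic proximity principle can be modeled as transitive clustering of elements under a distance threshold (the function and threshold are illustrative assumptions, not a model from the chapter):

    ```python
    def group_by_proximity(points, threshold):
        """Group 2-D points whose pairwise distance is <= threshold,
        taking the transitive closure (a chain of near neighbours is one group)."""
        parent = list(range(len(points)))  # union-find forest

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                if (dx * dx + dy * dy) ** 0.5 <= threshold:
                    parent[find(i)] = find(j)  # merge the two clusters

        groups = {}
        for i in range(len(points)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())
    ```

    For example, four collinear dots spaced 1, 9, and 1 units apart fall into two pairs under a threshold of 2, matching the intuitive percept of two clusters.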

    Egocentric Spatial Representation in Action and Perception

    Neuropsychological findings used to motivate the “two visual systems” hypothesis have been taken to endanger a pair of widely accepted claims about spatial representation in visual experience. The first is the claim that visual experience represents 3-D space around the perceiver using an egocentric frame of reference. The second is the claim that there is a constitutive link between the spatial contents of visual experience and the perceiver’s bodily actions. In this paper, I carefully assess three main sources of evidence for the two visual systems hypothesis and argue that the best interpretation of the evidence is in fact consistent with both claims. I conclude with some brief remarks on the relation between visual consciousness and rational agency.

    Edge-region grouping in figure-ground organization and depth perception.

    Edge-region grouping (ERG) is proposed as a unifying and previously unrecognized class of relational information that influences figure-ground organization and perceived depth across an edge. ERG occurs when the edge between two regions is differentially grouped with one region based on classic principles of similarity grouping. The ERG hypothesis predicts that the grouped side will tend to be perceived as the closer, figural region. Six experiments are reported that test the predictions of the ERG hypothesis for six similarity-based factors: common fate, blur similarity, color similarity, orientation similarity, proximity, and flicker synchrony. All six factors produce the predicted effects, although to different degrees. In a seventh experiment, the strengths of these figural/depth effects were found to correlate highly with the strength of explicit grouping ratings of the same visual displays. The relations of ERG to prior results in the literature are discussed, and possible reasons for ERG-based figural/depth effects are considered. We argue that grouping processes mediate at least some of the effects we report here, although ecological explanations are also likely to be relevant in the majority of cases.