47,245 research outputs found

    View Direction, Surface Orientation and Texture Orientation for Perception of Surface Shape

    Textures are commonly used to enhance the representation of shape in non-photorealistic rendering applications such as medical drawings. Textures that have elongated linear elements appear to be superior to random textures in that they can, by the way they conform to the surface, reveal the surface shape. We observe that the shape-following hachure marks commonly used in cartography and copper-plate illustration are locally similar in effect to the lines that can be generated by the intersection of a set of parallel planes with a surface. We use this as a basis for investigating the relationships between view direction, texture orientation and surface orientation in affording surface shape perception. We report two experiments using parallel-plane textures. The results show that textures constructed from planes more nearly orthogonal to the line of sight tend to be better at revealing surface shape. Also, viewing surfaces from an oblique view is much better for revealing surface shape than viewing them from directly above.
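
    The texture construction described in this abstract can be illustrated with a short sketch: for a heightfield surface z = f(x, y) and a family of parallel planes with unit normal n and spacing d, the texture lines are the level sets of the plane-normal coordinate n·(x, y, z) sampled on the surface. The bump surface, plane normal, and spacing in the code below are illustrative assumptions, not the stimuli used in the experiments.

```python
# Sketch: texture lines from intersecting a family of parallel planes with a
# heightfield surface z = f(x, y). The surface and plane parameters are
# illustrative assumptions, not the experimental stimuli.
import numpy as np
import matplotlib.pyplot as plt

# Example surface: a smooth bump (hypothetical stand-in for the test surfaces).
x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
z = np.exp(-(x**2 + y**2))

# Parallel planes with unit normal n and spacing d: their intersections with the
# surface are the level sets of the plane-normal coordinate evaluated on it.
n = np.array([0.0, 0.5, np.sqrt(0.75)])   # planes tilted away from the z-axis
d = 0.15                                  # spacing between adjacent planes
coord = n[0] * x + n[1] * y + n[2] * z    # signed distance along n (up to scale)

levels = np.arange(coord.min(), coord.max(), d)
plt.contour(x, y, coord, levels=levels, colors="k", linewidths=0.5)
plt.gca().set_aspect("equal")
plt.title("Parallel-plane texture lines on a bump surface (top-down view)")
plt.show()
```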

    Shape from periodic texture using the eigenvectors of local affine distortion

    This paper shows how the local slant and tilt angles of regularly textured curved surfaces can be estimated directly, without the need for iterative numerical optimization. We work in the frequency domain and measure texture distortion using the affine distortion of the pattern of spectral peaks. The key theoretical contribution is to show that the directions of the eigenvectors of the affine distortion matrices can be used to estimate local slant and tilt angles of tangent planes to curved surfaces. In particular, the leading eigenvector points in the tilt direction. Although not as geometrically transparent, the direction of the second eigenvector can be used to estimate the slant direction. The required affine distortion matrices are computed using the correspondences between spectral peaks, established on the basis of their energy ordering. We apply the method to a variety of real-world and synthetic imagery.
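
    A minimal sketch of the eigenvector idea follows, assuming the affine distortion matrix is estimated by least squares from a few corresponding spectral peaks. The peak coordinates below are made up for illustration (the paper establishes correspondences by energy ordering), the tilt estimate simply follows the leading eigenvector, and the recovery of slant from the second eigenvector is not reproduced here.

```python
# Sketch: estimate a 2x2 affine distortion matrix A from corresponding spectral
# peaks, then read an approximate tilt direction off its leading eigenvector.
# Peak coordinates are hypothetical values, not data from the paper.
import numpy as np

# Corresponding spectral-peak positions (rows) in a reference patch and a
# distorted patch, ordered by energy (illustrative numbers).
peaks_ref = np.array([[0.20, 0.00], [0.00, 0.20], [0.20, 0.20]])
peaks_obs = np.array([[0.21, 0.01], [0.02, 0.36], [0.23, 0.37]])

# Least-squares estimate of A such that peaks_obs ≈ peaks_ref @ A.T
X, *_ = np.linalg.lstsq(peaks_ref, peaks_obs, rcond=None)
A = X.T

# Eigen-decomposition of the distortion; sort by |eigenvalue|, largest first.
eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(-np.abs(eigvals))
leading = np.real(eigvecs[:, order[0]])   # real part taken defensively

tilt = np.degrees(np.arctan2(leading[1], leading[0]))
print(f"estimated tilt direction ≈ {tilt:.1f} degrees")
```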

    A Method for the Perceptual Optimization of Complex Visualizations

    A common problem in visualization applications is the display of one surface overlying another. Unfortunately, it is extremely difficult to do this clearly and effectively. Stereoscopic viewing can help, but in order for us to be able to see both surfaces simultaneously, they must be textured, and the top surface must be made partially transparent. There is also abundant evidence that not all textures are equally effective in revealing surface shape, but there are no general guidelines describing the best set of textures to be used in this way. What makes the problem difficult to perceptually optimize is that there are a great many variables involved. Both foreground and background textures must be specified in terms of their component colors, texture element shapes, distributions, and sizes. Also to be specified is the degree of transparency for the foreground texture components. Here we report on a novel approach to creating perceptually optimal solutions to complex visualization problems, and we apply it to the overlapping surface problem as a test case. Our approach is a three-stage process. In the first stage we create a parameterized method for specifying a foreground and background pair of textures. In the second stage a genetic algorithm is applied to a population of texture pairs using subject judgments as a selection criterion. Over many trials effective texture pairs evolve. The third stage involves characterizing and generalizing the examples of effective textures. We detail this process and present some early results.
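
    The second stage can be sketched as a conventional genetic algorithm in which a human rating plays the role of the fitness function. The genome length, parameter ranges, selection scheme, and rating prompt below are illustrative assumptions, not the settings used in the study; running the script prompts for a rating per texture pair.

```python
# Minimal sketch of a genetic algorithm over parameterized texture pairs,
# with a human judgment standing in as the fitness function.
import random

N_PARAMS = 12          # e.g. colors, element shape, size, density, transparency
POP_SIZE = 20
MUTATION_RATE = 0.1

def random_genome():
    return [random.random() for _ in range(N_PARAMS)]

def rate_by_subject(genome):
    # In the real experiment a texture pair would be rendered from the genome
    # and a subject would judge how well both surfaces can be seen.
    print("parameters:", [round(g, 2) for g in genome])
    return float(input("rating 1-10: "))

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [random.random() if random.random() < MUTATION_RATE else v
            for v in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(5):                          # a handful of trials
    scored = sorted(population, key=rate_by_subject, reverse=True)
    parents = scored[:POP_SIZE // 2]                 # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
```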

    The effects of viewpoint on the virtual space of pictures

    Pictorial displays whose primary purpose is to convey accurate information about the 3-D spatial layout of an environment are discussed. How, and how well, pictures can convey such information is also discussed. It is suggested that picture perception is not best approached as a unitary, indivisible process. Rather, it is a complex process depending on multiple, partially redundant, interacting sources of visual information for both the real surface of the picture and the virtual space beyond. Each picture must be assessed for the particular information that it makes available. This will determine how accurately the virtual space represented by the picture is seen, as well as how it is distorted when seen from the wrong viewpoint.

    The perception of three-dimensionality across continuous surfaces

    The apparent three-dimensionality of a viewed surface presumably corresponds to several internal perceptual quantities, such as surface curvature, local surface orientation, and depth. These quantities are mathematically related for points within the silhouette bounds of a smooth, continuous surface. For instance, surface curvature is related to the rate of change of local surface orientation, and surface orientation is related to the local gradient of distance. It is not clear to what extent these 3D quantities are determined directly from image information rather than indirectly from mathematically related forms, by differentiation or by integration within boundary constraints. An open empirical question, for example, is to what extent surface curvature is perceived directly, and to what extent it is quantitative rather than qualitative. In addition to surface orientation and curvature, one derives an impression of depth, i.e., variations in apparent egocentric distance. A static orthographic image is essentially devoid of depth information, and any quantitative depth impression must be inferred from surface orientation and other sources. Such conversion of orientation to depth does appear to occur, and even to prevail over stereoscopic depth information under some circumstances.
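
    The mathematical relations mentioned above can be made concrete for a depth map z(x, y): surface orientation (slant and tilt) follows from the first derivatives of depth, and curvature from the second derivatives. The bump surface below is an arbitrary illustration, not a stimulus from the literature.

```python
# Sketch: orientation and curvature derived from a depth map by differentiation.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
z = 0.5 * np.exp(-4 * (x**2 + y**2))             # depth (distance) map

zy, zx = np.gradient(z, y[:, 0], x[0, :])        # first derivatives -> orientation
slant = np.degrees(np.arctan(np.hypot(zx, zy)))  # angle between normal and view axis
tilt = np.degrees(np.arctan2(zy, zx))            # direction of steepest depth change

zyy, zyx = np.gradient(zy, y[:, 0], x[0, :])     # second derivatives -> curvature
zxy, zxx = np.gradient(zx, y[:, 0], x[0, :])
mean_curv = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) \
            / (2 * (1 + zx**2 + zy**2) ** 1.5)   # mean curvature of the graph

print("max slant (deg):", slant.max())
print("mean curvature range:", mean_curv.min(), mean_curv.max())
```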

    Bodily awareness and novel multisensory features

    According to the decomposition thesis, perceptual experiences resolve without remainder into their different modality-specific components. Contrary to this view, I argue that certain cases of multisensory integration give rise to experiences representing features of a novel type. Through the coordinated use of bodily awareness—understood here as encompassing both proprioception and kinaesthesis—and the exteroceptive sensory modalities, one becomes perceptually responsive to spatial features whose instances couldn’t be represented by any of the contributing modalities functioning in isolation. I develop an argument for this conclusion focusing on two cases: 3D shape perception in haptic touch and experiencing an object’s egocentric location in crossmodally accessible, environmental space.

    Complexity, rate, and scale in sliding friction dynamics between a finger and textured surface.

    Sliding friction between the skin and a touched surface is highly complex, but lies at the heart of our ability to discriminate surface texture through touch. Prior research has elucidated neural mechanisms of tactile texture perception, but our understanding of the nonlinear dynamics of frictional sliding between the finger and textured surfaces, from which the neural signals that encode texture originate, is incomplete. To address this, we compared measurements from human fingertips sliding against textured counter surfaces with predictions of numerical simulations of a model finger that resembled a real finger, with similar geometry, tissue heterogeneity, hyperelasticity, and interfacial adhesion. Modeled and measured forces exhibited similar complex, nonlinear sliding friction dynamics, force fluctuations, and prominent regularities related to the surface geometry. We comparatively analysed measured and simulated force patterns in matched conditions using linear and nonlinear methods, including recurrence analysis. The model had greatest predictive power for faster sliding and for surface textures with length scales greater than about one millimeter. This could be attributed to the tendency of sliding at slower speeds, or on finer surfaces, to complexly engage fine features of skin or surface, such as fingerprints or surface asperities. The results elucidate the dynamical forces felt during tactile exploration and highlight the challenges involved in the biological perception of surface texture via touch.
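
    The recurrence analysis of a force trace can be sketched as follows: delay-embed the scalar friction signal into state vectors, threshold the pairwise distances between embedded states, and summarize the resulting recurrence matrix. The synthetic signal, embedding dimension, delay, and threshold below are illustrative assumptions, not the values used in the study.

```python
# Sketch: recurrence analysis of a (synthetic) sliding-friction force trace.
import numpy as np

# Synthetic stand-in for a measured friction force (periodic texture + noise).
t = np.linspace(0, 0.8, 800)
force = 0.5 * np.sin(2 * np.pi * 40 * t) + 0.05 * np.random.randn(t.size)

# Delay embedding of the scalar signal into m-dimensional state vectors.
m, tau = 3, 5
n = force.size - (m - 1) * tau
states = np.stack([force[i * tau : i * tau + n] for i in range(m)], axis=1)

# Recurrence matrix: 1 where two embedded states are closer than epsilon.
dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
eps = 0.1 * dists.max()
recurrence = (dists < eps).astype(np.uint8)

# Recurrence rate, one simple recurrence-quantification measure.
print("recurrence rate:", recurrence.mean())
```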

    Texture Segregation By Visual Cortex: Perceptual Grouping, Attention, and Learning

    A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes such as discontinuities in orientation and texture flow curvature, as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. Object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification accuracies range from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
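
    The front end of such a model can be sketched as a multi-scale, multi-orientation Gabor filter bank producing per-pixel texture features. The dART classifier, grouping, filling-in, and attentional feedback stages are not reproduced here, and the filter parameters and toy image below are illustrative assumptions rather than the model's actual settings.

```python
# Sketch: oriented multi-scale filter bank (even Gabor filters) for per-pixel
# texture features. Parameters are illustrative, not those of the full model.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gabor_kernel(freq, theta, sigma, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))    # Gaussian envelope
    return env * np.cos(2 * np.pi * freq * xr)       # even (cosine) Gabor

def texture_features(image, freqs=(0.1, 0.2), n_orient=4, sigma=4.0):
    feats = []
    for f in freqs:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            resp = convolve(image, gabor_kernel(f, theta, sigma), mode="reflect")
            feats.append(uniform_filter(np.abs(resp), size=9))  # local energy
    return np.stack(feats, axis=-1)        # H x W x (len(freqs) * n_orient)

# Toy image: two abutting oriented textures (stand-in for a Brodatz pair).
yy, xx = np.mgrid[0:128, 0:128]
img = np.where(xx < 64, np.sin(0.4 * xx), np.sin(0.4 * yy))
features = texture_features(img)
print(features.shape)
```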

    Vision, Action, and Make-Perceive

    In this paper, I critically assess the enactive account of visual perception recently defended by Alva Noë (2004). I argue inter alia that the enactive account falsely identifies an object’s apparent shape with its 2D perspectival shape; that it mistakenly assimilates visual shape perception and volumetric object recognition; and that it seriously misrepresents the constitutive role of bodily action in visual awareness. I argue further that noticing an object’s perspectival shape involves a hybrid experience combining both perceptual and imaginative elements – an act of what I call ‘make-perceive’.

    Edge-region grouping in figure-ground organization and depth perception.

    Edge-region grouping (ERG) is proposed as a unifying and previously unrecognized class of relational information that influences figure-ground organization and perceived depth across an edge. ERG occurs when the edge between two regions is differentially grouped with one region based on classic principles of similarity grouping. The ERG hypothesis predicts that the grouped side will tend to be perceived as the closer, figural region. Six experiments are reported that test the predictions of the ERG hypothesis for six similarity-based factors: common fate, blur similarity, color similarity, orientation similarity, proximity, and flicker synchrony. All six factors produce the predicted effects, although to different degrees. In a seventh experiment, the strengths of these figural/depth effects were found to correlate highly with the strength of explicit grouping ratings of the same visual displays. The relations of ERG to prior results in the literature are discussed, and possible reasons for ERG-based figural/depth effects are considered. We argue that grouping processes mediate at least some of the effects we report here, although ecological explanations are also likely to be relevant in the majority of cases.