
    Perceiving environmental structure from optical motion

    Generally speaking, one of the most important sources of optical information about environmental structure is the deforming optical pattern produced by the movements of the observer (pilot) or of environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to the objects' orientations and positions in 3D space relative to the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.
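A minimal sketch of this geometry, assuming a pinhole camera with unit focal length and purely translational observer motion (the point cloud and velocity values below are arbitrary illustrative choices): the analytic image-velocity field u = (x*Tz - f*Tx)/Z, v = (y*Tz - f*Ty)/Z can be checked against a finite-difference approximation of the projected motion.

```python
import numpy as np

def project(points, f=1.0):
    """Pinhole projection of 3-D points (N, 3) onto the image plane."""
    return f * points[:, :2] / points[:, 2:3]

def translational_flow(points, T, f=1.0):
    """Analytic optic flow for an observer translating with velocity T:
    points move at -T relative to the camera, giving
    u = (x*Tz - f*Tx)/Z and v = (y*Tz - f*Ty)/Z."""
    x, y = project(points, f).T
    Z = points[:, 2]
    u = (x * T[2] - f * T[0]) / Z
    v = (y * T[2] - f * T[1]) / Z
    return np.stack([u, v], axis=1)

# Compare with a finite-difference approximation of the image motion.
rng = np.random.default_rng(0)
points = rng.uniform([-1.0, -1.0, 2.0], [1.0, 1.0, 5.0], size=(100, 3))
T = np.array([0.1, -0.05, 0.2])   # observer velocity (illustrative)
dt = 1e-6
flow_fd = (project(points - T * dt) - project(points)) / dt
```

Nearer points (smaller Z) produce larger image motion for the same observer translation, which is exactly the depth information carried by the deforming optical pattern.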

    A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration

    The ability to build maps is a key functionality for the majority of mobile robots. A central ingredient of most mapping systems is the registration, or alignment, of the recorded sensor data. In this paper, we present a general methodology for photometric registration that can deal with multiple different cues, and we provide examples of registering RGBD as well as 3D LIDAR data. In contrast to popular point cloud registration approaches such as ICP, our method does not rely on explicit data association and exploits multiple modalities such as raw range and image data streams. Color, depth, and normal information are handled in a uniform manner, and the registration is obtained by minimizing the pixel-wise difference between two multi-channel images. We developed a flexible and general framework, implemented our approach within it, and released the implementation as open-source C++ code. The experiments show that our approach allows for an accurate registration of the sensor data without requiring explicit data association or model-specific adaptations to datasets or sensors. Our approach exploits the different cues in a natural and consistent way, and the registration can be done at frame rate for a typical range or imaging sensor.
    Comment: 8 pages
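The core idea, minimizing a pixel-wise photometric error rather than matching explicit point correspondences, can be illustrated with a deliberately simplified 1-D sketch (a brute-force search over integer shifts; the paper's actual method is a continuous, multi-channel optimization):

```python
import numpy as np

def photometric_align(reference, moving, max_shift=20):
    """Find the integer shift that minimizes the pixel-wise squared
    intensity difference between two signals. Note that no explicit
    point-to-point data association is needed."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((reference - np.roll(moving, s)) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Toy single-channel example; a multi-cue variant would simply sum the
# error over several channels (color, depth, normals).
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
moving = np.roll(signal, 7)
shift = photometric_align(signal, moving)   # the -7 correction shift
```

The returned shift is the correction that maps `moving` back onto `reference`; summing the same error over stacked channels is what makes the approach naturally multi-cue.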

    3-D surface modelling of the human body and 3-D surface anthropometry

    This thesis investigates three-dimensional (3-D) surface modelling of the human body and 3-D surface anthropometry. These are two separate, but closely related, areas. 3-D surface modelling is an essential technology for representing and describing the surface shape of an object on a computer. 3-D surface modelling of the human body has wide applications in engineering design, work space simulation, the clothing industry, medicine, biomechanics and animation. These applications require increasingly realistic surface models of the human body. 3-D surface anthropometry is a new interdisciplinary subject. It is defined in this thesis as the art, science, and technology of acquiring, modelling and interrogating 3-D surface data of the human body. [Continues.]
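As a concrete illustration of what "representing the surface shape of an object on a computer" typically means, a scanned surface is often stored as a triangle mesh (a vertex array plus a triangle index array), from which quantities such as surface area can be interrogated directly. The toy mesh below, a unit square, is a stand-in for a real scan patch:

```python
import numpy as np

# A triangle mesh: vertices (N, 3) and triangles (M, 3) indexing into them.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])
triangles = np.array([[0, 1, 2],
                      [0, 2, 3]])

def mesh_area(vertices, triangles):
    """Total surface area: half the norm of each triangle's edge cross product."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

area = mesh_area(vertices, triangles)   # 1.0 for the unit square
```

Girths, volumes, and other anthropometric measurements are interrogated from the same mesh representation in an analogous way.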

    Misperception of rigidity from actively generated optic flow

    It is conventionally assumed that the goal of the visual system is to derive a perceptual representation that is a veridical reconstruction of the external world: a reconstruction that leads to optimal accuracy and precision of metric estimates, given sensory information. For example, 3-D structure is thought to be veridically recovered from optic flow signals in combination with egocentric motion information and assumptions of the stationarity and rigidity of the external world. This theory predicts veridical perceptual judgments under conditions that mimic natural viewing, while ascribing nonoptimality under laboratory conditions to unreliable or insufficient sensory information (for example, the lack of natural and measurable observer motion). In two experiments, we contrasted this optimal theory with a heuristic theory that predicts the derivation of perceived 3-D structure based on the velocity gradients of the retinal flow field, without the use of egomotion signals or a rigidity prior. Observers viewed optic flow patterns generated by their own motions relative to two surfaces and later viewed the same patterns while stationary. When the surfaces were part of a rigid structure, static observers systematically perceived a nonrigid structure, consistent with the predictions of both the optimal and the heuristic model. Contrary to the optimal model, moving observers also perceived nonrigid structures in situations where retinal and extraretinal signals, combined with a rigidity assumption, should have yielded a veridical rigid estimate. The perceptual biases were, however, consistent with a heuristic model based only on an analysis of the optic flow.
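The heuristic model's input, the first-order velocity gradients of the retinal flow field, can be computed numerically. The sketch below uses a synthetic affine flow (a pure deformation field, chosen purely for illustration) and recovers its divergence, curl, and deformation components:

```python
import numpy as np

# Synthetic affine flow field over the image: u = 0.3*x, v = -0.3*y
# (pure deformation: zero divergence, zero curl).
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
u = 0.3 * x
v = -0.3 * y

# Numerical velocity gradients of the flow (rows vary with y, columns with x).
du_dy, du_dx = np.gradient(u, y[:, 0], x[0, :])
dv_dy, dv_dx = np.gradient(v, y[:, 0], x[0, :])

divergence = du_dx + dv_dy                             # isotropic expansion
curl = dv_dx - du_dy                                   # rotation
deformation = np.hypot(du_dx - dv_dy, du_dy + dv_dx)   # shape-changing shear
```

In the heuristic account, perceived 3-D structure depends only on such flow gradients, not on egomotion signals, which is consistent with static and moving observers showing the same biases when presented with the same retinal flow.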

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    Real Time Structured Light and Applications


    Depth Perception in Humans and Animals

    This thesis is the product of three projects, all related to depth perception within the core discipline of vision science. The first project was collaborative work between the University of Durham and researchers at the University of California, Berkeley: Prof. Martin S. Banks and Bill Sprague at U.C. Berkeley, and Dr. Jurgen Schmoll and Prof. Gordon Love at the University of Durham. This project built on previous research investigating ocular adaptations in different land-dwelling vertebrate species. We found that pupil shape could be strongly predicted from the diel activity and trophic strategy of a species, and our simulations showed that multifocal pupils may extend depth of focus. The second project, also in collaboration with U.C. Berkeley (Prof. Martin S. Banks and Paul Johnson), was a study of 3D displays and of different approaches to reducing the vergence-accommodation conflict. Our results showed that a focus-correct adaptive system did mitigate the vergence-accommodation conflict, whereas monovision was less efficacious, which we attribute to a reduction in stereoacuity. The third project considered spherical aberration as a cue to the sign of defocus. We present simulations showing that the spatial frequency content of images on either side of focus differs, and suggest that this could, in principle, drive the accommodative process.
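The third project's claim can be reproduced in miniature with a 1-D Fourier-optics sketch (all aberration values are illustrative): with pure defocus, the point-spread functions on either side of focus are identical for a symmetric pupil, so the sign of defocus is ambiguous; adding spherical aberration makes the two PSFs, and hence the spatial frequency content of the retinal image, differ.

```python
import numpy as np

def psf_1d(defocus_waves, spherical_waves, n=512):
    """1-D point-spread function of a pupil with defocus (x^2) and
    spherical-aberration (x^4) wavefront terms, via the Fraunhofer
    approximation: PSF = |FFT(pupil)|^2."""
    x = np.linspace(-1.0, 1.0, n)            # normalized pupil coordinate
    phase = 2.0 * np.pi * (defocus_waves * x**2 + spherical_waves * x**4)
    pupil = np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft(pupil, 4 * n))  # zero-padded FFT
    psf = np.abs(field) ** 2
    return psf / psf.sum()

# Pure defocus: the two sides of focus are indistinguishable.
sym_plus = psf_1d(+0.5, 0.0)
sym_minus = psf_1d(-0.5, 0.0)

# With spherical aberration, the PSFs differ on either side of focus,
# providing a cue to the sign of defocus.
asym_plus = psf_1d(+0.5, 0.25)
asym_minus = psf_1d(-0.5, 0.25)
```

The asymmetry between `asym_plus` and `asym_minus` is the signed-defocus information that could, in principle, drive accommodation in the correct direction.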