
    Phenomenal regression to the real object in physical and virtual worlds

    © 2014, Springer-Verlag London. In this paper, we investigate a new approach to comparing physical and virtual size and depth percepts that captures the involuntary responses of participants to different stimuli in their field of view, rather than relying on their skill at judging size, reaching, or directed walking. We show, via an effect first observed in the 1930s, that participants asked to equate the perspective projections of disc objects at different distances make a systematic error that is individual in its extent yet comparable across the particular physical and virtual settings we have tested. Prior work has shown that this systematic error is difficult to correct, even when participants know it is likely to occur. In fact, in the real world, the error diminishes only as the available depth cues are artificially reduced. This makes the effect we describe a potentially powerful, intrinsic measure of VE quality that may ultimately contribute to our understanding of VE depth compression phenomena.
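    As a rough illustration of the matching task, the Python sketch below (names and numbers are ours, not the paper's) computes the disc radius that makes the perspective projections of a near and a far disc geometrically equal; phenomenal regression means observers' matches deviate from this value toward the true object size.

```python
import math

def matched_radius(near_radius, near_dist, far_dist):
    """Radius a disc at far_dist must have so that its perspective
    projection (visual angle) equals that of the near disc."""
    angle = 2.0 * math.atan(near_radius / near_dist)  # visual angle of near disc
    return far_dist * math.tan(angle / 2.0)           # invert at far distance

# Hypothetical example: a 5 cm disc at 50 cm, comparison disc at 200 cm.
print(matched_radius(5.0, 50.0, 200.0))  # 20.0 cm: the geometric match
# Phenomenal regression: observers' settings typically fall between this
# geometric match (20 cm) and the disc's true size (5 cm), by an amount
# that is stable for a given individual.
```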

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data on visually guided steering, obstacle avoidance, and route selection. Funding: Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
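    The attractor/repeller interaction can be caricatured with the kind of behavioral steering dynamics the model simulates (in the spirit of Fajen and Warren's steering law); the sketch below is a simplified illustrative reduction, not the cortical model itself, and all constants are assumptions.

```python
import math

def heading_rate(phi, goal_dir, obstacles, k_goal=1.0, k_obs=2.0, c=1.0):
    """d(phi)/dt for current heading phi (radians): the goal attracts
    heading, obstacles repel it, with repulsion fading as angular offset
    and obstacle distance grow. All constants are illustrative."""
    dphi = -k_goal * (phi - goal_dir)                 # goal as attractor
    for theta, dist in obstacles:
        off = phi - theta
        dphi += k_obs * off * math.exp(-c * abs(off)) * math.exp(-0.5 * dist)
    return dphi                                       # obstacles as repellers

# Euler integration: heading settles at a compromise between goal
# attraction and repulsion from a nearby obstacle, i.e., a detour.
phi = 0.0
for _ in range(400):
    phi += 0.05 * heading_rate(phi, goal_dir=0.3, obstacles=[(0.1, 2.0)])
print(round(phi, 2))  # final heading deflected away from the obstacle
```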

    Joint Material and Illumination Estimation from Photo Sets in the Wild

    Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e., diffuse and specular) and illumination (i.e., environment maps). On the one hand, current methods that produce very high-fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency lighting or diffuse materials. In this work, we propose to use a set of photographs to jointly estimate non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e., environment), and different materials under the same illumination, provides valuable constraints that can be exploited to yield a high-quality solution (i.e., specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. The core of this approach is an optimization procedure that uses two neural networks, trained on synthetic images, to predict good gradients in parametric space given observations of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.
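    The optimization core can be sketched as a learned-gradient descent: a network trained on synthetic renderings replaces analytic derivatives of the image-formation model by predicting a descent direction in the material/illumination parameter space. This is a hypothetical reduction of the paper's procedure; refine, grad_net, and all hyperparameters are our own illustrative names, not the authors' API.

```python
import torch

def refine(params, observations, grad_net, steps=200, lr=1e-2):
    """Learned-gradient descent: grad_net (trained offline on synthetic
    renderings) maps current parameters plus an observed image to a
    descent direction in parameter space. Everything here is
    illustrative."""
    for _ in range(steps):
        step = torch.zeros_like(params)
        for obs in observations:                 # photos that share a
            step = step + grad_net(params, obs)  # material or environment
        params = params - lr * step / len(observations)
    return params
```

    Averaging predicted gradients over all photos is one simple way to realize the paper's cross-linking idea: each photo constrains the shared material and environment parameters simultaneously.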

    OmniSCV: An omnidirectional synthetic image generator for computer vision

    Omnidirectional and 360° images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from a single image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a large number of images is essential for the correct training of learning-based computer vision algorithms. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We support a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central-projection systems such as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are synthesized virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested, such as line extraction from dioptric and catadioptric central images, 3D layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
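    For the equirectangular model the tool supports, the standard pixel-to-ray back-projection looks like the sketch below (this is the textbook mapping, not OmniSCV's own code):

```python
import numpy as np

def equirect_ray(u, v, width, height):
    """Back-project pixel (u, v) of an equirectangular panorama to a
    unit ray in camera coordinates; the image spans 360 deg in
    longitude and 180 deg in latitude."""
    lon = (u / width) * 2.0 * np.pi - np.pi   # longitude in [-pi, pi]
    lat = np.pi / 2.0 - (v / height) * np.pi  # latitude in [pi/2, -pi/2]
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

# With the simulator's per-pixel ground-truth depth d, the 3D point seen
# at (u, v) is simply d * equirect_ray(u, v, W, H), which is what makes
# pixel-precise geometric ground truth straightforward to generate.
```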

    Virtual reality as an educational tool in interior architecture

    Ankara: Department of Interior Architecture and Environmental Design and the Institute of Fine Arts, Bilkent University, 1997. Thesis (Master's), Bilkent University, 1997. Includes bibliographical references. This thesis discusses the use of virtual reality technology as an educational tool in interior architectural design. As a result of this discussion, it is proposed that virtual reality can be of use in aiding three-dimensional design and visualization, and may speed up the design process. It may also be of help in getting the designers/students more involved in their design projects. Virtual reality can enhance the capacity of designers to design in three dimensions. The virtual reality environment used in designing should be capable of aiding both the design and the presentation process. The tradeoffs of the technology, newly emerging trends, and future directions in virtual reality are discussed. Aktaş, Orkun (M.S.).

    The effects of changing projection geometry on perception of 3D objects on and around tabletops

    Funding: Natural Sciences and Engineering Research Council of Canada; Networks of Centres of Excellence of Canada. Displaying 3D objects on horizontal displays can cause problems in the way that the virtual scene is presented on the 2D surface; inappropriate choices in how 3D is represented can lead to distorted images and incorrect object interpretations. We present four experiments that test 3D perception. We varied projection geometry in three ways: type of projection (perspective/parallel), separation between the observer's point of view and the projection's center (discrepancy), and the presence of motion parallax (with/without parallax). Projection geometry had strong effects that differed across tasks. Reducing discrepancy is desirable for orientation judgments, but not for object recognition or internal angle judgments. Using a fixed center of projection above the table reduces error and improves accuracy in most tasks. The results have far-reaching implications for the design of 3D views on tables, in particular for multi-user applications where projections that appear correct for one person will not be perceived correctly by another.
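    A minimal sketch of the projection choices being varied: parallel projection ignores the center of projection entirely, while perspective projection traces rays through a specific center, and 'discrepancy' arises when the rendering center differs from a viewer's actual eye position. Names and example coordinates below are ours.

```python
import numpy as np

def project_to_table(point, cop=None):
    """Project a 3D point (metres, z up) onto the z = 0 tabletop.
    cop=None gives a parallel (orthographic) projection; otherwise rays
    are traced from the centre of projection cop."""
    p = np.asarray(point, dtype=float)
    if cop is None:
        return p[:2]                  # parallel: simply drop the height
    c = np.asarray(cop, dtype=float)
    t = c[2] / (c[2] - p[2])          # ray from cop through p hits z = 0
    return (c + t * (p - c))[:2]

corner = (0.10, 0.10, 0.20)           # a point 20 cm above the table
print(project_to_table(corner))                        # parallel projection
print(project_to_table(corner, cop=(0.0, -0.5, 1.0)))  # perspective, one CoP
# Discrepancy: if the image is rendered for this CoP but viewed from a
# different eye position, the drawing on the table no longer reprojects
# to the intended 3D shape for that viewer.
```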