
    Defining Reality in Virtual Reality: Exploring Visual Appearance and Spatial Experience Focusing on Colour

    Today, different actors in the design process have difficulty communicating, visualizing and predicting how a not-yet-built environment will be experienced. Visually believable virtual environments (VEs) can make it easier for architects, users and clients to participate in the planning process. This thesis deals with the difficulties of translating reality into digital counterparts, focusing on visual appearance (particularly colour) and spatial experience. The goal is to develop knowledge of how different aspects of a VE, especially light and colour, affect the spatial experience, and thus to contribute to a better understanding of the prerequisites for visualizing believable spatial VR models. The main aims are to 1) identify problems and test solutions for simulating realistic spatial colour and light in VR; and 2) develop knowledge of the spatial conditions in VR required to convey believable experiences, and evaluate different ways of visualizing spatial experiences. The studies are conducted from an architectural perspective; i.e. the whole of the spatial setting is considered, which is a complex task. One important contribution therefore concerns the methodology. Different approaches were used: 1) a literature review of relevant research areas; 2) a comparison between existing studies on colour appearance in 2D vs 3D; 3) a comparison between a real room and different VR simulations; 4) elaborations with an algorithm for colour correction; 5) reflections in action on a demonstrator for correct appearance and experience; and 6) an evaluation of texture styles with non-photorealistic expressions. The results showed various problems related to the translation and comparison of reality to VR. The studies pointed out the significance of inter-reflections, colour variations, perceived colour of light, and shadowing for the visual appearance of real rooms. Some differences in VR were connected to arbitrary parameter settings in the software, heavily simplified chromatic information on illumination, and incorrect inter-reflections. The models were experienced differently depending on the application. Various spatial differences between reality and VR could be resolved by visual compensation. The study with texture styles pointed out the significance of varying visual expressions in VR models.
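The abstract mentions "elaborations with an algorithm for colour correction" without giving details. As a hedged sketch of one common way such a correction can be realized, the snippet below fits an affine RGB mapping between rendered and measured (real-room) reference colours by least squares; the affine model and all function names are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def fit_colour_correction(rendered, measured):
    """Fit a 3x4 affine matrix mapping rendered RGB values to measured
    (real-room) RGB values via least squares.

    rendered, measured: (N, 3) arrays of corresponding colour samples.
    """
    # Augment rendered colours with a constant term for the offset.
    A = np.hstack([rendered, np.ones((rendered.shape[0], 1))])  # (N, 4)
    # Solve A @ M ~ measured in the least-squares sense; M is (4, 3).
    M, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return M.T  # (3, 4)

def apply_colour_correction(M, image):
    """Apply the affine correction to an (H, W, 3) image in [0, 1]."""
    h, w, _ = image.shape
    flat = image.reshape(-1, 3)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))])  # (P, 4)
    out = flat @ M.T                                       # (P, 3)
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)
```

In practice the reference samples would come from colour patches measured in the real room and sampled in the corresponding VR rendering.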

    Gloss Perception in Painterly and Cartoon Rendering

    Depictions with traditional media such as painting and drawing represent scene content in a stylized manner. It is unclear, however, how well stylized images depict scene properties like shape, material and lighting. In this paper, we describe the first study of material perception in stylized images (specifically painting and cartoon) and use non-photorealistic rendering algorithms to evaluate how such stylization alters the perception of gloss. Our study reveals a compression of the range of representable gloss in stylized images, so that shiny materials appear more diffuse in painterly rendering, while diffuse materials appear shinier in cartoon images. From our measurements we estimate the function that maps realistic gloss parameters to their perception in a stylized rendering. This mapping allows users of NPR algorithms to predict the perception of gloss in their images. The inverse of this function exaggerates gloss properties to make the contrast between materials in a stylized image more faithful. We conducted our experiment both in a lab and on a crowdsourcing website. While crowdsourcing allowed us to quickly design our pilot study, a lab experiment provides more control over how subjects perform the task. We provide a detailed comparison of the results obtained with the two approaches and discuss their advantages and drawbacks for studies like ours.
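The mapping from realistic gloss parameters to their stylized perception, and the inverse used for exaggeration, can be illustrated with a deliberately simplified model. The linear form and the coefficients below are placeholders, not the paper's fitted function; they only show how compression of the gloss range and its inversion fit together.

```python
def perceived_gloss(g, a=0.6, b=0.2):
    """Hypothetical linear model of gloss compression in stylized images:
    perceived = a * g + b, with a < 1 compressing the representable range
    (shiny materials look more diffuse, diffuse ones look shinier).
    a and b are illustrative values, not the paper's measurements."""
    return a * g + b

def exaggerate_gloss(g_target, a=0.6, b=0.2):
    """Invert the model: find the input gloss whose stylized appearance
    matches the intended gloss g_target."""
    return (g_target - b) / a
```

Feeding `exaggerate_gloss(g)` into the renderer would, under this toy model, make the stylized result appear with gloss `g`, which is the role the paper's inverse mapping plays.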


    Colour videos with depth : acquisition, processing and evaluation

    The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalent in computer vision and computer graphics are geometric models, which provide a wealth of information about represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I hence investigate a combination of videos and geometric models: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display. I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid – a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension which incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps – particularly in the presence of image noise. The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise. I show that these videos with depth empower a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information, like a proposed video relighting technique which requires high-quality surface normals to produce plausible results. 
In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically. These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model in a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies.
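The sensor-fusion pipeline's final step, a filter that raises the spatial resolution of the depth data under guidance from the high-resolution colour video, follows the general pattern of joint bilateral upsampling. Below is a minimal spatial-only sketch of that pattern; the dissertation's actual filter is spatiotemporal and also suppresses measurement noise, and all parameter values here are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, colour_hi, sigma_s=2.0, sigma_c=0.1, radius=4):
    """Upsample a low-resolution depth map guided by a high-resolution
    colour image: each output depth is a weighted average of nearby
    low-res samples, weighted by spatial distance and colour similarity,
    so depth edges snap to colour edges."""
    H, W = colour_hi.shape[:2]
    h, w = depth_lo.shape
    sy, sx = H / h, W / w
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / sy, x / sx          # position in the low-res grid
            y0, x0 = int(cy), int(cx)
            wsum = dsum = 0.0
            for j in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
                for i in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
                    ds = ((j - cy) ** 2 + (i - cx) ** 2) / (2 * sigma_s ** 2)
                    # colour difference between the output pixel and the
                    # colour at the low-res sample's position
                    cj = min(H - 1, int(j * sy))
                    ci = min(W - 1, int(i * sx))
                    dc = np.sum((colour_hi[y, x] - colour_hi[cj, ci]) ** 2) / (2 * sigma_c ** 2)
                    wgt = np.exp(-ds - dc)
                    wsum += wgt
                    dsum += wgt * depth_lo[j, i]
            out[y, x] = dsum / wsum
    return out
```

A production version would vectorize this and add a temporal term over neighbouring frames, which is where the temporal coherence described above comes from.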

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest release on the market of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, issues remain regarding how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve the proposed goal, this thesis presents a thorough investigation of existing literature and previous research, carried out systematically to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the environment characteristics shown to the viewer, and of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, the sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Building on these results, the investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a great improvement for remote observation when combined with eye-tracked VR headsets. For this purpose, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
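The eye-adapted HDR idea, driving the tone-mapping exposure from the luminance around the tracked gaze point, can be sketched as follows. This is a hedged illustration assuming a rectangular gaze window and a simple Reinhard-style global operator; the thesis's actual implementation is not specified in the abstract, and real systems would also smooth the adaptation over time.

```python
import numpy as np

def eye_adapted_exposure(hdr, gaze_xy, radius=32):
    """Tone-map an HDR frame with an exposure driven by the luminance
    around the tracked gaze point, mimicking how the eye adapts to the
    region it fixates.

    hdr: (H, W, 3) linear-light HDR image; gaze_xy: (x, y) pixel coords.
    """
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    gx, gy = gaze_xy
    y0, y1 = max(0, gy - radius), min(hdr.shape[0], gy + radius)
    x0, x1 = max(0, gx - radius), min(hdr.shape[1], gx + radius)
    # log-average luminance of the fixated region sets the adaptation level
    key = np.exp(np.mean(np.log(lum[y0:y1, x0:x1] + 1e-6)))
    scaled = lum / key
    # simple global tone curve, then rescale RGB to preserve chroma
    mapped = scaled / (1.0 + scaled)
    ratio = np.where(lum > 0, mapped / np.maximum(lum, 1e-6), 0.0)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```

Fixating a bright region raises the adaptation level and darkens the frame; fixating a dim region does the opposite, which is the behaviour an eye-tracked headset can exploit.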

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding and analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. Using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture and intensity information, along with the initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results, compared against published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
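The two-stage structure described above, labeling edge-free pixels first and then merging regions with similar characteristics, can be sketched on a single-channel image. This is an illustrative simplification assuming intensity-only merging and illustrative thresholds; the proposed framework additionally uses vector gradients, texture modeling and a multivariate refinement.

```python
import numpy as np
from scipy import ndimage

def initial_segmentation(image, grad_thresh=0.1, merge_thresh=0.08):
    """Two-stage sketch: (1) label connected components of low-gradient
    (edge-free) pixels and assign the remaining high-gradient pixels to
    the nearest labeled region; (2) merge regions whose mean intensities
    differ by less than merge_thresh."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    # 1) connected components of "flat" pixels form the initial region map
    labels, n = ndimage.label(grad < grad_thresh)
    if (labels == 0).any() and n > 0:
        # attach edge pixels to their nearest labeled neighbour
        nearest = ndimage.distance_transform_edt(labels == 0,
                                                 return_indices=True)[1]
        labels = labels[tuple(nearest)]
    # 2) fuse regions with similar mean intensity
    means = ndimage.mean(image, labels, index=np.arange(1, n + 1))
    remap = np.arange(n + 1)
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            if abs(means[a - 1] - means[b - 1]) < merge_thresh:
                remap[b] = remap[a]
    return remap[labels]
```

A multichannel version would replace the intensity difference with a multivariate distance over gradient, texture and spectral features, which is the role of the refinement procedure in the abstract.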