
    Simplifying collaboration in co-located virtual environments using the active-passive approach

    The design and implementation of co-located immersive virtual environments with equal interaction possibilities for all participants is a complex topic. The main problem, on a fundamental technical level, is the difficulty of providing perspective-correct images for each participant. There is consensus that the lack of a correct perspective view will negatively affect interaction fidelity and therefore also collaboration. Several research approaches focus on providing a correct perspective view to all participants to enable co-located work. However, these approaches are usually based either on custom hardware solutions that limit the number of users with a correct perspective view or on software solutions striving to eliminate or mitigate restrictions with custom image-generation approaches. In this paper, we investigate an often-overlooked approach to enabling collaboration for multiple users in an immersive virtual environment designed for a single user. The approach provides one (active) user with a perspective-correct view while other (passive) users receive visual cues that are not perspective-correct. We used this active-passive approach to investigate the limitations posed by assigning the viewpoint to only one user. The findings of our study, though inconclusive, revealed two curiosities. First, our results suggest that the location of target geometry is an important factor to consider when designing interaction, expanding on prior work that has studied only the relation between user positions. Second, compared with a baseline, there seems to be only a low cost involved in accepting the limitation of providing perspective-correct images to a single user during a coordinated work approach. These findings advance our understanding of collaboration in co-located virtual environments and suggest an approach to simplifying co-located collaboration.
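
    The technical core of the problem above is generating a perspective-correct image for a tracked head position. As a point of reference, the sketch below shows the standard generalized off-axis projection (after Kooima) that such single-user systems typically compute for the active user; the screen-corner and eye-position parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Perspective-correct (off-axis) frustum for a tracked eye.

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left) in
    world space; pe: tracked eye position. Returns an OpenGL-style 4x4
    projection matrix; the matching view transform (rotation into the
    screen basis plus translation by -pe) is omitted for brevity.
    """
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal

    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye-to-corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance

    # frustum extents on the near plane, asymmetric in general
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
```

    In the active-passive approach, passive users simply see whatever this frustum produces for the active user's head position, which is exactly what makes their visual cues perspective-incorrect.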

    Diagnosing perceptual distortion present in group stereoscopic viewing

    Stereoscopic displays are an increasingly prevalent tool for experiencing virtual environments, and the inclusion of stereo has the potential to improve distance perception within the virtual environment. When multiple users simultaneously view the same stereoscopic display, only one user experiences the projectively correct view of the virtual environment, and all other users view the same stereoscopic images while standing at locations displaced from the center of projection (CoP). This study was designed to evaluate the perceptual distortions caused by displacement from the CoP when viewing virtual objects in the context of a virtual scene containing stereo depth cues. Judgments of angles were distorted after leftward and rightward displacement from the CoP. Judgments of object depth were distorted after forward and backward displacement from the CoP. However, perceptual distortions of angle and depth were smaller than predicted by a ray-intersection model based on stereo viewing geometry. Furthermore, perceptual distortions were asymmetric, leading to different patterns of distortion depending on the direction of displacement. This asymmetry also conflicts with the predictions of the ray-intersection model. The presence of monocular depth cues might account for departures from model predictions.
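
    For readers who want the geometric baseline the study compares against, the following is a minimal sketch of a ray-intersection model of the kind described: the on-screen stereo image pair is fixed by the center of projection, and a displaced viewer's percept is predicted by intersecting the rays from the displaced eyes through those fixed screen points. The parameterization (screen in the plane z = 0, a 6.4 cm interpupillary distance) is an assumption for illustration, not the study's setup.

```python
import numpy as np

def predicted_percept(point, cop, viewer, ipd=0.064, screen_z=0.0):
    """Ray-intersection prediction of a displaced viewer's percept.

    point: virtual point rendered for eyes centered at the CoP (cop);
    viewer: actual (displaced) head position. Returns the midpoint of
    closest approach of the two rays from the displaced eyes through
    the fixed on-screen stereo images.
    """
    point, cop, viewer = (np.asarray(p, dtype=float) for p in (point, cop, viewer))

    def on_screen(eye, p):
        # intersect the eye -> p ray with the screen plane z = screen_z
        t = (screen_z - eye[2]) / (p[2] - eye[2])
        return eye + t * (p - eye)

    half = np.array([ipd / 2, 0.0, 0.0])
    # the stereo images are fixed: they were rendered for eyes at the CoP
    sL, sR = on_screen(cop - half, point), on_screen(cop + half, point)
    # the displaced viewer's eyes cast rays through those same screen points
    eL, eR = viewer - half, viewer + half
    dL, dR = sL - eL, sR - eR

    # closest approach of rays eL + t*dL and eR + s*dR (assumed non-parallel)
    w0 = eL - eR
    a, b, c = dL @ dL, dL @ dR, dR @ dR
    d, e = dL @ w0, dR @ w0
    denom = a * c - b * b
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return ((eL + t * dL) + (eR + s * dR)) / 2
```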

    Imaging methods for understanding and improving visual training in the geosciences

    Experience in the field is a critical educational component of every student studying geology. However, it is typically difficult to ensure that every student gets the necessary experience because of monetary and scheduling limitations. Thus, we proposed to create a virtual field trip based on an existing 10-day field trip to California taken as part of an undergraduate geology course at the University of Rochester. To assess the effectiveness of this approach, we also proposed to analyze the learning and observation processes of both students and experts during the real and virtual field trips. At sites intended for inclusion in the virtual field trip, we captured gigapixel-resolution panoramas by taking hundreds of images using custom-built robotic imaging systems. We gathered data to analyze the learning process by fitting each geology student and expert with a portable eye-tracking system that records a video of their eye movements and a video of the scene they are observing. An important component of analyzing the eye-tracking data requires mapping the gaze of each observer into a common reference frame. We have made progress towards developing a software tool that helps automate this procedure by using image feature tracking and registration methods to map the scene video frames from each eye-tracker onto a reference panorama for each site. For the purpose of creating a virtual field trip, we have a large-scale semi-immersive display system that consists of four tiled projectors, which have been colorimetrically and photometrically calibrated, and a curved widescreen display surface. We use this system to present the previously captured panoramas, which simulates the experience of visiting the sites in person. In terms of broader geology education and outreach, we have created an interactive website that uses Google Earth as the interface for visually exploring the panoramas captured for each site.
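
    The gaze-mapping step described above can be approximated with standard feature matching and homography estimation. The sketch below (OpenCV; the function and parameter names are our own assumptions) transfers a gaze point from one scene-video frame onto a reference panorama. A single homography is only strictly valid when the camera motion is approximately rotational or the scene is distant, so treat this as a simplified stand-in for the authors' tool.

```python
import cv2
import numpy as np

def map_gaze_to_panorama(frame, panorama, gaze_xy):
    """Transfer a gaze point from a scene-video frame to a panorama.

    Estimates a frame-to-panorama homography from ORB feature matches,
    then maps the (x, y) gaze coordinate through it.
    """
    gray_f = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray_p = cv2.cvtColor(panorama, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(4000)
    kf, df = orb.detectAndCompute(gray_f, None)
    kp, dp = orb.detectAndCompute(gray_p, None)

    # cross-checked Hamming matching, keep the 200 strongest matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(df, dp), key=lambda m: m.distance)[:200]

    src = np.float32([kf[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    gaze = np.float32([[gaze_xy]])                    # shape (1, 1, 2)
    return cv2.perspectiveTransform(gaze, H)[0, 0]
```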

    Videoscapes: Exploring Unstructured Video Collections


    Radiometric Compensation through Inverse Light Transport

    Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between a projector and a camera to account for many illumination aspects, such as interreflections, refractions and defocus. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes the real-time compensation of captured local and global light modulations possible.
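
    At its simplest, the compensation described above inverts a linear light-transport model c = T p + e, where T maps projector pixels to camera pixels and e is the environment light. The toy-scale sketch below solves this with a dense least-squares solve rather than the precomputed GPU inverse the paper uses; the matrix sizes and the clamping strategy are illustrative assumptions.

```python
import numpy as np

def compensation_image(T, desired, environment, p_max=1.0):
    """Least-squares radiometric compensation from a light transport matrix.

    T: light transport matrix whose columns hold the camera's response to
    each projector pixel (including interreflections and other global
    effects); desired, environment: flattened camera images, the latter
    captured with the projector black. Solves T p = desired - environment
    for the projector input p and clamps it to the displayable range.
    """
    p, *_ = np.linalg.lstsq(T, desired - environment, rcond=None)
    return np.clip(p, 0.0, p_max)
```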

    Hack the Experience: Tools for Artists from Cognitive Science

    Hack The Experience will reframe your perspective on how your audience engages your work. This will happen as you learn how to control attention through spatial and time-based techniques that you can harness as you build immersive installations or as you think about how to best arrange your work in an exhibition. You’ll learn things about the senses and how they interface with attention so that you can build in visceral forms of interactivity, engage people’s empathetic responses, and frame their moods. This book is a dense bouillon-cube of techniques that you can adapt and apply to your personal practice, and it’s a book that will walk you step-by-step through skill sets from ethnography, cognitive science, and multi-modal metaphors. The core argument of this book is that art is a form of cognitive engineering and that the physical environment (or objects in the physical environment) can be shaped to maximize emotional and sensory experience. Many types of art will benefit from this handbook (because cognition is pervasive in our experience of art), but it is particularly relevant to immersive experiential works such as installations, participatory/interactive environments, performance art, curatorial practice, architecture and landscape architecture, complex durational works, and works requiring new models of documentation. These types of work benefit from the empirical findings of cognitive science because intentionally leveraging basic human cognition in artworks can give participants new ways of seeing the world that are cognitively relevant. This leveraging process provides a new layer in the construction of conceptually grounded works.

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium of presence and remote collaboration. However, capturing visual representations of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the employment of specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed through specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. At the same time, panoramas lie between 3D VEs and 2D images on a spectrum: they offer the accessibility of 2D images while preserving the surround representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render and stream data coming from heterogeneous cameras, with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information of a remote place and the dynamics within, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first telecommunication experiment compared our videos-in-context interface with fully-panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events. The study explored three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic contexts to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often more expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism and remote assistance.
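
    To make the notion of videos in panoramic context concrete, the sketch below shows one plausible piece of such a system: mapping a pixel of a video frame, given the camera's orientation and field of view, to its position in an equirectangular panorama. The camera model and parameter names are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np

def frame_pixel_to_panorama(u, v, w, h, yaw, pitch, fov_x, pano_w, pano_h):
    """Locate a video-frame pixel inside an equirectangular panorama.

    (u, v): pixel in a w x h frame from a camera with horizontal field of
    view fov_x, oriented by yaw/pitch (radians) relative to the panorama
    center. Returns (x, y) in a pano_w x pano_h equirectangular image.
    """
    f = (w / 2) / np.tan(fov_x / 2)              # focal length in pixels
    d = np.array([u - w / 2, v - h / 2, f])      # ray through the pixel
    d /= np.linalg.norm(d)

    # rotate the ray by the camera's pitch (about x) then yaw (about y)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = Ry @ Rx @ d

    lon = np.arctan2(d[0], d[2])                 # longitude in [-pi, pi]
    lat = np.arcsin(-d[1])                       # latitude (image y is down)
    x = (lon / (2 * np.pi) + 0.5) * pano_w
    y = (0.5 - lat / np.pi) * pano_h
    return x, y
```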

    Physiological system modelling

    Computer graphics has a major impact on our day-to-day life. It is used in diverse areas such as displaying the results of engineering and scientific computations and visualization, producing television commercials and feature films, simulating and analysing real-world problems, computer-aided design, and graphical user interfaces that increase the communication bandwidth between humans and machines. Scientific visualization is a well-established method for the analysis of data originating from scientific computations, simulations or measurements. This report presents the design and implementation of the 3Dgen software, developed by the author using OpenGL and the C language. 3Dgen was used to visualize three-dimensional cylindrical models such as pipes, and also, to a limited extent, in virtual endoscopy. Using the software, a model is created from centreline data entered by the user or read from the output of another program, stored in a plain text file. The model is constructed by drawing surface polygons between two adjacent centreline points. The software allows the user to view the internal and external surfaces of the model. It was designed to run on more than one operating system with minimal installation procedures, and since its size is very small it can be stored on a 1.44-megabyte floppy diskette. Depending on the processing speed of the PC, the software can generate models of any length and size. Compared to other packages, 3Dgen has minimal input procedures and was able to generate models with smooth bends. It has both modelling and virtual exploration features. For models with sharp bends, the software generates an overshoot.
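
    The construction described, surface polygons drawn between rings of vertices around adjacent centreline points, can be sketched as follows. This is a minimal reconstruction of the idea in Python rather than the original C/OpenGL code; the radius, ring resolution and frame choice are illustrative assumptions.

```python
import numpy as np

def tube_mesh(centreline, radius=1.0, sides=16):
    """Build a tube mesh around a 3D centreline, one vertex ring per point.

    Returns (vertices, quads); each quad joins matching ring vertices on
    adjacent centreline points, mirroring the surface-polygon construction.
    """
    pts = np.asarray(centreline, dtype=float)
    verts, quads = [], []
    up = np.array([0.0, 0.0, 1.0])
    for i, p in enumerate(pts):
        # tangent from neighbouring points (one-sided at the ends)
        t = pts[min(i + 1, len(pts) - 1)] - pts[max(i - 1, 0)]
        t /= np.linalg.norm(t)
        # orthonormal frame around the tangent
        n = np.cross(t, up)
        if np.linalg.norm(n) < 1e-6:             # tangent parallel to 'up'
            n = np.cross(t, np.array([1.0, 0.0, 0.0]))
        n /= np.linalg.norm(n)
        m = np.cross(t, n)
        for k in range(sides):
            ang = 2 * np.pi * k / sides
            verts.append(p + radius * (np.cos(ang) * n + np.sin(ang) * m))
    for i in range(len(pts) - 1):
        for k in range(sides):
            a0 = i * sides + k
            a1 = i * sides + (k + 1) % sides
            quads.append((a0, a1, a1 + sides, a0 + sides))
    return np.array(verts), quads
```

    The naive per-ring frame used here also illustrates why sharp bends are problematic: adjacent rings can tilt into each other, producing exactly the kind of overshoot the abstract mentions.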
