
    Simplifying collaboration in co-located virtual environments using the active-passive approach

    The design and implementation of co-located immersive virtual environments with equal interaction possibilities for all participants is a complex topic. The main problem, on a fundamental technical level, is the difficulty of providing perspective-correct images for each participant. There is consensus that the lack of a correct perspective view will negatively affect interaction fidelity and therefore also collaboration. Several research approaches focus on providing a correct perspective view to all participants to enable co-located work. However, these approaches are usually based either on custom hardware solutions that limit the number of users with a correct perspective view, or on software solutions that strive to eliminate or mitigate these restrictions with custom image-generation approaches. In this paper, we investigate an often overlooked approach to enabling collaboration for multiple users in an immersive virtual environment designed for a single user. The approach provides one (active) user with a perspective-correct view while the other (passive) users receive visual cues that are not perspective-correct. We used this active-passive approach to investigate the limitations posed by assigning the viewpoint to only one user. The findings of our study, though inconclusive, revealed two curiosities. First, our results suggest that the location of the target geometry is an important factor to consider when designing interaction, expanding on prior work that has studied only the relation between user positions. Second, there seems to be only a low cost involved in accepting the limitation of providing perspective-correct images to a single user, when compared with a baseline, during a coordinated work approach. These findings advance our understanding of collaboration in co-located virtual environments and suggest an approach to simplify co-located collaboration.
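    Providing the active user with a perspective-correct view typically means computing an off-axis frustum from the tracked head position relative to the fixed screen rectangle. The sketch below shows the standard generalized perspective projection construction under our own conventions (the function name and the choice of screen corners are illustrative assumptions, not details from the paper):

```python
import numpy as np

def off_axis_projection(eye, pa, pb, pc, near=0.1, far=100.0):
    """Perspective-correct frustum for a tracked eye position relative to a
    fixed screen rectangle: pa = lower-left, pb = lower-right, pc = upper-left
    corner, all in world coordinates. Standard off-axis construction."""
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal
    va, vb, vc = pa - eye, pb - eye, pc - eye        # eye -> corner vectors
    d = -np.dot(va, vn)                              # eye-to-screen distance
    # frustum extents on the near plane
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    # OpenGL-style asymmetric perspective matrix
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
```

    A passive user in the active-passive approach simply sees the images produced by this matrix for someone else's head position, which is exactly the distortion the study measures.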

    High fidelity walkthroughs in archaeology sites

    Paper presented at the 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2005), Pisa, Italy, 8-11 November 2005.
    Fast and affordable computing systems currently support walkthroughs of virtually reconstructed sites, with fast frame-rate generation of synthetic images. However, archaeologists still complain about the lack of realism in these interactive tours, mainly due to the false ambient illumination. Accurate visualisations require physically based global illumination models to render the scenes, which are computationally too demanding. Faster systems and novel rendering techniques are required: current clusters provide a feasible and affordable path towards these goals, and we developed a framework to support smooth virtual walkthroughs, using progressive rendering to converge to high-fidelity images whenever a surplus of computing power is available. This framework exploits spatial and temporal coherence among successive frames, serving multiple clients that share and interact with the same virtual model while each maintains its own view of it. It is based on a three-tier architecture: the outer layer embodies light-weight visualisation clients, which perform all the user interactions and display the final images using the available graphics hardware; the inner layer is a parallel version of a physically based ray tracer running on a cluster of off-the-shelf PCs; in the middle layer lies the shading management agent (SMA), which monitors the clients' states, supplies each with properly shaded 3D points, maintains a cache of previously rendered geometry and requests relevant shading samples from the parallel renderer whenever required.
    A prototype of a high-fidelity walkthrough in the archaeological virtual model of the Roman town of Bracara Augusta was developed, and the current evaluation tests aimed to measure the performance improvements due to the use of SMA caches and the associated parallel rendering capabilities. Preliminary results show that interactive frame rates are sustainable and the system is highly responsive.
    Funding: Fundação para a Ciência e Tecnologia (FCT) - POSI/CHS/42041/2001
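    The middle-tier role described above can be sketched as a toy point-sample cache. The class and method names (ShadingManagementAgent, get_shading, on_shaded), the LRU eviction and the position-quantisation keying are illustrative assumptions, not the authors' actual implementation:

```python
from collections import OrderedDict

class ShadingManagementAgent:
    """Toy middle layer: answers client shading queries from a cache of
    previously shaded 3D points and queues cache misses for the inner-layer
    parallel ray tracer."""
    def __init__(self, renderer, capacity=100_000):
        self.renderer = renderer      # inner layer: parallel ray tracer
        self.cache = OrderedDict()    # quantised point key -> shaded radiance
        self.pending = set()          # samples requested but not yet shaded
        self.capacity = capacity

    def _key(self, point, eps=1e-3):
        # quantise world-space position so nearby hits share a cache entry
        return tuple(round(c / eps) for c in point)

    def get_shading(self, points):
        """Return (hits, misses): cached samples a client can display now,
        and points forwarded to the renderer for progressive refinement."""
        hits, misses = {}, []
        for p in points:
            k = self._key(p)
            if k in self.cache:
                self.cache.move_to_end(k)      # LRU touch
                hits[p] = self.cache[k]
            elif k not in self.pending:
                self.pending.add(k)
                misses.append(p)
        if misses:
            self.renderer.request(misses)      # asynchronous in a real system
        return hits, misses

    def on_shaded(self, point, radiance):
        """Callback invoked when the cluster returns a shaded sample."""
        k = self._key(point)
        self.pending.discard(k)
        self.cache[k] = radiance
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
```

    The shared cache is what lets multiple clients with different views of the same model reuse each other's shading work, the coherence the abstract exploits.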

    Wavelet based stereo images reconstruction using depth images

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth-image-based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that gives the distance from the camera to a certain point of the object as a function of the image coordinates. Using this depth information and the original image, it is possible to reconstruct a virtual image from a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their positions in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than the two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies, such as multimedia systems. In this paper we propose an advanced wavelet-domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of existing methods. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable a more sensible reconstruction of the virtual view. The motion estimation employed in our approach uses a Markov random field smoothness prior for regularisation of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate the advantages of the proposed approach with respect to state-of-the-art methods, in terms of both objective and subjective performance measures.
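    The per-pixel projection the abstract describes can be illustrated with a simplified forward warp for a parallel, horizontally shifted camera pair, where disparity is focal * baseline / depth. This is a hedged sketch of the general DIBR idea, not the authors' wavelet-domain method; it uses simple overwrite splatting and omits the hole filling and occlusion handling a real system needs:

```python
import numpy as np

def dibr_warp(image, depth, baseline, focal):
    """Forward-warp a grayscale image to a horizontally shifted virtual
    viewpoint using a per-pixel depth map (parallel camera setup).
    Returns the warped image and a mask marking which target pixels
    received a sample; unfilled pixels are disocclusion holes."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # disparity in pixels: closer points (small depth) shift more
    disp = np.round(focal * baseline / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xv = x - disp[y, x]        # target column in the virtual view
            if 0 <= xv < w:
                out[y, xv] = image[y, x]
                filled[y, xv] = True
    return out, filled
```

    The holes left where `filled` is False are exactly the disoccluded regions whose reconstruction the paper's wavelet-domain scheme is designed to improve.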

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.

    Innovative strategies for 3D visualisation using photogrammetry and 3D scanning for mobile phones

    3D model generation through photogrammetry is a modern overlay of digital information representing real-world objects in a virtual world. The immediate scope of this study is to generate 3D models from imagery while overcoming the challenge of acquiring accurate 3D meshes. This research aims to find optimised ways to document raw 3D representations of real-life objects and then convert them into retopologised, textured, usable data on mobile phones. Augmented Reality (AR) is a projected combination of real and virtual objects. Much work has been done on market-dependent AR applications that let customers view products before purchasing them. What is needed is a product-independent photogrammetry-to-AR pipeline, freely available for creating independent 3D augmented models. For the particulars of this paper, the aim is to compare and analyse different open-source SDKs and libraries for building optimised 3D meshes using photogrammetry/3D scanning, which will form the main skeleton of the 3D-AR pipeline. Natural disasters, global political crises, terrorist attacks and other catastrophes have led researchers worldwide to capture monuments using photogrammetry and laser scans. Some of these objects of "global importance" are processed by organisations including CyArk (Cyber Archives) and UNESCO's World Heritage Centre, which work against time to preserve historical monuments before they are damaged or, in some cases, completely destroyed. This raises the question of how we preserve objects and monuments that are of value locally, to a city or town: what is done to preserve those objects? This research develops pipelines for collecting and processing 3D data so that local communities can contribute to restoring endangered sites and objects using their smartphones, making these objects available for viewing in location-based AR.
    Some companies charge relatively large amounts of money for local scanning projects. This research, by contrast, is a non-profit project whose results could later be used in school curriculums, visitor attractions and historical preservation organisations all over the globe at no cost. The scope is not limited to furniture, museums or marketing; the pipeline could be used for personal digital archiving as well. This research will capture and process virtual objects using mobile phones, comparing computer vision methodologies from data conversion on mobile phones through 3D generation, texturing and retopologising. The outcomes of this research will be used as input for generating AR that is independent of any industry or product.