
    CyberWalk: a web-based distributed virtual walkthrough environment

    A distributed virtual walkthrough environment allows users connected to a geometry server to walk through a specific place of interest without having to travel physically. This place of interest may be a virtual museum, virtual library, or virtual university. There are two basic approaches to distributing the virtual environment from the geometry server to the clients: complete replication and on-demand transmission. Although the on-demand transmission approach saves waiting time and optimizes network usage, many technical issues need to be addressed for the system to remain interactive. CyberWalk is a web-based distributed virtual walkthrough system based on the on-demand transmission approach. It achieves the necessary performance with a multiresolution caching mechanism. First, it reduces model transmission and rendering times by employing a progressive multiresolution modeling technique. Second, it reduces the Internet response time by providing a caching and prefetching mechanism. Third, it allows a client to continue to operate, at least partially, when the Internet connection is lost. The caching mechanism of CyberWalk tries to maintain at least a minimum resolution of the object models, so that the viewer always has at least a coarse view of the objects. Together, these features allow CyberWalk to provide sufficient interactivity for virtual walkthroughs over the Internet. In this paper, we describe the design and implementation of CyberWalk and investigate the effectiveness of its multiresolution caching mechanism in supporting virtual walkthrough applications over the Internet through numerous experiments on both a simulation system and a prototype system.
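    The eviction policy described above can be sketched in a few lines. This is an illustrative sketch, not CyberWalk's actual code: each object is held as a stack of progressive level-of-detail (LoD) layers, eviction drops the finest layer of the most detailed other object first, and no object is ever reduced below a minimum resolution, so a coarse view of every cached object survives cache pressure.

    ```python
    class MultiResCache:
        """Toy multiresolution cache: counts LoD layers per object."""

        def __init__(self, capacity, min_level=1):
            self.capacity = capacity    # total LoD layers the cache may hold
            self.min_level = min_level  # coarsest resolution always retained
            self.levels = {}            # object id -> number of LoD layers held

        def used(self):
            return sum(self.levels.values())

        def fetch(self, obj, level):
            """Raise `obj` to `level` layers, evicting fine detail from
            other objects (never below min_level) when space runs out."""
            target = max(level, self.min_level)
            while self.levels.get(obj, 0) < target:
                if self.used() >= self.capacity and not self._evict_one(keep=obj):
                    break               # nothing left to evict; stop early
                self.levels[obj] = self.levels.get(obj, 0) + 1

        def _evict_one(self, keep):
            # Drop one layer from the most detailed other object,
            # but never reduce any object below the minimum resolution.
            victims = [o for o, n in self.levels.items()
                       if o != keep and n > self.min_level]
            if not victims:
                return False
            victim = max(victims, key=lambda o: self.levels[o])
            self.levels[victim] -= 1
            return True
    ```

    With a capacity of 4 layers, fetching a second object at full detail squeezes the first one down to its minimum resolution rather than evicting it outright, which mirrors the "at least a coarse view" property of the abstract.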

    Photo Based 3D Walkthrough

    The objective of 'Photo Based 3D Walkthrough' is to understand how image-based rendering technology is used to create virtual environments, and to develop a prototype system capable of providing a real-time 3D walkthrough experience using only 2D images. Photo realism has always been an aim of computer graphics in virtual environments. Traditional graphics requires a great amount of work and time to construct a detailed 3D model and scene. Beyond the tedious work of constructing the 3D models and scenes, much effort must also be put into rendering them to enhance the level of realism. Traditional geometry-based rendering systems fall short of simulating the visual realism of a complex environment and are unable to capture and store a sampled representation of a large environment with complex lighting and visibility effects. Thus, creating a virtual walkthrough of a complex real-world environment remains one of the most challenging problems in computer graphics. Because of these disadvantages of traditional geometry-based rendering, image-based rendering (IBR) has been introduced to overcome the above problems. In this project, research is carried out to create an IBR virtual walkthrough using only OpenGL and a C++ program, without any game engine or QuickTime VR functions. Normal photographs (not panoramic photographs) are used as the source material in creating the virtual scene, and the keyboard is used as the main navigation tool in the virtual environment. The quality of the constructed virtual walkthrough prototype is good, with just a little jerkiness.
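    The simplest policy behind such a photo-based walkthrough can be sketched as follows. This is an assumed illustration, not the project's code: at each step of navigation, the system displays the source photograph whose capture position lies closest to the viewer's current position.

    ```python
    import math

    def nearest_photo(viewer_pos, photos):
        """Pick the photo captured closest to the viewer.

        photos: list of (filename, (x, y)) capture positions.
        Returns the filename of the nearest photo.
        """
        return min(photos, key=lambda p: math.dist(viewer_pos, p[1]))[0]
    ```

    A real system would also account for viewing direction and blend between neighbouring photos to hide the jerkiness the abstract mentions, but nearest-view selection is the core of the idea.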

    Querying large virtual models for interactive walkthrough

    Ph.D. thesis (Doctor of Philosophy)

    Prediction-Based Prefetching for Remote Rendering Streaming in Mobile Virtual Environments

    Remote image-based rendering (IBR) is the most suitable solution for rendering complex 3D scenes on mobile devices: the server renders the 3D scene and streams the rendered images to the client. However, sending a large number of images is inefficient due to the possible limitations of wireless connections. In this paper, we propose a prefetching scheme at the server side that predicts client movements and prefetches the corresponding images accordingly. In addition, an event-driven simulator was designed and implemented to evaluate the performance of the proposed scheme. The simulator was used to compare prediction-based prefetching with prefetching images based on spatial locality. Several experiments were conducted to study the performance with different movement patterns as well as with different virtual environments (VEs). The results show that the hit ratio of the prediction-based scheme is greater than that of the localization scheme by approximately 35% and 17% for the random-walk and circular-walk movement patterns, respectively. In addition, for a VE with a high level of detail, the proposed scheme outperforms the localization scheme by approximately 13%. However, for a VE with a low level of detail, the localization-based scheme outperforms the proposed scheme by only 5%.
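    The contrast between the two schemes can be sketched on a toy grid world. This is a hedged illustration with invented names, not the paper's simulator: the prediction-based scheme extrapolates the client's last movement vector to guess the next cell to prefetch, while the locality baseline prefetches all cells adjacent to the current one.

    ```python
    def predict_next(history):
        """Linear extrapolation of the client's last movement step."""
        (x0, y0), (x1, y1) = history[-2], history[-1]
        return (x1 + (x1 - x0), y1 + (y1 - y0))

    def locality_candidates(pos):
        """All 8 grid cells adjacent to the current position."""
        x, y = pos
        return {(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)}

    def hit_ratio(path, prefetch):
        """Fraction of moves whose destination was prefetched.

        prefetch: function mapping the movement history so far to the
        set of cells whose images the server prefetches.
        """
        hits = sum(1 for i in range(2, len(path))
                   if path[i] in prefetch(path[:i]))
        return hits / (len(path) - 2)
    ```

    On a client moving two cells per step, extrapolation keeps hitting while the one-cell locality neighbourhood always misses, which illustrates why prediction wins for regular movement patterns such as the circular walk.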

    Virtual Field Trip via Digital Storytelling

    Digital storytelling is the practice of combining digital content such as 3-dimensional images, text, sound, images, and video to create a short story. It sits at the intersection of the old art of storytelling and access to powerful technologies. This project is a step toward experimenting with the development and effectiveness of digital storytelling, and it will hopefully ignite a source of motivation and encourage others to tap into their interests and skills to develop their own digital storytelling and expand ICT usage in this country. School children look forward to traditional field trips; however, such trips are costly. The Virtual Field Trip (VFT) aims to reduce, if not eliminate, the constraints that traditional field trips face, such as money, time, energy, resources, distance, and inaccessible areas. To fit the time frame, the VFT covers only small selected areas of the KL Bird Park, as some areas are not suitable for taking panoramic pictures. The development of the VFT is adapted from the QTVR Creation Steps by Kitchens (2006). The procedure consists of defining the problem statements and goals, literature review and research, creating image content by taking photos at the site, transforming the photos into QTVR nodes through stitching, designing and constructing the prototype, inserting interactivity such as hotspots, delivering the output, and, finally, evaluation. The final output of the project is the KL Bird Park Virtual Field Trip, which consists of photo-based 3D panoramic images for each scene from the site, linked to one another, with hotspots placed on the panoramic images that reveal the birds' information with one click. The informal evaluation of the final output showed an overwhelmingly positive response and acceptance. All of the respondents would like to see more of this type of VFT in the future.
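    The hotspot interactivity described above can be sketched with a minimal data structure. This is an assumed illustration, not the project's implementation: a hotspot is a rectangular region in panorama pixel coordinates, and a click reveals the information of the first hotspot containing the click point.

    ```python
    class Hotspot:
        """A clickable region on a panoramic image."""

        def __init__(self, rect, info):
            self.rect = rect  # (x, y, width, height) in panorama pixels
            self.info = info  # text revealed when the hotspot is clicked

    def on_click(hotspots, x, y):
        """Return the info of the hotspot containing (x, y), else None."""
        for h in hotspots:
            hx, hy, w, hgt = h.rect
            if hx <= x < hx + w and hy <= y < hy + hgt:
                return h.info
        return None
    ```

    Linking scenes works the same way: a hotspot's payload can instead name the panorama node to jump to, which is how the QTVR nodes are connected to one another.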

    SPRITE TREE: AN EFFICIENT IMAGE-BASED REPRESENTATION FOR NETWORKED VIRTUAL ENVIRONMENTS

    Ph.D. thesis (Doctor of Philosophy)