
    Design of Immersive Online Hotel Walkthrough System Using Image-Based (Concentric Mosaics) Rendering

    Conventional hotel booking websites present their facilities only as 2D photos, which are static and cannot be moved or rotated. An image-based virtual walkthrough is a promising technology for the hospitality industry to attract more customers. In this project, an image-based rendering (IBR) virtual walkthrough and a panoramic-based walkthrough are created using only Macromedia Flash Professional 8, Photovista Panorama 3.0 and Reality Studio for the interaction of the images. The web pages hosting the walkthroughs are built with Macromedia Dreamweaver Professional 8, and the images are displayed in Adobe Flash Player 8 or higher. The image-based walkthrough uses the concentric mosaics technique, while the panoramic-based walkthrough uses image mosaicing. The two walkthroughs are compared, and the study also examines the relationship between the number of pictures and the smoothness of the walkthrough. Each technique has its advantages: the image-based walkthrough offers a real-time walkthrough, since the user can move right, left, forward and backward, whereas the panoramic-based walkthrough does not, because the user can only view 360 degrees from a fixed spot.
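
    The panoramic-based walkthrough described above relies on image mosaicing: overlapping photos taken from a fixed spot are stitched into a single 360-degree view. As a rough illustration of that stitching step (not the Photovista Panorama or Reality Studio workflow used in the project), the sketch below uses OpenCV's high-level Stitcher; the file names are hypothetical.

```python
import cv2

# Overlapping photos taken while rotating the camera from a fixed spot
# (hypothetical file names).
frames = [cv2.imread(f"hotel_lobby_{i:02d}.jpg") for i in range(8)]

stitcher = cv2.Stitcher_create()          # panorama mode is the default
status, panorama = stitcher.stitch(frames)

if status == 0:                           # 0 corresponds to Stitcher::OK
    cv2.imwrite("hotel_lobby_panorama.jpg", panorama)
else:
    print("Stitching failed, status code:", status)
```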

    Early Visual Cultures and Panofsky’s Perspektive als ‘symbolische Form’

    This paper investigates the historical dimension of perspectival representations. It aims to provide a heterogeneous though comparative picture of culturally unrelated visual conceptualizations of pictorial spaces, written with a view toward explaining how the multiple modes of perspective were introduced in antiquity. The point of departure for this critical approach is Erwin Panofsky’s essay Die Perspektive als ‘symbolische Form’, published in 1927. His essay analyses the pictorial visualization of space and spatiality in different historical contexts, examining their cultural codification in terms of the heuristic category of ‘symbolic form’. However, ‘perspective’, which is commonly understood as synonymous with ‘linear perspective’, deserves a new discussion in the context of diverse visual cultures: a ‘naturalisation’ of the gaze, as suggested by pictorial spaces that function mimetically, is primarily associated with the early modern period in Western art. Instead of merely re-reading Panofsky’s canonical text, this paper presents an interdisciplinary re-viewing of a selection of the pictorial examples chosen by Panofsky, commenting upon their perspective(s) from different vantage points.

    Variable Resolution & Dimensional Mapping for 3D Model Optimization

    Three-dimensional computer models, especially geospatial architectural data sets, can be visualized in the same way humans experience the world, providing a realistic, interactive experience. Scene familiarization, architectural analysis, scientific visualization, and many other applications would benefit from finely detailed, high-resolution 3D models. Automated methods to construct these 3D models have traditionally produced data sets that are low fidelity or inaccurate; alternatively, the models are initially highly detailed but very labor- and time-intensive to construct. Such data sets are often not practical for common real-time usage and are not easily updated. This thesis proposes Variable Resolution & Dimensional Mapping (VRDM), a methodology developed to address some of the limitations of existing approaches to model construction from images. Key components of VRDM are texture palettes, which enable variable and ultra-high resolution images to be easily composited, and texture features, which allow image features to be integrated as image or geometry and can modify the geometric model structure to add detail. These components support a primary VRDM objective of facilitating model refinement with additional data, which can continue until the desired fidelity is achieved as the practical limits of infinite detail are approached. Texture levels, the third component, enable real-time interaction with a very detailed model and provide the flexibility of alternate pixel data for a given area of the model, achieved through extra dimensions. Together these techniques have been used to construct models that can contain gigabytes of imagery data.
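
    As a loose illustration of the texture-palette idea described above (compositing images of differing resolution into one atlas that geometry can reference through UV offsets), the sketch below packs hypothetical image tiles into a single atlas with Pillow; it is a simplification, not the VRDM implementation.

```python
from PIL import Image

# Hypothetical tiles of differing resolution covering parts of a building model.
tiles = [Image.open(p) for p in ("facade_hi.png", "roof_med.png", "door_low.png")]

atlas_w = max(t.width for t in tiles)
atlas_h = sum(t.height for t in tiles)
atlas = Image.new("RGBA", (atlas_w, atlas_h))

uv_table = {}      # tile index -> (u0, v0, u1, v1) in normalized atlas space
y = 0
for i, tile in enumerate(tiles):
    atlas.paste(tile, (0, y))                     # simple shelf packing
    uv_table[i] = (0.0, y / atlas_h,
                   tile.width / atlas_w, (y + tile.height) / atlas_h)
    y += tile.height

atlas.save("texture_palette.png")
print(uv_table)
```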

    Progressive Refinement Imaging

    This paper presents a novel technique for progressive online integration of uncalibrated image sequences with substantial geometric and/or photometric discrepancies into a single, geometrically and photometrically consistent image. Our approach can handle large sets of images, acquired from a nearly planar or infinitely distant scene at different resolutions in object domain and under variable local or global illumination conditions. It allows for efficient user guidance, as its progressive nature provides a valid and consistent reconstruction at any moment during the online refinement process.

    Our approach avoids global optimization techniques, as commonly used in the field of image refinement, and progressively incorporates new imagery into a dynamically extendable and memory-efficient Laplacian pyramid. Our image registration process includes a coarse homography and a local refinement stage using optical flow. Photometric consistency is achieved by retaining the photometric intensities given in a reference image while it is being refined. Globally blurred imagery and local geometric inconsistencies due to, e.g., motion are detected and removed prior to image fusion.

    We demonstrate the quality and robustness of our approach using several image and video sequences, including handheld acquisition with mobile phones and zooming sequences with consumer cameras.
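
    A minimal sketch of the register-then-fuse idea described above, assuming OpenCV: a new image is coarsely aligned to the reference with an ORB-based homography and then blended into the reference's Laplacian pyramid. The paper's dynamically extendable, memory-efficient pyramid, optical-flow refinement, reference-photometry preservation, and blur/outlier handling are omitted; plain alpha blending of pyramid levels stands in for the fusion rule.

```python
import cv2
import numpy as np

def register(new, ref):
    """Coarse homography alignment of `new` onto `ref` using ORB features."""
    def gray(im):
        return cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(gray(new), None)
    k2, d2 = orb.detectAndCompute(gray(ref), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(new, H, (ref.shape[1], ref.shape[0]))

def laplacian_pyramid(img, levels=5):
    """Decompose an image into `levels` band-pass levels plus a coarse residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        pyr.append(cur - cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0])))
        cur = down
    pyr.append(cur)
    return pyr

def fuse(ref, new, alpha=0.5):
    """Blend a registered new image into the reference, level by level."""
    pr = laplacian_pyramid(ref)
    pn = laplacian_pyramid(register(new, ref))
    fused = [(1 - alpha) * a + alpha * b for a, b in zip(pr, pn)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):          # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```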

    Real-time color projection for 3D models

    In this work, we present a solution for interactive visualization of virtual objects composed of a 3D model and a set of calibrated photographs. Our approach selects, projects and blends the photos based on a few criteria in order to improve the perception of detail while maintaining interactive performance. It works as a dynamic texture map generator: for each new view position and direction, the best combination of the photos is sought. The main advantage of our technique is that it tries to preserve the original photo information as faithfully as possible. Furthermore, the proposed method was compared with a popular texture mapping technique; our method produced fewer artifacts in general and handled large datasets with non-uniformly distributed cameras better.
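
    The photo selection and blending depend on the current view position and direction. As a simplified illustration (not the dissertation's per-fragment projection), the sketch below computes normalized blending weights that favor photos whose cameras look at the object from a direction close to the current view; the camera data are hypothetical.

```python
import numpy as np

def blend_weights(view_dir, photo_dirs, sharpness=8.0):
    """view_dir: (3,) current viewing direction.
    photo_dirs: (N, 3) viewing directions of the calibrated photo cameras.
    Returns normalized per-photo blending weights of shape (N,)."""
    v = view_dir / np.linalg.norm(view_dir)
    p = photo_dirs / np.linalg.norm(photo_dirs, axis=1, keepdims=True)
    cos_sim = np.clip(p @ v, 0.0, 1.0)   # cameras facing away get zero weight
    w = cos_sim ** sharpness             # sharpen to favor the best-aligned photos
    total = w.sum()
    return w / total if total > 0 else w

# Example: three hypothetical photo cameras, viewer looking roughly along +z.
photo_dirs = np.array([[0.0, 0.0, 1.0], [0.3, 0.0, 0.95], [1.0, 0.0, 0.0]])
print(blend_weights(np.array([0.1, 0.0, 1.0]), photo_dirs))
```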