
    The Executive Board Perspective (Die Vorstandsperspektive)


    Practical 3D Reconstruction of Cultural Heritage Artefacts from Photographs – Potentials and Issues

    A new technology is on the rise that allows the 3D reconstruction of Cultural Heritage objects from image sequences taken by ordinary digital cameras. We describe the first experiments we made as early adopters in a community-funded research project whose goal is to develop it into a standard CH technology. The paper describes in detail a step-by-step procedure that can be reproduced using free tools by any CH professional. We also give a critical assessment of the workflow and describe several ideas for developing it further into an automatic procedure for 3D reconstruction from images.

    We gratefully acknowledge the funding from the European Commission for the FP7-IP 3D-COFORM under grant No. 231809. With this support, we are confident that we can soon provide solutions to the problems mentioned.

    Fellner, D.W.; Havemann, S.; Beckmann, P.; Pan, X. (2011). Practical 3D Reconstruction of Cultural Heritage Artefacts from Photographs – Potentials and Issues. Virtual Archaeology Review 2(4):95-103. https://doi.org/10.4995/var.2011.4564
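    One core step inside any such image-based reconstruction pipeline is triangulating a 3D point from rays cast by two calibrated cameras. The sketch below shows the classic midpoint method in pure Python; the camera centres and ray directions are hypothetical example values, not data from the paper.

```python
# Triangulate a 3D point as the midpoint of the shortest segment
# between two camera rays (midpoint method).

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: camera centres; d1, d2: ray directions (need not be unit)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def mul(a, s): return [x * s for x in a]

    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1 = add(c1, mul(d1, s))       # closest point on ray 1
    p2 = add(c2, mul(d2, t))       # closest point on ray 2
    return mul(add(p1, p2), 0.5)

# Two cameras at (-1,0,0) and (1,0,0) both observing the point (0,0,5):
point = triangulate_midpoint([-1, 0, 0], [1, 0, 5], [1, 0, 0], [-1, 0, 5])
print(point)  # -> [0.0, 0.0, 5.0]
```

    A full photogrammetry pipeline repeats this for thousands of feature matches after estimating the camera poses themselves.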

    Fine‐Grained Memory Profiling of GPGPU Kernels

    Memory performance is a crucial bottleneck in many GPGPU applications, making optimizations for hardware and software mandatory. While hardware vendors already use highly efficient caching architectures, software engineers usually have to organize their data accordingly in order to make efficient use of these caches, which requires deep knowledge of the actual hardware. In this paper we present a novel technique for fine-grained memory profiling that simulates the whole pipeline of memory flow and accumulates profiling values in a way that preserves information about the originating region of the GPU program, by reporting these values separately for each allocation. Our memory simulator outperforms state-of-the-art memory models of NVIDIA architectures in accuracy by a factor of 2.4 for the L1 cache and 1.3 for the L2 cache. Additionally, we find fine-grained memory profiling a useful tool for memory optimizations, which we demonstrate on ray tracing and machine learning applications.
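    The idea of attributing cache behaviour to individual allocations can be sketched with a toy simulator. The fully-associative LRU cache below is far simpler than a real NVIDIA memory model, and the line size, capacity, and allocation names are illustrative assumptions, not parameters from the paper.

```python
# Toy fine-grained memory profiler: a fully-associative LRU cache that
# attributes hits and misses to the allocation each address belongs to.
from collections import OrderedDict

LINE = 32        # bytes per cache line (assumed)
CAPACITY = 4     # number of lines the cache can hold (assumed)

def profile(accesses, allocations):
    """accesses: iterable of byte addresses.
    allocations: {name: (base, size)} address ranges.
    Returns {name: {"hits": .., "misses": ..}} per allocation."""
    cache = OrderedDict()   # line tag -> None, kept in LRU order
    stats = {name: {"hits": 0, "misses": 0} for name in allocations}

    def owner(addr):
        for name, (base, size) in allocations.items():
            if base <= addr < base + size:
                return name
        return None

    for addr in accesses:
        tag = addr // LINE
        name = owner(addr)
        if name is None:
            continue                      # untracked address
        if tag in cache:
            cache.move_to_end(tag)        # refresh LRU position
            stats[name]["hits"] += 1
        else:
            stats[name]["misses"] += 1
            cache[tag] = None
            if len(cache) > CAPACITY:
                cache.popitem(last=False) # evict least recently used
    return stats

allocs = {"rays": (0, 128), "nodes": (1024, 128)}
trace = [0, 4, 8, 1024, 0, 1056]
stats = profile(trace, allocs)
print(stats)  # -> {'rays': {'hits': 3, 'misses': 1}, 'nodes': {'hits': 0, 'misses': 2}}
```

    Reporting the counters per allocation, rather than per cache, is what lets a developer see which data structure causes the misses.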

    LIFE-SHARE Project: Developing a Digitisation Strategy Toolkit

    This poster outlines the Digitisation Strategy Toolkit created as part of the LIFE-SHARE project. The toolkit is based on the lifecycle model created by the LIFE project and explores the creation, acquisition, ingest, preservation (bit-stream and content) and access requirements for a digitisation strategy. This covers the policies and infrastructure required in libraries to establish successful practices. The toolkit also provides both internal and external resources to support the service. The poster illustrates how the toolkit works effectively to support digitisation, with examples from three case studies at the Universities of Leeds, Sheffield and York.

    Dithered Color Quantization

    Image quantization and digital halftoning are fundamental problems in computer graphics, which arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient optimization algorithm based on a multiscale method is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithms are evaluated on a representative set of artificial and real-world images as well as on a collection of icons. A significant image quality improvement is observed compared to standard color reduction approaches.
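    The paper's contribution is a *joint* cost-function optimization; for contrast, the sequential baseline it improves on can be sketched in a few lines. Below is classic nearest-palette quantization with Floyd-Steinberg error diffusion on a grayscale image (pure Python; the palette and test image are illustrative, not from the paper).

```python
# Sequential baseline: quantize each pixel to the nearest palette entry,
# then diffuse the quantization error to unvisited neighbours
# (Floyd-Steinberg weights 7/16, 3/16, 5/16, 1/16).

def dither(image, palette):
    """image: 2D list of gray values in [0, 255]; palette: list of levels."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]      # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = min(palette, key=lambda p: abs(p - old))
            out[y][x] = new
            err = old - new
            for dx, dy, weight in ((1, 0, 7/16), (-1, 1, 3/16),
                                   (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * weight
    return out

flat = [[100.0] * 4 for _ in range(4)]   # uniform mid-gray input
result = dither(flat, [0, 255])          # black/white pattern, ~39% white
```

    Because quantization here ignores the dithering neighbourhood (and vice versa), the two steps optimize different criteria; the paper's method couples them in one perceptual cost function.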

    Human-Centric Chronographics: Making Historical Time Memorable

    A series of experiments is described, evaluating user recall of visualisations of historical chronology. Such visualisations are widely created but have not hitherto been evaluated. Users were tested on their ability to learn a sequence of historical events presented in a virtual environment (VE) fly-through visualisation, compared with the learning of equivalent material in other formats that are sequential but lack the 3D spatial aspect. Memorability is a particularly important function of visualisation in education. The measures used during evaluation are enumerated and discussed. The majority of the experiments reported compared three conditions: one using a virtual environment visualisation with a significant spatial element, one using a serial on-screen presentation in PowerPoint, and one using serial presentation on paper. Some aspects were trialled with groups having contrasting prior experience of computers, in the UK and Ukraine. Evidence suggests that a more complex environment including animations and sounds or music, intended to engage users and reinforce memorability, was in fact distracting. Findings are reported in relation to the age of the participants, suggesting that children at 11–14 years benefit less from, or are even disadvantaged by, VE visualisations when compared with 7–9 year olds or undergraduates. Finally, results suggest that VE visualisations offering a 'landscape' of information are more memorable than those based on a linear model.

    Keywords: timeline, chronographic

    Depth of Field Segmentation for Near-Lossless Image Compression and 3D Reconstruction

    Over the years, photometric 3D reconstruction has gained increasing importance in several disciplines, especially in cultural heritage preservation. While growing image and dataset sizes have enhanced the overall reconstruction results, storage requirements have become immense. Additionally, unsharp areas in the background have a negative influence on 3D reconstruction algorithms. Handling the sharp foreground differently from the background simultaneously reduces storage requirements and improves 3D reconstruction results. In this paper, we examine regions outside the Depth of Field (DoF) and eliminate the inaccurate information they contribute to 3D reconstructions. We extract DoF maps from the images and use them to handle the foreground and background with different compression backends, ensuring that the actual object is compressed losslessly. Our algorithm achieves compression rates between 1:8 and 1:30, depending on the artefact and DoF size, and improves the 3D reconstruction.
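    The core idea of splitting an image into a sharp foreground (compressed losslessly) and a blurred background (compressed lossily) requires a per-pixel sharpness estimate. The sketch below uses the magnitude of a 4-neighbour Laplacian as that estimate; the measure, the threshold, and the test image are assumptions for illustration, not the paper's actual DoF-map extraction.

```python
# Toy DoF segmentation: mark pixels as "sharp foreground" where the
# absolute 4-neighbour Laplacian exceeds a threshold.

def sharpness_mask(image, threshold):
    """image: 2D list of gray values. Returns a 0/1 foreground mask."""
    h, w = len(image), len(image[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] -
                   4 * image[y][x])
            if abs(lap) > threshold:
                mask[y][x] = 1   # strong local contrast -> in focus
    return mask

# Sharp vertical edge down the middle, flat (blurred-looking) elsewhere:
img = [[0, 0, 200, 200],
       [0, 0, 200, 200],
       [0, 0, 200, 200],
       [0, 0, 200, 200]]
mask = sharpness_mask(img, 50)   # interior edge pixels are flagged
```

    In a real pipeline the mask would then route foreground blocks to a lossless backend and background blocks to a lossy one.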

    Solutions, Future Demands

    Abstract: Working with the ubiquitous 'Web' we immediately realize its limitations when it comes to the delivery or exchange of non-textual, particularly graphical, information. Graphical information is still predominantly represented by raster images, either in a fairly low resolution to warrant acceptable transmission times or in high resolutions to please the reader's perception, thereby challenging his or her patience (as these large data sets take their time to travel over congested internet highways). Comparing the current situation with efforts and developments of the past, e.g. the Videotex systems developed in the period from 1977 to 1985, we see that a proper integration of graphics from the very beginning has, once again, been overlooked. The situation is even worse going from two-dimensional images to three-dimensional models or scenes. VRML, originally designed to address this very demand, has failed to establish itself as a reliable tool within the given time window, and recent advances in graphics technology as well as digital library technology demand new approaches which VRML, at least in its current form, won't be able to deliver. After summarizing the situation for 2D graphics in digital documents and digital libraries, this paper concentrates on the 3D graphics aspects of recent digital library developments and tries to identify the future challenges the community needs to master. Category: I.3.5, H.3.7
