Practical 3D Reconstruction of Cultural Heritage Artefacts from Photographs – Potentials and Issues
[EN] A new technology is on the rise that allows the 3D reconstruction of Cultural Heritage objects from image sequences taken with ordinary digital cameras. We describe the first experiments we made as early adopters in a community-funded research project whose goal is to develop it into a standard CH technology. The paper details a step-by-step procedure that any CH professional can reproduce using free tools. We also give a critical assessment of the workflow and describe several ideas for developing it further into an automatic procedure for 3D reconstruction from images. We gratefully acknowledge the funding from the European Commission for the FP7-IP 3D-COFORM project under grant No. 231809. With this support, we are confident we can soon provide solutions for the problems mentioned.
Fellner, D.W.; Havemann, S.; Beckmann, P.; Pan, X. (2011). Practical 3D Reconstruction of Cultural Heritage Artefacts from Photographs – Potentials and Issues. Virtual Archaeology Review. 2(4):95-103. https://doi.org/10.4995/var.2011.4564
Fine-Grained Memory Profiling of GPGPU Kernels
Memory performance is a crucial bottleneck in many GPGPU applications, making optimizations in both hardware and software mandatory. While hardware vendors already use highly efficient caching architectures, software engineers usually have to organize their data accordingly to make efficient use of these caches, which requires deep knowledge of the actual hardware. In this paper we present a novel technique for fine-grained memory profiling that simulates the whole pipeline of memory flow and accumulates profiling values in a way that preserves information about the responsible region of the GPU program, by reporting these values separately for each allocation. Our memory simulator outperforms state-of-the-art memory models of NVIDIA architectures in accuracy by a factor of 2.4 for the L1 cache and 1.3 for the L2 cache. Additionally, we find our fine-grained memory profiling technique a useful tool for memory optimizations, which we successfully demonstrate on ray tracing and machine learning applications.
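The idea of attributing cache statistics to individual allocations can be illustrated with a toy simulator (a minimal sketch assuming a single direct-mapped cache; all names, sizes, and addresses here are invented and far simpler than the NVIDIA memory pipeline the paper models):

```python
CACHE_LINES = 64   # number of cache lines (toy value)
LINE_SIZE = 128    # bytes per cache line (toy value)

class AllocationProfiler:
    """Attribute cache hits/misses to the allocation an address belongs to."""

    def __init__(self, allocations):
        # allocations: list of (name, start_address, size_in_bytes)
        self.allocations = allocations
        self.cache = [None] * CACHE_LINES  # stored tag per line (direct-mapped)
        self.stats = {name: {"hits": 0, "misses": 0}
                      for name, _, _ in allocations}

    def _allocation_of(self, addr):
        for name, start, size in self.allocations:
            if start <= addr < start + size:
                return name
        return None

    def access(self, addr):
        line = (addr // LINE_SIZE) % CACHE_LINES
        tag = addr // (LINE_SIZE * CACHE_LINES)
        name = self._allocation_of(addr)
        if name is None:
            return  # access outside any tracked allocation
        if self.cache[line] == tag:
            self.stats[name]["hits"] += 1
        else:
            self.stats[name]["misses"] += 1
            self.cache[line] = tag  # fill the line on a miss

prof = AllocationProfiler([("vertices", 0, 4096), ("indices", 4096, 1024)])
for addr in [0, 4, 8, 4096, 0, 5000]:
    prof.access(addr)
print(prof.stats)  # per-allocation hit/miss counts
```

Even this toy version shows the key property: the same cache is shared by all allocations, but the counters reveal which allocation caused which misses.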
LIFE-SHARE Project: Developing a Digitisation Strategy Toolkit
This poster will outline the Digitisation Strategy Toolkit created as part of the LIFE-SHARE project. The toolkit is based on the lifecycle model created by the LIFE project and explores the creation, acquisition, ingest, preservation (bit-stream and content), and access requirements for a digitisation strategy. This covers the policies and infrastructure required in libraries to establish successful practices. The toolkit also provides both internal and external resources to support the service. This poster will illustrate how the toolkit works effectively to support digitisation, with examples from three case studies at the Universities of Leeds, Sheffield and York.
OLBVH: octree linear bounding volume hierarchy for volumetric meshes
We present a novel bounding volume hierarchy for GPU-accelerated direct volume rendering (DVR) as well as volumetric mesh slicing and inside-outside intersection testing. Our novel octree-based data structure is laid out linearly in memory using space-filling Morton curves. As our new data structure results in tightly fitting bounding volumes, boundary markers can be associated with nodes in the hierarchy. These markers can be used to speed up all three use cases that we examine. In addition, our data structure is memory-efficient, reducing memory consumption by up to 75%. Tree depth and memory consumption can be controlled using a parameterized heuristic during construction. This allows for significantly shorter construction times compared to the state of the art. For GPU-accelerated DVR, we achieve performance gains of 8.4×–13×. For 3D printing, we present an efficient conservative slicing method that results in a 3×–25× speedup when using our data structure. Furthermore, we improve volumetric mesh intersection testing speed by 5×–52×.
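The linear memory layout along a space-filling Morton curve relies on interleaving the bits of quantized 3D coordinates. A standard 30-bit Morton encoding can be sketched as follows (illustrative of the general technique only; the paper's exact node layout is not reproduced here):

```python
def part1by2(x):
    """Spread the bits of a 10-bit integer so there are two zero bits
    between consecutive bits (prepares one coordinate for interleaving)."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0x030000FF
    x = (x | (x << 8)) & 0x0300F00F
    x = (x | (x << 4)) & 0x030C30C3
    x = (x | (x << 2)) & 0x09249249
    return x

def morton3(x, y, z):
    """30-bit 3D Morton code: interleave coordinate bits as ...zyxzyx."""
    return (part1by2(z) << 2) | (part1by2(y) << 1) | part1by2(x)

# Nearby cells in 3D get nearby Morton codes, which is what makes the
# linear layout cache-friendly for hierarchy traversal.
print(morton3(1, 1, 1))  # -> 7
```

Sorting octree nodes by such codes produces the linear, locality-preserving ordering the abstract refers to.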
Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers
The ongoing race to improve computer graphics leads to more complex GPU hardware and ray tracing techniques whose internal functionality is sometimes hidden from the user. Bounding volume hierarchies and their construction are an important performance aspect of such ray tracing implementations. We propose a novel approach that utilizes binary instrumentation to collect memory traces and then extracts the bounding volume hierarchy (BVH) from them by analyzing access patterns. Our reconstruction allows combining memory traces captured independently from multiple ray tracing views, improving the reconstruction result. It reaches accuracies of 30% to 45% when compared against the ground-truth BVH used for ray tracing a single view of a simple scene with one object. With multiple views it is even possible to reconstruct the whole BVH; we already achieve 98% with just seven views. Because our approach is largely independent of the data structures used internally, these accurate reconstructions serve as a first step towards estimating the unknown construction techniques of ray tracing implementations.
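The trace-analysis idea can be illustrated with a deliberately simplified heuristic: assume that the first time a node address appears in a ray's trace, the previously accessed node is its parent. This is a toy stand-in for the paper's reconstruction, with invented addresses:

```python
from collections import defaultdict

def reconstruct_edges(traces):
    """Infer candidate parent->child BVH edges from per-ray node-access
    traces (toy heuristic, not the paper's method): the node accessed
    just before a never-before-seen address is taken as its parent."""
    parent = {}
    for trace in traces:
        seen = set()
        prev = None
        for addr in trace:
            if addr not in seen and prev is not None and addr not in parent:
                parent[addr] = prev
            seen.add(addr)
            prev = addr
    children = defaultdict(list)
    for child, par in parent.items():
        children[par].append(child)
    return dict(children)

# Two ray traversals of the same small BVH rooted at 0x100:
traces = [
    [0x100, 0x140, 0x180],  # ray 1: root -> left child -> left leaf
    [0x100, 0x1C0, 0x200],  # ray 2: root -> right child -> right leaf
]
print(reconstruct_edges(traces))
```

Note how the second view contributes the right subtree that the first view never touched, mirroring the abstract's point that combining traces from multiple views improves the reconstruction.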
Dithered Color Quantization
Image quantization and digital halftoning are fundamental problems in computer graphics which arise when displaying high-color images on non-truecolor devices. Both steps are generally performed sequentially and, in most cases, independently of each other. Color quantization with a pixel-wise defined distortion measure and the dithering process with its local neighborhood optimize different quality criteria or, frequently, follow a heuristic without reference to any quality measure. In this paper we propose a new method to simultaneously quantize and dither color images. The method is based on a rigorous cost-function approach which optimizes a quality criterion derived from a generic model of human perception. A highly efficient multiscale optimization algorithm is developed for the dithered color quantization cost function. The quality criterion and the optimization algorithm are evaluated on a representative set of artificial and real-world images as well as on a collection of icons. A significant image quality improvement is observed compared to standard color reduction approaches.
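For contrast with the joint optimization proposed in the paper, the classic sequential baseline (nearest-palette quantization followed by Floyd-Steinberg error diffusion) can be sketched on a grayscale image as follows (illustrative baseline only; the paper's cost-function method is not shown):

```python
def dithered_quantize(img, palette):
    """Sequential baseline: per-pixel nearest-palette quantization with
    Floyd-Steinberg error diffusion. img is a 2D list of gray values."""
    h, w = len(img), len(img[0])
    work = [row[:] for row in img]          # working copy accumulating error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = min(palette, key=lambda p: abs(p - old))  # nearest level
            out[y][x] = new
            err = old - new
            # diffuse the quantization error to unprocessed neighbors
            if x + 1 < w:                 work[y][x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:       work[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:                 work[y + 1][x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:   work[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch dithered to a 2-level palette becomes a checkerboard:
print(dithered_quantize([[128, 128], [128, 128]], [0, 255]))
```

This makes the abstract's point concrete: the quantizer minimizes a per-pixel distortion while the dither acts on a local neighborhood, and the two never see a common cost function.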
Human-Centric Chronographics: Making Historical Time Memorable
A series of experiments is described, evaluating user recall of visualisations of historical chronology. Such visualisations are widely created but have not hitherto been evaluated. Users were tested on their ability to learn a sequence of historical events presented in a virtual environment (VE) fly-through visualisation, compared with learning the equivalent material in other formats that are sequential but lack the 3D spatial aspect. Memorability is a particularly important function of visualisation in education. The measures used during evaluation are enumerated and discussed. The majority of the experiments reported compared three conditions: one using a virtual environment visualisation with a significant spatial element, one using a serial on-screen presentation in PowerPoint, and one using serial presentation on paper. Some aspects were trialled with groups having contrasting prior experience of computers, in the UK and Ukraine. Evidence suggests that a more complex environment including animations and sounds or music, intended to engage users and reinforce memorability, was in fact distracting. Findings are reported in relation to the age of the participants, suggesting that children at 11–14 years benefit less from, or are even disadvantaged by, VE visualisations when compared with 7–9 year olds or undergraduates. Finally, results suggest that VE visualisations offering a 'landscape' of information are more memorable than those based on a linear model.
Keywords: timeline, chronographic
Depth of Field Segmentation for Near-Lossless Image Compression and 3D Reconstruction
Over the years, photometric 3D reconstruction has gained increasing importance in several disciplines, especially in cultural heritage preservation. While increasing sizes of images and datasets have enhanced the overall reconstruction results, storage requirements have grown immense. Additionally, unsharp areas in the background have a negative influence on 3D reconstruction algorithms. Handling the sharp foreground differently from the background simultaneously helps to reduce storage requirements and improves 3D reconstruction results. In this paper, we examine regions outside the Depth of Field (DoF) and eliminate their inaccurate contribution to 3D reconstructions. We extract DoF maps from the images and use them to handle the foreground and background with different compression backends, making sure that the actual object is compressed losslessly. Our algorithm achieves compression rates between 1:8 and 1:30, depending on the artefact and DoF size, and improves the 3D reconstruction.
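A sharpness-based segmentation in this spirit can be sketched as follows: mark a block as in-focus when its mean absolute Laplacian response exceeds a threshold, so in-focus blocks can be routed to a lossless backend and the rest to a lossy one (a minimal illustration; the block size, threshold, and sharpness measure are assumptions, not the paper's DoF extraction):

```python
def laplacian(img):
    """4-neighbor Laplacian response; borders are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                         + img[y][x + 1] - 4 * img[y][x])
    return out

def dof_mask(img, block=4, thresh=10.0):
    """Per-block in-focus mask: True where the mean absolute Laplacian
    response exceeds the threshold (True -> lossless compression path)."""
    lap = laplacian(img)
    h, w = len(img), len(img[0])
    mask = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [abs(lap[y][x])
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals) > thresh)
        mask.append(row)
    return mask

# A smooth (defocused-looking) patch yields no in-focus blocks,
# a high-frequency checkerboard yields all in-focus blocks:
smooth = [[128] * 8 for _ in range(8)]
checker = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
print(dof_mask(smooth), dof_mask(checker))
```

Real DoF maps come from the image formation model rather than a single sharpness statistic, but the routing decision per region is the same shape of computation.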
Alignment and Reassembly of Broken Specimens for Creep Ductility Measurements
Designing new types of heat-resistant steel components is an important and active research field in materials science. It requires detailed knowledge of the inherent steel properties, especially concerning their creep ductility. Highly precise automatic state-of-the-art approaches for such measurements are very expensive and often invasive. The alternative requires manual work by specialists and is time-consuming and not robust. In this paper, we present a novel approach that uses a photometric scanning system for capturing the geometry of steel specimens, making further measurement extraction possible. In our proposed system, we apply a calibration for pan angles that occur during capturing and a robust reassembly for matching two broken specimen pieces to extract the specimen's geometry. We compare our results against µCT scans and find that our reconstruction deviates by 0.057 mm on average, distributed over the whole specimen, from as few as 36 captured images. Additionally, comparisons to manually measured values indicate that our system leads to more robust measurements.
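The reassembly step can be illustrated in one dimension: align two fracture-surface height profiles by brute-force search over integer shifts, minimizing the mean squared height difference (a toy stand-in for the paper's matching of broken specimen halves; all names and data are invented):

```python
def best_shift(profile_a, profile_b, max_shift=10):
    """Return the integer shift of profile_b that best aligns it with
    profile_a under mean squared error, requiring enough overlap."""
    best = (float("inf"), 0)  # (mse, shift)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(profile_a[i], profile_b[i + s])
                 for i in range(len(profile_a))
                 if 0 <= i + s < len(profile_b)]
        if len(pairs) < len(profile_a) // 2:
            continue  # reject shifts with too little overlap
        mse = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if mse < best[0]:
            best = (mse, s)
    return best[1]

# profile_b is profile_a delayed by three samples:
a = [i * i for i in range(20)]
b = [0, 0, 0] + a[:17]
print(best_shift(a, b))  # -> 3
```

The actual system matches 3D fracture geometry and must also handle the pan-angle calibration, but the underlying idea of searching for the transformation that minimizes surface disagreement is the same.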