47 research outputs found
FIRES: Fast Imaging and 3D Reconstruction of Archaeological Sherds
Sherds, as the most common artifacts uncovered during archaeological excavations, carry rich information about past human societies and therefore need to be accurately reconstructed and recorded digitally for analysis and preservation. Often hundreds of fragments are uncovered in a single day at an excavation site, far beyond the scanning capacity of existing imaging systems. Hence, there is high demand for an image acquisition system capable of imaging hundreds of fragments per day. In response to this demand, we developed a new system, dubbed FIRES, for Fast Imaging and 3D REconstruction of Sherds. The FIRES system consists of two main components. The first is an optimally designed fast image acquisition device capable of capturing over 700 sherds per day (in 8 working hours) in actual tests at an excavation site, one order of magnitude faster than existing systems. The second is an automatic pipeline for 3D reconstruction of the sherds from the captured images, achieving a reconstruction accuracy of 0.16 millimeters. The pipeline includes a novel batch matching algorithm that matches partial 3D scans of the front and back sides of the sherds, and a new ICP-type method that registers front and back sides sharing very narrow overlapping regions. Extensive validation in labs and testing at excavation sites demonstrated that our FIRES system provides the first fast, accurate, portable, and cost-effective solution for the task of imaging and 3D reconstruction of sherds in archaeological excavations.
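The batch-matching step above can be conveyed with a minimal sketch. This is an illustration only, not the paper's algorithm: it assumes each partial scan has been reduced to a small descriptor vector (e.g., projected area and boundary length, both hypothetical choices here), and it pairs front-side and back-side scans greedily by descriptor distance.

```python
# Illustrative sketch of batch matching, NOT the FIRES algorithm.
# Assumption: each partial scan is summarized by a descriptor vector
# (e.g., projected area, boundary length); these features are hypothetical.

def descriptor_distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_batches(front_descs, back_descs):
    """Greedily pair each front-side scan with the closest unused
    back-side scan; returns (front_index, back_index) pairs."""
    pairs = []
    used = set()
    for i, f in enumerate(front_descs):
        # pick the closest back-side descriptor not yet assigned
        j = min((k for k in range(len(back_descs)) if k not in used),
                key=lambda k: descriptor_distance(f, back_descs[k]))
        used.add(j)
        pairs.append((i, j))
    return pairs
```

A real system would use richer shape descriptors and a globally optimal assignment rather than this greedy pass.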
Morphometric Analysis through 3D Modelling of Bronze Age Stone Moulds from Central Sardinia
Stone moulds were basic elements of metallurgy during the Bronze Age, and their analysis and characterization are very important for improving knowledge of these artefacts and for typological classification. The stone moulds investigated in this study were found during an archaeological field survey in several Nuragic (Bronze Age) settlements in Central Sardinia. Recent studies have shown that photogrammetry can be effectively used for the 3D reconstruction of small and medium-sized archaeological finds, although producing high-quality digital replicas of ancient artefacts remains challenging due to their surface complexity and consistency. In this paper, we propose a multidisciplinary approach combining mineralogical (X-ray powder diffraction) and petrographic (thin section) analysis of the stone materials with an experimental photogrammetric method for 3D reconstruction from multi-view images, performed with recent software based on the CMPMVS algorithm. The photogrammetric image dataset was acquired using an experimental rig equipped with a 26.2 Mpix full-frame digital camera. We also assessed the accuracy of the reconstructed models to verify their precision and readability with respect to archaeological goals, providing an effective tool for a more detailed study of the geometric-dimensional aspects of the moulds. Furthermore, this paper demonstrates the potential of an integrated minero-petrographic and photogrammetric approach for the characterization of small artefacts, supporting more in-depth investigation in future typological comparisons and provenance studies.
Batch-based Model Registration for Fast 3D Sherd Reconstruction
3D reconstruction techniques have widely been used for digital documentation
of archaeological fragments. However, efficient digital capture of fragments
remains a challenge. In this work, we aim to develop a portable,
high-throughput, and accurate reconstruction system for efficient digitization
of fragments excavated in archaeological sites. To realize high-throughput
digitization of large numbers of objects, an effective strategy is to perform
scanning and reconstruction in batches. However, effective batch-based scanning
and reconstruction face two key challenges: 1) how to correlate partial scans
of the same object from multiple batch scans, and 2) how to register and
reconstruct complete models from partial scans that exhibit only small
overlaps. To tackle these two challenges, we develop a new batch-based matching
algorithm that pairs the front and back sides of the fragments, and a new
Bilateral Boundary ICP algorithm that can register partial scans sharing very
narrow overlapping regions. Extensive validation in labs and testing in
excavation sites demonstrate that these designs enable efficient batch-based
scanning for fragments. We show that such a batch-based scanning and
reconstruction pipeline can have immediate applications on digitizing sherds in
archaeological excavations. Our project page:
https://jiepengwang.github.io/FIRES/
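The registration idea behind the second challenge can be sketched with a minimal 2D point-to-point ICP. This is a generic textbook ICP, not the paper's Bilateral Boundary ICP, which additionally exploits the shared boundary between the front and back scans; the sketch only illustrates the iterate-match-align loop on which such methods build.

```python
import math

# Generic 2D point-to-point ICP sketch; NOT the Bilateral Boundary ICP
# from the paper, which specializes matching to narrow boundary overlaps.

def closest(p, pts):
    """Nearest neighbour of p among pts (brute force)."""
    return min(pts, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def icp_2d(src, dst, iters=20):
    """Rigidly align src to dst by iterated closest-point matching;
    returns the transformed src points."""
    cur = list(src)
    for _ in range(iters):
        corr = [closest(p, dst) for p in cur]
        n = len(cur)
        # centroids of the current points and their correspondences
        ax = sum(p[0] for p in cur) / n
        ay = sum(p[1] for p in cur) / n
        bx = sum(q[0] for q in corr) / n
        by = sum(q[1] for q in corr) / n
        # closed-form 2D rotation minimizing squared correspondence error
        sxx = sum((p[0] - ax) * (q[0] - bx) + (p[1] - ay) * (q[1] - by)
                  for p, q in zip(cur, corr))
        sxy = sum((p[0] - ax) * (q[1] - by) - (p[1] - ay) * (q[0] - bx)
                  for p, q in zip(cur, corr))
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        # rotate about the source centroid, then translate onto the target
        cur = [(c * (p[0] - ax) - s * (p[1] - ay) + bx,
                s * (p[0] - ax) + c * (p[1] - ay) + by) for p in cur]
    return cur
```

With scans that overlap only along a narrow rim, plain nearest-neighbour matching like this tends to lock onto wrong correspondences, which is precisely the failure mode the paper's boundary-aware variant addresses.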
Quantifying Depth of Field and Sharpness for Image-Based 3D Reconstruction of Heritage Objects
Image-based 3D reconstruction processing tools assume sharp focus across the entire object being imaged, but depth of field (DOF) can be a limitation when imaging small to medium-sized objects, resulting in variation in image sharpness with range from the camera. While DOF is well understood in the context of photographic imaging, and is considered during acquisition for image-based 3D reconstruction, an "acceptable" level of sharpness and the associated "circle of confusion" have not yet been quantified for the 3D case. The work described in this paper contributes to the understanding and quantification of acceptable sharpness by providing evidence of the influence of DOF on the 3D reconstruction of small to medium-sized museum objects. Spatial frequency analysis using established collections-photography imaging guidelines and targets is used to connect input image quality with 3D reconstruction output quality. Combining quantitative spatial frequency analysis with metrics from a series of comparative 3D reconstructions provides insights into the connection between DOF and output model quality. Lab-based quantification of DOF is used to investigate the influence of sharpness on the output 3D reconstruction, to better understand the effects of lens aperture, camera-to-object surface angle, and taking distance. The outcome provides evidence of the role of DOF in image-based 3D reconstruction, and it is briefly presented how masks derived from image content and depth maps can be used to remove unsharp image content and optimise structure-from-motion (SfM) and multi-view stereo (MVS) workflows.
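The DOF and circle-of-confusion quantities discussed above follow standard thin-lens photographic formulas, which can be sketched as a small calculator. The 0.030 mm circle of confusion is a common full-frame convention, not a value from this paper; what counts as "acceptable" for 3D reconstruction is exactly what the paper investigates.

```python
# Standard thin-lens depth-of-field formulas; the default circle of
# confusion (0.030 mm, a common full-frame convention) is an assumption,
# not a threshold established for 3D reconstruction.

def dof_limits(f_mm, n_stop, s_mm, coc_mm=0.030):
    """Near and far limits of acceptable sharpness, in mm.
    f_mm: focal length; n_stop: f-number; s_mm: focus distance."""
    # hyperfocal distance: focusing here makes the far limit infinite
    hyperfocal = f_mm ** 2 / (n_stop * coc_mm) + f_mm
    near = s_mm * (hyperfocal - f_mm) / (hyperfocal + s_mm - 2 * f_mm)
    if s_mm < hyperfocal:
        far = s_mm * (hyperfocal - f_mm) / (hyperfocal - s_mm)
    else:
        far = float('inf')
    return near, far
```

For example, a 50 mm lens at f/8 focused at 1 m yields roughly 0.92-1.10 m of acceptably sharp range, which shows why close-range object imaging so easily runs out of DOF.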
Object segmentation from low depth of field images and video sequences
This thesis addresses the problem of autonomous object segmentation. To do so
the proposed segmentation method uses some prior information, namely that the
image to be segmented will have a low depth of field and that the object of interest
will be more in focus than the background. To differentiate the object from the
background scene, a multiscale wavelet based assessment is proposed. The focus
assessment is used to generate a focus intensity map, and a sparse fields level set
implementation of active contours is used to segment the object of interest. The
initial contour is generated using a grid based technique.
The method is extended to segment low depth of field video sequences with
each successive initialisation for the active contours generated from the binary dilation
of the previous frame's segmentation. Experimental results show good segmentations
can be achieved with a variety of different images, video sequences, and
objects, with no user interaction or input.
The method is applied to two different areas. In the first the segmentations
are used to automatically generate trimaps for use with matting algorithms. In the
second, the method is used as part of a shape from silhouettes 3D object reconstruction
system, replacing the need for a constrained background when generating
silhouettes. In addition, avoiding thresholding when performing the silhouette
segmentation allows objects with dark components or areas to be segmented accurately.
Some examples of 3D models generated using silhouettes are shown.
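The focus intensity map at the heart of the method above can be illustrated with a much simpler focus measure than the thesis's multiscale wavelet assessment: a block-wise sum of squared Laplacian responses, which is high where local high-frequency content (i.e., focus) is high. The block size and measure are stand-ins, not the thesis's choices.

```python
# Simplified focus-measure sketch: block-wise Laplacian energy as a
# stand-in for the thesis's multiscale wavelet focus assessment.

def laplacian_energy(img, y0, x0, size):
    """Sum of squared 4-neighbour Laplacian responses in one block."""
    e = 0.0
    for y in range(max(y0, 1), min(y0 + size, len(img) - 1)):
        for x in range(max(x0, 1), min(x0 + size, len(img[0]) - 1)):
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            e += lap * lap
    return e

def focus_map(img, block=4):
    """Block-wise focus intensity map over a grayscale image
    (list of rows); higher values mean more in-focus content."""
    h, w = len(img), len(img[0])
    return [[laplacian_energy(img, y, x, block)
             for x in range(0, w, block)]
            for y in range(0, h, block)]
```

In a low-DOF image the in-focus foreground object produces high map values and the defocused background low ones, giving the coarse object/background separation that the level-set contour then refines.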
Depth Enhancement and Surface Reconstruction with RGB/D Sequence
Surface reconstruction and 3D modeling is a challenging task that has been explored for decades by the computer vision, computer graphics, and machine learning communities. It is fundamental to many applications such as robot navigation, animation and scene understanding, industrial control, and medical diagnosis. In this dissertation, I take advantage of consumer depth sensors for surface reconstruction. Considering their limited ability to capture detailed surface geometry, a depth enhancement approach is first proposed to recover small, rich geometric details from captured depth and color sequences. In addition to enhancing spatial resolution, I present a hybrid camera to improve the temporal resolution of consumer depth sensors and propose an optimization framework to capture high-speed motion and generate high-speed depth streams. Given the partial scans from the depth sensor, we also develop a novel fusion approach to build complete and watertight human models with a template-guided registration method. Finally, the problem of surface reconstruction for non-Lambertian objects, on which current depth sensors fail, is addressed by exploiting multi-view images captured with a hand-held color camera, and we propose a visual-hull-based approach to recover the 3D model.
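A common building block for color-guided depth enhancement of the kind described above is joint bilateral filtering, where depth values are averaged with weights that combine spatial proximity and color similarity so that smoothing does not cross color edges. The sketch below is illustrative only; the dissertation's actual enhancement is a more involved optimization, and the kernel parameters here are arbitrary.

```python
import math

# Illustrative joint bilateral filter for color-guided depth smoothing;
# a common building block, NOT the dissertation's optimization method.
# radius, sigma_s (spatial) and sigma_r (color range) are arbitrary here.

def joint_bilateral(depth, color, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Smooth a depth map (list of rows) using spatial weights and
    color-similarity weights from a registered grayscale guide image."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial Gaussian weight
                        ws = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # color-similarity weight from the guide image
                        dc = color[ny][nx] - color[y][x]
                        wr = math.exp(-(dc * dc) / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ny][nx]
                        wsum += ws * wr
            out[y][x] = acc / wsum
    return out
```

Noise inside a uniformly colored region is averaged away, while depth discontinuities that coincide with color edges are preserved because cross-edge weights collapse to nearly zero.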