Photogrammetry and Medieval Architecture. Using Black and White Analogic Photographs for Reconstructing the Foundations of the Lost Rood Screen at Santa Croce, Florence
In this paper, photogrammetric techniques are successfully applied to historic black-and-white analog photographs to recover previously inaccessible architectural and archaeological information. The chosen case study is the Franciscan Basilica of Santa Croce in Florence, Italy. A photogrammetric pipeline was applied to a series of b/w negatives portraying the archaeological excavations carried out in 1967–1969, after the traumatic flood of the river Arno in 1966 that severely damaged the city centre of Florence and, in particular, the Santa Croce monumental site. The final aim of this operation is to provide solid evidence for the virtual reconstruction of the basilica's lost rood screen, the current subject of the PhD research of one of the authors (Giovanni Pescarmona) at the University of Florence. The foundations uncovered during the excavations of the 1960s are among the most important clues for a convincing reconstruction of the structure's original plan. Advanced photogrammetric techniques, combined with LiDAR scanning, make it possible to uncover datasets that were previously inaccessible to scholars, opening new paths of research. This interdisciplinary approach, combining traditional art-historical research methods with state-of-the-art computational tools, seeks to bridge the gap between areas of research that still do not communicate enough with each other, defining new frameworks in the field of Digital Art History.
Improving Performance of Feature Extraction in SfM Algorithms for 3D Sparse Point Clouds
Abstract. The use of Structure-from-Motion (SfM) algorithms is common practice for obtaining a rapid photogrammetric reconstruction. However, the performance of these algorithms is limited by the fact that, in some conditions, the resulting point clouds have low density. This is the case when processing material from historical archives, such as photographs and videos, which often yields only sparse point clouds because the images lack the information needed for a dense photogrammetric reconstruction. This paper explores ways to improve the performance of open-source SfM algorithms in order to guarantee the presence of strategic feature points in the resulting point cloud, even if it is sparse. To reach this objective, a photogrammetric workflow for processing historical images is proposed. The first part of the workflow presents a method that allows the manual selection of feature points during the photogrammetric process. The second part evaluates the metric quality of the reconstruction by comparison with a point cloud of a different density. The workflow was applied to two case studies. Transformations of the wall paintings of the Karanlık church in Cappadocia were analysed by comparing a 3D model derived from archive photographs with a recent survey. Then the state of the Komise building in Japan was compared before and after restoration. The findings show that the method allows metric scaling and evaluation of the model even in poor conditions and when only low-density point clouds are available. Moreover, this tool should be of great use to both art and architecture historians and geomatics experts for studying the evolution of Cultural Heritage.
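The metric comparison between a sparse reconstruction and a denser reference cloud can be sketched as a cloud-to-cloud nearest-neighbour distance check. This is a minimal illustration, not the paper's actual workflow; the function name and the synthetic data are invented:

```python
# Hypothetical sketch: compare a sparse SfM cloud against a denser
# reference survey by nearest-neighbour distances (one simple way to
# assess metric quality when the two clouds have different densities).
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(sparse_pts, reference_pts):
    """For each sparse point, distance to its nearest reference point."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(sparse_pts, k=1)
    return dists

rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, size=(5000, 3))                    # dense survey stand-in
sparse = reference[rng.choice(5000, 200)] + rng.normal(0, 0.005, (200, 3))

d = cloud_to_cloud_distances(sparse, reference)
print(f"mean {d.mean():.4f}, 95th percentile {np.percentile(d, 95):.4f}")
```

Summary statistics of these distances (mean, percentiles) give a quick metric-quality estimate of the sparse reconstruction against the reference.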
Multi-Sample Consensus Driven Unsupervised Normal Estimation for 3D Point Clouds
Deep normal estimators have made great strides on synthetic benchmarks. Unfortunately, their performance drops dramatically on real scan data, since they are supervised only on synthetic datasets. Point-wise annotation of ground-truth normals is inefficient and error-prone, which makes it practically impossible to build accurate real-scan datasets for supervised deep learning. To overcome this challenge, we propose a multi-sample consensus paradigm for unsupervised normal estimation. The paradigm consists of multi-candidate sampling, candidate rejection, and mode determination; the latter two are driven by neighbor-point consensus and candidate consensus, respectively. Two primary implementations of the paradigm, MSUNE and MSUNE-Net, are proposed. MSUNE minimizes a candidate-consensus loss during mode determination. As a robust optimization method, it outperforms cutting-edge supervised deep learning methods on real data, at the cost of a longer runtime for sampling enough candidate normals for each query point. MSUNE-Net, to our knowledge the first unsupervised deep normal estimator, pushes multi-sample consensus significantly further: it moves the three online stages of MSUNE to offline training, making its inference 100 times faster. It is also more accurate, since in MSUNE-Net the candidates of query points from similar patches implicitly form a sufficiently large candidate set. Comprehensive experiments demonstrate that the two proposed unsupervised methods are noticeably superior to some supervised deep normal estimators on the most common synthetic dataset. More importantly, they show better generalization and outperform all state-of-the-art conventional and deep methods on three real datasets: NYUV2, KITTI, and a dataset from PCV [1].
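The sampling/rejection/consensus idea can be illustrated with a toy sketch (not the authors' MSUNE implementation; parameters, tolerances, and data below are invented): sample candidate normals from random neighbor triplets, score each candidate by how many neighbors lie near its plane, and keep the winner.

```python
# Toy multi-sample consensus normal estimation (illustrative only).
import numpy as np

def consensus_normal(query, neighbors, n_candidates=50, tol=0.01, seed=0):
    rng = np.random.default_rng(seed)
    best_normal, best_votes = None, -1
    for _ in range(n_candidates):
        # Candidate sampling: a normal from a random neighbor triplet.
        a, b, c = neighbors[rng.choice(len(neighbors), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # reject degenerate (near-collinear) triplets
        n /= norm
        # Neighbor consensus: votes = neighbors within tol of the plane.
        votes = np.sum(np.abs((neighbors - query) @ n) < tol)
        if votes > best_votes:
            best_normal, best_votes = n, votes
    return best_normal

# Noisy plane z = 0 with a few outliers; the true normal is (0, 0, 1).
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (100, 2)), rng.normal(0, 0.002, 100)]
pts[:5, 2] += 0.5  # outlier points off the plane
n = consensus_normal(np.zeros(3), pts)
print(np.abs(n[2]))  # close to 1
```

The consensus vote makes the estimate robust: candidates fitted through outlier points attract fewer votes and are discarded, which is the intuition behind the rejection and mode-determination stages.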
Point cloud filtering
Trabalho de conclusão de curso (undergraduate final project), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2020.
This work reviews several filtering methods for point clouds. The main objective
is to recreate, in a virtual environment, small objects sampled with a 3D scanner. The scanning device was developed at the university. It consists of a VL53L0X laser-ranging sensor, based on time-of-flight (ToF) technology, and two stepper motors, one to move the sensor and one to move the object.
Five filtering principles are presented: statistical filtering, neighborhood-based filtering, projection-based filtering, signal-processing methods, and methods based on partial differential equations. The filtering methods were tested with the MeshLab software: the Moving Least Squares algorithm, the Laplacian operator, the Taubin operator, and point-cloud simplification were applied. In the end, the reconstruction of smooth surfaces from the scanner samples was possible.
Two point clouds were tested: one created by computer and one sampled from a real object with the laboratory's 3D scanner. By varying the filter parameters, we performed a qualitative analysis of both results.
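The statistical filtering principle reviewed above can be sketched as a simple statistical outlier removal using the common mean-neighbor-distance criterion. This is a hypothetical illustration, not the thesis code; the function name, parameters, and synthetic data are invented:

```python
# Illustrative statistical outlier removal: drop points whose mean
# distance to their k nearest neighbors exceeds mean + n_std * std
# computed over the whole cloud.
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=8, n_std=2.0):
    tree = cKDTree(points)
    # k + 1 because each point's nearest neighbor is itself (distance 0).
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + n_std * mean_d.std()
    return points[keep], keep

rng = np.random.default_rng(0)
surface = rng.uniform(0, 1, (500, 3)) * [1, 1, 0.01]  # thin, dense slab
noise = rng.uniform(-1, 2, (20, 3))                   # scattered outliers
cloud = np.vstack([surface, noise])
filtered, keep = statistical_outlier_removal(cloud)
print(len(cloud), "->", len(filtered))
```

Dense surface points have small mean neighbor distances and survive, while isolated outliers exceed the threshold and are removed; the neighborhood size `k` and the threshold multiplier `n_std` trade off aggressiveness against detail loss.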
Image-Based Modeling of Bridges and Its Applications to Evaluating Resiliency of Transportation Networks
Modern urban areas depend heavily on transportation networks to sustain their economic life. Hence, when vital components of a regional network are disrupted, economic losses are inevitable. As evidenced by the 1989 Loma Prieta and 1994 Northridge earthquakes, the seismic damage experienced by bridges alone results in extensive traffic delays and rerouting, not only hindering emergency response but also causing indirect economic losses that far surpass the direct cost of damage to infrastructure. Nevertheless, in many areas of the U.S., transportation networks lack the resilience required to sustain the potential demands of natural hazards. Traditional hazard-assessment methods, in theory, provide the tools required for predicting the vulnerabilities associated with natural hazards; nonetheless, because of their abstractions of the complex infrastructure and its coupled regional behavior, they often fall short of that expectation. This study proposes a semi-automated, image-based model-generation framework for producing structure-specific models and fragility functions of bridges. The framework fuses geometric and semantic information extracted from Google Street View images with centerline curve geometry, surface topology, and various relevant metadata to construct highly accurate geometric representations of bridges. Then, using class statistics available in the literature for bridge structural properties, the framework generates structural models. Both the geometry-extraction procedure and the structural modeling method are validated by comparison against the structural model of a real-life bridge developed from as-built drawings. In principle, these models can be used to assess physical damage for any type of hazard, but this study focuses on seismic applications.
To relate ground-shaking demands to the resulting damage, bridge-specific fragility functions are developed for 100 bridge structures in the immediate surroundings of the Ports of Los Angeles and Long Beach. Using these fragility curves, the physical damage resulting from a magnitude 7.3 scenario earthquake on the Palos Verdes fault is predicted. Subsequently, the effects of bridge-infrastructure damage on transportation patterns in the Los Angeles metropolitan area are investigated in terms of various resilience metrics.
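Fragility functions of the kind developed in such studies are commonly modeled as lognormal CDFs of an intensity measure. A minimal sketch of that standard form follows; the median capacity `theta` and dispersion `beta` below are made-up illustrative values, not the study's bridge-specific parameters:

```python
# Lognormal fragility function: P(damage state exceeded | intensity im)
# = Phi(ln(im / theta) / beta), where theta is the median capacity and
# beta the lognormal dispersion.
from math import erf, log, sqrt

def fragility(im, theta, beta):
    """Exceedance probability of a damage state at intensity measure im."""
    z = log(im / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# At im == theta the exceedance probability is exactly 0.5.
print(fragility(0.4, theta=0.4, beta=0.6))  # → 0.5
```

Evaluating such curves at the intensity measure predicted for each bridge in a scenario earthquake yields per-bridge damage probabilities, which can then feed a network-level resilience analysis.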