7,007 research outputs found

    Documenting Bronze Age Akrotiri on Thera using laser scanning, image-based modelling and geophysical prospection

    Get PDF
    The excavated architecture of the exceptional prehistoric site of Akrotiri on the Greek island of Thera/Santorini, located on an active volcano in an earthquake-prone area, is endangered by gradual decay, accidental damage and seismic shocks. Therefore, in 2013 and 2014 a digital documentation project was conducted with the support of the National Geographic Society in order to generate a detailed digital model of Akrotiri's architecture using terrestrial laser scanning and image-based modelling. Additionally, non-invasive geophysical prospection was tested in order to investigate its potential for exploring and mapping still-buried archaeological remains. This article describes the project and the generated results.

    A 4D information system for the exploration of multitemporal images and maps using photogrammetry, web technologies and VR/AR

    Full text link
    [EN] This contribution presents the comparison, investigation and implementation of different access strategies for multimodal data. The first part of the research is theoretical, contrasting and explaining the terms conventional access, virtual archival access and virtual museums while also referencing related work. In particular, issues that still persist in repositories, such as ambiguous or missing metadata, are pointed out. The second part explains the practical implementation of a workflow from a large image repository to various four-dimensional (4D) applications. It covers mainly the filtering of images and, subsequently, their orientation. The relevant images are selected partly manually and partly with deep convolutional neural networks for image classification. Photogrammetric methods are then used to find the relative orientation between image pairs in a projective frame. For this purpose, an adapted Structure from Motion (SfM) workflow is presented in which the step of feature detection and matching is replaced by the Radiation-Invariant Feature Transform (RIFT) and Matching On Demand with View Synthesis (MODS). Both methods were evaluated on a benchmark dataset and outperformed other approaches. Subsequently, the oriented images are placed interactively, and in the future automatically, in a 4D browser application showing images, maps and building models. Further usage scenarios are presented in several Virtual Reality (VR) and Augmented Reality (AR) applications. The new representation of the archival data enables spatial and temporal browsing of repositories, opening up innovative research perspectives and the uncovering of historical details.
    Highlights: strategies for a completely automated workflow from image repositories to four-dimensional (4D) access approaches; the orientation of historical images using adapted and evaluated feature matching methods; 4D access methods for historical images and 3D models using web technologies and Virtual Reality (VR)/Augmented Reality (AR).
    The research upon which this paper is based is part of the activities of the junior research group UrbanHistory4D, which has received funding from the German Federal Ministry of Education and Research under grant agreement No 01UG1630. This work was supported by the German Federal Ministry of Education and Research (BMBF, 01IS18026BA-F) by funding the competence center for Big Data “ScaDS Dresden/Leipzig”.
    Maiwald, F.; Bruschke, J.; Lehmann, C.; Niebling, F. (2019). Un sistema de información 4D para la exploración de imágenes y mapas multitemporales utilizando fotogrametría, tecnologías web y VR/AR. Virtual Archaeology Review, 10(21), 1-13. https://doi.org/10.4995/var.2019.11867
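    A minimal sketch of the pairwise relative-orientation step described in the abstract above, assuming OpenCV and using SIFT with a ratio test as a stand-in for the RIFT/MODS matching named in the paper; the file names are placeholders.

```python
# Minimal sketch: relative orientation of an image pair in a projective frame.
# SIFT stands in for the RIFT/MODS matching used in the paper; paths are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("historical.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("contemporary.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on the two nearest neighbours of each descriptor
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Robustly estimate the fundamental matrix; the inliers define the relative orientation
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
print(f"{int(inlier_mask.sum())} inlier correspondences of {len(good)} candidate matches")
```

    The fundamental matrix only fixes the image pair in a projective frame, which is why the abstract speaks of relative orientation rather than of a metric reconstruction.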

    Recuperación de pares estereoscópicos antiguos para la generación de modelos tridimensionales digitales de estados pasados de edificios históricos

    Get PDF
    [EN] Three-dimensional models with photographic textures have become a common product for the study and dissemination of heritage elements. Interest in cultural heritage also includes its evolution over time; therefore, apart from 3D models of the current state, it is interesting to be able to generate models representing how buildings were in the past. To that end, it is necessary to resort to archive information corresponding to the moments that we want to visualize. This text analyses the possibilities of generating 3D surface models with photographic textures from old collections of analogue (film) negatives coming from terrestrial stereoscopic photogrammetry surveys of historic buildings. The case studies presented refer to the geometric documentation of a small hermitage (surveyed in 1996) and two sections of a wall (surveyed in 2000). The procedure starts with the digitization of the film negatives and the processing of the resulting images, after which a combination of different methods for 3D reconstruction and texture wrapping is applied: techniques working simultaneously with several images (such as Structure from Motion, SfM, algorithms) and single-image techniques (such as reconstruction based on vanishing points). The features of the obtained models are then described in terms of geometric accuracy, completeness and aesthetic quality. In this way, it is possible to establish the real applicability of the models for the aforementioned historical studies and dissemination purposes. The text also draws attention to the importance of preserving the documentary heritage available in the collections of negatives in archival custody, and to the increasing difficulty of using them due to: (1) problems of access and physical conservation, (2) obsolescence of the equipment for scanning and stereoplotting, and (3) the fact that the software for processing the digitized photographs is discontinued.
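    As a hedged illustration of the single-image techniques mentioned above (the paper's own approach is vanishing-point based reconstruction), the sketch below rectifies a planar facade from one digitized negative with a plane-to-plane homography; the corner coordinates, real-world extent and file names are hypothetical.

```python
# Illustrative sketch: rectifying a planar facade from a single digitized negative.
# The pixel corners and real-world extent below are hypothetical placeholder values.
import cv2
import numpy as np

image = cv2.imread("scanned_negative.tif")

# Four image corners of a facade quad (px) and their target rectangle (mm, at 2 px/mm)
src = np.float32([[412, 380], [1730, 355], [1755, 1490], [395, 1510]])
width_mm, height_mm, scale = 600, 450, 2
dst = np.float32([[0, 0], [width_mm * scale, 0],
                  [width_mm * scale, height_mm * scale], [0, height_mm * scale]])

H = cv2.getPerspectiveTransform(src, dst)          # plane-to-plane homography
ortho = cv2.warpPerspective(image, H, (width_mm * scale, height_mm * scale))
cv2.imwrite("facade_texture.png", ortho)           # rectified texture of the facade plane
```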

    Virtual Exploration of Underwater Archaeological Sites : Visualization and Interaction in Mixed Reality Environments

    Get PDF
    This paper describes the ongoing developments in photogrammetry and Mixed Reality for the VENUS European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The basic idea behind using Mixed Reality techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in the way they present information. General-public activities emphasize the visual and auditory realism of the reconstruction, while archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which led to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process as well as on issues of both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain a high-quality terrain reproduction. The second point concerns the development of the Virtual and Augmented Reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third point concerns the development of the VR demonstrator for the general public, aimed at creating awareness of both the artifacts that were found and the process by which they were discovered, by recreating the dive process from ship to seabed.

    A window to the past through modern urban environments: Developing a photogrammetric workflow for the orientation parameter estimation of historical images

    Get PDF
    The ongoing process of digitization in archives is providing access to ever-growing historical image collections. In many of these repositories, images can typically be viewed in a list or gallery view. Due to the growing number of digitized objects, this type of visualization is becoming increasingly confusing. Among other things, it is difficult to determine how many photographs show a particular object, and spatial information can only be communicated via metadata. Within the scope of this thesis, research is conducted on the automated determination and provision of this spatial data. Enhanced visualization options make this information more easily accessible to scientists as well as citizens. Different types of visualizations can be presented in three-dimensional (3D), Virtual Reality (VR) or Augmented Reality (AR) applications. However, applications of this type require the estimation of the photographer's point of view. In the photogrammetric context, this is referred to as estimating the interior and exterior orientation parameters of the camera. For the determination of the orientation parameters of single images, the established methods of Direct Linear Transformation (DLT) and photogrammetric space resection are available. Using these methods requires the assignment of measured object points to their homologous image points. This is feasible for single images but quickly becomes impractical for the large number of images available in archives. Thus, for larger image collections, the Structure-from-Motion (SfM) method is usually chosen, which allows the simultaneous estimation of the interior as well as the exterior orientation of the cameras. While this method yields good results especially for sequential, contemporary image data, its application to unsorted historical photographs poses a major challenge. In the context of this work, which is mainly limited to scenarios of urban terrestrial photographs, the reasons for failure of the SfM process are identified. In contrast to sequential image collections, pairs of images from different points in time or from varying viewpoints show huge differences in scene representation, such as deviations in the lighting situation, building state, or seasonal changes. Since homologous image points have to be found automatically in image pairs or image sequences in the feature matching procedure of SfM, these image differences pose the most complex problem. In order to test different feature matching methods, it is necessary to use a pre-oriented historical dataset. Since such a benchmark dataset did not yet exist, eight historical image triples (corresponding to 24 image pairs) are oriented in this work by manual selection of homologous image points. This dataset allows the evaluation of newly published feature matching methods. The initial methods used, which are based on algorithmic procedures for feature matching (e.g., the Scale Invariant Feature Transform, SIFT), provide satisfactory results for only a few of the image pairs in this dataset. By introducing methods that use neural networks for feature detection and feature description, homologous features can be reliably found for a large fraction of the image pairs in the benchmark dataset. In addition to a successful feature matching strategy, determining the camera orientation requires an initial estimate of the principal distance. For historical images, however, the principal distance cannot be determined directly, as the camera information is usually lost during the digitization of the analog original. A possible solution to this problem is to use three vanishing points that are automatically detected in the historical image and from which the principal distance can then be determined. The combination of principal distance estimation and robust feature matching is integrated into the SfM process and allows the determination of the interior and exterior camera orientation parameters of historical images. Based on these results, a workflow is designed that allows archives to be connected directly to 3D applications. A search query in archives is usually performed using keywords, which have to be assigned to the corresponding object as metadata. Therefore, a keyword search for a specific building also returns hits on drawings, paintings, events, interior or detailed views directly connected to this building. However, for the successful application of SfM in an urban context, primarily the photographic exterior view of the building is of interest. While the images for a single building can be sorted by hand, this process is too time-consuming for multiple buildings. Therefore, in collaboration with the Competence Center for Scalable Data Services and Solutions (ScaDS), an approach is developed to filter historical photographs by image similarity. This method reliably enables the search for content-similar views via the selection of one or more query images. By linking this content-based image retrieval with the SfM approach, the automatic determination of camera parameters for a large number of historical photographs becomes possible. The developed method represents a significant improvement over commercial and open-source standard SfM solutions. The result of this work is a complete workflow from archive to application that automatically filters images and calculates the camera parameters. The expected accuracy of a few meters for the camera position is sufficient for the applications presented in this work but offers further potential for improvement. A connection to archives, which will automatically exchange photographs and positions via interfaces, is currently under development. This makes it possible to retrieve interior and exterior orientation parameters directly from a historical photograph as metadata, which opens up new fields of research.
    Contents: 1 Introduction; 2 Generation of a benchmark dataset using historical photographs for the evaluation of feature matching methods; 3 Photogrammetry as a link between image repository and 4D applications; 4 An adapted Structure-from-Motion workflow for the orientation of historical images; 5 Fully automated pose estimation of historical images; 6 Related publications; 7 Synthesis; 8 Appendix.
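    A compact numerical sketch of the vanishing-point approach summarised above, under the standard assumptions of three mutually orthogonal vanishing directions, square pixels and zero skew: the principal point is the orthocenter of the vanishing-point triangle, and the principal distance follows from the orthogonality constraint (v1 − p)·(v2 − p) = −f². The vanishing-point coordinates below are hypothetical.

```python
# Sketch: principal point and principal distance from three orthogonal vanishing points.
# Vanishing-point pixel coordinates are hypothetical; a real pipeline would detect them
# from line segments in the historical image.
import numpy as np

v1 = np.array([2450.0, 910.0])    # vanishing point of one horizontal building direction
v2 = np.array([-780.0, 1020.0])   # vanishing point of the orthogonal horizontal direction
v3 = np.array([830.0, -5400.0])   # vertical vanishing point

def orthocenter(a, b, c):
    # Intersect two altitudes: through a, perpendicular to (c - b); through b, to (c - a).
    A = np.array([c - b, c - a])
    rhs = np.array([np.dot(c - b, a), np.dot(c - a, b)])
    return np.linalg.solve(A, rhs)

p = orthocenter(v1, v2, v3)               # principal point (orthocenter of the triangle)
f = np.sqrt(-np.dot(v1 - p, v2 - p))      # principal distance in pixels
print(f"principal point: {p}, principal distance: {f:.1f} px")
```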
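    As a generic stand-in for the content-based image retrieval step described in the abstract (not the thesis's own retrieval pipelines), the following sketch embeds photographs with a pretrained CNN and ranks archive images by cosine similarity to a query view; the backbone choice and file names are assumptions.

```python
# Generic sketch of content-based image retrieval with CNN embeddings.
# Backbone, paths and the ranking criterion are assumptions, not the thesis's pipeline.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()   # keep the 2048-d global feature instead of class scores
backbone.eval()

@torch.no_grad()
def embed(path):
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(x), dim=1).squeeze(0)

query = embed("query_building.jpg")                       # hand-picked exterior view
archive = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]   # placeholder archive images
ranked = sorted(archive, key=lambda p: float(query @ embed(p)), reverse=True)
print(ranked)   # most content-similar archive images first
```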

    An approach for precise 2D/3D semantic annotation of spatially-oriented images for in-situ visualization applications

    Get PDF
    Today's technologies provide innovative tools that increase our knowledge of historic monuments in the field of preservation and valuation of cultural heritage. These tools are aimed at helping experts create, enrich and share information about historical buildings. Among the various documentary sources, photographs contain a high level of detail about shapes and colors. With the development of image analysis and image-based modeling techniques, large sets of images can be spatially oriented towards a digital mock-up. For these reasons, digital photographs prove to be an easy-to-use, affordable and flexible support for heritage documentation. This article presents, as a first step, an approach for 2D/3D semantic annotation of a set of spatially oriented photographs (whose positions and orientations in space are automatically estimated). As a second step, we focus on a method for displaying those annotations on new images acquired in situ by mobile devices. Firstly, an automated image-based reconstruction method produces 3D information (specifically, 3D coordinates) by processing a large image set. Then the images are semantically annotated, and a process uses the previously generated 3D information inherent to the images to transfer the annotations. As a consequence, this protocol provides a simple way to finely annotate a large quantity of images at once instead of one by one. As those image annotations are directly linked to 3D information, they can be stored as 3D files. To bring up on screen the information related to a building, the user takes a picture in situ. An image processing method estimates the orientation parameters of this new photograph within the already oriented image base. The annotations can then be precisely projected onto the oriented picture and sent back to the user. In this way, a continuity of information can be established from the initial acquisition to the in situ visualization.
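    A hedged sketch of the projection step underlying the annotation transfer described above, assuming the new photograph's orientation has already been estimated: 3D annotation coordinates are mapped onto the oriented picture with the standard pinhole model x ~ K[R|t]X. The intrinsics, pose and 3D points below are placeholders.

```python
# Sketch: projecting 3D annotation points into a newly oriented photograph.
# K, R, t and the 3D points are placeholder values; in the described workflow they
# come from the image-based reconstruction and the pose estimated for the new photo.
import numpy as np

K = np.array([[1500.0, 0.0, 960.0],     # principal distance and principal point (px)
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # rotation of the new camera (placeholder)
t = np.array([[0.0], [0.0], [5.0]])     # translation of the new camera (placeholder)

# 3D coordinates attached to semantic annotations (placeholder points on a facade)
X = np.array([[1.0, 0.5, 0.0],
              [2.0, 1.5, 0.2],
              [0.5, 2.0, -0.1]])

P = K @ np.hstack([R, t])                               # 3x4 projection matrix
X_h = np.hstack([X, np.ones((len(X), 1))])              # homogeneous 3D points
x_h = (P @ X_h.T).T
pixels = x_h[:, :2] / x_h[:, 2:3]                       # perspective division
print(pixels)   # 2D positions where the annotations are drawn on the new image
```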

    Development and Application of a Performance and Operational Feasibility Guide to Facilitate Adoption of Soil Moisture Sensors

    Get PDF
    Soil moisture sensors can be effective and promising decision-making tools for diverse applications and audiences, including agricultural managers, irrigation practitioners, and researchers. Nevertheless, there is immense adoption potential in the United States, with only 1.2 in 10 farms nationally using soil moisture sensors to decide when to irrigate; this number is much lower at the global scale. Increased adoption is likely hindered by a lack of scientific support in the needs assessment, selection, suitability, and use of these sensors. Here, through extensive field research, we address the operational feasibility of soil moisture sensors, an aspect that has been overlooked in the past, and integrate it with their performance accuracy in order to develop a quantitative framework that guides users in selecting the sensors best suited to varying applications. These evaluations were conducted for nine commercially available sensors in silt loam and loamy sand soils, in irrigated cropland and rainfed grassland, for the two installation orientations typically used [sensing component parallel (horizontal) or perpendicular (vertical) to the ground surface]. All the sensors were assessed for their aptness in terms of cost, ease of operation, convenience of telemetry, and performance accuracy. The best sensors under each soil condition, sensor orientation, and user application (research versus agricultural production) were identified. The step-by-step guide presented here will serve as an unprecedented and holistic adoption-assisting resource and can be extended to other sensors as well.
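    Purely as an illustration of how a quantitative selection guide of this kind might be applied (the criteria, weights and scores below are hypothetical and do not reproduce the published framework), a weighted score can combine performance accuracy with the operational-feasibility aspects mentioned above.

```python
# Illustrative weighted scoring of soil moisture sensors; all numbers are hypothetical
# and do not reproduce the published guide.
criteria_weights = {"accuracy": 0.4, "cost": 0.2, "ease_of_operation": 0.2, "telemetry": 0.2}

# Scores on a 0-10 scale for two made-up sensors under one soil/orientation scenario
sensors = {
    "sensor_A": {"accuracy": 8, "cost": 5, "ease_of_operation": 7, "telemetry": 9},
    "sensor_B": {"accuracy": 6, "cost": 9, "ease_of_operation": 8, "telemetry": 4},
}

def weighted_score(scores, weights):
    # Sum of criterion scores weighted by their relative importance
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(sensors, key=lambda s: weighted_score(sensors[s], criteria_weights), reverse=True)
for name in ranking:
    print(name, round(weighted_score(sensors[name], criteria_weights), 2))
```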

    El modelado 3D en el registro y la restauración de una tumba megalítica

    Get PDF
    [EN] This article aims at showing some potential applications of geomatics to works in which excavation and restoration are carried out simultaneously, so that the three-dimensional layout of the monument undergoes continuous and important changes throughout the intervention. The article offers an overview of several works in geometric documentation and three-dimensional modelling carried out during the archaeological excavations and the restoration of a megalithic monument, the dolmen of Alto de la Huesera, from 2010 to 2014. The activities described here encompass a wide range of goals, including marking out the excavation grid, the geometric recording of the burials, the three-dimensional modelling of the slabs and the surrounding mound, the virtual visualization and checking of possible reconstructions before undertaking the actual rearrangement of the components on site, as well as the classification and archiving of the information so as to maintain the traceability of the tasks accomplished during this period.