
    Reconstructing the Past in 3D Using Historical Aerial Imagery

    Historical aerial film images are a valuable record of the past and are useful as a baseline for change detection and land-cover analysis. To be useful in GIS analysis, the images must be oriented to a spatial reference system. This is challenging because historical imagery is often missing flight and camera information. Traditional photogrammetric processing techniques exist to overcome these challenges, but they require specialized knowledge, time, and expense to complete. Because of this, many collections of historical images are left unprocessed. This project produced a method to quickly standardize the photos, spatially orient them, correct them for distortion effects, and extract a digital surface model from the overlapping image series using Pix4D Professional. The horizontal accuracy met National Map Accuracy Standards when the Pix4D process was combined with traditional georeferencing. The workflow was faster than traditional methods due to economies of scale in the new process.
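    The abstract does not detail the georeferencing step, but the following minimal sketch shows what "traditional georeferencing" of a scanned frame can look like with GDAL's Python bindings. The file names, EPSG code, and all GCP coordinates are hypothetical.

```python
# Sketch: attaching ground control points (GCPs) to a scanned aerial frame
# and warping it into a spatial reference system with GDAL. Paths, CRS, and
# GCP coordinates are placeholders; the project's actual workflow used Pix4D.
from osgeo import gdal, osr

srs = osr.SpatialReference()
srs.ImportFromEPSG(26915)  # example CRS: NAD83 / UTM zone 15N

# Each GCP maps a pixel/line position in the scan to map coordinates (x, y, z).
gcps = [
    gdal.GCP(481150.0, 4980400.0, 0.0, 1203.5, 880.2),
    gdal.GCP(483920.0, 4980150.0, 0.0, 7410.1, 955.7),
    gdal.GCP(483700.0, 4977600.0, 0.0, 7188.4, 6702.9),
    gdal.GCP(481300.0, 4977890.0, 0.0, 1322.0, 6488.3),
]

# Attach the GCPs, then resample to a north-up georeferenced GeoTIFF.
gdal.Translate("frame_gcps.tif", "scanned_frame.tif",
               GCPs=gcps, outputSRS=srs.ExportToWkt())
gdal.Warp("frame_georef.tif", "frame_gcps.tif",
          dstSRS=srs.ExportToWkt(), resampleAlg="bilinear")
```

    With enough well-distributed GCPs, passing tps=True to gdal.Warp fits a thin plate spline, which can absorb film and lens distortion that a low-order polynomial cannot.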

    Photogrammetric suite to manage the survey workflow in challenging environments and conditions

    The present work is intended to provide new and innovative instruments to support the photogrammetric survey workflow during all its phases. A suite of tools has been conceived to manage the planning, acquisition, post-processing and restitution steps, with particular attention to the rigorousness of the approach and to the final precision. The main focus of the research has been the implementation of the tool MAGO, standing for Adaptive Mesh for Orthophoto Generation. Its novelty lies in the ability to automatically reconstruct “unrolled” orthophotos of adjacent façades of a building using the point cloud, instead of the mesh, as the input source for the orthophoto reconstruction. The second tool has been conceived as a photogrammetric procedure based on Bundle Block Adjustment. The same issue is analysed from two mirrored perspectives: on the one hand, the use of moving cameras in a static scenario in order to manage real-time indoor navigation; on the other hand, the use of static cameras in a moving scenario in order to achieve the simultaneous reconstruction of the 3D model of the changing object. A third tool, named U.Ph.O. (Unmanned Photogrammetric Office), has been integrated with a new module. The general aim is, on the one hand, to plan the photogrammetric survey considering the expected precision, computed on the basis of a network simulation, and, on the other hand, to check whether the achieved survey has been collected in accordance with the planned conditions. The provided integration concerns the treatment of surfaces with a generic orientation, in addition to those with a planimetric development. After a brief introduction, a general description of the photogrammetric principles is given in the first chapter of the dissertation; a chapter follows on the parallelism between Photogrammetry and Computer Vision and the contribution of the latter to the development of the described tools. The third chapter specifically regards the implemented software and tools, while the fourth contains the training tests and the validation. Finally, conclusions and future perspectives are reported.
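    As an illustration of the point-cloud-based orthophoto idea behind MAGO (whose actual algorithm is not described in this abstract), the sketch below rasterizes a colored point cloud directly onto a façade plane with no intermediate mesh. The function and its inputs are hypothetical, and the origin is assumed to sit at the lower-left façade corner.

```python
# Minimal sketch of orthophoto generation from a point cloud: project points
# onto the facade plane, then rasterize their colors with a simple z-buffer.
# Illustration under simplifying assumptions, not MAGO's actual algorithm.
import numpy as np

def facade_orthophoto(points, colors, origin, u_axis, v_axis, gsd):
    """points: (N,3) xyz; colors: (N,3) RGB; origin: lower-left corner of the
    facade plane; u_axis, v_axis: unit vectors spanning the plane; gsd: pixel
    size in object units. Returns the rasterized RGB orthophoto."""
    rel = points - origin
    u = rel @ u_axis                     # horizontal plane coordinate (>= 0)
    v = rel @ v_axis                     # vertical plane coordinate (>= 0)
    w = rel @ np.cross(u_axis, v_axis)   # signed distance from the plane

    cols = np.round(u / gsd).astype(int)
    rows = np.round(v / gsd).astype(int)
    image = np.zeros((rows.max() + 1, cols.max() + 1, 3), dtype=np.uint8)
    depth = np.full(image.shape[:2], np.inf)

    # Keep, per pixel, the point closest to the plane so that points from
    # behind the facade do not overwrite it.
    for r, c, d, rgb in zip(rows, cols, np.abs(w), colors):
        if d < depth[r, c]:
            depth[r, c] = d
            image[r, c] = rgb
    return image[::-1]  # flip so "up" in object space is up in the image
```

    An "unrolled" multi-façade orthophoto would then concatenate the per-façade rasters side by side along the building perimeter.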

    A window to the past through modern urban environments: Developing a photogrammetric workflow for the orientation parameter estimation of historical images

    The ongoing process of digitization in archives is providing access to ever-increasing historical image collections. In many of these repositories, images can typically be viewed in a list or gallery view. Due to the growing number of digitized objects, this type of visualization is becoming increasingly complex. Among other things, it is difficult to determine how many photographs show a particular object, and spatial information can only be communicated via metadata. Within the scope of this thesis, research is conducted on the automated determination and provision of this spatial data. Enhanced visualization options make this information more easily accessible to scientists as well as citizens. Different types of visualizations can be presented in three-dimensional (3D), Virtual Reality (VR) or Augmented Reality (AR) applications. However, applications of this type require the estimation of the photographer’s point of view. In the photogrammetric context, this is referred to as estimating the interior and exterior orientation parameters of the camera.

    For the determination of orientation parameters of single images, there are the established methods of Direct Linear Transformation (DLT) and photogrammetric space resection. Using these methods requires the assignment of measured object points to their homologous image points. This is feasible for single images, but quickly becomes impractical for the large number of images available in archives. Thus, for larger image collections, usually the Structure-from-Motion (SfM) method is chosen, which allows the simultaneous estimation of the interior as well as the exterior orientation of the cameras. While this method yields good results especially for sequential, contemporary image data, its application to unsorted historical photographs poses a major challenge. In the context of this work, which is mainly limited to scenarios of urban terrestrial photographs, the reasons for failure of the SfM process are identified. In contrast to sequential image collections, pairs of images from different points in time or from varying viewpoints show huge differences in scene representation, such as deviations in the lighting situation, building state, or seasonal changes. Since homologous image points have to be found automatically in image pairs or image sequences in the feature matching procedure of SfM, these image differences pose the most complex problem.

    In order to test different feature matching methods, it is necessary to use a pre-oriented historical dataset. Since such a benchmark dataset did not exist yet, eight historical image triples (corresponding to 24 image pairs) are oriented in this work by manual selection of homologous image points. This dataset allows the evaluation of newly published feature matching methods. The initial methods used, which are based on algorithmic procedures for feature matching (e.g., the Scale Invariant Feature Transform (SIFT)), provide satisfactory results for only a few of the image pairs in this dataset. By introducing methods that use neural networks for feature detection and feature description, homologous features can be reliably found for a large fraction of image pairs in the benchmark dataset. In addition to a successful feature matching strategy, determining the camera orientation requires an initial estimate of the principal distance.
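    As a concrete illustration of the algorithmic baseline evaluated on such a benchmark, the following OpenCV sketch runs SIFT matching with a ratio test and RANSAC-based geometric verification. Thresholds and file names are illustrative, not the thesis's exact configuration.

```python
# Minimal sketch of an algorithmic feature-matching baseline: SIFT detection
# and description, Lowe's ratio test, then RANSAC geometric verification via
# the fundamental matrix. Parameters and file names are illustrative.
import cv2
import numpy as np

img1 = cv2.imread("historical_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("historical_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio test: keep a match only if it is clearly better than the runner-up.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.8 * n.distance]

# Geometric verification: matches must agree with one fundamental matrix.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(f"{len(inliers)} verified matches out of {len(good)} ratio-test matches")
```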
However, for historical images the principal distance cannot be determined directly, as the camera information is usually lost during the process of digitizing the analog original. A possible solution to this problem is to use three vanishing points that are automatically detected in the historical image and from which the principal distance can then be derived. The combination of principal distance estimation and robust feature matching is integrated into the SfM process and allows the determination of the interior and exterior camera orientation parameters of historical images.

    Based on these results, a workflow is designed that allows archives to be connected directly to 3D applications. A search query in archives is usually performed using keywords, which have to be assigned to the corresponding object as metadata. A keyword search for a specific building therefore also returns hits on drawings, paintings, events, and interior or detailed views connected to this building. However, for the successful application of SfM in an urban context, primarily the photographic exterior view of the building is of interest. While the images for a single building can be sorted by hand, this process is too time-consuming for multiple buildings. Therefore, in collaboration with the Competence Center for Scalable Data Services and Solutions (ScaDS), an approach is developed to filter historical photographs by image similarity. This method reliably enables the search for views with similar content via the selection of one or more query images. By linking this content-based image retrieval with the SfM approach, the automatic determination of camera parameters for a large number of historical photographs is possible. The developed method represents a significant improvement over commercial and open-source SfM standard solutions.

    The result of this work is a complete workflow from archive to application that automatically filters images and calculates the camera parameters. The expected accuracy of a few meters for the camera position is sufficient for the applications presented in this work, but offers further potential for improvement. A connection to archives that will automatically exchange photographs and positions via interfaces is currently under development. This makes it possible to retrieve interior and exterior orientation parameters directly from a historical photograph as metadata, which opens up new fields of research.

    Thesis outline: 1 Introduction; 2 Generation of a benchmark dataset using historical photographs for the evaluation of feature matching methods; 3 Photogrammetry as a link between image repository and 4D applications; 4 An adapted Structure-from-Motion workflow for the orientation of historical images; 5 Fully automated pose estimation of historical images; 6 Related publications; 7 Synthesis; 8 Appendix.
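    The vanishing-point route to the principal distance can be sketched compactly: for three vanishing points of mutually orthogonal scene directions, the principal point is the orthocenter of the triangle they span, and the focal length follows from (v1 − p)·(v2 − p) + f² = 0. The coordinates below are invented for illustration; the thesis's actual detection pipeline is not reproduced here.

```python
# Sketch: principal distance (focal length in pixels) from three vanishing
# points of mutually orthogonal directions. The principal point p is the
# orthocenter of the vanishing-point triangle; then f^2 = -(v1-p).(v2-p).
import numpy as np

def principal_distance(v1, v2, v3):
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    # Orthocenter: the altitude through v1 is perpendicular to (v2 - v3),
    # giving (p - v1).(v2 - v3) = 0; likewise for the altitude through v2.
    A = np.array([v2 - v3, v1 - v3])
    b = np.array([v1 @ (v2 - v3), v2 @ (v1 - v3)])
    p = np.linalg.solve(A, b)
    f_sq = -np.dot(v1 - p, v2 - p)
    if f_sq <= 0:
        raise ValueError("vanishing points inconsistent with orthogonality")
    return p, np.sqrt(f_sq)

# Hypothetical vanishing points detected in a historical photograph (pixels).
p, f = principal_distance((2300.0, 900.0), (-1400.0, 1000.0), (450.0, 7600.0))
print("principal point:", p, "principal distance:", f)
```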

    Photogrammetric Wireframe and Dense Point Cloud 3D Modelling of Historical Structures: The Study of Sultan Selim Mosque and Yusuf Aga Library in Konya, Turkey

    Photogrammetry enables high-accuracy measurement with low cost and easy application in the documentation of historical structures. In cultural heritage documentation by photogrammetry, object details are delineated with lines, and the combination of all the lines creates a 3D wireframe model of the measured object. In addition, the patch surfaces of the wireframe can be mapped with texture from the images for more realistic visualization. On the other hand, progress in computer vision and image processing techniques now allows the photogrammetric process to be performed automatically: a large number of points, called a dense point cloud, can be measured from the coverage area of multi-view images. The dense point cloud represents the object shape with closely spaced measured points, while wireframe photogrammetry represents the object with lines. In this study, these two photogrammetric methods were evaluated with respect to visualization, cost, labour and measurement time through the 3D modelling of the historical structures of the Sultan Selim Mosque and the Yusuf Aga Library.
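    The dense measurement principle described here can be sketched with a rectified image pair and semi-global matching, which yields one measured point per matched pixel. The stereo pair, matcher parameters, and reprojection matrix Q below are placeholders; the study itself used multi-view photogrammetric software rather than this two-view code.

```python
# Sketch: dense measurement from an image pair, the principle behind dense
# point cloud photogrammetry. Assumes a rectified stereo pair; semi-global
# matching gives a disparity per pixel, which is reprojected to 3D points.
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                             blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
# compute() returns fixed-point disparities scaled by 16.
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 disparity-to-depth matrix from stereo rectification; these
# values are placeholders standing in for a calibrated camera rig.
Q = np.float32([[1, 0, 0, -640], [0, 1, 0, -480],
                [0, 0, 0, 1000], [0, 0, 1 / 0.2, 0]])
points = cv2.reprojectImageTo3D(disparity, Q)   # one 3D point per pixel
valid = disparity > 0
print(f"dense cloud: {int(valid.sum())} points from one image pair")
```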

    Object Tracking Using Local Binary Descriptors

    Visual tracking has become an increasingly important topic of research in the field of Computer Vision (CV). There are currently many tracking methods based on the Detect-then-Track paradigm. This type of approach may allow a system to track an arbitrary object with just one initialization phase, but often relies on constructing models to follow the object. Another limitation of these methods is that they are computationally and memory intensive, which hinders their application on resource-constrained platforms such as mobile devices. Under these conditions, the implementation of Augmented Reality (AR) or complex multi-part systems is not possible. In this thesis, we explore a variety of interest point descriptors for generic object tracking. The SIFT descriptor is considered a benchmark and is compared with binary descriptors such as BRIEF, ORB, BRISK, and FREAK. The accuracy of these descriptors is benchmarked against the ground truth of the object's location. We use dictionaries of descriptors to track regions with small error under variations due to occlusions, illumination changes, scaling, and rotation. This is accomplished by using a dense-to-sparse search pattern, locality constraints, and scale adaptation. A benchmarking system is created to test the descriptors' accuracy, speed, robustness, and distinctness. This data offers a comparison of the tracking system to current state-of-the-art systems such as the Multiple Instance Learning Tracker (MILTrack), Tracking-Learning-Detection (TLD), and Continuously Adaptive Mean Shift (CAMSHIFT).
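    At the core of tracking with local binary descriptors is Hamming-distance matching, illustrated below with ORB in OpenCV. Binary descriptors compare with XOR plus popcount, which is what makes them attractive on resource-constrained platforms. Frame names and parameters are illustrative; this is not the thesis's full tracking system (no search pattern, locality constraints, or scale adaptation).

```python
# Sketch: matching 256-bit ORB binary descriptors between two video frames
# with Hamming distance, the basic operation behind binary-descriptor
# tracking. File names and parameters are placeholders.
import cv2

prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_prev, des_prev = orb.detectAndCompute(prev_frame, None)  # the "dictionary"
kp_next, des_next = orb.detectAndCompute(next_frame, None)

# The Hamming norm counts differing bits between descriptors; crossCheck
# keeps only mutually-best matches for robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_next), key=lambda m: m.distance)

# The strongest matches vote for the object's new location.
for m in matches[:10]:
    print(kp_prev[m.queryIdx].pt, "->", kp_next[m.trainIdx].pt)
```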

    The Application and Analysis of Automated Triangulation of Video Imagery by Successive Relative Orientation

    The purpose of this thesis is the analysis and evaluation of methods to orient a strip of images using an automated approach. Automatic orientation of strips of video frame imagery would facilitate the construction of three-dimensional models with less demand on a human operator for tedious measurement. Often one has no control points, so only relative orientation is possible. The relative orientation process gives camera parameters such as attitudes and selected baseline components, and it can be implemented using either collinearity or coplanarity equations. To automate the point selection, pass and/or tie points were detected by the Colored Harris Laplace corner detector along a strip of images and matched by cross correlation across multiple scales. However, the matched points from cross correlation still include outliers. Therefore, the Random Sample Consensus (RANSAC) method with the essential matrix was applied to retain only the inlier point pairs. Relative orientation was then performed for this series of video imagery using the coplanarity condition. However, there is no guarantee that the three rays for a single point will intersect at a single point. Therefore, for all photos subsequent to the first one, the scale restraint equation was applied along with the coplanarity equation to enforce the intersection of these three rays. At this point, the Kalman Filtering algorithm was introduced to address the problem of uncompensated systematic error accumulation. Kalman Filtering is more parsimonious of computing effort than Simultaneous Least Squares, and it gives superior results compared with Cantilever Least Squares models by including trajectory information. To conform with accepted photogrammetric standards, the camera was calibrated with selected frames extracted from the video stream. For the calibration, minimal constraints were applied. Coplanarity and scale restraint equations in relative orientation were also used as the initial approximation for the nonlinear bundle block adjustment that accomplished camera calibration. For calibration imagery, the bell tower of the Main Building at the University of Texas was used as an object because it offers many three-dimensional features with an open view, and the data could be acquired at infinity focus distance. Another two sets of calibrations were carried out with targets placed inside a laboratory room. The automated relative orientation experiment was carried out with one terrestrial, one aerial, and one simulated strip. The real data was acquired with a high-definition camcorder. Both terrestrial and aerial data were acquired on the Purdue University campus: the terrestrial data from a moving vehicle and the aerial data from a Cessna aircraft. The results from the aerial and simulation cases were evaluated against control points. The three estimation strategies are stripwise Simultaneous, Kalman Filtering, and Cantilever, all employing coplanarity equations. For the aerial and simulation cases, an absolute comparison was made between the three experimental techniques and the bundle block adjustment. In all cases, the relative solutions were transformed to ground coordinates by a rigid-body 7-parameter transformation. In retrospect, the aerial case was too short (8 photographs) to demonstrate the compensation of strip formation errors; therefore, a simulated strip (30 photographs) was used for this purpose.
    Absolute accuracy for the aerial and simulation approaches was evaluated by ground control points. The precision of each approach was evaluated by the error ellipsoid at each intersected point. Memory occupancy was also measured to compare the resource requirements of each approach. Considering both computing resources and absolute accuracy, the Kalman Filter solution is superior to the Simultaneous and Cantilever methods.
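    The inlier-selection step, RANSAC with the essential matrix followed by relative orientation, can be sketched with OpenCV's five-point solver. This stands in for the thesis's coplanarity-based formulation; the camera matrix, detector choice, and file names are placeholders.

```python
# Sketch: relative orientation of two consecutive video frames. RANSAC on
# the essential matrix rejects outlier matches, then the relative rotation R
# and baseline direction t are recovered (baseline scale stays undetermined).
import cv2
import numpy as np

K = np.array([[1500.0, 0.0, 960.0],   # placeholder calibrated camera matrix
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

# ORB stands in here for the thesis's Colored Harris Laplace detector with
# multi-scale cross-correlation matching.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("inliers:", int(mask.sum()), "\nR =", R, "\nt direction =", t.ravel())
```

    Chaining such pairwise solutions along the strip is what accumulates the systematic error that the thesis's Kalman Filter is designed to compensate.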
