
    Generic Multisensor Integration Strategy and Innovative Error Analysis for Integrated Navigation

    A modern multisensor integrated navigation system, as used in most civilian applications, typically consists of GNSS (Global Navigation Satellite System) receivers, IMUs (Inertial Measurement Units), and/or other sensors such as odometers and cameras. With the increasing availability of low-cost sensors, more research and development activities aim to build cost-effective systems without sacrificing navigational performance. The three principal contributions of this dissertation are as follows: i) A multisensor kinematic positioning and navigation system, the York University Multisensor Integrated System (YUMIS), was designed and realized on the Linux operating system with the Real Time Application Interface (RTAI) to integrate GNSS receivers, IMUs, and cameras. YUMIS sets a good example of a low-cost yet high-performance multisensor inertial navigation system and lays the groundwork, in a practical and economical way, for training personnel in subsequent academic research. ii) A generic multisensor integration strategy (GMIS) was proposed, which features a) a core system model developed upon the kinematics of a rigid body and b) all sensor measurements taken as raw measurements in the Kalman filter without differentiation.
The essential competitive advantages of GMIS over conventional error-state strategies are: 1) the influence of IMU measurement noise on the final navigation solution is effectively mitigated because of the increased measurement redundancy upon the angular rate and acceleration of the rigid body; 2) the state and measurement vectors in the GMIS estimator can be easily expanded to fuse multiple inertial sensors and all other types of measurements, e.g., delta positions; 3) one can directly perform error analysis upon both the raw sensor data (measurement noise analysis) and the virtual zero-mean process noise measurements (process noise analysis) through the corresponding measurement residuals. iii) A posteriori variance component estimation (VCE) was innovatively accomplished as an advanced analytical tool in the extended Kalman filter employed by the GMIS, which makes error analysis of the raw IMU measurements possible for the first time, together with the individual independent components of the process noise vector.
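As a toy illustration of how a posteriori variance component estimation works, consider several sensor groups that all observe the same scalar quantity: each group's variance component is re-estimated from its own residuals and redundancy, and the weights are iterated. This is a minimal sketch under assumed names and a deliberately simplified (scalar-parameter, Helmert-type) setting, not the GMIS implementation.

```python
def vce_scalar(groups, n_iter=20):
    """A posteriori (Helmert-type) variance component estimation for
    several sensor groups observing one scalar quantity.

    groups: list of (observations, initial_variance) per sensor group.
    Returns the weighted estimate and one variance component per group.
    """
    sig2 = [s0 for _, s0 in groups]
    x = 0.0
    for _ in range(n_iter):
        # Weighted least-squares estimate with the current components.
        wsum = sum(len(obs) / s for (obs, _), s in zip(groups, sig2))
        x = sum(sum(obs) / s for (obs, _), s in zip(groups, sig2)) / wsum
        # Re-estimate each component from its residuals and redundancy.
        new_sig2 = []
        for (obs, _), s in zip(groups, sig2):
            v = [y - x for y in obs]
            # Group redundancy: n_i minus the group's share of the parameter.
            r = len(obs) - (len(obs) / s) / wsum
            new_sig2.append(sum(vi * vi for vi in v) / r)
        sig2 = new_sig2
    return x, sig2
```

A group whose a priori variance was too optimistic receives a larger component, and hence a smaller weight in the next iteration; this is the diagnostic role VCE plays in the filter.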

    Visual SLAM from image sequences acquired by unmanned aerial vehicles

    This thesis shows that Kalman filter based approaches are sufficient for the task of simultaneous localization and mapping from image sequences acquired by unmanned aerial vehicles. Using solely direction measurements to solve the problem of simultaneous localization and mapping (SLAM) is an important part of autonomous systems. Because of the need for real-time capable systems, recursive estimation techniques, in particular Kalman filter based approaches, are the main focus of interest. Unfortunately, the non-linearity of the triangulation using the direction measurements causes a decrease in the accuracy and consistency of the results. The first contribution of this work is a general derivation of the recursive update of the Kalman filter. This derivation is based on implicit measurement equations and has the classical iterative non-linear as well as the non-iterative and linear Kalman filter as specializations. Second, a new formulation of linear motion models for the single-camera state model and the sliding-window camera state model is given, which makes it possible to compute the prediction in a fully linear manner. The third major contribution is a novel method for the initialization of new object points in the Kalman filter. Empirical studies using synthetic and real data of an image sequence of a photogrammetric strip demonstrate and compare the influences of the initialization methods for new object points in the Kalman filter. Fourth, the accuracy potential of monoscopic image sequences from unmanned aerial vehicles for autonomous localization and mapping is theoretically analyzed, which can be used for planning purposes.
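The recursive update with implicit measurement equations can be sketched for the simplest case: a scalar constraint g(x, l) = 0 in Gauss-Helmert form, where the observation covariance is propagated through B = dg/dl into an effective measurement noise. This is an illustrative sketch with assumed names; the thesis's derivation is more general (iterated, multi-dimensional constraints).

```python
def implicit_kf_update(x, P, g0, A, B, Sl):
    """One Kalman update with an implicit measurement equation g(x, l) = 0.
    g0 = g evaluated at the current state and observations,
    A = dg/dx, B = dg/dl, Sl = covariance of the observations l.
    Scalar constraint; state and matrices as plain Python lists."""
    m, n = len(B), len(x)
    # Effective measurement noise R = B Sl B^T.
    R = sum(B[i] * Sl[i][j] * B[j] for i in range(m) for j in range(m))
    # Innovation covariance S = A P A^T + R, gain K = P A^T / S.
    S = sum(A[i] * P[i][j] * A[j] for i in range(n) for j in range(n)) + R
    K = [sum(P[i][j] * A[j] for j in range(n)) / S for i in range(n)]
    x_new = [x[i] - K[i] * g0 for i in range(n)]
    AP = [sum(A[k] * P[k][j] for k in range(n)) for j in range(n)]
    P_new = [[P[i][j] - K[i] * AP[j] for j in range(n)] for i in range(n)]
    return x_new, P_new
```

For the explicit special case g(x, l) = h(x) - z this reduces to the classical update, since B = -1 and R becomes the usual measurement variance.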

    Estimation of tropospheric wet delay from GNSS measurements

    The determination of the zenith wet delay (ZWD) component can be a difficult task due to the dynamic nature of atmospheric water vapour. However, precise estimation of the ZWD is essential for high-precision Global Navigation Satellite System (GNSS) applications such as real-time positioning and Numerical Weather Prediction (NWP) modelling. The functional and stochastic models that can be used for the estimation of the tropospheric parameters from GNSS measurements are presented and discussed in this study. The focus is to determine the ZWD in an efficient manner in static mode. In GNSS, the estimation of the ZWD is directly impacted by the choice of stochastic model used in the estimation process. In this thesis, the rigorous Minimum Norm Quadratic Unbiased Estimation (MINQUE) method was investigated and compared with traditional models such as the equal-weighting model (EWM) and the elevation-angle dependent model (EADM). A variation of the MINQUE method was also introduced. A simulation study of these models resulted in MINQUE outperforming the other stochastic models by at least 36% in resolving the height component. However, this superiority did not lead to better ZWD estimates. In fact, the EADM provided the most accurate set of ZWD estimates among all the models tested. The EADM also yielded the best ZWD estimates in the real data analyses for two independent baselines in Australia and in Europe, respectively. The study also assessed the validity of a baseline approach, with a reduced processing window size, to provide good ZWD estimates at Continuously Operating Reference Stations (CORS) in an efficient manner.
Results show that if the a priori station coordinates are accurately known, the baseline approach, along with a 2-hour processing window, can produce ZWD estimates that are statistically in good agreement with estimates from external sources such as radiosonde (RS), water vapour radiometer (WVR) and International GNSS Service (IGS) solutions. Resolving the ZWD from GNSS measurements in such a timely manner can aid NWP models in the data-assimilation process for near real-time weather forecasts. In real-time kinematic modelling of GNSS measurements, the first-order Gauss-Markov (GM) autocorrelation model is commonly used as the dynamic model in Kalman filtering. However, for the purpose of ZWD estimation, it was found that the GM model consistently underestimates the temporal correlations that exist among the ZWD measurements. Therefore, a new autocorrelation dynamic model is proposed in a form similar to that of a hyperbolic function. The proposed model initially requires a small number of autocorrelation estimates obtained with the standard autocorrelation formulations. With these autocorrelation estimates, the least-squares method is then used to solve for the model's parameter coefficients; once solved, the model is fully defined. The proposed model was shown to follow the autocorrelation trend better than the GM model. Additionally, analysis of real data at an Australian IGS station showed that the proposed model performed better than the random-walk model, and just as well as the GM model. The proposed model was able to provide near real-time (i.e., 30-second interval) ZTD estimates to within 2 cm accuracy on average. The thesis also included an investigation into several interpolation models for estimating missing ZWD observations that may occur during temporary breakdowns of GNSS stations or malfunctions of RS and WVR equipment.
Results indicated marginal differences between the polynomial regression models, linear interpolation, fast Fourier transform and simple kriging methods. However, the linear interpolation method, which depends on the two most recent data points, is preferable due to its simplicity. This result corresponded well with the autocorrelation analysis of the ZWD estimates, where significant temporal correlations were observed for at most two hours. The study concluded with an evaluation of several trend and smoothing models to determine the best models for predicting ZWD estimates, which can help improve real-time kinematic (RTK) positioning by mitigating the tropospheric effect. The moving average (MA) and single-exponential smoothing (SES) models were shown to be the best-performing prediction models overall. These two models were able to provide ZWD estimates with forecast errors of less than 10% for up to 4 hours of prediction.
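The contrast between the first-order Gauss-Markov autocorrelation model and a hyperbolic-form alternative can be sketched as below. The exact hyperbolic form used in the thesis is not reproduced here; rho(tau) = 1 / (1 + a * tau) is an assumed stand-in, and both fits use the linearised least-squares approach described above.

```python
import math

def fit_gauss_markov(taus, rhos):
    """Fit the first-order Gauss-Markov model rho(tau) = exp(-tau / T)
    by linearised least squares on ln(rho) = -tau / T; returns T."""
    num = sum(t * t for t in taus)
    den = -sum(t * math.log(r) for t, r in zip(taus, rhos))
    return num / den

def fit_hyperbolic(taus, rhos):
    """Fit the assumed hyperbolic-form model rho(tau) = 1 / (1 + a * tau)
    by linearised least squares on 1/rho - 1 = a * tau; returns a."""
    return (sum(t * (1.0 / r - 1.0) for t, r in zip(taus, rhos))
            / sum(t * t for t in taus))
```

On autocorrelation estimates that decay hyperbolically, the exponential GM model necessarily decays too fast at long lags, which is the underestimation of temporal correlation noted above.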

    Trajectory determination and analysis in sports by satellite and inertial navigation

    This research presents methods for performance analysis in sports through the integration of Global Positioning System (GPS) measurements with an Inertial Navigation System (INS). The described approach focuses on strapdown inertial navigation using Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMUs). A simple inertial error model is proposed and its relevance is proven by comparison to reference data. The concept is then extended to a setup employing several MEMS-IMUs in parallel. The performance of the system is validated with experiments in skiing and motorcycling. The position accuracy achieved with the integrated system varies from decimeter level with dual-frequency differential GPS (DGPS) to 0.7 m for low-cost, single-frequency DGPS. Unlike the position, the velocity accuracy (0.2 m/s) and orientation accuracy (1-2 deg) are almost insensitive to the choice of receiver hardware. The orientation performance, however, is improved by 30-50% when integrating four MEMS-IMUs in a skew-redundant configuration. The later part of this research introduces a methodology for trajectory comparison. It is shown that trajectories based on dual-frequency GPS positions can be directly modeled and compared using cubic spline smoothing, while those derived from single-frequency DGPS require additional filtering and matching.
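The GPS/INS integration with a simple inertial error model can be sketched in one dimension: a Kalman filter whose state holds position, velocity, and an accelerometer bias, with strapdown prediction from the IMU reading and position updates from GPS. Function names, tuning values, and the 1-D reduction are illustrative assumptions, not the thesis's implementation.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matadd(A, B):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

def gps_ins_1d(accels, gps, dt, q_bias=1e-6, r_gps=1.0):
    """Loosely coupled 1-D GPS/INS Kalman filter with a simple inertial
    error model: state = [position, velocity, accelerometer bias].
    A GPS sample of None marks an outage (prediction only)."""
    x = [[0.0], [0.0], [0.0]]
    P = [[100.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 1.0]]
    # Transition: strapdown mechanisation, bias subtracted from the reading.
    F = [[1.0, dt, -0.5 * dt * dt], [0.0, 1.0, -dt], [0.0, 0.0, 1.0]]
    Ft = [list(row) for row in zip(*F)]
    Q = [[0.0] * 3, [0.0] * 3, [0.0, 0.0, q_bias]]
    positions = []
    for a, z in zip(accels, gps):
        # Prediction with the raw accelerometer reading a.
        u = [[0.5 * dt * dt * a], [dt * a], [0.0]]
        x = matadd(matmul(F, x), u)
        P = matadd(matmul(matmul(F, P), Ft), Q)
        if z is not None:
            # GPS position update (H = [1, 0, 0]).
            S = P[0][0] + r_gps
            K = [P[i][0] / S for i in range(3)]
            innov = z - x[0][0]
            x = [[x[i][0] + K[i] * innov] for i in range(3)]
            P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]
        positions.append(x[0][0])
    return positions, x
```

Because a constant accelerometer bias double-integrates into a growing position error, the position measurements make the bias observable and the filter learns it over time.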

    Visual Odometry and Sparse Scene Reconstruction for UAVs with a Multi-Fisheye Camera System

    Autonomously operating UAVs demand fast localization for navigation, to actively explore unknown areas and to create maps. For pose estimation, many UAV systems make use of a combination of GPS receivers and inertial measurement units (IMUs). However, GPS signal coverage may drop out occasionally, especially in the close vicinity of objects, and precise IMUs are too heavy to be carried by lightweight UAVs. This, and the high cost of high-quality IMUs, motivates the use of inexpensive vision-based sensors for localization using visual odometry or visual SLAM (simultaneous localization and mapping) techniques. The first contribution of this thesis is a more general approach to bundle adjustment with an extended version of the projective coplanarity equation, which enables the use of omnidirectional multi-camera systems that may consist of fisheye cameras capturing a large field of view in one shot. We use ray directions as observations instead of image points, which is why our approach does not rely on a specific projection model, assuming only a central projection. In addition, our approach allows the integration and estimation of points at infinity, which classical bundle adjustments are not capable of. We show that the integration of far or infinitely far points stabilizes the estimation of the rotation angles of the camera poses. As its second contribution, this thesis employs this approach to bundle adjustment in a highly integrated system for incremental pose estimation and mapping on lightweight UAVs. Based on the image sequences of a multi-camera system, our system uses tracked feature points to incrementally build a sparse map and refines this map using the iSAM2 algorithm. Our system is able to optionally integrate GPS information at the level of carrier phase observations, even in underconstrained situations, e.g., if only two satellites are visible, for georeferenced pose estimation. This way, we are able to use all available information in underconstrained GPS situations to keep the mapped 3D model accurate and georeferenced. As its third contribution, this thesis presents an approach for re-using existing methods for dense stereo matching with fisheye cameras, which has the advantage that highly optimized existing methods can be applied as a black box without modification, even with cameras that have a field of view of more than 180 deg. We provide a detailed accuracy analysis of the obtained dense stereo results. The accuracy analysis shows the growing uncertainty of observed image points of fisheye cameras due to increasing blur towards the image border. The core of the contribution is a rigorous variance component estimation, which allows estimating the variance of the observed disparities at an image point as a function of the distance of that point to the principal point. We show that this improved stochastic model provides a more realistic prediction of the uncertainty of the triangulated 3D points.
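The idea of a radial stochastic model for fisheye disparities can be sketched as a fit of disparity variance against distance to the principal point. The quadratic form var(r) = a + b * r**2 and the function name are assumptions for illustration; the thesis estimates the variances rigorously via variance component estimation rather than this plain least-squares fit.

```python
def fit_disparity_variance(radii, variances):
    """Fit the illustrative radial stochastic model var(r) = a + b * r**2
    to empirical disparity variances by ordinary least squares on r**2."""
    n = len(radii)
    xs = [r * r for r in radii]
    mx = sum(xs) / n
    my = sum(variances) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, variances))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b
```

Once fitted, the model replaces a single global disparity variance in triangulation, so 3D points observed near the image border correctly receive larger predicted uncertainties.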

    On the GPS-Based Determination of Tropospheric Propagation Delays

    One major problem of precise GPS data analysis is that of modeling wet delays with high precision. All conventional models fail in this task due to the impossibility of modeling wet delays solely from surface measurements such as temperature and relative humidity. In fact, the non-hydrostatic component of the tropospheric propagation delay is highly influenced by the distribution of water vapor in the lower troposphere, which cannot be sufficiently predicted with the sole help of surface measurements. A work-around is to include atmospheric parameters as additional unknowns in the analysis of GPS data from permanent monitoring stations, which turns out to improve the quality of position estimates. Moreover, knowledge of zenith wet delays allows one to obtain a quantity highly interesting for climatology and meteorology: integrated or precipitable water vapor, which is important for the energy balance of the atmosphere and accounts for more than 60% of the natural greenhouse effect. GPS can thereby contribute to the improvement of climate models and weather forecasting. This work outlines the application of ground-based GPS to climate research and meteorology without omitting the fact that precise GPS positioning can also greatly benefit from using numerical weather models for tropospheric delay determination in applications where GPS troposphere estimation is not possible, for example kinematic and rapid static surveys. In this sense, the technique of GPS-derived tropospheric delays is seen as mutually improving both disciplines: precise positioning as well as meteorology and climatology.
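The step from zenith wet delay to precipitable water vapor uses a standard dimensionless conversion factor Pi that depends on the water-vapour-weighted mean temperature Tm of the atmosphere. The sketch below uses commonly quoted refractivity constants (here in SI units); treat the specific constant values as assumptions rather than the ones used in this work.

```python
def zwd_to_pwv(zwd_m, Tm):
    """Convert zenith wet delay (m) to precipitable water vapour (m of
    liquid water) via PW = Pi * ZWD, with the standard conversion
    Pi = 1e6 / (rho_w * Rv * (k3 / Tm + k2p)).  Tm in kelvin."""
    rho_w = 1000.0   # density of liquid water, kg/m^3
    Rv = 461.5       # specific gas constant of water vapour, J/(kg K)
    k2p = 0.221      # K/Pa   (= 22.1 K/hPa), commonly used value
    k3 = 3739.0      # K^2/Pa (= 3.739e5 K^2/hPa), commonly used value
    Pi = 1.0e6 / (rho_w * Rv * (k3 / Tm + k2p))
    return Pi * zwd_m
```

For typical mean temperatures, Pi is about 0.15, so a 20 cm wet delay corresponds to roughly 30 mm of precipitable water; the Tm dependence is why GPS meteorology needs surface or model temperature data alongside the delay estimates.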

    Sensors, measurement fusion and missile trajectory optimisation

    When considering advances in “smart” weapons it is clear that air-launched systems have adopted an integrated approach to meet rigorous requirements, whereas air-defence systems have not. The demands on sensors, state observation, missile guidance, and simulation for air defence are the subject of this research. Historical reviews of each topic and justification of the favoured techniques and algorithms are provided, using a nomenclature developed to unify these disciplines. Sensors selected for their enduring impact on future systems are described and simulation models provided. Complex internal systems are reduced to simpler models capable of replicating dominant features, particularly those that adversely affect state observers. Of the state observer architectures considered, a distributed system comprising ground-based target and own-missile tracking, a data up-link, and on-board missile measurement and track fusion is the natural choice for air defence. An IMM (interacting multiple model) filter is used to process radar measurements, combining the estimates from filters with different target dynamics. The remote missile state observer combines up-linked target tracks and missile plots with IMU and seeker data to provide optimal guidance information. The performance of traditional PN (proportional navigation) and CLOS (command to line of sight) missile guidance is the basis against which on-line trajectory optimisation is judged. Enhanced guidance laws are presented that demand more from the state observers, stressing the importance of time-to-go and transport delays in strap-down systems employing staring-array technology. Algorithms are presented for solving the guidance two-point boundary value problems created from the missile state observer output using gradient projection in function space. A simulation integrating these aspects was developed, whose infrastructure, capable of supporting any dynamical model, is described in the air-defence context.
MBDA have extended this work, creating the Aircraft and Missile Integration Simulation (AMIS) for integrating different launchers and missiles. The maturity of the AMIS makes it a tool for developing pre-launch algorithms for modern air-launched missiles on modern military aircraft.
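Classical proportional navigation, used above as the performance baseline, commands lateral acceleration proportional to the closing velocity times the line-of-sight rate. A minimal planar sketch (names and the 2-D reduction are assumptions):

```python
import math

def pn_accel(r, v, N=4.0):
    """Planar proportional navigation: commanded lateral acceleration
    a = N * Vc * lambda_dot, from the relative position r and relative
    velocity v of the target with respect to the missile (2-D tuples)."""
    rho2 = r[0] ** 2 + r[1] ** 2
    lam_dot = (r[0] * v[1] - r[1] * v[0]) / rho2          # line-of-sight rate
    vc = -(r[0] * v[0] + r[1] * v[1]) / math.sqrt(rho2)   # closing velocity
    return N * vc * lam_dot
```

A pure collision course (constant line-of-sight bearing) yields zero commanded acceleration, which is why PN's demands on the state observer are concentrated in estimating the line-of-sight rate and closing velocity accurately.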

    Grouping Uncertain Oriented Projective Geometric Entities with Application to Automatic Building Reconstruction

    The fully automatic reconstruction of 3D scenes from a set of 2D images has always been a key issue in photogrammetry and computer vision and has not been solved satisfactorily so far. Most current approaches match features between the images based on radiometric cues, followed by a reconstruction using the image geometry. The motivation for this work is the conjecture that, in the presence of highly redundant data, it should be possible to recover the scene structure by grouping together geometric primitives in a bottom-up manner. Oriented projective geometry is used throughout this work, which allows representing geometric primitives such as points, lines and planes in 2D and 3D space, as well as projective cameras, together with their uncertainty. The first major contribution of the work is the use of uncertain oriented projective geometry, rather than uncertain projective geometry, which enables the representation of more complex compound entities, such as line segments and polygons in 2D and 3D space as well as 2D edgels and 3D facets. Within the uncertain oriented projective framework a procedure is developed that allows testing pairwise relations between the various uncertain oriented projective entities. Again, the novelty lies in the possibility of checking relations between the novel compound entities. The second major contribution of the work is the development of a data structure specifically designed to enable performing the tests between large numbers of entities in an efficient manner. Being able to efficiently test relations between the geometric entities, a framework for grouping those entities together is developed, and various grouping methods are discussed. The third major contribution of this work is the development of a novel grouping method that, by analyzing the entropy change incurred by incrementally adding observations into an estimation, is able to balance efficiency against robustness in order to achieve better grouping results. Finally, the applicability of the proposed representations, tests and grouping methods to the task of purely geometry-based building reconstruction from oriented aerial images is demonstrated. It is shown that, in the presence of highly redundant datasets, it is possible to achieve reasonable reconstruction results by grouping together geometric primitives.
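The entropy-change criterion behind the third contribution can be illustrated in the scalar Gaussian case: fusing an observation of variance r shrinks the estimate's variance, and the resulting change in differential entropy measures the information gained. A minimal sketch (the scalar reduction and names are assumptions; the thesis works with full covariance matrices of projective entities):

```python
import math

def entropy_gain_scalar(sigma2, r):
    """Entropy change of a scalar Gaussian estimate with variance sigma2
    when an observation with noise variance r is fused in.
    Differential entropy of N(mu, s2) is 0.5 * ln(2 * pi * e * s2), so the
    change is 0.5 * ln(s2_new / s2_old).  Returns (delta_H, new variance)."""
    sigma2_new = sigma2 * r / (sigma2 + r)
    return 0.5 * math.log(sigma2_new / sigma2), sigma2_new
```

The change is always negative, and more precise observations reduce the entropy more; ranking candidate observations by this quantity is one way to trade efficiency against robustness when growing a group.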
