
    Target detection, tracking, and localization using multi-spectral image fusion and RF Doppler differentials

    It is critical for defense and security applications to have a high probability of detection and a low false alarm rate while operating over a wide variety of conditions. Sensor fusion, the process of combining data from two or more sensors, has been utilized to improve the performance of a system by exploiting the strengths of each sensor. This dissertation presents algorithms that fuse multi-sensor data to improve system performance by increasing detection rates, lowering false alarms, and improving track performance. Furthermore, this dissertation presents a framework for comparing algorithm error in image registration, a critical pre-processing step for multi-spectral image fusion. First, I present an algorithm to improve detection and tracking performance for moving targets in a cluttered urban environment by fusing foreground maps from multi-spectral imagery. Most research in image fusion considers the visible and long-wave infrared bands; I examine these bands along with the near infrared and mid-wave infrared. To localize and track a particular target of interest, I present an algorithm to fuse the output of the multi-spectral image tracker with a constellation of RF sensors measuring a specific cellular emanation. The fusion algorithm matches the Doppler differential from the RF sensors with the theoretical Doppler differential of the video tracker output by selecting the sensor pair that minimizes the absolute difference or root-mean-square difference. Finally, a framework to quantify shift-estimation error for both area- and feature-based algorithms is presented. By exploiting synthetically generated visible and long-wave infrared imagery, error metrics are computed and compared for a number of area- and feature-based shift-estimation algorithms. A number of key results are presented in this dissertation. The multi-spectral image tracker improves localization accuracy while increasing the detection rate and lowering false alarms for most spectral bands. All 12 moving targets were tracked through the video sequence with only one lost track, which was later recovered. Targets from the multi-spectral tracking algorithm were correctly associated with their corresponding cellular emanations, with the root-mean-square difference succeeding at lower measurement uncertainty while also yielding a high confidence ratio for selecting the true target from background targets. For the area-based algorithms and the synthetic air-field image pair, the DFT and ECC algorithms produce sub-pixel shift-estimation error in regions such as shadows and high-contrast painted lines. The edge-orientation feature descriptors increase the number of sub-field estimates while improving the shift-estimation error compared to the Lowe descriptor.
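
    As a rough illustration of the track-to-emanation association step described above, the sketch below computes a theoretical Doppler differential for a video-tracked target seen by a pair of RF sensors and selects the track that minimizes the absolute difference to the measured differential. This is a hedged sketch, not the dissertation's implementation: the carrier frequency, geometry, and all function names are assumptions.

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def doppler_differential(pos, vel, sensor_a, sensor_b, fc=1.9e9):
    """Theoretical Doppler differential of an emitter at `pos` moving with
    `vel`, observed by two RF sensors (fc is an assumed cellular carrier)."""
    def doppler(sensor):
        los = sensor - pos                        # line of sight to the sensor
        radial = vel @ los / np.linalg.norm(los)  # closing speed [m/s]
        return fc * radial / C                    # Doppler shift [Hz]
    return doppler(sensor_a) - doppler(sensor_b)

def associate(tracks, measured_dd, sensor_a, sensor_b):
    """Pick the video-tracker track whose predicted Doppler differential best
    matches the RF measurement (absolute-difference criterion; over a single
    sensor pair the RMS criterion reduces to the same thing)."""
    errors = [abs(doppler_differential(p, v, sensor_a, sensor_b) - measured_dd)
              for p, v in tracks]
    return int(np.argmin(errors))
```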

    ISAR image matching and three-dimensional scattering imaging based on extracted dominant scatterers

    This paper studies inverse synthetic aperture radar (ISAR) image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers. With a long baseline between two radars, obvious rotation, scaling, distortion, and shift easily occur between the two-dimensional (2D) radar images. These problems make radar-image matching difficult and cannot be resolved by motion compensation and cross-correlation. Moreover, due to anisotropy, existing image-matching algorithms such as the scale-invariant feature transform (SIFT) do not adapt well to ISAR images. In addition, the angle between the target rotation axis and the radar line of sight (LOS) cannot be neglected; if it is, the calibration result will be smaller than the real projection size. Furthermore, this angle cannot be estimated by monostatic radar. Therefore, instead of matching the images directly, this paper proposes novel ISAR image matching and 3D imaging based on extracted scatterers to deal with these issues. First, taking advantage of ISAR image sparsity, the radar images are converted into scattering point sets. Then, coarse scatterer matching based on the random sample consensus (RANSAC) algorithm is performed. The scatterer heights and accurate affine transformation parameters are estimated iteratively. Based on the matched scatterers, information such as the angle and the 3D image can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO have been conducted to demonstrate the effectiveness of the proposed algorithm.
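
    As a rough sketch of the coarse RANSAC matching stage, the code below fits a 2D affine transformation between two scatterer point sets, assuming putative correspondences have already been formed (e.g., by nearest-neighbour matching). It illustrates the general RANSAC-plus-affine idea, not the paper's implementation; all names are assumptions, and the iterative height estimation is omitted.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit dst ~ src @ A.T + t from (N, 2) point pairs."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 4] = 1   # x-equations
    M[1::2, 2:4] = src; M[1::2, 5] = 1   # y-equations
    p, *_ = np.linalg.lstsq(M, dst.ravel(), rcond=None)
    return p[:4].reshape(2, 2), p[4:]

def ransac_affine(src, dst, iters=500, tol=2.0, seed=0):
    """Coarse scatterer matching: sample 3 correspondences, fit an affine
    model, keep the model with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        A, t = fit_affine(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ A.T + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    A, t = fit_affine(src[best], dst[best])
    return A, t, best
```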

    3D reconstruction and motion estimation using forward looking sonar

    Autonomous Underwater Vehicles (AUVs) are increasingly used in different domains including archaeology, the oil and gas industry, coral reef monitoring, harbour security, and mine countermeasure missions. As electromagnetic signals do not penetrate the underwater environment, GPS cannot be used for AUV navigation, and optical cameras have a very short range underwater, which limits their use in most underwater environments. Motion estimation for AUVs is a critical requirement for successful vehicle recovery and meaningful data collection. Classical inertial sensors, usually used for AUV motion estimation, suffer from large drift error. On the other hand, accurate inertial sensors are very expensive, which limits their deployment to costly AUVs. Furthermore, acoustic positioning systems (APS) used for AUV navigation require costly installation and calibration, and they perform poorly in terms of the inferred resolution. Underwater 3D imaging is another challenge in the AUV industry, as 3D information is increasingly demanded to accomplish different AUV missions. Different systems have been proposed for underwater 3D imaging, such as planar-array sonar and T-configured 3D sonar. While the former generally features good resolution, it is very expensive and requires huge computational power; the latter is cheaper to implement but requires a long time for a full 3D scan, even at short ranges. In this thesis, we aim to tackle AUV motion estimation and underwater 3D imaging by proposing relatively affordable methodologies, and we study the different parameters affecting their performance. We introduce a new motion estimation framework for AUVs which relies on successive acoustic images to infer AUV ego-motion. Also, we propose an Acoustic Stereo Imaging (ASI) system for underwater 3D reconstruction based on forward looking sonars; the proposed system features a cheaper implementation than planar-array sonars and solves the delay problem of T-configured 3D sonars.
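
    A common building block for inferring ego-motion from successive acoustic images is phase correlation between consecutive frames. The sketch below (plain NumPy, an assumption rather than the thesis's actual method) recovers the integer-pixel translation between two frames; scaled by the sonar image's metres-per-pixel and accumulated over frames, such shifts give a coarse dead-reckoned trajectory.

```python
import numpy as np

def phase_correlation_shift(prev_frame, curr_frame):
    """Integer-pixel translation between two frames via phase correlation."""
    cross = np.fft.fft2(prev_frame) * np.conj(np.fft.fft2(curr_frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]  # wrap to signed shifts
    return peak  # (dy, dx) in pixels

# Accumulating frame-to-frame shifts (scaled by an assumed resolution)
# yields a coarse planar track:
#   position += metres_per_pixel * phase_correlation_shift(f0, f1)
```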

    3D reconstruction and object recognition from 2D SONAR data

    Accurate and meaningful representations of the environment are required for autonomy in underwater applications. Thanks to favourable propagation properties in water, acoustic sensors are commonly preferred to video cameras and lasers but do not provide direct 3D information. This thesis addresses the 3D reconstruction of underwater scenes from 2D imaging SONAR data as well as the recognition of objects of interest in the reconstructed scene. We present two 3D reconstruction methods and two model-based object recognition methods. We evaluate our algorithms on multiple scenarios including data gathered by an AUV. We show the ability to reconstruct underwater environments at centimetre-level accuracy using 2D SONARs of any aperture. We demonstrate the recognition of structures of interest in a medium-sized oil-field type environment, providing accurate yet low-memory-footprint semantic world models. We conclude that accurate 3D semantic representations of partially structured marine environments can be obtained from commonly embedded 2D SONARs, enabling online world modelling, relocalisation and model-based applications.

    On the use of smartphones as novel photogrammetric water gauging instruments: Developing tools for crowdsourcing water levels

    The term global climate change has been omnipresent since the beginning of the last decade. Changes in the global climate are associated with an increase in heavy rainfall events that can cause nearly unpredictable flash floods. Consequently, spatio-temporally high-resolution monitoring of rivers is becoming increasingly important. Water gauging stations continuously and precisely measure water levels. However, they are rather expensive to purchase and maintain and are preferentially installed at water bodies relevant to water management. Small-scale catchments often remain ungauged. In order to increase the data density of hydrometric monitoring networks and thus to improve the prediction quality of flood events, new, flexible and cost-effective water level measurement technologies are required. They should be oriented towards the accuracy requirements of conventional measurement systems and facilitate the observation of water levels at virtually any time, even on the smallest rivers. A possible solution is the development of a photogrammetric smartphone application (app) for crowdsourcing water levels, which merely requires voluntary users to take pictures of a river section to determine the water level. Today's smartphones integrate high-resolution cameras, a variety of sensors, powerful processors, and mass storage. However, they are designed for the mass market and use low-cost hardware that cannot match the quality of geodetic measurement technology. In order to investigate the potential for mobile measurement applications, research was conducted on the smartphone as a photogrammetric measurement instrument as part of the doctoral project. The studies deal with the geometric stability of smartphone cameras with respect to device-internal temperature changes and with the accuracy potential of rotation parameters measured with smartphone sensors. The results show a high, temperature-related variability of the interior orientation parameters, which is why the calibration of the camera should be carried out at the time of measurement. The results of the sensor investigations show considerable inaccuracies when measuring rotation parameters, especially the compass angle (errors of up to 90° were observed). The same applies to position parameters measured by global navigation satellite system (GNSS) receivers built into smartphones. According to the literature, positional accuracies of about 5 m are possible under the best conditions. Otherwise, errors of several tens of metres are to be expected. As a result, direct georeferencing of image measurements using current smartphone technology should be discouraged. In consideration of the results, the water gauging app Open Water Levels (OWL) was developed, whose methodological development and implementation constituted the core of the thesis project. OWL enables the flexible measurement of water levels via crowdsourcing without requiring additional equipment or being limited to specific river sections. Data acquisition and processing take place directly in the field, so that the water level information is immediately available. In practice, the user captures a short time-lapse sequence of a river bank with OWL, which is used to calculate a spatio-temporal texture that enables the detection of the water line. In order to translate the image measurement into 3D object space, a synthetic, photo-realistic image of the situation is created from existing 3D data of the river section to be investigated.
    Approximate image orientation parameters are measured by the smartphone sensors and GNSS. Matching the camera image to the synthetic image allows the interior and exterior orientation parameters to be determined by means of space resection, and finally the image-measured 2D water line to be transferred into 3D object space to derive the prevailing water level in the reference system of the 3D data. In comparison with conventionally measured water levels, OWL reveals an accuracy potential of 2 cm on average, provided that the synthetic image and the camera image exhibit consistent image contents and that the water line can be reliably detected. In the present dissertation, the related geometric and radiometric problems are comprehensively discussed. Furthermore, possible solutions, based on advancing developments in smartphone technology and image processing as well as the increasing availability of 3D reference data, are presented in the synthesis of the work. The app Open Water Levels, which is currently available as a beta version and has been tested on selected devices, provides a basis that, with continued development, aims for a final release for crowdsourcing water levels, supporting the establishment of new monitoring networks and the expansion of existing ones.
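
    For readers unfamiliar with the space-resection step, the sketch below shows its core in Python with OpenCV. It is a generic perspective-n-point (PnP) solve under the assumption that correspondences between the camera image and the synthetic image have already been established; it is not OWL's actual code, and the function name is illustrative.

```python
import numpy as np
import cv2

def resect_camera(pts3d, pts2d, K, dist=None):
    """Exterior orientation by space resection: pts3d are (N, 3) reference
    points in the 3D data's coordinate system, pts2d their (N, 2) matched
    image positions, K the 3x3 camera (interior orientation) matrix."""
    ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                  pts2d.astype(np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("resection failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix (object -> camera)
    centre = -R.T @ tvec        # camera position in object space
    return R, tvec, centre
```

    With the orientation recovered, each image-measured water-line point defines a ray that can be intersected with the 3D model of the river bank to read off the water level in the reference system of the 3D data.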

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combined application of synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is valuable and meaningful, therefore, to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address the above significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.

    Image Processing Systems and Algorithms for estimating Deformations of Aircraft Structures in Flight

    If you have ever been on an aircraft and looked out of the window, you may have noticed the remarkable deformations of its wings. This observation actually conveys a lot of information about the aerodynamic efforts applied to the aircraft. Long before the first flight of an aircraft, manufacturers are able to predict its mechanical behavior in various scenarios depending, for instance, on the aircraft's weight, speed or angle of attack, based on accurate theoretical models. As part of the aircraft certification procedure, these models have to be validated and refined through in-flight estimation of wing deformations. However, as the quality and accuracy of the wing models increase, the methods used to obtain the actual measurements should also evolve. In this work, a new system is developed and evaluated to estimate the 3D shape of a wing in flight. To meet new requirements for dense mapping, precision and frequency, while introducing no disturbance to the wing's aerodynamic behavior, this study focuses on methods of non-contact 3D reconstruction. After a detailed study of state-of-the-art systems in this field, a photogrammetry approach using multiple cameras installed at the aircraft windows was retained, and a full algorithmic and hardware system was developed. Like most standard photogrammetry methods, the proposed approach is based on Bundle Adjustment (BA), a classical method that simultaneously estimates the camera positions and the surrounding 3D scene. BA is an iterative optimization algorithm that minimizes a non-convex and non-linear cost function. Therefore, one cannot guarantee its convergence to a global minimum, and the choice of initial conditions is crucial in practical applications. Consequently, the application of photogrammetry to 3D wing reconstruction in flight is a very challenging problem, due to strong installation constraints and a highly varying environment with vibrations, luminosity changes, potential reflections and shadows. To face these challenges, this work presents a new constrained BA, which uses prior knowledge of the wing's mechanical limits beyond which the wing would break, and improves reconstruction results as demonstrated through realistic tests. In a second step, an in-depth study of error sources and reconstruction uncertainty is provided in order to guarantee the quality of the 3D estimation and to enable a better interpretation of reconstruction errors. To this aim, all potential sources of uncertainty are evaluated and propagated through the proposed framework using three approaches: analytical calculation, Monte-Carlo simulation, and experimental validation on synthetic images. The different implementations and results allowed conclusions to be drawn about the advantages and disadvantages of each method. They also show that the developed system meets the expectations of Airbus. Finally, the designed system is validated in real tests with an A350-1000 of the Airbus flight test center. These experiments, conducted in real conditions, show the pertinence of the proposed solution with respect to the observed sources of uncertainty and provide promising results.
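
    As a highly simplified sketch of the constrained-adjustment idea, the code below uses SciPy to minimize reprojection error over wing-point deflections only, with bound constraints standing in for the mechanical limits. A real bundle adjustment also estimates the camera parameters; the names, the fixed-camera simplification and the 0.5 m limit are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Pinhole projection of (N, 3) points X with a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

def residuals(z, xy, cameras, observations):
    """Reprojection residuals when only the vertical deflection z of each
    wing point is unknown (planform coordinates xy are taken as known)."""
    X = np.column_stack([xy, z])
    return np.concatenate([(project(P, X) - obs).ravel()
                           for P, obs in zip(cameras, observations)])

def constrained_adjustment(xy, cameras, observations, z0, z_limit=0.5):
    # Bounds encode the prior knowledge: deflections beyond +/- z_limit
    # metres would exceed the wing's mechanical limits, so exclude them.
    result = least_squares(residuals, z0, bounds=(-z_limit, z_limit),
                           args=(xy, cameras, observations))
    return result.x
```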

    Interferometric Synthetic Aperture RADAR and Radargrammetry towards the Categorization of Building Changes

    The purpose of this work is to investigate SAR techniques that rely on multi-image acquisition for fully automatic and rapid change detection analysis at the building level. In particular, the benefits and limitations of a complementary use of two specific SAR techniques, InSAR and radargrammetry, in an emergency context are examined in terms of speed, global applicability and accuracy. The analysis is performed using spaceborne SAR data.

    Image registration for sonar applications

    This work develops techniques to estimate the motion of an underwater platform by processing data from an on-board sonar, such as a Forward Looking Sonar (FLS). Based on image registration, a universal algorithm has been developed and validated with field datasets. The proposed algorithm gives a high-quality registration to fine (sub-pixel) precision using an adaptive filter and is suitable for both optical and acoustic images. The efficiency and quality of the result can be improved if an initial estimate of the motion is made. Therefore, a coarse (pixel-wide) registration algorithm is proposed, based on the assumption of local sparsity in the pixel motion between two images. Using a coarse and then a fine registration, large displacements can be accommodated with a result at sub-pixel precision. The registration process produces a displacement map (DM) between two images. From a sequence of DMs, an estimate of the sensor's motion is made. This is performed by a proposed fast searching and matching technique applied to a library of modelled DMs. Further, this technique exploits regularised splines to estimate the attitude and trajectory of the platform. To validate the results, a mosaic has been produced from three sets of field data. Using a more detailed model of the acoustic propagation has the potential to improve the results further. As a step towards this, a baseband underwater channel model has been developed. A physics simulator is used to characterise the channel at waymark points in a changing environment. A baseband-equivalent representation of the time-varying channel is then interpolated from these points. Processing in the baseband reduces the sample rate and hence reduces the run time of the model. A comparison to a more established channel model has been made to validate the results.
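
    The coarse-then-fine pattern described above is close in spirit to upsampled cross-correlation, for which an off-the-shelf routine exists in scikit-image: phase_cross_correlation first locates the integer-pixel correlation peak and then refines it on a locally upsampled grid. The snippet below is a generic demonstration on synthetic data, not the thesis's adaptive-filter algorithm.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Two synthetic "frames": the second is the first rolled by a known offset.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
mov = np.roll(ref, shift=(7, -3), axis=(0, 1))

# Coarse integer-pixel estimate, then sub-pixel refinement by local
# upsampling (upsample_factor=50 gives a 0.02-pixel grid).
shift, error, _ = phase_cross_correlation(ref, mov, upsample_factor=50)
print(shift)  # recovers the (7, -3) displacement, up to sign convention
```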