35 research outputs found

    Lunar Terrain and Albedo Reconstruction from Apollo Imagery

    Generating accurate three-dimensional planetary models and albedo maps is becoming increasingly important as NASA plans more robotic missions to the Moon in the coming years. This paper describes a novel approach for separating topography and albedo maps from orbital lunar images. Our method uses an optimal Bayesian correlator to refine the stereo disparity map and generate a set of accurate digital elevation models (DEMs). The albedo maps are obtained using a multi-image formation model that relies on the derived DEMs and the Lunar-Lambert reflectance model. The method is demonstrated on a set of high-resolution scanned images from the Apollo-era missions.
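The Lunar-Lambert model mentioned above blends a Lommel-Seeliger (lunar) term with a Lambertian term. A minimal sketch of the reflectance function, assuming for illustration a fixed weight L (in practice L depends on the phase angle):

```python
import numpy as np

def lunar_lambert(albedo, inc, emi, L=0.7):
    """Lunar-Lambert reflectance: a weighted blend of a Lommel-Seeliger
    and a Lambertian term.

    albedo   : surface albedo A
    inc, emi : incidence and emission angles in radians
    L        : phase-dependent weight (fixed here purely for illustration)
    """
    mu0, mu = np.cos(inc), np.cos(emi)
    return albedo * (2.0 * L * mu0 / (mu0 + mu) + (1.0 - L) * mu0)

# At normal incidence and emission the model reduces to the albedo itself.
r = lunar_lambert(0.12, 0.0, 0.0)
```

Setting L = 0 recovers the pure Lambertian limit, albedo times the cosine of the incidence angle.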

    Evaluation of an Area-Based Matching Algorithm with Advanced Shape Models

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of new image-matching algorithms (e.g. Semi-Global Matching) that yield superior results with lower processing times, especially because they usually produce smooth and continuous surfaces, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on the improvement of the correlation phase and on process automation. Important changes have been made to the correlation algorithm, while maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.

    New experimental techniques for fracture testing of highly deformable materials

    A new experimental method for measuring strain fields in highly deformable materials has been developed. The technique is based on an in-house Digital Image Correlation (DIC) system capable of accurately capturing localized or non-uniform strain distributions. Thanks to an algorithm based on a Semi-Global Matching (SGM) approach, it is possible to constrain the regularity of the displacement field and thereby significantly improve the reliability of the evaluated strains, especially in highly deformable materials. Originally introduced for digital surface modelling from stereo pairs, SGM was conceived for a one-dimensional search of displacements between images; here, a novel implementation for a 2D displacement solution space is introduced. The SGM approach is compared with a previously developed in-house implementation based on a local Least Squares Matching (LSM) approach. A comparison with the open-source code Ncorr and with some FEM results is also presented. The investigation using the present DIC method focuses on 2D full-field strain maps of plain and notched specimens under tensile loading, made of two different highly deformable materials: hot-mix asphalt and thermoplastic composites for 3D-printing applications. In the latter specimens, an elliptical hole is introduced to assess the potential of the method in experimentally capturing high strain gradients in mixed-mode fracture situations.
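The one-dimensional cost aggregation at the heart of classical SGM can be sketched compactly. The following illustrative code aggregates a matching-cost volume along a single left-to-right path with the usual P1/P2 smoothness penalties (the 2D-displacement variant described above generalizes the disparity axis; penalty values here are arbitrary):

```python
import numpy as np

def sgm_scanline(cost, p1=1.0, p2=6.0):
    """Aggregate a raw matching-cost volume along one scanline direction,
    the dynamic-programming core of Semi-Global Matching.

    cost : (n_pixels, n_disparities) raw per-pixel matching costs
    p1   : penalty for a disparity change of exactly 1 between neighbours
    p2   : penalty for any larger disparity jump
    """
    n, d = cost.shape
    agg = np.empty_like(cost)
    agg[0] = cost[0]
    for i in range(1, n):
        prev = agg[i - 1]
        best = prev.min()
        same = prev                                        # keep disparity
        up = np.concatenate(([np.inf], prev[:-1])) + p1    # d -> d + 1
        down = np.concatenate((prev[1:], [np.inf])) + p1   # d -> d - 1
        jump = best + p2                                   # larger change
        smooth = np.minimum(np.minimum(same, up), np.minimum(down, jump))
        agg[i] = cost[i] + smooth - best                   # keep costs bounded
    return agg
```

In full SGM this aggregation is repeated along several path directions and the path sums are accumulated before taking the per-pixel minimum.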

    Autonomous Repeat Image Feature Tracking (autoRIFT) and Its Application for Tracking Ice Displacement

    In this paper, we build on past efforts to implement an efficient feature tracking algorithm for the mass processing of satellite images. This generic open-source feature tracking routine can be applied to any type of imagery to measure sub-pixel displacements between images. The routine consists of a feature tracking module (autoRIFT) that enhances computational efficiency and a geocoding module (Geogrid) that mitigates problems found in existing geocoding algorithms. When applied to satellite imagery, autoRIFT can run on a grid in the native image coordinates (such as radar or map) and, when used in conjunction with the Geogrid module, on a user-defined grid in geographic Cartesian coordinates such as Universal Transverse Mercator or Polar Stereographic. To validate the efficiency and accuracy of this approach, we demonstrate its use for tracking ice motion by using ESA’s Sentinel-1A/B radar data (seven pairs) and NASA’s Landsat-8 optical data (seven pairs) collected over Greenland’s Jakobshavn Isbræ glacier in 2017. Feature-tracked velocity errors are characterized over stable surfaces, where the best Sentinel-1A/B pair with a 6-day separation has errors in X/Y of 12 m/year or 39 m/year, compared to 22 m/year or 31 m/year for Landsat-8 with a 16-day separation. Different error sources for radar and optical image pairs are investigated, and the seasonal variation and the error dependence on the temporal baseline are analyzed. Estimated velocities were compared with reference velocities derived from DLR’s TanDEM-X SAR/InSAR data over the fast-moving glacier outlet, where Sentinel-1 results agree within 4% compared to 3–7% for Landsat-8. A comprehensive apples-to-apples comparison is made with regard to runtime and accuracy between multiple implementations of the proposed routine and the widely used “dense ampcor” program from NASA/JPL’s ISCE software. autoRIFT is shown to provide two orders of magnitude of runtime improvement along with a 20% improvement in accuracy.
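Trackers of this kind locate the peak of a cross-correlation surface between image chips from the two acquisitions. A minimal FFT-based sketch of the integer-pixel step (autoRIFT adds normalization, sub-pixel refinement and robust filtering on top of this idea):

```python
import numpy as np

def track_shift(ref, sec):
    """Locate the integer-pixel displacement between two image chips via
    the peak of their FFT-based cross-correlation surface. Deliberately
    minimal: no normalization, no sub-pixel refinement, no filtering.
    """
    r = ref - ref.mean()
    s = sec - sec.mean()
    spec = np.fft.rfft2(s) * np.conj(np.fft.rfft2(r))
    corr = np.fft.irfft2(spec, s=r.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = r.shape
    if dy > h // 2:      # map wrap-around peaks to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

The FFT formulation is what makes mass processing of large chip grids computationally feasible.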

    GLAcier Feature Tracking testkit (GLAFT): a statistically and physically based framework for evaluating glacier velocity products derived from optical satellite image feature tracking

    Glacier velocity measurements are essential to understand ice flow mechanics, monitor natural hazards, and make accurate projections of future sea-level rise. Despite these important applications, the method most commonly used to derive glacier velocity maps, feature tracking, relies on empirical parameter choices that rarely account for glacier physics or uncertainty. Here we test two statistics- and physics-based metrics to evaluate velocity maps derived from optical satellite images of Kaskawulsh Glacier, Yukon, Canada, using a range of existing feature-tracking workflows. Based on inter-comparisons with ground truth data, velocity maps with metrics falling within our recommended ranges contain fewer erroneous measurements and more spatially correlated noise than velocity maps with metrics that deviate from those ranges. Thus, these metric ranges are suitable for refining feature-tracking workflows and evaluating the resulting velocity products. We have released an open-source software package for computing and visualizing these metrics, the GLAcier Feature Tracking testkit (GLAFT).
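One way to make such a physics-based check concrete is to flag pixels whose implied strain rate is physically implausible for ice. The sketch below is in the spirit of GLAFT's metrics rather than its actual implementation, and the threshold value is purely illustrative:

```python
import numpy as np

def strain_rate_outliers(vx, vy, dx=1.0, threshold=1.0):
    """Flag velocity-map pixels whose principal strain rate exceeds a
    plausibility threshold (illustrative, not GLAFT's actual metric).

    vx, vy    : velocity components on a regular grid (e.g. m/yr)
    dx        : grid spacing in the same length unit
    threshold : maximum plausible strain-rate magnitude (1/yr)
    """
    dudy, dudx = np.gradient(vx, dx)
    dvdy, dvdx = np.gradient(vy, dx)
    shear = 0.5 * (dudy + dvdx)
    # principal strain rates of the symmetric 2-D strain-rate tensor
    mean = 0.5 * (dudx + dvdy)
    radius = np.sqrt((0.5 * (dudx - dvdy)) ** 2 + shear ** 2)
    e_max = np.abs(mean) + radius
    return e_max > threshold
```

A mask like this separates physically coherent flow from matching blunders before any statistics are computed.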

    Development of an SGM-based multi-view reconstruction framework for aerial imagery

    Advances in the technology of digital airborne camera systems allow for the observation of surfaces with sampling rates in the range of a few centimeters. In combination with novel matching approaches, which estimate depth information for virtually every pixel, surface reconstructions of impressive density and precision can be generated. Image-based surface generation has therefore become a serious alternative to LiDAR-based data collection for many applications. Surface models serve as the primary basis for geographic products such as map creation, production of true-ortho photos, or visualization within virtual globes. The goal of the presented thesis is the development of a framework for the fully automatic generation of 3D surface models from aerial images - both standard nadir and oblique views. This comprises several challenges. On the one hand, the dimensions of aerial imagery are considerable and the extent of the areas to be reconstructed can encompass whole countries. Besides scalability of methods, this also requires decent processing times and efficient handling of the given hardware resources. Moreover, beside high precision requirements, a high degree of automation has to be guaranteed to limit manual interaction as much as possible. Due to its advantages in scalability, a stereo method is utilized in the presented thesis. The approach for dense stereo is based on an adapted version of the Semi-Global Matching (SGM) algorithm. Following a hierarchical approach, corresponding image regions and meaningful disparity search ranges are identified. It is verified that, depending on the undulations of the scene, time and memory demands can be reduced significantly, by up to 90% in some of the conducted tests. This enables the processing of aerial datasets on standard desktop machines in reasonable times, even for large fields of depth. 
Stereo approaches generate disparity or depth maps in which redundant depth information is available. To exploit this redundancy, a method for the refinement of stereo correspondences is proposed: redundant observations across stereo models are identified, checked for geometric consistency, and their reprojection error is minimized. This way, outliers are removed and the precision of depth estimates is improved. In order to generate consistent surfaces, two algorithms for depth map fusion were developed. The first fusion strategy aims at the generation of 2.5D height models, also known as digital surface models (DSMs). The proposed method improves on existing methods regarding quality in areas of depth discontinuities, for example at roof edges. Using benchmarks designed for the evaluation of image-based DSM generation, we show that the developed approaches compare favorably to state-of-the-art algorithms and that height precisions of a few GSDs can be achieved. Furthermore, methods for the derivation of meshes from DSM data are discussed. The fusion of depth maps for full 3D scenes, as frequently required during the evaluation of high-resolution oblique aerial images of complex urban environments, demands a different approach, since such scenes in general cannot be represented as height fields. Moreover, depths across depth maps possess varying precision and sampling rates due to variations in image scale, errors in orientation and other effects. Within this thesis, a median-based fusion methodology is proposed. Using geometry-adaptive triangulation of the depth maps, point-wise normals are extracted and, along with the point coordinates, filtered and fused using tree structures. The output of this method is a set of oriented points, which can then be used to generate meshes. Precision and density of the method are evaluated using established multi-view benchmarks. 
Besides the capability to process close-range datasets, results for large oblique airborne datasets are presented. The thesis closes with a summary, a discussion of limitations, and perspectives regarding improvements and enhancements. The implemented algorithms are core elements of the commercial software package SURE, which is freely available for scientific purposes.
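The redundancy exploited by the median-based fusion can be illustrated in a few lines: given several depth estimates per pixel, the median suppresses matching outliers while a minimum-support rule rejects weakly observed pixels. A simplified 2.5D raster sketch (the thesis operates on oriented 3D points with tree structures, not on a fixed raster):

```python
import numpy as np

def fuse_depth_maps(depths, min_views=2):
    """Median-based fusion of redundant per-pixel depth estimates.

    depths    : (n_maps, h, w) stack of depth maps, NaN where no estimate
    min_views : minimum number of valid observations required per pixel

    Returns the fused map, NaN where support is insufficient. The median
    discards gross outlier depths that survive individual stereo matching.
    """
    valid = np.sum(~np.isnan(depths), axis=0)
    fused = np.nanmedian(depths, axis=0)
    fused[valid < min_views] = np.nan
    return fused
```

The same two ideas, robust averaging plus a support count, carry over to the point-based 3D fusion described in the abstract.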

    Erkennung bewegter Objekte durch raum-zeitliche Bewegungsanalyse (Detection of Moving Objects by Spatio-Temporal Motion Analysis)

    Driver assistance systems of the future, which will support the driver in complex driving situations, require a thorough understanding of the car's environment. This includes not only the comprehension of the infrastructure, but also the precise detection and measurement of other moving traffic participants. In this thesis, a novel principle is presented and investigated in detail that allows the reconstruction of the 3D motion field from the image sequence obtained by a stereo camera system. Given correspondences of stereo measurements over time, this principle estimates the 3D position and the 3D motion vector of selected points using Kalman filters, resulting in a real-time estimation of the observed motion field. Since the state vector of the Kalman filter consists of six elements, the principle is called 6d-Vision. To estimate the absolute motion field, the ego-motion of the moving observer must be known precisely. Since cars are usually not equipped with high-end inertial sensors, a novel algorithm to estimate the ego-motion from the image sequence is presented. Based on a Kalman filter, it is able to support even complex vehicle models, and it takes advantage of all available data, namely the previously estimated motion field and any available inertial sensors. As the 6d-Vision principle is not restricted to particular algorithms for obtaining the image measurements, various optical flow and stereo algorithms are evaluated. In particular, a novel dense stereo algorithm is presented that gives excellent precision and runs in real time. In addition, two novel scene flow algorithms are introduced that measure the optical flow and stereo information in a combined approach, yielding more precise and robust results than a separate analysis of the two information sources. The application of the 6d-Vision principle to real-world data is illustrated throughout the thesis. 
As practical applications usually require an object understanding rather than a 3D motion field, a simple yet efficient algorithm to detect and track moving objects is presented. This algorithm was successfully implemented in a demonstrator vehicle that performs autonomous braking or steering manoeuvres to avoid collisions with moving pedestrians.
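The six-element state of the 6d-Vision filter (3D position plus 3D velocity) corresponds structurally to a constant-velocity Kalman filter with position-only measurements. A minimal sketch under that assumption (noise magnitudes are illustrative, not the thesis's tuning):

```python
import numpy as np

def make_cv_kalman(dt, q=0.5, r=0.1):
    """Matrices of a constant-velocity Kalman filter over the 6-D state
    [x, y, z, vx, vy, vz]; q and r are illustrative noise magnitudes."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                     # position integrates velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is observed
    Q = q * np.eye(6)
    R = r * np.eye(3)
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle for a 3-D point measurement z."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # update with position
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

Fed with triangulated stereo points over time, such a filter infers the velocity components that are never measured directly, which is the essence of the 6d-Vision idea.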

    3-D Cloud Morphology and Evolution Derived from Hemispheric Stereo Cameras

    Clouds play a key role in the Earth-atmosphere system, as they reflect incoming solar radiation back to space while absorbing and emitting longwave radiation. Cumulus clouds pose a significant challenge for observation and modeling due to their relatively small size, which ranges from several hundred up to a few thousand meters, their often complex 3-D shapes, and their highly dynamic life cycle. Common instruments employed to study clouds include cloud radars, lidar-ceilometers and (microwave) radiometers, as well as satellite and airborne observations (in-situ and remote), all of which lack either sufficient sensitivity or the spatial or temporal resolution for a comprehensive observation. This thesis investigates the feasibility of a ground-based network of hemispheric stereo cameras to retrieve detailed 3-D cloud geometries, which are needed for the validation of simulated cloud fields and for parametrization in numerical models. Such camera systems, which offer a hemispheric field of view and a temporal resolution in the range of seconds or less, have the potential to fill the remaining gap in cloud observations to a considerable degree and allow critical information to be derived about the size, morphology, spatial distribution and life cycle of individual clouds and the local cloud field. The technical basis for the 3-D cloud morphology retrieval is stereo reconstruction: a cloud is synchronously recorded by a pair of cameras separated by a few hundred meters, so that mutually visible areas of the cloud can be reconstructed via triangulation. The location and orientation of each camera system were obtained from a satellite-navigation system, from stars detected in night-sky images, and from mutually visible cloud features in the images. The image point correspondences required for 3-D triangulation were provided primarily by a dense stereo matching algorithm that allows an object to be reconstructed with a high degree of spatial completeness, which can improve subsequent analysis. 
The experimental setup in the vicinity of the Jülich Observatory for Cloud Evolution (JOYCE) included a pair of hemispheric sky cameras; it was later extended by another pair, separated from the first by several kilometers, to reconstruct clouds from different viewing perspectives. A comparison of the cloud base height (CBH) at zenith obtained from the stereo cameras and a lidar-ceilometer showed a typical bias of mostly below 2% of the lidar-derived CBH, with a few occasions between 3% and 5%. Typical standard deviations of the differences ranged from 50 m (1.5% of CBH) for altocumulus clouds to 7% (123 m) and 10% (165 m) for cumulus and stratocumulus clouds. A comparison of the estimated 3-D cumulus boundary at near-zenith with the sensed 2-D reflectivity profiles from a 35-GHz cloud radar revealed typical differences between 35 and 81 m. For clouds at larger distances (> 2 km), both signals can deviate significantly, which can in part be explained by a lower reconstruction accuracy for the low-contrast areas of a cloud base, but also by the insufficient sensitivity of the cloud radar when the cloud condensate is dominated by very small droplets or diluted with environmental air. For sequences of stereo images, the 3-D cloud reconstructions from the stereo analysis can be combined with the motion and tracking information from an optical flow routine in order to derive 3-D motion and deformation vectors of clouds. This made it possible to estimate atmospheric motion in cloud layers with an accuracy of 1 m/s in velocity and 7° to 10° in direction. The fine-grained motion data were also used to detect and quantify cloud motion patterns of individual cumuli, such as deformations under vertical wind shear. The potential of the proposed method lies in an extended analysis of the life cycle and morphology of cumulus clouds. This is illustrated in two case studies in which developing cumulus clouds were reconstructed from two different viewing perspectives. 
In the first case study, a moving cloud subject to vertical wind shear was tracked and analyzed. The highly tilted cloud body was captured and its vertical profile was quantified to obtain measures such as vertically resolved diameter or tilting angle. The second case study presents a life-cycle analysis of a developing cumulus, including a time series of relevant geometric quantities, such as perimeter, vertically projected area, diameter and thickness, and further derived statistics such as cloud aspect ratio or perimeter scaling. The analysis confirms some aspects of cloud evolution, such as the pulse-like formation of cumulus, and indicates that the cloud aspect ratio (size vs. height) can be described by a power-law relationship over an individual life cycle.
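The core geometric step, triangulating a cloud feature seen by two distant cameras, reduces to intersecting two viewing rays that in practice never meet exactly. A minimal sketch returning the midpoint of the shortest segment between the two rays:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Triangulate a 3-D point from two camera rays.

    c1, c2 : camera centers
    d1, d2 : viewing directions toward the mutually visible feature
    Returns the midpoint of the shortest segment between the two
    (generally skew) rays; undefined for parallel rays.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1
    a = d1 @ d2                     # cosine of the ray angle
    c, e = d1 @ b, d2 @ b
    denom = 1.0 - a * a
    t1 = (c - a * e) / denom        # parameter along ray 1
    t2 = (a * c - e) / denom        # parameter along ray 2
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

With ray directions derived from the calibrated fisheye projection, the same formula yields the 3-D cloud points described in the abstract.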

    Assessment of the CORONA series of satellite imagery for landscape archaeology: a case study from the Orontes valley, Syria

    In 1995, a large database of satellite imagery with worldwide coverage, taken from 1960 until 1972, was declassified. The main advantages of this imagery, known as CORONA, that made it attractive for archaeology were its moderate cost and its historical value. The main disadvantages were its unknown quality, format and geometry, and the limited base of known applications. This thesis has sought to explore the properties and potential of CORONA imagery and thus enhance its value for applications in landscape archaeology. In order to ground these investigations in a real dataset, the properties and characteristics of CORONA imagery were explored through the case study of a landscape archaeology project working in the Orontes Valley, Syria. Present-day high-resolution IKONOS imagery was integrated within the study and assessed alongside the CORONA imagery. The combination of these two image datasets was shown to provide a powerful set of tools for investigating past archaeological landscapes in the Middle East. The imagery was assessed qualitatively, through photointerpretation of its ability to detect archaeological remains; quantitatively, through the extraction of height information after the creation of stereomodels; spectrally, through fieldwork and spectroradiometric analysis; and for its Multiple View Angle (MVA) capability, through visual and statistical analysis. Landscape archaeology requires a variety of data to be gathered from a large area in an effective and inexpensive way. This study demonstrates an effective methodology for the deployment of CORONA and IKONOS imagery and raises a number of technical points of which the archaeological research community needs to be aware. Simultaneously, it identifies certain limitations of the data and suggests solutions for the more effective exploitation of the strengths of CORONA imagery.

    Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking', which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches and is more general, as it applies not only to the ground plane.
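The parabola-fitting estimator whose bias causes pixel-locking is compact enough to write out. Given the aggregated matching costs at the best integer disparity and its two neighbours, the sub-pixel offset is the vertex of the interpolating parabola:

```python
def parabola_subpixel(c_left, c_center, c_right):
    """Standard sub-pixel disparity refinement: fit a parabola through
    the costs at the best integer disparity and its two neighbours and
    return the offset of the parabola's vertex, in [-0.5, 0.5].
    The systematic bias of this estimator toward integer disparities is
    the pixel-locking artefact the paper attenuates.
    """
    denom = c_left - 2.0 * c_center + c_right
    if denom == 0.0:
        return 0.0
    return 0.5 * (c_left - c_right) / denom
```

When the true cost surface is not parabolic, the vertex estimate drifts toward 0, which is exactly what produces the peaked sub-pixel histograms described above.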