9,291 research outputs found

    Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images

    Full text link
    In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to handle this situation is to concatenate the spectral and spatial features into a single but high-dimensional vector and then apply a certain dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, and such a concatenation does not efficiently exploit the complementary properties among the different features, which should help boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
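
    As a concrete point of reference, the following is a minimal sketch of the concatenation-plus-dimension-reduction baseline that the abstract argues against, assuming precomputed spectral and spatial feature matrices; the array shapes, class count, and the choice of PCA and an SVM classifier are illustrative assumptions, not the proposed method.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Hypothetical per-pixel features: spectral signatures and spatial (texture/morphology) descriptors.
        rng = np.random.default_rng(0)
        X_spectral = rng.random((1000, 200))
        X_spatial = rng.random((1000, 60))
        y = rng.integers(0, 9, size=1000)

        # Naive baseline: concatenate into one high-dimensional vector, reduce its dimension, then classify.
        X = np.hstack([X_spectral, X_spatial])
        clf = make_pipeline(PCA(n_components=30), SVC(kernel="rbf"))
        clf.fit(X, y)
        print(clf.score(X, y))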

    Road Condition Mapping by Integration of Laser Scanning, RGB Imaging and Spectrometry

    Get PDF
    Roads are important infrastructure and a primary means of transportation. The effort for control and maintenance of roads is substantial, as the pavement surface deforms and deteriorates due to heavy loads and the influence of weather. Acquiring detailed information about the pavement condition is a prerequisite for proper planning of road pavement maintenance and rehabilitation. Many companies detect and localize road pavement distresses manually, either by on-site inspection or by digitizing laser data and imagery captured by mobile mapping. The automation of road condition mapping using laser data and colour images is a challenge. Beyond that, the mapping of material properties of the road pavement surface with spectrometers has not yet been investigated. This study aims at automatic mapping of road surface condition, including distresses and material properties, by integrating laser scanning, RGB imaging and spectrometry. All recorded data are geo-referenced by means of GNSS/INS. Methods are developed for pavement distress detection that cope with a variety of different weather and asphalt conditions. A further objective is to analyse and map the material properties of the pavement surface using spectrometry data. No standard test data sets are available for benchmarking developments on road condition mapping. Therefore, all data have been recorded with a mobile mapping van which was set up for the purpose of this research. The concept for detecting and localizing the four main pavement distresses, i.e. ruts, potholes, cracks and patches, is the following: ruts and potholes are detected using laser scanning data, cracks and patches using RGB images. For each of these pavement distresses, two or more methods are developed, implemented, compared to each other and evaluated to identify the most successful method. With respect to the material characteristics, spectrometer data of road sections are classified to indicate pavement quality. As a spectrometer registers an almost continuous reflectivity curve in the VIS, NIR and SWIR wavelength ranges, indications of aging can be derived. After detection and localization of the pavement distresses and pavement quality classes, the road condition map is generated by overlaying all distresses and quality classes. As a preparatory step for rut and pothole detection, the road surface is extracted from mobile laser scanning data based on a height jump criterion. For the investigation on rut detection, all scanlines are processed. With an approach based on iterative 1D polynomial fitting, ruts are successfully detected. For streets with a width of 6 m to 10 m, a 6th order polynomial is found to be most suitable. By 1D cross-correlation, the centre of the rut is localized. An alternative method using local curvature shows a high sensitivity to the shape and width of a rut and is less successful. For pothole detection, the approach based on polynomial fitting is generalized to two dimensions. As an alternative, a procedure using geodesic morphological reconstruction is investigated. Bivariate polynomial fitting encounters problems with overshoot at the boundary of the regions. The detection is very successful using geodesic morphological reconstruction. For the detection of pavement cracks, three methods using rotation-invariant kernels are investigated. Line Filter, High-pass Filter and Modified Local Binary Pattern kernels are implemented. A conceptual aspect of the procedure is to achieve a high degree of completeness.
The most successful variant is the Line Filter, for which the highest degree of completeness of 81.2 % is achieved. Two texture measures, the gradient magnitude and the local standard deviation, are employed to detect pavement patches. As patches may differ with respect to homogeneity and may not always have a dark border with the intact pavement surface, the method using the local standard deviation is more suitable for detecting patches. Linear discriminant analysis is utilized for asphalt pavement quality analysis and classification. Road pavement sections of ca. 4 m length are classified into two classes, namely "Good" and "Bad", with an overall accuracy of 77.6 %. The experimental investigations show that the developed methods for automatic distress detection are very successful. By 1D polynomial fitting on laser scanlines, ruts are detected. In addition to ruts, pavement depressions like shoving can also be revealed. The extraction of potholes is less demanding. As potholes appear relatively rarely in the road networks of a city, the road segments which are affected by potholes are selected interactively. While crack detection by the Line Filter works very well, patch detection is more challenging, as patches sometimes look very similar to the intact surface. The spectral classification of pavement sections contributes to road condition mapping as it gives hints on the aging of the road pavement.
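
    One plausible reading of the rut detection step, as a minimal sketch: fit a polynomial reference profile to each laser scanline and flag points lying well below it. The 6th polynomial order follows the abstract; the iteration scheme, depth threshold, and the synthetic profile are assumptions made for illustration, not the thesis' exact procedure.

        import numpy as np

        def rut_mask_from_scanline(x, z, order=6, n_iter=3, depth_threshold=0.01):
            """Iteratively fit a 1D polynomial reference profile, excluding deep outliers,
            then flag points lying more than depth_threshold (m) below the fitted profile."""
            keep = np.ones_like(x, dtype=bool)
            for _ in range(n_iter):
                coeffs = np.polyfit(x[keep], z[keep], order)
                trend = np.polyval(coeffs, x)
                resid = z - trend
                keep = resid > -1.5 * np.std(resid)   # drop candidate rut points from the next fit
            return (z - trend) < -depth_threshold

        # Synthetic 8 m wide cross-profile with two wheel-track depressions (heights in metres).
        x = np.linspace(0.0, 8.0, 400)
        z = (0.02 * np.sin(0.3 * x)
             - 0.015 * np.exp(-((x - 2.0) ** 2) / 0.08)
             - 0.015 * np.exp(-((x - 5.5) ** 2) / 0.08))
        print(rut_mask_from_scanline(x, z).sum(), "candidate rut points")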

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Get PDF
    Recently, growing interest has emerged in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; to monitor complex land ecosystems for biodiversity conservation; for precision agriculture in the management of soils, crops, and pests; for urban planning; for disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes and environmental monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances in all aspects of VHR satellite remote sensing.

    Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

    Get PDF
    Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, as well as forest inventory are a few examples of fields that benefit from the application of 3D surveying data. Compared to the originally used 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of generated 3D information. The 3D data allow detailed post-processing and better visualization of all relevant spatial information. Yet, for the extraction of the required information from the raw scan data and for the generation of usable visual output, time-consuming, complex user-based data processing is still required with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many different works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable for certain data segmentation tasks, they are not necessarily suitable for arbitrary tasks due to the varying requirements of the different fields of research. This thesis presents a more widely applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios, specifically the task of traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, are evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To realize this aim, this work adapts and validates various existing approaches to laser scan segmentation with regard to their application to accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees and other salient objects. The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as possibilities of combining them with other procedures. The obtained knowledge is used for the development of new algorithms and procedures that allow a satisfactory segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, this thesis presents different visualization and reconstruction methods to achieve a wider range of possible applications of the developed system for data export and utilization in different third-party software tools.
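
    As an illustration of one building block named in the abstract, the sketch below separates an approximately planar road/ground surface from a point cloud with a simple RANSAC plane fit; the thresholds, iteration count and the plain-NumPy implementation are assumptions made for illustration, not the segmentation pipeline actually developed in the thesis.

        import numpy as np

        def ransac_ground_plane(points, n_iter=200, dist_thresh=0.05, seed=None):
            """Return a boolean mask of points within dist_thresh (m) of the best-fit plane."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(p2 - p1, p3 - p1)
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                      # degenerate (collinear) sample
                    continue
                normal /= norm
                dist = np.abs((points - p1) @ normal)
                inliers = dist < dist_thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            return best_inliers

        # Usage with a synthetic scene: a flat ground patch plus scattered off-ground clutter.
        rng = np.random.default_rng(0)
        ground = np.column_stack([rng.uniform(0, 10, 500), rng.uniform(0, 10, 500), rng.normal(0, 0.01, 500)])
        clutter = rng.uniform([0, 0, 0.5], [10, 10, 3.0], size=(100, 3))
        pts = np.vstack([ground, clutter])
        print(ransac_ground_plane(pts).sum(), "ground points found")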

    DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images

    Full text link
    We present the DeepGlobe 2018 Satellite Image Understanding Challenge, which includes three public competitions for segmentation, detection, and classification tasks on satellite images. Similar to other challenges in the computer vision domain, such as DAVIS and COCO, DeepGlobe proposes three datasets and corresponding evaluation methodologies, coherently bundled in three competitions with a dedicated workshop co-located with CVPR 2018. We observed that satellite imagery is a rich and structured source of information, yet it is less investigated by computer vision researchers than everyday images. However, bridging modern computer vision with remote sensing data analysis could have a critical impact on the way we understand our environment and lead to major breakthroughs in global urban planning or climate change research. Keeping this bridging objective in mind, DeepGlobe aims to bring together researchers from different domains to raise awareness of remote sensing in the computer vision community and vice versa. We aim to improve and evaluate state-of-the-art satellite image understanding approaches, which can hopefully serve as reference benchmarks for future research on the same topic. In this paper, we analyze the characteristics of each dataset, define the evaluation criteria of the competitions, and provide baselines for each task. Comment: Dataset description for the DeepGlobe 2018 Challenge at CVPR 2018.
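
    For the segmentation tracks, a pixel-wise intersection-over-union score is the kind of criterion typically used; the sketch below is a generic mean-IoU implementation and is not claimed to reproduce the exact DeepGlobe scoring rules (e.g. handling of ignore labels).

        import numpy as np

        def mean_iou(pred, target, num_classes):
            """Mean intersection-over-union for integer label maps of equal shape."""
            scores = []
            for c in range(num_classes):
                p, t = pred == c, target == c
                union = np.logical_or(p, t).sum()
                if union == 0:
                    continue                          # class absent in both maps; skip it
                scores.append(np.logical_and(p, t).sum() / union)
            return float(np.mean(scores))

        # Toy usage: two 4x4 label maps with three classes.
        pred = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 2, 2], [2, 2, 2, 2]])
        target = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [2, 2, 2, 2], [2, 2, 0, 0]])
        print(mean_iou(pred, target, num_classes=3))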

    Developing a three-dimensional city modeling with the absence of elevation data

    Get PDF
    The past few decades have witnessed steady innovations in remote sensing technologies; however, the elevation data needed for creating 3D city models are not available for several regions all over the world. Many developed states are still without a proper nationwide elevation measurement dataset for developing sufficient 3D city models. The current paper addresses the possibility of producing 3D models for areas without elevation data but with footprints and measurements collected from government departments and volunteering individuals. The study aims to investigate and evaluate a different approach to creating three-dimensional city models based on data that exists in open-source maps when elevation measurements are not available. The proposed approach can be divided into two stages: footprint and shadow data collection, and height estimation. First, the footprint information and shadow area are manually gathered from satellite images; then the building height is predicted based on rooftop and shadow data. SketchUp, a 3D design software, is employed as an efficient tool for creating the 3D virtual city model. To develop such a model, the software utilizes procedural modeling in addition to an image-based approach. The developed model can produce a satisfactory and realistic virtual scene within a short time and for a large area. The 3D city model resulting from estimated heights is considered a rational provisional solution for areas where elevation data are not available or are outdated.
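
    A minimal sketch of the height-estimation idea described above, assuming it reduces to the standard shadow-length relation on flat ground; the sun elevation angle would come from the image acquisition metadata, and the function name and example values are hypothetical.

        import math

        def building_height_from_shadow(shadow_length_m, sun_elevation_deg):
            """Estimate building height from the measured shadow length on flat ground."""
            return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

        # e.g. a 12 m shadow under a 40 degree sun elevation -> roughly a 10 m tall building
        print(building_height_from_shadow(12.0, 40.0))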

    Towards automatic modeling of buildings in informal settlements from aerial photographs using deformable active contour models (snakes)

    Get PDF
    This dissertation presents a novel system for semi-automatic modeling of buildings in informal settlement areas from aerial photographs. The building extraction strategy is developed and implemented with the aim of generating a desktop Informal Settlement Geographic Information System (ISGIS), using self-developed and available PC-based GIS tools, to serve novice users in informal settlement areas.
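
    As a generic illustration of the deformable active contour (snake) technique named in the title, the sketch below fits scikit-image's implementation to a placeholder image; the image, initial contour, and parameter values are assumptions, not the dissertation's own formulation.

        import numpy as np
        from skimage import data, filters, segmentation

        # Placeholder grayscale image standing in for an aerial photograph chip of a building.
        image = filters.gaussian(data.camera(), sigma=3)

        # Initial contour: a circle roughly enclosing the presumed roof outline, in (row, col) order.
        t = np.linspace(0, 2 * np.pi, 200)
        init = np.column_stack([250 + 120 * np.sin(t), 250 + 120 * np.cos(t)])

        # The snake deforms towards image edges, balancing elasticity (alpha) and rigidity (beta).
        snake = segmentation.active_contour(image, init, alpha=0.015, beta=10, gamma=0.001)
        print(snake.shape)  # (200, 2) refined boundary coordinates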