
    Mapping shoreline indicators on a sandy beach with supervised edge detection of soil moisture differences

    This study describes a method to map shoreline indicators on a sandy beach. The hypothesis is that, on this beach, spectral albedo is predominantly determined by moisture content and that water lines can therefore be detected as albedo contrasts. A laboratory experiment is performed to relate moisture content to image albedo, and supervised edge detection is subsequently used to map the shoreline indicators in remote sensing imagery. The algorithm is tested with data from the visible, near-infrared and shortwave-infrared wavelength regions. The results are compared to shoreline indicators obtained by a field survey and to a shoreline indicator derived from a digital elevation model. Both the water line present when the imagery was acquired and the maximum extent of the last flood can be detected as a single edge. Older high water lines are confused with the last high water line and appear dispersed, as multiple debris lines are present on the beach. The low water line, usually in saturated sand, also appears dispersed due to the presence of channels and troughs. Shorelines are constantly moving boundaries, which is why shoreline indicators are used as a proxy. Compared with a mathematical indicator based on an elevation model, our method is more sensitive to the dynamic nature of shorelines. Supervised edge detection is a technique for generating reproducible measurements of shoreline indicator positions over time, and it aids in the monitoring of coastline migration.
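
    The paper's supervised edge detection is not reproduced here; as a rough illustration of detecting water lines as albedo contrasts, a minimal gradient-based sketch (the smoothing and threshold values are assumptions, not the study's settings):

        import numpy as np
        from scipy import ndimage

        def detect_albedo_edges(albedo, grad_threshold=0.05):
            """Candidate shoreline indicators as strong albedo contrasts.

            albedo         : 2D array of spectral albedo (wet sand is darker than dry)
            grad_threshold : minimum gradient magnitude kept as an edge (assumed value)
            """
            # Smooth to suppress sensor noise before differentiation.
            smoothed = ndimage.gaussian_filter(albedo, sigma=2.0)
            # The gradient magnitude highlights sharp moisture transitions.
            gy, gx = np.gradient(smoothed)
            return np.hypot(gx, gy) > grad_threshold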

    Road Condition Mapping by Integration of Laser Scanning, RGB Imaging and Spectrometry

    Roads are important infrastructure and a primary means of transportation. The effort for control and maintenance of roads is substantial, as the pavement surface deforms and deteriorates under heavy load and the influence of weather. Acquiring detailed information about the pavement condition is a prerequisite for proper planning of road pavement maintenance and rehabilitation. Many companies detect and localize road pavement distresses manually, either by on-site inspection or by digitizing laser data and imagery captured by mobile mapping. The automation of road condition mapping using laser data and colour images is a challenge. Beyond that, the mapping of material properties of the road pavement surface with spectrometers has not yet been investigated. This study aims at the automatic mapping of road surface condition, including distress and material properties, by integrating laser scanning, RGB imaging and spectrometry. All recorded data are geo-referenced by means of GNSS/INS. Methods are developed for pavement distress detection that cope with a variety of different weather and asphalt conditions. A further objective is to analyse and map the material properties of the pavement surface using spectrometry data. No standard test data sets are available for benchmarking developments in road condition mapping; therefore, all data have been recorded with a mobile mapping van set up for the purpose of this research. The concept for detecting and localizing the four main pavement distresses, i.e. ruts, potholes, cracks and patches, is the following: ruts and potholes are detected using laser scanning data, cracks and patches using RGB images. For each of these pavement distresses, two or more methods are developed, implemented, compared to each other and evaluated to identify the most successful method. With respect to the material characteristics, spectrometer data of road sections are classified to indicate pavement quality. As a spectrometer registers an almost continuous reflectivity curve in the VIS, NIR and SWIR wavelength ranges, indications of aging can be derived. After detection and localization of the pavement distresses and pavement quality classes, the road condition map is generated by overlaying all distresses and quality classes. As a preparatory step for rut and pothole detection, the road surface is extracted from mobile laser scanning data based on a height jump criterion. For the investigation on rut detection, all scanlines are processed. With an approach based on iterative 1D polynomial fitting, ruts are successfully detected; for streets with widths of 6 m to 10 m, a 6th-order polynomial is found to be most suitable. By 1D cross-correlation, the centre of the rut is localized. An alternative method using local curvature shows a high sensitivity to the shape and width of a rut and is less successful. For pothole detection, the approach based on polynomial fitting is generalized to two dimensions. As an alternative, a procedure using geodesic morphological reconstruction is investigated. Bivariate polynomial fitting encounters problems with overshoot at the boundary of the regions, whereas the detection using geodesic morphology is very successful.
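
    The iterative 1D polynomial fitting for rut detection lends itself to a compact sketch; the following is a minimal illustration on a single scanline (the function name, deviation threshold and iteration count are assumptions, not the study's implementation):

        import numpy as np

        def detect_ruts(x, z, degree=6, dev_threshold=0.01, n_iter=3):
            """Sketch of rut detection on one laser scanline.

            x, z          : cross-track position and height of scanline points (metres)
            degree        : polynomial order (the study found 6th order suitable)
            dev_threshold : depth below the fitted surface counted as a rut (assumed)
            """
            mask = np.ones_like(z, dtype=bool)
            for _ in range(n_iter):
                # Fit the coarse road crossfall, then discard strong negative
                # outliers (rut points) so the next fit follows the intact surface.
                coeffs = np.polyfit(x[mask], z[mask], degree)
                residuals = z - np.polyval(coeffs, x)
                mask = residuals > -dev_threshold
            # Points clearly below the final fitted surface are rut candidates.
            return residuals < -dev_threshold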
    For the detection of pavement cracks, three methods using rotation-invariant kernels are investigated: Line Filter, High-pass Filter and Modified Local Binary Pattern kernels are implemented. A conceptual aspect of the procedure is to achieve a high degree of completeness. The most successful variant is the Line Filter, for which the highest degree of completeness of 81.2 % is achieved. Two texture measures, the gradient magnitude and the local standard deviation, are employed to detect pavement patches. As patches may differ with respect to homogeneity and may not always have a dark border with the intact pavement surface, the method using the local standard deviation is more suitable for detecting patches. Linear discriminant analysis is utilized for asphalt pavement quality analysis and classification. Road pavement sections of ca. 4 m length are classified into two classes, namely "Good" and "Bad", with an overall accuracy of 77.6 %. The experimental investigations show that the developed methods for automatic distress detection are very successful. By 1D polynomial fitting on laser scanlines, ruts are detected; in addition to ruts, other pavement deformations such as shoving can be revealed. The extraction of potholes is less demanding. As potholes appear relatively rarely in the road networks of a city, the road segments affected by potholes are selected interactively. While crack detection by the Line Filter works very well, patch detection is more challenging, as patches sometimes look very similar to the intact surface. The spectral classification of pavement sections contributes to road condition mapping as it gives hints on the aging of the road pavement.
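
    As an illustration of the more successful of the two texture measures, a minimal sketch of the local standard deviation of a grayscale pavement image (the window size and the use of scipy are assumptions):

        import numpy as np
        from scipy import ndimage

        def local_std(gray, window=15):
            """Local standard deviation as a texture measure.

            Patches tend to be more homogeneous than the surrounding intact
            asphalt, so their local standard deviation is comparatively low.
            """
            mean = ndimage.uniform_filter(gray.astype(float), size=window)
            mean_sq = ndimage.uniform_filter(gray.astype(float) ** 2, size=window)
            # Var(X) = E[X^2] - E[X]^2; clip tiny negatives from rounding.
            return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))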

    Derivation of forest inventory parameters from high-resolution satellite imagery for the Thunkel area, Northern Mongolia. A comparative study on various satellite sensors and data analysis techniques.

    With the demise of the Soviet Union and the transition to a market economy starting in the 1990s, Mongolia has been experiencing dramatic changes resulting in social and economic disparities and an increasing strain on its natural resources. The situation is exacerbated by a changing climate, the erosion of forestry-related administrative structures, and a lack of law enforcement activities. Mongolia's forests have been afflicted by a dramatic increase in degradation due to human and natural impacts such as overexploitation and wildfire occurrences. In addition, forest management practices are far from sustainable. In order to provide useful information on how to viably and effectively utilise the forest resources in the future, the gathering and analysis of forest-related data is pivotal. Although a National Forest Inventory was conducted in 2016, very little reliable and scientifically substantiated information exists at a regional or even local level. This lack of detailed information warranted a study performed in the Thunkel taiga area in 2017 in cooperation with the GIZ. In this context, we hypothesise that (i) tree species and composition can be identified utilising aerial imagery, (ii) tree height can be extracted from the resulting canopy height model with accuracies commensurate with field survey measurements, and (iii) high-resolution satellite imagery is suitable for the extraction of tree species, the number of trees, and the upscaling of timber volume and basal area based on spectral properties. The outcomes of this study illustrate quite clearly the potential of employing UAV imagery for tree height extraction (R2 of 0.9) as well as for species and crown diameter determination. However, in a few instances, the visual interpretation of the aerial photographs was determined to be superior to the computer-aided automatic extraction of forest attributes. In addition, imagery from various satellite sensors (e.g. Sentinel-2, RapidEye, WorldView-2) proved to be excellently suited for the delineation of burned areas and the assessment of tree vigour. Furthermore, recently developed sophisticated classification approaches such as Support Vector Machines and Random Forest appear to be tailored for tree species discrimination (overall accuracy of 89%). Object-based classification approaches appear to be highly suitable for very high-resolution imagery; at medium scale, however, pixel-based classifiers outperformed them. It is also suggested that high radiometric resolution has the potential to compensate for limited spatial detectability in the imagery. Quite surprising was the occurrence of dark taiga species in riparian areas beyond their natural habitat range. The presented results matrix and the interpretation key have been devised as a decision tool and/or a vademecum for practitioners. In consideration of future projects and to facilitate the improvement of the forest inventory database, the establishment of permanent sampling plots in the Mongolian taiga is strongly advised.
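
    The abstract does not detail the classification setup behind the 89% figure; as a rough sketch of pixel-based tree species discrimination with a Random Forest, under the assumption of per-pixel spectral features with reference labels (the file names are hypothetical):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # X: per-pixel spectral features (n_pixels x n_bands), y: species labels,
        # both assumed to come from labelled reference plots (hypothetical files).
        X = np.load("pixel_spectra.npy")
        y = np.load("species_labels.npy")

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X_train, y_train)
        print("Overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))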

    Recognition and Classification of Ancient Dwellings based on Elastic Grid and GLCM

    A rectangle algorithm is designed to extract ancient dwellings from village satellite images according to their pixel and shape features. Objects that remain unrecognized are distinguished by further extracting their texture features. In order to obtain standardized samples, three pre-processing operations, namely rotating, scaling, and clipping, are applied to unify the samples' sizes and orientations.
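
    Assuming the gray-level co-occurrence matrix (GLCM) features named in the title are computed per standardized sample, a minimal sketch with scikit-image (the distance/angle settings and the chosen statistics are assumptions):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch):
            """Texture descriptors of one candidate dwelling image patch.

            patch : 2D uint8 grayscale array, already rotated, scaled and
                    clipped to the standardized sample size.
            """
            # Co-occurrence of gray levels at distance 1 in four directions.
            glcm = graycomatrix(patch, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            # Haralick-style summary statistics averaged over directions.
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}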

    ASTER, ALI and Hyperion sensors data for lithological mapping and ore minerals exploration


    Framework for Automatic Identification of Paper Watermarks with Chain Codes

    Thesis (Ph.D.), School of Computing and Engineering, University of Missouri--Kansas City, 2017. Dissertation advisor: Reza Derakhshani. Includes bibliographical references (pages 220-235). In this dissertation, I present a new framework for automated description, archiving, and identification of paper watermarks found in historical documents and manuscripts. The early manufacturers of paper introduced the embedding of identifying marks and patterns as a sign of a distinct origin and perhaps as a signature of quality. Thousands of watermarks have been studied, classified, and archived. Most of the classification categories are based on image similarity and are searchable through a set of defined contextual descriptors. The novel method presented here is for automatic classification, identification (matching) and retrieval of watermark images based on chain code (CC) descriptors. The approach for generating unique CCs includes a novel image preprocessing method that provides a rotation- and scale-invariant representation of watermarks. The unique codes are truly reversible, providing high-ratio lossless compression, fast searching, and image matching. The development of a novel distance measure for CC comparison is also presented. Examples for the complete process are given using the recently acquired watermarks digitized with hyperspectral imaging of Summa Theologica, the work of Antonino Pierozzi (1389–1459). The performance of the algorithm on large datasets is demonstrated using watermark datasets from well-known library catalogue collections. Contents: Introduction -- Paper and paper watermarks -- Automatic identification of paper watermarks -- Rotation, scale and translation invariant chain code -- Comparison of RST-invariant chain code -- Automatic identification of watermarks with chain codes -- Watermark composite feature vector -- Summary -- Appendix A. Watermarks from the Bernstein Collection used in this study -- Appendix B. The original and transformed images of watermarks -- Appendix C. The transformed and scaled images of watermarks -- Appendix D. Example of chain code
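
    The dissertation's rotation- and scale-invariant code is not reproduced here; as a minimal sketch of the underlying idea, an 8-connected Freeman chain code of an ordered contour (the invariance normalization is omitted):

        import numpy as np

        # 8-connected neighbour offsets in Freeman order: 0 = east, counter-clockwise.
        OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

        def chain_code(contour):
            """Freeman chain code of an ordered contour.

            contour : list of (row, col) pixel coordinates tracing the watermark
                      outline, each step moving to an 8-neighbour of the previous.
            """
            code = []
            for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
                code.append(OFFSETS.index((r1 - r0, c1 - c0)))
            return code

        # A tiny square traced clockwise in image coordinates:
        square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
        print(chain_code(square))  # [0, 6, 4, 2]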

    Very High Resolution (VHR) Satellite Imagery: Processing and Applications

    Recently, growing interest has appeared in the use of remote sensing imagery to provide synoptic maps of water quality parameters in coastal and inland water ecosystems; to monitor complex land ecosystems for biodiversity conservation; for precision agriculture in the management of soils, crops, and pests; for urban planning; for disaster monitoring; etc. However, for these maps to achieve their full potential, it is important to engage in periodic monitoring and analysis of multi-temporal changes. In this context, very high resolution (VHR) satellite-based optical, infrared, and radar imaging instruments provide reliable information to implement spatially-based conservation actions. Moreover, they enable observations of parameters of our environment at broader spatial and finer temporal scales than those allowed through field observation alone. In this sense, recent very high resolution satellite technologies and image processing algorithms present the opportunity to develop quantitative techniques that have the potential to improve upon traditional techniques in terms of cost, mapping fidelity, and objectivity. Typical applications include multi-temporal classification, recognition and tracking of specific patterns, multisensor data fusion, analysis of land/marine ecosystem processes, environment monitoring, etc. This book aims to collect new developments, methodologies, and applications of very high resolution satellite data for remote sensing. The selected works provide the research community with the most recent advances on all aspects of VHR satellite remote sensing.

    Hyperspectral Data Acquisition and Its Application for Face Recognition

    Current face recognition systems are rife with serious challenges in uncontrolled conditions, e.g., unrestrained lighting, pose variations, accessories, etc. Hyperspectral imaging (HI) is typically employed to counter many of those challenges by incorporating the spectral information within different bands. Although numerous methods based on hyperspectral imaging have been developed for face recognition with promising results, three fundamental challenges remain: 1) low signal-to-noise ratios and low intensity values in the bands of the hyperspectral image, specifically near the blue bands; 2) the high dimensionality of hyperspectral data; and 3) inter-band misalignment (IBM) correlated with subject motion during data acquisition. This dissertation concentrates mainly on addressing these challenges in HI. First, to address the low quality of the bands of the hyperspectral image, we utilize a custom light source that has more radiant power at shorter wavelengths, and we adjust the camera exposure times to correspond to the lower transmittance of the filter and the lower radiant power of our light source. Second, the high dimensionality of spectral data imposes limitations on numerical analysis, so there is an emerging demand for robust data compression techniques that discard only the less relevant information when managing real spectral data. To cope with these challenging problems, we describe a reduced-order data modeling technique based on local proper orthogonal decomposition, which computes low-dimensional models by projecting high-dimensional clusters onto subspaces spanned by local reduced-order bases. Third, we investigate 11 leading alignment approaches to address IBM correlated with subject motion during data acquisition. To overcome the limitations of the considered alignment approaches, we propose an accurate alignment approach (A3) that incorporates the strengths of point correspondence and a low-rank model. In addition, we develop two qualitative prediction models to assess the alignment quality of hyperspectral images and to determine the best alignment among the considered approaches. Finally, we show that the proposed alignment approach leads to promising improvement in the face recognition performance of a probabilistic linear discriminant analysis approach.
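
    The local proper orthogonal decomposition step can be sketched as follows, assuming flattened band vectors grouped by k-means; the cluster count and basis rank here are illustrative assumptions, not the dissertation's settings:

        import numpy as np
        from sklearn.cluster import KMeans

        def local_pod_compress(X, n_clusters=4, rank=10):
            """Sketch of reduced-order modeling with local POD.

            X : data matrix (n_samples x n_features), e.g. flattened spectral bands.
            Each cluster gets its own low-rank basis from an SVD of its
            mean-centred members; samples are stored as `rank` coefficients.
            """
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(X)
            models = []
            for k in range(n_clusters):
                cluster = X[labels == k]
                mean = cluster.mean(axis=0)
                # Right singular vectors give the local reduced-order basis.
                _, _, vt = np.linalg.svd(cluster - mean, full_matrices=False)
                basis = vt[:rank]
                coeffs = (cluster - mean) @ basis.T  # low-dimensional representation
                models.append((mean, basis, coeffs))
            return labels, models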