
    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, effective processing of the raw LiDAR data and the generation of an efficient, high-quality DEM remain major challenges. This paper reviews recent advances in airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground is the most critical and difficult step in DEM generation from LiDAR data. Commonly used and recently developed LiDAR filtering methods are presented. Interpolation methods and the choice of a suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. To reduce data redundancy and increase efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality. Therefore, data reduction should be conducted in such a way that critical elements are kept while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be extracted directly from the point cloud. Extraction of breaklines and their integration into DEM generation are presented.
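    The ground/non-ground separation step described above can be illustrated with a deliberately simple sketch (not one of the specific filters surveyed in the paper): points within a height tolerance of their grid cell's minimum elevation are kept as provisional ground. The cell size and tolerance are assumed, illustrative parameters.

```python
from collections import defaultdict

def ground_filter(points, cell=5.0, tol=0.5):
    """points: list of (x, y, z); returns (ground, non_ground) lists.
    Illustrative grid-minimum filter, not a production LiDAR filter."""
    cells = defaultdict(list)
    for p in points:
        # bin each point into a square grid cell of side `cell`
        cells[(int(p[0] // cell), int(p[1] // cell))].append(p)
    ground, non_ground = [], []
    for pts in cells.values():
        zmin = min(p[2] for p in pts)  # local minimum elevation in the cell
        for p in pts:
            (ground if p[2] - zmin <= tol else non_ground).append(p)
    return ground, non_ground

pts = [(0, 0, 10.0), (1, 1, 10.2), (2, 2, 14.0)]  # third point: a canopy hit
g, ng = ground_filter(pts)
```

    Real filters (morphological, slope-based, TIN-densification) refine this idea iteratively to avoid misclassifying sloped terrain.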

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic, human-interpretable objects (roads, forest and water boundaries) in onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features through a series of robust data association steps yields a localisation solution whose absolute error is bounded by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution to the defense mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.

    Methodology and Algorithms for Pedestrian Network Construction

    With the advanced capabilities of mobile devices and the success of car navigation systems, interest in pedestrian navigation systems is on the rise. A critical component of any navigation system is a map database which represents a network (e.g., road networks in car navigation systems) and supports key functionality such as map display, geocoding, and routing. Road networks, mainly due to the popularity of car navigation systems, are well defined and publicly available. However, in pedestrian navigation systems, as well as in other applications including urban planning and physical activity studies, road networks do not adequately represent the paths that pedestrians usually travel. Currently, there are no techniques to automatically construct pedestrian networks, impeding research and development of applications requiring pedestrian data. This, coupled with the increased demand for pedestrian networks, is the prime motivation for this dissertation, which is focused on the development of a methodology and algorithms that can construct pedestrian networks automatically. A methodology involving three independent approaches was developed: network buffering (using existing road networks), collaborative mapping (using GPS traces collected by volunteers), and image processing (using high-resolution satellite and laser imagery). Experiments were conducted to evaluate the pedestrian networks constructed by these approaches against a pedestrian network baseline as ground truth. The results of the experiments indicate that these three approaches, while differing in complexity and outcome, are viable for automatically constructing pedestrian networks.
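    The network-buffering approach, which derives pedestrian paths from existing road centerlines, can be sketched as offsetting each road segment to both sides by an assumed sidewalk distance. This is a crude stand-in for the dissertation's method; the offset distance is an illustrative parameter.

```python
import math

def offset_segment(p1, p2, d):
    """Offset a straight road segment by distance d to each side,
    producing candidate left/right sidewalk segments."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length  # unit normal to the centerline
    left = ((p1[0] + nx * d, p1[1] + ny * d), (p2[0] + nx * d, p2[1] + ny * d))
    right = ((p1[0] - nx * d, p1[1] - ny * d), (p2[0] - nx * d, p2[1] - ny * d))
    return left, right

# A 10 m east-west road, sidewalks assumed 3 m from the centerline:
left, right = offset_segment((0.0, 0.0), (10.0, 0.0), 3.0)
```

    A full implementation would also trim and join the offsets at intersections; geometry libraries such as Shapely provide parallel-offset operations for whole polylines.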

    Automatic road network extraction in suburban areas from aerial images

    [No abstract available]

    Unsupervised multi-scale change detection from SAR imagery for monitoring natural and anthropogenic disasters

    Thesis (Ph.D.) University of Alaska Fairbanks, 2017. Radar remote sensing can play a critical role in operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities and its high performance in mapping and monitoring of change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radar satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract signatures from time series of SAR images and provide the necessary flexibility for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inferencing; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every SAR acquisition. The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to map wildfire progression. I applied Radiometric Terrain Correction (RTC) to the data to increase the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only minor modification. While the core of the change detection technique remained unchanged, I modified the pre-processing step to enable change detection from scenes with continuously varying backgrounds. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of environmental conditions during image acquisition. For instance, I showed that LR processing reduces the sensitivity of change detection performance to variations in surface winds, a known limitation of oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis-flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters. In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information has been extracted from SAR imagery. A summary of the developed change detection techniques and suggested future work are presented in Chapter 6.
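    A minimal amplitude-only change-detection sketch in the spirit of the above, using a per-pixel log-ratio and a simple global threshold in place of the dissertation's multiscale, Bayesian machinery. Images are flattened to 1-D lists here, and the threshold factor k is an assumed illustrative parameter.

```python
import math

def log_ratio_change(img1, img2, k=1.5):
    """Flag pixels whose log-ratio deviates from the scene mean by more
    than k standard deviations. Sketch only: real SAR change detection
    also handles speckle filtering and spatially adaptive thresholds."""
    lr = [math.log(b / a) for a, b in zip(img1, img2)]  # amplitude log-ratio
    mu = sum(lr) / len(lr)
    sd = (sum((v - mu) ** 2 for v in lr) / len(lr)) ** 0.5
    return [abs(v - mu) > k * sd for v in lr]

before = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
after = [1.0, 1.1, 0.9, 5.0, 1.05, 0.95]  # one pixel changed strongly
changed = log_ratio_change(before, after)
```

    The log-ratio is the classical SAR change statistic because it turns the multiplicative speckle model into an additive one.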

    Remote Sensing

    This dual conception of remote sensing led us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original and representative contributions in these areas.

    Automation of road feature extraction from high resolution images

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. The detection of road features from remotely sensed images has become a critical factor in maintaining a reliable and updated national road network as a base reference for transportation, emergency planning, and navigation. With the recent advances of convolutional neural networks in image processing, several publications have been devoted to developing methods for automatically extracting roads from satellite images. However, a reliable feature extraction method with the desired accuracy and precision has not yet been developed, and there always seems to be a trade-off between the accuracy and the complexity of these methods. The aim of this study was therefore to develop an accurate road extraction method without compromising computational efficiency. In this paper, a semantic segmentation neural network that combines the strengths of transfer learning and the U-net architecture is proposed with minimal network complexity. Further, post-processing based on morphological operations and regional properties of the extracted segments was used to remove noise from the final output. The results were compared with different automatic classification and segmentation methods: the proposed method produced an F1 score of 0.83 and a high accuracy of 95.57% on the freely available Massachusetts dataset, more accurate and precise than all the other models. Finally, the developed method proved superior to the pre-existing methods in terms of performance measures and network complexity.
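    The post-processing based on regional properties can be illustrated by removing small connected components from a binary road mask. This is a generic sketch of the idea, not the thesis's exact procedure; the minimum region size is an assumed parameter.

```python
from collections import deque

def remove_small_regions(mask, min_size):
    """Drop 4-connected foreground regions smaller than min_size pixels.
    mask: list of lists of 0/1; returns a cleaned copy."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # breadth-first search to collect one connected component
                q, comp = deque([(i, j)]), []
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) < min_size:  # region too small: treat as noise
                    for y, x in comp:
                        out[y][x] = 0
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone pixel is segmentation noise
        [0, 0, 0, 0]]
clean = remove_small_regions(mask, min_size=2)
```

    Libraries such as scikit-image offer the same operation as `remove_small_objects`, usually combined with morphological opening and closing.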

    Semi-automatic Road Extraction from Very High Resolution Remote Sensing Imagery by RoadModeler

    Accurate and up-to-date road information is essential for both effective urban planning and disaster management. Today, very high resolution (VHR) imagery acquired by airborne and spaceborne imaging sensors is the primary source for acquiring spatial information on ever-growing road networks. Given the increased availability of aerial and satellite images, it is necessary to develop computer-aided techniques to improve the efficiency and reduce the cost of road extraction tasks. Therefore, automation of image-based road extraction is a very active research topic. This thesis deals with the development and implementation of a semi-automatic road extraction strategy, which includes two key approaches: multidirectional and single-direction road extraction. It requires a human operator to initialize a seed circle on a road and specify an extraction approach before the road is extracted by automatic algorithms using multiple vision cues. The multidirectional approach is used to detect roads with different materials, widths, intersection shapes, and degrees of noise, but it sometimes interprets parking lots as road areas. In contrast, the single-direction approach detects roads with few mistakes, but each seed circle can only be used to detect one road. In accordance with this strategy, a RoadModeler prototype was developed. Both aerial and GeoEye-1 satellite images of seven different types of scenes with various road shapes in rural, downtown, and residential areas were used to evaluate the performance of the RoadModeler. The experimental results demonstrated that the RoadModeler is reliable and easy to use by a non-expert operator, and that it clearly outperforms object-oriented classification. Its average road completeness, correctness, and quality reached 94%, 97%, and 94%, respectively, higher than the 91%, 90%, and 85% reported by Hu et al. (2007). The successful development of the RoadModeler suggests that the integration of multiple vision cues potentially offers a solution for simple and fast acquisition of road information. Recommendations are given for further research to take this progress beyond the prototype stage and towards everyday use.
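    The completeness, correctness, and quality figures quoted above are commonly computed from matched true-positive, false-positive, and false-negative road lengths (the standard road-extraction measures); the lengths below are illustrative values, not the thesis's data.

```python
def road_metrics(tp, fp, fn):
    """Standard road-extraction measures from matched lengths (e.g. metres):
    completeness = TP/(TP+FN), correctness = TP/(TP+FP),
    quality = TP/(TP+FP+FN)."""
    return {
        "completeness": tp / (tp + fn),
        "correctness": tp / (tp + fp),
        "quality": tp / (tp + fp + fn),
    }

# Hypothetical example: 940 m correctly extracted, 30 m false extraction,
# 60 m of reference road missed.
m = road_metrics(tp=940.0, fp=30.0, fn=60.0)
```

    Quality is the most conservative of the three, since it penalizes both misses and false extractions in a single ratio.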

    Automatic Reconstruction of Urban Objects from Mobile Laser Scanner Data

    Up-to-date 3D urban models are becoming increasingly important in various urban application areas, such as urban planning, virtual tourism, and navigation systems. Many of these applications often demand the modelling of 3D buildings, enriched with façade information, and also of single trees among other urban objects. Nowadays, the Mobile Laser Scanning (MLS) technique is increasingly used to capture objects in urban settings, making it a leading data source for the modelling of these two object classes. The 3D point clouds of urban scenes consist of large amounts of data representing numerous objects of significant size variability, with complex and incomplete structures, holes (noise and data gaps), and variable point densities.
    For this reason, novel strategies for processing mobile laser scanning point clouds, in terms of the extraction and modelling of salient façade structures and trees, are of vital importance. The present study proposes two new methods, for the reconstruction of building façades and for the extraction of trees from MLS point clouds. The first method aims at the reconstruction of building façades with explicit semantic information such as windows, doors and balconies, and runs fully automatically through all processing steps. For this purpose, several algorithms are introduced based on general knowledge of the geometric shape and structural arrangement of façade features. The initial classification is performed using a local height histogram analysis together with a planar region-growing method, which classifies points as object or ground points. The points labelled as object points are segmented into planar surfaces, which are regarded as the basic entities of the feature recognition process. Knowledge of the building structure is used to define rules and constraints, which provide essential guidance for recognizing façade features and reconstructing their geometric models. To recognise features on a wall, such as windows and doors, a hole-based method is implemented. Holes that result from occlusion can subsequently be eliminated by means of a new rule-based algorithm. Boundary segments of a feature are connected into a polygon representing its geometric model by a primitive-shape-based method, in which topological relations are analysed taking into account prior knowledge about the primitive shapes. Possible outlines are determined from edge points detected by an angle-based method. Repetitive patterns and similarities are exploited to rectify geometric and topological inaccuracies of the reconstructed models. Apart from developing the 3D façade model reconstruction scheme, the research focuses on individual tree segmentation and the derivation of attributes of urban trees. The second method aims at extracting individual trees from the remaining point clouds. Knowledge about trees, specifically pertaining to urban areas, is used in the extraction process. An innovative shape-based approach is developed to transfer this knowledge to machine language. The use of the principal direction of the point distribution for identifying stems is introduced, searching for point segments that represent a tree stem. The output of the algorithm is a set of segmented individual trees that can be used to derive accurate information about the size and location of each tree. The reliability of the two methods is verified against three data sets obtained from different laser scanner systems. The results of both methods are quantitatively evaluated using a set of measures pertaining to the quality of the façade reconstruction and tree extraction. The performance of the developed algorithms for façade reconstruction, tree stem detection and the delineation of individual tree crowns, as well as their limitations, is discussed. The results show that MLS point clouds are suited to documenting urban objects rich in detail, and that accurate measurements of the most important attributes of both object classes, such as window height and width, area, stem diameter, tree height, and crown area, can be obtained. The entire approach is suitable for the reconstruction of building façades and for correctly extracting trees and distinguishing them from other urban objects, especially pole-like objects. Therefore, both methods can cope with data of heterogeneous quality. In addition, they provide flexible frameworks from which many extensions can be envisioned.
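    The principal-direction stem test mentioned above can be sketched as follows: compute the covariance of a candidate point segment and check that its dominant eigenvector (found here by simple power iteration) is near-vertical. The verticality threshold is an assumed illustrative parameter, not the thesis's setting.

```python
def principal_direction(points, iters=50):
    """Dominant eigenvector of the 3x3 covariance of a point segment,
    via power iteration. points: list of (x, y, z) tuples."""
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]  # centroid
    cov = [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# A roughly vertical column of points (small horizontal jitter, tall in z):
stem = [((k % 3) * 0.01, (k % 5) * 0.01, 0.1 * k) for k in range(50)]
v = principal_direction(stem)
is_stem = abs(v[2]) > 0.9  # principal direction close to vertical
```

    For trunk segments the z-component of the principal direction dominates; crown or shrub segments yield a much less vertical direction and fail the test.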