
    The role of earth observation in an integrated deprived area mapping “system” for low-to-middle income countries

    Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term "urban slums". Two major knowledge gaps undermine the efforts to monitor progress towards the corresponding sustainable development goal (i.e., SDG 11—Sustainable Cities and Communities). First, the data available for cities worldwide are patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed, and, individually, they often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived-area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived area maps. In this review, we first assess the strengths and limitations of existing deprived area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy to support SDG 11 monitoring and the needs of different interest groups.

    Remote Sensing of Riparian Areas and Invasive Species

    Riparian areas are critical landscape features situated between terrestrial and aquatic environments, which provide a host of ecosystem functions and services. Although important to the environmental health of an ecosystem, riparian areas have been degraded by anthropogenic disturbances. These routine disturbances have decreased the resiliency of riparian areas and increased their vulnerability to invasive plant species. Invasive plant species are non-native species which cause harm to the ecosystem and thrive in riparian areas due to access to optimal growing conditions. Remote sensing provides an opportunity to manage riparian habitats at a regional and local level with imagery collected by satellites and unmanned aerial systems (UAS). The aim of this study was two-fold: firstly, to investigate riparian delineation methods using moderate resolution satellite imagery; and secondly, to assess the feasibility of UAS for detecting the invasive plant Fallopia japonica (Japanese Knotweed) within the defined areas. I gathered imagery from the Landsat 8 OLI and Sentinel-2 satellites to complete the regional level study and collected UAS imagery at a study site in northern New Hampshire for the local level portion. I obtained a modest overall accuracy from the regional riparian classification of 59% using the Sentinel-2 imagery. The local invasive species classification yielded thematic maps with overall accuracies of up to 70%, which is comparable to other studies with the same focus species. Remote sensing is a valuable tool in the management of riparian habitat and invasive plant species.
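Vegetation mapping from Landsat 8 or Sentinel-2 imagery typically starts from a spectral index such as NDVI. The sketch below computes NDVI per pixel and applies a simple threshold; the reflectance values and the 0.4 cutoff are invented for illustration and are not taken from the study above.

```python
def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    if red + nir == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Toy per-pixel reflectances (red, nir): dense vegetation, bare soil, water.
pixels = [(0.05, 0.60), (0.25, 0.30), (0.10, 0.04)]
labels = ["vegetation" if ndvi(r, n) > 0.4 else "non-vegetation"
          for r, n in pixels]
print(labels)  # vegetation reflects strongly in NIR and weakly in red
```

Real workflows compute this band arithmetic over whole rasters and feed the index, alongside other bands, into a supervised classifier.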

    TaLAM: Mapping Land Cover in Lowlands and Uplands with Satellite Imagery

    End-of-Project Report. The Towards Land Cover Accounting and Monitoring (TaLAM) project is part of Ireland’s response to creating a national land cover mapping programme. Its aims are to demonstrate how the new digital map of Ireland, Prime2, from Ordnance Survey Ireland (OSI), can be combined with satellite imagery to produce land cover maps.

    Extracting Agricultural Fields from Remote Sensing Imagery Using Graph-Based Growing Contours

    Knowledge of the location and extent of agricultural fields is required for many applications, including agricultural statistics, environmental monitoring, and administrative policies. Furthermore, many mapping applications, such as object-based classification, crop type distinction, or large-scale yield prediction benefit significantly from the accurate delineation of fields. Still, most existing field maps and observation systems rely on historic administrative maps or labor-intensive field campaigns. These are often expensive to maintain and quickly become outdated, especially in regions of frequently changing agricultural patterns. However, exploiting openly available remote sensing imagery (e.g., from the European Union’s Copernicus programme) may allow for frequent and efficient field mapping with minimal human interaction. We present a new approach to extracting agricultural fields at the sub-pixel level. It consists of boundary detection and a field polygon extraction step based on a newly developed, modified version of the growing snakes active contours model we refer to as graph-based growing contours. This technique is capable of extracting complex networks of boundaries present in agricultural landscapes, and is largely automatic with little supervision required. The whole detection and extraction process is designed to work independently of sensor type, resolution, or wavelength. As a test case, we applied the method to two regions of interest in a study area in northern Germany using multi-temporal Sentinel-2 imagery. Extracted fields were compared visually and quantitatively to ground reference data. The technique proved reliable in producing polygons closely matching reference data, both in terms of boundary location and statistical proxies such as median field size and total acreage.
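The boundary-detection step that precedes contour extraction can be illustrated with a generic gradient operator. The sketch below applies standard Sobel kernels to a toy two-field image; it is a minimal stand-in for low-level edge evidence, not the authors' graph-based growing contours implementation.

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Sobel gradient magnitude for interior pixels of a 2-D list of numbers."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: two homogeneous "fields" with a vertical boundary between
# columns 2 and 3; the gradient peaks there and is zero inside the fields.
image = [[10, 10, 10, 50, 50, 50] for _ in range(5)]
mag = gradient_magnitude(image)
```

A contour-growing method would then trace connected ridges in such a gradient map and close them into field polygons.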

    Statistical and image processing techniques for remote sensing in agricultural monitoring and mapping

    Throughout most of history, increasing agricultural production has been largely driven by expanded land use, and – especially in the 19th and 20th century – by technological innovation in breeding, genetics and agrochemistry as well as intensification through mechanization and industrialization. More recently, information technology, digitalization and automation have started to play a more significant role in achieving higher productivity with lower environmental impact and reduced use of resources. This includes two trends on opposite scales: precision farming applying detailed observations on sub-field level to support local management, and large-scale agricultural monitoring observing regional patterns in plant health and crop productivity to help manage macroeconomic and environmental trends. In both contexts, remote sensing imagery plays a crucial role that is growing due to decreasing costs and increasing accessibility of both data and means of processing and analysis. The large archives of free imagery with global coverage can be expected to further increase adoption of remote sensing techniques in the coming years. This thesis addresses multiple aspects of remote sensing in agriculture by presenting new techniques in three distinct research topics: (1) remote sensing data assimilation in dynamic crop models; (2) agricultural field boundary detection from remote sensing observations; and (3) contour extraction and field polygon creation from remote sensing imagery. These key objectives are achieved through combining methods of probability analysis, uncertainty quantification, evolutionary learning and swarm intelligence, graph theory, image processing, deep learning and feature extraction. Four new techniques have been developed. Firstly, a new data assimilation technique based on statistical distance metrics and probability distribution analysis to achieve a flexible representation of model- and measurement-related uncertainties.
Secondly, a method for detecting boundaries of agricultural fields based on remote sensing observations designed to rely only on image-based information in multi-temporal imagery. Thirdly, an improved boundary detection approach based on deep learning techniques and a variety of image features. Fourthly, a new active contours method called Graph-based Growing Contours (GGC) that allows automated extraction of complex boundary networks from imagery. The new approaches are tested and evaluated on multiple study areas in the states of Schleswig-Holstein, Niedersachsen and Sachsen-Anhalt, Germany, based on combine harvester measurements, cadastral data and manual mappings. All methods were designed with flexibility and applicability in mind. They performed similarly to or better than other existing methods and showed potential for large-scale application and synergetic use. Thanks to low data requirements and flexible use of inputs, their application is constrained neither to the specific applications presented here nor to a specific type of sensor or imagery. This flexibility, in theory, enables their use even outside the field of remote sensing.
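The core idea of distribution-based data assimilation is to quantify the mismatch between a model prediction and a measurement, each represented as a probability distribution with its own uncertainty. The sketch below uses the Bhattacharyya distance between two one-dimensional Gaussians as one standard statistical distance; the dissertation's actual metric and weighting scheme may differ, and the LAI numbers are invented.

```python
import math

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussians N(mu1, var1), N(mu2, var2)."""
    term_var = 0.25 * math.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    return term_var + term_mean

# Hypothetical example: a crop model predicts leaf area index 3.0 +/- 0.5,
# while a remote sensing retrieval reports 3.4 +/- 0.3 (standard deviations).
d = bhattacharyya_gaussian(3.0, 0.5 ** 2, 3.4, 0.3 ** 2)
# Smaller distance -> better agreement -> stronger pull toward the observation.
```

In an assimilation loop, such a distance could weight how strongly each observation corrects the model state at each time step.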

    A systematic review of the use of Deep Learning in Satellite Imagery for Agriculture

    Agricultural research is essential for increasing food production to meet the requirements of an increasing population in the coming decades. Satellite technology has been improving rapidly, and deep learning has seen much success in generic computer vision tasks and many application areas, which together present an important opportunity to improve the analysis of agricultural land. Here we present a systematic review of 150 studies to find the current uses of deep learning on satellite imagery for agricultural research. Although we identify 5 categories of agricultural monitoring tasks, the majority of the research interest is in crop segmentation and yield prediction. We found that, when used, modern deep learning methods consistently outperformed traditional machine learning across most tasks; the only exception was that Long Short-Term Memory (LSTM) Recurrent Neural Networks did not consistently outperform Random Forests (RF) for yield prediction. The reviewed studies have largely adopted methodologies from generic computer vision, except for one major omission: benchmark datasets are not utilised to evaluate models across studies, making it difficult to compare results. Additionally, some studies have specifically utilised the extra spectral resolution available in satellite imagery, but other divergent properties of satellite images - such as the hugely different scales of spatial patterns - are not being taken advantage of in the reviewed studies.

    Derivation of forest inventory parameters from high-resolution satellite imagery for the Thunkel area, Northern Mongolia. A comparative study on various satellite sensors and data analysis techniques.

    With the demise of the Soviet Union and the transition to a market economy starting in the 1990s, Mongolia has been experiencing dramatic changes resulting in social and economic disparities and an increasing strain on its natural resources. The situation is exacerbated by a changing climate, the erosion of forestry related administrative structures, and a lack of law enforcement activities. Mongolia’s forests have been afflicted with a dramatic increase in degradation due to human and natural impacts such as overexploitation and wildfire occurrences. In addition, forest management practices are far from being sustainable. In order to provide useful information on how to viably and effectively utilise the forest resources in the future, the gathering and analysis of forest related data is pivotal. Although a National Forest Inventory was conducted in 2016, very little reliable and scientifically substantiated information exists related to a regional or even local level. This lack of detailed information warranted a study performed in the Thunkel taiga area in 2017 in cooperation with the GIZ. In this context, we hypothesise that (i) tree species and composition can be identified utilising the aerial imagery, (ii) tree height can be extracted from the resulting canopy height model with accuracies commensurate with field survey measurements, and (iii) high-resolution satellite imagery is suitable for the extraction of tree species, the number of trees, and the upscaling of timber volume and basal area based on the spectral properties. The outcomes of this study illustrate quite clearly the potential of employing UAV imagery for tree height extraction (R2 of 0.9) as well as for species and crown diameter determination. However, in a few instances, the visual interpretation of the aerial photographs was determined to be superior to the computer-aided automatic extraction of forest attributes. In addition, imagery from various satellite sensors (e.g.
Sentinel-2, RapidEye, WorldView-2) proved to be well suited for the delineation of burned areas and the assessment of tree vigour. Furthermore, recently developed sophisticated classification approaches such as Support Vector Machines and Random Forests appear to be well tailored for tree species discrimination (overall accuracy of 89%). Object-based classification approaches appear highly suitable for very high-resolution imagery; at medium scale, however, pixel-based classifiers outperformed them. It is also suggested that high radiometric resolution bears the potential to compensate for the lack of spatial detectability in the imagery. Quite surprising was the occurrence of dark taiga species in the riparian areas, beyond their natural habitat range. The presented results matrix and the interpretation key have been devised as a decision tool and/or a vademecum for practitioners. In consideration of future projects and to facilitate the improvement of the forest inventory database, the establishment of permanent sampling plots in the Mongolian taigas is strongly advised.

    Image Classification of Marine-Terminating Outlet Glaciers using Deep Learning Methods

    A wealth of research has focused on elucidating the key controls on mass loss from the Greenland and Antarctic ice sheets in response to climate forcing, specifically in relation to the drivers of marine-terminating outlet glacier change. Despite the burgeoning availability of medium resolution satellite data, the manual methods traditionally used to monitor change of marine-terminating outlet glaciers from satellite imagery are time-consuming and can be subjective, especially where a mélange of icebergs and sea-ice exists at the terminus. To address this, recent advances in deep learning applied to image processing have created a new frontier in the field of automated delineation of glacier termini. However, at this stage, there remains a paucity of research on the use of deep learning for pixel-level semantic image classification of outlet glacier environments. This project develops and tests a two-phase deep learning approach based on a well-established convolutional neural network (CNN) called VGG16 for automated classification of Sentinel-2 satellite images. The novel workflow, termed CNN-Supervised Classification (CSC), was originally developed for fluvial settings but is adapted here to produce multi-class outputs for test imagery of glacial environments containing marine-terminating outlet glaciers in eastern Greenland. Results show mean F1 scores up to 95% for in-sample test imagery and 93% for out-of-sample test imagery, with significant improvements over traditional pixel-based methods such as band ratio techniques. This demonstrates the robustness of the deep learning workflow for automated classification despite the complex characteristics of the imagery. 
Future research could focus on the integration of deep learning classification workflows with platforms such as Google Earth Engine (GEE) to classify imagery more efficiently and produce datasets for a range of glacial applications without the need for substantial prior experience in coding or deep learning.
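The mean F1 scores reported above are straightforward to reproduce from per-class counts of true positives, false positives, and false negatives. The sketch below shows the computation; the three class names and counts are invented for illustration and are not the study's actual results (the VGG16-based CSC workflow itself is not shown).

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """Per-class F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical (tp, fp, fn) counts for three glacial surface classes.
counts = {
    "glacier ice": (90, 5, 10),
    "ice melange": (80, 15, 10),
    "open water": (95, 5, 5),
}
scores = {cls: f1(*c) for cls, c in counts.items()}
mean_f1 = sum(scores.values()) / len(scores)  # unweighted (macro) mean F1
```

Macro-averaging, as here, weights each class equally; a frequency-weighted mean would instead emphasize the dominant surface classes.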