3,115 research outputs found

    Learning Aerial Image Segmentation from Online Maps

    Get PDF
    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de-facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, which aggravates the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities are available in large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data from distant locations. Our results indicate that satisfying performance can be obtained with significantly less manual annotation effort by exploiting noisy large-scale training data.
    Comment: Published in IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
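    As a rough illustration of the kind of setup the abstract describes, the sketch below fine-tunes an off-the-shelf segmentation CNN on (image, noisy mask) pairs; the architecture, class set, and stand-in data are assumptions for illustration, not the paper's actual configuration.

# Sketch: training a segmentation CNN on noisy, OSM-derived labels.
# Assumptions: 3 classes (background, building, road); the tensors below are
# hypothetical stand-ins for aerial tiles and rasterized OpenStreetMap masks.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, num_classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # per-pixel loss; tolerates some label noise

def train_step(images, noisy_masks):
    """One optimization step on (B, 3, H, W) images and (B, H, W) index masks."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (B, 3, H, W) per-pixel class scores
    loss = criterion(logits, noisy_masks)  # masks hold class indices 0..2
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random stand-in data.
loss = train_step(torch.randn(2, 3, 256, 256),
                  torch.randint(0, 3, (2, 256, 256)))
print(f"batch loss: {loss:.3f}")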

    The role of earth observation in an integrated deprived area mapping "system" for low-to-middle income countries

    Get PDF
    Urbanization in the global South has been accompanied by the proliferation of vast informal and marginalized urban areas that lack access to essential services and infrastructure. UN-Habitat estimates that close to a billion people currently live in these deprived and informal urban settlements, generally grouped under the term of urban slums. Two major knowledge gaps undermine the efforts to monitor progress towards the corresponding sustainable development goal (i.e., SDG 11, Sustainable Cities and Communities). First, the data available for cities worldwide is patchy and insufficient to differentiate between the diversity of urban areas with respect to their access to essential services and their specific infrastructure needs. Second, existing approaches used to map deprived areas (i.e., aggregated household data, Earth observation (EO), and community-driven data collection) are mostly siloed, and, individually, they often lack transferability and scalability and fail to include the opinions of different interest groups. In particular, EO-based deprived area mapping approaches are mostly top-down, with very little attention given to ground information and interaction with urban communities and stakeholders. Existing top-down methods should be complemented with bottom-up approaches to produce routinely updated, accurate, and timely deprived area maps. In this review, we first assess the strengths and limitations of existing deprived area mapping methods. We then propose an Integrated Deprived Area Mapping System (IDeAMapS) framework that leverages the strengths of EO- and community-based approaches. The proposed framework offers a way forward to map deprived areas globally, routinely, and with maximum accuracy to support SDG 11 monitoring and the needs of different interest groups.

    Development of a land cover classification model using a CNN-based FusionNet neural network and a farmland boundary extraction algorithm

    Get PDF
    Thesis (M.S.) -- Seoul National University Graduate School: College of Agriculture and Life Sciences, Department of Landscape Architecture and Rural Systems Engineering (Rural Systems Engineering major), February 2021. Advisor: Inhong Song.
    The rapid update of land cover maps is necessary because spatial information of land cover is widely used in various areas. However, these maps have been released or updated at intervals of several years, primarily owing to the manual digitizing method of production, which is time-consuming and labor-intensive. This study aimed to develop a land cover classification model, based on the concept of a convolutional neural network (CNN), that classifies land cover labels from high-resolution remote sensing (HRRS) images, and to increase the classification accuracy in agricultural areas with a parcel boundary extraction algorithm.
    The developed model comprises three modules: pre-processing, land cover classification, and post-processing. The pre-processing module diversifies the perspective on the HRRS imagery by splitting it into tiles with 75% overlap, to reduce the misclassification that can occur in a single view. The land cover classification module was designed based on the FusionNet model structure and assigns the optimal land cover type to each pixel of the separated HRRS images. The post-processing module determines the ultimate land cover type for each pixel by taking the mode of the several per-view classification results and, in agricultural areas, by aggregating the pixel classifications over each parcel-boundary unit. The developed model was trained with land cover maps and orthographic images (area: 547 km2) from Jeonnam province in Korea. Model validation was conducted on two spatially and temporally different sites: Subuk-myeon of Jeonnam province in 2018 and Daseo-myeon of Chungbuk province in 2016. In the respective validation sites, the model's overall accuracies were 0.81 and 0.71, and the kappa coefficients were 0.75 and 0.64, implying substantial model performance. The performance was particularly good when parcel boundaries were considered in agricultural areas, with an overall accuracy of 0.89 and a kappa coefficient of 0.81 (almost perfect). It was concluded that the developed model may help perform rapid and accurate land cover updates, especially for agricultural areas.
    Contents: Chapter 1. Introduction (1.1 Study background; 1.2 Objective of thesis). Chapter 2. Literature review (2.1 Development of remote sensing technique; 2.2 Land cover segmentation; 2.3 Land boundary extraction). Chapter 3. Development of the land cover classification model (3.1 Conceptual structure; 3.2 Pre-processing module; 3.3 CNN-based land cover classification module; 3.4 Post-processing module: determination of land cover in a pixel unit, aggregation of land cover to parcel boundary). Chapter 4. Verification of the land cover classification model (4.1 Study area and data acquisition; 4.2 Training the land cover classification model; 4.3 Verification method; 4.4 Verification of land cover classification model). Chapter 5. Conclusions. References. Abstract in Korean.
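    A minimal numpy sketch of the pre- and post-processing idea described above, assuming a hypothetical per-tile classifier; the tile size, class count, and classify_tile function are illustrative stand-ins, not the thesis's actual components.

# Sketch: classify overlapping tiles (75% overlap) and take the per-pixel
# mode over all tile views covering a pixel, as in the described modules.
import numpy as np

def classify_with_overlap(image, classify_tile, tile=256, n_classes=8):
    """image: (H, W, bands); classify_tile: (tile, tile, bands) -> (tile, tile) labels."""
    H, W, _ = image.shape
    stride = tile // 4                      # 75% overlap between adjacent tiles
    votes = np.zeros((H, W, n_classes), dtype=np.int32)
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            labels = classify_tile(image[y:y+tile, x:x+tile])
            for c in range(n_classes):      # accumulate one vote per tile view
                votes[y:y+tile, x:x+tile, c] += (labels == c)
    return votes.argmax(axis=-1)            # most frequent label per pixel

# Usage with a stand-in "classifier" that thresholds the first band.
# (Parcel aggregation would then take the mode of these labels per parcel.)
img = np.random.rand(512, 512, 4)
pred = classify_with_overlap(img, lambda t: (t[..., 0] > 0.5).astype(int), n_classes=2)
print(pred.shape)  # (512, 512)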

    Large-Scale Mapping of Human Activity using Geo-Tagged Videos

    Full text link
    This paper is the first work to perform spatio-temporal mapping of human activity using the visual content of geo-tagged videos. We utilize a recent deep-learning based video analysis framework, termed hidden two-stream networks, to recognize a range of activities in YouTube videos. This framework is efficient and can run in real time or faster, which is important for recognizing events as they occur in streaming video and for reducing latency when analyzing already captured video. This is, in turn, important for using video in smart-city applications. We perform a series of experiments to show that our approach is able to accurately map activities both spatially and temporally. We also demonstrate the advantages of using the visual content over the tags/titles.
    Comment: Accepted at ACM SIGSPATIAL 2017
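    A small sketch of the spatio-temporal aggregation such a mapping implies, assuming hypothetical per-video activity predictions with a geotag and an hour; the paper's actual gridding and activity categories are not specified here.

# Sketch: bin geo-tagged activity predictions into a spatio-temporal grid.
import pandas as pd

# Hypothetical per-video outputs: recognized activity plus geotag and hour.
videos = pd.DataFrame({
    "activity": ["parade", "parade", "sports", "sports", "sports"],
    "lat": [40.71, 40.72, 40.70, 34.05, 34.06],
    "lon": [-74.00, -74.01, -73.99, -118.24, -118.25],
    "hour": [14, 14, 15, 18, 18],
})

cell = 0.05  # grid resolution in degrees (illustrative)
videos["cell_lat"] = (videos["lat"] // cell) * cell
videos["cell_lon"] = (videos["lon"] // cell) * cell

# Count of each activity per grid cell and hour = a spatio-temporal activity map.
activity_map = (videos.groupby(["cell_lat", "cell_lon", "hour", "activity"])
                      .size().rename("count").reset_index())
print(activity_map)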

    Automated extraction of water bodies from NIR and RGB aerial imagery in northern Alaska using supervised and unsupervised machine learning techniques

    Get PDF
    Thawing and freezing of permafrost ground are affected by various factors: air temperature, vegetation, snow accumulation, subsurface physical properties, and moisture. As air temperatures rise, permafrost temperatures increase and thermokarst activity intensifies. Thermokarst instability disturbs the hydrology, topography, and soils, and the sediment and nutrient fluxes to lakes and streams; lakes and ponds are ubiquitous in permafrost regions. Plants and animals fulfil their nutrient needs from water in the environment, and other animals acquire their needs from the plants and animals they consume, so the degradation of lakes and ponds strongly affects biogeochemical cycles. This research aims to implement an automated workflow to map the water bodies caused by permafrost thawing. The scientific challenge is to test how well machine learning techniques can assist the observation and mapping of water bodies in aerial imagery. The study area lies mainly in northern Alaska and consists of six locations: Ikpikpuk, Teshekpuk Central, Teshekpuk East, Teshekpuk West, Meade East, and Meade West. To estimate the degradation of the high-centred polygon distribution and the potential degradation of ice wedges, I mapped the polygonal terrain and ice-wedge melt ponds using aerial photogrammetry data of NIR and RGB bands captured by the Thaw Trend Air 2019 flight campaign. The techniques used are unsupervised K-means classification, supervised segment mean shift, and supervised random forest classification to model the water polygons from airborne photogrammetry. The machine learning classification proceeds in two phases: the first tests the accuracy of each technique and concludes which method is best adapted; the second prepares the orthomosaic data, runs the chosen workflow, and visualizes the final results. Applying a morphology filter with the opening option and a clean-boundary filter before classification is practical, as they sharpen image features. The conclusion is to use random forest classification, as it was helpful for all NIR orthomosaics; however, the RGB images required downsampling to provide adequate accuracy.
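    A compact scikit-learn sketch of the pixel-wise random forest step described above, using synthetic stand-ins for the stacked NIR and RGB bands and the water/non-water training labels.

# Sketch: random forest classification of water vs. non-water pixels
# from stacked NIR+RGB bands (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W = 128, 128
bands = rng.random((H, W, 4))              # NIR, R, G, B as a 4-band stack
truth = (bands[..., 0] < 0.3).astype(int)  # pretend low NIR means water

X = bands.reshape(-1, 4)                   # one sample per pixel
y = truth.reshape(-1)

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X[::10], y[::10])                   # train on a pixel subsample
water_mask = rf.predict(X).reshape(H, W)   # full-scene water map
print(f"mapped water fraction: {water_mask.mean():.2f}")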

    Spatially adaptive semiโ€supervised learning with Gaussian processes for hyperspectral data analysis

    Full text link
    This paper presents a semi-supervised learning algorithm called Gaussian process expectation-maximization (GP-EM) for classification of land cover based on hyperspectral data analysis. Model parameters for each land cover class are first estimated by a supervised algorithm using Gaussian process regressions to find spatially adaptive parameters, and the estimated parameters are then used to initialize a spatially adaptive mixture-of-Gaussians model. The mixture model is updated by expectation-maximization iterations using the unlabeled data, and the spatially adaptive parameters for unlabeled instances are obtained by Gaussian process regressions with soft assignments. Spatially and temporally distant hyperspectral images taken of the Botswana area by the NASA EO-1 satellite are used for experiments. Detailed empirical evaluations show that the proposed framework performs significantly better than all previously reported results obtained by a wide variety of alternative approaches and algorithms on the same datasets. © 2011 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 4: 358-371, 2011.
    Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/87150/1/10119_ftp.pd
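    A rough numpy/scikit-learn sketch of the core GP-EM idea under strong simplifying assumptions (one band, two classes, a fixed shared variance, and no weighted GP refit): Gaussian process regressions give spatially adaptive class means, and an E-step computes soft assignments for unlabeled pixels. It illustrates the mechanism, not the authors' exact algorithm.

# Sketch: spatially adaptive class means via GP regression, then EM-style
# soft assignment of unlabeled pixels (simplified 1-band, 2-class case).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Labeled pixels: spatial coords -> spectral value, for each of two classes.
coords_l = rng.random((40, 2))
labels_l = rng.integers(0, 2, 40)
spectra_l = np.where(labels_l == 0, 0.3, 0.7) + 0.2 * coords_l[:, 0] \
            + 0.05 * rng.standard_normal(40)

# Supervised stage: one GP per class models the spatial drift of its mean.
gps = [GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-2)
           .fit(coords_l[labels_l == k], spectra_l[labels_l == k])
       for k in (0, 1)]

# Unlabeled pixels get spatially adaptive class means from the GPs.
coords_u = rng.random((200, 2))
spectra_u = 0.5 + 0.2 * coords_u[:, 0] + 0.1 * rng.standard_normal(200)
mu = np.stack([gp.predict(coords_u) for gp in gps], axis=1)   # (200, 2)

# E-step: responsibilities under per-pixel Gaussian likelihoods.
var, prior = 0.05, np.array([0.5, 0.5])
lik = np.exp(-(spectra_u[:, None] - mu) ** 2 / (2 * var)) * prior
resp = lik / lik.sum(axis=1, keepdims=True)
# (Full GP-EM would now refit the GPs with these soft assignments and iterate.)
print(resp[:3])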

    A Comparison of the Classification of Vegetation Characteristics by Spectral Mixture Analysis and Standard Classifiers on Remotely Sensed Imagery within the Siberia Region

    Get PDF
    As an alternative to the traditional method of inferring vegetation cover characteristics from satellite data by classifying each pixel into a specific land cover type based on predefined classification schemes, the Spectral Mixture Analysis (SMA) method is applied to images of the Siberia region. A linear mixture model was applied to determine proportional estimates of land cover for (a) agriculture and floodplain soils, (b) broadleaf, and (c) conifer classes in pixels of 30 m resolution Landsat data. To evaluate the areal estimates, the results were compared with ground truth data as well as with estimates derived from a more sophisticated method of image classification. SMA provided improved estimates of endmember values and subpixel areal estimates of vegetation cover classes compared with the traditional approach of using predefined classification schemes with discrete numbers of cover types. This technique enables the estimation of proportional land cover types within a single pixel and could potentially serve as a tool for deriving improved estimates of vegetation parameters that are necessary for modeling carbon processes.
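    A short sketch of the linear unmixing step with made-up endmember spectra; the sum-to-one constraint is approximated with the usual row-augmentation trick on top of non-negative least squares.

# Sketch: linear spectral mixture analysis for one pixel.
# Solve min ||E f - p|| with f >= 0 and sum(f) ~= 1 (fully constrained LS,
# approximated by appending a heavily weighted sum-to-one row).
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (6 bands x 3 classes):
# soils/agriculture, broadleaf, conifer.
E = np.array([[0.30, 0.05, 0.04],
              [0.35, 0.08, 0.06],
              [0.32, 0.06, 0.05],
              [0.40, 0.45, 0.30],
              [0.45, 0.30, 0.20],
              [0.42, 0.25, 0.15]])

pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]  # known mixture for testing

delta = 100.0  # weight enforcing the sum-to-one constraint
E_aug = np.vstack([E, delta * np.ones((1, 3))])
p_aug = np.append(pixel, delta)

fractions, _ = nnls(E_aug, p_aug)
print(fractions.round(3))  # ~ [0.5, 0.3, 0.2] per-class cover fractions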

    Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept

    Get PDF
    The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE's aim is to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability, starting in Europe and then expanding to regions in Africa. The objective of EBONE is to deliver: 1. a sound scientific basis for the production of statistical estimates of stock and change of key indicators; 2. the development of a system for estimating past changes and for forecasting and testing policy options and management strategies for threatened ecosystems and species; 3. a proposal for a cost-effective biodiversity monitoring system. There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, EO could deliver the type of spatial and temporal coverage that is beyond reach with in-situ efforts. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information whilst reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and the options for integrating in-situ observations with EO within the context of the EBONE concept (cf. EBONE-ID1.4). The assessment is mainly based on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme. The issues we faced were many: 1. Integration can be interpreted in different ways: one interpretation is the combined use of independent data sets to deliver a different but improved data set; another is the use of one data set to complement another. 2. The targeted improvement varies with stakeholder group: some seek more efficiency, others more reliable estimates (accuracy and/or precision), others more detail in space and/or time, or more of everything. 3. Integration requires a link between the data sets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and biodiversity observed in-situ is a function of many variables, for example the spatial scale and timing of the observations, the adopted nomenclature for classification, the complexity of the landscape in terms of composition, spatial structure and the physical environment, and the habitat and land cover types under consideration. 4. The type of EO data available varies (as a function of, e.g., budget, the size and location of the region, cloudiness, and national and/or international investment in airborne campaigns or space technology), which determines its capability to deliver the required output. EO and in-situ data could be combined in different ways, depending on the type of integration we wanted to achieve and the targeted improvement. We aimed for an improvement in accuracy (i.e., the reduction in error of our indicator estimate calculated for an environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data.
    EBONE, in its initial development, focused on three main indicators: (i) the extent and change of habitats of European interest, in the context of a general habitat assessment; (ii) the abundance and distribution of selected species (birds, butterflies and plants); and (iii) the fragmentation of natural and semi-natural areas. For habitat extent, we decided that it did not matter how in-situ data were integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and the precision could consistently be improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options, in which EO and in-situ data play different roles: using in-situ samples to re-calibrate a habitat map independently derived from EO; improving the accuracy of in-situ sampled habitat statistics by post-stratification with correlated EO data; and using in-situ samples to train the classification of EO data into habitat types where the EO data deliver full coverage or a larger number of samples. For some of these cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved. Restricted access to Europe-wide species data prevented work on the indicator 'abundance and distribution of species'. With respect to the indicator 'fragmentation', we investigated ways of delivering EO-derived measures of habitat patterns that are meaningful to sampled in-situ observations.
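    A toy numpy sketch of the post-stratification option mentioned above: an in-situ indicator estimate is improved by weighting stratum means with EO-derived area shares; the strata, weights, and sample values are invented for illustration.

# Sketch: post-stratified estimate of a habitat indicator. An EO map gives
# the area share W_h of each stratum; in-situ samples give stratum means.
import numpy as np

# EO-derived area shares for three hypothetical strata (sum to 1).
W = np.array([0.6, 0.3, 0.1])

# In-situ observations of the indicator, grouped by the stratum of the
# EO map cell in which each sample falls (invented values).
samples = [np.array([0.20, 0.25, 0.22]),     # stratum 1
           np.array([0.50, 0.55]),           # stratum 2
           np.array([0.80, 0.85, 0.90])]     # stratum 3

naive = np.concatenate(samples).mean()                    # ignores strata
post_stratified = sum(w * s.mean() for w, s in zip(W, samples))
print(f"naive: {naive:.3f}, post-stratified: {post_stratified:.3f}")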