
    A spatiotemporal object-oriented data model for landslides (LOOM)

    LOOM (landslide object-oriented model) is presented here as a data structure for landslide inventories based on the object-oriented paradigm. It aims at the effective storage and manipulation, in a single dataset, of the complex spatial and temporal relations between landslides recorded and mapped in an area. Spatial relations are handled through a hierarchical classification based on topological rules, and two levels of aggregation are defined: (i) landslide complexes, grouping spatially connected landslides of the same type, and (ii) landslide systems, merging landslides of any type sharing a spatial connection. For the aggregation procedure, a minimal functional interaction between landslide objects is defined as a spatial overlap between them. Temporal characterization of landslides is achieved by assigning to each object an exact date or a time range for its occurrence, integrating both the time-frame and event-based approaches. The combination of spatial integrity and temporal characterization ensures the storage of vertical relations between landslides, so that the superimposition of events can be easily retrieved by querying the temporal dataset. The proposed methodology for landslide inventorying has been tested on selected case studies in the Cilento UNESCO Global Geopark (Italy). We demonstrate that the proposed LOOM model avoids data fragmentation, redundancy, and topological inconsistency between the digital data and the real-world features. The application proved powerful for reconstructing the gravity-induced deformation history of hillslopes and, hence, for predicting their evolution.
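    The two-level aggregation described above can be sketched as a union-find grouping over pairwise spatial overlaps. This is a minimal illustration, not the paper's implementation: overlap is tested with axis-aligned bounding boxes rather than full polygon topology, and the data values are hypothetical.

```python
# Minimal sketch of LOOM-style aggregation (assumed simplification):
# spatially overlapping landslides of the same type form "complexes";
# overlapping landslides of any type form "systems".

def overlaps(a, b):
    """True if two (xmin, ymin, xmax, ymax) boxes share any area."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def aggregate(landslides, same_type_only):
    """Union-find grouping of landslides connected by spatial overlap."""
    parent = list(range(len(landslides)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, a in enumerate(landslides):
        for j, b in enumerate(landslides[i + 1:], i + 1):
            if same_type_only and a["type"] != b["type"]:
                continue
            if overlaps(a["bbox"], b["bbox"]):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(landslides)):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical inventory: two flows and a slide.
slides = [
    {"type": "flow",  "bbox": (0, 0, 2, 2)},
    {"type": "flow",  "bbox": (1, 1, 3, 3)},      # overlaps slide 0, same type
    {"type": "slide", "bbox": (2.5, 2.5, 4, 4)},  # overlaps slide 1, other type
]
complexes = aggregate(slides, same_type_only=True)   # [[0, 1], [2]]
systems   = aggregate(slides, same_type_only=False)  # [[0, 1, 2]]
```

    A real implementation would test topological intersection of mapped polygons (e.g. with a spatial database or a geometry library) rather than bounding boxes.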

    The Profiling Potential of Computer Vision and the Challenge of Computational Empiricism

    Computer vision and other biometric data science applications have commenced a new project of profiling people. Rather than using 'transaction generated information', these systems measure the 'real world' and produce an assessment of the 'world state' - in this case an assessment of some individual trait. Instead of using proxies or scores to evaluate people, they increasingly deploy a logic of revealing the truth about reality and the people within it. While these profiling knowledge claims are sometimes tentative, they increasingly suggest that only through computation can these excesses of reality be captured and understood. This article explores the bases of those claims in the systems of measurement, representation, and classification deployed in computer vision. It asks if there is something new in this type of knowledge claim, sketches an account of a new form of computational empiricism being operationalised, and questions what kind of human subject is being constructed by these technological systems and practices. Finally, the article explores legal mechanisms for contesting the emergence of computational empiricism as the dominant knowledge platform for understanding the world and the people within it.

    Self-supervised remote sensing feature learning: Learning Paradigms, Challenges, and Future Works

    Deep learning has achieved great success in learning features from massive remote sensing images (RSIs). To better understand the connection between feature learning paradigms (e.g., unsupervised feature learning (USFL), supervised feature learning (SFL), and self-supervised feature learning (SSFL)), this paper analyzes and compares them from the perspective of feature learning signals, and gives a unified feature learning framework. Under this unified framework, we analyze the advantages of SSFL over the other two learning paradigms in RSI understanding tasks and give a comprehensive review of the existing SSFL work in RS, including the pre-training dataset, self-supervised feature learning signals, and the evaluation methods. We further analyze the effect of SSFL signals and pre-training data on the learned features to provide insights for improving RSI feature learning. Finally, we briefly discuss some open problems and possible research directions.

    Segmentation and Classification of Remotely Sensed Images: Object-Based Image Analysis

    Land-use-and-land-cover (LULC) mapping is crucial in precision agriculture, environmental monitoring, disaster response, and military applications. The demand for improved and more accurate LULC maps has led to the emergence of a key methodology known as Geographic Object-Based Image Analysis (GEOBIA). The core idea of GEOBIA for an object-based classification system (OBC) is to change the unit of analysis from single pixels to groups of pixels called 'objects' through segmentation. While this new paradigm solved problems and improved global accuracy, it also raised new challenges such as the loss of accuracy in categories that are less abundant, but potentially important. Although this trade-off may be acceptable in some domains, the consequences of such an accuracy loss could be potentially fatal in others (for instance, landmine detection). This thesis proposes a method to improve OBC performance by eliminating such accuracy losses. Specifically, we examine the two key players of an OBC system: hierarchical segmentation and supervised classification. Further, we propose a model to understand the source of accuracy errors in minority categories and provide a method called Scale Fusion to eliminate those errors. This proposed fusion method involves two stages. First, the characteristic scale for each category is estimated through a combination of segmentation and supervised classification. Next, these estimated scales (segmentation maps) are fused into one combined object map. Classification performance is evaluated by comparing results of the proposed multi-cut-and-fuse approach to the traditional single-cut (SC) scale selection strategy. Testing on four different data sets revealed that our proposed algorithm improves accuracy on minority classes while performing just as well on abundant categories.
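    The fusion stage described above can be sketched as follows. This is a hypothetical toy, not the thesis's algorithm: it assumes a per-category characteristic scale is already known and resolves conflicts by letting rarer categories claim pixels first, so their easily lost objects survive in the combined map; the labels and maps are made up.

```python
# Toy sketch of a multi-cut-and-fuse step (assumed simplification):
# each category is taken from the label map at its own characteristic
# scale, and rare categories claim pixels before abundant ones.

def scale_fusion(label_maps, char_scale, priority):
    """Fuse per-scale label maps into one combined object map.

    label_maps: {scale: 2-D list of labels}
    char_scale: {label: scale at which that label is most reliable}
    priority:   labels ordered rarest first
    """
    first = next(iter(label_maps.values()))
    h, w = len(first), len(first[0])
    fused = [[None] * w for _ in range(h)]
    for label in priority:
        src = label_maps[char_scale[label]]
        for r in range(h):
            for c in range(w):
                if fused[r][c] is None and src[r][c] == label:
                    fused[r][c] = label
    return fused

# Hypothetical maps: the fine scale keeps a small "mine" object that
# the coarse scale smooths away into "soil".
fine   = [["mine", "soil"], ["soil", "soil"]]
coarse = [["soil", "soil"], ["soil", "veg"]]
fused = scale_fusion({1: fine, 4: coarse},
                     char_scale={"mine": 1, "soil": 4, "veg": 4},
                     priority=["mine", "veg", "soil"])
# fused keeps the minority "mine" pixel from the fine scale
```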
    Another obstacle presented by today's remotely sensed images is the volume of information produced by our modern sensors with high spatial and temporal resolution. For instance, over this decade, 353 earth observation satellites from 41 countries are projected to be launched. Timely production of geospatial information from these large volumes is a challenge, because in traditional methods the underlying representation and information processing is still primarily pixel-based, which implies that as the number of pixels increases, so does the computational complexity. To overcome this bottleneck created by pixel-based representation, this thesis proposes a dart-based discrete topological representation (DBTR), which differs from pixel-based methods in its use of a reduced, boundary-based representation. Intuitively, the efficiency gains arise from the observation that it is lighter to represent a region by its boundary (darts) than by its area (pixels). We found that our implementation of DBTR not only improved computational efficiency, but also enhanced our ability to encode and extract spatial information. Overall, this thesis presents solutions to two problems of an object-based classification system: accuracy and efficiency. Our proposed Scale Fusion method demonstrated improvements in accuracy, while our dart-based topology representation (DBTR) showed improved efficiency in the extraction and encoding of spatial information.
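    The intuition behind the boundary-based representation can be illustrated numerically. This toy is not the thesis's DBTR (which uses darts of a combinatorial map); it only shows that for a filled region the boundary cell count grows with the perimeter while the pixel count grows with the area.

```python
# Toy illustration: boundary vs. area representation of a region.
# For an n-by-n square region, area = n*n pixels but only 4n - 4
# cells lie on the boundary.

def boundary_cells(mask):
    """Cells of a binary grid with at least one 4-neighbour outside the region."""
    h, w = len(mask), len(mask[0])
    out = set()
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]:
                    out.add((r, c))
                    break
    return out

n = 50
square = [[1] * n for _ in range(n)]
area = n * n                             # 2500 pixels in the region
perimeter = len(boundary_cells(square))  # 196 boundary cells (4n - 4)
```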

    The applications of neural network in mapping, modeling and change detection using remotely sensed data

    Thesis (Ph.D.)--Boston University. Advances in remote sensing and associated capabilities are expected to proceed in a number of ways in the era of the Earth Observing System (EOS). More complex multitemporal, multi-source data sets will become available, requiring more sophisticated analysis methods. This research explores the applications of artificial neural networks in land-cover mapping, forward and inverse canopy modeling, and change detection. For land-cover mapping, a multi-layer feed-forward neural network produced 89% classification accuracy using a single band of multi-angle data from the Advanced Solid-state Array Spectroradiometer (ASAS). The principal results include the following: directional radiance measurements contain much useful information for discrimination among land-cover classes; the combination of multi-angle and multi-spectral data improves the overall classification accuracy compared with a single multi-angle band; and neural networks can successfully learn class discrimination from directional data or multi-domain data. Forward canopy modeling shows that a multi-layer feed-forward neural network is able to predict the bidirectional reflectance distribution function (BRDF) of different canopy sites with 90% accuracy. Analysis of the signal captured by the network indicates that the canopy structural parameters, and illumination and viewing geometry, are essential for predicting the BRDF of vegetated surfaces. The inverse neural network model shows that the R² between the network-predicted canopy parameters and the actual canopy parameters is 0.85 for canopy density and 0.75 for both the crown shape and the height parameters. [TRUNCATED]
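    The inverse-model scores quoted above are coefficients of determination; as a small reminder (not code from the thesis), R² between predicted and actual parameter values is computed as:

```python
# R^2 = 1 - SS_res / SS_tot, comparing predictions against actual values.

def r_squared(actual, predicted):
    """Coefficient of determination between two equal-length sequences."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```

    A perfect inverse model gives R² = 1; values of 0.75-0.85, as reported, indicate that most but not all of the variance in the canopy parameters is recovered.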

    Sedimentological characterization of Antarctic moraines using UAVs and Structure-from-Motion photogrammetry

    In glacial environments, particle-size analysis of moraines provides insights into clast origin, transport history, depositional mechanism, and processes of reworking. Traditional methods for grain-size classification are labour-intensive, physically intrusive, and limited to patch-scale (~1 m²) observation. We apply emerging, high-resolution ground-based and unmanned-aerial-vehicle ‘Structure-from-Motion’ (UAV-SfM) photogrammetry to recover grain-size information across a moraine surface in the Heritage Range, Antarctica. SfM data products were benchmarked against equivalent datasets acquired using terrestrial laser scanning, and were found to be accurate to within 1.7 mm and 50 mm for patch- and site-scale modelling, respectively. Grain-size distributions were obtained through digital grain classification, or ‘photo-sieving’, of patch-scale SfM orthoimagery. Photo-sieved distributions were accurate to <2 mm compared to control distributions derived from dry sieving. A relationship between patch-scale median grain size and the standard deviation of local surface elevations was applied to a site-scale UAV-SfM model to facilitate upscaling and the production of a spatially continuous map of the median grain size across a 0.3 km² area of moraine. This highly automated workflow for site-scale sedimentological characterization eliminates much of the subjectivity associated with traditional methods and forms a sound basis for subsequent glaciological process interpretation and analysis.
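    The upscaling step described above amounts to calibrating a relation between patch-scale surface roughness and median grain size, then applying it wherever the site-scale model provides a roughness value. The numbers below are made up for illustration; the sketch assumes a simple linear relation fitted by ordinary least squares.

```python
# Hypothetical sketch of the grain-size upscaling step: calibrate
# D50 = a * roughness + b on patches, then predict D50 from
# site-scale roughness values.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Calibration patches: (roughness in mm, dry-sieved D50 in mm) -- made-up values.
roughness = [2.0, 4.0, 6.0, 8.0]
d50       = [5.0, 9.0, 13.0, 17.0]
a, b = fit_line(roughness, d50)

# Predict D50 wherever the site-scale SfM model yields a roughness value.
site_d50 = [a * r + b for r in (3.0, 7.0)]
```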