
    Differential recruitment of brain networks following route and cartographic map learning of spatial environments.

    An extensive neuroimaging literature has helped characterize the brain regions involved in navigating a spatial environment. Far less is known, however, about the brain networks involved when learning a spatial layout from a cartographic map. To compare the two means of acquiring a spatial representation, participants learned spatial environments either by directly navigating them or learning them from an aerial-view map. While undergoing functional magnetic resonance imaging (fMRI), participants then performed two different tasks to assess knowledge of the spatial environment: a scene and orientation dependent perceptual (SOP) pointing task and a judgment of relative direction (JRD) of landmarks pointing task. We found three brain regions showing significant effects of route vs. map learning during the two tasks. Parahippocampal and retrosplenial cortex showed greater activation following route compared to map learning during the JRD but not SOP task, while inferior frontal gyrus showed greater activation following map compared to route learning during the SOP but not JRD task. We interpret our results to suggest that parahippocampal and retrosplenial cortex were involved in translating scene and orientation dependent coordinate information acquired during route learning to a landmark-referenced representation, while inferior frontal gyrus played a role in converting primarily landmark-referenced coordinates acquired during map learning to a scene and orientation dependent coordinate system. Together, our results provide novel insight into the different brain networks underlying spatial representations formed during navigation vs. cartographic map learning and provide additional constraints on theoretical models of the neural basis of human spatial representation.
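
    As a side note on the geometry of the JRD task described above: the correct response on each trial is the bearing from the "standing" landmark to the target, measured relative to the imagined facing direction. A minimal sketch of that computation (coordinates and function names are illustrative, not taken from the study):

```python
import math

def jrd_angle(standing, facing, target):
    """Correct JRD pointing angle in degrees, signed, wrapped to [-180, 180):
    'Imagine standing at `standing`, facing `facing`; point to `target`.'"""
    heading = math.atan2(facing[1] - standing[1], facing[0] - standing[0])
    bearing = math.atan2(target[1] - standing[1], target[0] - standing[0])
    diff = math.degrees(bearing - heading)
    return (diff + 180) % 360 - 180  # wrap into [-180, 180)

def angular_error(response_deg, correct_deg):
    """Absolute angular error between a participant's response and the correct angle."""
    return abs((response_deg - correct_deg + 180) % 360 - 180)

# Facing "north" from the origin, a target due east lies 90 degrees to the right.
print(jrd_angle((0, 0), (0, 1), (1, 0)))  # -90.0
```

    Pointing accuracy on such trials is what distinguishes the route- and map-learned representations behaviorally.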

    Geographic features recognition for heritage landscape mapping – Case study: The Banda Islands, Maluku, Indonesia

    This study examines methods of geographic feature recognition from historic maps using CNN and OBIA. The two methods are compared to determine which is more suitable for the historic map dataset of the Banda Islands, Indonesia. The characteristics of cartographic images are the main challenge in this study. The geographic features are divided into buildings, coastline, and fortress. The results show that CNN is superior to OBIA in terms of statistical performance: buildings and coastline give excellent results for CNN analysis, while the fortress is harder for the model to interpret. OBIA, on the other hand, can yield very satisfying results, but its performance depends strongly on the maps' scales. In terms of technical procedure, OBIA offers easier steps in pre-processing, processing, and post-processing/finalisation, which can be an advantage over CNN for a wide range of users.
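
    The "statistical performance" comparison between CNN and OBIA typically comes down to per-class detection metrics. A sketch of an F1 comparison over the three feature classes (all counts below are made-up placeholders, not the study's results):

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Hypothetical (tp, fp, fn) counts per feature class and method.
results = {
    "CNN":  {"buildings": (90, 5, 10), "coastline": (85, 8, 7), "fortress": (40, 20, 30)},
    "OBIA": {"buildings": (70, 15, 25), "coastline": (75, 12, 18), "fortress": (50, 18, 22)},
}
for method, classes in results.items():
    for cls, counts in classes.items():
        print(f"{method} {cls}: F1 = {f1_score(*counts):.2f}")
```

    A per-class breakdown like this is what reveals the pattern the abstract reports: strong CNN scores for buildings and coastline, weaker scores for the fortress class.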

    Inferring Implicit 3D Representations from Human Figures on Pictorial Maps

    In this work, we present an automated workflow to bring human figures, one of the most frequently appearing entities on pictorial maps, to the third dimension. Our workflow is based on training data and neural networks for single-view 3D reconstruction of real humans from photos. We first let a network consisting of fully connected layers estimate the depth coordinate of 2D pose points. The gained 3D pose points are inputted together with 2D masks of body parts into a deep implicit surface network to infer 3D signed distance fields (SDFs). By assembling all body parts, we derive 2D depth images and body part masks of the whole figure for different views, which are fed into a fully convolutional network to predict UV images. These UV images and the texture for the given perspective are inserted into a generative network to inpaint the textures for the other views. The textures are enhanced by a cartoonization network and facial details are resynthesized by an autoencoder. Finally, the generated textures are assigned to the inferred body parts in a ray marcher. We test our workflow with 12 pictorial human figures after having validated several network configurations. The created 3D models look generally promising, especially when considering the challenges of silhouette-based 3D recovery and real-time rendering of the implicit SDFs. Further improvement is needed to reduce gaps between the body parts and to add pictorial details to the textures. Overall, the constructed figures may be used for animation and storytelling in digital 3D maps. Comment: to be published in 'Cartography and Geographic Information Science'.
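
    The last step, rendering the implicit SDFs with a ray marcher, is essentially sphere tracing: step along a ray by the distance the SDF reports until it drops below a threshold. A minimal sketch with an analytic sphere standing in for a learned body-part SDF (all names and values are illustrative):

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface."""
    return math.dist(p, center) - radius

def ray_march(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance by the SDF value; a hit is a near-zero distance."""
    t = 0.0
    for _ in range(max_steps):
        point = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(point)
        if dist < eps:
            return t  # hit: distance along the ray to the surface
        t += dist     # the SDF guarantees this step cannot overshoot
        if t > max_dist:
            break
    return None  # ray missed every surface

body_part = lambda p: sphere_sdf(p, (0.0, 0.0, 5.0), 1.0)
print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), body_part))  # 4.0
```

    In the paper's setting, the analytic SDF would be replaced by a network evaluation per body part, which is why real-time rendering of the implicit SDFs is noted as a challenge.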

    An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources

    Land use changes have become a major contributor to anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their magnitude, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While some scholars date its onset back to the Neolithic revolution, it is more commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of the extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources such as remote sensing imagery and, in particular, historical geotopographic maps could give evidence of the dynamic land use change during this crucial period. In this context, this thesis set out to explore the potentials of retrospective geoinformation for monitoring, communicating, modeling and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large amounts of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the usage of historical land use and land cover information in research is still often hindered by laborious visual interpretation, limiting the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis.
Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent to both the data source itself and its utilization for land change detection. To address the former challenge, image segmentation is considered a global non-linear optimization problem. The segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach is developed for modeling the positional, thematic, and temporal uncertainty inherent in both data and processing. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potentials of application and directions for further research are given.
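
    The evolutionary tuning of segmentation parameters can be sketched as an elitist (mu+lambda) search that minimizes a segmentation-quality objective. Below, a toy quadratic objective stands in for the real quality measure, and the parameter names and bounds are assumptions, not the thesis's actual configuration:

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=40, sigma=0.1, seed=42):
    """Elitist (mu+lambda) evolutionary search over real-valued parameters.
    `bounds` lists (low, high) per parameter; lower fitness is better."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # keep the better half (elitism)
        children = [
            [min(max(x + rng.gauss(0, sigma * (hi - lo)), lo), hi)
             for x, (lo, hi) in zip(parent, bounds)]
            for parent in parents
        ]  # one Gaussian-mutated child per parent, clamped to bounds
        pop = parents + children
    return min(pop, key=fitness)

# Toy stand-in for a segmentation-quality objective: optimum at (0.3, 0.7).
def objective(params):
    return (params[0] - 0.3) ** 2 + (params[1] - 0.7) ** 2

best = evolve(objective, bounds=[(0.0, 1.0), (0.0, 1.0)])
```

    In the thesis's setting, the objective would instead score a segmentation of the map image against reference data, making each fitness evaluation far more expensive than this toy case.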

    Cognitive evaluation of computer-drawn sketches

    CISRG discussion paper; 1

    Exploring Deep Learning for deformative operators in vector-based cartographic road generalization

    Cartographic generalisation is the process by which geographical data is simplified and abstracted to increase the legibility of maps at reduced scales. As map scales decrease, irrelevant map features are removed (selective generalisation), and relevant map features are deformed, eliminating unnecessary details while preserving the general shapes (deformative generalisation). The automation of cartographic generalisation has been a tough nut to crack for years because it is governed not only by explicit rules but also by a large body of implicit cartographic knowledge that conventional automation approaches struggle to acquire and formalise. In recent years, the introduction of Deep Learning (DL) and its inductive capabilities has raised hope for further progress. This thesis explores the potential of three Deep Learning architectures, namely Graph Convolutional Neural Network (GCNN), Auto Encoder, and Recurrent Neural Network (RNN), in their application to the deformative generalisation of roads using a vector-based approach. The generated small-scale representations of the input roads differ substantially across the architectures, not only in their included frequency spectra but also in their ability to apply certain generalisation operators. However, the generalisation operator most apparently learnt and applied by all architectures is the smoothing of the large-scale roads. The outcome of this thesis has been encouraging but suggests pursuing further research on the effect of pre-processing the input geometries, the inclusion of spatial context, and the combination of map features (e.g. buildings) to better capture the implicit knowledge ingrained in the products of mapping agencies used for training the DL models.
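
    Since smoothing is the operator all three architectures most visibly learned, a classical vertex-averaging baseline on a vector polyline is a useful point of comparison. A sketch (the weights and iteration count are arbitrary choices, not from the thesis):

```python
def smooth_polyline(points, iterations=3):
    """Weighted vertex averaging with fixed endpoints: each interior vertex
    moves toward the midpoint of its neighbours, damping small oscillations.
    A classical stand-in for the smoothing operator the DL models learned."""
    pts = [tuple(p) for p in points]
    for _ in range(iterations):
        new = [pts[0]]  # keep the first vertex fixed
        for prev, cur, nxt in zip(pts, pts[1:], pts[2:]):
            new.append(((prev[0] + 2 * cur[0] + nxt[0]) / 4,
                        (prev[1] + 2 * cur[1] + nxt[1]) / 4))
        new.append(pts[-1])  # keep the last vertex fixed
        pts = new
    return pts

# A zigzagging road centreline; smoothing shrinks the vertical oscillation.
road = [(0, 0), (1, 1), (2, -1), (3, 1), (4, 0)]
smoothed = smooth_polyline(road)
```

    A vector-based DL model must learn something like this implicitly from example pairs of large- and small-scale road geometries, rather than having the kernel weights specified by hand.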

    Learning cartographic building generalization with deep convolutional neural networks

    Cartographic generalization is a problem that poses interesting challenges to automation. Whereas plenty of algorithms have been developed for the different sub-problems of generalization (e.g., simplification, displacement, aggregation), there are still cases that are not generalized adequately or in a satisfactory way. The main problem is the interplay between different operators. In those cases the human operator is the benchmark, who is able to design an aesthetic and correct representation of the physical reality. Deep learning methods have shown tremendous success for interpretation problems for which algorithmic methods have deficits. A prominent example is the classification and interpretation of images, where deep learning approaches outperform traditional computer vision methods. In both domains, computer vision and cartography, humans are able to produce good solutions. A prerequisite for the application of deep learning is the availability of many representative training examples for the situation to be learned. As this is given in cartography (there are many existing map series), the idea in this paper is to employ deep convolutional neural networks (DCNNs) for cartographic generalization tasks, especially for the task of building generalization. Three network architectures, namely U-net, residual U-net and generative adversarial network (GAN), are evaluated both quantitatively and qualitatively in this paper. They are compared based on their performance on this task at target map scales 1:10,000, 1:15,000 and 1:25,000, respectively. The results indicate that deep learning models can successfully learn cartographic generalization operations in one single model in an implicit way. The residual U-net outperformed the others and achieved the best generalization performance.
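
    One of the operations such networks learn implicitly, filling small holes and notches in building footprints, corresponds to a classical morphological closing (dilation followed by erosion) on the raster. A pure-Python sketch with a 3x3 structuring element (the example grid is illustrative):

```python
def dilate(grid):
    """3x3 binary dilation; cells outside the raster are ignored."""
    h, w = len(grid), len(grid[0])
    return [[1 if any(grid[ny][nx]
                      for ny in range(max(0, y - 1), min(h, y + 2))
                      for nx in range(max(0, x - 1), min(w, x + 2)))
             else 0
             for x in range(w)] for y in range(h)]

def erode(grid):
    """3x3 binary erosion; cells outside the raster count as background."""
    h, w = len(grid), len(grid[0])
    return [[1 if all(0 <= ny < h and 0 <= nx < w and grid[ny][nx]
                      for ny in range(y - 1, y + 2)
                      for nx in range(x - 1, x + 2))
             else 0
             for x in range(w)] for y in range(h)]

def close_footprint(grid):
    """Morphological closing: fills small holes and notches in a footprint."""
    return erode(dilate(grid))

# A 5x5 building footprint with a one-cell hole in the middle.
footprint = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
closed = close_footprint(footprint)
```

    The appeal of the DCNN approach is that it learns when such simplification is appropriate for a given target scale, rather than applying a fixed operator everywhere.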

    Exploring the Swin Transformer architecture for the generalization of building footprints in binary cartographic maps

    This thesis explores using two distinct deep-learning models and three different data models to automate the process of cartographic generalization. Cartographic generalization aims to select essential information, preserve typical elements, and simplify the information content to allow legibility in maps across different scales. Specifically, the thesis focuses on automating the generalization of building footprints. The automation of the described process has proven challenging in the past. The thesis shows that increased computation power, better computation models, and more data alone will not solve the issue. Moreover, it shows that a better approach to feeding data to the computation model is needed. Comparing the performance of the U-Net and Swin Transformer computation models reveals that U-Net with convolutions outperforms Swin Transformers, which use attention mechanisms. The thesis suggests that a data model with an artificial attention mechanism, rather than a computation model with an attention mechanism, is needed to learn the different generalization tasks on a building level. The study then points out its limitations, including a need for more balanced data to train the Transformer model from scratch. Future research could focus on creating more representative training data. Finally, it outlines the possibility of building a purpose-built Transformer model for future use.