2,325 research outputs found

    An Algorithmic Framework for Labeling Road Maps

    Full text link
    Given an unlabeled road map, we consider, from an algorithmic perspective, the cartographic problem of placing non-overlapping road labels embedded in their roads. We first decompose the road network into logically coherent road sections, e.g., parts of roads between two junctions. Based on this decomposition, we present and implement a new and versatile framework for placing labels in road maps such that the number of labeled road sections is maximized. In an experimental evaluation with road maps of 11 major cities, we show that our proposed labeling algorithm is both fast in practice and reaches near-optimal solution quality, where optimal solutions are obtained by mixed-integer linear programming. Compared to the standard OpenStreetMap renderer Mapnik, our algorithm labels 31% more road sections on average. Comment: extended version of a paper to appear at GIScience 201
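    A minimal sketch of the core optimization idea, assuming candidate label placements per road section are precomputed as axis-aligned boxes: a greedy pass places at most one non-overlapping label per section. This only illustrates the objective (maximizing the number of labeled road sections); it is not the paper's algorithm or its mixed-integer formulation, and the Box and greedy_label names are hypothetical.

```python
# Hedged sketch (not the paper's algorithm): greedily place at most one label per
# road section, skipping candidates whose bounding boxes overlap placed labels.
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    x1: float
    y1: float
    x2: float
    y2: float  # axis-aligned label bounding box

    def overlaps(self, other: "Box") -> bool:
        return not (self.x2 <= other.x1 or other.x2 <= self.x1 or
                    self.y2 <= other.y1 or other.y2 <= self.y1)

def greedy_label(sections: dict[str, list[Box]]) -> dict[str, Box]:
    """sections maps a road-section id to its candidate label placements."""
    placed: dict[str, Box] = {}
    # Label sections with few candidates first; they are hardest to fit later.
    for sec_id, candidates in sorted(sections.items(), key=lambda kv: len(kv[1])):
        for box in candidates:
            if all(not box.overlaps(p) for p in placed.values()):
                placed[sec_id] = box
                break
    return placed

if __name__ == "__main__":
    demo = {
        "main_st_a": [Box(0, 0, 4, 1)],
        "main_st_b": [Box(3, 0, 7, 1), Box(5, 2, 9, 3)],
    }
    print(greedy_label(demo))  # both sections get a non-overlapping label
```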

    Historical collaborative geocoding

    Full text link
    The latest developments in digital technologies have provided large data sets that can increasingly easily be accessed and used. These data sets often contain indirect localisation information, such as historical addresses. Historical geocoding is the process of transforming this indirect localisation information into direct localisation that can be placed on a map, which enables spatial analysis and cross-referencing. Many efficient geocoders exist for current addresses, but they do not deal with the temporal aspect and are based on a strict hierarchy (..., city, street, house number) that is hard or impossible to use with historical data. Indeed, historical data are full of uncertainties (temporal aspect, semantic aspect, spatial precision, confidence in the historical source, ...) that cannot be resolved, as there is no way to go back in time to check. We propose an open source, open data, extensible solution for geocoding that is based on building gazetteers composed of geohistorical objects extracted from historical topographical maps. Once the gazetteers are available, geocoding a historical address is a matter of finding the geohistorical object in the gazetteers that best matches the historical address. The matching criteria are customisable and include several dimensions (fuzzy semantic, fuzzy temporal, scale, spatial precision, ...). As the goal is to facilitate historical work, we also propose web-based user interfaces that help geocode addresses (individually or in batch mode) and display them over current or historical topographical maps, so that they can be checked and collaboratively edited. The system is tested on the city of Paris for the 19th-20th centuries, shows a high return rate, and is fast enough to be used interactively. Comment: WORKING PAPER
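    A minimal sketch of the kind of multi-criteria matching the abstract describes, assuming a gazetteer entry carries a name and a validity period: fuzzy name similarity and temporal overlap are combined into a single score. The weights and the linear combination are illustrative assumptions, not the published matching function.

```python
# Hedged sketch: score a gazetteer entry against a historical address on fuzzy
# name similarity and temporal overlap. Weights below are illustrative only.
from difflib import SequenceMatcher

def name_score(query: str, entry_name: str) -> float:
    """Fuzzy semantic similarity in [0, 1]."""
    return SequenceMatcher(None, query.lower(), entry_name.lower()).ratio()

def temporal_score(query_years: tuple[int, int], entry_years: tuple[int, int]) -> float:
    """Fraction of the queried period covered by the entry's validity period."""
    q0, q1 = query_years
    e0, e1 = entry_years
    overlap = max(0, min(q1, e1) - max(q0, e0))
    return overlap / max(1, q1 - q0)

def match_score(query: str, query_years: tuple[int, int],
                entry_name: str, entry_years: tuple[int, int],
                w_name: float = 0.6, w_time: float = 0.4) -> float:
    return (w_name * name_score(query, entry_name)
            + w_time * temporal_score(query_years, entry_years))

# Example: an 1860s address scored against two gazetteer entries.
print(match_score("rue de la Chaussee d'Antin", (1860, 1870),
                  "Rue de la Chaussée-d'Antin", (1850, 1900)))
print(match_score("rue de la Chaussee d'Antin", (1860, 1870),
                  "Rue Mogador", (1850, 1900)))
```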

    Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones

    Get PDF
    This paper proposes a novel framework for fusing multi-temporal, multispectral satellite images and OpenStreetMap (OSM) data for the classification of local climate zones (LCZs). Feature stacking is the most commonly used method of data fusion, but its main drawback is that it does not consider the heterogeneity of multimodal optical images and OSM data. The proposed framework processes the two data sources separately and then combines them at the model level through two fusion models (the landuse fusion model and the building fusion model), which fuse optical images with the landuse and buildings layers of OSM data, respectively. In addition, a new approach to detecting the building incompleteness of OSM data is proposed. The proposed framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion Contest, and further validated on an additional test set containing manually labeled samples in Munich and New York. Experimental results indicate that, compared to a feature stacking-based baseline framework, the proposed framework is effective in fusing optical images with OSM data for the classification of LCZs, with high generalization capability on a large scale. The classification accuracy of the proposed framework outperforms the baseline framework by more than 6% and 2% on the test set of the 2017 IEEE GRSS Data Fusion Contest and the additional test set, respectively. In addition, the proposed framework is less sensitive to spectral diversities of optical satellite images and thus achieves more stable classification performance than state-of-the-art frameworks. Comment: accepted by TGRS
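    A minimal sketch of model-level (decision-level) fusion as opposed to feature stacking, assuming one classifier per data source whose class probabilities are averaged. The paper's landuse and building fusion models are more specific; the random features below are stand-ins, not real satellite or OSM data.

```python
# Hedged sketch of decision-level fusion: one classifier per data source,
# combined by averaging class probabilities instead of stacking features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_img = rng.normal(size=(200, 10))   # stand-in for multispectral image features
X_osm = rng.normal(size=(200, 5))    # stand-in for OSM landuse/building features
y = np.arange(200) % 4               # stand-in for LCZ class labels (4 classes)

clf_img = RandomForestClassifier(random_state=0).fit(X_img, y)
clf_osm = RandomForestClassifier(random_state=0).fit(X_osm, y)

# Fuse at the model level: average per-source probabilities, then argmax.
proba = (clf_img.predict_proba(X_img) + clf_osm.predict_proba(X_osm)) / 2
fused_pred = proba.argmax(axis=1)
print(fused_pred[:10])
```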

    A rapid deployment model for VGI projects in mobile field data collection

    Get PDF
    Spatial data collection in organizational settings is growing in terms of the number of applications. While these applications are similar in providing the capability to collect and store spatial data, they are inherently different in the domains they cater to. For instance, they range from collecting data for facilitated Volunteered Geographic Information (VGI), such as evaluating children's walkability to schools or the visual quality assessment of a community, to crowdsourcing, emergency dispatch, and large-scale census gathering. Mobile devices are a great medium to collect such VGI. Several constraints with respect to time, manpower, and efficiency contribute to the lack of a technology model that enables dynamic creation and delivery of such projects. In this thesis, we address the research questions of what a model for rapidly deploying a mobile VGI project should look like and how to create one. We demonstrate this by creating a web-based project authoring system. The data collected from this system is fed into a Geographic Information System (GIS) model that automates the creation of the necessary spatial components and exposes them as Representational State Transfer (REST) services. An iOS mobile application consumes the web services and enables field data collection. The model also integrates multiple projects for a user while providing a domain-specific means of collecting non-spatial attributes. Existing solutions for gathering data do not consider the relationship of attributes to one another. This thesis presents a dynamic decision tree implementation for this purpose, which improves efficiency and ensures correctness of the data collected in the field.
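    A minimal sketch of a dynamic decision tree for non-spatial attribute collection, where the next question depends on the previous answer so that only relevant attributes are gathered. The tree below is a made-up example, not the thesis schema.

```python
# Hedged sketch: a nested question tree; each answer selects the next question,
# so only attributes relevant to earlier answers are collected.
TREE = {
    "question": "Feature type?",
    "answers": {
        "sidewalk": {
            "question": "Condition?",
            "answers": {
                "good": None,
                "damaged": {
                    "question": "Obstruction present?",
                    "answers": {"yes": None, "no": None},
                },
            },
        },
        "crossing": {
            "question": "Has traffic signal?",
            "answers": {"yes": None, "no": None},
        },
    },
}

def collect(node, responses):
    """Walk the tree using canned responses; return the attributes gathered."""
    attrs = {}
    while node is not None:
        answer = responses[node["question"]]
        attrs[node["question"]] = answer
        node = node["answers"][answer]
    return attrs

print(collect(TREE, {"Feature type?": "sidewalk", "Condition?": "damaged",
                     "Obstruction present?": "no"}))
```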

    Procedural modeling of cities with semantic information for crowd simulation

    Get PDF
    In this master thesis, a framework for the procedural generation of populated cities is presented. Nowadays, populating large virtual environments tends to be a time-consuming task, usually requiring the work of expert artists or programmers. With this system we aim at providing a tool that allows users to generate populated environments in an easier and faster way, by relying on procedural techniques. Our main contributions include: the generation of a semantically augmented virtual city using procedural modelling based on rule grammars, the generation of its virtual inhabitants using real-world statistical data, and the generation of agendas for each individual inhabitant using a procedural rule-based approach that combines the city semantics with the characteristics and needs of the autonomous agents. The individual agendas are then used to drive a crowd simulation in the environment, and may include high-level rule tasks whose evaluation is delayed until they are triggered. This allows us to simulate context-dependent actions and interactions with other agents.
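    A minimal sketch of rule-based agenda generation, assuming each rule matches agent attributes against the semantic tags of city places and emits a time-slotted task. The rules, tags, and names are illustrative; the thesis uses a richer grammar whose high-level tasks are evaluated at simulation time.

```python
# Hedged sketch: rules map agent attributes to semantically tagged city places,
# producing a per-agent agenda of (time slot, place) tasks.
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    tags: set

@dataclass
class Agent:
    name: str
    age: int
    employed: bool

CITY = [Place("Oak School", {"school"}), Place("Tech Park", {"office"}),
        Place("Central Market", {"groceries"})]

RULES = [
    (lambda a: a.age < 18, "school", "08:00-15:00"),
    (lambda a: a.employed, "office", "09:00-17:00"),
    (lambda a: True, "groceries", "18:00-18:30"),
]

def build_agenda(agent, city):
    agenda = []
    for condition, tag, slot in RULES:
        if condition(agent):
            place = next((p for p in city if tag in p.tags), None)
            if place:
                agenda.append((slot, place.name))
    return agenda

print(build_agenda(Agent("Anna", 34, True), CITY))
```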

    Managing contextual information in semantically-driven temporal information systems

    Get PDF
    Context-aware (CA) systems have demonstrated a robust solution for personalized information delivery in the content-rich and dynamic information age we live in. They allow software agents to autonomously interact with users by modeling the user's environment (e.g. profile, location, relevant public information, etc.) as dynamically evolving and interoperable contexts. There is a flurry of research activity across a wide spectrum of context-aware research areas, such as managing the user's profile, context acquisition from external environments, context storage, context representation and interpretation, context service delivery, and matching of context attributes to users' queries. We propose SDCAS, a Semantic-Driven Context-Aware System that facilitates the recommendation of public services to users at a given time and location. This paper focuses on information management and service recommendation using semantic technologies, taking into account the challenges of relationship complexity in temporal and contextual information.
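    A minimal sketch of the context-matching step, assuming services are filtered by the user's area and time of day. SDCAS relies on semantic technologies and richer context models; the service list and fields here are hypothetical.

```python
# Hedged sketch: recommend services whose area matches the user's context and
# whose opening hours contain the query time.
from datetime import time

SERVICES = [
    {"name": "Farmers Market", "area": "old_town", "open": (time(8), time(14))},
    {"name": "Night Pharmacy", "area": "old_town", "open": (time(20), time(23, 59))},
    {"name": "City Library", "area": "campus", "open": (time(9), time(18))},
]

def recommend(context):
    """Filter services by area and time-of-day context."""
    return [s["name"] for s in SERVICES
            if s["area"] == context["area"]
            and s["open"][0] <= context["time"] <= s["open"][1]]

print(recommend({"area": "old_town", "time": time(10, 30)}))  # ['Farmers Market']
```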

    Road network comparison and matching techniques: a workflow proposal for the integration of Traffic Message Channel and open source network datasets

    Get PDF
    The rapid growth of methods and techniques to acquire geospatial data has led to a wide availability of overlapping geographic datasets with different characteristics. A significant number of road network data sources exist today, with large differences in level of detail and modelling schemas depending on their main purpose. In addition, continuous information about people and freight movement is now also available in real time. This type of data is exchanged between traffic operators using location referencing standards such as Traffic Message Channel (TMC). Integrating these heterogeneous databases in order to build an added-value product is a serious task in geographical data management. The paper focuses on techniques to conflate the Traffic Message Channel logical network with an Open Source road network dataset, in order to allow the precise visualisation of traffic data, including in real time. A first step of the research was the quality assessment of the available Open Source (OS) road network dataset; then, a specific procedure to conflate the data was set up, using an iterative process that reduces the number of possible matching features at every step. A first application of the enhanced OTM dataset is shown for the city of Turin: real-time open data on traffic flows recorded by fixed road network sensors, made available by the metropolitan Traffic Operation Centre (5T) and based on TMC location referencing, are matched onto the OTM road network, allowing a detailed real-time visualisation of the traffic state.
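    A minimal sketch of one iterative matching step, assuming candidates are narrowed first by road name similarity and then by endpoint distance. The thresholds, the two-stage order, and the sample geometries are assumptions for illustration only.

```python
# Hedged sketch: for each TMC segment, shortlist candidate ways by name
# similarity, then pick the geometrically closest one.
from difflib import SequenceMatcher
from math import hypot

def similar(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def endpoint_distance(seg_a, seg_b):
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    # Consider both orientations of the candidate segment.
    return min(hypot(ax1 - bx1, ay1 - by1) + hypot(ax2 - bx2, ay2 - by2),
               hypot(ax1 - bx2, ay1 - by2) + hypot(ax2 - bx1, ay2 - by1))

def match(tmc_segment, candidates, name_threshold=0.6):
    # Step 1: keep candidates with a sufficiently similar road name.
    shortlist = [c for c in candidates
                 if similar(tmc_segment["name"], c["name"]) >= name_threshold]
    if not shortlist:
        return None
    # Step 2: among the shortlist, pick the closest way by endpoint distance.
    return min(shortlist, key=lambda c: endpoint_distance(tmc_segment["geom"], c["geom"]))

tmc = {"name": "Corso Francia", "geom": ((0, 0), (10, 0))}
osm = [{"name": "Corso Francia", "geom": ((0.1, 0.2), (9.8, 0.1))},
       {"name": "Via Roma", "geom": ((0, 0), (10, 0))}]
print(match(tmc, osm)["name"])  # Corso Francia
```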