
    Deep Learning for Enrichment of Vector Spatial Databases: Application to Highway Interchange

    Spatial analysis and pattern recognition with vector spatial data are particularly useful for enriching raw data. In road networks, for instance, many patterns and structures are only implicit in the road line features; among these, highway interchanges have proved very difficult to recognise with vector-based techniques. The goal is to find the roads that belong to an interchange, i.e. the slip roads and the highway roads connected to them. In order to go beyond state-of-the-art vector-based techniques, this paper proposes the use of raster-based deep learning techniques to recognise highway interchanges. The contribution of this work is to study how to optimally convert vector data into small images suitable for state-of-the-art deep learning models. Image classification with a convolutional neural network (i.e. is there an interchange in this image or not?) and image segmentation with a U-Net (i.e. find the pixels that cover the interchange) are experimented with, and in this specific use case both give results far better than existing vector-based techniques.
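The key preprocessing step described above, converting vector road data into small images, can be sketched as follows. This is an illustrative rasterisation routine, not the paper's code; the function name, grid size, and dense-sampling approach are all assumptions.

```python
# Hypothetical sketch: burn vector road polylines into a small binary image,
# the kind of raster input a CNN classifier or U-Net would consume.
def rasterise(polylines, bbox, size=64):
    """Rasterise line features into a size x size binary grid."""
    (xmin, ymin, xmax, ymax) = bbox
    sx = (size - 1) / (xmax - xmin)
    sy = (size - 1) / (ymax - ymin)
    grid = [[0] * size for _ in range(size)]
    for line in polylines:
        for (x0, y0), (x1, y1) in zip(line, line[1:]):
            # sample the segment densely enough to touch every crossed cell
            n = max(abs((x1 - x0) * sx), abs((y1 - y0) * sy), 1)
            steps = int(n) + 1
            for i in range(steps + 1):
                t = i / steps
                col = round((x0 + t * (x1 - x0) - xmin) * sx)
                row = round((y0 + t * (y1 - y0) - ymin) * sy)
                grid[row][col] = 1
    return grid
```

The resulting grids can be stacked into training batches for either the classification or the segmentation model.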

    Road network selection for small-scale maps using an improved centrality-based algorithm

    The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection is a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes by concatenating road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. The first extension ensures that roundabouts are detected and collapsed, avoiding interruptions of strokes by roundabouts, while the second introduces additional semantics into the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead ends, which are not tolerated in small-scale maps, the fourth extension reconnects these dead ends to the main network, searching for the best path in the main heading of the dead end.
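The centrality measure at the core of such a selection algorithm can be sketched with a minimal pure-Python version of Brandes' betweenness-centrality algorithm on an unweighted graph; roads (or strokes) whose centrality falls below a threshold would then be dropped. This is an illustrative sketch, not the paper's implementation, and it returns unnormalised counts over ordered node pairs.

```python
from collections import deque

def betweenness(adj):
    """Brandes betweenness centrality for an unweighted adjacency dict."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                    # BFS from source s
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

In a stroke-based variant, the graph nodes would be whole strokes rather than individual road segments, which is why the quality of stroke formation matters so much for the final selection.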

    Improving the Accuracy of Intersection Counts and Densities for Measuring Urban Street Network Compactness and Resilience

    USDOT Grant 69A3551747109; Caltrans 65A0674. Intersection counts are ubiquitous in transportation planning practice and research. They are frequently normalized by area to calculate intersection density, the most common measure of compact street network design in planning practice. However, due to the nature of typical street network data (centerlines) and the typical tools used to count intersections (desktop geographic information systems [GIS]), traditional methods of counting intersections can significantly overcount them. This project addresses this long-standing problem of intersection count bias. First, it develops and distributes an algorithm to automatically and correctly calculate intersection counts and densities anywhere in the world, using a novel topological consolidation method. Second, it conducts a worldwide empirical assessment of traditional intersection counting methods' bias to quantify the importance of measurement bias and to validate our algorithm. Third, it assesses this bias's impact on the results of resilience simulations and identifies the street network design characteristics that are most related to resilience. In transportation planning, innumerable downstream models and measures, from energy efficiency certification schemes to resilience simulations, rely on intersection counts as input data. A full accounting of input data bias and better methods to overcome misrepresentations of intersections are necessary for data-driven, evidence-based planning for sustainable transportation networks that support active and resilient travel.
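The idea behind topological consolidation can be illustrated with a toy routine: nearby street-network nodes (e.g. the four corners where a dual carriageway crosses another) are merged within a distance tolerance so they count as one real-world intersection. This is a hedged, greedy union-find sketch of the general idea, not the project's actual algorithm.

```python
def consolidated_count(nodes, tol=15.0):
    """nodes: list of (x, y) in metres; returns the merged intersection count."""
    parent = list(range(len(nodes)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            (xi, yi), (xj, yj) = nodes[i], nodes[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= tol ** 2:
                parent[find(i)] = find(j)   # union: treat as one intersection

    return len({find(i) for i in range(len(nodes))})
```

With a 15 m tolerance, the four nodes of a divided-road crossing collapse to a single intersection instead of inflating the count fourfold.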

    Rural and Urban Road Network Generalisation: Deriving 1:250,000 from OS MasterMap

    Roads are an essential component of topographic maps and spatial databases. The challenge in automated generalisation of road networks is to derive a connected network while maintaining the structure appropriate for the intended target scale, and to achieve this with minimum user intervention. Many methods to select, displace and simplify roads have been presented; the focus here is on the generalisation of networks using visual perception techniques. This paper presents a framework based on visual perception that uses minimal attributes for the generalisation of both 'rural' and 'urban' roads over a large scale change. The system incorporates graph-theoretic techniques to explicitly model the topology of the network as it is generalised. The model uses a fine-scale map (1:1250 or 1:2500) as input and generates small-scale (1:250,000) maps directly from it, without creating intermediate small-scale maps. The results compared favourably with paper maps (Ordnance Survey's Strategi dataset, 1:250,000).
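A classic visual-perception technique in this family is "good continuation" stroke building: at each junction, segments are chained into a single stroke when the turn angle stays small. The following is a simplified, illustrative sketch; the 45-degree threshold and data layout are assumptions, not the paper's parameters.

```python
import math

def heading(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def deflection(a, b):
    """Smallest absolute angle between two headings."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def build_stroke(segments, start, max_deflection=math.radians(45)):
    """Follow segments end-to-end while the turn angle stays small."""
    stroke = [start]
    current = start
    remaining = [s for s in segments if s is not start]
    while True:
        end = current[-1]
        best = None
        for seg in remaining:
            if seg[0] == end:       # candidate continues from current end node
                d = deflection(heading(current[-2], current[-1]),
                               heading(seg[0], seg[1]))
                if d <= max_deflection and (best is None or d < best[0]):
                    best = (d, seg)  # keep the straightest continuation
        if best is None:
            return stroke
        stroke.append(best[1])
        remaining.remove(best[1])
        current = best[1]
```

A segment turning off at 90 degrees starts a new stroke rather than extending the current one, which is what lets strokes approximate perceptually continuous roads.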

    Exploring object-oriented GIS for watershed resource management

    The adoption of object-oriented programming for spatial technological advancement is an emerging trend in GIS. This research seeks to explore object-oriented GIS (OOGIS) and its potential application in watershed resource management. OOGIS provides a more intuitive and realistic abstraction of real-world features as intelligent objects. The ability to embed behavior, geometry, and attribution in the objects provides considerable advantages in the processing and analysis of geospatial data. The main objective of this research was to design a prototype OOGIS for watershed resource management using the object-relational ArcInfo 8.1 Geodatabase. The study builds on the OOGIS concepts of inheritance, polymorphism, and encapsulation and defines a schema for the project. Behavior is embedded in the watershed features through the use of methods and reflex methods that automatically perform functions such as data validation and text placement. Message propagation is tested using related objects, and a smart object-based, topologically integrated geometric network is established for streams and roads. Because of the embedded topological relationships and methods, this network is self-adapting. The resulting system indicates that OOGIS has many advantages over the more traditional entity-relationship model. The system provides a more intuitive representation of a watershed through the integration of intelligent behaviors and is particularly effective in addressing GIS maintenance issues at the database level through the use of reflex validation methods.
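The core OOGIS ideas named above (inheritance, encapsulation, and reflex methods that fire automatically on edits) can be illustrated with a loose Python analogue. This is an assumption-laden sketch, not ArcInfo Geodatabase code; all class and attribute names are invented for illustration.

```python
class WatershedFeature:
    """Base feature class: geometry plus attributes with embedded behaviour."""
    def __init__(self, geometry, **attrs):
        self.geometry = geometry
        self.attrs = {}
        for k, v in attrs.items():
            self.set(k, v)

    def set(self, key, value):
        self.validate(key, value)   # reflex-style method: fires on every edit
        self.attrs[key] = value

    def validate(self, key, value):
        pass                        # subclasses encapsulate their own rules

class Stream(WatershedFeature):     # inheritance from the base feature class
    def validate(self, key, value):
        if key == "order" and not 1 <= value <= 12:
            raise ValueError("Strahler order must be between 1 and 12")
```

Because validation lives inside the feature object rather than in external scripts, every edit path through the database triggers the same rule, which is the maintenance advantage the abstract describes.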

    Enrichissement automatique de données par analyse spatiale pour la généralisation de réseaux

    Generalisation is a process that seeks to reduce the level of detail of a geographic database in order to meet new specifications. Selection is a major step of this process that consists of choosing, according to geographical context, the objects that will be part of the generalised database. This paper presents a general method for the automated selection of geographical networks based on data enrichment by spatial analysis. The initial database is enriched with implicit information through structure recognition. For instance, in a road network, highway service areas are groups of connected roads with an entrance to and an exit from the highway. The selection processes, specific to each theme, take the enriched data into account and handle such structures, which are essential for the future applications of the database (analysis, cartography), with appropriate treatment. The presented method is carried out on road and river networks and applied to topographic data from IGN databases. The interest of such enrichment for other applications is also considered.
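The highway-service-area example can be sketched as a small graph-enrichment routine: a service structure is a connected group of non-highway segments whose endpoints attach to the highway at least twice (an entrance and an exit). Edge labels and the two-attachment rule are illustrative assumptions, not IGN's specification.

```python
from collections import defaultdict

def service_areas(edges):
    """edges: list of (u, v, kind) with kind in {'highway', 'service'}."""
    highway_nodes = {n for u, v, k in edges if k == "highway" for n in (u, v)}
    adj = defaultdict(list)
    for u, v, k in edges:
        if k == "service":
            adj[u].append(v)
            adj[v].append(u)
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]        # traverse service segments only
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n])
        seen |= comp
        if len(comp & highway_nodes) >= 2:  # has both an entrance and an exit
            groups.append(comp)
    return groups
```

Once recognised, such a group can be stored as an explicit structure in the enriched database and either kept or removed as a unit during selection.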

    Mobility mining for time-dependent urban network modeling

    Mobility planning, monitoring and analysis in such a complex ecosystem as a city are very challenging. Our contributions are expected to be a small step towards a more integrated vision of mobility management. The main hypothesis behind this thesis is that the transportation offer and the mobility demand are tightly coupled, and thus both need to be thoroughly and consistently represented in a digital manner so as to enable good-quality, data-driven advanced analysis. Data-driven analytics solutions rely on measurements. However, sensors only provide a measure of movements that have already occurred (and associated magnitudes, such as vehicles per hour). For a movement to happen there are two main requirements: i) the demand (the need or interest) and ii) the offer (the feasibility and resources). In addition, for a good measurement, the sensor needs to be located at an adequate location and be able to collect data at the right moment. All this information needs to be digitalised accordingly in order to apply advanced data-analytic methods and take advantage of a good digital representation of transportation resources. Our main contributions, focused on mobility data mining over urban transportation networks, can be summarised in three groups. The first group consists of a comprehensive description of a digital multimodal transport infrastructure representation from global and local perspectives. The second group is oriented towards matching diverse sensor data onto the transportation network representation, including a quantitative analysis of map-matching algorithms. The final group of contributions covers the prediction of short-term demand based on various measures of urban mobility.
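Map-matching, the focus of the second group of contributions, can be illustrated by its simplest member: snapping a sensor fix to the nearest road segment. Real map-matchers also use heading, topology, or hidden Markov models; this geometric sketch and its names are assumptions for illustration only.

```python
import math

def project(p, a, b):
    """Closest point to p on segment a-b, and its distance to p."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        q = a                       # degenerate zero-length segment
    else:
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))   # clamp to the segment's extent
        q = (ax + t * dx, ay + t * dy)
    return q, math.dist(p, q)

def match(point, segments):
    """Snap a GPS fix to the nearest segment; returns (snapped, dist, index)."""
    return min((project(point, a, b) + (i,)
                for i, (a, b) in enumerate(segments)),
               key=lambda r: r[1])
```

Quantitative comparisons of map-matchers, as done in the thesis, then measure how often such assignments pick the road the vehicle actually travelled.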

    Automated detection and analysis of fluorescence changes evoked by molecular signalling

    Fluorescent dyes and genetically encoded fluorescence indicators (GEFIs) are common tools for visualizing concentration changes of specific ions and messenger molecules during intra- as well as intercellular communication. While fluorescent dyes have to be loaded directly into target cells and function only transiently, the expression of GEFIs can be controlled in a cell- and time-specific fashion, even allowing long-term analysis in living organisms. Dye- and GEFI-based fluorescence fluctuations, recorded using advanced imaging technologies, are the foundation for the analysis of physiological molecular signaling. Analyzing the plethora of complex fluorescence signals is a laborious and time-consuming task. An automated analysis of fluorescent signals circumvents user bias and time constraints. However, it requires overcoming several challenges, including the correct estimation of fluorescence fluctuations at basal concentrations of messenger molecules, the detection and extraction of the events themselves, the proper segmentation of neighboring events, and the tracking of propagating events. Moreover, event detection algorithms need to be sensitive enough to accurately capture localized, low-amplitude events with a limited spatial extent. This thesis presents three novel algorithms, PBasE, CoRoDe and KalEve, for the automated analysis of fluorescence events, developed to overcome the aforementioned challenges. The algorithms are integrated into a graphical application called MSparkles, specifically designed for the analysis of fluorescence signals and developed in MATLAB. The capabilities of the algorithms are demonstrated by analyzing astroglial Ca2+ events, recorded in anesthetized and awake mice and visualized using the genetically encoded Ca2+ indicators (GECIs) GCaMP3 and GCaMP5. The results were compared to those obtained with other software packages. In addition, the analysis of neuronal Na+ events recorded in acute brain slices using SBFI-AM serves to indicate the putatively broad application range of the presented algorithms. Finally, due to increasing evidence of the pivotal role of astrocytes in neurodegenerative diseases such as epilepsy, a metric to assess the synchronous occurrence of fluorescence events is introduced. In a proof-of-principle analysis, this metric is used to correlate astroglial Ca2+ events with EEG measurements.
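The baseline-estimation and event-detection problem described above can be illustrated with a toy delta-F/F0 routine: estimate the basal fluorescence from a low percentile of the trace, normalise, and flag threshold crossings. This mimics the general problem only; it is not the PBasE, CoRoDe, or KalEve algorithms, whose actual methods are detailed in the thesis, and the percentile and threshold values are invented.

```python
def detect_events(trace, baseline_pct=20, threshold=0.5):
    """Return (start, end) index pairs where dF/F0 exceeds the threshold."""
    ordered = sorted(trace)
    f0 = ordered[int(len(ordered) * baseline_pct / 100)]  # basal fluorescence
    dff = [(f - f0) / f0 for f in trace]                  # delta-F over F0
    events, start = [], None
    for i, v in enumerate(dff):
        if v > threshold and start is None:
            start = i                                     # event onset
        elif v <= threshold and start is not None:
            events.append((start, i))                     # event offset
            start = None
    if start is not None:
        events.append((start, len(dff)))                  # event runs to end
    return events
```

The challenges listed in the abstract (drifting baselines, adjacent events merging, propagating events) are precisely where such a naive threshold scheme breaks down and dedicated algorithms are needed.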