    Geospatial Computing: Architectures and Algorithms for Mapping Applications

    Beginning with the MapTube website (1), launched in 2007 for crowd-sourcing maps, this project investigates approaches to exploratory Geographic Information Systems (GIS) using web-based mapping, or ‘web GIS’. Users can log in to upload their own maps and overlay different layers of GIS data sets. This work examines the theory behind how web-based mapping systems function and whether their performance can be modelled and predicted. One of the important questions when dealing with different geospatial data sets is how they relate to one another. Internet data stores provide another source of information, which can be exploited if more generic geospatial data mining techniques are developed. Identifying similarities between thousands of maps is a GIS technique that can give structure to the overall fabric of the data, once the problems of scalability and of comparison between different geographies are solved. Having run MapTube for nine years to crowd-source data, a natural progression is to move from the visualisation of individual maps to the wider question of what additional knowledge can be discovered from the collected data. In the new ‘data science’ age, real-time data sets pose a fresh challenge for web-based mapping applications. Mapping real-time geospatial systems is technically demanding, but has the potential to reveal inter-dependencies as they emerge in the time series. Combined geospatial and temporal data mining of real-time sources can provide archives of transport and environmental data from which to accurately model the systems under investigation. Using techniques from machine learning, the models can be built directly from the real-time data stream. Because these models are derived directly from city data, they can then be used for analysis and experimentation, leading to an analysis of the behaviours of the interacting systems. (1) The MapTube website: http://www.maptube.org
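
    To make the map-comparison idea concrete, the following is a minimal sketch, not code from the project, of one plausible approach: resample two map layers onto a common grid and score their relationship with Pearson correlation. The function name and the NaN convention are assumptions for illustration.

    # Illustrative sketch: similarity between two map layers that have
    # been rasterised onto the same grid, scored by Pearson correlation.
    import numpy as np

    def map_similarity(layer_a: np.ndarray, layer_b: np.ndarray) -> float:
        # Cells outside the study area are assumed to be NaN in either layer.
        mask = ~(np.isnan(layer_a) | np.isnan(layer_b))
        a, b = layer_a[mask], layer_b[mask]
        return float(np.corrcoef(a, b)[0, 1])

    # Comparing thousands of crowd-sourced layers then reduces to filling a
    # pairwise similarity matrix, which can be clustered to structure the corpus.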

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, existing web-based technologies do not yet fully exploit modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene-graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies suggesting that they are usable by end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of a domain-specific 3D version control system.
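
    As a concrete illustration of the "schemaless" storage idea, the hypothetical documents below show how a scene-graph node and its revision metadata might be laid out as key-value pairs in a document store. The field names are invented for illustration and do not reflect 3D Repo's actual schema.

    # Hypothetical document layouts (invented field names, not 3D Repo's schema).
    mesh_node = {
        "_id": "node-42",               # unique id of this scene-graph node
        "type": "mesh",                 # documents are polymorphic: a camera or
        "name": "chassis",              # material node would carry other keys
        "parents": ["node-7"],          # scene-graph edges stored by reference
        "vertices": [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
        "faces": [[0, 1, 2]],
    }

    revision = {
        "_id": "rev-0007",
        "author": "alice",
        "timestamp": "2015-03-01T12:00:00Z",
        "added": [],                    # the delta: ids of nodes this commit
        "modified": ["node-42"],        # touched, enabling non-linear history
        "deleted": [],
    }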

    Modeling and Simulation in Engineering

    This book provides an open platform for establishing and sharing knowledge developed by scholars, scientists, and engineers from all over the world about applications of modeling and simulation in the design process of products across various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without compromising the precision of the results.

    Geomatics for Mobility Management. A comprehensive database model for Mobility Management

    In urban and metropolitan contexts, Traffic Operations Centres (TOCs) use technologies such as Geographic Information Systems (GIS) and Intelligent Transport Systems (ITS) to tackle urban mobility issues. TOCs usually maintain various isolated systems in parallel (stored in different databases), with data coming from different sources: a challenge in transport management is to transfer these disparate data into a unified data management system that preserves access to legacy data and allows multi-thematic analysis. This integration between systems is important for wise policy decisions. This study aims to design a comprehensive and general spatial data model that allows the integration and visualisation of traffic components and measures. The activity focuses on the case study of the 5T Agency in Turin, a TOC that manages traffic regulation, public transit fleets and traveller information in the metropolitan area of Turin and the Piedmont Region. In particular, the agency has built up over the years a wide system of ITS technologies that continuously acquires measures and traffic information, which are used to deliver information services to citizens and public administrations. However, the spatial nature of these data is not fully exploited in daily operational activity, resulting in difficulties in information integration. Indeed, the agency lacks a complete GIS that includes all the management information in an organised, spatially "horizontal" vision. The main research question concerns the integration of different kinds of data in a single GIS spatial data model. Spatial data interoperability is critical and particularly challenging because geographic data definitions in legacy databases can vary widely: different data formats and standards, data inconsistencies, different spatial and temporal granularities, and different methods and rules relating measures, events and physical infrastructure. The idea is not to replace the existing, efficient systems, but to build on top of them a GIS that spans the different software and DBMS platforms and demonstrates how a spatial, horizontal vision of urban mobility issues can be useful for policy and strategy decisions. The modelling activity takes its reference from a review of transport standards and results in a general database schema that can be reused by other TOCs, helping integration and coordination between them. The final output of the research is an ArcGIS geodatabase, tailored to 5T data requirements, which enables customised representation of private traffic elements and measures. Specific custom scripts have been developed to allow the extraction and temporal aggregation of traffic measures and events. The proposed solution allows the reuse of data and measures for custom purposes without requiring deep knowledge of the entire ITS environment, and the geodatabase is optimised for limited-computing-power environments. A case study was examined in depth to evaluate the suitability of the database: damages detected by Emergency Mapping Services (EMS) were compared with Traffic Message Channel traffic events, evaluating the utility of 5T's historical traffic-event information from the Piedmont floods of November 2016 for EMS services.
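
    As an illustration of the kind of temporal aggregation such scripts perform, here is a minimal pandas sketch that rolls five-minute per-sensor flow readings up to hourly means. The column names and values are invented and are not 5T's actual schema.

    # Invented sample of per-sensor traffic flow readings.
    import pandas as pd

    measures = pd.DataFrame({
        "sensor_id": ["S1", "S1", "S2", "S2"],
        "timestamp": pd.to_datetime([
            "2016-11-24 08:00", "2016-11-24 08:05",
            "2016-11-24 08:00", "2016-11-24 08:05",
        ]),
        "flow_veh_h": [420, 480, 150, 170],
    })

    # Temporal aggregation: hourly mean flow per sensor.
    hourly = (measures
              .set_index("timestamp")
              .groupby("sensor_id")["flow_veh_h"]
              .resample("1h")
              .mean())
    print(hourly)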

    Visualizing Spatio-Temporal data

    The amount of spatio-temporal data produced every day has skyrocketed in recent years due to commercial GPS systems and smart devices. With this, the need for tools and techniques to analyse this kind of data has also grown. A major task of spatio-temporal data analysis is to discover relationships and patterns among spatially and temporally scattered events. However, most existing visualization techniques implement a top-down approach, i.e., they require prior knowledge of existing patterns. In this dissertation, I present a novel visualization technique called Storygraph which supports bottom-up discovery of patterns. Since Storygraph presents an integrated view, events can be analysed without losing either the temporal or the spatial context. In addition, Storygraph can handle spatio-temporal uncertainty, making it well suited to data extracted from text. In the subsequent chapters, I demonstrate the versatility and effectiveness of Storygraph along with case studies from my published works. Finally, I also discuss edge bundling in Storygraph to enhance its aesthetics and improve its readability.
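
    The following is a loose sketch of the Storygraph layout rather than the published implementation: it assumes the two spatial coordinates sit on two parallel vertical axes, each location becomes a "location line" between them, and an event is drawn on its location line at a horizontal position proportional to its timestamp. The data are invented.

    import matplotlib.pyplot as plt

    # (x-coordinate, y-coordinate, time), all normalised to [0, 1].
    events = [
        (0.2, 0.8, 0.1),
        (0.2, 0.8, 0.6),   # same place at a later time: same line, further right
        (0.7, 0.3, 0.4),
    ]

    fig, ax = plt.subplots()
    for x, y, t in events:
        ax.plot([0, 1], [x, y], color="lightgray", zorder=1)  # location line
        ax.scatter(t, x + t * (y - x), zorder=2)  # event marker on that line
    ax.set_xlabel("time (normalised)")
    ax.set_yticks([])
    plt.show()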

    PolyVR - A Virtual Reality Authoring Framework for Engineering Applications

    Virtual reality is a fantastic place, free of constraints and full of possibilities. For engineers it is the perfect place to experience science and technology, yet the infrastructure to make virtual reality accessible, especially for engineering applications, is missing. This thesis describes the development of a software environment that simplifies the creation of virtual reality applications and their deployment on immersive hardware setups. Virtual engineering, the use of virtual environments for design reviews during the product development process, is used extremely rarely, especially by small and medium-sized enterprises. The main reasons are no longer the high costs of professional virtual reality hardware, but the lack of automated virtualisation workflows and the high costs of maintenance and software development. An important aspect of automating virtualisation is the integration of intelligence into artificial environments. Ontologies are the foundation of human understanding and intelligence. Categorising our universe into concepts, properties and rules is a fundamental step in processes such as observation, learning and knowing. This work aims to take a step towards a broader use of virtual reality applications in all areas of science and engineering. The approach is to build a virtual reality authoring tool, a software package that simplifies the creation of virtual worlds and their deployment on advanced immersive hardware environments such as distributed visualisation systems. A further goal of this work is to enable the intuitive authoring of semantic elements in virtual worlds. This should revolutionise the creation of virtual content and the possibilities for interaction. Intelligent immersive environments are the key to fostering learning and training in virtual worlds, to planning and monitoring processes, and to paving the way for entirely new interaction paradigms.
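
    As a toy illustration of the ontology idea, and emphatically not PolyVR's actual API, the sketch below attaches concepts with inheritance to scene objects so that a virtual world can be queried semantically rather than only geometrically. All names are hypothetical.

    # Toy semantic layer: concepts form an is-a hierarchy, and scene objects
    # reference a concept plus arbitrary properties.
    class Concept:
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent

        def is_a(self, other):
            c = self
            while c is not None:          # walk up the inheritance chain
                if c is other:
                    return True
                c = c.parent
            return False

    class SceneObject:
        def __init__(self, name, concept, **properties):
            self.name, self.concept, self.properties = name, concept, properties

    machine = Concept("Machine")
    robot = Concept("Robot", parent=machine)
    scene = [SceneObject("welding_arm", robot, payload_kg=10)]

    # Semantic query: every object in the scene that is (a kind of) Machine.
    machines = [o for o in scene if o.concept.is_a(machine)]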