
    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata are not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, the existing web-based technologies do not yet fully exploit modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool, therefore, infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies which suggest that they are usable by end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of a domain-specific 3D version control system.
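
    As a rough illustration of the document-oriented approach described above, the sketch below shows how a single versioned scene-graph node and a revision record might be laid out as key-value documents. The two-document layout and every field name are invented for illustration; they are not 3D Repo's actual schema.

        # Hypothetical sketch of per-node version control in a document store;
        # field names are invented, not 3D Repo's real schema.
        from uuid import uuid4

        mesh_node = {
            "_id": str(uuid4()),        # immutable identity of this node version
            "shared_id": "wheel-42",    # identity shared by all revisions of the node
            "type": "mesh",             # documents are polymorphic: cameras,
                                        # materials, etc. carry different keys
            "vertices": [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
            "faces": [[0, 1, 2]],
        }

        revision = {
            "_id": str(uuid4()),
            "author": "alice",
            "parents": ["rev-7a", "rev-7b"],  # two parents represent a merge
            "added": [],                      # deltas reference scene-graph nodes,
            "modified": [mesh_node["_id"]],   # not whole files
            "deleted": [],
        }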

    A provenance metadata model integrating ISO geospatial lineage and the OGC WPS : conceptual model and implementation

    Nowadays, there are still gaps in the description of provenance metadata. These gaps prevent the capture of comprehensive provenance that would be useful for reuse and reproducibility. In addition, the lack of automated capture tools hinders the broad generation and compilation of provenance information. This work presents a provenance engine (PE) that captures and represents provenance information using a combination of the Web Processing Service (WPS) standard and the ISO 19115 geospatial lineage model. The PE, developed within the MiraMon GIS & RS software, automatically records detailed information about sources and processes. The PE also includes a metadata editor that shows a graphical representation of the provenance and allows users to complement the provenance information by adding missing processes or deleting redundant process steps or sources, thus building a consistent geospatial workflow. One use case is presented to demonstrate the usefulness and effectiveness of the PE: the generation of a radiometric pseudo-invariant areas bench for the Iberian Peninsula. This remote-sensing use case shows how provenance can be captured automatically, even in a complex, non-sequential flow, and the essential role it plays in automation and replication tasks when working with very large amounts of geospatial data.
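
    To make the idea of automatic capture concrete, here is a minimal sketch, assuming each geoprocessing call is wrapped so that source and process-step records accumulate as a side effect. The container layout and function names are assumptions for illustration, not the MiraMon PE's actual API.

        # Minimal sketch of automatic lineage capture; names and record layout
        # are invented, loosely echoing ISO 19115 sources and process steps.
        import datetime

        lineage = {"sources": [], "processSteps": []}

        def run_with_provenance(tool, inputs, output):
            """Run a tool and append a process-step record describing the run."""
            result = tool(*inputs)
            lineage["sources"].extend({"description": path} for path in inputs)
            lineage["processSteps"].append({
                "description": tool.__name__,
                "dateTime": datetime.datetime.now().isoformat(),
                "sources": list(inputs),
                "output": output,
            })
            return result

        # Hypothetical usage with a placeholder tool:
        run_with_provenance(lambda a, b: None,
                            ("ndvi_2020.tif", "cloud_mask.tif"), "masked_ndvi.tif")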

    Coastal Biophysical Inventory Database for the Point Reyes National Seashore

    The Coastal Biophysical Inventory Database is the repository of the data gathered from a rapid assessment of approximately 161 km of the intertidal habitat managed by the Point Reyes National Seashore and Golden Gate National Recreation Area. It is modeled after the "Alaska Coastal Resources Inventory and Mapping Database" and the CoastWalker program of Glacier Bay National Park and Preserve. The protocol and database were adapted for this effort to represent the features of the Point Reyes National Seashore and Golden Gate National Recreation Area, located along the north-central coast of California. The database is an integration of spatial data and observation data, entered and browsed through an interface designed to complement the methods of the observation protocol. The Coastal Biophysical Inventory (CBI) and Mapping Protocol is the methodology for collecting and storing repeatable observations of the intertidal zone, creating a baseline of information useful for resource management and potentially for damage assessment in the event of an oil spill. The inventory contributes to the knowledge needed for the conservation of coastal resources managed in the public's trust. The Coastal Biophysical Inventory Database is a Microsoft Access 2003 format relational database with a customized data entry interface programmed in Microsoft Access Visual Basic for Applications. The interface facilitates the entry, storage and relation of substrate, biology, photographs, and other field observations. Data can be browsed or queried using the query tools common to Microsoft Access or using custom spatial query tools built into the interface with ESRI MapObjects LT 2.0 ActiveX COM objects. The Coastal Biophysical Inventory's GIS data set is useful for collecting, analyzing and reporting field observations about the intertidal zone. The GIS data set is linked to the observation data set through a unique number, the Segment ID, using the relate tools found in ArcGIS (9.2-10). The Segment ID is a non-repeating number that references a section of coastline delineated by the type and form of the substrate observed. The Segment ID allows connection to the biological observations and other observation records such as photos or the original data sheets. Through ArcGIS connections to the observation database using the Segment ID, summaries of biodiversity or habitat can be made by location. The Coastal Biophysical Inventory has completed its initial goals to assess the coastline of two National Parks. The data set collected provides a snapshot of information, and the database allows future observations to be recorded. It provides coastal resource managers a broad insight into and orientation to the intertidal resources managed by the National Park Service.
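
    The Segment ID relation described above can be pictured with a small sketch. SQLite stands in for the actual Access database, and the table and column names are invented; the point is only how a single key ties substrate segments to biological observations.

        # Illustrative only: SQLite instead of Access, invented table names.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE segments (segment_id INTEGER PRIMARY KEY,
                               substrate_type TEXT, substrate_form TEXT);
        CREATE TABLE bio_observations (obs_id INTEGER PRIMARY KEY,
                                       segment_id INTEGER REFERENCES segments(segment_id),
                                       species TEXT, abundance TEXT);
        INSERT INTO segments VALUES (101, 'bedrock', 'platform');
        INSERT INTO bio_observations VALUES (1, 101, 'Pisaster ochraceus', 'common');
        """)

        # Summarise biology by coastline segment, as done via ArcGIS relate tools.
        for row in con.execute("""SELECT s.segment_id, s.substrate_type,
                                         b.species, b.abundance
                                  FROM segments s
                                  JOIN bio_observations b USING (segment_id)"""):
            print(row)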

    Geospatial queries on data collection using a common provenance model

    Other support: Xavier Pons is the recipient of an ICREA Academia Excellence in Research Grant (2016-2020). Lineage information is the part of the metadata that describes "what", "when", "who", "how", and "where" geospatial data were generated. If it is well presented and queryable, lineage becomes very useful information for inferring data quality, tracing error sources and increasing trust in geospatial information. In addition, if the lineage of a collection of datasets can be related and presented together, datasets, process chains, and methodologies can be compared. This paper proposes extending process step lineage descriptions into four explicit levels of abstraction (process run, tool, algorithm and functionality). Including functionality and algorithm descriptions as part of the lineage provides high-level information that is independent of the details of the software used. Therefore, it is possible to transform lineage metadata that initially documents specific processing steps into a reusable workflow that describes a set of operations as a processing chain. This paper presents a system that provides lineage information as a service in a distributed environment. The system is complemented by an integrated provenance web application that is capable of visualizing and querying a provenance graph composed of the lineage of a collection of datasets. The International Organization for Standardization (ISO) 19115 standards family and the World Wide Web Consortium (W3C) provenance initiative (W3C PROV) were combined in order to integrate the provenance of a collection of datasets. To represent lineage elements, the ISO 19115-2 lineage class names were chosen, because they express the geospatial objects involved more precisely. The relationship naming conventions of W3C PROV are used to represent relationships among these elements. The elements and relationships are presented in a queryable graph.
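
    A minimal sketch of that combination, assuming invented identifiers and a plain triple list in place of a real graph store: node labels borrow ISO 19115-2 lineage class names, edge labels borrow W3C PROV relation names, and the graph can be queried by relation.

        # Hypothetical provenance graph: ISO 19115-2 class names for elements,
        # W3C PROV names (used, wasGeneratedBy, wasDerivedFrom) for relations.
        edges = [
            ("LE_ProcessStep:resample", "used",           "LE_Source:landsat_scene"),
            ("LE_Source:ndvi_map",      "wasGeneratedBy", "LE_ProcessStep:resample"),
            ("LE_Source:ndvi_map",      "wasDerivedFrom", "LE_Source:landsat_scene"),
        ]

        def query(relation):
            """Return all (subject, object) pairs linked by a given relation."""
            return [(s, o) for s, r, o in edges if r == relation]

        print(query("used"))  # which sources each process step consumed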

    African marine invertebrate data analysis and imaging: a dataset and digital platform for research, education and outreach

    Master's thesis in Bioinformatics and Computational Biology, Universidade de Lisboa, Faculdade de Ciências, 2019. Coastal and marine species and their habitats are being lost or damaged, significantly reducing marine biodiversity. Marine invertebrates, including decapod crustaceans, are important food sources for local populations, especially for poorer people, who depend on these resources for subsistence and food. Mozambique and Sao Tome and Principe harbour a large number of indigenous decapod species, which local populations use for their subsistence and food security. A vast amount of valuable data on the world's biodiversity, stored in Natural History Collections, digital repositories, climate surveillance programmes, research projects and elsewhere, is available for research. Natural History Collections in particular hold information highly relevant to biodiversity studies, including spatial and temporal series, since they have been built up since the nineteenth century and have been used in numerous studies, from taxonomy and systematics to spatial and temporal distribution and population composition. The amount of data accessible online grows daily, so methods and techniques from computer science are essential for managing and analysing it. Online repositories are useful instruments in biodiversity research, as they provide fast access to data from countless collections. Several museums, organisations and universities process information from their collections into digital databases made available through internet repositories. One such data repository is the Global Biodiversity Information Facility (GBIF). The Darwin Core (DwC) metadata model is used by GBIF to share standardised biodiversity information and plays a fundamental role in the interoperability and integration of these data. This dissertation is part of the COBIO-NET project (Coastal biodiversity and food security in peri-urban Sub-Saharan Africa: assessment, capacity building and regional networking in contrasting Indian and Atlantic Ocean), funded by the FCT/AGA KHAN foundation. The objective of this section of the COBIO-NET project is to compile datasets of decapod crustaceans, and associated information, from Mozambique and Sao Tome and Principe by cataloguing them in a DwC-organised dataset and database, and to make them accessible through repositories and interactive online maps. In this context, the main objective of the dissertation was the construction of a dataset and the use of digital tools to compile information in order to create a comprehensive dataset on decapod crustaceans and their habitats in the coastal zones of Mozambique and Sao Tome and Principe.
The work was divided into four parts: 1) collection of worldwide biodiversity data from global digital repositories related to Natural History Collections, other biodiversity data and scientific literature for the construction of a dataset; 2) management and processing of the biodiversity data compiled from biological studies using an open-source relational database management system (RDBMS); 3) storage and representation of occurrence events and geographic data in an open-source geographic information system; 4) online dissemination of the data through an interactive web map built with an open-source JavaScript web-mapping library. The metadata structure defined in this dissertation is the basis of the COBIO-NET project and will be used to store the marine invertebrate data obtained during the project. The pilot dataset developed during this dissertation has a layout of 26 Darwin Core fields and contains data on 7,486 occurrences of decapod crustaceans in mangroves, seagrass meadows, corals and other coastal areas of Mozambique and Sao Tome and Principe. This metadata model is suitable for gathering information pre-structured according to the data format used by GBIF. Digital tools such as the OpenRefine software and the Python programming language were used to create the dataset, compiling and cleaning biodiversity occurrence data obtained from digital repositories, data obtained directly from other sources, and scientific literature, aggregated into a detailed dataset. For geographic analysis to be possible, the data must include the collection sites as spatial coordinates. Where the retrieved records lacked this information, georeferencing was carried out with the GEOLocate software. The PostgreSQL database built during this study is the digital support used to manage and process information on decapod crustaceans from Mozambique and Sao Tome and Principe. The data in the database can be filtered through queries, and the database can be updated by COBIO-NET researchers with further data on other marine invertebrate taxonomic groups. QGIS was the open-source geographic information system used to visualize, process and evaluate the geographic data and information collected. The geographic evaluation in QGIS involved several steps, such as processing geographic information, mapping different habitats, combining information layers and customizing the existing symbology. The QGIS map design was created specifically for the COBIO-NET project, offering a basis for analysing information on species incidence and allocation. The QGIS mapping project can be applied to other, broader datasets, including information on other marine invertebrates. The qgis2web plugin for QGIS was used to produce an interactive JavaScript map, published on the internet and customized with the Leaflet library, which allows the occurrences of decapod crustacean species to be visualized. The map provides several filters for manipulating the data, allowing occurrences to be displayed according to specific criteria.
This map includes 18 layers that can be switched on or off to filter the information displayed, categorized into (1) species occurrences, (2) habitat, (3) country areas and (4) global coastal zones. Throughout this dissertation several tools and techniques were used, and they presented various challenges. Cleaning and validating the scientific names in the biodiversity dataset was one of them: the field of taxonomy is constantly changing, making it difficult to understand which described terms correspond to our contemporary knowledge of a particular species. Limitations also extend to the habitat shapefiles used in this study. Most of these datasets have global coverage and were compiled from several data sources of varying quality and at varying scales, over which image interpretation was performed. Another limitation was the JavaScript map generated with the qgis2web plugin in QGIS; although the plugin can emulate many QGIS elements, including layer symbols and styles, it cannot replicate the more complex aspects. The data gathered in the dataset can be shared through online repositories and used, for example, in species-distribution studies and in evaluating the composition of distinct ecological communities, in order to estimate the probability of extinction and to preserve biodiversity. Another application of this work is to enable the creation of a reference data collection extended to other marine invertebrates of Mozambique and Sao Tome and Principe, made available to the scientific community and the general public to make the biodiversity and natural history of these African countries better known. One of the ideas of the COBIO-NET project is to manage and centralize, in a digital repository, all the biodiversity datasets produced during the project together with the associated documentation and multimedia resources. This repository will be used to disseminate the gathered information and data to the scientific community and the general public. Therefore, the dataset, QGIS maps and interactive maps produced during this dissertation will be included in the COBIO-NET digital repository for wider dissemination of the information to researchers and society at large. Furthermore, the characteristics of the dataset developed, which integrates a DwC-structured marine invertebrate biodiversity dataset, the database, the maps and the online map, will be published in scientific articles, data papers and digital repositories (GBIF and COBIO-NET) within the scope of COBIO-NET, in collaboration with other project researchers. Additionally, once made available to the public, these data and digital tools can be used by others for teaching or outreach, for writing books, scientific articles, outreach leaflets, and so on. Decapod crustaceans are relevant elements of the diet and livelihoods of the local populations of Mozambique and Sao Tome and Principe.
In this context, the results obtained in this dissertation are relevant in that they can be used to link knowledge of these natural resources to their gastronomic value in the two countries, as well as to the United Nations Sustainable Development Goals (SDGs), namely SDG #2 (Zero Hunger), which aims to end hunger, achieve food security and improve nutrition, and SDG #14 (Life Below Water), which concerns marine and coastal biodiversity and their conservation and sustainable use for the sustainable development of human society.
The quantity of biological data accessible in online repositories is growing exponentially, so computer science methods are required for its management and analysis. Online digital repositories are useful instruments for biodiversity research, as they provide fast access to data from different sources. Among the largest contributors are museums, which hold vast quantities of specimens in their collections. This dissertation was developed as part of the COBIO-NET project, funded by the FCT/AGA KHAN Foundation. Its goals were to create a comprehensive dataset on the marine coastal biodiversity of decapod crustaceans across different habitats in Mozambique and Sao Tome and Principe, and to use digital tools to disseminate the compiled information online so that it is available to the scientific community and the general public. These digital tools were used to aggregate, georeference and clean global biodiversity data retrieved from online digital repositories, other biodiversity data and scientific literature into a detailed dataset; to manage and structure biodiversity data for the compilation of biological studies through a relational database management system; and to manage, process and display the acquired spatial data in the geographic information system QGIS and in an interactive web map. In this dissertation a metadata structure was defined, which will be further used to store data collected during the COBIO-NET project. This metadata structure includes 26 fields defined by the Darwin Core metadata standard. A pilot dataset based on this metadata structure was compiled, including 7,486 decapod crustacean occurrence records in Mozambique and Sao Tome and Principe. The PostgreSQL database constructed during this study is the digital support for managing and processing information on decapod crustaceans from Mozambique and Sao Tome and Principe. The data in the database can be filtered through queries, and the database is ready to be updated by researchers enrolled in COBIO-NET with more data from different marine invertebrate taxonomic groups. The QGIS map design provides maps to visualize decapod crustacean occurrences, offering the foundation for the analysis of species incidence and allocation information. The map project is prepared so that it can be used to display information from larger datasets, including information on other marine taxonomic groups added during the COBIO-NET project. A web map was constructed using Leaflet: an interactive digital platform that allows the visualization of decapod crustacean occurrences across mangroves, seagrasses, corals and other coastal areas of Mozambique and Sao Tome and Principe. The web map provides distinct filters for manipulating the data, allowing occurrence events to be visualized according to specific criteria. A dataset of biodiversity data representing occurrence records was compiled, and the digital tools developed during this study will be published and disseminated through online digital repositories, so that they can be used in future studies to model crustacean species allocation and to evaluate species communities, in an effort to estimate their probability of extinction and to preserve biodiversity. These data and digital tools, and their subsequent publication, add to and promote our current knowledge of the biodiversity of marine crustaceans in mangroves, seagrasses, corals and other coastal areas of Mozambique and Sao Tome and Principe.
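
A rough sketch of what one record in such a dataset could look like is given below. The handful of field names shown are genuine Darwin Core terms (a subset of the 26 used), but the example values and the in-memory filtering are invented for illustration.

        # Invented example records using real Darwin Core term names.
        occurrences = [
            {"occurrenceID": "cobionet:0001", "scientificName": "Uca tetragonon",
             "country": "Mozambique", "habitat": "mangrove",
             "decimalLatitude": -25.97, "decimalLongitude": 32.58},
            {"occurrenceID": "cobionet:0002", "scientificName": "Penaeus monodon",
             "country": "Sao Tome and Principe", "habitat": "seagrass",
             "decimalLatitude": 0.23, "decimalLongitude": 6.61},
        ]

        # Filter occurrences by habitat, as the web map's layer toggles do.
        mangrove_records = [o for o in occurrences if o["habitat"] == "mangrove"]
        print(len(mangrove_records), "mangrove occurrence(s)")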

    Protocol for Surveying Bat Use of Lava Tube Caves during Winter in Craters of the Moon National Monument and Preserve, Standard Operating Procedures

    Background: The Upper Columbia Basin Network I&M (Inventory and Monitoring) program and Craters of the Moon National Monument and Preserve are collaborating to monitor winter bat use in Arco Tunnel, a safely accessed cave in the northern portion of the monument that has consistently been found to hold the largest number of bats (~30/year) among the set of caves recently inventoried. The standard operating procedures documented here, and the methods described in the associated protocol narrative, will also be used to periodically inventory other caves within the monument and surrounding preserve as park resources and safety conditions (winter environment and accessibility) permit. This protocol addresses the survey objective of regularly counting bats in Arco Tunnel during winter (January-March) and in other caves as environmental conditions and staff resources allow. Purpose: This SOP describes the step-by-step procedures for preparing for field work and for preparing and organizing field equipment prior to personnel training and entry into the field. Adequate preparation of equipment for the field is crucial to a successful monitoring program.

    Quality Assessment of the Canadian OpenStreetMap Road Networks

    Volunteered geographic information (VGI) has been applied in many fields, such as participatory planning, humanitarian relief and crisis management, because of its cost-effectiveness. However, the coverage and accuracy of VGI cannot be guaranteed. OpenStreetMap (OSM) is a popular VGI platform that allows users to create or edit maps using GPS-enabled devices or aerial imagery. The issue of geospatial data quality in OSM has become a trending research topic because of the large size of the dataset and the multiple channels of data access. The objective of this study is to examine the overall reliability of the Canadian OSM data. A systematic review is first presented to detail the quality evaluation process for OSM. A case study of London, Ontario follows as an experimental analysis of the completeness, positional accuracy and attribute accuracy of the OSM street networks. Next, a national study of the Canadian OSM data assesses the overall semantic accuracy and lineage in addition to the quality measures mentioned above. The results of the quality evaluation are compared with the associated OSM provenance metadata to examine potential correlations. The Canadian OSM road networks were found to have accuracy comparable to the tested commercial database (DMTI). Although the statistical analysis suggests that there is no significant relation between OSM accuracy and its editing history, the study exposes the complex processes behind OSM contributions, possibly influenced by data imports and remote mapping. The findings of this thesis can guide cartographic product selection for interested parties and offer a better understanding of future quality improvement in OSM.
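
    As one concrete example of a completeness measure of the kind used in such studies, the sketch below computes the ratio of total OSM road length to reference road length. This length-based measure is a common choice in the OSM quality literature and is not necessarily the thesis's exact method.

        # Length-based completeness: OSM length / reference length (illustrative).
        def network_length(segments):
            """Sum Euclidean lengths of 2D segments ((x1, y1), (x2, y2))."""
            return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                       for (x1, y1), (x2, y2) in segments)

        osm = [((0, 0), (0, 5)), ((0, 5), (4, 5))]                          # 9 units
        reference = [((0, 0), (0, 5)), ((0, 5), (4, 5)), ((4, 5), (4, 8))]  # 12 units

        completeness = network_length(osm) / network_length(reference)
        print(f"completeness = {completeness:.0%}")  # 75%: some roads are missing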

    Architecture for Provenance Systems

    This document covers the logical and process architectures of provenance systems. The logical architecture identifies key roles and their interactions, whereas the process architecture discusses distribution and security. A fundamental aspect of our presentation is its technology-independent nature, which makes it reusable: the principles set out in this document may be applied to different technologies.
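
    As a loose sketch of the kind of role separation such an architecture describes, the example below has asserting actors submit records about their interactions to a provenance store that querying actors can later read. All names are invented for illustration and are not taken from the document.

        # Hypothetical roles: asserters record, the store keeps, queriers read.
        class ProvenanceStore:
            def __init__(self):
                self._assertions = []

            def record(self, asserter, interaction, content):
                """An asserting actor documents one step of an interaction."""
                self._assertions.append({"asserter": asserter,
                                         "interaction": interaction,
                                         "content": content})

            def query(self, interaction):
                """A querying actor retrieves everything known about an interaction."""
                return [a for a in self._assertions
                        if a["interaction"] == interaction]

        store = ProvenanceStore()
        store.record("service-A", "msg-17", "sent request with dataset D1")
        store.record("service-B", "msg-17", "received request and began processing")
        print(store.query("msg-17"))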

    CAD, BIM, GIS and other tricks of the computer science in the education of the Building Engineer

    International review - invited paper - chair of session S8T. The paper develops some thoughts on how the drawing disciplines have been upgraded by the latest forms of digital representation, commenting on experiences under way in some university courses within the learning curriculum offered to engineering students in the Ingegneria Edile programme (Building Engineering, also known as Architectural or Construction Engineering) at the Politecnico di Torino. The question is what knowledge and practices to propose in teaching, and how, so that they improve the skills and abilities appropriate to the future commitments demanded by the world of work. Method: The paper discusses the methodological rationale, subject contents and experiences carried out successfully during the course of Representation Techniques and Data Management (in the postgraduate "Laurea Magistrale"), focusing on all the resources needed to conduct profitable training activities and first clarifying the specific skills and experience required of the teaching staff, qualities essential to good results. All the activities organized to achieve the training objectives rest on the belief that early training is needed to trigger virtuous review processes in engineering practice, and that opportunities to practise through simulations in the academic curriculum of future engineers can produce longer-lasting effects and enhance learning outcomes. Result: The analysis, addressed primarily to illustrating some outcomes of exercise activities led by students, draws attention to a constraint concerning the system of relations required between the actors of the design and construction process, who are asked to pursue shared aims while operating within the specificity of their various technical fields. In this sense the tricks of CAD, which serves a geometric knowledge, measured and fulfilled by its attributes; the attention demanded by BIM, which builds a widespread and open network of relationships; and the cunning of GIS, which must gather dynamic information and alternative choices, all address areas of operational testing with a single purpose: a better characterization of the process of conceptual development and a more advantageous control of the working method. Discussion & Conclusion: With design, then, and beyond the usual representations, we speak of computer tricks to mean the infrastructure needed to solicit and investigate the reasons for doing and the ways of resolving the complexity of operating in the field, against which students must test themselves to identify qualities and limits, whether they are exploring the reasons of the survey or those bound up with design; certainly a renewal of the most usual ways of designing, useful for producing different levels of knowledge and a new shared place for the exchange and discussion of hypotheses, with the results that follow.