Digital Twins in Industry
Digital Twins in Industry is a compilation of works with specific emphasis on industrial applications. Much of the research on digital twins has been conducted by academia, in both theoretical work and laboratory-based prototypes. Industry, while taking the lead on larger-scale implementations of Digital Twins (DT) using sophisticated software, concentrates on dedicated solutions that are out of reach for average-sized companies. This book comprises 11 chapters covering various implementations of DT. It provides insight for companies contemplating the adoption of DT technology, as well as for researchers and senior students exploring the potential of DT and its associated technologies.
Semantic Web Methods for Data Integration in the Life Sciences
Supervised by Christine Froidevaux, Marie-Laure Martin-Magniette, and Guillem Rigaill. International audience.
Automatic Geospatial Data Conflation Using Semantic Web Technologies
Duplicated geospatial data collection and maintenance is an extensive problem across Australian government organisations. This research examines how Semantic Web technologies can be used to automate the geospatial data conflation process. It presents a new approach in which OWL ontologies generated from output data models, together with geospatial data expressed as RDF triples, form the basis of the solution, while SWRL rules serve as the core mechanism for automating the conflation process.
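The abstract does not include code; as a minimal sketch of the idea (URIs, datasets, and the matching threshold are all illustrative, not from the paper), features from two datasets can be expressed as subject-predicate-object triples, and a simple SWRL-style "if-then" rule can link features whose labels match and whose coordinates agree within a tolerance:

```python
# Minimal sketch: geospatial features as (subject, predicate, object) triples,
# with a conflation rule in the spirit of SWRL if-then matching.
# All URIs, data values, and thresholds are illustrative.

from math import hypot

# Two datasets describing (possibly) the same real-world feature.
triples = [
    ("dsA:school1", "rdfs:label", "Hilltop Primary"),
    ("dsA:school1", "geo:lat", -35.281),
    ("dsA:school1", "geo:long", 149.128),
    ("dsB:feat42", "rdfs:label", "Hilltop Primary"),
    ("dsB:feat42", "geo:lat", -35.2812),
    ("dsB:feat42", "geo:long", 149.1279),
]

def props(subject):
    """Collect the properties of one subject into a dict."""
    return {p: o for s, p, o in triples if s == subject}

def same_feature(a, b, tol=0.001):
    """Rule: equal labels and coordinates within `tol` degrees => same feature."""
    pa, pb = props(a), props(b)
    return (pa["rdfs:label"] == pb["rdfs:label"]
            and hypot(pa["geo:lat"] - pb["geo:lat"],
                      pa["geo:long"] - pb["geo:long"]) < tol)

# Applying the rule materialises an owl:sameAs link between the duplicates.
if same_feature("dsA:school1", "dsB:feat42"):
    triples.append(("dsA:school1", "owl:sameAs", "dsB:feat42"))

print(("dsA:school1", "owl:sameAs", "dsB:feat42") in triples)  # True
```

In a real SWRL setting the rule body would be evaluated by a reasoner over the OWL ontology rather than by hand-written Python, but the matching logic is the same shape.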
The construction of a linguistic linked data framework for bilingual lexicographic resources
Little-known lexicographic resources can be of tremendous value to users once digitised. By extending the digitisation effort for a lexicographic resource, converting the human-readable digital object into a state that is also machine-readable, structured data can be created that is semantically interoperable, thereby enabling the lexicographic resource to access, and be accessed by, other semantically interoperable resources. The purpose of this study is to formulate a process for converting a lexicographic resource in print form into a machine-readable bilingual lexicographic resource by applying linguistic linked data principles, using the English-Xhosa Dictionary for Nurses as a case study. This is accomplished by creating a linked data framework in which data are expressed in the form of RDF triples and URIs, in a manner that allows for extensibility to a multilingual resource. Click languages with characters not typically represented by the Roman alphabet are also considered. The purpose of this linked data framework is to define each lexical entry as "historically dynamic" instead of "ontologically static" (Rafferty, 2016:5). For a framework whose instances are in constant evolution, focus is thus given to the management of provenance and the generation of linked data therefrom. The output is an implementation framework which provides methodological guidelines for similar language resources in the interdisciplinary field of Library and Information Science.
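As a minimal sketch of what such a framework produces (the URIs and the sample entry are illustrative and not taken from the dictionary itself), a bilingual lexical entry can be expressed as RDF-style triples in the spirit of the OntoLex-Lemon vocabulary, with a provenance statement that keeps the entry traceable to its print source:

```python
# Minimal sketch: one bilingual lexical entry as RDF-style triples,
# loosely following OntoLex-Lemon, plus a PROV-style provenance link.
# URIs and the sample entry are illustrative.

ENTRY = "ex:entry/nurse"

triples = [
    (ENTRY, "rdf:type", "ontolex:LexicalEntry"),
    (ENTRY, "ontolex:canonicalForm", "ex:form/nurse"),
    ("ex:form/nurse", "ontolex:writtenRep", '"nurse"@en'),
    (ENTRY, "vartrans:translatableAs", "ex:entry/umongikazi"),
    ("ex:entry/umongikazi", "ontolex:canonicalForm", "ex:form/umongikazi"),
    ("ex:form/umongikazi", "ontolex:writtenRep", '"umongikazi"@xh'),
    # Provenance keeps the entry "historically dynamic": each statement
    # can be traced back to the print source it was digitised from.
    (ENTRY, "prov:wasDerivedFrom", "ex:source/english-xhosa-dictionary-for-nurses"),
]

def to_ntriples(ts):
    """Serialise triples as simple N-Triples-like lines."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in ts)

print(to_ntriples(triples))
```

Because the translation link is a triple rather than a column in a table, adding a third language later means adding triples, not restructuring the data, which is what makes the framework extensible to a multilingual resource.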
Linked open data and ontologies for the description of cultural heritage: criteria for the design of a reasoned registry
The thesis addresses the topic of the semantic web and the publication of cultural heritage information as linked open data. In particular, the research focuses on ontology registries, that is, tools that formally describe the ontology models available on the web and facilitate their discovery and evaluation, encouraging their reuse and easing processes of semantic alignment and interoperability. Ontology registries respond effectively to the absence of reference and orientation tools in the conceptual modelling of information resources, and they have been successfully tested in several domains, but they remain unprecedented in the cultural field.
A detailed examination of the initiatives carried out over the last decade in the cultural heritage field clearly revealed the lack of a consolidated epistemological framework for the conceptual modelling of information resources, despite the numerous ontologies produced for the many linked open data publication projects. As a consequence, it is far from easy to gain an exhaustive view of all the ontologies available for a given field of interest, or to obtain, in a straightforward and systematic way, a reliable assessment of their representational capacity and degree of semantic interoperability.
The analysis of the main ontology registries developed so far outside the cultural heritage domain made it possible to identify and define the requirements of an ontology registry for cultural heritage (named CLOVER, Culture Linked Open Vocabularies Extensible Registry) and to develop its ontology. The ADMS-AP_IT ontology (Asset Description Metadata Schema, Application Profile, Italy) was drafted following a systematic analysis and critical evaluation of pre-existing ontologies conceived for similar purposes. It was submitted to AgID, which included it in OntoPiA, the network of ontologies and controlled vocabularies of the Italian public administration. This ontology represents a point of arrival for the research project, but also a starting point for further investigation of these topics: in this sense, its inclusion in the OntoPiA network is a significant opportunity to test its applicability and improve its quality.
Privacy-Preserving Reengineering of Model-View-Controller Application Architectures Using Linked Data
When a legacy system's software architecture cannot be redesigned, implementing additional privacy requirements is often complex, unreliable and costly to maintain. This paper presents a privacy-by-design approach to reengineer web applications as linked data-enabled and implement access control and privacy preservation properties. The method is based on the knowledge of the application architecture, which for the Web of data is commonly designed on the basis of a model-view-controller pattern. Whereas wrapping techniques commonly used to link data of web applications duplicate the security source code, the new approach allows for the controlled disclosure of an application's data, while preserving non-functional properties such as privacy preservation. The solution has been implemented and compared with existing linked data frameworks in terms of reliability, maintainability and complexity.
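The core observation, that in an MVC application the controller mediates every request and is therefore the one place to enforce disclosure policy, can be sketched as follows. This is an illustration of the general idea under assumed names and data, not the paper's implementation:

```python
# Minimal sketch (all names illustrative): enforcing access control for
# linked-data disclosure once in the controller, instead of duplicating
# security code in a wrapper around the data layer.

MODEL = {  # model layer: application data
    "alice": {"name": "Alice", "email": "alice@example.org", "public": False},
    "acme":  {"name": "ACME",  "email": "info@acme.example", "public": True},
}

PRIVATE_FIELDS = {"email"}  # fields never disclosed to anonymous users

def view_as_triples(key, record):
    """View layer: render a record as RDF-style triples."""
    return [(f"ex:{key}", f"ex:{field}", value)
            for field, value in record.items() if field != "public"]

def controller(key, authenticated=False):
    """Controller: apply the privacy policy before the view renders anything."""
    record = MODEL.get(key)
    if record is None:
        return []
    if not record["public"] and not authenticated:
        return []  # non-public resources are not disclosed at all
    if not authenticated:
        # strip private fields for anonymous requests
        record = {f: v for f, v in record.items() if f not in PRIVATE_FIELDS}
    return view_as_triples(key, record)

print(controller("acme"))   # public fields only
print(controller("alice"))  # [] : private resource, anonymous request
print(controller("alice", authenticated=True))
```

Because the policy lives in the controller, the model and view stay unchanged, which is what makes the approach suitable for legacy systems whose architecture cannot be redesigned.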
Linked data approach in accessing geospatial big data
Today, linked data is frequently associated with Geographic Information Systems (GIS), as its technology stack helps alleviate geospatial data integration issues. Geospatial data have become ubiquitous, and these data can be geo-referenced. One type of georeferenced data that is insufficiently available in Malaysia is physical oceanographic data. Fortunately, most earth observation agencies have granted access to data obtained from satellite altimetry. At the same time, the exponential growth of geospatial data, together with its complexity and diversity, has led to a big data problem and made information sharing and exchange on the web more complicated. To address this issue, linked data can be used to handle geospatial big data. Linked data is one of the best practices for exposing, sharing, publishing and connecting structured data on the web. This study explored linked data as an approach to providing access to Malaysian physical oceanography datasets on the web, allowing the data to be standardised in a machine-readable format. The research reviewed existing software tools used to publish linked data, identified an appropriate tool to generate Resource Description Framework (RDF) representations of geographical data, and built a physical oceanography data website based on linked data principles. Initially, document analysis was conducted to review the existing linked data tools that have been used for geospatial data. Various scholarly articles, journals, tutorials and web pages were used as references to investigate the use of linked data tools. Based on the review, five software tools, namely Geometry2RDF, TripleGeo, Datalift, OpenLink Virtuoso and KARMA, were identified as appropriate tools to generate the RDF. Each of these tools has its own capabilities and functionality.
Next, the tools were compared with one another, based on the literature review, to identify the tool best able to manage georeferenced oceanographic data. After the comparison, the study identified Datalift as the best tool for transforming shapefiles into RDF format. Finally, a web-based information system was built to publish the linked data for interlinking and sharing by web users. In conclusion, this study introduced an alternative way to publish and access geospatial data, particularly physical oceanography datasets, using linked data principles. Such an approach would assist stakeholders and unveil information within the big data, thereby enriching the discovery of geospatial information on the web.
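The shapefile-to-RDF step can be illustrated with a minimal sketch. This is not the Datalift pipeline itself; the URIs, field names, and sample record are assumptions made for illustration, using a GeoSPARQL-style WKT literal for the geometry:

```python
# Minimal sketch (illustrative, not the Datalift pipeline): turning
# tabular oceanographic records with coordinates into RDF triples that
# carry the geometry as a GeoSPARQL-style WKT literal.

records = [
    {"id": "stn01", "name": "Station 1", "lat": 5.35, "lon": 103.15, "sst_c": 29.4},
]

def record_to_triples(rec):
    """Map one record to RDF-style triples; WKT uses lon/lat order."""
    subj = f"ex:station/{rec['id']}"
    wkt = f'"POINT({rec["lon"]} {rec["lat"]})"^^geo:wktLiteral'
    return [
        (subj, "rdf:type", "ex:OceanStation"),
        (subj, "rdfs:label", f'"{rec["name"]}"'),
        (subj, "geo:asWKT", wkt),
        (subj, "ex:seaSurfaceTempC", str(rec["sst_c"])),
    ]

triples = [t for rec in records for t in record_to_triples(rec)]
for s, p, o in triples:
    print(f"{s} {p} {o} .")
```

Once the data are in this form, they can be loaded into a triple store such as OpenLink Virtuoso and queried or interlinked with other published datasets.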
Query Optimization Techniques For Scaling Up To Data Variety
Even though Data Lakes are efficient in terms of data storage, they increase the complexity of query processing; this can lead to expensive query execution. Hence, novel techniques for generating query execution plans are needed, and they must be able to exploit the main characteristics of Data Lakes. Ontario is a federated query engine capable of processing queries over heterogeneous data sources. Ontario uses source descriptions based on RDF Molecule Templates, i.e., abstract descriptions of the properties belonging to the entities in the unified schema of the data in the Data Lake. This thesis proposes new heuristics tailored to the problem of query processing over heterogeneous data sources, including heuristics specifically designed for certain data models. The proposed heuristics are integrated into the Ontario query optimizer. Ontario is compared to state-of-the-art RDF query engines in order to study the overhead introduced by considering heterogeneity during query processing. The results of the empirical evaluation suggest that there is no significant overhead when considering heterogeneity. Furthermore, the baseline version of Ontario is compared to two different sets of additional heuristics, i.e., heuristics specifically designed for certain data models and heuristics that do not consider the data model. The analysis of the experimental results shows that source-specific heuristics are able to improve query performance. Ontario's optimization techniques generate effective and efficient query plans that can be executed over heterogeneous data sources in a Data Lake.
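The role of molecule-template-style source descriptions in planning can be sketched as follows. The sources, entity types, and predicates are hypothetical; the point is only how a description of which properties each source can answer drives query decomposition:

```python
# Minimal sketch (names illustrative): source selection in a federated
# engine using RDF-Molecule-Template-like descriptions, i.e. which
# properties of an entity type each Data Lake source can answer.

SOURCE_DESCRIPTIONS = {
    "csv_sensors":  {"ex:Sensor": {"ex:id", "ex:location"}},
    "rdf_readings": {"ex:Sensor": {"ex:id", "ex:reading", "ex:timestamp"}},
}

def select_sources(entity_type, predicates):
    """Return, per predicate, the sources whose template covers it."""
    plan = {}
    for pred in predicates:
        plan[pred] = [src for src, desc in SOURCE_DESCRIPTIONS.items()
                      if pred in desc.get(entity_type, set())]
    return plan

# A query asking for a sensor's location and reading must be decomposed
# across both sources, since no single template covers both predicates:
plan = select_sources("ex:Sensor", ["ex:location", "ex:reading"])
print(plan)  # {'ex:location': ['csv_sensors'], 'ex:reading': ['rdf_readings']}
```

A real optimizer would then order the resulting subqueries and choose join methods using heuristics such as those the thesis proposes; this sketch covers only the source-selection step.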
The role of volunteered geographic information in land administration systems in developing countries
PhD Thesis. Developing countries, especially in Africa, are faced with a lack of formally registered land. The limited records available are outdated, inaccurate and unreliable, which makes it a challenge to properly administer and manage land and its resources. Moreover, the limited maintenance budgets prevalent in these countries make it difficult for organisations to conduct regular systematic updates of geographic information. Despite these challenges, geographic information still forms a major component of effective land administration. For a land administration system (LAS) to remain useful, it must reflect realities on the ground, and this can only be achieved if land information is reported regularly. However, if changes in land are not captured in properly administered land registers, LAS lose societal relevance and are eventually replaced by informal systems. Volunteered Geographic Information (VGI) can address these LAS challenges by providing timely, affordable, up-to-date, flexible, and fit-for-purpose (FFP) land information to support the limited current systems. Nonetheless, the involvement of volunteers, who in most cases are untrained or non-experts in handling geographic information, implies that VGI can be of varying quality. Thus, VGI is characterised by unstructured, heterogeneous, unreliable data, which makes data integration for value-added purposes difficult to effect. These quality challenges can make land authorities reluctant to incorporate the contributed datasets into their official databases. This research has developed an innovative approach for establishing the quality and credibility of VGI such that it can be considered in LAS on an FFP basis. However, verifying volunteer efforts can be difficult without reference to ground truth, a situation prevalent in many developing countries. Therefore, a novel Trust and Reputation Modelling (TRM) methodology is proposed as a suitable technique to effect such VGI validation. TRM relies on the view that the public can police themselves, establishing "proxy" measures of VGI quality and of the credibility of volunteers, thus facilitating the use of VGI on an FFP basis in LAS. The output of this research is a conceptual participatory framework for FFP land administration based on VGI. The framework outlines the major aspects (social, legal, technical, and institutional) necessary for establishing a participatory FFP LAS in developing countries. University of Botswana.
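The "public policing themselves" idea behind TRM can be sketched with a simple beta-style reputation score. All parameters and thresholds here are illustrative assumptions, not the thesis's actual model:

```python
# Minimal sketch (all parameters illustrative): a beta-reputation-style
# score in the spirit of Trust and Reputation Modelling. Contributions are
# confirmed or disputed by peer volunteers, and a contributor's reputation
# is the smoothed fraction of confirmed contributions, used as a "proxy"
# measure of quality in the absence of ground truth.

def reputation(confirmed, disputed, prior_positive=1, prior_negative=1):
    """Expected probability that the next contribution is good (beta mean)."""
    return (confirmed + prior_positive) / (
        confirmed + disputed + prior_positive + prior_negative)

def accept_contribution(contributor_rep, peer_confirmations, peer_disputes,
                        threshold=0.7):
    """Proxy quality measure: blend peer feedback with contributor reputation."""
    votes = peer_confirmations + peer_disputes
    if votes == 0:
        return contributor_rep >= threshold  # fall back on reputation alone
    peer_score = peer_confirmations / votes
    combined = 0.5 * contributor_rep + 0.5 * peer_score
    return combined >= threshold

rep = reputation(confirmed=8, disputed=1)  # an experienced volunteer
print(round(rep, 2))                       # 0.82
print(accept_contribution(rep, peer_confirmations=3, peer_disputes=0))
```

Contributions that clear the threshold could then enter the land register flagged as FFP data, while the rest are queued for expert review, which is the kind of graded acceptance a participatory LAS needs.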