
    Filtering Real-Time Linked Data Streams

    The amount of linked data on the Web has increased rapidly in recent years. Linked data, often encoded in RDF, is considered five-star data in the context of open data due to its usability and potential. Although there has been progress in the development of linked data technologies and data processing models, the full potential of linked data has not yet been realized. One of the challenges is reasoning over linked data streams, which has only recently gained momentum in research. As a result, query languages such as C-SPARQL have been proposed and corresponding stream reasoning engines have been implemented. However, such implementations have so far been evaluated mostly in academic settings.
This work describes a fully functional proof-of-concept implementation of a stream reasoning system for message-oriented systems, capable of exposing a message queue as a linked data stream that can be filtered using C-SPARQL, one of the earliest linked data stream processing engines. The performance of the C-SPARQL engine, which lies at the heart of the implementation, is evaluated with the CityBench benchmark under the settings of an enterprise-scale real-time economy application, Inforegister NOW!, which is currently under development.
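The core C-SPARQL idea the abstract relies on is continuous querying over a time window of an RDF stream. A minimal sketch of that windowed-filter semantics, in plain Python with invented triples (the real engine is a Java library and uses actual SPARQL syntax):

```python
from collections import deque

# Toy sketch of C-SPARQL-style window filtering: each queue message
# carries a timestamped RDF triple (subject, predicate, object, t).
# A sliding window keeps only triples within `width` time units of
# the newest arrival; a pattern filter mimics a WHERE triple pattern.
# All URIs and readings below are made up for illustration.

def window(stream, width):
    """Yield, per arrival, the triples whose timestamp falls in
    [t_newest - width, t_newest] -- a sliding time window."""
    buf = deque()
    for triple in stream:
        buf.append(triple)
        newest = triple[3]
        while buf and buf[0][3] < newest - width:
            buf.popleft()
        yield list(buf)

def match(triples, predicate):
    """Filter a window's contents on a fixed predicate, like the
    single triple pattern `?s <predicate> ?o` in a query."""
    return [(s, p, o) for (s, p, o, t) in triples if p == predicate]

stream = [
    ("ex:sensor1", "ex:reports", "22.5", 0),
    ("ex:sensor2", "ex:status",  "ok",   5),
    ("ex:sensor1", "ex:reports", "23.1", 12),
]
last_window = None
for w in window(stream, width=10):
    last_window = w
print(match(last_window, "ex:reports"))  # triples still inside the window
```

The real engine additionally re-evaluates the registered query at a fixed step rate and supports full SPARQL over the window plus static background knowledge; this sketch only shows the window-and-filter core.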

    Semantic data integration for supply chain management: with a specific focus on applications in the semiconductor industry

    Supply Chain Management (SCM) is essential to monitor, control, and enhance the performance of SCs. Increasing globalization and diversity of Supply Chains (SCs) lead to complex SC structures, limited visibility among SC partners, and challenging collaboration caused by dispersed data silos. Digitalization is driving and transforming SCs in fundamental sectors such as the semiconductor industry. This is further accelerated by the inevitable role that semiconductor products play in electronics, IoT, and security systems. Semiconductor SCM is unique in that its operations exhibit special features, e.g., long production lead times and short product life cycles. Hence, systematic SCM is required to establish information exchange, overcome inefficiency resulting from incompatibility, and adapt to industry-specific challenges. The Semantic Web is designed for linking data and establishing information exchange. Semantic models provide high-level descriptions of the domain that enable interoperability. Semantic data integration consolidates heterogeneous data into meaningful and valuable information. The main goal of this thesis is to investigate Semantic Web Technologies (SWT) for SCM with a specific focus on applications in the semiconductor industry. As part of SCM, End-to-End SC modeling ensures visibility of SC partners and flows. Existing models are limited in the way they represent operational SC relationships beyond one-to-one structures. The scarcity of empirical data from multiple SC partners hinders the analysis of the impact of supply network partners on each other and the benchmarking of overall SC performance. In our work, we investigate (i) how semantic models can be used to standardize and benchmark SCs. Moreover, in a volatile and unpredictable environment, SC experts require methodical and efficient approaches to integrate various data sources for informed decision-making regarding SC behavior.
Thus, this work addresses (ii) how semantic data integration can help make SCs more efficient and resilient. Moreover, to secure a good position in a competitive market, semiconductor SCs strive to implement operational strategies that control demand variation, i.e., the bullwhip effect, while maintaining sustainable relationships with customers. We examine (iii) how semantic technologies can be applied to specifically support semiconductor SCs. In this thesis, we provide semantic models that integrate, in a standardized way, SC processes, structure, and flows, ensuring both an elaborate understanding of holistic SCs and the inclusion of granular operational details. We demonstrate that these models enable the instantiation of a synthetic SC for benchmarking. We contribute semantic data integration applications that enable interoperability and make SCs more efficient and resilient. Moreover, we leverage ontologies and Knowledge Graphs (KGs) to implement customer-oriented bullwhip-taming strategies. We create semantic-based approaches intertwined with Artificial Intelligence (AI) algorithms to address semiconductor industry specifics and ensure operational excellence. The results show that relying on semantic technologies contributes to rigorous and systematic SCM. We deem that better standardization, simulation, benchmarking, and analysis, as elaborated in the contributions, will help master more complex SC scenarios. SC stakeholders can better understand the domain and are thus better equipped with effective control strategies to restrain disruption accelerators such as the bullwhip effect. In essence, the proposed Semantic Web Technology-based strategies unlock the potential to increase the efficiency, resilience, and operational excellence of supply networks in general and the semiconductor SC in particular.
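The End-to-End visibility argument above amounts to being able to traverse supply relationships as a graph rather than as isolated one-to-one records. A minimal sketch, with an entirely invented mini knowledge graph of "suppliesTo" triples:

```python
# Hypothetical mini knowledge graph for an End-to-End supply chain:
# (subject, predicate, object) triples encode "suppliesTo" links that
# go beyond one-to-one structures (one node can feed many partners).
# All node names are invented placeholders.
triples = {
    ("ex:WaferFab",    "ex:suppliesTo", "ex:Assembly"),
    ("ex:WaferFab",    "ex:suppliesTo", "ex:TestHouse"),
    ("ex:Assembly",    "ex:suppliesTo", "ex:Distributor"),
    ("ex:TestHouse",   "ex:suppliesTo", "ex:Distributor"),
    ("ex:Distributor", "ex:suppliesTo", "ex:OEM"),
}

def downstream(node):
    """Transitively collect every partner reachable from `node` --
    the kind of end-to-end visibility a semantic SC model enables."""
    seen, todo = set(), [node]
    while todo:
        cur = todo.pop()
        for s, p, o in triples:
            if s == cur and o not in seen:
                seen.add(o)
                todo.append(o)
    return seen

print(sorted(downstream("ex:WaferFab")))
```

In an actual ontology-backed system the same traversal would be a SPARQL property path over the integrated graph; the point of the sketch is only that many-to-many SC structure falls out of the triple representation for free.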

    Knowledge-Driven Harmonization of Sensor Observations: Exploiting Linked Open Data for IoT Data Streams

    The rise of the Internet of Things leads to an unprecedented number of continuous sensor observations that are available as IoT data streams. Harmonization of such observations is a labor-intensive task due to heterogeneity in format, syntax, and semantics. We aim to reduce the effort of such harmonization tasks by employing a knowledge-driven approach. To this end, we pursue the idea of exploiting the large body of formalized public knowledge represented as statements in Linked Open Data.
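The harmonization task described here boils down to mapping heterogeneous observation keys and units onto one canonical vocabulary. A minimal sketch, where a hand-written mapping table stands in for the formalized background knowledge the abstract proposes to draw from Linked Open Data (all field names and conversions are invented for illustration):

```python
# Each entry maps a raw payload key to a canonical property name,
# a canonical unit, and a unit-conversion function. In the proposed
# knowledge-driven approach this table would be derived from Linked
# Open Data statements rather than written by hand.
MAPPINGS = {
    "temp":   ("temperature", "celsius", lambda v: v),
    "temp_f": ("temperature", "celsius", lambda v: (v - 32) * 5 / 9),
    "hum":    ("humidity",    "percent", lambda v: v),
}

def harmonize(observation):
    """Rewrite a raw key/value observation into canonical
    (property, value, unit) statements; unknown keys are skipped."""
    out = []
    for key, value in observation.items():
        if key in MAPPINGS:
            prop, unit, convert = MAPPINGS[key]
            out.append((prop, round(convert(value), 2), unit))
    return out

print(harmonize({"temp_f": 77.0, "hum": 40}))
```

Two sensors that disagree on key names and units ("temp" in Celsius vs. "temp_f" in Fahrenheit) thus end up emitting the same canonical statements.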

    Utilising Semantic Web Technologies for Improved Road Network Information Exchange

    Road asset data harmonisation is a challenge for Australian road and transport authorities, given their heterogeneous data standards, data formats, and tools. Classic data harmonisation techniques require huge databases with many tables, a unified metadata definition, and standardised tools to share data with others. In order to find a better way to harmonise heterogeneous road network data, this dissertation uses Semantic Web technologies to investigate fast and efficient road asset data harmonisation.

    Towards Semantically Enabled Complex Event Processing


    Collaborative recommendations with content-based filters for cultural activities via a scalable event distribution platform

    Nowadays, most people have limited leisure time, and the offer of (cultural) activities to fill this time is enormous. Consequently, picking the most appropriate events becomes increasingly difficult for end-users. This complexity of choice reinforces the necessity of filtering systems that assist users in finding and selecting relevant events. Whereas traditional filtering tools enable, e.g., keyword-based or filtered searches, innovative recommender systems draw on user ratings, preferences, and metadata describing the events. Existing collaborative recommendation techniques, developed for suggesting web-shop products or audio-visual content, have difficulties with sparse rating data and cannot cope at all with event-specific restrictions such as availability, time, and location. Moreover, aggregating, enriching, and distributing these events are additional requirements for an optimal communication channel. In this paper, we propose a highly scalable event recommendation platform that considers event-specific characteristics. Personal suggestions are generated by an advanced collaborative filtering algorithm, which is made more robust on sparse data by extending user profiles with presumable future consumptions. The events, which are described using an RDF/OWL representation of the EventsML-G2 standard, are categorized and enriched via smart indexing and linked open data sets. This metadata model enables additional content-based filters, which consider event-specific characteristics, on the recommendation list. The integration of these different functionalities is realized by a scalable and extendable bus architecture. Finally, focus group conversations were organized with external experts, cultural mediators, and potential end-users to evaluate the event distribution platform and investigate the possible added value of recommendations for cultural participation.
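The abstract's two-stage idea, densifying sparse profiles with presumable future consumptions and then applying an event-specific content filter, can be sketched in a few lines. This is a deliberately simplified toy (single nearest neighbour, Jaccard similarity, a date-only content filter); the paper's actual algorithm is more advanced, and all users, items, and dates are invented:

```python
# Stage 1: extend a sparse profile with items from the most similar
# other user ("presumable future consumptions").
# Stage 2: content-based filter on an event-specific property --
# here simply "the event has not happened yet".
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

profiles = {
    "alice": {"concert", "theatre"},
    "bob":   {"concert", "theatre", "expo"},
    "carol": {"cinema"},
}

def extend_profile(user):
    """Union the user's items with those of the nearest neighbour."""
    others = [(jaccard(profiles[user], profiles[u]), u)
              for u in profiles if u != user]
    _, nearest = max(others)
    return profiles[user] | profiles[nearest]

events = [("expo", 2024), ("cinema", 2020)]  # (name, year held)

def recommend(user, now=2023):
    liked = extend_profile(user)
    # event-specific restriction: discard events already in the past
    return [name for name, year in events if name in liked and year >= now]

print(recommend("alice"))
```

The key property the abstract claims survives even in this toy: "alice" has never rated "expo", yet it becomes recommendable because her densified profile borrows it from a similar user, while the past event is removed by the content filter regardless of predicted interest.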

    Improving Semantic Web Services Discovery Using SPARQL-Based Repository Filtering

    Semantic Web Services discovery is commonly a heavyweight task that has scalability issues when the number of services or the ontology complexity increases, because most approaches are based on Description Logics reasoning. As a higher number of services becomes available, there is a need for solutions that improve discovery performance. Our proposal tackles this scalability problem by adding a preprocessing stage based on two SPARQL queries that filter service repositories, discarding service descriptions that do not refer to any functionality or non-functional aspect requested by the user before the actual discovery takes place. This approach considerably reduces the search space for discovery mechanisms, consequently improving the overall performance of the task. Furthermore, this particular solution does not provide yet another discovery mechanism; instead, it is easily applicable to any of the existing ones, as our prototype evaluation shows. Moreover, the proposed queries are automatically generated from service requests, transparently to the user. In order to validate our proposal, this article showcases an application to the OWL-S ontology, along with a comprehensive performance analysis carried out to test and compare the results obtained from the proposed filters and current discovery approaches, discussing the benefits of our proposal.
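The prefiltering stage is, at its core, a cheap set-overlap test that runs before any expensive Description Logics matchmaking. A minimal sketch with invented service names and concept URIs (the paper performs this step with SPARQL queries over the repository, not Python):

```python
# Each service description is reduced to the set of ontology concepts
# it refers to. Before heavyweight discovery, services that mention
# none of the requested concepts are discarded. Names are placeholders.
services = {
    "svc:BookFlight": {"onto:Flight", "onto:Payment"},
    "svc:RentCar":    {"onto:Car", "onto:Payment"},
    "svc:Weather":    {"onto:Forecast"},
}

def prefilter(request_concepts):
    """Keep only services whose description overlaps the request --
    the role the two repository-filtering queries play."""
    return {name for name, concepts in services.items()
            if concepts & request_concepts}

candidates = prefilter({"onto:Flight", "onto:Payment"})
print(sorted(candidates))  # heavyweight discovery now runs on fewer services
```

Because the filter is sound but deliberately shallow (it never inspects logical subsumption), any existing DL-based matchmaker can run unchanged on the reduced candidate set, which is exactly the "applicable to any existing mechanism" property the abstract claims.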

    Analyzing Middle High German syntax with RDF and SPARQL

    The paper presents the technological foundations for an empirical study of Middle High German (MHG) syntax. We aim to analyze diachronic changes in MHG syntax using the example of direct and indirect object alternations in the middle field. In the absence of syntactically annotated corpora, we provide a rule-based shallow parser and an enrichment pipeline for the quantitative evaluation of a qualitative hypothesis. We provide a publicly available enrichment and annotation pipeline. A technologically innovative aspect is the application of CoNLL-RDF and SPARQL Update for parsing.
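The CoNLL-RDF idea named above is that each CoNLL token row becomes a set of RDF triples about a word URI, after which SPARQL Update rules can rewrite the graph, e.g., to attach shallow-parse annotations. A minimal sketch of the row-to-triples step, with a simplified, assumed column layout (ID, FORM, POS, HEAD) and invented example tokens:

```python
# Simplified CoNLL fragment: ID, FORM, POS, HEAD, tab-separated.
# The words and tags are illustrative, not taken from the paper's data.
conll = "1\tdaz\tDDART\t2\n2\tkint\tNA\t0"

def conll_to_rdf(block, sentence="s1"):
    """Turn each CoNLL row into triples about a per-sentence word URI,
    mirroring the CoNLL-RDF serialization scheme in spirit."""
    triples = []
    for line in block.splitlines():
        tid, form, pos, head = line.split("\t")
        word = f":{sentence}_{tid}"
        triples.append((word, "conll:WORD", form))
        triples.append((word, "conll:POS", pos))
        if head != "0":  # 0 marks the root, which has no HEAD triple
            triples.append((word, "conll:HEAD", f":{sentence}_{head}"))
    return triples

for t in conll_to_rdf(conll):
    print(t)
```

Once the corpus lives in this triple form, a SPARQL Update rule such as "insert a chunk annotation wherever a determiner's HEAD is a noun" is a single declarative statement over the graph, which is what makes the approach attractive for rule-based shallow parsing.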