9 research outputs found

    Linked Vocabulary Recommendation Tools for Internet of Things: A Survey

    The Semantic Web emerged with the vision of eased integration of heterogeneous, distributed data on the Web. The approach fundamentally relies on the linkage between and reuse of previously published vocabularies to facilitate semantic interoperability. In recent years, the Semantic Web has been perceived as a potential enabling technology to overcome interoperability issues in the Internet of Things (IoT), especially for service discovery and composition. Despite the importance of making vocabulary terms discoverable and selecting the most suitable ones in forthcoming IoT applications, no state-of-the-art survey of tools achieving such recommendation tasks exists to date. This survey closes this gap by specifying an extensive evaluation framework and assessing linked vocabulary recommendation tools. Furthermore, we discuss challenges and opportunities of vocabulary recommendation and related tools in the context of emerging IoT ecosystems. Overall, 40 recommendation tools for linked vocabularies were evaluated, both empirically and experimentally. Some of the key findings include that (i) many tools neglect to thoroughly address both the curation of a vocabulary collection and effective selection mechanisms; (ii) modern information retrieval techniques are underrepresented; and (iii) the reviewed tools that emerged from Semantic Web use cases are not yet sufficiently extended to fit today's IoT projects.

    TermPicker: Recommendations of Vocabulary Terms for Reuse when Modeling Linked Open Data

    Reusing terms from Resource Description Framework (RDF) vocabularies when modeling data as Linked Open Data (LOD) is difficult and, without additional guidance, far from trivial. This work proposes and evaluates TermPicker: a novel approach that alleviates this situation by recommending vocabulary terms based on information about how other data providers modeled their data as LOD. TermPicker gathers such information and represents it via so-called schema-level patterns (SLPs), which are used to calculate a ranked list of RDF vocabulary term recommendations. The ranking of the recommendations is based either on the machine learning approach "Learning To Rank" (L2R) or on the data mining approach "Association Rule" mining (AR). TermPicker is evaluated in a two-fold way. First, an automated cross-validation evaluates TermPicker's predictions based on the Mean Average Precision (MAP) as well as the Mean Reciprocal Rank at the first five positions (MRR@5). Second, a user study examines which of the recommendation methods (L2R vs. AR) better aids real users in reusing RDF vocabulary terms in a practical setting. The participants, i.e., TermPicker's potential users, are asked to reuse vocabulary terms while modeling three data sets as LOD, and they receive either L2R-based recommendations, AR-based recommendations, or no recommendations. The results of the cross-validation show that, using SLPs, TermPicker achieves 35% higher MAP and MRR@5 values compared to using solely the features based on the typical reuse strategies. Both the L2R-based and the AR-based recommendation methods were able to calculate lists of recommendations with MAP = 0.75 and MRR@5 = 0.80. However, the results of the user study show that the majority of the participants favor the AR-based recommendations. The outcome of this work demonstrates that TermPicker alleviates the situation of searching for classes and properties that other data providers on the LOD cloud use to represent similar data.
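
    The two evaluation measures used above can be stated very compactly in code. The following Java sketch (with hypothetical helper names that are not part of TermPicker itself) shows how the reciprocal rank within the top five positions and the average precision of a single ranked recommendation list could be computed; MRR@5 and MAP are then simply the means of these per-list values over all test queries.

        import java.util.List;
        import java.util.Set;

        /** Illustrative ranking metrics; class and method names are assumptions, not TermPicker code. */
        public class RankingMetrics {

            /** Reciprocal rank of the first relevant term within the top k positions (0 if none appears). */
            public static double reciprocalRankAtK(List<String> ranked, Set<String> relevant, int k) {
                int limit = Math.min(k, ranked.size());
                for (int i = 0; i < limit; i++) {
                    if (relevant.contains(ranked.get(i))) {
                        return 1.0 / (i + 1);
                    }
                }
                return 0.0;
            }

            /** Average precision over one ranked recommendation list. */
            public static double averagePrecision(List<String> ranked, Set<String> relevant) {
                int hits = 0;
                double sum = 0.0;
                for (int i = 0; i < ranked.size(); i++) {
                    if (relevant.contains(ranked.get(i))) {
                        hits++;
                        sum += (double) hits / (i + 1); // precision at this relevant position
                    }
                }
                return relevant.isEmpty() ? 0.0 : sum / relevant.size();
            }
        }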

    Ontology construction in practice: a case study applied to the obstetric domain

    The volume of information and the variety of information sources create challenges for information integration. The need to integrate information across distinct information systems has driven research into alternatives capable of providing semantic interoperability between systems, that is, the specification of information without generating ambiguity. As a contribution from Information Science, ontologies serve as an alternative for the semantic standardization of information. However, the ontology construction process still raises many doubts among researchers. Many authors describe methodologies for building ontologies, but there is a gap between the described methods and their application in practice. This work seeks to demonstrate, in practice, the construction of an ontology that adopted two well-established methodologies: ontological realism and the NeOn methodology. Regarding the methods and technical procedures employed, this research is a case study that investigates the practice of building a biomedical ontology in the obstetric domain, with the aim of describing and explaining the ontology construction process carried out. It is hoped to contribute to the advancement of research on ontology construction in the field of Information Science, given its application to solving problems of information organization and retrieval in information environments across various scientific fields.

    Linked democracy: foundations, tools, and applications

    Chapter 1: Introduction to Linked Data
    Abstract: This chapter presents Linked Data, a new form of distributed data on the web which is especially suitable to be manipulated by machines and to share knowledge. By adopting the linked data publication paradigm, anybody can publish data on the web, relate it to data resources published by others and run artificial intelligence algorithms in a smooth manner. Open linked data resources may democratize the future access to knowledge by the mass of internet users, either directly or mediated through algorithms. Governments have enthusiastically adopted these ideas, which is in harmony with the broader open data movement.
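
    As a rough illustration of the publication paradigm sketched in this chapter, the following Java snippet uses Apache Jena (chosen here only as a widely used RDF library; the chapter does not prescribe it) to describe a local resource and link it to a resource published by someone else. The http://example.org namespace and the choice of owl:sameAs as the linking property are illustrative assumptions.

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Resource;
        import org.apache.jena.vocabulary.OWL;
        import org.apache.jena.vocabulary.RDFS;

        public class PublishLinkedData {
            public static void main(String[] args) {
                Model model = ModelFactory.createDefaultModel();

                // A resource published under our own (hypothetical) namespace ...
                Resource vienna = model.createResource("http://example.org/data/city/vienna");
                vienna.addProperty(RDFS.label, "Vienna");

                // ... linked to a resource already published by someone else.
                Resource dbpediaVienna = model.createResource("http://dbpedia.org/resource/Vienna");
                vienna.addProperty(OWL.sameAs, dbpediaVienna);

                // Serialize the small graph so it can be put on the web.
                model.write(System.out, "TURTLE");
            }
        }

    Publishing the resulting graph at a web-accessible location is what makes the data part of the linked data cloud and usable by the algorithms the chapter describes.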

    Linked Democracy

    This open access book shows the factors linking information flow, social intelligence, rights management and modelling with epistemic democracy, offering licensed linked data along with information about the rights involved. This model of democracy for the web of data brings new challenges for the social organisation of knowledge, collective innovation, and the coordination of actions. Licensed linked data, licensed linguistic linked data, rights expression languages, semantic web regulatory models, electronic institutions, and artificial socio-cognitive systems are examples of regulatory and institutional design (regulation by design). The web has been massively populated with both data and services, and semantically structured data, the linked data cloud, facilitates and fosters human-machine interaction. Linked data aims to create ecosystems that make it possible to browse, discover, exploit and reuse data sets for applications. Rights expression languages semi-automatically regulate the use and reuse of content.

    Linked open data and ontologies for describing cultural heritage: criteria for the design of a reasoned registry

    The thesis addresses the Semantic Web and the publication of cultural heritage information as linked open data. In particular, the research focuses on ontology registries, that is, tools that formally describe the ontological models available on the web and facilitate their discovery and evaluation, thereby encouraging their reuse and easing semantic alignment and interoperability processes. Ontology registries effectively address the lack of reference and guidance tools in the conceptual modeling of information resources and have been tested successfully in several domains, but they are still unprecedented in the cultural heritage field. A detailed review of the initiatives carried out in the cultural heritage sector over the last decade clearly showed the lack of a consolidated epistemological framework for the conceptual modeling of information resources, despite the many ontologies created for the numerous linked open data publication projects. As a consequence, it is far from easy to gain exhaustive knowledge of all the ontologies available for one's area of interest and to obtain, in a straightforward and systematic way, a reliable assessment of their representational capacity and their degree of semantic interoperability. The analysis of the main ontology registries developed so far outside the cultural heritage domain made it possible to identify and define the requirements of an ontology registry for cultural heritage (named CLOVER, Culture – Linked Open Vocabularies – Extensible Registry) and to develop the corresponding ontology. The ADMS-AP_IT ontology (Asset Description Metadata Schema – Application Profile – Italy) was drafted following a systematic analysis and critical assessment of pre-existing ontologies conceived for similar purposes. It was submitted to AgID, which included it in OntoPiA, the network of ontologies and controlled vocabularies of the Italian public administration. This ontology represents both an end point of the research project and a starting point for further investigation of these topics: in this sense, its inclusion in the OntoPiA network of public administration ontologies and controlled vocabularies is a significant opportunity to test its applicability and improve its quality.

    A Framework to Support Developers in the Integration and Application of Linked and Open Data

    In recent years, the number of freely available Linked and Open Data datasets has multiplied into tens of thousands. The number of applications taking advantage of it, however, has not. Thus, large portions of potentially valuable data remain unexploited and are inaccessible to lay users, and the upfront investment in releasing data in the first place is hard to justify. The lack of applications needs to be addressed in order not to undermine the efforts put into Linked and Open Data. In existing research, strong indicators can be found that the dearth of applications is due to a lack of pragmatic, working architectures supporting these applications and guiding developers. In this thesis, a new architecture for the integration and application of Linked and Open Data is presented. Fundamental design decisions are backed up by two studies. Firstly, based on real-world Linked and Open Data samples, characteristic properties are identified. A key finding is that large amounts of structured data display tabular structures, do not use clear licensing, and involve multiple different file formats. Secondly, following on from that study, a comparison of storage choices in relevant query scenarios is made. It includes the de facto standard storage choice in this domain, Triple Stores, as well as relational and NoSQL approaches. Results show significant performance deficiencies of some technologies in certain scenarios. Consequently, when integrating Linked and Open Data in scenarios with application-specific entities, the first choice of storage is relational databases. Combining these findings with related best practices from existing research, a prototype framework is implemented using Java 8 and Hibernate. As a proof of concept, it is employed in an existing Linked and Open Data integration project. Thereby, it is shown that a best-practice architectural component can be introduced successfully while the development effort to implement specific program code is reduced. Thus, the present work provides an important foundation for the development of semantic applications based on Linked and Open Data and may lead to a broader adoption of such applications.
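
    The abstract does not include code, so the following is only a minimal sketch, assuming the Java 8 / JPA (Hibernate) stack named above, of how an application-specific entity derived from a Linked and Open Data source might be mapped to a relational table; the class, table, and column names are invented for illustration.

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.Table;

        /** Hypothetical application-specific entity populated from a Linked/Open Data dataset. */
        @Entity
        @Table(name = "poi")
        public class PointOfInterest {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;

            /** URI of the original linked data resource this row was derived from. */
            @Column(name = "source_uri", length = 512)
            private String sourceUri;

            @Column(name = "label")
            private String label;

            @Column(name = "license")
            private String license;

            // Getters and setters omitted for brevity.
        }

    Storing such application-specific entities relationally mirrors the storage recommendation reported above for scenarios with application-specific entities, rather than routing every query through a triple store.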

    An employer demand intelligence framework

    Employer demand intelligence is crucial to ensure that accurate and reliable education, workforce and immigration related decisions are made. To date, methods have been manually intensive and expensive, providing insufficient scope of the information required to address such important economic implications. This research developed an Employer Demand Intelligence Framework (EDIF) to address detailed employer demand intelligence requirements. To further the EDIF's functionality, a semi-automated Employer Demand Identification Tool (EDIT) was developed that continuously provides such intelligence.

    Proceedings of the 29th Journées Francophones d'Ingénierie des Connaissances, IC 2018

    International audience.