
    RDF query and protocol language used for the description and representation of web ontologies

    The purpose of this article is to present the metadata structure based on RDF (Resource Description Framework) and the way in which queries can be made using SPARQL (SPARQL Protocol and RDF Query Language), as a foundation for searching the Semantic Web. It also describes what must be considered when building a Web Ontology, and the tools that can help the software developer make queries using SPARQL.
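
    As a minimal illustration of the kind of query the article discusses, the sketch below runs a SPARQL query over an RDF graph using the Python rdflib library. The ontology URL is a placeholder and the query simply lists declared OWL classes; it is not taken from the article itself.

    ```python
    from rdflib import Graph

    # Load an RDF document into an in-memory graph.
    # The URL is a placeholder; any RDF/XML or Turtle source would work.
    g = Graph()
    g.parse("http://example.org/ontology.owl")

    # A simple SPARQL query: list every class declared in the ontology.
    query = """
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX owl: <http://www.w3.org/2002/07/owl#>
        SELECT ?cls WHERE { ?cls rdf:type owl:Class }
    """

    for row in g.query(query):
        print(row.cls)
    ```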

    Ontology model for zakat hadith knowledge based on causal relationship, semantic relatedness and suggestion extraction

    Hadith is the second most important source used by all Muslims. However, semantic ambiguity in the hadith raises issues such as misinterpretation, misunderstanding, and misjudgement of the hadith’s content; how to tackle this semantic ambiguity is the research question addressed here. The Zakat hadith data should be expressed semantically by moving from surface-level semantics to a deeper sense of the intended meaning. This can be achieved using an ontology model covering three main aspects: semantic relationship extraction, causal relationship representation, and suggestion extraction. This study aims to resolve the semantic ambiguity in hadith, particularly on the topic of Zakat, by proposing a semantic approach to resolve the ambiguity, representing causal relationships in the Zakat ontology model, proposing methods to extract suggestion polarity in hadith, and building the ontology model for the Zakat topic. The Zakat topic was selected based on survey findings that respondents still lack knowledge and understanding of the Zakat process. Four hadith books (Sahih Bukhari, Sahih Muslim, Sunan Abu Dawud, and Sunan Ibn Majah), covering 334 concept words and 247 hadiths, were analysed. The Zakat ontology modelling covers the following phases: preliminary study, source selection and data collection, data pre-processing and analysis, and development and evaluation of the ontology model. Domain experts in language, Zakat hadith, and ontology evaluated the Zakat ontology and found that 85% of the Zakat concepts were defined correctly. The Ontology Usability Scale was used to evaluate the final ontology model: an expert in ontology development evaluated the ontology, which was built in Protégé OWL, while 80 respondents evaluated the ontology concepts through a PHP-based system. The evaluation results show that the Zakat ontology resolves the ambiguity and misunderstanding of the Zakat process in the Zakat hadith. The Zakat ontology model also allows practitioners in natural language processing (NLP), hadith, and ontology to extract Zakat hadith based on a reusable formal model, together with the causal relationships and suggestion polarity of the Zakat hadith.
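
    To make the notion of representing causal relationships in an ontology concrete, here is a small sketch using the Python owlready2 library. This is not the authors' Zakat model: the IRI, class names, property name, and the example causal pair are all invented for illustration.

    ```python
    from owlready2 import Thing, ObjectProperty, get_ontology

    # A hypothetical, illustrative ontology; the IRI and all names are invented.
    onto = get_ontology("http://example.org/zakat.owl")

    with onto:
        class Concept(Thing):
            pass

        class Cause(Concept):
            pass

        class Effect(Concept):
            pass

        # An object property linking a cause to its effect, standing in
        # for the causal relationships described in the abstract.
        class causes(ObjectProperty):
            domain = [Cause]
            range = [Effect]

    # A toy causal pair, purely for illustration.
    paying_zakat = Cause("PayingZakat")
    purified_wealth = Effect("PurifiedWealth")
    paying_zakat.causes = [purified_wealth]

    onto.save(file="zakat_sketch.owl")
    ```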

    Interoperability of heterogeneous Systems of Systems: from requirements to a reference architecture

    Interoperability stands as a critical hurdle in developing and overseeing distributed and collaborative systems. It is therefore imperative to gain a deep understanding of the primary obstacles hindering interoperability and of the essential criteria that systems must satisfy to achieve it. With this objective, in the initial phase of this research we conducted a survey questionnaire involving stakeholders and practitioners engaged in distributed and collaborative systems, which resulted in the identification of eight essential interoperability requirements along with their corresponding challenges. The second part of our study comprised a critical review of the literature to assess the effectiveness of prevailing conceptual approaches and associated technologies in addressing the identified requirements. This analysis led to a set of components that promise to deliver the desired interoperability by addressing those requirements. These elements then form the foundation for the third part of our study: a reference architecture for interoperability-fostering frameworks, which is proposed in this paper. The results of our research can significantly impact the software engineering of interoperable systems, not only by introducing their fundamental requirements and the best practices to address them, but also by identifying the key elements of a framework facilitating interoperability in Systems of Systems.
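
    One recurring element of interoperability architectures is a mediation component that translates between the heterogeneous message formats of constituent systems. The toy Python sketch below illustrates that general idea only; the formats, field names, and canonical model are invented and are not components of the reference architecture proposed in the paper.

    ```python
    import json
    from dataclasses import dataclass

    # A toy mediator: both constituent systems are mapped onto a shared
    # canonical model rather than translated pairwise.

    @dataclass
    class CanonicalReading:
        sensor_id: str
        celsius: float

    def from_system_a(payload: str) -> CanonicalReading:
        """System A (hypothetical) sends JSON with Fahrenheit temperatures."""
        msg = json.loads(payload)
        return CanonicalReading(msg["id"], (msg["temp_f"] - 32) * 5 / 9)

    def to_system_b(reading: CanonicalReading) -> str:
        """System B (hypothetical) expects a flat 'id;celsius' line format."""
        return f"{reading.sensor_id};{reading.celsius:.1f}"

    if __name__ == "__main__":
        # End-to-end translation through the canonical model.
        print(to_system_b(from_system_a('{"id": "s1", "temp_f": 98.6}')))
    ```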

    An Automated Method to Enrich and Expand Consumer Health Vocabularies Using GloVe Word Embeddings

    Clear language makes communication easier between any two parties. However, a layman may have difficulty communicating with a professional because he or she does not understand the specialized terms common to the domain. In healthcare, it is rare to find a layman knowledgeable in medical jargon, which can lead to a poor understanding of their condition and/or treatment. To bridge this gap, several professional vocabularies and ontologies have been created to map laymen's medical terms to professional medical terms and vice versa. Many of these vocabularies are built manually or semi-automatically, requiring large investments of time and human effort, which slows their growth. In this dissertation, we present an automatic method to enrich existing concepts in a medical ontology with additional laymen's terms and to expand the ontology by adding laymen's terms to concepts that do not yet have them. Our work has the benefit of being applicable to vocabularies in any domain. Our entirely automatic approach uses machine learning, specifically Global Vectors for Word Embeddings (GloVe), on a corpus collected from a social media healthcare platform to extend and enhance consumer health vocabularies. We improve these vocabularies by incorporating synonyms and hyponyms from the WordNet ontology. By performing iterative feedback using GloVe's candidate terms, we can boost the number of word occurrences in the co-occurrence matrix, allowing our approach to work with a smaller training corpus. Our novel algorithms and GloVe were evaluated using two laymen's datasets from the National Library of Medicine (NLM): the Open-Access and Collaborative Consumer Health Vocabulary (OAC CHV) and the MedlinePlus Healthcare Vocabulary. For our first goal, enriching concepts, the results show that GloVe was able to find new laymen's terms with an F-score of 48.44%; our best algorithm, which enhanced the corpus with synonyms from WordNet, outperformed GloVe with a relative F-score improvement of 25%. For our second goal, expanding the number of concepts with related laymen's terms, our synonym-enhanced GloVe outperformed GloVe with a relative F-score improvement of 63%. The results were generally promising and the approach can be applied not only to enrich and expand laymen's vocabularies for medicine but to any domain ontology, given an appropriate corpus. Our approach suits narrow domains that may not have the huge training corpora typically used with word-embedding approaches; in essence, by incorporating an external source of linguistic information, WordNet, and expanding the training corpus, we get more out of the training corpus. Our system can help build an application that lets patients read their physician's letters more clearly and understandably. Moreover, its output can be used to improve the results of healthcare search engines, entity recognition systems, and many other applications.
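
    As a rough sketch of the kind of pipeline the dissertation describes, the Python code below loads pre-trained GloVe vectors from their plain-text format, ranks nearest-neighbour candidate terms for a professional term by cosine similarity, and augments the candidates with WordNet synonyms and hyponyms via NLTK. This is an illustrative reconstruction, not the authors' implementation; the file path and example term are placeholders.

    ```python
    import numpy as np
    from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

    def load_glove(path):
        """Parse GloVe's plain-text format: a word followed by its vector."""
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
        return vectors

    def nearest_terms(vectors, term, k=10):
        """Rank vocabulary words by cosine similarity to the given term."""
        v = vectors[term]
        v = v / np.linalg.norm(v)
        scores = {
            w: float(np.dot(v, u / np.linalg.norm(u)))
            for w, u in vectors.items() if w != term
        }
        return sorted(scores, key=scores.get, reverse=True)[:k]

    def wordnet_expansion(term):
        """Collect WordNet synonyms and hyponyms as extra candidates."""
        candidates = set()
        for syn in wn.synsets(term):
            candidates.update(l.replace("_", " ") for l in syn.lemma_names())
            for hypo in syn.hyponyms():
                candidates.update(l.replace("_", " ") for l in hypo.lemma_names())
        candidates.discard(term)
        return candidates

    if __name__ == "__main__":
        glove = load_glove("glove.6B.100d.txt")  # placeholder path
        candidates = set(nearest_terms(glove, "physician"))  # example term
        candidates |= wordnet_expansion("physician")
        print(sorted(candidates))
    ```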