    Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion Applications

    Ontologies are of growing interest in Data Fusion applications: they serve as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. At the same time, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion, whereas ontologies by nature describe only asserted, veracious facts about the world. Various probabilistic, fuzzy and evidential approaches already exist to fill this gap, and this paper recaps the most popular tools. However, none of them exactly meets our purposes. We therefore constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables its instances to be asserted in an uncertain manner. We also developed a Java application that reasons about these uncertain ontological instances. (Comment: Workshop on Theory of Belief Functions, Brest, France, 2010.)
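
    As a pointer to what combining such uncertain instances amounts to computationally, here is a minimal sketch of Dempster's rule of combination in Python (the paper's own tool is a Java application; the frame of discernment and the mass values below are invented for illustration):

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule of combination for two mass functions, given as
        dicts mapping focal elements (frozensets) to their masses."""
        conflict = 0.0
        combined = {}
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: the sources cannot be combined")
        return {s: m / (1.0 - conflict) for s, m in combined.items()}

    # Two hypothetical sensors reporting on the class of a detected object.
    m_sensor1 = {frozenset({"car"}): 0.6, frozenset({"car", "truck"}): 0.4}
    m_sensor2 = {frozenset({"truck"}): 0.5, frozenset({"car", "truck"}): 0.5}
    print(combine(m_sensor1, m_sensor2))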

    BUILDING A LEARNING OBJECT ONTOLOGY FOR E-LEARNING

    One important component of an e-learning website is its content. As the field has developed, e-learning content has evolved into learning objects through the addition of metadata. This metadata is then placed into an ontology, which is expected to make the material easier to distribute and to find. This paper describes the process of building such an ontology and its implementation for use in searching for a learning object. By using the ontology, search results are expected to be more precise and relevant.
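
    To make the intended search concrete, here is a minimal sketch in Python using rdflib, assuming a hypothetical http://example.org/lo# vocabulary for learning-object metadata (the property names are illustrative, not those defined in the paper); with the metadata in an ontology, the search becomes a structured SPARQL query rather than a keyword match:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    LO = Namespace("http://example.org/lo#")   # hypothetical learning-object vocabulary

    g = Graph()
    g.add((LO.intro_sql, RDF.type, LO.LearningObject))
    g.add((LO.intro_sql, LO.hasTitle, Literal("Introduction to SQL")))
    g.add((LO.intro_sql, LO.subject, Literal("databases")))
    g.add((LO.intro_sql, LO.audience, Literal("undergraduate")))

    # Find learning objects about a given subject.
    results = g.query(
        """
        SELECT ?lo ?title WHERE {
            ?lo a lo:LearningObject ;
                lo:subject "databases" ;
                lo:hasTitle ?title .
        }
        """,
        initNs={"lo": LO},
    )
    for row in results:
        print(row.lo, row.title)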

    Ontologizing Lexicon Access Functions based on an LMF-based Lexicon Taxonomy

    This paper discusses the ontologization of lexicon access functions in the context of a service-oriented language infrastructure such as the Language Grid. In such an infrastructure, an access function to a lexical resource, embodied as an atomic Web service, plays a crucially important role in composing a composite Web service tailored to a user's specific requirement. To facilitate the composition process, which involves service discovery, planning and invocation, the language infrastructure should be ontology-based; hence the ontologization of a range of lexicon functions is strongly required. In a service-oriented environment, however, lexical resources can be classified from a service-oriented perspective rather than according to a lexicographically motivated standard. Hence, to address the issue of interoperability, the taxonomy for lexical resources should be grounded in a principled and shared lexicon ontology. To do this, we have ontologized the standardized lexicon modeling framework LMF and utilized it as a foundation to stipulate the service-oriented lexicon taxonomy and the corresponding ontology for lexicon access functions. This paper also examines a possible solution for filling the gap between the ontological descriptions and the actual Web service API by adopting the W3C recommendation SAWSDL, with which Web service descriptions can be linked to the domain ontology.
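
    As a rough illustration of the discovery step that such an ontology enables, a short Python sketch follows; the registry, the service names and the concept identifiers are hypothetical stand-ins for atomic Web services annotated with LMF-grounded lexicon access function concepts:

    # Hypothetical registry of atomic lexicon-access services, each annotated with
    # the ontology concept (written here as a plain prefixed name) it implements.
    REGISTRY = {
        "DictServiceA":  {"concept": "lex:BilingualDictionaryLookup", "languages": {"en", "ja"}},
        "DictServiceB":  {"concept": "lex:BilingualDictionaryLookup", "languages": {"en", "de"}},
        "SynsetService": {"concept": "lex:SynsetLookup",              "languages": {"en"}},
    }

    def discover(concept, language):
        """Return services whose annotation matches the requested lexicon access
        function and language, as a composition planner would during discovery."""
        return [name for name, meta in REGISTRY.items()
                if meta["concept"] == concept and language in meta["languages"]]

    print(discover("lex:BilingualDictionaryLookup", "ja"))   # ['DictServiceA']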

    Desirable Ontologies for the Construction of Semantic Applications

    The goal of the semantic web is to provide the user with search results of the utmost accuracy and good precision. To make this possible, semantics are added to the existing information on the Web using semantic web languages. These languages are designed to express detailed information about the content present on the Web with the help of ontologies. An ontology is expressed in a knowledge representation language, which provides it with a formal semantics. We therefore give a brief explanation of semantic web languages, some of which use description logics and frames as their basis; these languages are used to construct and understand ontologies clearly. The goal of this paper is to provide a brief survey of state-of-the-art ontology languages used to express ontologies on the Web, a basic understanding of ontologies, and an account of how ontologies are constructed.
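
    As a small illustration of the description-logic basis mentioned above (a textbook-style axiom, not one taken from the paper), an ontology language such as OWL can state that every professor is a person who teaches at least one course, written in description-logic notation as:

        Professor \sqsubseteq Person \sqcap \exists teaches.Course

    A reasoner can then use such axioms to classify individuals and to detect inconsistencies automatically.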

    Toward an Epistemic Web

    In the beginning knowledge was local. With the development of more complex forms of economic organization knowledge began to travel. The Library of Alexandria was the fulfillment - however partial and transitory - of a vision to bring together all the knowledge of the world. But to obtain the knowledge one had to go to Alexandria. Today the World Wide Web promises to make universally accessible the knowledge of a world grown larger. To be sure, much work remains to be done: many documents need to be made available (i.e. digitized if they are not already, and freed from restrictive access controls); and various biases (economic, legal, linguistic, social, technological) need to be overcome. But what do we do with this knowledge? Is it enough to create a digital library of Alexandria, with (perhaps) improved finding aids? We propose that the crucial question is how to structure knowledge on the Web to facilitate the construction of new knowledge, knowledge that will be critical in addressing the challenges of the emerging global society. We begin by asking three questions about the Web and its future. In the remainder of the paper we explore the possibility of an Epistemic Web in the context of a more general discussion of knowledge representation technologies, technologies used for storing, manipulating and spreading knowledge

    Web services choreography testing using semantic service description

    Web services have become popular due to their ability to integrate and interoperate with heterogeneous applications. Several web services can be combined into a single application to meet the needs of users. During web service selection, a candidate web service needs to conform to the behaviour of its client, and one way of ensuring this conformity is to test the interaction between the web service and its user. Existing web service testing approaches mainly focus on syntax-based service descriptions, whilst semantic-based solutions mostly address composite process flow testing. The aim of this research is to provide an automated testing approach to support service selection during automatic web service composition using the Web Service Modeling Ontology (WSMO). The research began with understanding and analysing the existing test generation approaches for web services. Second, the weaknesses of the existing approaches were identified and addressed by utilizing the choreography transition rules of WSMO to generate a Finite State Machine (FSM), which was then used to generate working test cases. Third, a technique for generating an FSM from an Abstract State Machine (ASM) was adapted for use with WSMO. Finally, this thesis proposed a new testing model, Choreography to Finite State Machine (C2FSM), to support service selection in automatic web service composition, together with new algorithms that automatically generate test cases from the semantic description (the WSMO choreography description). The proposed approach was evaluated using the WSMO description of the Amazon E-Commerce Web Service. The quality of the generated test cases was measured by their mutation adequacy score: a total of 115 mutants were created from 7 mutant operators, and a mutation adequacy score of 0.713 was obtained. The experimental validation demonstrated a significant result in the sense that C2FSM provides an efficient and feasible solution. The results of this research could assist service consumer agents in verifying the behaviour of a Web service when selecting appropriate services for web service composition.
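
    As a rough, simplified illustration of the final step (deriving test sequences once an FSM has been obtained from the choreography transition rules), here is a Python sketch; the three-state machine, its messages and the bound on sequence length are invented and do not reproduce the C2FSM algorithm or the Amazon E-Commerce case study:

    from collections import deque

    # Hypothetical FSM extracted from choreography transition rules:
    # state -> list of (input message, next state)
    FSM = {
        "Start":     [("searchItems", "Searched")],
        "Searched":  [("addToCart", "CartReady"), ("searchItems", "Searched")],
        "CartReady": [("checkout", "Done")],
        "Done":      [],
    }

    def test_sequences(fsm, start, max_length=4):
        """Enumerate bounded-length input sequences (test cases) by breadth-first
        traversal of the FSM."""
        sequences = []
        queue = deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if path:
                sequences.append(path)
            if len(path) == max_length:
                continue
            for message, nxt in fsm[state]:
                queue.append((nxt, path + [message]))
        return sequences

    for seq in test_sequences(FSM, "Start"):
        print(" -> ".join(seq))

    Each generated sequence is then executed against mutated service descriptions; the mutation adequacy score reported above is, in the usual sense, the ratio of killed mutants to mutants generated.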

    OPEN DATA AND PRACTICAL USE OF RDF FORMAT

    The main challenges posed by open linked data are its structure and its usability for the development and progress of a modern digital society. Data forms the basis of all information about how organizations operate. Owing to advances in information technology, obtaining and using data is becoming ever easier, which enables greater economic progress and the development of applications, programs and the like. At the same time, new challenges arise in the areas of data formats, their conversion, the opening of data, and usability by the largest possible number of stakeholders. RDF (Resource Description Framework) is a semantic web format used as a general method for describing information: all data on the semantic web uses RDF as the primary language for its representation. The advantage of the RDF format is that it is modular and enables the representation of high-quality, reusable data. The purpose of opening data is to encourage and motivate organizations to reuse it. In the public sector, the opening of data is tied to the field of public sector information. RDF allows data to be linkable and linked, and linked data on the semantic web yields data of even higher quality and usefulness. In the field of open data, the importance of the legal bases must not be overlooked: they must be properly regulated both from the standpoint of transparency and openness and from the standpoint of personal data protection. When publishing data, care must be taken not to violate the provisions that prohibit the publication of data unsuitable for reuse. An examination and analysis of the Public Information Act (ZDIJZ) and the Personal Data Protection Act (ZVOP) allows the right activities to be carried out for the proper publication of data. Through a SWOT analysis and a comparative analysis of individual formats, we found that open data enables public and private organizations to function better, which in turn promotes economic development and scientific achievement. A worked example of the RDF format illustrates its usefulness and its advantages over the other formats.
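
    To make the point about the format concrete, a small self-contained Python/rdflib sketch follows; the namespace, the organization and the budget figure are invented, not data from the thesis. The same graph can be linked to existing vocabularies (FOAF here) and published in several RDF syntaxes:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/opendata/")   # hypothetical open-data namespace

    g = Graph()
    g.bind("ex", EX)
    g.bind("foaf", FOAF)

    org = URIRef(EX["ministry-of-finance"])
    g.add((org, RDF.type, FOAF.Organization))
    g.add((org, FOAF.name, Literal("Ministry of Finance")))
    g.add((org, EX.annualBudget, Literal(9500000)))

    # The same triples, serialized in two of the available syntaxes.
    print(g.serialize(format="turtle"))
    print(g.serialize(format="xml"))      # RDF/XML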

    Semantic-Based Access Control Mechanisms in Dynamic Environments

    The appearance of dynamic distributed networks in the early eighties of the last century has prompted the development of technologies like pervasive systems, ubiquitous computing, ambient intelligence and, more recently, the Internet of Things (IoT). Moreover, sensing capabilities embedded in computing devices offer users the ability to share, retrieve, and update resources on an anytime, anywhere basis. These resources (or data) constitute what is widely known as contextual information. In these systems, there is an association between a system and its environment, and the system should always adapt to its ever-changing environment. This situation makes Context-Based Access Control (CBAC) the method of choice for such environments. However, most traditional policy models do not address the dynamic nature of dynamic distributed systems and are limited in addressing issues like adaptability, extensibility, and reasoning over security policies. We propose a security framework for dynamic distributed network domains that is based on semantic technologies. This framework presents a flexible and adaptable context-based access control authorization model for protecting the resources of dynamic distributed networks. We extend our security model to incorporate context delegation in context-based access control environments. We show that the security mechanisms provided by the framework are sound and adhere to the least-privilege principle. We develop a prototype implementation of our framework and present results showing that it correctly derives context-based authorization decisions. Furthermore, we provide a complexity analysis of the authorization framework's response to requests and contrast this complexity against possible optimizations that can be applied to the framework. Finally, we incorporate semantic-based obligations into our security framework. In phase I of our research, we design two lightweight Web Ontology Language (OWL) ontologies, CTX-Lite and CBAC. The CTX-Lite ontology serves as a core ontology for context handling, while the CBAC ontology is used for modeling access control policy requirements. Based on the two OWL ontologies, we develop an access authorization approach in which the access decision is made solely on the basis of the context of the request. We separate context operations from access authorization operations to reduce processing time on distributed networks' devices. In phase II, we present two novel ontology-based context delegation approaches: monotonic context delegation, which adopts the GRANT version of delegation, and non-monotonic delegation for the TRANSFER version. Our goal is to present context delegation mechanisms that can be adopted by existing CBAC systems that do not provide delegation services. Phase III has two sub-phases: the first provides a complexity analysis of the authorization framework, and the second is dedicated to incorporating semantic-based obligations.
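
    The framework itself rests on the OWL ontologies (CTX-Lite and CBAC) and a semantic reasoner; purely as an illustration of what a context-based authorization decision amounts to, here is a plain-Python sketch with an invented context model and a single invented policy rule (deny by default, in line with least privilege):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Context:
        role: str
        location: str
        hour: int          # 0-23, time of the request

    @dataclass
    class Rule:
        action: str
        resource: str
        condition: Callable[[Context], bool]   # predicate over the request context

    # Invented policy: nurses may read patient records only on site, during day shift.
    POLICY = [
        Rule("read", "patient_record",
             lambda ctx: ctx.role == "nurse"
                         and ctx.location == "hospital"
                         and 7 <= ctx.hour < 19),
    ]

    def authorize(action, resource, ctx):
        """Grant access only if some rule for this action/resource is satisfied
        by the current context; otherwise deny."""
        return any(r.action == action and r.resource == resource and r.condition(ctx)
                   for r in POLICY)

    print(authorize("read", "patient_record", Context("nurse", "hospital", 10)))   # True
    print(authorize("read", "patient_record", Context("nurse", "home", 10)))       # False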