
    Ontology mapping: the state of the art

    Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web; multiple ontologies need to be accessed from several applications. Mapping could provide a common layer through which several ontologies could be accessed and hence exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
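    The survey discusses formal definitions of ontology mapping; as a rough illustration only, a mapping is often modelled as a set of correspondences of the form (source entity, target entity, relation, confidence). The sketch below assumes that common formulation; the entity names and values are invented for illustration and are not taken from the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """One mapping element between entities of two ontologies (illustrative model)."""
    source_entity: str   # e.g. URI of a concept in ontology O1
    target_entity: str   # e.g. URI of a concept in ontology O2
    relation: str        # "equivalence", "subsumption", "disjointness", ...
    confidence: float    # degree of trust in the correspondence, in [0, 1]

# A mapping (alignment) between two ontologies is then a set of such correspondences.
mapping = {
    Correspondence("o1:Author", "o2:Writer", "equivalence", 0.92),
    Correspondence("o1:Paper", "o2:Publication", "subsumption", 0.75),
}

for c in sorted(mapping, key=lambda c: -c.confidence):
    print(f"{c.source_entity} --{c.relation}--> {c.target_entity} ({c.confidence:.2f})")
```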

    Weak signal identification with semantic web mining

    We investigate an automated identification of weak signals according to Ansoff to improve strategic planning and technological forecasting. The literature shows that weak signals can be found in the organization's environment and that they appear in different contexts. We use internet information to represent the organization's environment, and we select those websites that are related to a given hypothesis. In contrast to related research, a methodology is provided that uses latent semantic indexing (LSI) for the identification of weak signals. This improves existing knowledge-based approaches because LSI considers aspects of meaning and is thus able to identify similar textual patterns in different contexts. A new weak signal maximization approach is introduced that replaces the prediction modeling approach commonly used in LSI. It enables the calculation of the largest number of relevant weak signals represented by singular value decomposition (SVD) dimensions. A case study identifies and analyses weak signals to predict trends in the field of on-site medical oxygen production. This supports the planning of research and development (R&D) for a medical oxygen supplier. As a result, it is shown that the proposed methodology enables organizations to identify weak signals from the internet for a given hypothesis. This helps strategic planners to react ahead of time.
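    The methodology builds on latent semantic indexing, i.e. a singular value decomposition of a term-document matrix. The following minimal sketch shows LSI on a tiny invented corpus using scikit-learn; the documents, the number of dimensions and the term inspection are illustrative only and do not reproduce the paper's weak signal maximization step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Invented example corpus loosely themed on the paper's case study.
docs = [
    "portable oxygen concentrator for home care",
    "on-site medical oxygen production at hospitals",
    "membrane technology for gas separation",
    "pressure swing adsorption for oxygen generation",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                 # term-document matrix

lsi = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsi.fit_transform(X)             # documents projected into the latent (SVD) space

# Terms with high loadings on a latent dimension hint at a shared context;
# in the paper's setting such SVD dimensions are screened for weak signals.
terms = tfidf.get_feature_names_out()
for k, component in enumerate(lsi.components_):
    top = component.argsort()[::-1][:3]
    print(f"dimension {k}:", [terms[i] for i in top])
```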

    Automated schema matching techniques: an exploratory study

    Manual schema matching is a problem for many database applications that use multiple data sources, including data warehousing and e-commerce applications. Current research attempts to address this problem by developing algorithms to automate aspects of the schema-matching task. In this paper, an approach using an external dictionary facilitates automated discovery of the semantic meaning of database schema terms. An experimental study was conducted to evaluate the performance and accuracy of five schema-matching techniques with the proposed approach, called SemMA. The proposed approach and results are compared with two existing semi-automated schema-matching approaches, and suggestions for future research are made.
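    As a hedged illustration of dictionary-assisted matching (not the SemMA technique itself), the sketch below treats two schema terms as candidate matches if they are identical after normalisation or if one appears in the other's synonym set drawn from an external dictionary. The dictionary entries and schema terms are invented.

```python
# Toy synonym dictionary standing in for an external lexical resource.
SYNONYMS = {
    "customer": {"client", "buyer"},
    "zip": {"postcode", "postal_code"},
    "phone": {"telephone", "tel"},
}

def normalise(term: str) -> str:
    """Lowercase and unify separators so surface variants compare equal."""
    return term.strip().lower().replace("-", "_")

def are_synonyms(a: str, b: str) -> bool:
    a, b = normalise(a), normalise(b)
    if a == b:
        return True
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def match_schemas(source: list[str], target: list[str]) -> list[tuple[str, str]]:
    """Return all (source term, target term) pairs judged semantically equivalent."""
    return [(s, t) for s in source for t in target if are_synonyms(s, t)]

print(match_schemas(["Customer", "Zip", "Email"], ["client", "postal_code", "e_mail"]))
# -> [('Customer', 'client'), ('Zip', 'postal_code')]
```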

    Cloud service localisation

    The essence of cloud computing is the provision of software and hardware services to a range of users in different locations. The aim of cloud service localisation is to facilitate the internationalisation and localisation of cloud services by allowing their adaptation to different locales. We address lingual localisation by providing service-level language translation techniques to adapt services to different languages, and regulatory localisation by providing standards-based mappings to achieve regulatory compliance with regionally varying laws, standards and regulations. The aim is to support and enforce the explicit modelling of aspects particularly relevant to localisation, together with runtime support consisting of tools and middleware services to automate deployment based on models of locales, driven by the two localisation dimensions. We focus here on an ontology-based conceptual information model that integrates locale specification in a coherent way.
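    The paper's information model is ontology-based; the plain-Python sketch below only mirrors its two localisation dimensions (lingual and regulatory) in a simple locale specification and a toy deployment decision. The field names, regulation labels and URLs are hypothetical, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class LocaleSpec:
    """Illustrative locale model covering the lingual and regulatory dimensions."""
    region: str                                            # e.g. "DE", "IE"
    language: str                                          # lingual localisation target, e.g. "de"
    regulations: list[str] = field(default_factory=list)   # applicable laws/standards

def select_endpoint(locale: LocaleSpec) -> str:
    """Toy locale-driven deployment choice (hypothetical service URLs)."""
    base = f"https://service.example.com/{locale.language}"
    if "GDPR" in locale.regulations:
        base += "?data_residency=eu"   # regulatory localisation constrains where data lives
    return base

print(select_endpoint(LocaleSpec(region="DE", language="de", regulations=["GDPR"])))
```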

    25 years development of knowledge graph theory: the results and the challenge

    The project on knowledge graph theory began in 1982. At the initial stage, the goal was to use graphs to represent knowledge in the form of an expert system. By the end of the 1980s, expert systems in medical and social science had been developed successfully using knowledge graph theory. In the following stage, the goal of the project was broadened to representing natural language by knowledge graphs. Since then, the theory can be considered one of the methods for natural language processing. At present, knowledge graph representation has been proven to be a language-independent method; the theory can be applied to represent almost any characteristic feature in various languages.
    The objective of the paper is to summarize the results of 25 years of development of knowledge graph theory and to point out some challenges to be dealt with in the next stage of its development. The paper also highlights the differences between this theory and others, such as the theory of conceptual graphs developed and presented by Sowa in 1984, formal concept analysis by Wille, and semantic networks.

    Intelligent agent simulator in massive crowd

    Crowd simulations have many benefits over real-life experiments, for example in computer games, architecture and entertainment. One of the key elements in this study is to include decision-making in the crowd. The aim of the simulator is to simulate the features of an intelligent agent escaping from crowded environments, in particular a one-way corridor, a two-way corridor and a four-way intersection. The addition of a graphical user interface enables intuitive and fast handling of all settings and features of the Intelligent Agent Simulator and allows convenient research into intelligent behaviour in massive crowds. This paper describes the development of the simulator using the Open Graphics Library (OpenGL), from the production of training data, through the simulation process, to the simulation results. The Social Force Model (SFM) is used to generate the motion of agents, and a Support Vector Machine (SVM) is used to predict the next step of the intelligent agent.
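    The Social Force Model drives agent motion with a relaxation toward a desired velocity plus repulsion from nearby agents. The sketch below is a minimal Helbing-style formulation of that force and one Euler integration step; the parameter values and the scenario are illustrative and not taken from the paper, and the SVM-based next-step prediction is omitted.

```python
import numpy as np

TAU = 0.5        # relaxation time [s] (illustrative)
A, B = 2.0, 0.3  # repulsion strength and range (illustrative)
RADIUS = 0.4     # combined body radius of two agents [m] (illustrative)

def social_force(pos, vel, goal, desired_speed, neighbours):
    """Driving force toward the goal plus exponential repulsion from neighbours."""
    direction = (goal - pos) / np.linalg.norm(goal - pos)
    driving = (desired_speed * direction - vel) / TAU
    repulsion = np.zeros(2)
    for other in neighbours:
        diff = pos - other
        dist = np.linalg.norm(diff)
        repulsion += A * np.exp((RADIUS - dist) / B) * diff / dist
    return driving + repulsion

# One Euler step for a single agent heading down a corridor.
pos, vel = np.array([0.0, 0.0]), np.array([0.8, 0.0])
force = social_force(pos, vel, goal=np.array([10.0, 0.0]),
                     desired_speed=1.3, neighbours=[np.array([1.0, 0.2])])
vel = vel + 0.1 * force   # dt = 0.1 s
pos = pos + 0.1 * vel
print(pos, vel)
```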

    HIGGINS: where knowledge acquisition meets the crowds

    We present HIGGINS, an engine for high-quality Knowledge Acquisition (KA), placing special emphasis on its architecture. The distinguishing characteristic and novelty of HIGGINS lies in its special blending of two engines: an automated Information Extraction (IE) engine, aided by semantic resources, and a game-based Human Computing (HC) engine. We focus on KA from web data and text sources and, in particular, on deriving relationships between entities. As a running application we use movie narratives, from which we wish to derive relationships among movie characters.
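    The abstract describes blending the automated IE engine with the game-based HC engine. The sketch below shows one plausible way such evidence could be merged for a single candidate relationship (a weighted combination of the extractor's confidence and player votes); the weighting scheme, function name and numbers are invented here and are not HIGGINS's actual architecture.

```python
def combined_score(ie_confidence: float, crowd_yes: int, crowd_no: int,
                   ie_weight: float = 0.4) -> float:
    """Blend automated-extraction confidence with crowd agreement (illustrative only)."""
    votes = crowd_yes + crowd_no
    crowd_confidence = crowd_yes / votes if votes else 0.5  # no votes -> undecided
    return ie_weight * ie_confidence + (1 - ie_weight) * crowd_confidence

# Example: the IE engine is fairly confident, and 7 of 9 players agreed in the game
# that character A "mentors" character B.
print(combined_score(ie_confidence=0.7, crowd_yes=7, crowd_no=2))
```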