1,296 research outputs found

    Initiating organizational memories using ontology network analysis

    Get PDF
    One of the important problems in organizational memories is their initial set-up. It is difficult to choose the right information to include in an organizational memory, and the right information is also a prerequisite for maximizing the uptake and relevance of the memory content. To tackle this problem, most developers adopt heavy-weight solutions and rely on faithful, continuous interaction with users to create and improve the memory's content. In this paper, we explore an automatic, light-weight solution drawn from one of the underlying ingredients of an organizational memory: ontologies. We have developed an ontology-based network analysis method, which we applied to the problem of identifying communities of practice in an organization. We use ontology-based network analysis as a means of automatically providing content for the initial set-up of an organizational memory.
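The core idea above can be sketched in a few lines: annotate people with ontology concepts, link those who share concepts, and read connected components of the resulting network as candidate communities of practice. This is a minimal illustration, not the authors' method; all names and concepts below are invented.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical ontology annotations per person (invented for illustration).
annotations = {
    "ada":  {"OntologyEngineering", "KnowledgeBase"},
    "ben":  {"KnowledgeBase", "ExpertSystem"},
    "cara": {"NetworkAnalysis"},
    "dan":  {"NetworkAnalysis", "GraphTheory"},
}

# Link two people whenever they share at least one ontology concept.
graph = defaultdict(set)
for a, b in combinations(annotations, 2):
    if annotations[a] & annotations[b]:
        graph[a].add(b)
        graph[b].add(a)

def communities(people, graph):
    """Connected components of the shared-concept graph, read as
    candidate communities of practice."""
    seen, result = set(), []
    for start in people:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        result.append(comp)
    return result

print(communities(list(annotations), graph))
# -> [{'ada', 'ben'}, {'cara', 'dan'}]
```

A real system would derive the annotations automatically from documents and use a weighted community-detection algorithm rather than plain connected components.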

    A Role of Semantic Web and Ontology in Information Retrieval

    Get PDF
    Web mining is an application of data mining that focuses on discovering relevant data from Web content. The Semantic Web describes the Web as data rather than documents: it characterizes information in a manner understandable to both humans and computers. It was developed with the help of ontology, which is the pillar of the Semantic Web. The Semantic Web depends on the integration and use of semantic data, and semantic data in turn depend on ontology. An ontology can provide a common vocabulary and a grammar for publishing data, and can supply semantic data that can be used to preserve ontologies and keep them ready for inference. This also enables personalized filtering mechanisms that let users consume relevant, interesting information from web sites. Combining web mining with the Semantic Web to retrieve relevant data is called semantic web mining. This paper gives an overview of semantic web mining and its applications.
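The difference between keyword search and semantic retrieval can be made concrete with subject-predicate-object triples, the basic data unit of the Semantic Web. The following sketch is illustrative only; the vocabulary and facts are invented, and a real system would use an RDF store and SPARQL rather than Python lists.

```python
# Semantic data as subject-predicate-object triples (invented examples).
triples = [
    ("PageA", "hasTopic", "MachineLearning"),
    ("PageB", "hasTopic", "MachineLearning"),
    ("PageC", "hasTopic", "Cooking"),
    ("MachineLearning", "subTopicOf", "ComputerScience"),
]

def match(pattern, store):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Topic-level filtering: every page about MachineLearning, regardless of
# the words each page happens to contain.
ml_pages = [s for s, _, _ in match((None, "hasTopic", "MachineLearning"), triples)]
print(ml_pages)  # -> ['PageA', 'PageB']
```

Because topics are explicit data rather than free text, queries can also follow links such as `subTopicOf` to retrieve pages about broader or narrower subjects.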

    Eliciting Expertise

    No full text
    Since the last edition of this book there have been rapid developments in the use and exploitation of formally elicited knowledge. Previously (Shadbolt and Burton, 1995), the emphasis was on eliciting knowledge for the purpose of building expert or knowledge-based systems. These systems are computer programs intended to solve real-world problems, achieving the same level of accuracy as human experts. Knowledge engineering is the discipline that has evolved to support the whole process of specifying, developing and deploying knowledge-based systems (Schreiber et al., 2000). This chapter discusses the problem of knowledge elicitation for knowledge-intensive systems in general.

    Towards a system redesign for better performance and customer satisfaction : a case study of the ICTS helpdesk at the University of Cape Town

    Get PDF
    This paper presents the findings from a study carried out to investigate how the design of knowledge management systems could be improved for enhanced performance and greater customer satisfaction. The ICTS Department's helpdesk at the University of Cape Town, South Africa, was the venue for this case study. The study set out to meet the following objectives: undertaking a knowledge acquisition strategy by carrying out a systems evaluation and analysis of the existing web-based user support system; suggesting a knowledge representation model for an adaptive web-based user support system; and developing and testing an online troubleshooter prototype for an improved knowledge use support system. To achieve these objectives, knowledge engineering techniques were deployed on top of a qualitative research design. Questionnaires, supplemented by interview guides and observations, were the research tools used to gather the data. In addition, a representative sample of the ICTS clientele and management was interviewed. It was discovered that poorly designed knowledge management systems cause frustration among the clientele who interact with them. Specifically, it was found that the language used for knowledge representation plays a vital role in determining how well users can interpret knowledge items in a given knowledge domain. In other words, knowledge modelling can improve knowledge representation if knowledge engineering techniques are appropriately followed in designing knowledge-based systems. It was concluded that knowledge representation can be improved significantly if, firstly, the ontology technique is embraced as a mechanism of knowledge representation; secondly, hierarchies and taxonomies are used to improve navigability in the knowledge structure; and thirdly, visual knowledge representation supplements textual knowledge, which adds meaning for the user and can even cater for novice users.
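The navigability point can be illustrated with a small taxonomy: once knowledge items sit in a hierarchy, the system can show users the breadcrumb path to any item. This is a minimal sketch; the helpdesk categories below are invented, not taken from the UCT system.

```python
# Hypothetical helpdesk taxonomy (category names are invented).
taxonomy = {
    "ICT support": ["Accounts", "Network"],
    "Accounts": ["Password reset", "New account"],
    "Network": ["Wi-Fi", "VPN"],
}

def path_to(topic, tree, root="ICT support"):
    """Depth-first search returning the navigation path root -> topic,
    i.e. the breadcrumb a hierarchical knowledge base can display."""
    if root == topic:
        return [root]
    for child in tree.get(root, []):
        sub = path_to(topic, tree, child)
        if sub:
            return [root] + sub
    return None

print(path_to("VPN", taxonomy))  # -> ['ICT support', 'Network', 'VPN']
```

With an ontology, the same structure can additionally carry typed relations (e.g. "requires", "resolves"), which supports troubleshooting paths rather than just browsing.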

    Generic Architecture for Predictive Computational Modelling with Application to Financial Data Analysis: Integration of Semantic Approach and Machine Learning

    Get PDF
    The PhD thesis introduces a Generic Architecture for Predictive Computational Modelling (PCM) capable of automating analytical conclusions regarding quantitative data structured as a data frame. The model involves heterogeneous data mining based on a semantic approach, graph-based methods (ontology, knowledge graphs, graph databases) and advanced machine learning methods. The main focus of my research is data pre-processing aimed at a more efficient selection of input features for the computational model. Since the model I propose is generic, it can be applied to data mining of any quantitative dataset (containing two-dimensional, size-mutable, heterogeneous tabular data); however, it is best suited to highly interconnected data. To adapt this generic model to a specific use case, an ontology serving as the formal conceptual representation of the relevant domain knowledge is needed. I chose financial/market data for my use cases. In the course of practical experiments, the effectiveness of applying the PCM model to UK companies' financial risk analysis and FTSE100 market index forecasting was evaluated. The tests confirmed that the PCM model produces more accurate outcomes than stand-alone traditional machine learning methods. By critically evaluating this architecture, I demonstrated its validity and suggested directions for future research.

    Knowledge modelling of emerging technologies for sustainable building development

    Get PDF
    In the quest for improved building performance and mitigation of climate change, governments are encouraging the use of innovative sustainable building technologies. Consequently, there is now a large amount of information and knowledge on sustainable building technologies on the web. However, internet searches often overwhelm practitioners with millions of pages that they must browse to identify suitable innovations for their projects. It has been widely acknowledged that the solution to this problem is the use of a machine-understandable language with rich semantics: semantic web technology. This research investigates the extent to which semantic web technologies can be exploited to represent knowledge about sustainable building technologies, and to facilitate system decision-making in recommending appropriate choices for use in different situations. To achieve this aim, an exploratory study of sustainable building and semantic web technologies was conducted. This led to the use of the two most popular knowledge engineering methodologies, CommonKADS and "Ontology Development 101", in modelling knowledge about the sustainable building technology and PV-system domains. A prototype system, the Photo Voltaic Technology ONtology System (PV-TONS), employing the sustainable building technology and PV-system domain knowledge models was developed and validated with a case study. While the sustainable building technology ontology and PV-TONS can both be used as generic knowledge models, PV-TONS is extended to include applications for the design and selection of PV-systems and components. Although its focus was on PV-systems, the application of semantic web technologies can be extended to cover other areas of sustainable building technologies. The major challenges encountered in this study are two-fold. First, many semantic web technologies are still under development and very unstable, which hinders their full exploitation. Second, the lack of learning resources in this field steepens the learning curve and is a potential setback in using semantic web technologies.

    LearnFCA: A Fuzzy FCA and Probability Based Approach for Learning and Classification

    Get PDF
    Formal concept analysis (FCA) is a mathematical theory based on lattice and order theory, used for data analysis and knowledge representation. Over the past several years, many of its extensions have been proposed and applied in several domains, including data mining, machine learning, knowledge management, the semantic web, software development, chemistry, biology, medicine, data analytics and ontology engineering. This thesis reviews the state of the art of the theory of formal concept analysis and its various extensions that have been developed and well studied in the past several years. We discuss their historical roots and reproduce the original definitions and derivations with illustrative examples. Further, we provide a literature review of its applications and the various approaches adopted by researchers in the areas of data analysis and knowledge management, with emphasis on data-learning and classification problems. We propose LearnFCA, a novel approach based on FuzzyFCA and probability theory for learning and classification problems. LearnFCA uses an enhanced version of FuzzyLattice, which has been developed to store class labels and probability vectors and has the capability to classify instances with encoded and unlabelled features. We evaluate LearnFCA on encodings from three datasets - MNIST, Omniglot and cancer images - with interesting results and varying degrees of success. Adviser: Dr Jitender Deogu
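The classical (crisp) core of FCA that LearnFCA builds on can be shown on a toy example: a formal concept is a pair (extent, intent) where the extent is exactly the set of objects sharing the intent, and vice versa. The brute-force enumeration below is a didactic sketch over an invented context, not the thesis's fuzzy or probabilistic machinery.

```python
from itertools import combinations

# Toy formal context: objects and their attributes (invented).
context = {
    "duck":  {"flies", "swims"},
    "eagle": {"flies", "hunts"},
    "shark": {"swims", "hunts"},
}
attributes = set().union(*context.values())

def extent(intent_attrs):
    """Objects possessing every attribute in the intent."""
    return {g for g, attrs in context.items() if intent_attrs <= attrs}

def intent(objs):
    """Attributes shared by every object in the extent."""
    shared = set(attributes)
    for g in objs:
        shared &= context[g]
    return shared

# Enumerate closed pairs: (A, B) is a formal concept iff
# intent(A) == B and extent(B) == A.
concepts = set()
for r in range(len(context) + 1):
    for objs in combinations(context, r):
        A = frozenset(objs)
        B = frozenset(intent(A))
        if frozenset(extent(B)) == A:
            concepts.add((A, B))

for A, B in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(A), sorted(B))
```

For this context the concept lattice has 8 concepts, from the bottom (no objects, all attributes) to the top (all objects, no shared attribute). Fuzzy FCA replaces the binary membership test with degrees in [0, 1], and LearnFCA additionally attaches class labels and probability vectors to lattice nodes.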

    Ontological Engineering: What are Ontologies and How Can We Build Them?

    Get PDF
    Ontologies are formal, explicit specifications of shared conceptualizations. There is much literature on what they are, how they can be engineered and where they can be used inside applications. All this literature can be grouped under the term "Ontological Engineering," which is defined as the set of activities that concern the ontology development process, the ontology lifecycle, the principles, methods and methodologies for building ontologies, and the tool suites and languages that support them. In this chapter we provide an overview of Ontological Engineering, describing the current trends, issues and problems.

    Combining Inclusion and Individually Adaptive Learning in an Educational Game for Preschool Children

    Get PDF
    Digital educational games have been around for a long time and have been shown to be pedagogically valuable. Unfortunately, most games do not utilize technology to the extent possible; not least, this applies to mathematical educational games for younger children. This work aims to combine several educational scientific approaches using current technologies, approaches which have traditionally been very difficult or not economically viable in the non-digital context. In particular, we focus on combining inclusion with adaptive learning, while simultaneously using the beneficial properties that learning by teaching offers and seeking additional synergies to improve mathematical learning in preschool children. No studies have yet been carried out with this system, but it has opened up several potential studies and offers a means to carry out cost-effective studies, including cross-cultural studies.

