6 research outputs found

    Enterprise knowledge management: introducing new technologies in traditional Information Systems

    Knowledge management systems described in research papers are rarely seen implemented in business realities, at least on a large scale. Companies are often tied to existing systems and cannot, or will not, revolutionize the situation to accommodate completely new solutions. Given this assumption, this work investigates several small-scale modifications that could be applied to in-place Information Systems so as to improve them with new technologies without major transformations and service discontinuities. The focus is interoperability, with particular stress on the promotion of the ebXML registry standard. A universal interface for document management was defined, and the conforming “interoperable” DMSs were arranged within an architecture explicitly designed for ebXML-compliant access. This allowed standards-based manipulation of legacy DM systems. The closely related topic of semantic knowledge management was also tackled: we developed an integration of semantic tools with traditional repositories that has low architectural impact. Finally, we discussed a novel issue in document categorization, and a new kind of ontology that could be used in that context.

    Enabling Ontology-Based Document Classification and Management in ebXML Registries

    Document Management Systems (DMSs) are a key component of modern enterprises. For successful document search and retrieval, an adequate metadata set should be defined in order to describe documents in sufficient detail. However, a single metadata set is often not sufficient throughout the whole DMS, as different document types require different attributes to be properly characterized. In this paper, we introduce ontologies as a modeling technology for structured metadata definition within DMSs. Focusing on the ebXML registry standard, we show an approach to enhance DMSs for semantic content management, and we then propose a method to exploit this new capability for automated document characterization.

    Reasoning-Supported Quality Assurance for Knowledge Bases

    The increasing application of ontology reuse and automated knowledge acquisition tools in ontology engineering brings about a shift of development effort from knowledge modeling towards quality assurance. Despite its high practical importance, there has been a substantial lack of support for ensuring semantic accuracy and conciseness. In this thesis, we make a significant step forward in ontology engineering by developing support for two such essential quality-assurance activities.

    User-centric knowledge extraction and maintenance

    An ontology is a machine-readable knowledge collection. Since an abundance of information is available only for human consumption, large general-knowledge ontologies are typically generated by tapping into this source with imperfect automatic extraction approaches that translate human-readable text into machine-readable semantic knowledge. This thesis provides methods for user-driven ontology generation and maintenance. In particular, this work consists of three main contributions: 1. An interactive, human-supported extraction tool, LUKe: the system extends an automatic extraction framework to integrate human feedback on extraction decisions and extracted information at multiple levels. 2. A document retrieval approach based on semantic statements, S3K: one application is the retrieval of documents that support an extracted statement, to verify the correctness of that piece of information; another application, in combination with an extraction system, is fact-based indexing of a document corpus, allowing statement-based document retrieval. 3. A method for similarity-based ontology navigation, QBEES: the approach enables search by example, i.e., given a set of semantic entities, it provides the most similar entities with respect to their semantic properties, considering different aspects. All three components are integrated into a modular architecture that also provides an interface for third-party components.
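    The search-by-example idea behind QBEES can be illustrated with a deliberately simplified sketch: entities are described by sets of (predicate, value) facts, and candidates are ranked by how many facts they share with everything in the example set. The function name, the scoring rule, and the toy facts below are hypothetical illustrations, not the actual QBEES aspect model.

    ```python
    def rank_by_shared_properties(examples, candidates, properties):
        """Toy search-by-example ranking (simplified stand-in for QBEES).

        properties maps each entity to a set of (predicate, value) facts.
        Score = number of facts common to ALL examples that the candidate
        also has; ties are broken alphabetically.
        """
        # Facts shared by the whole example set define the "aspect" to match
        common = set.intersection(*(properties[e] for e in examples))
        scored = [(len(common & properties[c]), c)
                  for c in candidates if c not in examples]
        return [c for score, c in sorted(scored, key=lambda t: (-t[0], t[1]))]

    # Hypothetical toy knowledge base
    props = {
        "Einstein": {("field", "physics"), ("prize", "nobel")},
        "Bohr":     {("field", "physics"), ("prize", "nobel")},
        "Feynman":  {("field", "physics"), ("prize", "nobel")},
        "Curie":    {("field", "chemistry"), ("prize", "nobel")},
        "Turing":   {("field", "computer science")},
    }
    ranking = rank_by_shared_properties(["Einstein", "Bohr"], list(props), props)
    # Feynman shares both common facts, Curie one, Turing none
    ```

    The real system considers multiple "aspects" of similarity rather than a single intersection, but the intuition — similarity measured over shared semantic properties of the example set — is the same.
    
    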

    Bioinspired metaheuristic algorithms for global optimization

    This paper presents a concise comparative study of newly developed bioinspired algorithms for global optimization problems. Three metaheuristic techniques, namely Accelerated Particle Swarm Optimization (APSO), the Firefly Algorithm (FA), and the Grey Wolf Optimizer (GWO), are investigated and implemented in the Matlab environment. These methods are compared on four unimodal and multimodal nonlinear functions in order to find global optimum values. Computational results indicate that GWO outperforms the other intelligent techniques, and that all of the aforementioned algorithms can successfully be used for the optimization of continuous functions.
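    To make the comparison concrete, here is a minimal sketch of the Grey Wolf Optimizer in Python (the paper uses Matlab). It follows the standard GWO scheme — each wolf moves toward the three best solutions (alpha, beta, delta) under a control parameter `a` that decays from 2 to 0 — but the function signature and parameter defaults below are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def gwo(f, dim, bounds, n_wolves=20, iters=200, seed=0):
        """Minimal Grey Wolf Optimizer: minimize f over the box [lo, hi]^dim."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        X = rng.uniform(lo, hi, (n_wolves, dim))          # initial pack positions
        for t in range(iters):
            fitness = np.array([f(x) for x in X])
            order = np.argsort(fitness)
            alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
            a = 2.0 * (1 - t / iters)                     # decays 2 -> 0
            X_new = np.zeros_like(X)
            for leader in (alpha, beta, delta):
                r1 = rng.random((n_wolves, dim))
                r2 = rng.random((n_wolves, dim))
                A = 2 * a * r1 - a                        # exploration/exploitation
                C = 2 * r2
                X_new += leader - A * np.abs(C * leader - X)
            X = np.clip(X_new / 3.0, lo, hi)              # average of the 3 moves
        fitness = np.array([f(x) for x in X])
        best = int(np.argmin(fitness))
        return X[best], float(fitness[best])

    # Usage on the sphere function (a standard unimodal benchmark, minimum 0 at the origin)
    x_best, f_best = gwo(lambda x: float(np.sum(x**2)), dim=5, bounds=(-5.0, 5.0))
    ```

    On unimodal benchmarks like the sphere function the pack contracts quickly onto the leaders as `a` shrinks, which matches the strong convergence the paper reports for GWO.
    
    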

    Experimental Evaluation of Growing and Pruning Hyper Basis Function Neural Networks Trained with Extended Information Filter

    In this paper we test the Extended Information Filter (EIF) for sequential training of Hyper Basis Function neural networks with growing and pruning ability (HBF-GP). The HBF neuron allows different scaling of the input dimensions, providing better generalization when dealing with complex nonlinear problems in engineering practice. The main intuition behind HBF is a generalization of the Gaussian type of neuron that applies a Mahalanobis-like distance as the metric between an input training sample and the prototype vector. We exploit the concept of a neuron's significance and allow growing and pruning of HBF neurons during the sequential learning process. From an engineer's perspective, EIF is attractive for training neural networks because it allows a designer to have scarce initial knowledge of the system/problem. An extensive experimental study shows that an HBF neural network trained with EIF achieves the same prediction error and compactness of network topology as with EKF, but without the need to know the initial state uncertainty, which is its main advantage over EKF.
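    The HBF neuron described above can be sketched in a few lines: it is a Gaussian of a Mahalanobis-like squared distance, where an inverse scaling matrix weights each input dimension differently. The function name and the example numbers below are illustrative; the paper's actual networks additionally grow, prune, and train these units with EIF.

    ```python
    import numpy as np

    def hbf_activation(x, mu, S_inv):
        """Hyper Basis Function neuron activation (sketch).

        x     : input sample
        mu    : prototype (center) vector
        S_inv : inverse scaling matrix; a diagonal S_inv scales each input
                dimension independently, generalizing the isotropic Gaussian RBF
        """
        d = x - mu
        dist2 = float(d @ S_inv @ d)      # Mahalanobis-like squared distance
        return float(np.exp(-dist2))

    # A diagonal S_inv stretches/shrinks dimensions independently:
    x = np.array([1.0, 2.0])
    mu = np.array([0.0, 0.0])
    S_inv = np.diag([0.5, 0.1])           # second dimension weighted less
    act = hbf_activation(x, mu, S_inv)    # exp(-(0.5*1 + 0.1*4)) = exp(-0.9)
    ```

    With `S_inv` equal to a multiple of the identity this reduces to the ordinary Gaussian RBF neuron, which is exactly the generalization the abstract refers to.
    
    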