
    An Event-Ontology-Based Approach to Constructing Episodic Knowledge from Unstructured Text Documents

    Document summarization is an important function for knowledge management when a digital library of text documents grows. It allows documents to be presented in a concise manner for easy reading and understanding. Traditionally, document summarization adopts sentence-based mechanisms that identify and extract key sentences from long documents and assemble them together. Although that approach is useful in providing an abstract of documents, it cannot extract the relationships or sequences among sets of related events (also called episodes). This paper proposes an event-oriented ontology approach to constructing episodic knowledge to facilitate the understanding of documents. We also empirically evaluated the proposed approach using instruments developed based on Bloom's Taxonomy. The results reveal that the approach based on the proposed event-oriented ontology outperformed the traditional text summarization approach in capturing conceptual and procedural knowledge, but the latter was still better at delivering factual knowledge.
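The episode notion above (a sequence of related events) can be illustrated with a minimal sketch. The `Event` fields and the ordering-by-sentence-position heuristic are assumptions for illustration, not the paper's ontology:

```python
from dataclasses import dataclass

@dataclass
class Event:
    action: str
    subject: str
    position: int  # e.g. index of the sentence reporting the event

def build_episode(events, subject):
    """Collect the events involving one subject and order them,
    yielding a simple episode: a sequence of related events."""
    return sorted((e for e in events if e.subject == subject),
                  key=lambda e: e.position)
```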

    Biomedical ontology alignment: An approach based on representation learning

    While representation learning techniques have shown great promise when applied to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived through a novel phrase retrofitting strategy by which semantic similarity information becomes inscribed onto fields of pre-trained word vectors. The resulting framework also incorporates a novel outlier detection mechanism based on a denoising autoencoder, which is shown to improve performance. An ontology matching system derived using the proposed framework achieved an F-score of 94% on an alignment scenario involving the Adult Mouse Anatomical Dictionary and the Foundational Model of Anatomy ontology (FMA) as targets. This compares favorably with the best-performing systems in the Ontology Alignment Evaluation Initiative anatomy challenge. We performed additional experiments on aligning FMA to the NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results.
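A minimal sketch of the core idea: matching terms by similarity of their embeddings. The vectors, threshold, and greedy selection below are illustrative assumptions, not the paper's retrofitting pipeline:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_terms(source_vecs, target_vecs, threshold=0.8):
    """For each source term, pick the target term with the highest
    embedding similarity above a threshold (greedy, illustrative)."""
    alignment = []
    for s_term, s_vec in source_vecs.items():
        best, best_sim = None, threshold
        for t_term, t_vec in target_vecs.items():
            sim = cosine(s_vec, t_vec)
            if sim > best_sim:
                best, best_sim = t_term, sim
        if best is not None:
            alignment.append((s_term, best, round(best_sim, 3)))
    return alignment
```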

    Representing Imprecise Time Intervals in OWL 2

    Representing and reasoning on imprecise temporal information is a common requirement in the Semantic Web. Many works exist to represent and reason on precise temporal information in OWL; however, to the best of our knowledge, none of these works is devoted to imprecise time intervals. To address this problem, we propose two approaches: a crisp-based approach and a fuzzy-based approach. (1) The first approach uses only crisp standards and tools and is modelled in OWL 2. We extend the 4D-fluents model with new crisp components to represent imprecise time intervals and qualitative crisp interval relations. We then extend Allen's interval algebra to compare imprecise time intervals in a crisp way, and inferences are drawn via a set of SWRL rules. (2) The second approach is based on fuzzy set theory and fuzzy tools and is modelled in Fuzzy-OWL 2. The 4D-fluents approach is extended with new fuzzy components to represent imprecise time intervals and qualitative fuzzy interval relations. Allen's interval algebra is extended to compare imprecise time intervals in a fuzzy, gradual, personalized way. Inferences are drawn via a set of Mamdani IF-THEN rules.
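The crisp approach (1) can be sketched by bounding each imprecise endpoint with a crisp range and deriving certain/possible variants of Allen's "before". The range representation and relation names are illustrative assumptions, not the paper's OWL 2 model:

```python
from dataclasses import dataclass

@dataclass
class ImpreciseInterval:
    # each imprecise endpoint is bounded by a crisp [lo, hi] range
    start_lo: float
    start_hi: float
    end_lo: float
    end_hi: float

def certainly_before(i, j):
    """I is before J under every admissible choice of endpoints."""
    return i.end_hi < j.start_lo

def possibly_before(i, j):
    """I is before J under at least one admissible choice of endpoints."""
    return i.end_lo < j.start_hi
```

In the crisp approach such relations would be encoded as SWRL rules over the extended 4D-fluents model; the fuzzy approach would instead return a degree in [0, 1].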

    Investigating business process elements: a journey from the field of Business Process Management to ontological analysis, and back

    Business process modelling languages (BPMLs) typically enable the representation of business processes via the creation of process models, which are constructed using the elements and graphical symbols of the BPML itself. Despite the wide literature on business process modelling languages, on the comparison between graphical components of different languages, on the development and enrichment of new and existing notations, and despite the numerous definitions of what a business process is, the BPM community still lacks a robust (ontological) characterisation of the elements involved in business process models and, even more importantly, of the very notion of business process. While some efforts have been made in this direction, the majority of works in this area focus only on the analysis of the behavioural (control flow) aspects of process models, thus neglecting other central modelling elements, such as those denoting process participants (e.g., data objects, actors), relationships among activities, goals, values, and so on. The overall purpose of this PhD thesis is to provide a systematic study of the elements that constitute a business process, based on ontological analysis, and to apply these results back to the Business Process Management field.
The major contributions achieved in pursuing our overall purpose are: (i) the first comprehensive and systematic investigation of what constitutes a business process meta-model in the literature, and a definition of what we call a literature-based business process meta-model, derived from the different business process meta-models proposed in the literature; (ii) an ontological analysis of four business process elements (event, participant, relationship among activities, and goal), which were identified as missing or problematic in the literature and in the literature-based meta-model; (iii) a revision of the literature-based business process meta-model that incorporates the analysis of the four investigated elements; and (iv) the definition and evaluation of a notation that enriches the relationships between activities by including the notions of occurrence dependencies and rationales.

    Recognizing Textual Entailment Using Description Logic And Semantic Relatedness

    Textual entailment (TE) is a relation that holds between two pieces of text when a person reading the first piece can conclude that the second is most likely true. Accurate approaches to textual entailment can benefit various natural language processing (NLP) applications such as question answering, information extraction, summarization, and even machine translation. For this reason, research on textual entailment has attracted a significant amount of attention in recent years. A robust logic-based meaning representation of text is very hard to build, so the majority of textual entailment approaches rely on syntactic methods or shallow semantic alternatives. In addition, approaches that do use a logic-based meaning representation require a large knowledge base of axioms and inference rules that is rarely available. The goal of this thesis is to design an efficient description-logic-based approach to recognizing textual entailment that uses semantic relatedness information as an alternative to a large knowledge base of axioms and inference rules. We propose a description logic and semantic relatedness approach to textual entailment in which the types of semantic relatedness axioms employed in aligning the description logic representations are used as indicators of textual entailment. In our approach, the text and the hypothesis are first represented in description logic. The representations are enriched with additional semantic knowledge acquired by using the web as a corpus. The hypothesis is then merged into the text representation by learning semantic relatedness axioms on demand, and a reasoner reasons over the aligned representation. Finally, the types of axioms employed by the reasoner are used to learn whether the text entails the hypothesis.
To validate our approach, we implemented an RTE system named AORTE and evaluated it on the fourth Recognizing Textual Entailment challenge. Our approach achieved an accuracy of 68.8 on the two-way task and 61.6 on the three-way task, which ranked it 2nd among the participating runs in the challenge. These results show that our description-logic-based approach can effectively be used to recognize textual entailment.
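The alignment idea, hypothesis concepts matched to text concepts via semantic-relatedness axioms, can be sketched over toy triples. The `RELATED` lexicon and the triple representation are hypothetical stand-ins for the web-derived axioms and the description logic reasoner used by AORTE:

```python
# toy relatedness axioms; in the thesis these are acquired from the web
RELATED = {("purchase", "buy"), ("car", "automobile")}

def related(a, b):
    """Two terms align if identical or linked by a relatedness axiom."""
    return a == b or (a, b) in RELATED or (b, a) in RELATED

def entails(text_triples, hyp_triples):
    """The text entails the hypothesis if every hypothesis triple
    aligns component-wise with some text triple."""
    return all(
        any(all(related(h, t) for h, t in zip(ht, tt)) for tt in text_triples)
        for ht in hyp_triples
    )
```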

    Hybrid fuzzy multi-objective particle swarm optimization for taxonomy extraction

    Ontology learning refers to the automatic extraction of an ontology to produce the ontology learning layer cake, which consists of five kinds of output: terms, concepts, taxonomy relations, non-taxonomy relations and axioms. Term extraction, the automatic mining of complete terms from the input document, is a prerequisite for all aspects of ontology learning. Another important part of an ontology is the taxonomy, or hierarchy of concepts: it presents a tree view of the ontology and shows the inheritance between subconcepts and superconcepts. In this research, two methods were proposed for improving the performance of the extraction results. The first method uses particle swarm optimization to optimize the weights of features. The advantage of particle swarm optimization is that it can calculate and adjust the weight of each feature to an appropriate value, and here it is used to improve the performance of term and taxonomy extraction. The second method uses a hybrid technique combining multi-objective particle swarm optimization and fuzzy systems, which ensures that the membership functions and fuzzy rule sets are optimized. The advantage of using a fuzzy system is that imprecise and uncertain feature weight values can be tolerated during the extraction process. This method is used to improve the performance of taxonomy extraction. In the term extraction experiment, five features were extracted for each term from the document, represented by feature vectors consisting of domain relevance, domain consensus, term cohesion, first occurrence and length of noun phrase. For taxonomy extraction, matches of Hearst lexico-syntactic patterns in documents and on the web, and hypernym information from WordNet, were used as the features representing each pair of terms from the texts. The two proposed methods were evaluated using a dataset of documents about tourism.
For term extraction, the proposed method was compared with benchmark algorithms such as Term Frequency Inverse Document Frequency, Weirdness, Glossary Extraction and Term Extractor, using precision as the performance measure. For taxonomy extraction, the proposed methods were compared with the benchmark Feature-based method and weighting by Support Vector Machine, using the f-measure, precision and recall performance measures. For the first method, the experimental results showed that implementing particle swarm optimization to optimize the feature weights in term and taxonomy extraction improves the accuracy of the extraction results compared to the benchmark algorithms. For the second method, the results showed that the hybrid technique combining multi-objective particle swarm optimization and fuzzy systems improves the taxonomy extraction results compared to the benchmark methods, while adjusting the fuzzy membership functions and keeping the number of fuzzy rules to a minimum with a high degree of accuracy.
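The first method's idea, tuning feature weights with particle swarm optimization, can be sketched with a minimal single-objective PSO. The swarm parameters and the toy fitness function in the usage note are assumptions, not the thesis's configuration:

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=0.0, hi=1.0):
    """Minimal particle swarm optimizer: each particle is a candidate
    weight vector in [lo, hi]^dim; the swarm maximizes `fitness`."""
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In term extraction, `fitness` would score how well a candidate weight vector ranks known domain terms; any objective over the weight vector works.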

    Spatial and temporal resolution of sensor observations

    Observation is a core concept of geoinformatics. Observations are used to monitor, model, and simulate phenomena such as climate change, mass movements (e.g., slope movements), and demographic change. Resolution is a central property of observations. Using observations of differing resolution leads to (potentially) different decisions, because the resolution of the observations influences which structures are recognized during the data-analysis phase. The main contribution of this work is a theory of the spatial and temporal resolution of observations that is applicable both to technical sensors (e.g., cameras) and to human sensors. The consistency of the theory was evaluated using the language Haskell, and its practical applicability was illustrated using observations from the web portal Flickr.

    Local matching learning of large scale biomedical ontologies

    Large biomedical ontologies generally describe the same domain of interest, but they use different modelling conventions and different vocabularies. Aligning these complex, heterogeneous ontologies is a tedious task, and matching systems must deliver high-quality results while coping with the large size of these resources. Ontology matching systems therefore have to solve two problems: (i) coping with the large size of the ontologies, and (ii) automating the alignment process. The main difficulties in aligning large biomedical ontologies are conceptual heterogeneity, the very large search space, and the reduced quality of the resulting alignments. Ontology alignment systems combine different matchers in order to reduce heterogeneity; this combination must define both which matchers to combine and their weights. Since different matchers handle different types of heterogeneity, the tuning of a matcher should be automated by the alignment system in order to obtain good matching quality. We propose an approach called 'local matching learning' to address both the large size of ontologies and the automation problem. We divide a large alignment problem into a set of smaller local alignment problems, and each local alignment problem is aligned independently by a machine learning approach. In this way, the huge search space is reduced to a set of smaller local matching tasks.
Each local matching task can then be aligned efficiently to obtain better matching quality. Our partitioning approach relies on a novel multi-cut strategy that generates partitions which are neither oversized nor isolated, which lets us overcome the problem of conceptual heterogeneity. The new partitioning algorithm is based on hierarchical agglomerative clustering and generates a set of local matching tasks with a sufficient coverage rate and no isolated partitions. Each local matching task is aligned automatically using machine learning techniques: a local classifier aligns a single local matching task, and local classifiers are based on element-level and structure-level features. The class attribute of each training set is labelled automatically using an external knowledge base, and we applied feature selection to each local classifier in order to select the appropriate matchers for each local matching task. This approach reduces the alignment complexity and increases overall precision compared to traditional learning methods. We showed that the partitioning approach outperforms current approaches in terms of precision, coverage rate, and absence of isolated partitions, and we evaluated the local matching learning approach in various experiments based on the OAEI 2018 datasets. We concluded that it is beneficial to divide a large ontology alignment task into a set of local alignment tasks: the search space is reduced, which reduces the number of false negatives and false positives.
Applying feature selection to each local classifier also increases the recall for each local matching task. Although a considerable body of research work has addressed the problem of ontology matching, few studies have tackled the large ontologies used in the biomedical domain. We introduce a fully automated local matching learning approach that breaks down a large ontology matching task into a set of independent local sub-matching tasks. This approach integrates a novel partitioning algorithm as well as a set of matching learning techniques. The partitioning method is based on hierarchical clustering and does not generate isolated partitions. The matching learning approach employs different techniques: (i) local matching tasks are independently and automatically aligned using their local classifiers, which are based on local training sets built from element-level and structure-level features, (ii) resampling techniques are used to balance each local training set, and (iii) feature selection techniques are used to automatically select the appropriate tuning parameters for each local matching context. Our local matching learning approach generates a set of combined alignments from each local matching task, and experiments show that a multiple local classifier approach outperforms conventional state-of-the-art approaches, which use a single classifier for the whole ontology matching task. In addition, focusing on context-aware local training sets based on local feature selection and resampling techniques significantly enhances the obtained results.
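A toy sketch of the partition-then-match idea: terms are grouped (here by head token, a crude stand-in for the hierarchical agglomerative clustering described above), and candidate pairs are generated only within corresponding partitions, shrinking the search space from |S|·|T| to the sum of partition products. The grouping key and the exact-string matcher are illustrative assumptions:

```python
from collections import defaultdict

def partition_terms(terms):
    """Group terms by their head (last) token -- a crude stand-in for
    the hierarchical-clustering partitioner."""
    parts = defaultdict(list)
    for t in terms:
        parts[t.split()[-1].lower()].append(t)
    return parts

def local_match(src_terms, tgt_terms):
    """Generate candidate matches only within corresponding partitions."""
    src_p, tgt_p = partition_terms(src_terms), partition_terms(tgt_terms)
    pairs = []
    for key in src_p.keys() & tgt_p.keys():
        for s in src_p[key]:
            for t in tgt_p[key]:
                if s.lower() == t.lower():  # trivial local "classifier"
                    pairs.append((s, t))
    return pairs
```

In the thesis, the trivial within-partition comparison is replaced by a trained local classifier with its own feature selection and resampling.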

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and the high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis defines a one-to-one mapping between high-level conceptualisations established from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies its proposed ideas by implementing an ontology describing the land use 'residential area' in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts, as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes.
Despite current technological limitations, ontologies are a promising way forward in how we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
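The inference step, deriving the high-level concept 'residential area' from low-level topographic features, can be caricatured as a single rule. The feature schema and the dwelling threshold below are assumptions; in the thesis the real work is done by OWL class definitions and a reasoner in Protégé:

```python
def classify_block(features, min_dwellings=5):
    """Infer land use for an urban block from its topographic features:
    enough dwelling buildings make it residential (assumed rule)."""
    dwellings = [f for f in features
                 if f.get("type") == "building" and f.get("use") == "dwelling"]
    return "residential area" if len(dwellings) >= min_dwellings else "unclassified"
```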

    Image retrieval using automatic region tagging

    The task of tagging, annotating or labelling image content automatically with semantic keywords is a challenging problem. Automatically tagging images semantically based on the objects that they contain is essential for image retrieval. In addressing these problems, we explore techniques that combine textual descriptions of images with visual features, automatic region tagging, and region-based ontology image retrieval. To evaluate the techniques, we use three corpora: Lonely Planet travel guide articles with images, Wikipedia articles with images, and Goats comic strips. In searching for similar images or textual information specified in a query, we explore the unification of textual descriptions and visual features (such as colour and texture) of the images. We compare the effectiveness of different retrieval similarity measures for the textual component and analyse the effectiveness of different visual features extracted from the images. We then investigate the best weight combination of textual and visual features. Using the queries from the Multimedia Track of INEX 2005 and 2006, we found that the best weight combination significantly improves the effectiveness of the retrieval system. Our findings suggest that image regions are better at capturing the semantics, since we can identify specific regions of interest in an image. In this context, we develop a technique to tag image regions with high-level semantics. This is done by combining several shape feature descriptors and colour, using an equal-weight linear combination. We experimentally compare this technique with more complex machine-learning algorithms, and show that the equal-weight linear combination of shape features is simpler and at least as effective as using a machine learning algorithm. We focus on the synergy between ontology and image annotations with the aim of reducing the gap between image features and high-level semantics.
Ontologies ease information retrieval: they are used to mine, interpret, and organise knowledge. An ontology may be seen as a knowledge base that can be used to improve the image retrieval process, and conversely, keywords obtained from automatic tagging of image regions may be useful for creating an ontology. We engineer an ontology that surrogates concepts derived from image feature descriptors. We test the usability of the constructed ontology by querying it via the Visual Ontology Query Interface, which has a formally specified grammar known as the Visual Ontology Query Language. We show that synergy between ontology and image annotations is possible, and that this method can reduce the gap between image features and high-level semantics by providing the relationships between objects in the image. In this thesis, we conclude that suitable techniques for image retrieval include fusing the text accompanying images with visual features, automatic region tagging, and using an ontology to enrich the semantic meaning of the tagged image regions.
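The equal-weight linear combination used for region tagging can be sketched as follows; the inverse-distance similarity and the feature/prototype layout are illustrative assumptions, not the thesis's exact descriptors:

```python
import math

def similarity(u, v):
    """Similarity as inverse Euclidean distance (illustrative choice)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return 1.0 / (1.0 + d)

def tag_region(region, prototypes):
    """Tag a region with the label whose feature prototypes maximize
    the equal-weight linear combination of per-feature similarities."""
    def score(protos):
        sims = [similarity(region[f], protos[f]) for f in region]
        return sum(sims) / len(sims)  # equal weights for all features
    return max(prototypes, key=lambda label: score(prototypes[label]))
```

Each feature (e.g. a shape descriptor or a colour histogram) contributes equally; the abstract's finding is that this simple fusion rivals heavier machine-learning alternatives.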