    Semantic Representation and Composition for Unknown Compounds in E-HowNet

    PACLIC 20 / Wuhan, China / 1-3 November, 2006

    A Step toward Compositional Semantics: E-HowNet a Lexical Semantic Representation System

    PACLIC 23 / City University of Hong Kong / 3-5 December 2009

    A Study of Chinese Named Entity and Relation Identification in a Specific Domain

    This thesis investigates the automatic identification of Chinese named entities (NEs) and their relations (NERs) in a specific domain. We propose a three-stage pipeline model covering error correction of word segmentation and POS tagging, NE recognition, and NER identification. In the first stage, an error repair module based on machine learning techniques is developed. In the second stage, a new algorithm is designed that automatically constructs Finite State Cascades (FSC) from given sets of rules; as a supplement, a recognition strategy that does not rely on NE trigger words handles special linguistic phenomena. In the third stage, a novel approach, positive and negative case-based learning and identification (PNCBL&I), is implemented. It improves identification performance for NERs by simultaneously learning from positive and negative cases and automatically selecting effective multi-level linguistic features for NERs and non-NERs. Two further strategies, resolving relation conflicts and inferring missing relations, are also integrated into the identification procedure.
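
    The finite-state cascade idea can be illustrated with a minimal sketch: each stage compiles a handful of rules into regular expressions and rewrites its matches as bracketed constituents that later stages can build on. The rules, tags, and example sentence below are hypothetical illustrations under that assumption, not the rule sets constructed in the thesis.

        import re

        # Minimal illustration of a finite-state cascade (FSC) for NE tagging:
        # each stage applies its rules in order, and later stages may match over
        # the bracketed output of earlier ones. Rules and tags are hypothetical.
        STAGES = [
            # Stage 1: organization names anchored on trigger words.
            [(r"(\w+\s+(?:University|Institute|Corp)\b)", "ORG")],
            # Stage 2: person names anchored on a title.
            [(r"(Prof\.\s+\w+)", "PER")],
        ]

        def run_cascade(text):
            """Apply each stage's rules in sequence, bracketing recognized NEs."""
            for rules in STAGES:
                for pattern, tag in rules:
                    text = re.sub(pattern, rf"[{tag} \1]", text)
            return text

        print(run_cascade("Prof. Li of Tsinghua University visited ACME Corp"))
        # -> [PER Prof. Li] of [ORG Tsinghua University] visited [ORG ACME Corp]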

    Cross-language Ontology Learning: Incorporating and Exploiting Cross-language Data in the Ontology Learning Process

    Hans Hjelm. Cross-language Ontology Learning: Incorporating and Exploiting Cross-language Data in the Ontology Learning Process. NEALT Monograph Series, Vol. 1 (2009), 159 pages. © 2009 Hans Hjelm. Published by the Northern European Association for Language Technology (NEALT), http://omilia.uio.no/nealt. Electronically published at Tartu University Library (Estonia), http://hdl.handle.net/10062/10126

    Theory of Spatial Similarity Relations and Its Applications in Automated Map Generalization

    Automated map generalization is a necessary technique for constructing the multi-scale vector map databases that are crucial components of the spatial data infrastructures of cities, provinces, and countries. Nevertheless, it remains out of reach, because many algorithms for map feature generalization are not parameter-free and therefore need human interference. One major reason is that map generalization is a process of spatial similarity transformation in multi-scale map spaces, yet no theory exists to support such a transformation. This thesis focuses on the theory of spatial similarity relations in multi-scale map spaces, aiming to propose approaches and models that can be used to automate some relevant algorithms in map generalization. After a systematic review of existing achievements, including the definitions and features of similarity in various communities, a classification system of spatial similarity relations, the calculation models of similarity relations in psychology, computer science, music, and geography, and a number of raster-based approaches for calculating similarity degrees between images, the thesis makes the following innovative contributions. First, the fundamental issues of spatial similarity relations are explored: (1) a classification system is proposed that groups the objects processed by map generalization algorithms into ten categories; (2) Set Theory-based definitions of similarity, spatial similarity, and spatial similarity relations in multi-scale map spaces are given; (3) the features of spatial similarity relations in multi-scale map spaces are described in mathematical language; (4) the factors that affect human judgments of spatial similarity relations are identified, and their weights are obtained by psychological experiments; and (5) a classification system for spatial similarity relations in multi-scale map spaces is proposed. Second, models that can calculate spatial similarity degrees for the ten types of objects in multi-scale map spaces are proposed, and their validity is tested by psychological experiments. Given a map (or an individual object, or an object group) and its generalized counterpart, the models can calculate the spatial similarity degrees between them. Third, the proposed models are used to solve problems in map generalization: (1) ten formulae are constructed that calculate spatial similarity degrees from map scale changes in map generalization; (2) an approach based on spatial similarity degree is proposed that can determine when a map generalization system or algorithm should terminate while generalizing map objects, which may fully automate some relevant algorithms and therefore improve the efficiency of map generalization; and (3) an approach is proposed to calculate the distance tolerance of the Douglas-Peucker algorithm so that it may become fully automatic (see the sketch following this abstract). Nevertheless, the theory and approaches proposed in this study have two limitations that need further exploration:
    • More experiments should be done to improve the accuracy and adaptability of the proposed models and formulae. The new experiments should select more typical maps and map objects as samples and recruit more subjects with different cultural backgrounds.
    • Whether the ten models/formulae for calculating spatial similarity degrees can be integrated into a single model/formula needs further investigation. In addition, it is important to identify other algorithms that, like the Douglas-Peucker algorithm, are not parameter-free and are closely related to spatial similarity relations, and to explore approaches to calculating the parameters used in these algorithms with the help of the models and formulae proposed in this thesis.
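
    The thesis derives the Douglas-Peucker distance tolerance from spatial similarity degrees; that derivation is not reproduced here. The following is only a minimal sketch of the classic Douglas-Peucker algorithm itself, to make the role of the tolerance parameter concrete; the coordinates and tolerance value are made up.

        import math

        def perpendicular_distance(p, a, b):
            """Distance from point p to the infinite line through a and b."""
            (x, y), (x1, y1), (x2, y2) = p, a, b
            dx, dy = x2 - x1, y2 - y1
            if dx == 0 and dy == 0:
                return math.hypot(x - x1, y - y1)
            return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

        def douglas_peucker(points, tolerance):
            """Classic recursive Douglas-Peucker line simplification."""
            if len(points) < 3:
                return list(points)
            first, last = points[0], points[-1]
            dmax, index = max(
                (perpendicular_distance(p, first, last), i)
                for i, p in enumerate(points[1:-1], start=1)
            )
            if dmax <= tolerance:      # all vertices within tolerance: drop them
                return [first, last]
            left = douglas_peucker(points[:index + 1], tolerance)
            right = douglas_peucker(points[index:], tolerance)
            return left[:-1] + right   # avoid duplicating the split point

        line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (6, 7.9), (8, 9)]
        print(douglas_peucker(line, tolerance=1.0))

    The whole point of the thesis's third contribution is that the tolerance argument above would no longer be hand-picked but computed from the similarity degree implied by the scale change.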

    Chinese elements: a bridge of the integration between Chinese-English translation and linguaculture transnational mobility

    Given the popularity of Chinese elements in the innovation of the translation section of the Chinese CET, we realized that Chinese elements have become a bridge between linguaculture transnational mobility and Chinese-English translation. Chinese students' translation skills should therefore be critically improved, in particular their understanding of Chinese culture and its meaning. Five important secrets of skillful translation are introduced to improve students' translation skills.

    Development of computational methods for biological complexity

    The cell is a complex system. In this system, the different layers of biological information establish complex links that converge in the space of functions; processes and pathways talk to each other, defining cell types and organs. In the space of biological functions, this leads to a higher order of "emergence", greater than the sum of the single parts, which defines a biological entity as a complex system. The introduction of omic techniques has made it possible to investigate the complexity of each biological layer. With the different technologies we can obtain a near-complete readout of the different biomolecules. However, it is only through data integration that we can let biological complexity emerge and understand it. Given the complexity of the problem, we are far from having fully understood it or having developed exhaustive computational methods, which makes it urgent to explore biological complexity with more powerful tools relying on new data and hypotheses. To this aim, Bioinformatics and Computational Biology play determinant roles. The present thesis describes computational methods aimed at deciphering biological complexity starting from genomic, interactomic, metabolomic and functional data. The first part describes NET-GE, a network-based gene enrichment tool that extracts the biological functions and processes of a set of genes/proteins related to a phenotype; NET-GE exploits the information stored in biological networks to better define the biological events occurring at the gene/protein level. The first part also describes eDGAR, a database collecting and organizing gene-disease associations. The second part deals with metabolomics: I describe a new way to perform metabolite enrichment analysis, in which the metabolome is explored by exploiting the features of an interactome. The third part describes the methods and results obtained in the CAGI experiment, a community experiment aimed at assessing computational methods for predicting the impact of genomic variation on a phenotype.
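
    The general intuition behind network-based enrichment can be sketched in a few lines: expand the input gene set with its direct interactors in the network, then score each annotation term by a hypergeometric test on the overlap. This is only an illustration of the idea, not NET-GE's actual algorithm; the toy network, the term, the gene names, and the universe size are all assumptions.

        from math import comb

        def hypergeom_pvalue(k, K, n, N):
            """Upper-tail hypergeometric P(X >= k): probability of an overlap of
            at least k between a query set of size n and a term of size K,
            drawn from a universe of N genes."""
            return sum(
                comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
            ) / comb(N, n)

        def expand_with_neighbors(genes, network):
            """Network-based expansion: add the direct interactors of each gene."""
            expanded = set(genes)
            for g in genes:
                expanded.update(network.get(g, ()))
            return expanded

        # Toy interactome and annotation term (hypothetical data).
        network = {"TP53": {"MDM2", "ATM"}, "BRCA1": {"BARD1"}}
        terms = {"DNA repair": {"TP53", "ATM", "BRCA1", "BARD1", "XRCC1"}}
        N = 20000  # assumed size of the gene universe

        query = expand_with_neighbors({"TP53", "BRCA1"}, network)
        for term, members in terms.items():
            k = len(query & members)
            p = hypergeom_pvalue(k, len(members), len(query), N)
            print(term, "overlap:", k, "p = %.2e" % p)

    The same two-step pattern (network expansion, then set-overlap statistics) also carries over to the metabolite enrichment described in the second part, with metabolites in place of genes.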

    Lexical database enrichment through semi-automated morphological analysis

    Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required; WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern the association and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy (a sketch follows this abstract). Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and to overgeneration, minimised by rule reformulation and by restricting monosyllabic output. The rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Where multiple rules apply to an input suffix, their precedence must be established. The resistance of prefixations to segmentation has been addressed by identifying linking-vowel exceptions and irregular prefixes. An automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with the morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and by reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than in the wordnet component of the model, because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
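
    The role of character substitutions in circumventing the segmentation fallacy can be made concrete with a small sketch: each suffixation rule specifies both the suffix to strip and the substring to restore at the stem boundary, and a candidate stem is accepted only if it passes the lexical validity requirement. The rules, relation labels, and tiny lexicon below are hypothetical, not the rule set developed in the thesis.

        # Pure segmentation fails on e.g. "happiness" -> "happi"; a rule that
        # also restores "y" at the boundary recovers the valid stem "happy".
        LEXICON = {"happy", "dark", "derive", "create"}

        # (suffix, substring to restore at the stem boundary, relation label)
        RULES = [
            ("iness", "y", "NOUN_OF_ADJ"),   # happiness  -> happy
            ("ness",  "",  "NOUN_OF_ADJ"),   # darkness   -> dark
            ("ation", "e", "NOUN_OF_VERB"),  # derivation -> derive
        ]

        def analyse(word):
            """Return (stem, relation) for the first rule yielding a valid stem."""
            for suffix, restore, relation in RULES:
                if word.endswith(suffix):
                    stem = word[: -len(suffix)] + restore
                    if stem in LEXICON:      # lexical validity requirement
                        return stem, relation
            return None

        for w in ("happiness", "darkness", "derivation", "witness"):
            print(w, "->", analyse(w))

    Note that with a full lexicon, "witness" -> "wit" would validate spuriously; that is exactly the kind of overgeneration the thesis's stoplists and rule reformulations are designed to catch.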
