    Semantic spaces

    Any natural language can be considered a tool for producing large databases (consisting of texts, written or discursive). This tool in turn requires other large databases (dictionaries, grammars, etc.) for its description. Nowadays, the notion of a database is associated with computer processing and computer memory. However, a natural language also resides in human brains and functions in human communication, from interpersonal to intergenerational. In this survey/research paper we discuss mathematical, in particular geometric, constructions that help to bridge these two worlds. Specifically, we consider the Vector Space Model of semantics based on frequency matrices, as used in Natural Language Processing. We investigate the underlying geometries, formulated in terms of Grassmannians, projective spaces, and flag varieties. We formulate the relation between vector space models and semantic spaces based on semic axes in terms of projectability of subvarieties in Grassmannians and projective spaces. We interpret Latent Semantics as a geometric flow on Grassmannians. We also discuss how to formulate G\"ardenfors' notion of "meeting of minds" in our geometric setting. Comment: 32 pages, TeX, 1 eps figure
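    The Vector Space Model and Latent Semantics described in the abstract can be illustrated with a small sketch. The toy corpus below is an illustrative assumption, not data from the paper; the rank-k SVD truncation is the standard Latent Semantic Analysis construction, which the paper reinterprets geometrically as selecting a point on a Grassmannian.

    ```python
    import numpy as np

    # Toy corpus (illustrative only).
    docs = [
        "the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs are animals",
    ]

    # Build the term-document frequency matrix A.
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            A[index[w], j] += 1

    # Latent semantics: truncate the SVD to rank k, projecting documents
    # onto a k-dimensional subspace of the term space.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per document

    # Cosine similarity between documents in the latent space.
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    sim_01 = cos(doc_vecs[0], doc_vecs[1])
    ```

    Varying k traces out the family of subspaces that the paper views as a flow on the Grassmannian of k-planes in term space.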

    Normalization And Matching Of Chemical Compound Names

    We have developed ChemHits (http://sabio.h-its.org/chemHits/), an application which detects and matches synonymous names of chemical compounds. The tool is based on natural language processing (NLP) methods and applies rules to systematically normalize chemical compound names. Subsequently, matching of synonymous names is achieved by comparison of the normalized name forms. The tool is capable of normalizing a given name of a chemical compound and matching it against names in (bio-)chemical databases, like SABIO-RK, PubChem, ChEBI or KEGG, even when there is no exact name-to-name match.
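    The normalize-then-compare strategy described above can be sketched as follows. The normalization rules here (Unicode folding, lowercasing, stripping whitespace and punctuation) are illustrative assumptions, not ChemHits' actual rule set, and the database entries are made up.

    ```python
    import re
    import unicodedata

    def normalize(name: str) -> str:
        # Fold Unicode variants, lowercase, and drop whitespace,
        # hyphens, commas, and brackets (assumed rules).
        s = unicodedata.normalize("NFKD", name).lower()
        return re.sub(r"[\s\-,()\[\]]+", "", s)

    def match(name: str, database: list[str]) -> list[str]:
        # Two names match if their normalized forms are identical,
        # even when the surface strings differ.
        target = normalize(name)
        return [entry for entry in database if normalize(entry) == target]

    db = ["Beta-D-Glucose", "acetic acid", "L-alanine"]
    hits = match("beta D glucose", db)  # -> ["Beta-D-Glucose"]
    ```

    Comparing normalized forms rather than raw strings is what lets the lookup succeed without an exact name-to-name match.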

    An XML-based Tool for Tracking English Inclusions in German Text

    The use of lexicons and corpora advances both linguistic research and the performance of current natural language processing (NLP) systems. We present a tool that exploits such resources, specifically English and German lexical databases and the World Wide Web, to recognise English inclusions in German newspaper articles. The output of the tool can assist lexical resource developers in monitoring changing patterns of English inclusion usage. The corpus used for the classification covers three different domains. We report the classification results and illustrate their value to linguistic and NLP research.
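    The lexicon-lookup core of such a classifier can be sketched in a few lines. The mini-lexicons below are hypothetical stand-ins for the full English and German lexical databases (and web-frequency checks) the abstract mentions, and the tagging rule is a deliberate simplification.

    ```python
    # Hypothetical mini-lexicons (the real tool uses full lexical
    # databases plus World Wide Web evidence).
    english_lexicon = {"computer", "software", "update", "meeting"}
    german_lexicon = {"der", "die", "das", "und", "ist", "ein", "da"}

    def tag_inclusions(tokens):
        # Tag a token as an English inclusion ("EN") if it appears in
        # the English lexicon but not the German one; default to "DE".
        tags = []
        for tok in tokens:
            low = tok.lower()
            if low in english_lexicon and low not in german_lexicon:
                tags.append((tok, "EN"))
            else:
                tags.append((tok, "DE"))
        return tags

    tags = tag_inclusions("das neue Software Update ist da".split())
    ```

    The default-to-German fallback means unknown tokens are never flagged, which keeps precision high at the cost of recall; the actual tool resolves such cases with additional resources.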

    Editorial

    This first issue of CIT. Journal of Computing and Information Technology conveys five papers from the regular section, which address topics in computer networks, relational databases, knowledge discovery in customer relationship management, and natural language processing.