5 research outputs found

    Visualising Textual Knowledge about Risks to Aid Risk Communication

    This paper demonstrates a potential application of latent semantic analysis and similar techniques in visualising the differences between two levels of knowledge about a risk issue. The HIV/AIDS risk issue will be examined, and the semantic clusters of key words in a technical corpus derived from specialist literature about HIV/AIDS will be compared with the semantic clusters of the same words in a more general corpus. It is hoped that these comparisons will provide a fast and efficient complementary approach to the articulation of mental models of risk issues, one that could be used to target possible inconsistencies between expert and lay mental models.
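
    A minimal sketch of the kind of comparison the abstract describes, assuming scikit-learn: fit an LSA space on each corpus and inspect the nearest semantic neighbours of a key term in both. The corpus file names, the query term "transmission", and the dimensionality are hypothetical placeholders, not details from the paper.

```python
# Hedged sketch: LSA neighbours of a key term in a technical vs. a general
# corpus. File names and the query term are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsa_neighbours(docs, term, k_dims=100, top_n=5):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(docs)                          # docs x terms
    svd = TruncatedSVD(n_components=min(k_dims, min(X.shape) - 1))
    svd.fit(X)
    term_vecs = svd.components_.T                        # terms x dims
    if term not in vec.vocabulary_:
        return []
    v = term_vecs[vec.vocabulary_[term]]
    denom = np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(v)
    sims = term_vecs @ v / np.where(denom == 0, 1, denom)
    words = vec.get_feature_names_out()
    ranked = np.argsort(-sims)
    return [(words[i], round(float(sims[i]), 3))
            for i in ranked if words[i] != term][:top_n]

# Hypothetical corpora, one document per paragraph.
technical = open("hiv_technical_corpus.txt").read().split("\n\n")
general = open("general_corpus.txt").read().split("\n\n")
print("technical:", lsa_neighbours(technical, "transmission"))
print("general:  ", lsa_neighbours(general, "transmission"))
```

    Running the same query against both corpora and comparing the two neighbour lists is the textual analogue of the cluster comparison the abstract proposes to visualise.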

    Hierarchical Structure in Semantic Networks of Japanese Word Associations

    PACLIC 21 / Seoul National University, Seoul, Korea / November 1-3, 2007

    A Framework Based on Semantic Spaces and Glyphs for Social Sensing on Twitter

    In this paper we present a framework aimed at detecting emotions and sentiments in a Twitter stream. The approach uses the well-founded Latent Semantic Analysis technique, which can be seen as a bio-inspired cognitive architecture, to induce a semantic space where tweets are mapped and analysed by soft sensors. The measurements of the soft sensors are then used by a visualisation module which exploits glyphs to present them graphically. The result is an interactive map which makes it easy to explore reactions and opinions across the globe regarding tweets retrieved from specific queries.
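
    An illustrative sketch of one way such a pipeline could work, not the authors' implementation: tweets are projected into an LSA space, and a "soft sensor" for an emotion is read off as the cosine similarity between a tweet and the projection of a bag of seed words. The seed lists, the example stream, and the dimensionality are invented for illustration.

```python
# Illustrative sketch (not the paper's actual soft sensors): an LSA space over
# a tweet stream, with one sensor per emotion computed as cosine similarity to
# a projected bag of seed words. Seeds and the example stream are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

EMOTION_SEEDS = {"joy": ["happy", "great", "love"],
                 "anger": ["awful", "hate", "angry"]}

def build_space(tweets, dims=50):
    vec = TfidfVectorizer()
    X = vec.fit_transform(tweets)
    svd = TruncatedSVD(n_components=min(dims, min(X.shape) - 1)).fit(X)
    return vec, svd

def project(vec, svd, text):
    return svd.transform(vec.transform([text]))[0]

def soft_sensor(vec, svd, tweet, seeds):
    t, c = project(vec, svd, tweet), project(vec, svd, " ".join(seeds))
    denom = np.linalg.norm(t) * np.linalg.norm(c)
    return float(t @ c / denom) if denom else 0.0

stream = ["so happy with this great news",
          "this is awful, I hate waiting",
          "love the new update"]
vec, svd = build_space(stream)
for tw in stream:
    print(tw, {e: round(soft_sensor(vec, svd, tw, s), 2)
               for e, s in EMOTION_SEEDS.items()})
```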

    Measuring the Semantic Specificity in Mandarin Verbs: A Corpus-based Quantitative Survey

    The purpose of this thesis is to study semantic specificity in Chinese using corpus-based statistical and computational methods. The analysis begins with single verbs and performs preliminary tests on resultative verb compounds in Chinese. The verbs studied in this work are one hundred and fifty head verbs collected in the M3 project. As a prerequisite, these head verbs were tagged as generic or specific following three criteria proposed in the literature: the specification of agent/instrument, the limitation of objects and their types, and the restriction of the denoted action to physical action. The next step is to measure semantic specificity with quantitative data. To characterise verb usage statistically, the study counts the frequency of each verb, its number of senses, and the range of its co-occurring objects. Two major analyses, Principal Component Analysis (PCA) and a Multinomial Logistic Model, are adopted to assess the predictive power of the variables and to predict the probability of the different verb categories. In addition, the vector-based model of Latent Semantic Analysis (LSA) is applied to justify the concept of semantic specificity. A distributional model based on the Academia Sinica Balanced Corpus (ASBC) with LSA is built to investigate how the semantic space varies with semantic specificity. By measuring vector distances, the semantic similarity between words is calculated. The word-space model is used to measure the semantic loads of single verbs and to explore the semantic information in Chinese resultative verb compounds (RVCs).
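
    A toy word-space sketch of the vector-distance step the abstract describes: verb vectors are built from co-occurring object nouns and compared by cosine similarity. The (verb, object) pairs below are invented stand-ins for counts one would extract from ASBC.

```python
# Toy word-space model: verb vectors over co-occurring objects, compared by
# cosine similarity. The (verb, object) pairs are invented placeholders.
import numpy as np
from collections import Counter

pairs = [("eat", "rice"), ("eat", "noodles"), ("eat", "apple"),
         ("devour", "rice"), ("devour", "noodles"),
         ("cut", "paper"), ("cut", "rope")]

verbs = sorted({v for v, _ in pairs})
objs = sorted({o for _, o in pairs})
counts = Counter(pairs)
M = np.array([[counts[v, o] for o in objs] for v in verbs], dtype=float)

def cosine(a, b):
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d else 0.0

for i, v in enumerate(verbs):
    for j in range(i + 1, len(verbs)):
        print(v, verbs[j], round(cosine(M[i], M[j]), 3))
```

    On counts like these, a generic verb with a wide range of objects spreads its mass over more dimensions than a specific verb does, which is one way the object-range criterion can enter a quantitative measure.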

    Visualisation techniques for analysing meaning

    Many ways of dealing with large collections of linguistic information involve the general principle of mapping words, larger terms and documents into some sort of abstract space. Considerable effort has been devoted to applying such techniques to practical tasks such as information retrieval and word-sense disambiguation. However, the inherent structure of these spaces is often less well understood. Visualisation tools can help to uncover the relationships between meanings in this space, giving a clearer picture of the natural structure of linguistic information. We present a variety of tools for visualising word meanings in vector spaces and graph models, derived from co-occurrence information and local syntactic analysis. Our techniques suggest new solutions to standard problems such as automatic management of lexical resources, solutions which perform well under evaluation. The tools presented in this paper are all available for public use on our website.
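
    A minimal sketch of the general pipeline such tools build on, assuming scikit-learn: derive a word co-occurrence matrix from a corpus and project the words to two dimensions for plotting. The toy corpus and the window size are illustrative choices, not details from the paper.

```python
# Minimal co-occurrence space: count neighbours within a +/-2 token window,
# then project words to 2D with PCA. Corpus and window size are illustrative.
import numpy as np
from sklearn.decomposition import PCA

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
    for j in range(lo, hi):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1

coords = PCA(n_components=2).fit_transform(C)
for w in vocab:
    x, y = coords[idx[w]]
    print(f"{w:>4}: ({x:+.2f}, {y:+.2f})")  # feed to any 2D plotting tool
```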