18 research outputs found

    Geographies and cultures of international student experiences in higher education: Shared perspectives between students from different countries

    Full text link
    Updated research is required on the geographies of the cultural issues that shape international students' experiences. The growing number of students travelling to different countries implies a need to cater to cultures and values from different parts of the world. Beyond cultural and geographical aspects, little is known about similarities between students' experiences abroad that take into account their countries of origin (and, to some extent, their cultures) within those mobility flows. Using a probabilistic topic model on 59,662 reports by international students from 167 countries about their mobility experiences, we examine links between the students' experiences and their countries of origin. The results show that the geographical features of the reports are connected not only to cultural issues but also to other factors that might affect the international experience.
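    The core analysis pipeline described above can be sketched as follows: fit a topic model over short text reports, then aggregate per-document topic distributions by country of origin. This is a minimal illustration, not the study's code; the mini-corpus, country labels, and topic count are invented.

```python
# Sketch: LDA over "mobility reports", then average topic mixtures per
# country of origin. Corpus and labels are illustrative assumptions.
from collections import defaultdict
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "housing costs and city transport were a challenge",
    "the university courses and exams were demanding",
    "local food and culture made the exchange memorable",
    "finding housing near the campus took weeks",
    "lectures and coursework differed from my home university",
    "festivals and cuisine shaped my cultural experience",
]
countries = ["DE", "CN", "BR", "DE", "CN", "BR"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reports)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)            # shape: (n_reports, 3)

# Average topic mixture per country of origin.
by_country = defaultdict(list)
for country, dist in zip(countries, doc_topics):
    by_country[country].append(dist)
country_profiles = {c: np.mean(d, axis=0) for c, d in by_country.items()}
```

    Comparing the resulting per-country profiles is one simple way to surface geographical patterns in the reports.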

    A Sentence-level Hierarchical BERT Model for Document Classification with Limited Labelled Data

    Get PDF
    Training deep learning models with limited labelled data is an attractive scenario for many NLP tasks, including document classification. While, with the recent emergence of BERT, deep learning language models can achieve reasonably good performance in document classification with few labelled instances, there is little evidence on the utility of applying BERT-like models to long document classification. This work introduces a long-text-specific model -- the Hierarchical BERT Model (HBM) -- that learns sentence-level features of the text and works well in scenarios with limited labelled data. Evaluation experiments demonstrate that HBM can achieve higher performance in document classification than previous state-of-the-art methods with only 50 to 200 labelled instances, especially when documents are long. Also, as an extra benefit of HBM, a user study shows that the salient sentences identified by a trained HBM are useful as explanations for labelling documents.
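    The hierarchical idea, encoding each sentence separately and then pooling sentence vectors with attention whose weights double as saliency scores, can be sketched structurally. This is not the authors' HBM: the bag-of-words "encoder", vocabulary, and untrained attention parameters are stand-ins for a BERT sentence encoder and learned weights.

```python
# Structural sketch of sentence-level hierarchical pooling (not HBM itself):
# encode each sentence, attend over sentence vectors to form a document
# vector, and read the attention weights as sentence saliency.
import numpy as np

def encode_sentence(sentence, vocab):
    """Stand-in sentence encoder: bag-of-words counts over a fixed vocab."""
    v = np.zeros(len(vocab))
    for tok in sentence.lower().split():
        if tok in vocab:
            v[vocab[tok]] += 1.0
    return v

def document_vector(sentences, vocab, w):
    """Attention-pool sentence vectors into one document vector."""
    S = np.stack([encode_sentence(s, vocab) for s in sentences])
    scores = S @ w                         # one score per sentence
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # softmax attention weights
    return alpha @ S, alpha                # doc vector, per-sentence saliency

vocab = {t: i for i, t in enumerate(
    ["refund", "delivery", "broken", "great", "service", "fast"])}
rng = np.random.default_rng(0)
w = rng.normal(size=len(vocab))            # untrained attention parameters

doc = ["great service overall", "but the item arrived broken",
       "i asked for a refund"]
vec, saliency = document_vector(doc, vocab, w)
```

    In a trained model, the highest-weight sentences would play the role of the "salient sentences" the abstract mentions as labelling explanations.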

    Knowledge Discovery from CVs: A Topic Modeling Procedure

    Get PDF
    With a huge number of CVs available online, recruiting via the web has become an integral part of human resource management for companies. Automated text mining methods can be used to analyze large databases containing CVs. We present a topic modeling procedure consisting of five steps with the aim of identifying competences in CVs in an automated manner. Both the procedure and its exemplary application to CVs from IT experts are described in detail. The specific characteristics of CVs are considered in each step for optimal results. The exemplary application suggests that clearly interpretable topics describing fine-grained competences (e.g., Java programming, web design) can be discovered. This information can be used to rapidly assess the contents of a CV, categorize CVs, and identify candidates for job offers. Furthermore, a topic-based search technique is evaluated to provide helpful decision support.

    Blending citizen science with natural language processing and machine learning: Understanding the experience of living with multiple sclerosis.

    Get PDF
    The emergence of new digital technologies has enabled a new way of doing research, including active collaboration with the public ('citizen science'). Innovation in machine learning (ML) and natural language processing (NLP) has made automatic analysis of large-scale text data accessible for studying individual perspectives in a convenient and efficient fashion. Here we blend citizen science with innovation in NLP and ML to examine (1) which categories of life events persons with multiple sclerosis (MS) perceived as central to their MS; and (2) the associated emotions. We subsequently relate our results to standardized individual-level measures. Participants (n = 1039) took part in the 'My Life with MS' study of the Swiss MS Registry, which involved telling their story through self-selected life events using text descriptions and a semi-structured questionnaire. We performed topic modeling ('latent Dirichlet allocation') to identify high-level topics underlying the text descriptions. Using a pre-trained language model, we performed a fine-grained emotion analysis of the text descriptions. A topic modeling analysis of a total of 4293 descriptions revealed eight underlying topics. Five topics are common in clinical research: 'diagnosis', 'medication/treatment', 'relapse/child', 'rehabilitation/wheelchair', and 'injection/symptoms'. However, three topics, 'work', 'birth/health', and 'partnership/MS', represent domains that are of great relevance to participants but are generally understudied in MS research. While emotions were predominantly negative (sadness, anxiety), emotions linked to the topics 'birth/health' and 'partnership/MS' were also positive (joy). Designed in close collaboration with persons with MS, the 'My Life with MS' project explores the experience of living with the chronic disease of MS using NLP and ML. Our study thus contributes to the body of research demonstrating the potential of integrating citizen science with ML-driven NLP methods to explore the experience of living with a chronic condition.
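    The second analysis step, assigning emotions to free-text descriptions, can be sketched in its simplest form. The study uses a pre-trained language model; the tiny emotion lexicon below is an invented stand-in purely to show the shape of the output.

```python
# Hedged sketch of emotion scoring over a life-event description.
# A real pipeline would use a pre-trained model; this lexicon is invented.
EMOTION_LEXICON = {
    "sadness": {"loss", "cried", "grief"},
    "anxiety": {"worried", "afraid", "uncertain"},
    "joy": {"happy", "joy", "birth"},
}

def emotion_profile(text):
    """Count lexicon hits per emotion category for one description."""
    tokens = set(text.lower().split())
    return {emo: len(tokens & words) for emo, words in EMOTION_LEXICON.items()}

profile = emotion_profile("we were happy about the birth but worried too")
# "happy" and "birth" hit joy; "worried" hits anxiety
```

    Aggregating such per-description profiles by topic is what lets the study link, for example, 'birth/health' to joy alongside the predominant sadness and anxiety.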

    Combining Rating and Review Data by Initializing Latent Factor Models with Topic Models for Top-N Recommendation

    Get PDF
    The 14th ACM Recommender Systems Conference (RecSys '20), Virtual Event, 22-26 September 2020.
    Nowadays we commonly have multiple sources of data associated with items. Users may provide numerical ratings or implicit interactions, but may also provide textual reviews. Although many algorithms have been proposed to jointly learn a model over both interactions and textual data, there is room to improve the many factorization models that are proven to work well on interaction data but are not designed to exploit textual information. Our focus in this work is to propose a simple, yet easily applicable and effective, method to incorporate review data into such factorization models. In particular, we propose to build the user and item embeddings within the topic space of a topic model learned from the review data. This has several advantages: we observe that initializing the user and item embeddings in topic space leads to faster convergence of the factorization algorithm to a model that outperforms models initialized randomly or with other state-of-the-art initialization strategies. Moreover, constraining user and item factors to topic space allows for the learning of an interpretable model that users can visualise.
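    The initialization idea can be sketched with plain matrix factorization: place item factors at their review-topic mixtures, initialize users at the mean of the items they interacted with, and then refine with ordinary SGD. All data below are synthetic, and the training loop is a generic illustration rather than the paper's algorithm.

```python
# Sketch of topic-space initialization for a latent factor model.
# Item factors start at (pretend) review-topic mixtures; user factors
# start at the mean mixture of interacted items; SGD refines both.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3

# Pretend each item's reviews were run through a k-topic model:
item_topics = rng.dirichlet(np.ones(k), size=n_items)       # item init
R = (rng.random((n_users, n_items)) > 0.5).astype(float)    # implicit data

# User init: mean topic mixture of the items the user interacted with.
user_init = np.array([
    item_topics[R[u] > 0].mean(axis=0) if R[u].sum() else np.full(k, 1 / k)
    for u in range(n_users)
])

P, Q = user_init.copy(), item_topics.copy()
lr, reg = 0.05, 0.01
for _ in range(50):                        # a few SGD epochs
    for u in range(n_users):
        for i in range(n_items):
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
```

    Because both factor matrices start in (and stay near) topic space, each latent dimension retains a loose reading as a review topic, which is the interpretability advantage the abstract points to.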

    An enhanced sequential exception technique for semantic-based text anomaly detection

    Get PDF
    The detection of semantic-based text anomaly is an interesting research area which has gained considerable attention from the data mining community. Text anomaly detection identifies information that deviates from the general information contained in documents. Text data are characterized by problems related to ambiguity, high dimensionality, sparsity and text representation. If these challenges are not properly resolved, identifying semantic-based text anomaly will be less accurate. This study proposes an Enhanced Sequential Exception Technique (ESET) to detect semantic-based text anomaly by achieving five objectives: (1) to modify the Sequential Exception Technique (SET) for processing unstructured text; (2) to optimize Cosine Similarity for identifying similar and dissimilar text data; (3) to hybridize the modified SET with Latent Semantic Analysis (LSA); (4) to integrate the Lesk and Selectional Preference algorithms for disambiguating senses and identifying text canonical form; and (5) to represent semantic-based text anomaly using First Order Logic (FOL) and Concept Network Graph (CNG). ESET performs text anomaly detection by employing optimized Cosine Similarity, hybridizing LSA with the modified SET, and integrating it with Word Sense Disambiguation algorithms, specifically Lesk and Selectional Preference. Then, FOL and CNG are proposed to represent the detected semantic-based text anomaly. To demonstrate the feasibility of the technique, experiments were run on four selected datasets, namely NIPS data, ENRON, Daily Koss blog, and 20Newsgroups. The experimental evaluation revealed that ESET significantly improves the accuracy of detecting semantic-based text anomaly from documents. ESET outperformed the benchmarked methods, with improved F1-scores on all datasets: NIPS data 0.75, ENRON 0.82, Daily Koss blog 0.93 and 20Newsgroups 0.97. 
    The results generated from ESET have proven significant and support a growing notion of semantic-based text anomaly that is increasingly evident in the existing literature. Practically, this study contributes to topic modelling and concept coherence for the purposes of visualizing information, knowledge sharing and optimized decision making.
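    The cosine-similarity step at the heart of the approach can be illustrated in miniature: score each document against the corpus centroid and flag the least similar one as the candidate anomaly. This is a drastic simplification of ESET (no SET, LSA, or sense disambiguation), with an invented toy corpus.

```python
# Toy illustration of cosine-based text anomaly flagging: the document
# least similar to the corpus centroid is the candidate anomaly.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stocks rose as markets rallied on earnings",
    "shares climbed after strong quarterly earnings",
    "markets rallied and stocks gained on results",
    "the recipe calls for butter sugar and flour",   # deviating document
]
X = TfidfVectorizer().fit_transform(docs).toarray()
centroid = X.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(row, centroid) for row in X]
anomaly_index = int(np.argmin(scores))     # the recipe document
```

    ESET builds on this basic signal by working in LSA space and disambiguating word senses before scoring, which is what makes the detection "semantic-based" rather than purely lexical.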

    A Topic Coverage Approach to Evaluation of Topic Models

    Full text link
    Topic models are widely used unsupervised models of text capable of learning topics - weighted lists of words and documents - from large collections of text documents. When topic models are used for discovery of topics in text collections, a question that arises naturally is how well the model-induced topics correspond to topics of interest to the analyst. In this paper we revisit and extend a so far neglected approach to topic model evaluation based on measuring topic coverage - computationally matching model topics with a set of reference topics that models are expected to uncover. The approach is well suited for analyzing models' performance in topic discovery and for large-scale analysis of both topic models and measures of model quality. We propose new measures of coverage and evaluate, in a series of experiments, different types of topic models on two distinct text domains for which interest in topic discovery exists. The experiments include evaluation of model quality, analysis of coverage of distinct topic categories, and analysis of the relationship between coverage and other methods of topic model evaluation. The contributions of the paper include new measures of coverage, insights into both topic models and other methods of model evaluation, and the datasets and code for facilitating future research of both topic coverage and other approaches to topic model evaluation.
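    One simple way to operationalize coverage, not necessarily the paper's measure, is to count a reference topic as covered when some model topic's word distribution is close enough to it under cosine similarity. The vocabulary, topic vectors, and threshold below are invented for illustration.

```python
# Minimal coverage sketch: fraction of reference topics matched by at
# least one model topic under a cosine-similarity threshold.
import numpy as np

vocab = ["ball", "team", "goal", "vote", "party", "law"]

def vec(word_weights):
    """Dense word-weight vector over the shared vocabulary."""
    return np.array([word_weights.get(w, 0.0) for w in vocab])

reference = [vec({"ball": .5, "team": .3, "goal": .2}),   # sports
             vec({"vote": .4, "party": .4, "law": .2})]   # politics
model = [vec({"ball": .4, "goal": .4, "team": .2}),
         vec({"law": .6, "vote": .2, "party": .2}),
         vec({"team": .1, "law": .1, "ball": .1})]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def coverage(reference, model, threshold=0.8):
    matched = sum(any(cosine(r, m) >= threshold for m in model)
                  for r in reference)
    return matched / len(reference)

cov = coverage(reference, model)   # here only the sports topic is matched
```

    Under these made-up topics, the sports reference is matched but the politics one falls below the threshold, giving coverage 0.5; the paper's actual measures and matching procedures are more refined.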