7 research outputs found

    A set of open-source tools for Turkish natural language processing

    This paper introduces a set of freely available, open-source tools for Turkish that are built around TRmorph, a morphological analyzer introduced earlier in Çöltekin (2010a). The article first provides an update on the analyzer, which includes a complete rewrite using a different finite-state description language and toolset, as well as major tagset changes to comply better with state-of-the-art computational processing of Turkish and with user requests received so far. Besides these major changes to the analyzer, this paper introduces tools for morphological segmentation, stemming and lemmatization, unknown-word guessing, grapheme-to-phoneme conversion, hyphenation, and morphological disambiguation.
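    Since the rewritten analyzer is distributed as a finite-state transducer, lookups can be scripted against a compiled binary. Below is a minimal sketch, assuming the analyzer has been compiled to a local file trmorph.fst (a hypothetical path) with the foma toolkit, whose flookup utility reads words on stdin and prints tab-separated word/analysis pairs.

    ```python
    # Minimal sketch: querying a compiled morphological analyzer via foma's
    # flookup. "trmorph.fst" is a hypothetical local build of the analyzer.
    import subprocess

    def analyze(words, fst_path="trmorph.fst"):
        """Return {word: [analyses]} by piping words through flookup."""
        proc = subprocess.run(
            ["flookup", fst_path],
            input="\n".join(words),
            capture_output=True,
            text=True,
            check=True,
        )
        analyses = {w: [] for w in words}
        for line in proc.stdout.splitlines():
            if not line.strip():
                continue  # flookup prints a blank line after each word
            word, analysis = line.split("\t")
            if analysis != "+?":  # foma marks unknown words with "+?"
                analyses[word].append(analysis)
        return analyses

    print(analyze(["evlerde", "kitaplar"]))
    ```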

    STREAMCUBE: Hierarchical spatio-temporal hashtag clustering for event exploration over the Twitter stream


    A multi-modal approach towards mining social media data during natural disasters - A case study of Hurricane Irma

    Streaming social media provides a real-time glimpse of extreme weather impacts. However, the volume of streaming data makes mining information a challenge for emergency managers, policy makers, and disciplinary scientists. Here we explore the effectiveness of data-driven approaches to mine and filter information from streaming social media data from Hurricane Irma's landfall in Florida, USA. We use 54,383 Twitter messages (out of 784K geolocated messages) from 16,598 users from Sept. 10–12, 2017 to develop four independent models to filter data for relevance: 1) a geospatial model based on forcing conditions at the place and time of each tweet, 2) an image classification model for tweets that include images, 3) a user model to predict the reliability of the tweeter, and 4) a text model to determine whether the text is related to Hurricane Irma. All four models are independently tested and can be combined to quickly filter and visualize tweets based on user-defined thresholds for each submodel. We envision that this type of filtering and visualization routine can be useful as a base model for data capture from noisy sources such as Twitter. The data can then be used by policy makers, environmental managers, emergency managers, and domain scientists interested in finding tweets with specific attributes for use during different stages of the disaster (e.g., preparedness, response, and recovery), or for detailed research.
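    Combining the submodels reduces to a conjunction of per-model thresholds over each tweet's scores. The sketch below shows only that combination step; the score fields, threshold values, and example records are illustrative assumptions, not the paper's actual models or data.

    ```python
    # Minimal sketch: filtering tweets by user-defined thresholds over four
    # independent submodel scores. All names and values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class TweetScores:
        tweet_id: str
        geo: float    # geospatial model: forcing conditions at tweet place/time
        image: float  # image classifier score (0.0 if no image is attached)
        user: float   # user model: predicted reliability of the tweeter
        text: float   # text model: relevance of the text to the event

    def passes(s: TweetScores, thresholds: dict) -> bool:
        """Keep a tweet only if every submodel clears its threshold."""
        return (s.geo >= thresholds["geo"] and s.image >= thresholds["image"]
                and s.user >= thresholds["user"] and s.text >= thresholds["text"])

    thresholds = {"geo": 0.5, "image": 0.3, "user": 0.6, "text": 0.7}
    candidates = [TweetScores("t1", 0.9, 0.4, 0.8, 0.9),
                  TweetScores("t2", 0.2, 0.9, 0.9, 0.9)]
    print([s.tweet_id for s in candidates if passes(s, thresholds)])  # ['t1']
    ```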

    Semantic Expansion of Tweet Contents for Enhanced Event Detection in Twitter

    This paper aims to enhance event detection methods on a micro-blogging platform, namely Twitter. The enhancement technique we propose is based on lexico-semantic expansion of tweet contents while applying document similarity and clustering algorithms. Considering the length limitations and idiosyncratic spelling in the Twitter environment, it is possible to take advantage of word similarities and to enrich texts with similar words. The semantic expansion technique we implement is based on syntagmatic and paradigmatic relationships between words, extracted from their co-occurrence statistics. As our technique does not depend on an existing ontology or a lexical database such as WordNet, it should be applicable to any language. The proposed technique is applied to a tweet set collected over three days from users in Turkey. The results indicate earlier detection of events and improvements in accuracy.
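    The expansion step can be approximated from tweet-level co-occurrence counts alone, with no external lexicon, which is what makes it language-independent. The following sketch is a minimal illustration under that assumption; the whitespace tokenization, raw-count scoring, and top-k cutoff are stand-ins for the paper's syntagmatic/paradigmatic statistics.

    ```python
    # Minimal sketch: enriching a tweet with words that frequently co-occur
    # with its tokens. Tokenization and scoring are simplifying assumptions.
    from collections import Counter, defaultdict
    from itertools import combinations

    def build_cooccurrence(tweets):
        """Count, for each word, how often other words share a tweet with it."""
        cooc = defaultdict(Counter)
        for tweet in tweets:
            tokens = set(tweet.lower().split())
            for a, b in combinations(sorted(tokens), 2):
                cooc[a][b] += 1
                cooc[b][a] += 1
        return cooc

    def expand(tweet, cooc, k=2):
        """Append the top-k co-occurring words for each token, deduplicated."""
        tokens = tweet.lower().split()
        extra = {w for t in tokens
                 for w, _ in cooc[t].most_common(k) if w not in tokens}
        return tokens + sorted(extra)

    corpus = ["quake hits city", "quake damage city", "city rebuild effort"]
    cooc = build_cooccurrence(corpus)
    print(expand("quake hits", cooc))  # ['quake', 'hits', 'city']
    ```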
