
    DancingLines: An Analytical Scheme to Depict Cross-Platform Event Popularity

    Nowadays, events typically break out and propagate online through multiple modern media such as social networks and search engines. A variety of studies discuss event dissemination trends on individual media, but few examine event popularity from a cross-platform perspective. Challenges come from the vast diversity of events and media, limited access to aligned datasets across different media, and the large amount of noise in the datasets. In this paper, we design DancingLines, an innovative scheme that captures and quantitatively analyzes event popularity between pairwise text media. It contains two models: TF-SW, a semantic-aware popularity quantification model based on an integrated weight coefficient that leverages Word2Vec and TextRank; and wDTW-CD, a pairwise event popularity time-series alignment model, adapted from Dynamic Time Warping, that matches different event phases. We also propose three metrics to interpret event popularity trends between pairwise social platforms. Experimental results on eighteen real-world event datasets from an influential social network and a popular search engine validate the effectiveness and applicability of our scheme. DancingLines is shown to have broad application potential for discovering knowledge about various aspects of events and media.
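
    The abstract does not give wDTW-CD's exact formulation; as a rough illustration of the idea it adapts, the sketch below aligns two event-popularity time series with plain Dynamic Time Warping. The series names and values are made up for illustration and are not from the paper.

```python
import numpy as np

def dtw_align(a, b):
    """Plain Dynamic Time Warping between two 1-D popularity series.

    Returns the cumulative alignment cost and the warping path as a list of
    (i, j) index pairs matching phases of series `a` to phases of series `b`.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in a only
                                 cost[i, j - 1],      # step in b only
                                 cost[i - 1, j - 1])  # step in both
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Illustrative popularity curves for one event on two platforms.
twitter_series = [0.1, 0.4, 0.9, 0.7, 0.3, 0.1]
search_series  = [0.1, 0.2, 0.5, 0.9, 0.6, 0.2, 0.1]
total_cost, path = dtw_align(twitter_series, search_series)
print(total_cost, path)
```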

    A Corpus-based Approach to the Chinese Word Segmentation

    For a society based upon laws and reason, it has become too easy for us to believe that we live in a world without them. And given that our linguistic wisdom was originally motivated by the search for rules, it seems strange that we now consider these rules to be the exceptions and take exceptions as the norm. The current task of contemporary computational linguistics is to describe these exceptions. In particular, for most language processing needs it suffices to describe the argument and predicate within an elementary sentence, under the framework of local grammar. Therefore, a corpus-based approach to the Chinese word segmentation problem is proposed, as a first step towards a local grammar for the Chinese language. The two main issues with existing lexicon-based approaches are (a) the classification of unknown character sequences, i.e. sequences that are not listed in the lexicon, and (b) the disambiguation of situations where two candidate words overlap. For (a), we propose an automatic method of enriching the lexicon by comparing candidate sequences to occurrences of the same strings in a manually segmented reference corpus, and using machine learning methods, developed in the course of this thesis specifically for the task, to select the optimal segmentation for them. The possibility of applying these machine learning methods to the NP-extraction and alignment domains is also discussed. (b) is approached by designing a general processing framework for Chinese text, which we call multi-level processing. Under this framework, sentences are recursively split into fragments according to language-specific but domain-independent heuristics. The resulting fragments then define the ultimate boundaries between candidate words and therefore resolve any segmentation ambiguity caused by overlapping sequences. A new shallow semantic annotation is also proposed under the framework of multi-level processing. A word segmentation algorithm based on these principles has been implemented and tested; results of the evaluation are given and compared to the performance of previous approaches as reported in the literature. The first chapter of this thesis discusses the goals of segmentation and introduces some background concepts. The second chapter analyses the current state-of-the-art approach to Chinese language segmentation. Chapter 3 proposes a new corpus-based approach to the identification of unknown words. In chapter 4, a new shallow semantic annotation is proposed under the framework of multi-level processing.
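
    The thesis's actual splitting heuristics and lexicon methods are not spelled out in this abstract; the sketch below only illustrates the general multi-level flow, splitting a sentence into fragments first (here, crudely, at punctuation) so candidate words cannot straddle fragment boundaries, and then segmenting each fragment against a toy lexicon with greedy maximum matching. The lexicon and all names are illustrative.

```python
import re

# Toy lexicon; the thesis's lexicon and heuristics are far richer. This only
# illustrates the "split first, then segment each fragment" flow.
LEXICON = {"研究", "生命", "研究生", "命", "起源", "的"}

def split_into_fragments(sentence):
    """First splitting level: break on punctuation so that candidate words
    can never straddle a fragment boundary (a simplified stand-in for the
    thesis's language-specific, domain-independent heuristics)."""
    return [f for f in re.split(r"[，。、！？；：]", sentence) if f]

def max_match(fragment, lexicon, max_len=4):
    """Greedy forward maximum matching inside a single fragment."""
    words, i = [], 0
    while i < len(fragment):
        for l in range(min(max_len, len(fragment) - i), 0, -1):
            cand = fragment[i:i + l]
            if l == 1 or cand in lexicon:
                words.append(cand)
                i += l
                break
    return words

def segment(sentence, lexicon=LEXICON):
    return [w for frag in split_into_fragments(sentence)
              for w in max_match(frag, lexicon)]

print(segment("研究生命的起源，研究生的生命"))
```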

    Methods and algorithms for unsupervised learning of morphology

    This is the accepted manuscript of a chapter published by Springer in Computational Linguistics and Intelligent Text Processing (CICLing 2014), Lecture Notes in Computer Science, vol. 8403, 2014, available online at https://doi.org/10.1007/978-3-642-54906-9_15; the accepted version may differ from the final published version. This paper is a survey of methods and algorithms for unsupervised learning of morphology. We describe the methods and algorithms used for morphological segmentation from a computational linguistics point of view, covering approaches based on MDL (minimum description length), MLE (maximum likelihood estimation), MAP (maximum a posteriori), and parametric and non-parametric Bayesian models. A review of the evaluation schemes for unsupervised morphological segmentation is also provided, along with a summary of results from the Morpho Challenge evaluations.
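
    Many of the surveyed MDL-based methods share a common two-part cost: bits to encode the morph lexicon plus bits to encode the corpus given that lexicon. The sketch below computes such a cost for a candidate segmentation; the unigram coding scheme and the fixed bits-per-character constant are simplifying assumptions, not the cost function of any particular surveyed algorithm.

```python
import math
from collections import Counter

def description_length(segmented_corpus, bits_per_char=5.0):
    """Two-part MDL cost of a candidate segmentation.

    segmented_corpus: list of words, each given as a list of morph strings,
    e.g. [["walk", "ing"], ["walk", "ed"]].  Returns lexicon cost plus corpus
    cost in bits; `bits_per_char` is a crude stand-in for a real character code.
    """
    morph_counts = Counter(m for word in segmented_corpus for m in word)
    total_tokens = sum(morph_counts.values())

    # Lexicon cost: encode each distinct morph once, character by character.
    lexicon_cost = sum(bits_per_char * len(m) for m in morph_counts)

    # Corpus cost: encode the morph token sequence with -log2 of its
    # maximum-likelihood unigram probability.
    corpus_cost = -sum(count * math.log2(count / total_tokens)
                       for count in morph_counts.values())
    return lexicon_cost + corpus_cost

# Splitting shared suffixes pays for itself once they recur often enough.
unsegmented = [["walking"], ["walked"], ["talking"], ["talked"]]
segmented   = [["walk", "ing"], ["walk", "ed"], ["talk", "ing"], ["talk", "ed"]]
print(description_length(unsegmented), description_length(segmented))
```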

    A Neural Approach to Discourse Relation Signal Detection

    Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result', has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, Δs (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized word embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in a specific instance in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory, with signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification.
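
    The abstract does not give the exact formula for Δs; one natural reading, sketched below, scores each token by how much masking it changes the classifier's softmax probability for the gold relation. The `predict_proba` callable and the toy classifier are placeholders for the paper's distantly supervised network, not its actual interface.

```python
from typing import Callable, Dict, List

MASK = "<MASK>"

def delta_softmax(tokens: List[str],
                  gold_relation: str,
                  predict_proba: Callable[[List[str]], Dict[str, float]]) -> List[float]:
    """Per-token signaling score: how much the softmax probability of the gold
    relation drops (positive score) or rises (negative score) when the token
    is masked out."""
    p_full = predict_proba(tokens)[gold_relation]
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        scores.append(p_full - predict_proba(masked)[gold_relation])
    return scores

# Toy classifier: treats 'however' as strong evidence for a CONTRAST relation.
def toy_predict_proba(tokens):
    p_contrast = 0.9 if "however" in tokens else 0.4
    return {"CONTRAST": p_contrast, "OTHER": 1.0 - p_contrast}

tokens = ["however", ",", "the", "results", "differ"]
print(delta_softmax(tokens, "CONTRAST", toy_predict_proba))
```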

    SAM-Med3D

    Although the Segment Anything Model (SAM) has demonstrated impressive performance in 2D natural image segmentation, its application to 3D volumetric medical images reveals significant shortcomings: suboptimal performance and unstable predictions, requiring an excessive number of prompt points to attain the desired outcomes. These issues can hardly be addressed by fine-tuning SAM on medical data, because the original 2D structure of SAM neglects 3D spatial information. In this paper, we introduce SAM-Med3D, the most comprehensive study to date on modifying SAM for 3D medical images. Our approach is comprehensive in two primary respects: first, we reformulate SAM into a thorough 3D architecture trained on a carefully processed, large-scale volumetric medical dataset; second, we provide a comprehensive evaluation of its performance. Specifically, we train SAM-Med3D with over 131K 3D masks spanning 247 categories. SAM-Med3D excels at capturing 3D spatial information, exhibiting competitive performance with significantly fewer prompt points than the top-performing fine-tuned SAM in the medical domain. We then evaluate its capabilities across 15 datasets and analyze it from multiple perspectives, including anatomical structures, modalities, targets, and generalization ability. Compared with SAM, our approach shows markedly higher efficiency and broad segmentation capability for 3D volumetric medical images. Our code is released at https://github.com/uni-medical/SAM-Med3D.
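
    To make the efficiency claim concrete, the snippet below contrasts slice-wise 2D prompting (at least one click per axial slice containing the target) with a single 3D point prompt on a synthetic volume. The volume, mask, and prompting policy are assumptions for illustration only, not code or data from the SAM-Med3D repository.

```python
import numpy as np

# Synthetic 64^3 volume with a spherical "organ" mask, purely illustrative.
D = H = W = 64
zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij")
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2

# Slice-wise 2D prompting: at least one click on every axial slice that
# contains the target.
slices_with_target = int(np.count_nonzero(mask.any(axis=(1, 2))))

# Volumetric 3D prompting: a single point prompt (here the mask centroid)
# can seed the whole target.
centroid = tuple(np.round(np.argwhere(mask).mean(axis=0)).astype(int))

print(f"2D slice-wise prompting: >= {slices_with_target} point prompts")
print(f"3D volumetric prompting: 1 point prompt at {centroid}")
```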