
    Sentiment analysis with limited training data

    Sentiments are positive and negative emotions, evaluations, and stances. This dissertation focuses on learning-based systems for the automatic analysis of sentiments and comparisons in natural-language text. The proposed approach makes three contributions: 1. Bag-of-opinions model: For predicting document-level polarity and intensity, we proposed the bag-of-opinions model, which represents each document as a bag of sentiments and exploits the syntactic structure of sentiment-bearing phrases for improved rating prediction of online reviews. 2. Multi-experts model: Because manually labeled training data are sparse, we designed the multi-experts model for sentence-level analysis of sentiment polarity and intensity; it fully exploits any available sentiment indicators, such as phrase-level predictors and sentence-similarity measures. 3. LSSVMrae model: To capture sentiments regarding entities, we proposed the LSSVMrae model for extracting sentiments and comparisons of entities at both the sentence and the subsentential level. Different granularities of analysis lead to different model complexity: the finer the granularity, the more complex the model. All proposed models aim to minimize the use of hand-labeled data while maximizing the use of freely available resources. The models also explore different feature representations to capture the compositional semantics inherent in sentiment-bearing expressions. Our experimental results on real-world data show that all models significantly outperform state-of-the-art methods on their respective tasks.
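
    As a rough illustration of the bag-of-opinions idea described above, the sketch below counts simple (negator, modifier, polarity word) patterns in reviews and fits a linear regressor on those counts to predict star ratings. The lexicons, the three-token context window, and the Ridge regressor are illustrative assumptions, not the dissertation's actual model.

    # Minimal, hypothetical sketch of a bag-of-opinions-style rating predictor.
    # The lexicons and heuristics below are assumptions for illustration only.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    NEGATORS = {"not", "never", "no"}
    MODIFIERS = {"very", "extremely", "slightly", "quite"}
    POLARITY_WORDS = {"good", "great", "bad", "terrible"}

    def opinion_features(review):
        """Count (negator, modifier, polarity-word) patterns in a review."""
        tokens = review.lower().split()
        feats = {}
        for i, tok in enumerate(tokens):
            if tok in POLARITY_WORDS:
                window = tokens[max(0, i - 3):i]
                neg = "neg_" if any(t in NEGATORS for t in window) else ""
                mod = next((t + "_" for t in window if t in MODIFIERS), "")
                key = neg + mod + tok
                feats[key] = feats.get(key, 0) + 1
        return feats

    # Toy training data: (review text, star rating); real data would be larger.
    reviews = [("a very good phone", 5.0), ("not good at all", 2.0),
               ("extremely bad battery", 1.0), ("quite good overall", 4.0)]
    model = make_pipeline(DictVectorizer(), Ridge(alpha=1.0))
    model.fit([opinion_features(r) for r, _ in reviews], [y for _, y in reviews])
    print(model.predict([opinion_features("not a bad camera")]))

    Decomposing each opinion into its negator, modifier, and polarity word, rather than treating it as an opaque n-gram, is what allows such a model to generalize to word combinations unseen in training.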

    Multi-lingual Opinion Mining on YouTube

    In order to successfully apply opinion mining (OM) to the large amounts of user-generated content produced every day, we need robust models that handle noisy input well yet can easily be adapted to a new domain or language. Here we focus on opinion mining for YouTube by (i) modeling classifiers that predict the type of a comment and its polarity, while distinguishing whether the polarity is directed towards the product or the video; (ii) proposing a robust shallow syntactic structure (STRUCT) that adapts well when tested across domains; and (iii) evaluating the effectiveness of the proposed structure on two languages, English and Italian. We rely on tree kernels to automatically extract and learn features with better generalization power than traditionally used bag-of-words models. Our extensive empirical evaluation shows that (i) STRUCT outperforms the bag-of-words model within the same domain (up to 2.6% and 3% absolute improvement for Italian and English, respectively); (ii) it is particularly useful when tested across domains (more than 4% absolute improvement for both languages), especially when little training data is available (up to 10% absolute improvement); and (iii) the proposed structure is also effective in a lower-resource language scenario, where only less accurate linguistic processing tools are available.
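
    As a rough illustration of classifying comments with kernel methods, the sketch below builds a crude shallow representation of each comment (one coarse tag plus the word per token) and trains an SVM with a precomputed fragment-counting kernel. The toy comments, labels, tag set, and the simplified kernel are illustrative assumptions; the paper's STRUCT representation and tree-kernel machinery are richer than this stand-in.

    # Minimal, hypothetical sketch: an SVM over a precomputed similarity kernel
    # on shallow (tag, word) representations of comments. Illustration only.
    import numpy as np
    from sklearn.svm import SVC

    def shallow_repr(comment):
        """One (coarse tag, word) pair per token; a crude shallow structure."""
        return [("NEG" if t in {"not", "never"} else "TOK", t)
                for t in comment.lower().split()]

    def fragment_kernel(a, b):
        """Count shared fragments: matching tags, plus matching (tag, word) pairs."""
        score = 0.0
        for ta, wa in a:
            for tb, wb in b:
                if ta == tb:
                    score += 1.0
                    if wa == wb:
                        score += 1.0
        return score

    comments = ["great video thanks", "the product never arrived",
                "love this phone", "not worth the money"]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

    trees = [shallow_repr(c) for c in comments]
    gram = np.array([[fragment_kernel(a, b) for b in trees] for a in trees])
    clf = SVC(kernel="precomputed").fit(gram, labels)

    test = shallow_repr("never buying this again")
    print(clf.predict(np.array([[fragment_kernel(test, t) for t in trees]])))

    The kernel here is a sum of inner products over tag counts and (tag, word) counts, so it is a valid kernel for the SVM; real tree kernels instead count shared subtrees of the syntactic structure, which is the property the paper exploits for cross-domain robustness.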