3,639 research outputs found

    Sentiment analysis on Twitter for the Portuguese language

    Get PDF
    Dissertation submitted for the degree of Master in Informatics Engineering. With the growth and popularity of the internet, and more specifically of social networks, users can easily share their thoughts, insights and experiences with others. Messages shared via social networks provide useful information for several applications, such as monitoring specific targets for sentiment or comparing public sentiment on several targets, avoiding the traditional market research method of using surveys to explicitly gather public opinion. To extract information from the large volume of messages shared, it is best to use an automated program to process them. Sentiment analysis is an automated process for determining the sentiment expressed in natural language text. Sentiment is a broad term, but here we focus on the opinions and emotions expressed in text. Of the existing social network websites, Twitter is currently considered the best suited for this kind of analysis: it allows users to share their opinions on various topics and entities by means of short messages. The messages may be malformed and contain spelling errors, so some preprocessing of the text, such as spell checking, may be necessary before analysis. To know what a message is about, it is necessary to find the entities it mentions (people, locations, organizations, products, etc.) and then analyse the rest of the text to determine what is said about each entity. By analysing many messages, we can form a general idea of what the public thinks about many different entities. Our goal is to extract as much information as possible about different entities from tweets in the Portuguese language. We present different techniques that may be used, as well as examples and results from state-of-the-art related work. Using a semantic approach, we were able to find and extract named entities from these messages and assign a sentiment value to each entity found, producing a complete tool competitive with existing solutions. The sentiment classification and its assignment to entities is based on the grammatical construction of the message. The results can be viewed by the user in real time or stored to be viewed later. This analysis provides ways to view and compare public sentiment regarding these entities, showing favourite brands, companies and people, as well as the evolution of sentiment over time.
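
    As an illustration of the entity-level analysis described above, here is a minimal Python sketch that finds named entities in a Portuguese tweet and sums the polarity of surrounding words. The spaCy pt_core_news_sm model and the toy lexicon are illustrative choices, not the dissertation's actual pipeline.

    ```python
    # Minimal sketch: entity-level sentiment on Portuguese tweets.
    # Assumes spaCy with the pt_core_news_sm model installed
    # (pip install spacy && python -m spacy download pt_core_news_sm).
    import spacy

    # Toy polarity lexicon; a real system would use a full Portuguese lexicon.
    LEXICON = {"adoro": 1.0, "excelente": 1.0, "péssimo": -1.0, "odeio": -1.0}

    nlp = spacy.load("pt_core_news_sm")

    def entity_sentiment(tweet: str) -> dict:
        """Map each named entity to the summed polarity of words in its sentence."""
        doc = nlp(tweet)
        scores = {}
        for ent in doc.ents:
            score = sum(LEXICON.get(tok.lower_, 0.0) for tok in ent.sent)
            scores[ent.text] = score
        return scores

    print(entity_sentiment("Adoro a TAP, o serviço foi excelente!"))
    ```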

    Rating prediction on the Yelp academic dataset using paragraph vectors

    Get PDF
    This work studies the application of Paragraph Vectors to the Yelp Academic Dataset reviews in order to predict user ratings for different categories of businesses, such as auto repair, restaurants or veterinarians. Paragraph Vectors is a word-embedding technique in which each word or piece of text is mapped to a continuous low-dimensional space. Opinion mining, or sentiment analysis, is then treated as a classification task, where each user review is associated with a label (the rating) and a probabilistic model is built with a logistic classifier. Following the intuition that the semantic information present in textual user reviews is generally more complex and complete than the numeric rating itself, this work applies Paragraph Vectors successfully to the Yelp dataset and evaluates its results.
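
    A minimal sketch of the described pipeline, assuming gensim's Doc2Vec as the Paragraph Vectors implementation and scikit-learn's logistic regression as the classifier; the toy reviews stand in for the Yelp data.

    ```python
    # Sketch: Paragraph Vectors (Doc2Vec) features feeding a logistic
    # classifier that predicts the star rating. Toy data only.
    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.linear_model import LogisticRegression

    reviews = [("great food and friendly staff", 5),
               ("terrible service, never again", 1),
               ("decent place, nothing special", 3),
               ("awful experience and rude waiter", 1)]

    docs = [TaggedDocument(text.split(), [i]) for i, (text, _) in enumerate(reviews)]
    model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

    X = [model.dv[i] for i in range(len(reviews))]  # one paragraph vector per review
    y = [rating for _, rating in reviews]

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    vec = model.infer_vector("friendly staff and great food".split())
    print(clf.predict([vec]))  # predicted star rating for an unseen review
    ```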

    Is the polarity of content producers strongly influenced by the results of the event?

    Full text link
    This paper presents an approach, based on the Pearson correlation, to compare two types of data: subjective data (the polarity of posts about the Pan American Games 2011, by country) and objective data (the number of medals won by each participating country). When dealing with events described by people, knowledge acquisition is difficult because the information is heterogeneous and subjective. A first step towards knowing the polarity of the information provided by people consists in automatically classifying the posts into clusters according to their polarity. The authors carried out a set of experiments using a corpus of 5600 posts extracted from 168 Internet resources related to a specific event: the 2011 Pan American Games. The approach is based on four components: a crawler, a filter, a synthesizer and a polarity analyzer. The PanAmerican approach automatically classifies the polarity of posts about the event into clusters, with the following results: 588 positive, 336 neutral, and 76 negative. We found that the polarity of the content produced was strongly influenced by the results of the event, with a correlation of .74; it is therefore possible to conclude that the polarity of content is strongly affected by the results of the event. Finally, the PanAmerican approach achieved a precision of .87, .90, and .80 for the three polarity classes evaluated.
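
    The correlation step can be reproduced in a few lines with SciPy; the per-country values below are made up for illustration, not the paper's data.

    ```python
    # Sketch of the paper's correlation step: Pearson r between per-country
    # polarity scores and medal counts.
    from scipy.stats import pearsonr

    polarity_by_country = [0.82, 0.55, 0.40, 0.71, 0.30]  # hypothetical polarity scores
    medals_by_country   = [236, 136, 75, 142, 43]         # hypothetical medal counts

    r, p_value = pearsonr(polarity_by_country, medals_by_country)
    print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
    ```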

    Classification of Under-Resourced Language Documents Using English Ontology

    Get PDF
    Automatic document classification, which aims to automatically assign a document to a predefined category based on its contents, is an important task due to the rapid growth in the number of electronic documents. It plays an important role in information extraction, summarization, text retrieval, question answering, e-mail spam detection, web page content filtering, automatic message routing, etc. Most existing methods and techniques in the field of document classification are keyword based, but because this technique lacks semantic consideration, its performance is low. Documents can instead be classified by their semantics, using an ontology as a knowledge base for classification; however, building an ontology for an under-resourced language is very challenging, so this approach has been limited to well-resourced languages (i.e. English). As a result, documents written in under-resourced languages do not benefit from such ontology-based classification. This paper describes the design of an automatic classifier for documents written in under-resourced languages. We propose an approach that classifies such documents on top of an English ontology. We use a bilingual dictionary with a part-of-speech feature for word-by-word text translation, enabling the classification of documents without any language barrier. The design has a concept-mapping component, which uses lexical and semantic features to map the translated senses onto ontology concepts. In addition, the design has a categorization component, which determines the category of a given document based on the weight of the mapped concepts. To evaluate the performance of the proposed approach, 20 test documents for Amharic and Tigrinya and 15 test documents for Afaan Oromo were used in each news category. To observe the effect of the incorporated features (i.e. lemma-based index term selection, pre-processing strategies during concept mapping, and lexical and semantics-based concept mapping), five experiments were conducted. The experimental results indicate that the proposed approach, incorporating all features and components, achieved average F-measures of 92.37%, 86.07% and 88.12% for Amharic, Afaan Oromo and Tigrinya documents, respectively. Keywords: under-resourced language, multilingual, document or text classification, knowledge base, ontology-based text categorization, multilingual text classification, ontology. DOI: 10.7176/CEIS/10-6-02. Publication date: July 31st, 2019.
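
    A minimal sketch of the translate-then-map flow under stated assumptions: a toy Amharic-English dictionary, a flat concept-to-category ontology, and uniform concept weights. The paper's actual mapping uses richer lexical and semantic features.

    ```python
    # Sketch: word-by-word translation via a bilingual dictionary, then
    # mapping translated words onto ontology concepts and picking the
    # category with the highest accumulated weight. Illustrative data only.
    AMHARIC_ENGLISH = {"ስፖርት": "sport", "ኳስ": "ball", "ቡድን": "team"}

    ONTOLOGY = {  # concept -> category
        "sport": "Sports", "ball": "Sports", "team": "Sports",
        "election": "Politics", "parliament": "Politics",
    }

    def classify(tokens):
        """Translate word by word, map to concepts, and vote by weight."""
        weights = {}
        for token in tokens:
            concept = AMHARIC_ENGLISH.get(token)
            if concept in ONTOLOGY:
                category = ONTOLOGY[concept]
                weights[category] = weights.get(category, 0) + 1
            # untranslatable or unmapped words are skipped
        return max(weights, key=weights.get) if weights else None

    print(classify(["ስፖርት", "ኳስ", "ቡድን"]))  # -> "Sports"
    ```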

    Extraction of opinionated profiles from comments on web news

    Get PDF
    Integrated master's dissertation. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Applying Supervised Opinion Mining Techniques on Online User Reviews

    Get PDF
    In recent years, the spectacular development of web technologies has led to an enormous quantity of user-generated information in online systems. This large amount of information makes web platforms viable as data sources for applications based on opinion mining and sentiment analysis. The paper proposes an algorithm for detecting sentiment in movie user reviews, based on a naive Bayes classifier. We analyse the opinion mining domain, the techniques used in sentiment analysis, and their applicability. We implemented the proposed algorithm, tested its performance, and suggest directions for development.
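
    A minimal sketch of such a naive Bayes review classifier, using scikit-learn's MultinomialNB over bag-of-words counts; the four training reviews are placeholders, not the paper's dataset.

    ```python
    # Sketch: naive Bayes sentiment classification of movie reviews.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    reviews = ["a wonderful, moving film", "dull plot and weak acting",
               "brilliant performances throughout", "a boring waste of time"]
    labels  = ["pos", "neg", "pos", "neg"]

    # Bag-of-words counts feed the multinomial naive Bayes model.
    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(reviews, labels)
    print(clf.predict(["moving and brilliant film"]))  # -> ['pos']
    ```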

    Sentiment Analysis on Tweets about Diabetes: An Aspect-Level Approach

    Get PDF
    In recent years, some sentiment analysis methods have been developed for the health domain; however, the diabetes domain has not been explored yet. In addition, there is a lack of approaches that analyze the positive or negative orientation of each aspect contained in a document (a review, a news article, or a tweet, among others). Based on this understanding, we propose an aspect-level sentiment analysis method based on ontologies in the diabetes domain. The sentiment of an aspect is calculated by considering the words around the aspect, which are obtained through N-gram methods (N-gram after, N-gram before, and N-gram around). To evaluate the effectiveness of our method, we obtained a corpus from Twitter, which was manually labelled at the aspect level as positive, negative, or neutral. The experimental results show that the best result was obtained with the N-gram around method, with a precision of 81.93%, a recall of 81.13%, and an F-measure of 81.24%.
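
    A minimal sketch of the "N-gram around" idea, assuming a toy polarity lexicon; the paper's method is ontology-based and more elaborate.

    ```python
    # Sketch: score the window of n tokens on each side of the aspect term
    # by summing lexicon polarities. Toy lexicon stands in for a real one.
    LEXICON = {"high": -1.0, "stable": 1.0, "scared": -1.0, "good": 1.0}

    def ngram_around(tokens, aspect, n=3):
        """Polarity of the n tokens before and after the aspect term."""
        i = tokens.index(aspect)
        window = tokens[max(0, i - n):i] + tokens[i + 1:i + 1 + n]
        score = sum(LEXICON.get(w, 0.0) for w in window)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    tweet = "my glucose is stable and good today".split()
    print(ngram_around(tweet, "glucose"))  # -> "positive"
    ```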

    Improving Knowledge Acquisition in Collaborative Knowledge Construction Tool with Virtual Catalyst

    Get PDF
    Noctua is a web tool to assist in knowledge acquisition and collaborative knowledge construction processes. Noctua has an innovation: a Virtual Catalyst designed to facilitate the task of eliciting and validating knowledge. The Virtual Catalyst queries participants, proposing new knowledge, seeking confirmation of knowledge already elicited, and showing conflicting opinions. It takes participants' profiles into account in order to automatically ask them questions related to each one's field of knowledge or interest. This paper presents Noctua and its Virtual Catalyst. The tool was evaluated experimentally, and the analysis of the results showed that the primary goal of increasing the rate of knowledge construction was achieved (up to a 144% increase in the rate of knowledge creation), along with some unexpected beneficial outcomes.

    Framework for collaborative knowledge management in organizations

    Get PDF
    Organizations are nowadays pushed to speed up the rate of industrial transformation towards high-value products and services. The capability to respond agilely to new market demands has become a strategic pillar for innovation, and knowledge management can support organizations in achieving that goal. However, current knowledge management approaches tend to be overly complex or too academic, with interfaces that are difficult to manage, even more so when cooperative handling is required. In an ideal framework, both tacit and explicit knowledge management should be addressed, so that knowledge is handled with precise and semantically meaningful definitions. Moreover, with the increase in Internet usage, the amount of available information has exploded, leading to the observed progress in mechanisms for retrieving useful knowledge from the huge number of existing information sources. However, the same knowledge representation of a thing can mean different things to different people and applications. Contributing in this direction, this thesis proposes a framework capable of gathering the knowledge held by domain experts and domain sources through a knowledge management system and transforming it into explicit ontologies. This enables the building of tools with advanced reasoning capabilities to support enterprise decision-making processes. The author also intends to address the problem of knowledge transfer within and among organizations, through a module (part of the proposed framework) for establishing a domain lexicon, whose purpose is to represent and unify the understanding of the semantics used in the domain.