2,116 research outputs found

    Exploring Latent Semantic Factors to Find Useful Product Reviews

    Full text link
    Online reviews provided by consumers are a valuable asset for e-Commerce platforms, influencing potential consumers in making purchasing decisions. However, these reviews are of varying quality, with the useful ones buried deep within a heap of non-informative reviews. In this work, we attempt to automatically identify review quality in terms of its helpfulness to end consumers. In contrast to previous work in this domain, which exploits a variety of syntactic and community-level features, we delve into the semantics of reviews to determine what makes them useful, and provide an interpretable explanation for it. We identify a set of consistency and semantic factors, all derived from the text, ratings, and timestamps of user-generated reviews, making our approach generalizable across communities and domains. We explore review semantics in terms of several latent factors, such as the expertise of a review's author, the author's judgment about the fine-grained facets of the underlying product, and the author's writing style. These are cast into a Hidden Markov Model-Latent Dirichlet Allocation (HMM-LDA) based model to jointly infer: (i) reviewer expertise, (ii) item facets, and (iii) review helpfulness. Large-scale experiments on five real-world datasets from Amazon show significant improvement over state-of-the-art baselines in predicting and ranking useful reviews.
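    The paper's joint HMM-LDA model is not reproduced here, but its facet-discovery component can be approximated with plain LDA. A minimal sketch using scikit-learn, where `reviews` is a hypothetical placeholder for a real Amazon review corpus:

        # Minimal sketch: recover latent item facets from review text with plain
        # LDA; this approximates only the facet part of the joint HMM-LDA model.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        reviews = [  # hypothetical placeholder data
            "battery life is great but the screen scratches easily",
            "screen is sharp and bright, battery drains fast",
            "fast shipping, the lens quality exceeded my expectations",
        ]

        # LDA operates on raw term counts, not TF-IDF weights.
        vectorizer = CountVectorizer(stop_words="english")
        counts = vectorizer.fit_transform(reviews)

        lda = LatentDirichletAllocation(n_components=3, random_state=0)
        doc_topics = lda.fit_transform(counts)  # per-review facet mixture

        # Print the top words of each facet for interpretability.
        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-4:][::-1]]
            print(f"facet {k}: {top}")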

    A survey on opinion summarization techniques for social media

    Get PDF
    The volume of data on social media is huge and keeps increasing. The need for efficient processing of this extensive information has resulted in growing research interest in knowledge engineering tasks such as Opinion Summarization. This survey presents the current opinion summarization challenges for social media, then the necessary pre-summarization steps such as preprocessing, feature extraction, noise elimination, and handling of synonym features. Next, it covers the various approaches used in opinion summarization, including Visualization, Abstractive, Aspect-based, Query-focused, Real-Time, and Update Summarization, and highlights other Opinion Summarization approaches such as Contrastive, Concept-based, Community Detection, Domain-Specific, Bilingual, Social Bookmarking, and Social Media Sampling. It also covers the different datasets used in opinion summarization and the future work suggested for each technique. Finally, it provides different ways of evaluating opinion summarization.

    Identifying Organizations' Intangible Internal Resources through Deep Learning and Employee Opinions

    Get PDF
    Thesis (M.S.) -- Seoul National University Graduate School: Department of Industrial Engineering, College of Engineering, February 2021. Advisor: 조성준. Intangible resources are non-physical firm resources that are critical to a firm's success. Among them, we refer to those that directly impact employee experience at work as intangible internal resources (IIR). We attempted to create a comprehensive list of IIR by applying a deep learning model to a large-scale company review dataset. We collected over 1.4 million company reviews written for S&P 500 firms from Glassdoor, one of the largest anonymous company rating and review websites. Since Glassdoor reviews represent the collective employee voice, we hypothesized that prominent topics from the collective voice would represent different types of IIR. By applying a deep learning model to the review data, we discovered 24 resource types, among which 15 types such as Atmosphere at Work, Coworkers, and Technological Resources aligned with frameworks from the past literature. We then implemented a keyword extraction model to identify each firm's unique characteristics regarding different IIR types. We believe firms could utilize our findings to better understand and manage their strategic resources.
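    Per the thesis chapters, an attention-based aspect extraction model (ABAE) discovers the IIR types, while TF-IDF surfaces firm-specific keywords. The TF-IDF step admits a minimal sketch; `firm_reviews` below is a hypothetical placeholder for per-firm concatenations of Glassdoor reviews:

        # Minimal sketch: per-firm keyword extraction with TF-IDF, treating the
        # concatenated reviews of each firm as one document.
        from sklearn.feature_extraction.text import TfidfVectorizer

        firm_reviews = {  # hypothetical placeholder data
            "FirmA": "friendly coworkers flexible hours outdated internal tools",
            "FirmB": "strong engineering culture long hours generous pay",
        }

        names = list(firm_reviews)
        vectorizer = TfidfVectorizer(stop_words="english")
        tfidf = vectorizer.fit_transform(firm_reviews[n] for n in names)
        terms = vectorizer.get_feature_names_out()

        # High TF-IDF terms are frequent at one firm but rare elsewhere, which
        # is the sense of "unique firm characteristics" used in the thesis.
        for name, row in zip(names, tfidf.toarray()):
            top = [terms[i] for i in row.argsort()[-3:][::-1]]
            print(name, top)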

    Making sense of text: artificial intelligence-enabled content analysis

    Get PDF
    Purpose: The purpose of this paper is to introduce, apply and compare how artificial intelligence (AI), and specifically the IBM Watson system, can be used for content analysis in marketing research relative to manual and computer-aided (non-AI) approaches to content analysis. Design/methodology/approach: To illustrate the use of AI-enabled content analysis, this paper examines the text of leadership speeches, content related to organizational brand. The process and results of using AI are compared to manual and computer-aided approaches using three performance factors for content analysis: reliability, validity and efficiency. Findings: Relative to manual and computer-aided approaches, AI-enabled content analysis provides clear advantages, with high reliability, high validity and moderate efficiency. Research limitations/implications: This paper offers three contributions. First, it highlights the continued importance of the content analysis research method, particularly with the explosive growth of natural language-based user-generated content. Second, it provides a road map for how to use AI-enabled content analysis. Third, it applies and compares AI-enabled content analysis to the manual and computer-aided approaches, using leadership speeches. Practical implications: For each of the three approaches, nine steps are outlined and described to allow for replicability of this study. The advantages and disadvantages of using AI for content analysis are discussed. Together these are intended to motivate and guide researchers to apply and develop AI-enabled content analysis for research in marketing and other disciplines. Originality/value: To the best of the authors' knowledge, this paper is among the first to introduce, apply and compare how AI can be used for content analysis.

    Opinions from Tweets as Good Indicators of Leadership and Followership Status

    Get PDF
    Scores of public opinion about two popular world leaders, collected from tweets based on the sentiment they exhibited, were classified using two machine learning techniques (Naïve Bayes and Support Vector Machines) and four feature types (words, unigrams, bigrams, and negation). We found that Naïve Bayes with unigram features attained an accuracy of up to 90%, indicating that tweets can be used to suggest potential candidates in political elections and ways to improve a leader's reputation.
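    A minimal sketch of the best-performing configuration reported above (Naïve Bayes over unigram counts); the tweets and labels are hypothetical placeholders:

        # Minimal sketch: Naive Bayes sentiment classification over unigram
        # features, the configuration the abstract reports as most accurate.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        tweets = ["great leadership today", "worst speech ever", "proud of our leader"]
        labels = ["pos", "neg", "pos"]  # hypothetical placeholder data

        # ngram_range=(1, 1) restricts features to unigrams.
        model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
        model.fit(tweets, labels)

        print(model.predict(["what a great leader"]))  # expected: ['pos']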

    Application of Common Sense Computing for the Development of a Novel Knowledge-Based Opinion Mining Engine

    Get PDF
    The ways people express their opinions and sentiments have radically changed in the past few years thanks to the advent of social networks, web communities, blogs, wikis and other online collaborative media. The distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organisation. These online social data, however, remain hardly accessible to computers, as they are specifically meant for human consumption. The automatic analysis of online opinions, in fact, involves a deep understanding of natural language text by machines, from which we are still very far. Hitherto, online information retrieval has been mainly based on algorithms relying on the textual representation of web pages. Such algorithms are very good at retrieving texts, splitting them into parts, checking the spelling and counting their words. But when it comes to interpreting sentences and extracting meaningful information, their capabilities are known to be very limited. Existing approaches to opinion mining and sentiment analysis, in particular, can be grouped into three main categories: keyword spotting, in which text is classified into categories based on the presence of fairly unambiguous affect words; lexical affinity, which assigns arbitrary words a probabilistic affinity for a particular emotion; and statistical methods, which calculate the valence of affective keywords and word co-occurrence frequencies on the basis of a large training corpus. Early works aimed to classify entire documents as containing overall positive or negative polarity, or to rate the scores of reviews. Such systems were mainly based on supervised approaches relying on manually labelled samples, such as movie or product reviews where the opinionist's overall positive or negative attitude was explicitly indicated. However, opinions and sentiments do not occur only at document level, nor are they limited to a single valence or target. Contrary or complementary attitudes toward the same topic, or toward multiple topics, can be present across the span of a document. In more recent works, text analysis granularity has been taken down to segment and sentence level, e.g., by using the presence of opinion-bearing lexical items (single words or n-grams) to detect subjective sentences, or by exploiting association rule mining for a feature-based analysis of product reviews. These approaches, however, are still far from being able to infer the cognitive and affective information associated with natural language, as they mainly rely on knowledge bases that are still too limited to efficiently process text at sentence level. In this thesis, common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions conveyed by them. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques on two common sense knowledge bases was exploited to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data.
The engine was tested on three different resources, namely a Twitter hashtag repository, a LiveJournal database and a PatientOpinion dataset, and its performance was compared both with results obtained using standard sentiment analysis techniques and with those obtained using different state-of-the-art knowledge bases such as Princeton's WordNet, MIT's ConceptNet and Microsoft's Probase. Unlike most currently available opinion mining services, the developed engine does not base its analysis on a limited set of affect words and their co-occurrence frequencies, but rather on common sense concepts and the cognitive and affective valence conveyed by them. This allows the engine to be domain-independent and, hence, to be embedded in any opinion mining system for the development of intelligent applications in multiple fields such as the Social Web, HCI and e-health. Looking ahead, the combined novel use of different knowledge bases and of common sense reasoning techniques for opinion mining proposed in this work will eventually pave the way for the development of more bio-inspired approaches to the design of natural language processing systems capable of handling knowledge, retrieving it when necessary, making analogies and learning from experience.
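    The dimensionality-reduction step over a common sense knowledge base can be illustrated with truncated SVD on a small concept-by-feature matrix; the matrix below is a hypothetical stand-in for an affective knowledge base, not the actual resource used in the thesis:

        # Minimal sketch: embed concepts from a (hypothetical) common sense
        # knowledge base into a low-dimensional affective space via truncated SVD.
        import numpy as np
        from sklearn.decomposition import TruncatedSVD

        concepts = ["birthday_party", "funeral", "ice_cream", "traffic_jam"]
        # Rows: concepts; columns: affective/semantic features
        # (joy, sadness, food, waiting), with placeholder strengths.
        matrix = np.array([
            [0.9, 0.0, 0.3, 0.0],
            [0.0, 0.9, 0.0, 0.1],
            [0.7, 0.0, 0.9, 0.0],
            [0.0, 0.6, 0.0, 0.9],
        ])

        svd = TruncatedSVD(n_components=2, random_state=0)
        embedding = svd.fit_transform(matrix)  # each concept becomes a 2-d vector

        # Nearby vectors indicate affectively similar concepts, which is the
        # property a concept-level engine can exploit for sentiment inference.
        for concept, vec in zip(concepts, embedding):
            print(concept, np.round(vec, 2))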

    A distributional and syntactic approach to fine-grained opinion mining

    Get PDF
    This thesis contributes to a larger social science research program of analyzing the diffusion of IT innovations. We show how to automatically discriminate portions of text dealing with opinions about innovations by finding {source, target, opinion} triples in text. In this context, we can discern a list of innovations as targets from the domain itself. We can then use this list as an anchor for finding the other two members of the triple at a "fine-grained" level: paragraph contexts or less. We first demonstrate a vector space model for finding opinionated contexts in which the innovation targets are mentioned. We can find paragraph-level contexts by searching for an "expresses-an-opinion-about" relation between sources and targets using a supervised model with an SVM that uses features derived from a general-purpose subjectivity lexicon and a corpus indexing tool. We show that our algorithm correctly filters the domain-relevant subset of subjectivity terms so that they are more highly valued. We then turn to identifying the opinion. Typically, opinions in opinion mining are taken to be positive or negative. We discuss a crowdsourcing technique developed to create the seed data describing human perception of opinion-bearing language needed for our supervised learning algorithm. Our user interface successfully limited the meta-subjectivity inherent in the task ("What is an opinion?") while reliably retrieving relevant opinionated words using labour that was not expert in the domain. Finally, we developed a new data structure and modeling technique for connecting targets with the correct within-sentence opinionated language. Syntactic relatedness tries (SRTs) contain all paths from a dependency graph of a sentence that connect a target expression to a candidate opinionated word. We use factor graphs to model how far a path through the SRT must be followed in order to connect the right targets to the right words. It turns out that we can correctly label significant portions of these tries with very rudimentary features, such as part-of-speech tags and dependency labels, with minimal processing. This technique uses the data from the crowdsourcing technique we developed as training data. We conclude by placing our work in the context of a larger sentiment classification pipeline and by describing a model for learning from the data structures produced by our work. This work contributes to computational linguistics by proposing and verifying new data-gathering techniques and applying recent developments in machine learning to inference over grammatical structures for highly subjective purposes. It applies a suffix tree-based data structure to model opinion in a specific domain by imposing a restriction on the order in which the data is stored in the structure.
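    The core ingredient of an SRT, a path through the dependency graph connecting a target expression to a candidate opinionated word, can be sketched with spaCy and networkx (both assumed available, with the en_core_web_sm model installed):

        # Minimal sketch: shortest dependency path between a target expression
        # and a candidate opinion word, the kind of path an SRT stores.
        import networkx as nx
        import spacy

        nlp = spacy.load("en_core_web_sm")  # assumes the model is downloaded
        doc = nlp("The new cloud platform is surprisingly reliable.")

        # Build an undirected graph over the dependency tree.
        graph = nx.Graph()
        for token in doc:
            graph.add_edge(token.i, token.head.i)

        target = next(t.i for t in doc if t.text == "platform")
        opinion = next(t.i for t in doc if t.text == "reliable")

        path = nx.shortest_path(graph, target, opinion)
        # Each step carries the rudimentary features the thesis found sufficient:
        # part-of-speech tags and dependency labels.
        print([(doc[i].text, doc[i].pos_, doc[i].dep_) for i in path])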

    Detecting Deception, Partisan, and Social Biases

    Full text link
    Thesis by compendium. Today, the political world has as much or more impact on society than society has on the political world. Political leaders, or representatives of political parties, use their power in the media to modify ideological positions and reach the people in order to gain popularity in government elections. Through deceptive language, political texts may contain partisan and social biases that undermine the perception of reality. As a result, harmful political polarization increases because the followers of an ideology, or members of a social category, see other groups as a threat or competition, ending in verbal and physical aggression with unfortunate outcomes. The Natural Language Processing (NLP) community has new contributions every day, with approaches that help detect hate speech, insults, offensive messages, and false information, among other computational tasks related to the social sciences. However, many obstacles prevent eradicating these problems, such as the difficulty of having annotated texts, the limitations of non-interdisciplinary approaches, and the challenge added by the necessity of interpretable solutions.
This thesis focuses on the detection of partisan and social biases, taking hyperpartisanship and stereotypes about immigrants as case studies. We propose a model based on a masking technique that can detect deceptive language in controversial and non-controversial topics, capturing patterns related to style and content. Moreover, we address the problem by evaluating BERT-based models, known to be effective at capturing semantic and syntactic patterns in the same representation. We compare these two approaches (the masking technique and the BERT-based models) in terms of their performance and the explainability of their decisions in the detection of hyperpartisanship in political news and of immigrant stereotypes. In order to identify immigrant stereotypes, we propose a new taxonomy supported by social psychology theory and annotate a dataset from partisan interventions in the Spanish parliament. Results show that our models can help study hyperpartisanship and identify different frames in which citizens and politicians perceive immigrants as victims, economic resources, or threats. Finally, this interdisciplinary research proves that immigrant stereotypes are used as a rhetorical strategy in political contexts. This PhD thesis was funded by the MISMIS-FAKEnHATE research project (PGC2018-096212-B-C31) of the Spanish Ministry of Science and Innovation. Sánchez Junquera, JJ. (2022). Detecting Deception, Partisan, and Social Biases [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/185784
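    A minimal sketch of the masking idea, under the assumption (consistent with text-distortion approaches) that all tokens outside the k most frequent words are replaced by a mask so a downstream classifier sees style rather than topic; all names and data below are hypothetical:

        # Minimal sketch of text masking for style-based deception detection:
        # every token outside the k most frequent words in the corpus is
        # replaced by a mask symbol, hiding topical content.
        from collections import Counter

        def build_vocab(texts, k=50):
            """Return the k most frequent tokens across the corpus."""
            counts = Counter(tok for t in texts for tok in t.lower().split())
            return {tok for tok, _ in counts.most_common(k)}

        def mask(text, vocab, symbol="*"):
            """Replace out-of-vocabulary tokens with a length-preserving mask."""
            return " ".join(
                tok if tok.lower() in vocab else symbol * len(tok)
                for tok in text.split()
            )

        texts = [
            "the economy collapsed because of them",
            "we believe the new policy may help families",
        ]  # hypothetical placeholder corpus
        vocab = build_vocab(texts, k=5)
        print(mask(texts[0], vocab))
        # Masked texts would then feed an ordinary classifier (e.g. an SVM over
        # character n-grams) to detect hyperpartisan or deceptive style.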