
    Japanese Sentiment Classification with Stacked Denoising Auto-Encoder using Distributed Word Representation

    Traditional sentiment classification methods often require polarity dictionaries or hand-crafted features for machine learning. However, those approaches incur high costs in building dictionaries and/or features, which hinders generalization across tasks. Examples include an approach that uses a polarity dictionary, which cannot handle unknown or newly invented words, and another that uses a complex model with 13 types of feature templates. We propose a novel high-performance sentiment classification method based on stacked denoising auto-encoders that uses distributed word representation instead of building dictionaries or engineering features. Experimental results indicate that our model achieves state-of-the-art performance on Japanese sentiment classification tasks.
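The pipeline the abstract describes (embed words as dense vectors, compose a sentence representation, classify its polarity) can be sketched in miniature. Everything below, the three-dimensional "embeddings" and the linear scorer standing in for the trained auto-encoder network, is a made-up illustration, not the authors' model.

```python
# Toy sketch: sentence polarity from averaged word vectors.
# The 3-dimensional "embeddings" and the classifier weights below are
# invented values for illustration, not learned parameters.

EMBEDDINGS = {
    "convenient":   [0.9, 0.1, 0.3],
    "inconvenient": [-0.8, 0.2, 0.1],
    "great":        [0.7, -0.2, 0.4],
    "design":       [0.1, 0.5, -0.1],
}

def sentence_vector(words):
    """Average the word vectors of a sentence (unknown words are skipped)."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def classify(vec, weights=(1.0, 0.0, 0.5), bias=0.0):
    """Linear scorer standing in for the stacked auto-encoder + classifier."""
    score = sum(w * x for w, x in zip(weights, vec)) + bias
    return "positive" if score > 0 else "negative"

print(classify(sentence_vector(["great", "design"])))        # prints "positive"
print(classify(sentence_vector(["inconvenient", "design"]))) # prints "negative"
```

Because the embeddings are dense and shared across sentences, a word never seen with a polarity label can still contribute a sensible signal if its vector lies near known words, which is the robustness to data sparseness the abstract claims.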


    As the popularity of social media continues to rise, increasing attention is being paid to review information. Reviews with positive/negative ratings, in particular, help (potential) customers to compare products and to make purchasing decisions. Consequently, automatically classifying the polarity (positive or negative) of reviews is extremely important. The general goal of this task is to classify input sentences or articles as positive or negative. The most basic feature for text classification in machine learning is the bag-of-words feature: for instance, the word "convenient" tends to acquire a positive weight, whereas the word "inconvenient" tends to acquire a negative one. However, the bag-of-words feature has a number of shortcomings. To begin with, building polarity dictionaries and hand-crafted classification rules requires substantial human labor, and the resulting resources cannot cope with the new words that emerge daily, which leads to a data sparseness problem. In addition, although a polynomial kernel is often used to capture n-ary relations among features, it does not learn well because the features are discrete. Furthermore, the bag-of-words feature cannot take syntactic structure into account. This leads to mistakes such as "a great design but inconvenient" and "inconvenient but a great design" being deemed to have the same meaning, even though their nuances differ: since the main clause carries the main opinion, the former is somewhat negative whereas the latter is slightly positive. To solve this syntactic problem, Nakagawa et al. proposed a sentiment analysis model that uses dependency trees with polarities assigned to their subtrees. However, that model requires specialized knowledge to design complicated dependency-based feature templates.
    In this study, we propose an approach that uses distributed word representation to overcome the first problem and deep neural networks to alleviate the second. The former is an unsupervised method capable of representing a word's meaning without hand-tagged resources such as a polarity dictionary, and it is robust to the data sparseness problem. The latter is a highly expressive model that requires neither complex feature engineering nor complex model design. We therefore tackle the sentiment classification task with distributed word representation and a stacked denoising auto-encoder, a kind of deep neural network. Distributed word representations, or word embeddings, represent words as vectors: not sparse vectors like the traditional 1-of-K representation, but dense vectors that express the meanings of the words themselves, so that semantically similar words lie close together. Because word meaning strongly influences polarity, distributed representation is more suitable for sentiment classification than the 1-of-K representation, which treats words merely as symbols. To learn the word embeddings, we employ word2vec (Mikolov et al.'s Skip-gram and CBOW models), which is among the most computationally efficient word embedding techniques and is considered state-of-the-art. A stacked denoising auto-encoder (SdA) is a deep neural network that extends a stacked auto-encoder with denoising auto-encoders (dA). Stacking multiple layers and adding noise to the input layer gives auto-encoders high generalization ability, and the features extracted by the stacked dAs become more abstract as the layers deepen. This method has demonstrated high representation ability in speech recognition, image processing, and domain adaptation. Our research makes the following two main contributions:
    • We show that distributed word representation learned from a large-scale corpus, combined with a stacked denoising auto-encoder with multiple layers (three or more), contributes significantly to classification accuracy in sentiment classification tasks.
    • We achieve state-of-the-art accuracy on a Japanese sentiment classification task without designing complex features or models.
    Tokyo Metropolitan University, 2016-03-25, Master of Engineering.
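The denoising auto-encoder that the SdA stacks can be sketched as: corrupt the input, encode it into a hidden representation, then decode back to a reconstruction of the clean input. The tiny pure-Python forward pass below is an illustrative assumption (tied weights, sigmoid units, masking noise), not the authors' implementation, and it omits training.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def corrupt(x, rate, rng):
    """Masking noise: zero out each input unit with probability `rate`."""
    return [0.0 if rng.random() < rate else v for v in x]

def dA_forward(x, W, b_hidden, b_out, rate=0.3, rng=None):
    """One denoising auto-encoder pass: corrupt -> encode -> decode.
    Uses tied weights (the decoder reuses W transposed), a common choice."""
    rng = rng or random.Random(0)
    x_noisy = corrupt(x, rate, rng)
    hidden = [sigmoid(sum(W[j][i] * x_noisy[i] for i in range(len(x))) + b_hidden[j])
              for j in range(len(W))]
    recon = [sigmoid(sum(W[j][i] * hidden[j] for j in range(len(W))) + b_out[i])
             for i in range(len(x))]
    return hidden, recon
```

Training would minimize the reconstruction error between `recon` and the clean `x`; stacking means using the `hidden` vector of one trained dA as the input of the next, which is how the deeper layers come to extract more abstract features.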