Basic tasks of sentiment analysis
Subjectivity detection is the task of distinguishing subjective sentences from objective ones. Objective sentences are those that do not express any sentiment, so a sentiment analysis engine should identify and filter them out, passing only the subjective sentences on for further analysis, e.g., polarity detection. Subjective sentences may express opinions on one or more topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service that the opinion holder is praising or complaining about.
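The pipeline described above (set objective sentences aside, then analyze the subjective ones) can be sketched with a toy lexicon-based filter. The cue-word list and the rule itself are purely illustrative assumptions, not a real subjectivity classifier:

```python
# Minimal lexicon-based subjectivity filter (illustrative only; practical
# systems use trained classifiers). The tiny cue lexicon below is hypothetical.
SUBJECTIVE_CUES = {"great", "terrible", "love", "hate", "disappointing", "amazing"}

def is_subjective(sentence: str) -> bool:
    """Flag a sentence as subjective if it contains any opinion cue word."""
    tokens = {t.strip(".,!?").lower() for t in sentence.split()}
    return bool(tokens & SUBJECTIVE_CUES)

sentences = [
    "The phone has a 6.1-inch screen.",   # objective: facts only
    "The battery life is terrible.",      # subjective: opinion on an aspect
]
subjective = [s for s in sentences if is_subjective(s)]
print(subjective)  # only the opinionated sentence survives the filter
```

In a real engine this filter would be a learned model, but the control flow is the same: only the sentences that survive it reach polarity detection.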
A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network
In this work, we address the problem of modeling all the nodes (words or phrases) in a dependency tree with dense representations. We propose a recursive convolutional neural network (RCNN) architecture to capture the syntactic and compositional-semantic representations of phrases and words in a dependency tree. Unlike the original recursive neural network, we introduce convolution and pooling layers, which can model a variety of compositions via feature maps and select the most informative compositions via pooling. Based on the RCNN, we use a discriminative model to re-rank a k-best list of candidate dependency parse trees. Experiments show that the RCNN is highly effective at improving state-of-the-art dependency parsing on both English and Chinese datasets.
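The re-ranking step can be illustrated with a toy sketch: given a k-best list from a base parser, mix each candidate's base log-probability with a neural score and return the argmax. The candidate scores and the mixing weight `alpha` below are invented for illustration and are not taken from the paper:

```python
# Toy re-ranker over a k-best list of candidate parses. Each candidate pairs
# the base parser's log-probability with a (hypothetical) RCNN score; the
# re-ranker mixes the two signals and keeps the highest-scoring tree.
def rerank(candidates, alpha=0.5):
    """candidates: list of (tree_id, base_logprob, rcnn_score) tuples."""
    def mixed(c):
        _, base, rcnn = c
        return alpha * base + (1 - alpha) * rcnn
    return max(candidates, key=mixed)[0]

kbest = [
    ("tree_a", -2.1, 0.40),  # the parser's 1-best, mediocre RCNN score
    ("tree_b", -2.3, 0.95),  # slightly worse base score, much better RCNN score
    ("tree_c", -3.0, 0.50),
]
print(rerank(kbest))  # -> tree_b
```

The point of re-ranking is visible here: the base parser's 1-best (`tree_a`) is overturned when the neural score strongly prefers another candidate.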
From Paraphrase Database to Compositional Paraphrase Model and Back
The Paraphrase Database (PPDB; Ganitkevitch et al., 2013) is an extensive
semantic resource, consisting of a list of phrase pairs with (heuristic)
confidence estimates. However, it is still unclear how it can best be used, due
to the heuristic nature of the confidences and its necessarily incomplete
coverage. We propose models to leverage the phrase pairs from the PPDB to build
parametric paraphrase models that score paraphrase pairs more accurately than
the PPDB's internal scores while simultaneously improving its coverage. They
allow for learning phrase embeddings as well as improved word embeddings.
Moreover, we introduce two new, manually annotated datasets to evaluate
short-phrase paraphrasing models. Using our paraphrase model trained on PPDB, we achieve state-of-the-art results on standard word and bigram similarity tasks and beat strong baselines on our new short-phrase paraphrase tasks.
Comment: 2015 TACL paper, updated with an appendix describing new 300-dimensional embeddings. Submitted 1/2015. Accepted 2/2015. Published 6/2015.
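The core scoring idea can be sketched in a few lines: represent a phrase as the average of its word embeddings and score a paraphrase pair by cosine similarity. The 3-dimensional embeddings below are made up for illustration; the paper learns real embeddings from PPDB phrase pairs:

```python
import math

# Sketch of an embedding-based paraphrase scorer: a phrase vector is the
# average of its word vectors, and a pair is scored by cosine similarity.
# The toy 3-d embeddings are invented for this example.
EMB = {
    "car":  [0.9, 0.1, 0.0],
    "auto": [0.85, 0.15, 0.05],
    "the":  [0.1, 0.1, 0.1],
    "dog":  [0.0, 0.9, 0.2],
}

def phrase_vec(phrase):
    vecs = [EMB[w] for w in phrase.split()]
    return [sum(d) / len(vecs) for d in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# a true paraphrase pair should outscore an unrelated pair
print(cosine(phrase_vec("the car"), phrase_vec("the auto")) >
      cosine(phrase_vec("the car"), phrase_vec("the dog")))  # -> True
```

Training then amounts to adjusting the embeddings so that PPDB-attested pairs score higher than negative pairs, which is what lets the learned scores outperform PPDB's heuristic confidences.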
Learning and Analysis of Neural Network-Based Sentence Representations Using Syntax
Thesis (Ph.D.) -- Seoul National University: College of Engineering, Dept. of Computer Science and Engineering, 2021.8.
Syntax is a theory in linguistics that deals with the principles underlying the composition of sentences. As this theoretical framework provides formal instructions regarding the procedure of constructing a sentence from its constituents, it has been considered a valuable reference in sentence representation learning, whose objective is to discover an approach for transforming a sentence into a vector that illustrates its meaning in a computationally tractable manner.
This dissertation provides two particular perspectives on harmonizing syntax with neural sentence representation models, especially focusing on constituency grammar. We first propose two methods for enriching the quality of sentence embeddings by exploiting syntactic knowledge either represented as explicit parse trees or implicitly stored in neural models. Second, we regard syntactic formalism as a lens through which we reveal the inner workings of pre-trained language models, which are state-of-the-art in sentence representation learning. With a series of demonstrations in practical scenarios, we show that syntax is useful even in the neural era, where models trained on huge corpora in an end-to-end manner are prevalent, functioning as either (i) a source of inductive biases that facilitate fast and effective learning of such models or (ii) an analytic tool that increases the interpretability of black-box models.
Chapter 1 Introduction 1
1.1 Dissertation Outline 5
1.2 Related Publications 6
Chapter 2 Background 8
2.1 Introduction to Syntax 8
2.2 Neural Networks for Sentence Representations 10
2.2.1 Recursive Neural Network 11
2.2.2 Transformer 12
2.2.3 Pre-trained Language Models 14
2.3 Related Literature 16
2.3.1 Sentence Representation Learning 16
2.3.2 Probing Methods for Neural NLP Models 17
2.3.3 Grammar Induction and Unsupervised Parsing 18
Chapter 3 Sentence Representation Learning with Explicit Syntactic Structure 19
3.1 Introduction 19
3.2 Related Work 21
3.3 Method 23
3.3.1 Tree-LSTM 24
3.3.2 Structure-aware Tag Representation 25
3.3.3 Leaf-LSTM 28
3.3.4 SATA Tree-LSTM 29
3.4 Experiments 31
3.4.1 General Configurations 31
3.4.2 Sentence Classification Tasks 32
3.4.3 Natural Language Inference 35
3.5 Analysis 36
3.5.1 Ablation Study 36
3.5.2 Representation Visualization 38
3.6 Limitations and Future Work 39
3.7 Summary 40
Chapter 4 Sentence Representation Learning with Implicit Syntactic Knowledge 41
4.1 Introduction 41
4.2 Related Work 44
4.3 Method 46
4.3.1 Contrastive Learning with Self-Guidance 47
4.3.2 Learning Objective Optimization 50
4.4 Experiments 52
4.4.1 General Configurations 52
4.4.2 Semantic Textual Similarity Tasks 53
4.4.3 Multilingual STS Tasks 58
4.4.4 SentEval Benchmark 59
4.5 Analysis 60
4.5.1 Ablation Study 60
4.5.2 Robustness to Domain Shifts 61
4.5.3 Computational Efficiency 62
4.5.4 Representation Visualization 63
4.6 Limitations and Future Work 63
4.7 Summary 65
Chapter 5 Syntactic Analysis of Sentence Representation Models 66
5.1 Introduction 66
5.2 Related Work 68
5.3 Motivation 70
5.4 Method 72
5.4.1 CPE-PLM 72
5.4.2 Top-down CPE-PLM 73
5.4.3 Pre-trained Language Models 74
5.4.4 Distance Measure Functions 76
5.4.5 Injecting Bias into Syntactic Distances 77
5.5 Experiments 78
5.5.1 General Configurations 78
5.5.2 Experimental Results on PTB 80
5.5.3 Experimental Results on MNLI 83
5.6 Analysis 85
5.6.1 Performance Comparison by Layer 85
5.6.2 Estimating the Upper Limit of Distance Measure Functions 86
5.6.3 Constituency Tree Examples 88
5.7 Summary 93
Chapter 6 Multilingual Syntactic Analysis with Enhanced Techniques 94
6.1 Introduction 94
6.2 Related Work 96
6.3 Method 97
6.3.1 Chart-based CPE-PLM 97
6.3.2 Top-K Ensemble for CPE-PLM 100
6.4 Experiments 100
6.4.1 General Configurations 100
6.4.2 Experiments on Monolingual Settings 102
6.4.3 Experiments on Multilingual Settings 103
6.5 Analysis 106
6.5.1 Factor Correlation Analysis 108
6.5.2 Visualization of Attention Heads 108
6.5.3 Recall Scores on Noun and Verb Phrases 109
6.6 Limitations and Future Work 110
6.7 Summary 111
Chapter 7 Conclusion 112
Bibliography 116
Abstract (Korean) 138
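The top-down parse-extraction idea covered in Chapter 5 (CPE-PLM) can be sketched independently of any real language model: given a "syntactic distance" between each pair of adjacent words, recursively split the sentence at the largest distance to induce a binary constituency tree. The distances below are hand-set for illustration; in the method itself they are derived from a pre-trained LM's hidden states:

```python
# Toy top-down tree induction from syntactic distances. dists[i] is the
# (assumed) syntactic distance between words[i] and words[i + 1]; the
# sentence is split where adjacent words are farthest apart.
def build_tree(words, dists):
    if len(words) == 1:
        return words[0]
    i = max(range(len(dists)), key=dists.__getitem__)  # largest gap
    return (build_tree(words[:i + 1], dists[:i]),
            build_tree(words[i + 1:], dists[i + 1:]))

words = ["the", "cat", "sat", "down"]
dists = [0.2, 0.9, 0.3]  # biggest gap between "cat" and "sat"
print(build_tree(words, dists))  # -> (('the', 'cat'), ('sat', 'down'))
```

The sketch recovers the expected subject/predicate split, which is exactly the behavior such analyses probe for: if distances computed from a pre-trained model yield trees matching treebank annotations, the model has implicitly learned constituency structure.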
Syntax-Aware Multi-Sense Word Embeddings for Deep Compositional Models of Meaning
Deep compositional models of meaning, which act on distributional representations of words to produce vectors for larger text constituents, are evolving into a popular area of NLP research. We detail a compositional distributional framework based on a rich form of word embeddings that aims at facilitating the interactions between words in the context of a sentence. Embeddings and composition layers are jointly learned against a generic objective that enhances the vectors with syntactic information from the surrounding context. Furthermore, each word is associated with a number of senses, the most plausible of which is selected dynamically during the composition process. We evaluate the produced vectors qualitatively and quantitatively, with positive results. At the sentence level, the effectiveness of the framework is demonstrated on the MSRPar task, for which we report results within the state-of-the-art range.
Comment: Accepted for presentation at EMNLP 2015.
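The dynamic sense-selection step can be sketched as follows: each word stores several sense vectors, and composition picks the sense most similar to a context vector. All vectors, and the simplification of building the context from each context word's first sense, are invented for illustration:

```python
# Sketch of dynamic sense selection during composition. Each word carries a
# list of sense vectors; the sense with the highest dot product against the
# context vector wins. The 2-d sense vectors are made up for this example.
SENSES = {
    "bank":  [[0.9, 0.1], [0.1, 0.9]],  # sense 0: finance, sense 1: river
    "money": [[1.0, 0.0]],
    "river": [[0.0, 1.0]],
}

def select_sense(word, context):
    # context vector: average of each context word's first (default) sense
    ctx = [sum(SENSES[w][0][d] for w in context) / len(context) for d in (0, 1)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    scores = [dot(s, ctx) for s in SENSES[word]]
    return scores.index(max(scores))

print(select_sense("bank", ["money"]))  # -> 0 (finance sense)
print(select_sense("bank", ["river"]))  # -> 1 (river sense)
```

Only the selected sense vector then participates in composition, so the same surface word contributes different meanings in different sentences.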
- โฆ