Syntactic parsing, the process of obtaining the internal structure of
sentences in natural languages, is a crucial task for artificial intelligence
applications that need to extract meaning from natural language text or speech.
Sentiment analysis is one application for which parsing has recently proven
useful.
In recent years, there have been significant advances in the accuracy of
parsing algorithms. In this article, we perform an empirical, task-oriented
evaluation to determine how parsing accuracy influences the performance of a
state-of-the-art rule-based sentiment analysis system that determines the
polarity of sentences from their parse trees. In particular, we evaluate the
system using four well-known dependency parsers, including both current models
with state-of-the-art accuracy and less accurate models that, in exchange,
require fewer computational resources.
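To make the setting concrete, the following is a minimal sketch (not the system evaluated in the article) of how a rule-based approach can derive sentence polarity from a dependency tree: word polarities from a toy lexicon are propagated bottom-up, with simple rules for negation and intensification. The lexicon, rule set, and tree representation are all illustrative assumptions.

```python
# Toy lexicon and modifier rules; a real system would use far richer resources.
POLARITY = {"good": 1.0, "great": 1.0, "bad": -1.0, "terrible": -1.0}
NEGATORS = {"not", "never"}
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}

class Node:
    """A node in a (hypothetical) dependency tree: a word and its dependents."""
    def __init__(self, word, children=()):
        self.word = word
        self.children = list(children)

def polarity(node):
    """Compute polarity bottom-up: sum child scores, then apply modifiers."""
    score = POLARITY.get(node.word, 0.0)
    for child in node.children:
        score += polarity(child)
    # Modifier children adjust the head's accumulated score.
    for child in node.children:
        if child.word in INTENSIFIERS:
            score *= INTENSIFIERS[child.word]
        if child.word in NEGATORS:
            score = -score
    return score

# "The movie is not very good": head "good" with dependents "not" and "very".
tree = Node("is", [Node("movie", [Node("the")]),
                   Node("good", [Node("not"), Node("very")])])
print(polarity(tree))  # -> -1.5 (negative polarity)
```

Because the rules operate on head-dependent relations rather than on the word sequence, the quality of the parse tree is what a task-oriented evaluation such as this one puts to the test.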
The experiments show that all of the parsers produce similarly good results
in the sentiment analysis task, without their accuracy having any relevant
influence on the results. Since parsing currently carries a relatively high
computational cost that varies widely across algorithms, this suggests that
sentiment analysis researchers and users should prioritize speed over accuracy
when choosing a parser, and that parsing researchers should investigate
models that improve speed further, even at some cost to accuracy.

Comment: 19 pages. Accepted for publication in Artificial Intelligence Review.