Data mining, machine learning, and natural language processing are powerful
techniques that can be used together to extract information from large text collections.
Depending on the task or problem at hand, many different approaches can be used,
and the available methods are continuously being improved. However, not all of
these methods have been tested and compared on a common set of problems that can
be solved with supervised machine learning algorithms. What happens to the
quality of these methods if we increase the training data size from, say,
100 MB to over 1 GB? Are the quality gains worth it when the rate of data
processing diminishes? Can we trade quality for time efficiency and recover the
quality loss simply by being able to process more data? We attempt to answer
these questions in a general way for text processing tasks, considering the
trade-offs among training data size, learning time, and the quality obtained.
We propose a performance trade-off framework and apply it
to three important text processing problems: Named Entity Recognition,
Sentiment Analysis, and Document Classification. These problems were also chosen
because they have different levels of object granularity: words, paragraphs,
and documents. For each problem, we selected several supervised machine
learning algorithms and evaluated their trade-offs on large, publicly
available data sets (news, reviews, patents). To explore these trade-offs, we
use data subsets of increasing size, ranging from 50 MB to several GB.
We also consider the impact of the data set and the evaluation technique. We
find that the results do not change significantly and that, most of the time,
the best algorithm is also the fastest. However, we also show that the results
for small data (say, less than 100 MB) differ from those for big data, and in
those cases the best algorithm is much harder to determine.
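
To illustrate the kind of measurement the framework relies on, the following
minimal Python sketch (not the paper's actual code) trains a few scikit-learn
classifiers for document classification on training sets of increasing size and
records training time alongside test accuracy. The 20 Newsgroups corpus, the
particular classifiers, and the subset sizes are all assumptions chosen for
illustration; the paper's experiments use much larger news, review, and patent
collections.

    # Illustrative sketch: quality/time trade-off of supervised text
    # classifiers as the training set grows. All dataset and model choices
    # here are stand-ins, not the paper's experimental setup.
    import time

    from sklearn.datasets import fetch_20newsgroups  # downloads on first use
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC

    train = fetch_20newsgroups(subset="train")
    test = fetch_20newsgroups(subset="test")

    vectorizer = TfidfVectorizer(max_features=50_000)
    X_train = vectorizer.fit_transform(train.data)
    X_test = vectorizer.transform(test.data)

    classifiers = {
        "naive_bayes": MultinomialNB(),
        "linear_svm": LinearSVC(),
        "logreg": LogisticRegression(max_iter=1000),
    }

    # Grow the training set and record (training time, accuracy) per algorithm.
    for n in (1_000, 4_000, len(train.data)):
        for name, clf in classifiers.items():
            t0 = time.perf_counter()
            clf.fit(X_train[:n], train.target[:n])
            elapsed = time.perf_counter() - t0
            acc = accuracy_score(test.target, clf.predict(X_test))
            print(f"{name:12s} n={n:6d}  fit={elapsed:6.2f}s  acc={acc:.3f}")

Plotting accuracy against training time across the subset sizes yields the
quality/time trade-off curves of the kind the framework compares.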