When assessing relations between argumentative units (e.g., support or
attack), computational systems often exploit disclosing indicators or markers
that are not part of the elementary argumentative units (EAUs) themselves but are
derived from their context (position in the paragraph, preceding tokens, etc.). We
show that this dependency is much stronger than previously assumed. In fact, we
show that by completely masking the EAU text spans and feeding only information
from their context, a competitive system may perform even better. We argue
that an argument analysis system that relies more on discourse context than on the
argument's content is unsafe, since it can easily be tricked. To alleviate this
issue, we separate argumentative units from their context such that the system
is forced to model and rely on an EAU's content. We show that the resulting
classification system is more robust, and argue that such models are better
suited for predicting argumentative relations across documents.

Comment: accepted at 6th Workshop on Argument Mining
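To make the context-only setup described above concrete, the following is a minimal sketch, not taken from the paper: EAU spans are masked out and a simple bag-of-words classifier is trained on the remaining context alone. The bracketed span format, the toy examples, and the scikit-learn pipeline are assumptions made purely for illustration.

    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy relation instances (hypothetical): each pairs a sentence whose two
    # EAUs are marked with brackets with a support/attack label.
    examples = [
        ("Therefore, [taxes should rise] because [schools are underfunded].", "support"),
        ("However, [the plan is costly], although [it may create jobs].", "attack"),
    ]

    def mask_eaus(text):
        # Replace every bracketed EAU span with a placeholder token, keeping
        # only the surrounding discourse context (connectives, punctuation, position).
        return re.sub(r"\[[^\]]*\]", "EAU", text)

    texts = [mask_eaus(t) for t, _ in examples]
    labels = [y for _, y in examples]

    # Context-only baseline: bag-of-words features over the masked text.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    clf = LogisticRegression().fit(X, labels)

    print(clf.predict(vectorizer.transform([mask_eaus("Thus, [X] because [Y].")])))

A system that scores well in this masked setting is, by construction, relying on discourse context rather than on the EAUs' content, which is exactly the dependency the abstract argues is unsafe.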