Recent advances in distributed language modeling have led to substantial
performance gains on a variety of natural language processing (NLP) tasks.
However, it is not well understood how these methods may be augmented by
knowledge-based approaches. This paper compares the performance and internal
representations of an Enhanced Sequential Inference Model (ESIM) across three
experimental conditions defined by the representation method: Bidirectional
Encoder Representations from Transformers (BERT), Embeddings of Semantic
Predications (ESP), or Cui2Vec. The methods were evaluated on the Medical
Natural Language Inference (MedNLI) subtask of the MEDIQA 2019 shared task.
This task relied heavily on semantic understanding and thus served as a
suitable evaluation set for comparing these representation methods.