For artificial neural networks to begin accurately mimicking biological ones, they must be able to adapt to new demands without forgetting what they have learned from previous training. Lifelong learning approaches to artificial neural networks strive toward this goal, yet have not progressed far enough to be realistically deployed for natural language processing tasks. The proverbial roadblock of catastrophic forgetting still stands between researchers and an adequate lifelong learning model. While efforts are being made to quell catastrophic forgetting, little research has examined the importance of class ordering when training on new classes in incremental learning. This is surprising, as the order in which humans learn "classes" is carefully managed and highly consequential.
While heuristics for developing an ideal class order have been researched, this paper examines class ordering through the lens of priming as a scheme for incremental class learning. By examining the connections between various methods of priming observed in humans and how those methods are mimicked, yet remain unexplained, in lifelong machine learning, this paper provides a better understanding of the similarities between biological and synthetic systems while simultaneously improving current practices for combating catastrophic forgetting. By merging psychological priming practices with class ordering, this paper identifies a generalizable method for class ordering in NLP incremental learning tasks that consistently outperforms
random class ordering.

Comment: Accepted to IEEE International Conference on Semantic Computing (ICSC) 202