A dialogue state tracker is a key component of dialogue systems that estimates a belief over possible user goals at each dialogue turn. Deep
learning approaches using recurrent neural networks have shown state-of-the-art
performance on the task of dialogue state tracking. However, these
approaches generally assume a predefined candidate list and struggle to predict
dialogue state values that are not seen during training. This makes it
infeasible to extend the candidate list for a slot without retraining the model,
and it also limits modelling for low-resource domains, where training data for
slot values is expensive to obtain. In this paper, we propose a novel dialogue state tracker
based on a copying mechanism that can effectively track such unseen slot values
without compromising performance on slot values seen during training. The
proposed model is also flexible in extending the candidate list without
requiring any retraining or change in the model. We evaluate the proposed model
on various benchmark datasets (DSTC2, DSTC3 and WoZ2.0) and show that our
approach outperforms other end-to-end data-driven approaches in tracking unseen
slot values and also provides significant advantages for DST modelling in
low-resource domains.
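The key property claimed above, tracking slot values that were never seen during training, can be illustrated with a toy sketch. All names and vectors below are hypothetical, not the paper's actual model: the idea is that a copy-style tracker scores candidate values by matching an utterance representation against each candidate's representation, rather than with a softmax over a fixed output vocabulary, so a new value can simply be appended to the candidate list at inference time.

```python
import math

def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def score_candidates(utterance_vec, candidate_vecs):
    """Return a distribution over candidate slot values by similarity matching.

    Because the output dimension is the (variable) number of candidates,
    not a fixed vocabulary, candidates can be added without retraining.
    """
    return softmax([dot(utterance_vec, c) for c in candidate_vecs])

# Toy vectors standing in for learned encodings (hypothetical values).
utterance = [0.9, 0.1]          # e.g. encodes "I want Thai food"
candidates = {
    "italian": [0.1, 0.9],      # value seen during training
    "thai":    [1.0, 0.0],      # value added to the list after training
}
probs = score_candidates(utterance, list(candidates.values()))
best = max(zip(candidates, probs), key=lambda kv: kv[1])[0]
```

In this sketch `best` resolves to `"thai"` even though that value was notionally absent from training, because only its representation, not a dedicated output unit, is needed to score it.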