Although deep neural networks (DNNs) are increasingly applied to choice analysis,
it remains challenging to reconcile domain-specific behavioral knowledge with
general-purpose DNNs, to improve their interpretability and predictive power,
and to identify effective regularization methods for specific tasks. This study
designs a particular DNN architecture with alternative-specific utility
functions (ASU-DNN) by using prior behavioral knowledge. Unlike a fully
connected DNN (F-DNN), which computes the utility value of an alternative k by
using the attributes of all the alternatives, ASU-DNN computes it by using only
k's own attributes. Theoretically, ASU-DNN can dramatically reduce estimation
error relative to F-DNN because of its lighter architecture and sparser
connectivity. Empirically, ASU-DNN achieves 2-3% higher prediction accuracy than
F-DNN over the whole hyperparameter space on a private dataset collected in
Singapore and on a public dataset from the R mlogit package. The
alternative-specific connectivity constraint, as a domain-knowledge-based
regularization method, is more effective than the most popular general-purpose
explicit and implicit regularization methods and architectural hyperparameters.
ASU-DNN is also more interpretable because it provides a more regular
substitution pattern of travel mode choices than F-DNN does. The comparison
between ASU-DNN and F-DNN can also aid in testing behavioral knowledge. Our
results reveal that individuals are more likely to compute the utility of an
alternative by using only its own attributes, supporting the long-standing
practice in choice modeling. Overall, this study demonstrates that prior
behavioral knowledge can be used to guide DNN architecture design, to serve as
an effective domain-knowledge-based regularization method, and to improve both
the interpretability and predictive power of DNNs in choice analysis.
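The connectivity contrast between the two architectures can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the paper's implementation: the layer sizes (3 alternatives, 4 attributes, 8 hidden units), the single hidden layer, and the random weights are all hypothetical. It shows the defining property of ASU-DNN: the utility of alternative k depends only on x[k], whereas the fully connected F-DNN mixes the attributes of all alternatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(u):
    # Choice probabilities from utilities (both architectures end this way).
    e = np.exp(u - u.max())
    return e / e.sum()

# Hypothetical sizes: K alternatives, D attributes each, H hidden units.
K, D, H = 3, 4, 8
x = rng.normal(size=(K, D))  # x[k] = attributes of alternative k

# F-DNN: one network maps the CONCATENATED attributes of all
# alternatives to the K utility values, so U_k depends on every x[j].
W1_f = rng.normal(size=(K * D, H))
W2_f = rng.normal(size=(H, K))
util_fdnn = relu(x.reshape(-1) @ W1_f) @ W2_f  # shape (K,)

# ASU-DNN: alternative k has its own sub-network that sees only x[k];
# perturbing another alternative's attributes cannot change U_k.
W1_a = rng.normal(size=(K, D, H))
W2_a = rng.normal(size=(K, H))
util_asu = np.array([relu(x[k] @ W1_a[k]) @ W2_a[k] for k in range(K)])

p_asu = softmax(util_asu)  # choice probabilities, sums to 1
```

The sparser connectivity is visible in the parameter counts: the F-DNN first layer has K*D*H weights feeding every hidden unit from every attribute, while ASU-DNN splits them into K disjoint D*H blocks, which is the alternative-specific connectivity constraint acting as regularization.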