In classical Machine Learning, "overfitting" is the phenomenon whereby a model learns the training data too well and consequently performs poorly on unseen data. A commonly employed countermeasure is the so-called "dropout" technique, which prevents computational units from becoming too specialized, hence reducing the risk of overfitting.
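As a point of reference, the sketch below illustrates classical (inverted) dropout on a toy layer of activations; the function name and the toy data are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop, training=True):
    """Inverted dropout: randomly zero each unit with probability p_drop.

    Surviving activations are rescaled by 1/(1 - p_drop), so the expected
    value of each unit is unchanged and the layer can be used at inference
    time with dropout simply switched off.
    """
    if not training or p_drop == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p_drop  # keep with prob 1 - p_drop
    return activations * mask / (1.0 - p_drop)

# Example: a hidden-layer output of 8 units with 30% dropout.
h = np.ones(8)
print(dropout(h, p_drop=0.3))
```

Because the surviving units cannot rely on any fixed subset of their peers, no unit can become overly specialized, which is the mechanism referred to above.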
With the advent of Quantum Neural Networks as learning models, overfitting might soon become an issue, owing to the increasing depth of quantum circuits as well as the multiple embeddings of classical features that are employed to provide the computational nonlinearity.
Here we present a generalized approach to applying the dropout technique in Quantum Neural Network models, defining and analysing different quantum dropout strategies to avoid overfitting and achieve a high level of generalization.
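By way of illustration, one plausible way to translate dropout to a quantum circuit is to randomly skip individual trainable gates at each optimization step. The PennyLane sketch below is a minimal rendering of this idea under our own assumptions (circuit layout, gate choices, and the helper sample_gate_mask are illustrative), not the exact strategies defined in the paper.

```python
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)
rng = np.random.default_rng(42)

def sample_gate_mask(p_drop):
    """Draw one keep/drop decision per trainable rotation; redrawn each step."""
    return rng.random((n_layers, n_qubits)) >= p_drop

@qml.qnode(dev)
def qnn(x, weights, keep):
    for layer in range(n_layers):
        # Re-embedding the classical features in every layer is one way
        # to introduce nonlinearity into the model.
        for q in range(n_qubits):
            qml.RY(x[q % len(x)], wires=q)
            if keep[layer, q]:  # quantum dropout: skip the rotation if dropped
                qml.RZ(weights[layer, q], wires=q)
        # Entangling block (left untouched in this particular sketch).
        for q in range(n_qubits - 1):
            qml.CNOT(wires=[q, q + 1])
    return qml.expval(qml.PauliZ(0))

weights = rng.normal(size=(n_layers, n_qubits))
x = np.array([0.1, 0.5])
keep = sample_gate_mask(p_drop=0.2)  # resample at every optimization step
print(qnn(x, weights, keep))
```

Other strategies could instead target the entangling gates or whole layers; which gates are dropped, and with what probability, is precisely the design space analysed in the paper.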
Our study demonstrates the power of quantum dropout in enabling generalization, providing useful guidelines for determining the maximal dropout probability for a given model based on overparametrization theory. It also highlights that quantum dropout does not impact the characteristic features of the Quantum Neural Network model, such as expressibility and entanglement.
All these conclusions are supported by extensive numerical simulations and may pave the way to efficiently employing deep Quantum Machine Learning models based on state-of-the-art Quantum Neural Networks.