Quantum re-uploading models have been extensively investigated as a form of
machine learning within the context of variational quantum algorithms. Their
trainability and expressivity are not yet fully understood and are critical to
their performance. In this work, we address trainability through the lens of
the magnitudes of the gradients of the cost function. We prove bounds on the
differences between the gradients of the better-studied data-less parameterized
quantum circuits and those of re-uploading models, and we coin the concept of
{\sl absorption witness} to quantify this difference. Regarding expressivity, we prove
that quantum re-uploading models output functions with vanishing high-frequency
components and upper-bounded derivatives with respect to data. As a
consequence, such functions exhibit limited sensitivity to fine details, which
protects against overfitting. We perform numerical experiments that extend the
theoretical results to more relaxed and realistic conditions. Overall, future
designs of quantum re-uploading models will benefit from the insights delivered
by the uncovering of absorption witnesses and of vanishing high-frequency
components.