While advances in artificial intelligence and neuroscience have enabled the
emergence of neural networks capable of learning a wide variety of tasks, our
understanding of the temporal dynamics of these networks remains limited. Here,
we study the temporal dynamics of Hebbian Feedforward (HebbFF) neural networks
as they learn continual familiarity detection tasks. Drawing
inspiration from the field of network neuroscience, we examine the network's
dynamic reconfiguration, focusing on how network modules evolve throughout
learning. Using metrics such as network accuracy, modular flexibility, and
distribution entropy across diverse learning modes, we reveal several
previously unknown patterns of network
reconfiguration. In particular, we find that the emergence of network
modularity is a salient predictor of performance, and that modularization
strengthens with increasing flexibility throughout learning. These insights not
only elucidate the nuanced interplay of network modularity, accuracy, and
learning dynamics but also bridge our understanding of learning in artificial
and biological realms.
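
For concreteness, the sketch below illustrates how two of the network-neuroscience measures named above, modularity and per-node flexibility, could be computed from connectivity snapshots taken across training epochs. This is a minimal Python illustration under stated assumptions, not the method used in this work: the helper names (epoch_partition, flexibility) and the toy data are hypothetical, community detection uses networkx's greedy modularity heuristic, and flexibility is approximated by consecutive-epoch label changes rather than full multilayer community detection.

```python
# A minimal sketch, not the authors' implementation: the names below
# (epoch_partition, flexibility, snapshots) are hypothetical illustrations.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity


def epoch_partition(weights: np.ndarray):
    """Detect modules in one connectivity snapshot, treating |weight| as
    edge strength, and return the modularity Q plus per-node module labels."""
    A = np.abs(weights)
    np.fill_diagonal(A, 0.0)  # drop self-loops before community detection
    G = nx.from_numpy_array(A)
    communities = greedy_modularity_communities(G, weight="weight")
    Q = modularity(G, communities, weight="weight")
    labels = np.empty(A.shape[0], dtype=int)
    for idx, members in enumerate(communities):
        for node in members:
            labels[node] = idx
    return Q, labels


def flexibility(label_history: np.ndarray) -> np.ndarray:
    """Fraction of consecutive epochs in which each node switches module.
    Note: a simplified proxy; the standard measure (Bassett et al., 2011)
    aligns module labels across epochs via multilayer community detection."""
    changes = label_history[1:] != label_history[:-1]
    return changes.mean(axis=0)


# Usage with toy data: one (n, n) weight snapshot per training epoch.
rng = np.random.default_rng(0)
snapshots = [rng.random((8, 8)) for _ in range(5)]
Qs, labels = zip(*(epoch_partition(W) for W in snapshots))
flex = flexibility(np.stack(labels))
print("modularity per epoch:", np.round(Qs, 3))
print("per-node flexibility:", np.round(flex, 3))
```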