
    Emergent hydrodynamics in non-equilibrium quantum systems

    A tremendous amount of recent attention has focused on characterizing the dynamical properties of periodically driven many-body systems. Here, we use a novel numerical tool termed "density matrix truncation" (DMT) to investigate the late-time dynamics of large-scale Floquet systems. We find that DMT accurately captures two essential pieces of Floquet physics, namely, prethermalization and late-time heating to infinite temperature. Moreover, by implementing a spatially inhomogeneous drive, we demonstrate that an interplay between Floquet heating and diffusive transport is crucial to understanding the system's dynamics. Finally, we show that DMT also provides a powerful method for quantitatively capturing the emergence of hydrodynamics in static (undriven) Hamiltonians; in particular, by simulating the dynamics of generic, large-scale quantum spin chains (up to L = 100), we are able to directly extract the energy diffusion coefficient.
    Comment: 6+21 pages, 4+23 figures
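
    As context for the last point, here is a minimal sketch (not the paper's DMT code) of how a diffusion coefficient can be read off from simulated energy-density profiles: for diffusive transport, the spatial variance of an initially localized profile grows as sigma^2(t) ~ 2Dt, so D follows from a linear fit. The arrays times, profiles, and x below are assumed stand-ins for simulation output.

    import numpy as np

    def diffusion_coefficient(times, profiles, x):
        """times: shape (T,); profiles: shape (T, L) energy densities; x: shape (L,) positions."""
        variances = []
        for E in profiles:
            w = E - E.min()      # shift so the profile is a valid weight
            w = w / w.sum()      # normalize to a probability distribution
            mean = (w * x).sum()
            variances.append((w * (x - mean) ** 2).sum())
        # sigma^2(t) = 2 D t + const, so the fitted slope divided by 2 gives D.
        slope, _ = np.polyfit(times, variances, 1)
        return slope / 2.0

    # Self-check on synthetic diffusive (Gaussian) profiles with D = 0.5,
    # i.e. sigma^2(t) = 2 D t = t:
    x = np.linspace(-50, 50, 101)
    times = np.linspace(1.0, 10.0, 10)
    profiles = np.array([np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t) for t in times])
    print(diffusion_coefficient(times, profiles, x))  # ~0.5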

    Efficient Sharpness-aware Minimization for Improved Training of Neural Networks

    Overparametrized Deep Neural Networks (DNNs) often achieve astounding performance but may suffer from severe generalization error. Recently, the relation between the sharpness of the loss landscape and the generalization error was established by Foret et al. (2020), in which the Sharpness Aware Minimizer (SAM) was proposed to mitigate the degradation of generalization. Unfortunately, SAM's computational cost is roughly double that of base optimizers, such as Stochastic Gradient Descent (SGD). This paper thus proposes Efficient Sharpness Aware Minimizer (ESAM), which boosts SAM's efficiency at no cost to its generalization performance. ESAM includes two novel and efficient training strategies: Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection. In the former, the sharpness measure is approximated by perturbing a stochastically chosen set of weights in each iteration; in the latter, the SAM loss is optimized using only a judiciously selected subset of data that is sensitive to the sharpness. We provide theoretical explanations as to why these strategies perform well. We also show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM reduces SAM's overhead over base optimizers from 100% extra computation to 40%, while test accuracies are preserved or even improved.
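
    As a rough illustration of the two strategies, the following PyTorch sketch layers them onto a standard SAM step: (i) a random weight mask for Stochastic Weight Perturbation and (ii) a top-k per-example loss filter for Sharpness-Sensitive Data Selection. The helper esam_step and the hyperparameter names rho, beta, and gamma are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn.functional as F

    def esam_step(model, x, y, optimizer, rho=0.05, beta=0.5, gamma=0.5):
        """One illustrative ESAM-style step for a classification model (sketch)."""
        optimizer.zero_grad()

        # First pass: gradient of the batch loss at the current weights w.
        F.cross_entropy(model(x), y).backward()

        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))

        eps = {}
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is None:
                    continue
                # Stochastic Weight Perturbation: perturb only a random
                # fraction beta of the weights toward higher loss.
                mask = (torch.rand_like(p) < beta).float()
                e = rho * mask * p.grad / (grad_norm + 1e-12)
                p.add_(e)
                eps[p] = e
        optimizer.zero_grad()

        # Sharpness-Sensitive Data Selection: at the perturbed weights w + e,
        # backpropagate only the fraction gamma of examples with the largest loss.
        per_example = F.cross_entropy(model(x), y, reduction="none")
        k = max(1, int(gamma * x.shape[0]))
        per_example.topk(k).values.mean().backward()

        # Restore the original weights, then apply the base optimizer step
        # using the sharpness-aware gradient.
        with torch.no_grad():
            for p, e in eps.items():
                p.sub_(e)
        optimizer.step()

    One simplification to note: the paper describes selecting examples by how sensitive their loss is to the perturbation, whereas the sketch above ranks by the perturbed loss itself, which is the simplest proxy for that criterion.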