Re-entrant magic-angle phenomena in twisted bilayer graphene in integer magnetic fluxes
In this work we address the re-entrance of magic-angle phenomena (band
flatness and quantum-geometric transport) in twisted bilayer graphene (TBG)
subjected to strong magnetic fluxes $\Phi_0$, $2\Phi_0$, \ldots ($\Phi_0 = h/e$
is the flux quantum per moir\'e cell). The moir\'e
translation invariance is restored at the integer fluxes, for which we
calculate the TBG band structure using accurate atomistic models with lattice
relaxations. As with the magic-angle physics at zero flux, the reported effect
breaks down rapidly as the twist angle deviates from the magic-angle condition.
We conclude
that the magic-angle physics re-emerges in high magnetic fields, as witnessed
by the appearance of flat electronic bands that are distinct from Landau levels
and manifest non-trivial quantum geometry. We further discuss the possible
flat-band quantum geometric contribution to the superfluid weight in strong
magnetic fields (28 T at $1.08^\circ$ twist), according to the
Peotta-T\"{o}rm\"{a} mechanism.
Comment: 5 pages, 5 figures
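As a quick consistency check of the quoted field scale (a back-of-envelope sketch, not part of the abstract), one flux quantum $\Phi_0 = h/e$ threading a hexagonal moir\'e cell at a $1.08^\circ$ twist indeed comes out near 28 T; the graphene lattice constant $a = 0.246$ nm below is the standard value, not taken from the abstract:

    import math

    h_over_e = 4.135667e-15      # flux quantum h/e in Wb
    a = 0.246e-9                 # graphene lattice constant in m (standard value)
    theta = math.radians(1.08)   # magic-angle twist

    # Moire superlattice constant and hexagonal moire-cell area
    L = a / (2 * math.sin(theta / 2))   # ~13 nm
    area = math.sqrt(3) / 2 * L**2      # cell area in m^2

    # Field at which one flux quantum threads each moire cell
    B = h_over_e / area
    print(f"L = {L * 1e9:.1f} nm, B = {B:.0f} T")   # ~28 T, matching the abstract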
Learning physics-constrained subgrid-scale closures in the small-data regime for stable and accurate LES
We demonstrate how incorporating physics constraints into convolutional
neural networks (CNNs) enables learning subgrid-scale (SGS) closures for stable
and accurate large-eddy simulations (LES) in the small-data regime (i.e., when
the availability of high-quality training data is limited). Using several
setups of forced 2D turbulence as the testbeds, we examine the {\it a priori}
and {\it a posteriori} performance of three methods for incorporating physics:
1) data augmentation (DA), 2) CNN with group convolutions (GCNN), and 3) loss
functions that enforce a global enstrophy-transfer conservation (EnsCon). While
the data-driven closures from physics-agnostic CNNs trained in the big-data
regime are accurate and stable, and outperform dynamic Smagorinsky (DSMAG)
closures, their performance substantially deteriorates when these CNNs are
trained with 40x fewer samples (the small-data regime). We show that CNNs with
DA and GCNNs address this issue, each producing accurate and stable data-driven
closures in the small-data regime. Despite its simplicity, DA, which adds
appropriately rotated samples to the training set, performs as well as, or in
some cases even better than, GCNN, which uses a sophisticated
equivariance-preserving
architecture. EnsCon, which combines structural modeling with aspects of
functional modeling, also produces accurate and stable closures in the
small-data regime. Overall, GCNN+EnsCon, which combines these two physics
constraints, shows the best {\it a posteriori} performance in this regime.
These results illustrate the power of physics-constrained learning in the
small-data regime for accurate and stable LES.
Comment: 23 pages, 9 figures
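To make the DA strategy concrete, here is a minimal sketch, not the authors' code: the snapshot arrays of shape (N, ny, nx) and the restriction to the four 90-degree rotations of a square periodic grid are assumptions based on the 2D-turbulence setup, and the helper name is hypothetical. Augmenting the training pairs with rotated copies can be as simple as:

    import numpy as np

    def augment_with_rotations(inputs, targets):
        """Append 90-, 180-, and 270-degree rotated copies of each training pair.

        inputs, targets: arrays of shape (N, ny, nx) holding filtered-flow
        fields and the corresponding SGS terms on a square 2D periodic grid.
        """
        aug_in, aug_out = [inputs], [targets]
        for k in (1, 2, 3):   # number of successive 90-degree rotations
            aug_in.append(np.rot90(inputs, k=k, axes=(1, 2)))
            aug_out.append(np.rot90(targets, k=k, axes=(1, 2)))
        return np.concatenate(aug_in), np.concatenate(aug_out)

    # Example: 40 snapshots become 160 after augmentation
    x = np.random.rand(40, 64, 64)
    y = np.random.rand(40, 64, 64)
    x_aug, y_aug = augment_with_rotations(x, y)
    print(x_aug.shape)   # (160, 64, 64)

Rotations by multiples of 90 degrees are exact symmetries of a square grid, so no interpolation is needed; vector-valued fields would additionally require rotating their components, a detail this scalar-field sketch omits.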