
    Systematic Analysis of Frustration Effects in Anisotropic Checkerboard Lattice Hubbard Model

    We study the ground-state properties of the geometrically frustrated Hubbard model on the anisotropic checkerboard lattice with nearest-neighbor hopping t and next-nearest-neighbor hopping t'. Using the path-integral renormalization group method, we study the phase diagram in the parameter space of the Hubbard interaction U and the frustration-control parameter t'/t. Close examination of the effective hopping, the double occupancy, the momentum distribution, and the spin/charge correlation functions allows us to determine the phase diagram at zero temperature, where the plaquette-singlet insulator emerges in addition to the antiferromagnetic insulator and the paramagnetic metal. Spin-liquid insulating states without any kind of symmetry breaking are not found in our frustrated model. Comment: 7 pages, 5 figures
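    For reference, the sketch below writes the anisotropic checkerboard-lattice Hubbard model in its usual form, with t on the nearest-neighbor bonds, t' on the next-nearest-neighbor (crossed-plaquette) bonds, and U the on-site repulsion. The sign conventions and exact bond sets are assumptions and may differ from the paper's own definitions.

    ```latex
    % Assumed standard form; bond sets and sign conventions may differ from the paper.
    \begin{equation}
      H = -t \sum_{\langle i,j\rangle,\sigma}
            \bigl( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \bigr)
          - t' \sum_{\langle\langle i,j\rangle\rangle,\sigma}
            \bigl( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \bigr)
          + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    \end{equation}
    ```

    Here the first sum runs over nearest-neighbor bonds, the second over the diagonal bonds of the crossed plaquettes, and n_{i,sigma} is the number operator; the ratio t'/t then tunes the degree of frustration.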

    Breaking the trade-off in personalized speech enhancement with cross-task knowledge distillation

    Personalized speech enhancement (PSE) models achieve promising results compared with unconditional speech enhancement models due to their ability to remove interfering speech in addition to background noise. Unlike unconditional speech enhancement models, however, causal PSE models may occasionally remove the target speech by mistake. PSE models also tend to leak interfering speech when the target speaker is silent for an extended period. We show that existing PSE methods suffer from a trade-off between speech over-suppression and interference leakage: they address one problem at the expense of the other. We propose a new PSE model training framework that uses cross-task knowledge distillation to mitigate this trade-off. Specifically, we utilize a personalized voice activity detector (pVAD) during training to exclude, via hard or soft classification, the non-target speech frames that are wrongly identified as containing the target speaker. This prevents the PSE model from being too aggressive while still allowing it to learn to suppress the input speech when it is likely to be spoken by interfering speakers. Comprehensive evaluation results covering various PSE usage scenarios are presented. Comment: Submitted to ICASSP 202
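    As a rough illustration of the kind of pVAD-guided frame selection described above, the Python sketch below weights a frame-wise enhancement loss by per-frame target-speaker probabilities from a personalized VAD, switching between hard and soft classification. All names here (pse_training_loss, pvad_probs, the L1 frame loss) are hypothetical placeholders, not the paper's actual training objective or code.

    ```python
    # Hedged sketch (not the paper's implementation): per-frame loss weighting
    # for PSE training, guided by a personalized VAD (pVAD).
    import torch


    def pse_training_loss(enhanced, target, pvad_probs, hard=False, threshold=0.5):
        """Frame-wise L1 loss between enhanced and target spectrograms,
        weighted by pVAD target-speaker probabilities.

        enhanced, target: (batch, frames, freq) magnitude spectrograms
        pvad_probs:       (batch, frames) probability that the target
                          speaker is active in each frame
        """
        if hard:
            # Hard classification: keep only frames the pVAD marks as target speech.
            weights = (pvad_probs > threshold).float()
        else:
            # Soft classification: weight each frame by its target-speaker probability.
            weights = pvad_probs

        frame_loss = (enhanced - target).abs().mean(dim=-1)            # (batch, frames)
        return (frame_loss * weights).sum() / weights.sum().clamp(min=1e-8)


    if __name__ == "__main__":
        B, T, F = 2, 100, 257
        enhanced = torch.rand(B, T, F)
        target = torch.rand(B, T, F)
        pvad_probs = torch.rand(B, T)  # stand-in for per-frame pVAD outputs
        print(pse_training_loss(enhanced, target, pvad_probs).item())
    ```

    The intent of such weighting is that frames the pVAD deems interference-only contribute little or nothing to the enhancement loss, so the PSE model is not penalized into over-suppressing the target while still learning to attenuate interfering speakers.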