
    Collective magnetization dynamics in ferromagnetic (Ga,Mn)As mediated by photo-excited carriers

    We present a study of photo-excited magnetization dynamics in ferromagnetic (Ga,Mn)As films observed by time-resolved magneto-optical measurements. The magnetization precession triggered by linearly polarized optical pulses in the absence of an external field shows a strong dependence on photon frequency when the photo-excitation energy approaches the band edge of (Ga,Mn)As. This behavior can be understood in terms of magnetic anisotropy modulation both by laser heating of the sample and by hole-induced non-thermal pathways. Our findings provide a means of identifying the transition of laser-triggered magnetization dynamics from thermal to non-thermal mechanisms, a result of importance for ultrafast optical spin manipulation in ferromagnetic materials via non-thermal pathways.
    Comment: 11 pages, 9 figures
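    The precession launch described above can be pictured with a Landau-Lifshitz-Gilbert (LLG) model in which the pump transiently perturbs the anisotropy field. The minimal Python sketch below is only an illustration of that picture, not the model used in the paper; all parameter values and the exponential recovery of the transient field are assumptions.

    # Minimal sketch (not the paper's model): magnetization precession launched
    # by a transient perturbation of the anisotropy field, a generic stand-in for
    # the laser-induced (thermal or hole-mediated) anisotropy modulation.
    import numpy as np
    from scipy.integrate import solve_ivp

    gamma = 1.76e11      # gyromagnetic ratio (rad s^-1 T^-1)
    alpha = 0.05         # Gilbert damping constant (assumed)
    H_k   = 0.05         # static uniaxial anisotropy field along x (T, assumed)
    dH    = 0.02         # transient pump-induced field along y (T, assumed)
    tau   = 50e-12       # recovery time of the transient field (s, assumed)

    def h_eff(t, m):
        # Static easy-axis field along x plus a decaying transient field along y
        # that stands in for the pump-induced change of the magnetic anisotropy.
        return np.array([H_k * m[0], dH * np.exp(-t / tau), 0.0])

    def llg(t, m):
        # Landau-Lifshitz-Gilbert equation written in the Landau-Lifshitz form.
        h = h_eff(t, m)
        precession = -gamma * np.cross(m, h)
        damping = -gamma * alpha * np.cross(m, np.cross(m, h))
        return (precession + damping) / (1.0 + alpha**2)

    m0 = np.array([1.0, 0.0, 0.0])        # magnetization initially along the easy axis
    sol = solve_ivp(llg, (0.0, 2e-9), m0, max_step=1e-12)
    print("m(t_final) =", sol.y[:, -1])   # damped precession relaxes back toward the easy axis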

    Poly[[bis[3,10-bis(2-hydroxyethyl)-1,3,5,8,10,12-hexaazacyclotetradecane]tetra-μ-cyanido-tetracyanidodicopper(II)molybdenum(IV)] tetrahydrate]

    In the title complex, {[Cu2Mo(CN)8(C12H30N6O2)2]·4H2O}n, the coordination polyhedron around the Mo atom, which lies on a site of imposed symmetry, has a distorted square-antiprismatic shape, while the Cu atom (site symmetry 2) is in a distorted, axially elongated octahedral coordination environment. The uncoordinated water molecule is disordered over three sites with occupancies of 0.445 (7), 0.340 (7) and 0.215 (7). The Mo and Cu atoms, acting as basic building units, are connected through Mo—CN—Cu—NC—Mo linkages to form a distorted diamond-like network. Additional hydrogen bonding between the N—H groups and the water molecules stabilizes this arrangement.

    Effects of relative orientation of the molecules on electron transport in molecular devices

    Effects of the relative orientation of the molecules on electron transport in molecular devices are studied by the non-equilibrium Green's function method based on density functional theory. In particular, two molecular devices, in which planar Au7 and Ag3 clusters are sandwiched between Al(100) electrodes, are studied. In each device, two typical configurations, with the cluster parallel or perpendicular to the electrodes, are considered. It is found that the relative orientation affects the transport properties of the two devices in completely different ways. In the Al(100)-Au7-Al(100) device, the conductance and the current of the parallel configuration are much larger than those of the perpendicular configuration, while in the Al(100)-Ag3-Al(100) device the opposite behavior is found.
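    The transmission function itself requires a full NEGF+DFT calculation, which is not reproduced here, but the conductance and current comparison in the abstract follows from the Landauer-Buettiker relations. Below is a minimal Python sketch of that final step; the Lorentzian transmission function and all numerical values are placeholders, not results from the paper.

    # Landauer-Buettiker step: given a transmission function T(E) (here an assumed
    # Lorentzian placeholder), compute zero-bias conductance G = G0*T(E_F) and the
    # finite-bias current I = (2e/h) * int dE T(E) [f_L(E) - f_R(E)].
    import numpy as np

    G0 = 7.748e-5        # conductance quantum 2e^2/h (S)
    e  = 1.602e-19       # elementary charge (C)
    h  = 6.626e-34       # Planck constant (J s)
    kT = 0.025           # thermal energy at room temperature (eV)

    def transmission(E):
        # Placeholder: a single Lorentzian resonance near the Fermi level (E = 0).
        E0, gamma_w = 0.3, 0.2
        return gamma_w**2 / ((E - E0)**2 + gamma_w**2)

    def fermi(E, mu):
        return 1.0 / (1.0 + np.exp((E - mu) / kT))

    def current(V, n=4001):
        # Bias window applied symmetrically around the Fermi level; energies in eV.
        E = np.linspace(-2.0, 2.0, n)
        window = fermi(E, +V / 2.0) - fermi(E, -V / 2.0)
        integral_eV = np.trapz(transmission(E) * window, E)  # integral in eV
        return (2.0 * e / h) * integral_eV * e               # convert eV to joules

    print("zero-bias conductance: %.3e S" % (G0 * transmission(0.0)))
    print("current at 0.5 V bias: %.3e A" % current(0.5))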

    Diverse Target and Contribution Scheduling for Domain Generalization

    Generalization under distribution shift has been a great challenge in computer vision. The prevailing practice of directly employing one-hot labels as the training targets in domain generalization (DG) can lead to gradient conflicts, making it insufficient for capturing the intrinsic class characteristics and hard to increase intra-class variation. Moreover, existing DG methods mostly overlook the distinct contributions of the source (seen) domains, resulting in uneven learning from these domains. To address these issues, we first present a theoretical and empirical analysis of gradient conflicts in DG, unveiling the previously unexplored relationship between distribution shifts and gradient conflicts during optimization. We then rethink DG from the perspective of the empirical source-domain risk and propose a new paradigm called Diverse Target and Contribution Scheduling (DTCS). DTCS comprises two novel modules, Diverse Target Supervision (DTS) and Diverse Contribution Balance (DCB), which address the limitations of one-hot labels and of equal source-domain contributions in DG. Specifically, DTS employs distinct soft labels as training targets to account for the different feature distributions across domains, thereby mitigating gradient conflicts, while DCB dynamically balances the contributions of the source domains by ensuring a fair decline in their losses. Extensive experiments and analysis on four benchmark datasets show that the proposed method achieves competitive performance compared with state-of-the-art approaches, demonstrating the effectiveness and advantages of DTCS.
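    To make the two modules more concrete, here is a rough PyTorch sketch of the underlying ideas: soft training targets in place of one-hot labels (the DTS idea) and per-domain loss reweighting based on how quickly each domain's loss has been falling (the DCB idea). The smoothing scheme, the reweighting rule and all shapes are assumptions for illustration; this is not the authors' algorithm.

    # Rough sketch (not the authors' code) of the DTS and DCB ideas.
    import torch
    import torch.nn.functional as F

    def soft_targets(labels, num_classes, smooth=0.1):
        # One assumed way to build soft targets: label smoothing. The paper's DTS
        # constructs distinct soft labels to reflect per-domain feature statistics.
        one_hot = F.one_hot(labels, num_classes).float()
        return one_hot * (1.0 - smooth) + smooth / num_classes

    def domain_losses(model, batches, num_classes):
        # batches: list of (x, y) pairs, one per source domain.
        losses = []
        for x, y in batches:
            logits = model(x)
            targets = soft_targets(y, num_classes)
            ce = torch.sum(-targets * F.log_softmax(logits, dim=1), dim=1).mean()
            losses.append(ce)
        return torch.stack(losses)

    def balanced_objective(losses, prev_losses, eps=1e-8):
        # DCB-style idea: give larger weight to domains whose loss has dropped the
        # least since the previous step, so that all domains improve evenly.
        # prev_losses must be a detached copy maintained by the caller.
        decline = (prev_losses - losses.detach()).clamp(min=0.0)
        weights = torch.softmax(-decline / (decline.mean() + eps), dim=0)
        return (weights * losses).sum()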

    Rethinking Domain Generalization: Discriminability and Generalizability

    Domain generalization (DG) endeavors to develop robust models that possess strong generalizability while preserving excellent discriminability. Nonetheless, pivotal DG techniques tend to improve feature generalizability by learning domain-invariant representations, inadvertently overlooking feature discriminability. On the one hand, the simultaneous attainment of generalizability and discriminability presents a complex challenge, often entailing inherent contradictions. This challenge becomes particularly pronounced when domain-invariant features exhibit reduced discriminability owing to the inclusion of unstable factors, i.e., spurious correlations. On the other hand, prevailing domain-invariant methods can be categorized as category-level alignment, which is susceptible to discarding indispensable features with substantial generalizability and to narrowing intra-class variations. To surmount these obstacles, we rethink DG from a new perspective that imbues features with both strong discriminability and robust generalizability, and present a novel framework, Discriminative Microscopic Distribution Alignment (DMDA). DMDA incorporates two core components: Selective Channel Pruning (SCP) and Micro-level Distribution Alignment (MDA). Concretely, SCP curtails redundancy within neural networks, prioritizing stable attributes conducive to accurate classification; this alleviates the adverse effect of spurious domain invariance and amplifies feature discriminability. In addition, MDA accentuates micro-level alignment within each class, going beyond mere category-level alignment; this strategy retains sufficient generalizable features and accommodates within-class variations. Extensive experiments on four benchmark datasets corroborate the efficacy of our method.
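    As an illustration of the micro-level alignment idea in MDA (aligning feature statistics within each class across source domains rather than only at the whole-domain level), a rough PyTorch sketch is given below. Penalizing the spread of per-domain class means is an assumed choice of penalty, not the paper's actual loss, and all shapes are assumptions.

    # Rough sketch (not the authors' implementation) of a within-class,
    # cross-domain alignment penalty on feature embeddings.
    import torch

    def micro_alignment_loss(features, labels, domains, num_classes, num_domains):
        # features: (N, D) embeddings; labels, domains: (N,) integer tensors.
        loss, classes_used = features.new_zeros(()), 0
        for c in range(num_classes):
            means = []
            for d in range(num_domains):
                mask = (labels == c) & (domains == d)
                if mask.any():
                    means.append(features[mask].mean(dim=0))
            # Penalize the spread of the per-domain class means around their average.
            if len(means) > 1:
                means = torch.stack(means)
                loss = loss + ((means - means.mean(dim=0)) ** 2).sum()
                classes_used += 1
        return loss / max(classes_used, 1)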