
    Correction: Fine tuning of ferromagnet/antiferromagnet interface magnetic anisotropy for field-free switching of antiferromagnetic spins.

    Correction for 'Fine tuning of ferromagnet/antiferromagnet interface magnetic anisotropy for field-free switching of antiferromagnetic spins' by M. Ślęzak et al., Nanoscale, 2020, DOI: 10.1039/d0nr04193a

    Tunable interplay between exchange coupling and uniaxial magnetic anisotropy in epitaxial CoO/Au/Fe trilayers

    Abstract We show that the interaction between ferromagnetic Fe(110) and antiferromagnetic CoO(111) sublayers can be mediated and precisely tuned by a nonmagnetic Au spacer. Our results show that the thicknesses of the Fe and Au layers can be chosen to modify the effective anisotropy of the Fe layer and the strength of the exchange bias interaction between the Fe and CoO sublayers. A well-defined and tailorable magnetic anisotropy of the ferromagnet above the Néel temperature of the antiferromagnet is the determining factor that governs the exchange bias and the orientation of the interfacial CoO spins at low temperatures. In particular, depending on the room-temperature magnetic state of Fe, the low-temperature exchange bias in a zero-field-cooled system can be turned “off” or “on”. Conversely, we show that exchange bias can be the dominant source of magnetic anisotropy for the ferromagnet, making it feasible to induce a 90-degree rotation of the easy axis relative to its initial, exchange-bias-free orientation.
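
    The claim that exchange bias can dominate the anisotropy and rotate the easy axis by 90 degrees can be pictured with a minimal single-domain energy model; this is an illustrative sketch, and the symbols K_u, t_F, J_eb, and theta_eb are assumptions rather than quantities taken from the paper:

        % Energy per unit area of the Fe layer as a function of the
        % magnetization angle theta (illustrative single-domain model):
        %   K_u      -- uniaxial anisotropy constant of Fe (easy axis at theta = 0)
        %   t_F      -- Fe layer thickness
        %   J_eb     -- interfacial exchange-bias coupling energy
        %   theta_eb -- pinning direction set by the interfacial CoO spins
        E(\theta) = K_u\, t_F \sin^2\theta \;-\; J_{\mathrm{eb}} \cos(\theta - \theta_{\mathrm{eb}})
        % For J_eb << K_u t_F the minimum stays at theta = 0; for J_eb >> K_u t_F
        % with theta_eb = 90 degrees, the minimum shifts to theta = 90 degrees,
        % i.e., the effective easy axis rotates by 90 degrees.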

    A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications

    Data scarcity is a major challenge when training deep learning (DL) models. DL demands a large amount of data to achieve strong performance, yet many applications have only small or inadequate datasets available for training. Labeled data usually has to be produced by manual annotation, which typically requires human annotators with substantial background knowledge; this annotation process is costly, time-consuming, and error-prone. Every DL framework needs a significant amount of labeled data to learn representations automatically, and, in general, more data yields a better model, although performance also remains application dependent. This issue is the main barrier that leads many applications to dismiss DL; having sufficient data is the first step toward any successful and trustworthy DL application. This paper presents a holistic survey of state-of-the-art techniques for training DL models under three challenges: small datasets, imbalanced datasets, and lack of generalization. The survey starts by listing the learning techniques, then introduces the types of DL architectures. After that, it lists state-of-the-art solutions to the lack of training data, such as Transfer Learning (TL), Self-Supervised Learning (SSL), Generative Adversarial Networks (GANs), Model Architecture (MA), Physics-Informed Neural Networks (PINNs), and the Deep Synthetic Minority Oversampling Technique (DeepSMOTE). These solutions are followed by practical tips on data acquisition prior to training, as well as recommendations for ensuring the trustworthiness of the training dataset. The survey ends with a list of applications that suffer from data scarcity, proposing for each several alternatives for generating more data: Electromagnetic Imaging (EMI), Civil Structural Health Monitoring, Medical Imaging, Meteorology, Wireless Communications, Fluid Mechanics, Microelectromechanical Systems, and Cybersecurity. To the best of the authors’ knowledge, this is the first review that offers a comprehensive overview of strategies for tackling data scarcity in DL.
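
    Of the listed remedies, transfer learning is the most widely used in practice; the sketch below illustrates the idea under stated assumptions (PyTorch/torchvision as the framework, ResNet-18 as the backbone, and a hypothetical five-class target task), not the survey's own implementation:

        # Minimal transfer-learning sketch: reuse a backbone pretrained on a
        # large dataset so only a small head is trained on the scarce data.
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 5  # hypothetical number of target classes

        # Load a ResNet-18 pretrained on ImageNet; freeze the feature extractor.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False

        # Replace the final fully connected layer; only this head is trained.
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One training step on a stand-in batch of "scarce" labeled data.
        inputs = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, NUM_CLASSES, (8,))
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

    Freezing the backbone keeps the number of trainable parameters small, which is what makes training stable when the target dataset is small.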