82 research outputs found

    Magnetic domain wall motion in a nanowire: depinning and creep

    The domain wall motion in a magnetic nanowire is examined theoretically in the regime where the driving force on the domain wall is weak and its competition with disorder is assisted by thermal agitation. Two types of driving forces are considered: magnetic field and current. While the field induces domain wall motion through the Zeeman energy, the current induces it by generating the spin transfer torque, whose effects in this regime remain controversial. The spin transfer torque has two mutually orthogonal vector components, the adiabatic and the nonadiabatic spin transfer torque. We investigate the separate effects of the two components on the domain wall depinning rate in one-dimensional systems and on the domain wall creep velocity in two-dimensional systems, both below the Walker breakdown threshold. In addition to the leading-order contribution coming from the field and/or the nonadiabatic spin transfer torque, we find that the adiabatic spin transfer torque generates corrections, which can be of relevance for an unambiguous analysis of experimental results. For instance, it is demonstrated that neglecting these corrections in experimental analysis may lead to an incorrect evaluation of the nonadiabaticity parameter. Effects of the Rashba spin-orbit coupling on the domain wall motion are also analyzed. Comment: 14 pages, 3 figures
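    For context, the standard phenomenological equation behind the two torque components named above (a textbook form, not quoted from the paper) is the Landau-Lifshitz-Gilbert equation augmented by the adiabatic and nonadiabatic spin transfer torques:

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}
  - (\mathbf{u} \cdot \nabla)\,\mathbf{m}
  + \beta\, \mathbf{m} \times (\mathbf{u} \cdot \nabla)\,\mathbf{m}
```

    Here \(\mathbf{m}\) is the unit magnetization, \(\gamma\) the gyromagnetic ratio, \(\alpha\) the Gilbert damping, and \(\mathbf{u}\) a velocity proportional to the current density. The third term is the adiabatic spin transfer torque and the fourth the nonadiabatic one, with \(\beta\) the nonadiabaticity parameter discussed in the abstract.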

    Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning

    Recent studies have proposed unified user modeling frameworks that leverage user behavior data from various applications. Many of them benefit from representing users' behavior sequences as plain text, which captures rich information from any domain or system without losing generality. Hence, a question arises: can language modeling on a user history corpus help improve recommender systems? While its versatile usability has been widely investigated in many domains, its application to recommender systems remains underexplored. We show that language modeling applied directly to task-specific user histories achieves excellent results on diverse recommendation tasks. Also, leveraging additional task-agnostic user histories delivers significant performance benefits. We further demonstrate that our approach can provide promising transfer learning capabilities for a broad spectrum of real-world recommender systems, even on unseen domains and services. Comment: 14 pages, 5 figures, 9 tables
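    A minimal sketch of the "behavior sequence as plain text" idea described above. The serialization format, field names, and `[SEP]` delimiter are invented for illustration; the paper's actual format is not given in the abstract.

```python
# Serialize a user's behavior log into one plain-text string, so that a
# standard language model (masked or causal) can be trained on it.
# All field names and example items are hypothetical.

def history_to_text(events):
    """Flatten a list of behavior events into a single text sequence."""
    return " [SEP] ".join(
        f"{e['action']} {e['item']} ({e['domain']})" for e in events
    )

events = [
    {"action": "viewed", "item": "wireless earbuds", "domain": "shopping"},
    {"action": "played", "item": "jazz playlist", "domain": "music"},
]
print(history_to_text(events))
# -> viewed wireless earbuds (shopping) [SEP] played jazz playlist (music)
```

    Because the representation is plain text, histories from different services can be concatenated into one corpus without a shared item vocabulary, which is what makes the task-agnostic pretraining in the abstract possible.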

    Scaling Law for Recommendation Models: Towards General-purpose User Representations

    Recent advancements in large-scale pretrained models such as BERT, GPT-3, CLIP, and Gopher have shown astonishing achievements across various task domains. Unlike vision recognition and language models, studies on general-purpose user representation at scale remain underexplored. Here we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scale. We demonstrate that a scaling law holds in user representation learning, where the training error scales as a power law with the amount of computation. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings expand what is possible in various downstream tasks. CLUE also shows great transferability to other domains and companies, as performance in an online experiment shows significant improvements in click-through rate (CTR). Furthermore, we investigate how model performance is influenced by scale factors such as training data size, model capacity, sequence length, and batch size. Finally, we discuss the broader impacts of CLUE in general. Comment: Accepted at AAAI 2023. This version includes the technical appendix
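    The power-law relationship claimed above, L(C) = a * C**(-b) for training error L and compute C, becomes a straight line in log-log space, which is how such exponents are typically fitted. A sketch with synthetic data (the coefficients below are invented, not the paper's):

```python
import numpy as np

# Synthetic scaling data: L = a * C**(-b) with a = 2.0, b = 0.05.
compute = np.array([1e15, 1e16, 1e17, 1e18])
error = 2.0 * compute ** -0.05

# log L = log a - b * log C, so a degree-1 fit in log-log space
# recovers the exponent b and prefactor a.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
b_hat, a_hat = -slope, np.exp(intercept)
print(f"fitted a = {a_hat:.3f}, b = {b_hat:.3f}")
```

    With real training curves the points scatter around the line, and the fitted exponent is what lets one extrapolate error at larger compute budgets.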

    Deformable Graph Transformer

    Transformer-based models have recently shown success in representation learning on graph-structured data beyond natural language processing and computer vision. However, the success is limited to small-scale graphs due to the drawbacks of full dot-product attention on graphs, such as quadratic complexity with respect to the number of nodes and message aggregation from enormous numbers of irrelevant nodes. To address these issues, we propose the Deformable Graph Transformer (DGT), which performs sparse attention via dynamically sampled relevant nodes to efficiently handle large-scale graphs with linear complexity in the number of nodes. Specifically, our framework first constructs multiple node sequences with various criteria to consider both structural and semantic proximity. Then, combined with our learnable Katz Positional Encodings, the sparse attention is applied to the node sequences to learn node representations at a significantly reduced computational cost. Extensive experiments demonstrate that our DGT achieves state-of-the-art performance on 7 graph benchmark datasets with 2.5-449 times less computational cost than transformer-based graph models with full attention. Comment: 16 pages, 3 figures
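    The classical (non-learnable) Katz index underlying the positional encoding named above counts walks between nodes with attenuation beta per hop: K = sum over k >= 1 of beta^k A^k = (I - beta*A)^(-1) - I, convergent when beta is below the reciprocal of the largest eigenvalue of A. How DGT parameterizes its learnable variant is not given in the abstract; the sketch below only illustrates the classical quantity.

```python
import numpy as np

# Katz index on a toy 3-node path graph 0-1-2:
#   K = (I - beta*A)^{-1} - I = sum_{k>=1} beta^k A^k,
# convergent for beta < 1 / lambda_max(A) (here lambda_max = sqrt(2)).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
beta = 0.1
n = A.shape[0]
K = np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

# Cross-check against the truncated walk series.
K_series = sum(beta**k * np.linalg.matrix_power(A, k) for k in range(1, 51))
print(np.allclose(K, K_series))  # True
```

    Entry K[i, j] grows with the number of short walks between i and j, which is why it serves as a proximity-aware positional signal for nodes in a sampled sequence.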

    Magnetization dynamics induced by in-plane currents in ultrathin magnetic nanostructures

    Ultrathin magnetic systems have properties qualitatively different from those of their thicker counterparts, implying that different physics governs them. We demonstrate that various such properties can be explained naturally by the Rashba spin-orbit coupling in ultrathin magnetic systems. This work will be valuable for the development of next-generation spintronic devices based on ultrathin magnetic systems. Comment: 4+ pages, 3 figures

    IL-6-mediated cross-talk between human preadipocytes and ductal carcinoma in situ in breast cancer progression

    Background: The function of preadipocytes in the progression of early-stage breast cancer has not been fully elucidated at the molecular level. To delineate the role of preadipocytes in breast cancer progression, we investigated the cross-talk between human breast ductal carcinoma in situ (DCIS) cells and preadipocytes with both an in vitro culture and a xenograft tumor model. Methods: GFP or RFP was transduced into the human DCIS cell line MCF10DCIS.com or into preadipocytes using lentivirus. A cell sorter was used to separate pure, viable populations of GFP- or RFP-transduced cells. Cell viability and proliferation were assessed by crystal violet assays, and cell migration and invasion capability were assayed by the transwell strategy. Gene and protein levels were measured by western blot, RT-PCR, and immunostaining. Adipokines and cytokines were quantified using ELISA. Human tumor xenografts in a nude mouse model were used. Ultrasound imaging of tumors was performed to evaluate the therapeutic potential of an IL-6 neutralizing antibody. Results: In the co-culture system of MCF10DCIS.com cells and preadipocytes, MCF10DCIS.com proliferation, migration, and invasion were enhanced by preadipocytes. Preadipocytes exhibited increased IL-6 secretion and expression of the cancer-associated fibroblast markers FSP1 and α-SMC in co-culture with MCF10DCIS.com cells or in MCF10DCIS.com conditioned media, whereas their adipocyte differentiation capacity was suppressed by co-culture with MCF10DCIS.com cells. A neutralizing antibody against IL-6 or IL-6R suppressed the promotion of MCF10DCIS.com proliferation and migration by co-culture with preadipocytes. In the xenograft tumor model, the tumor growth of MCF10DCIS.com was enhanced by the co-injection of preadipocytes, and the administration of IL-6 neutralizing antibodies resulted in potent tumor inhibition.
    Conclusions: Our findings suggest that IL-6-mediated cross-talk between preadipocytes and breast DCIS cells can promote the progression of early-stage breast cancer. Therefore, blocking IL-6 signaling might be a potential therapeutic strategy for breast DCIS characterized by pathological IL-6 overproduction. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2015R1A2A1A05001860) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2015R1D1A1A01059376). Sul Ki Choi, Hyelim Kim, and Yin Ji Piao are awardees of graduate student fellowships funded by Brain Korea 21 Plus (BK21 Plus).

    Generalised optical printing of photocurable metal chalcogenides

    Optical three-dimensional (3D) printing techniques have attracted tremendous attention owing to their applicability to mask-less additive manufacturing, which enables the cost-effective and straightforward creation of patterned architectures. However, despite their potential use as alternatives to traditional lithography, the printable materials obtained from these methods are strictly limited to photocurable resins, thereby restricting the functionality of the printed objects and their application areas. Herein, we report a generalised direct optical printing technique to obtain functional metal chalcogenides via digital light processing. We developed universally applicable photocurable chalcogenidometallate inks that could be directly used to create 2D patterns or micrometre-thick 2.5D architectures of various sizes and shapes. Our process is applicable to a diverse range of functional metal chalcogenides for compound semiconductors and 2D transition-metal dichalcogenides. We then demonstrated the feasibility of our technique by fabricating and evaluating a micro-scale thermoelectric generator bearing tens of patterned semiconductors. Our approach shows potential for simple and cost-effective architecturing of functional inorganic materials.