891 research outputs found

    R&D offshoring and technology learning in emerging economies: Firm-level evidence from the ICT industry

    Get PDF
    This paper studies the impact of the R&D offshoring of multinational enterprises on firms in host emerging economies. We develop a two-stage non-cooperative game to analyze the strategic interaction between multinational and host country enterprises engaged in R&D investment. An empirical analysis of 12,309 manufacturing firms in the ICT industry in China shows that R&D offshoring has a positive effect on the R&D intensity of host country firms. However, the magnitude of the impact depends on both the technological and geographical distance between the multinational and host country firms. The policy implications of these findings are that host country governments should be cautious about allowing advanced multinational R&D investment in under-developed sectors but should encourage such investment in developed sectors, and that local governments should be involved in R&D policy making because the positive impact of multinational R&D offshoring diminishes as the geographical distance between the multinational and host country firms increases.
    Keywords: Research and Development, Offshoring, Spillovers, Emerging Economies
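
    To make the two-stage structure concrete, below is a purely illustrative backward-induction sketch in Python. The quadratic payoffs, the 0.3 spillover coefficient, and the exponential distance decay are invented here for illustration and are not the paper's model; only the qualitative mechanism (spillovers that fade with distance) follows the abstract.

```python
# Illustrative two-stage non-cooperative game solved by backward induction.
# All functional forms below are invented assumptions, not the paper's model.
import numpy as np
from scipy.optimize import minimize_scalar

def host_best_response(rd_mne: float, distance: float) -> float:
    """Stage 2: the host firm picks its R&D intensity given the MNE's offshored R&D."""
    spillover = np.exp(-distance)  # assumed: spillover decays with geographic distance
    profit = lambda rd: -(rd * (1 + spillover * rd_mne) - 0.5 * rd**2)
    return minimize_scalar(profit, bounds=(0, 10), method="bounded").x

def mne_payoff(rd_mne: float, distance: float) -> float:
    """Stage 1: the MNE anticipates the host firm's response (backward induction)."""
    rd_host = host_best_response(rd_mne, distance)
    return rd_mne * (1 + 0.3 * rd_host) - 0.5 * rd_mne**2

for dist in (0.1, 1.0, 3.0):
    best = minimize_scalar(lambda r: -mne_payoff(r, dist),
                           bounds=(0, 10), method="bounded").x
    print(f"distance={dist}: MNE R&D={best:.2f}, "
          f"host R&D={host_best_response(best, dist):.2f}")
```

    Running this shows the host firm's equilibrium R&D falling back toward its standalone level as distance grows, the qualitative pattern the abstract reports.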

    Chronic inflammation triggered by the NLRP3 inflammasome in myeloid cells promotes growth plate dysplasia by mesenchymal cells

    Get PDF
    Skeletal complications are common features of neonatal-onset multisystem inflammatory disease (NOMID), a disorder caused by NLRP3-activating mutations. NOMID mice in which NLRP3 is activated globally exhibit several characteristics of the human disease, including systemic inflammation and cartilage dysplasia, but the mechanisms of the skeletal manifestations remain unknown. In this study, we find that activation of NLRP3 in myeloid cells, but not in mesenchymal cells, triggers chronic inflammation, which ultimately causes growth plate and epiphyseal dysplasia in mice. These responses are IL-1 signaling-dependent but independent of PARP1, which also functions downstream of NLRP3 and regulates skeletal homeostasis. Mechanistically, inflammation causes severe anemia and hypoxia in the bone environment, yet down-regulates the HIF-1α pathway in chondrocytes, thereby promoting the demise of these cells. Thus, activation of NLRP3 in hematopoietic cells initiates IL-1β-driven paracrine cascades, which promote abnormal growth plate development in NOMID mice.

    Triplex inducer-directed self-assembly of single-walled carbon nanotubes: a triplex DNA-based approach for controlled manipulation of nanostructures

    Get PDF
    As a promising strategy for the artificial control of gene expression, the reversible assembly of nanomaterials, and DNA nanomachines, DNA triplex formation has received much attention. Carbon nanotubes have been successfully explored as gene and drug delivery vectors and as ‘building blocks’ in nano/microelectronic devices. Therefore, studies on triplex DNA-based carbon nanotube hybrid materials are important for the development of smart nanomaterials and for gene therapy. In this report, a small-molecule-directed single-walled carbon nanotube (SWNT) self-assembly assay has been developed, based on the disproportionation of the SWNTs–dT22·dA22 duplex into the triplex dT22·dA22·dT22 and dA22 by a triplex formation inducer, coralyne. This has been studied by circular dichroism, light scattering (LS) spectroscopy, scanning electron microscopy (SEM), atomic force microscopy (AFM), and electrophoretic mobility shift assay, and supported by experiments using a random DNA sequence. In contrast, SWNTs do not aggregate under the same experimental conditions when the small molecules used cannot induce dT22·dA22·dT22 triplex formation. Therefore, this novel small-molecule-directed SWNT self-assembly assay has also been used for the screening of triplex inducers in our studies.

    Short-selling prior to analyst recommendations

    Get PDF

    Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation

    Full text link
    While overparameterization in machine learning models offers great benefits in terms of optimization and generalization, it also leads to increased computational requirements as model sizes grow. In this work, we show that by leveraging the inherent low-dimensional structures of data and compressible dynamics within the model parameters, we can reap the benefits of overparameterization without the computational burdens. In practice, we demonstrate the effectiveness of this approach for deep low-rank matrix completion as well as fine-tuning language models. Our approach is grounded in theoretical findings for deep overparameterized low-rank matrix recovery, where we show that the learning dynamics of each weight matrix are confined to an invariant low-dimensional subspace. Consequently, we can construct and train compact, highly compressed factorizations possessing the same benefits as their overparameterized counterparts. In the context of deep matrix completion, our technique substantially improves training efficiency while retaining the advantages of overparameterization. For language model fine-tuning, we propose a method called "Deep LoRA", which improves the existing low-rank adaptation (LoRA) technique, leading to reduced overfitting and a simplified hyperparameter setup, while maintaining comparable efficiency. We validate the effectiveness of Deep LoRA on natural language tasks, particularly when fine-tuning with limited data. Our code is available at https://github.com/cjyaras/deep-lora-transformers.
    Comment: Accepted at ICML'24 (Oral).
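
    As a concrete reference point, here is a minimal, hypothetical sketch of the deep low-rank adapter idea the abstract describes: a frozen pretrained layer plus a chain of more than two trainable low-rank factors. The class name, initialization, and shapes are assumptions made here, not the authors' implementation (which lives in the linked repository).

```python
# Hedged sketch: LoRA-style adapter with a deeper chain of low-rank factors.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, depth: int = 3):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight
        d_out, d_in = base.weight.shape
        dims = [d_in] + [rank] * (depth - 1) + [d_out]
        factors = [torch.randn(dims[i + 1], dims[i]) * 0.01 for i in range(depth)]
        factors[-1] = torch.zeros(dims[-1], dims[-2])  # delta starts at zero, as in LoRA
        self.factors = nn.ParameterList(nn.Parameter(f) for f in factors)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.factors[0]
        for f in self.factors[1:]:
            delta = f @ delta  # compose the chain into one low-rank update
        return self.base(x) + x @ delta.t()

layer = LowRankAdapter(nn.Linear(768, 768), rank=8, depth=3)
out = layer(torch.randn(4, 768))  # only the adapter factors receive gradients
```

    With depth=2 this reduces to ordinary LoRA; the extra factor is the overparameterization the abstract argues can be had without extra computational burden.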

    Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold

    Full text link
    When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon. More specifically, for the output features of the penultimate layer, the within-class features of each class converge to their mean, and the means of different classes exhibit a certain tight frame structure, which is also aligned with the last layer's classifier. As feature normalization in the last layer has become common practice in modern representation learning, in this work we theoretically justify the neural collapse phenomenon for normalized features. Based on an unconstrained feature model, we simplify the empirical loss function of a multi-class classification task into a nonconvex optimization problem over the Riemannian manifold by constraining all features and classifiers to the sphere. In this context, we analyze the nonconvex landscape of the Riemannian optimization problem over the product of spheres, showing a benign global landscape in the sense that the only global minimizers are the neural collapse solutions, while all other critical points are strict saddles with negative curvature. Experimental results on practical deep networks corroborate our theory and demonstrate that better representations can be learned faster via feature normalization.
    Comment: The first two authors contributed to this work equally; 38 pages, 13 figures. Accepted at NeurIPS'22.
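
    For readers who want to check the phenomenon numerically, here is a small sketch with standard neural-collapse diagnostics for sphere-normalized features; the two metrics below are common choices assumed here, not definitions taken from the paper.

```python
# Hedged sketch: two standard neural-collapse diagnostics on normalized features.
import numpy as np

def collapse_diagnostics(features: np.ndarray, labels: np.ndarray):
    """features: (n, d) penultimate-layer outputs; labels: (n,) class ids."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)  # project to sphere
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    means /= np.linalg.norm(means, axis=1, keepdims=True)
    # (1) within-class variability: mean distance of features to their class mean
    within = np.mean([np.linalg.norm(feats[labels == c] - means[i], axis=1).mean()
                      for i, c in enumerate(classes)])
    # (2) simplex ETF check: pairwise cosines of class means should equal -1/(K-1)
    K = len(classes)
    cos = means @ means.T
    etf_gap = np.abs(cos[~np.eye(K, dtype=bool)] + 1.0 / (K - 1)).max()
    return within, etf_gap

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 16)), rng.integers(0, 3, size=300)
# On random features both values stay large; they shrink toward 0 under collapse.
print(collapse_diagnostics(X, y))
```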

    UniMOS: A Universal Framework For Multi-Organ Segmentation Over Label-Constrained Datasets

    Full text link
    Machine learning models for medical images can help physicians diagnose and manage diseases. However, because medical image annotation demands a great deal of manpower and expertise, and because clinical departments annotate images only for their own tasks, labeled medical images are scarce relative to unlabeled ones, and many datasets annotate only a single organ. In this paper, we present UniMOS, the first universal framework for exploiting fully labeled, partially labeled, and unlabeled images. Specifically, we construct a Multi-Organ Segmentation (MOS) module over fully/partially labeled data as the base network and design a new target-adaptive loss. Furthermore, we incorporate a semi-supervised training module that combines consistency regularization and pseudo-labeling techniques on unlabeled data, which significantly improves the segmentation of unlabeled images. Experiments show that the framework outperforms other advanced methods on several medical image segmentation tasks while significantly improving data utilization and reducing annotation cost. Code and models are available at: https://github.com/lw8807001/UniMOS.
    Comment: Accepted by BIBM2023.
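
    The semi-supervised module combines two standard ingredients, and the sketch below shows the generic pseudo-labeling-with-consistency pattern on per-pixel logits. The confidence threshold, the weak/strong augmentation split, and the toy model are placeholder assumptions, not UniMOS's actual components.

```python
# Hedged sketch: generic pseudo-labeling + consistency loss for segmentation.
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, weak_img, strong_img, threshold: float = 0.9):
    """weak_img/strong_img: two augmented views of the same unlabeled batch,
    shaped (B, C, H, W); model outputs per-pixel class logits (B, K, H, W)."""
    with torch.no_grad():
        probs = F.softmax(model(weak_img), dim=1)  # teacher pass on the weak view
        conf, pseudo = probs.max(dim=1)            # per-pixel pseudo-labels
        mask = conf >= threshold                   # keep only confident pixels
    logits = model(strong_img)                     # student pass on the strong view
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

model = torch.nn.Conv2d(1, 4, kernel_size=1)  # toy stand-in "segmenter": 4 classes
x = torch.randn(2, 1, 8, 8)
print(semi_supervised_loss(model, x, x + 0.1 * torch.randn_like(x)))
```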

    Dense matter with eXTP

    Full text link
    In this White Paper we present the potential of the Enhanced X-ray Timing and Polarimetry (eXTP) mission for determining the nature of dense matter; neutron star cores host an extreme density regime that cannot be replicated in a terrestrial laboratory. The tightest statistical constraints on the dense matter equation of state will come from pulse profile modelling of accretion-powered pulsars, burst oscillation sources, and rotation-powered pulsars. Additional constraints will derive from spin measurements, burst spectra, and properties of the accretion flows in the vicinity of the neutron star. Under development by an international Consortium led by the Institute of High Energy Physics of the Chinese Academy of Sciences, the eXTP mission is expected to be launched in the mid-2020s.
    Comment: Accepted for publication in Sci. China Phys. Mech. Astron. (2019).

    The Law of Parsimony in Gradient Descent for Learning Deep Linear Networks

    Full text link
    Over the past few years, an extensively studied phenomenon in training deep networks has been the implicit bias of gradient descent towards parsimonious solutions. In this work, we investigate this phenomenon by narrowing our focus to deep linear networks. Through our analysis, we reveal a surprising "law of parsimony" in the learning dynamics when the data possesses low-dimensional structures. Specifically, we show that the evolution of gradient descent starting from orthogonal initialization only affects a minimal portion of singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, despite the fact that all weight parameters are updated throughout training. This simplicity in learning dynamics could have significant implications for both efficient training and a better understanding of deep networks. First, the analysis enables us to considerably improve training efficiency by taking advantage of the low-dimensional structure in learning dynamics. We can construct smaller, equivalent deep linear networks without sacrificing the benefits associated with their wider counterparts. Second, it allows us to better understand deep representation learning by elucidating the linear progressive separation and concentration of representations from shallow to deep layers. We also conduct numerical experiments to support our theoretical results. The code for our experiments can be found at https://github.com/cjyaras/lawofparsimony.
    Comment: The first two authors contributed to this work equally; 32 pages, 12 figures.
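
    The subspace confinement is easy to observe numerically. The sketch below trains a three-layer deep linear network on a rank-2 target with gradient descent from scaled orthogonal initialization and counts how many singular values of each weight matrix move from their initial value; the dimensions, step size, and init scale are arbitrary choices for illustration, not the paper's experimental setup.

```python
# Hedged sketch: observing the "law of parsimony" in deep matrix factorization.
import torch

torch.manual_seed(0)
d, depth, r, steps, lr = 32, 3, 2, 1000, 0.05
U, V = torch.randn(d, r), torch.randn(d, r)
target = U @ V.t() / d  # low-rank matrix the end-to-end product should fit
# Small orthogonal initialization for each weight matrix.
Ws = [(0.1 * torch.nn.init.orthogonal_(torch.empty(d, d))).requires_grad_()
      for _ in range(depth)]

for _ in range(steps):
    prod = Ws[0]
    for W in Ws[1:]:
        prod = W @ prod  # end-to-end linear map
    loss = 0.5 * (prod - target).pow(2).sum()
    loss.backward()
    with torch.no_grad():
        for W in Ws:
            W -= lr * W.grad
            W.grad = None

for i, W in enumerate(Ws):
    s = torch.linalg.svdvals(W.detach())
    moved = ((s - 0.1).abs() > 1e-2).sum().item()
    # Per the stated law, expect only a handful (about 2*r) to have moved.
    print(f"layer {i}: {moved} of {d} singular values moved")
```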

    Tet2 loss leads to hypermutagenicity in haematopoietic stem/progenitor cells

    Get PDF
    TET2 is a dioxygenase that catalyses multiple steps of 5-methylcytosine oxidation. Although TET2 mutations frequently occur in various types of haematological malignancies, the mechanism by which they increase the risk of these cancers remains poorly understood. Here we show that Tet2-/- mice develop spontaneous myeloid, T- and B-cell malignancies after long latencies. Exome sequencing of Tet2-/- tumours reveals the accumulation of numerous mutations, including in Apc, Nf1, Flt3, Cbl, Notch1 and Mll2, genes that are recurrently deleted or mutated in human haematological malignancies. Single-cell-targeted sequencing of wild-type and premalignant Tet2-/- Lin-c-Kit+ cells shows higher mutation frequencies in Tet2-/- cells. We further show that the increased mutational burden is particularly high at genomic sites that gained 5-hydroxymethylcytosine, where TET2 normally binds. Furthermore, patients with TET2-mutated myeloid malignancies have significantly more mutational events than patients with wild-type TET2. Thus, Tet2 loss leads to hypermutagenicity in haematopoietic stem/progenitor cells, suggesting a novel TET2 loss-mediated mechanism of haematological malignancy pathogenesis.