
    EpCAM Is an Endoderm-Specific Wnt Derepressor that Licenses Hepatic Development

    Summary: Mechanisms underlying cell-type-specific responses to morphogens or signaling molecules during embryonic development are poorly understood. To learn how the response to the liver-inductive Wnt2bb signal is achieved, we identify an endoderm-enriched, single-transmembrane protein, epithelial cell adhesion molecule (EpCAM), as an endoderm-specific Wnt derepressor in zebrafish. hi2151/epcam mutants exhibit defective liver development similar to that of prt/wnt2bb mutants. EpCAM directly binds Kremen1 and disrupts the Kremen1-Dickkopf2 (Dkk2) interaction, preventing Kremen1-Dkk2-mediated removal of lipoprotein-receptor-related protein 6 (Lrp6) from the cell surface. These data lead to a model in which EpCAM derepresses Lrp6 and cooperates with the Wnt ligand to activate Wnt signaling by stabilizing membrane Lrp6 and allowing Lrp6 to cluster into active signalosomes. Thus, EpCAM cell-autonomously licenses and cooperatively activates Wnt2bb signaling in endodermal cells. Our results identify EpCAM as the key molecule that confers on endodermal cells the competence to respond to the liver-inductive Wnt2bb signal, and define its functional mechanism.

    Cognitively diagnostic analysis using the G-DINA model in R

    Cognitive diagnosis models (CDMs) have increasingly been applied in education and other fields. This article provides an overview of a widely used CDM, the G-DINA model, and demonstrates a hands-on example of using multiple R packages for a series of CDM analyses. The overview gives a step-by-step illustration and explanation of Q-matrix evaluation, CDM calibration, model fit evaluation, item diagnosticity investigation, classification reliability examination, and result presentation and visualization. Some limitations of conducting CDM analyses in R are also discussed.
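
    For background, the item response function being calibrated here has a standard textbook form. The following is a sketch of the saturated G-DINA model with the identity link (notation follows the common formulation, not quoted from this article):

    \[
    P(X_j = 1 \mid \boldsymbol{\alpha}_{lj}^{*}) = \delta_{j0}
      + \sum_{k=1}^{K_j^{*}} \delta_{jk}\,\alpha_{lk}
      + \sum_{k=1}^{K_j^{*}-1} \sum_{k'=k+1}^{K_j^{*}} \delta_{jkk'}\,\alpha_{lk}\alpha_{lk'}
      + \cdots
      + \delta_{j12\cdots K_j^{*}} \prod_{k=1}^{K_j^{*}} \alpha_{lk},
    \]

    where \(K_j^{*}\) is the number of attributes required by item \(j\), \(\alpha_{lk}\) indicates mastery of attribute \(k\) in latent class \(l\), \(\delta_{j0}\) is the intercept, \(\delta_{jk}\) are main effects, and the remaining \(\delta\) terms are interaction effects.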

    Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation

    Code search aims to retrieve the most semantically relevant code snippet for a given natural language query. Recently, large-scale pre-trained code models such as CodeBERT and GraphCodeBERT have learned generic representations of source code and achieved substantial improvements on the code search task. However, high-quality sequence-level representations of code snippets have not been sufficiently explored. In this paper, we propose a new approach that combines multimodal contrastive learning with soft data augmentation for code search. Multimodal contrastive learning pulls together the representations of paired code and queries and pushes apart unpaired code snippets and queries. Data augmentation is critical in contrastive learning for obtaining high-quality representations, yet existing work considers only semantic-preserving augmentations of source code. We instead propose soft data augmentation, which dynamically masks and replaces some tokens in a code sequence to generate code snippets that are similar, but not necessarily semantic-preserving, as positive samples for paired queries. We conduct extensive experiments on a large-scale dataset covering six programming languages. The experimental results show that our approach significantly outperforms state-of-the-art methods. We also apply our techniques to several pre-trained models, such as RoBERTa and CodeBERT, and significantly boost their performance on the code search task.
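
    To make the two ideas in this abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: names such as soft_augment, mask_id, p_mask, p_replace, and the 0.05 temperature are illustrative assumptions. It shows (1) soft augmentation that randomly masks or replaces tokens, yielding positives that need not preserve semantics, and (2) an in-batch InfoNCE-style contrastive loss over paired code-query embeddings.

    import torch
    import torch.nn.functional as F

    def soft_augment(token_ids: torch.Tensor, mask_id: int, vocab_size: int,
                     p_mask: float = 0.1, p_replace: float = 0.1) -> torch.Tensor:
        """Return a 'soft' positive view: some tokens masked, some replaced."""
        ids = token_ids.clone()
        r = torch.rand_like(ids, dtype=torch.float)
        ids[r < p_mask] = mask_id                       # dynamic masking
        replace = (r >= p_mask) & (r < p_mask + p_replace)
        rand_ids = torch.randint(vocab_size, ids.shape)  # random replacement tokens
        ids[replace] = rand_ids[replace]                 # dynamic replacement
        return ids

    def info_nce(code_emb: torch.Tensor, query_emb: torch.Tensor,
                 temperature: float = 0.05) -> torch.Tensor:
        """Pull paired (code, query) embeddings together, push apart the rest."""
        code = F.normalize(code_emb, dim=-1)
        query = F.normalize(query_emb, dim=-1)
        logits = query @ code.t() / temperature          # all pairwise similarities
        targets = torch.arange(logits.size(0), device=logits.device)  # diagonal = true pairs
        return F.cross_entropy(logits, targets)

    In training, the augmented view of each snippet would be encoded by the same code encoder and treated as an extra positive for its paired query within the batch; the unpaired batch entries serve as in-batch negatives.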

    Residential demand for green electricity
