72 research outputs found

    From criticism to imitation: Rethinking Tuwei culture in the Chinese cultural order

    The cultural order dominated by China’s mainstream society has long criticized the Tuwei culture popular on the Chinese Internet, along with its rural background. In recent years, however, mainstream media and official discourse have joined Tuwei hashtag discussions and even borrowed its cultural forms. This paper takes the popular "Cao County" hashtag video produced in May 2021 as a case to examine the narrative characteristics of Tuwei culture. Through critical discourse analysis of netizens’ and the media’s discussions and comments on this hashtag, the paper traces how different ideological perspectives on this culture form and spread, and uncovers the reasons behind Tuwei culture’s growing acceptance by mainstream Chinese popular culture.

    One-Shot Relational Learning for Knowledge Graphs

    Knowledge graphs (KGs) are key components of various natural language processing applications. To further expand KG coverage, previous studies on knowledge graph completion usually require a large number of training instances for each relation. However, we observe that long-tail relations are actually more common in KGs, and newly added relations often do not have many known triples for training. In this work, we aim to predict new facts under a challenging setting where only one training instance is available. We propose a one-shot relational learning framework which utilizes the knowledge extracted by embedding models and learns a matching metric that considers both the learned embeddings and one-hop graph structures. Empirically, our model yields considerable performance improvements over existing embedding models and also eliminates the need to re-train the embedding models when dealing with newly added relations. (Comment: EMNLP 2018)
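    As a rough illustration of the matching idea, the sketch below encodes each entity from pretrained embeddings plus its one-hop neighbors and scores a candidate pair against the single support pair by cosine similarity. All names, dimensions, and the cosine-based matcher are illustrative assumptions and do not reproduce the paper's actual model, which learns a dedicated matching network.

```python
# Minimal sketch of one-shot relation matching with a one-hop neighbor encoder.
# Names, dimensions, and the cosine-similarity matcher are illustrative choices,
# not the published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborEncoder(nn.Module):
    """Encode an entity from its embedding plus its one-hop (relation, neighbor) pairs."""
    def __init__(self, num_entities, num_relations, dim):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # assumed pretrained (e.g. TransE-style)
        self.rel = nn.Embedding(num_relations, dim)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, entities, nbr_rels, nbr_ents):
        # entities: (B,); nbr_rels, nbr_ents: (B, K) padded one-hop neighbor indices
        nbr = torch.cat([self.rel(nbr_rels), self.ent(nbr_ents)], dim=-1)
        nbr = torch.tanh(self.proj(nbr)).mean(dim=1)          # aggregate the one-hop structure
        return self.ent(entities) + nbr                        # combine entity and structure

def pair_vector(encoder, heads, tails, h_nbrs, t_nbrs):
    """Represent a (head, tail) pair by concatenating both structure-aware encodings."""
    return torch.cat([encoder(heads, *h_nbrs), encoder(tails, *t_nbrs)], dim=-1)

# Matching: score a candidate query pair against the single support (one-shot) pair.
encoder = NeighborEncoder(num_entities=1000, num_relations=50, dim=64)
support = pair_vector(encoder,
                      torch.tensor([1]), torch.tensor([2]),
                      (torch.tensor([[3, 4]]), torch.tensor([[5, 6]])),
                      (torch.tensor([[7, 8]]), torch.tensor([[9, 10]])))
query = pair_vector(encoder,
                    torch.tensor([1]), torch.tensor([42]),
                    (torch.tensor([[3, 4]]), torch.tensor([[5, 6]])),
                    (torch.tensor([[11, 12]]), torch.tensor([[13, 14]])))
score = F.cosine_similarity(support, query, dim=-1)            # higher = more likely a true triple
```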

    Learning Multiscale Consistency for Self-supervised Electron Microscopy Instance Segmentation

    Instance segmentation in electron microscopy (EM) volumes is challenging due to complex instance shapes and sparse annotations. Self-supervised learning helps, but it still struggles with the intricate visual patterns of EM data. To address this, we propose a pretraining framework that enhances multiscale consistency in EM volumes. Our approach leverages a Siamese network architecture, integrating both strong and weak data augmentations to effectively extract multiscale features. We uphold voxel-level coherence by reconstructing the original input data from these augmented instances. Furthermore, we incorporate cross-attention mechanisms to facilitate fine-grained feature alignment between these augmentations. Finally, we apply contrastive learning across a feature pyramid, allowing us to distill distinctive representations spanning various scales. After pretraining on four large-scale EM datasets, our framework significantly improves downstream tasks such as neuron and mitochondria segmentation, especially when finetuning data is limited. It effectively captures voxel- and feature-level consistency, showing promise for learning transferable representations for EM analysis.
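    The sketch below gives a compressed, illustrative version of the pretraining objective, assuming a Siamese 3D encoder-decoder fed two augmented views of the same EM crop: a voxel-level reconstruction loss back to the original volume plus a simplified similarity term over a two-level feature pyramid. Layer sizes, augmentations, and loss weights are placeholders, and the cross-attention alignment module described in the abstract is omitted; this is not the authors' implementation.

```python
# Compressed sketch of multiscale-consistency pretraining on EM volumes.
# Architecture, augmentations, and losses are illustrative stand-ins only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder3D(nn.Module):
    """Tiny 3D encoder returning a two-level feature pyramid."""
    def __init__(self, ch=16):
        super().__init__()
        self.c1 = nn.Conv3d(1, ch, 3, stride=2, padding=1)
        self.c2 = nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1)

    def forward(self, x):
        f1 = F.relu(self.c1(x))
        f2 = F.relu(self.c2(f1))
        return [f1, f2]

class Decoder3D(nn.Module):
    """Upsamples the deepest features back to voxel space for reconstruction."""
    def __init__(self, ch=16):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, 2, stride=2),
            nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 2, stride=2),
        )

    def forward(self, feats):
        return self.up(feats[-1])

def feature_consistency(f_a, f_b):
    """Negative cosine similarity of pooled features; a simplified stand-in for the
    contrastive objective (no negatives or projection head)."""
    za = F.adaptive_avg_pool3d(f_a, 1).flatten(1)
    zb = F.adaptive_avg_pool3d(f_b, 1).flatten(1)
    return 1 - F.cosine_similarity(za, zb, dim=-1).mean()

encoder, decoder = Encoder3D(), Decoder3D()
volume = torch.rand(2, 1, 32, 64, 64)                 # (batch, channel, D, H, W) EM crop
weak = volume + 0.01 * torch.randn_like(volume)       # stand-ins for weak/strong augmentations
strong = volume + 0.10 * torch.randn_like(volume)

feats_w, feats_s = encoder(weak), encoder(strong)
# Voxel-level consistency: reconstruct the *original* volume from each augmented view.
recon_loss = F.mse_loss(decoder(feats_w), volume) + F.mse_loss(decoder(feats_s), volume)
# Feature-level consistency across the pyramid scales.
feat_loss = sum(feature_consistency(a, b) for a, b in zip(feats_w, feats_s))
loss = recon_loss + feat_loss
```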