
    The Effect of Attractive Interactions and Macromolecular Crowding on Crystallins Association.

    In living systems, proteins are typically found in crowded environments where their effective interactions strongly depend on the surrounding medium. Yet their association and dissociation need to be robustly controlled in order to enable biological function. Uncontrolled protein aggregation often causes disease. For instance, cataract is caused by the clustering of lens proteins, i.e., crystallins, resulting in enhanced light scattering and impaired vision or blindness. To investigate the molecular origins of cataract formation and to design efficient treatments, a better understanding of crystallin association in macromolecularly crowded environments is needed. Here we present a theoretical study of simple coarse-grained colloidal models to characterize the general features of how the association equilibrium of proteins depends on the magnitude of intermolecular attraction. By comparing the analytic results to the available experimental data on the osmotic pressure in crystallin solutions, we identify the effective parameter regimes applicable to crystallins. Moreover, the combination of two models allows us to predict that the number of binding sites on crystallins is small, i.e., one to three per protein, which differs from previous estimates. We further observe that the crowding factor is sensitive to the size asymmetry between the reactants and crowding agents, the shape of the protein clusters, and small variations of intermolecular attraction. Our work may provide general guidelines on how to steer protein interactions in order to control their association.
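
    The dependence of the association equilibrium on attraction strength and crowding can be illustrated with a simple mass-action calculation. The Python sketch below treats only a monomer-dimer equilibrium 2M <-> D in which a crowding factor rescales the effective association constant; the functional form and all parameter values are illustrative assumptions, not the coarse-grained models or fitted crystallin parameters of the study.

    # Minimal sketch (not the paper's model): monomer-dimer association
    # 2M <-> D where a crowding factor gamma rescales the effective
    # association constant, K_eff = gamma * K0.  Parameter values are
    # illustrative only.
    import numpy as np

    def dimer_fraction(c_total, K0, gamma):
        """Fraction of protein in dimers at total concentration c_total,
        bare association constant K0, and crowding factor gamma.
        Mass action: K_eff = [D]/[M]^2 with c_total = [M] + 2[D]."""
        K = gamma * K0
        # Solve 2*K*m**2 + m - c_total = 0 for the free-monomer concentration m.
        m = (-1.0 + np.sqrt(1.0 + 8.0 * K * c_total)) / (4.0 * K)
        return 2.0 * K * m**2 / c_total

    # Crowding typically increases the effective attraction, shifting the
    # equilibrium toward the associated state.
    for gamma in (1.0, 5.0, 20.0):
        print(gamma, dimer_fraction(c_total=1e-3, K0=1e3, gamma=gamma))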

    Transition-based directed graph construction for emotion-cause pair extraction

    Emotion-cause pair extraction aims to extract all potential pairs of emotions and corresponding causes from unannotated emotion text. Most existing methods adopt a pipelined framework that identifies emotions and extracts causes separately, leading to error propagation. To address this issue, we propose a transition-based model that transforms the task into a procedure of parsing-like directed graph construction. The proposed model incrementally generates the directed graph with labeled edges based on a sequence of actions, from which we can recognize emotions and the corresponding causes simultaneously, thereby optimizing the separate subtasks jointly and maximizing their mutual benefits. Experimental results show that our approach achieves the best performance, outperforming state-of-the-art methods by 6.71% (p<0.01) in F1 measure.
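
    As an illustration of the parsing-like construction described above, the sketch below implements a toy transition system that builds a labeled directed graph from a sequence of actions. The action set (SHIFT plus a labeled ARC) and the clause labels are hypothetical simplifications, not the exact transitions defined in the paper.

    # Toy transition system that incrementally builds a labeled directed
    # graph over clause indices.  The SHIFT/ARC action inventory is a
    # hypothetical simplification for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class GraphState:
        buffer: list                                   # clauses still to process
        processed: list = field(default_factory=list)  # clauses already consumed
        edges: list = field(default_factory=list)      # (head, dependent, label)

        def shift(self):
            # Move the next clause from the buffer to the processed list.
            self.processed.append(self.buffer.pop(0))

        def arc(self, head, dep, label):
            # Add a labeled directed edge, e.g. an emotion clause linked
            # to its cause clause.
            self.edges.append((head, dep, label))

    # Hypothetical action sequence over four clauses.
    state = GraphState(buffer=[0, 1, 2, 3])
    for action in ["SHIFT", "SHIFT", ("ARC", 1, 0, "cause"), "SHIFT", "SHIFT"]:
        if action == "SHIFT":
            state.shift()
        else:
            _, head, dep, label = action
            state.arc(head, dep, label)
    print(state.edges)  # [(1, 0, 'cause')]: emotion clause 1 with cause clause 0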

    VCKSCF: Efficient Verifiable Conjunctive Keyword Search Based on Cuckoo Filter for Cloud Storage

    Searchable Symmetric Encryption (SSE) remains one of the hot topics in the field of cloud storage technology. However, malicious servers may intentionally return incorrect search results, which brings significant security risks to users. Therefore, verifiable searchable encryption has emerged. In the meantime, single-keyword queries limit the applications of searchable encryption, so more expressive searchable encryption schemes are desirable. In this paper, we propose a verifiable conjunctive keyword search scheme based on the Cuckoo filter (VCKSCF), which significantly reduces verification and storage overhead. Security analysis indicates that the proposed scheme achieves indistinguishability under chosen-keyword attack as well as unforgeability of proofs and search tokens. Meanwhile, the experimental evaluation demonstrates that it achieves preferable performance in real-world settings.
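
    To make the role of the Cuckoo filter concrete, the sketch below implements a minimal Cuckoo filter (insert and lookup only) using partial-key cuckoo hashing. It illustrates the underlying membership structure, not the paper's verifiable construction; the bucket size, fingerprint length, and hash choices are simplifications assumed for readability.

    # Minimal Cuckoo filter sketch: fingerprints stored in one of two
    # candidate buckets, with eviction on insertion when both are full.
    import hashlib
    import random

    BUCKETS = 1 << 10      # number of buckets (power of two)
    BUCKET_SIZE = 4        # fingerprints per bucket
    MAX_KICKS = 500        # eviction attempts before giving up

    table = [[] for _ in range(BUCKETS)]

    def _h(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    def fingerprint(item: str) -> int:
        return (_h(b"fp:" + item.encode()) & 0xFFFF) or 1   # 16-bit, nonzero

    def index1(item: str) -> int:
        return _h(item.encode()) % BUCKETS

    def index2(i: int, fp: int) -> int:
        # Partial-key cuckoo hashing: the alternate bucket depends only on
        # the current bucket and the fingerprint, so it is an involution.
        return (i ^ _h(str(fp).encode())) % BUCKETS

    def insert(item: str) -> bool:
        fp, i1 = fingerprint(item), index1(item)
        i2 = index2(i1, fp)
        for i in (i1, i2):
            if len(table[i]) < BUCKET_SIZE:
                table[i].append(fp)
                return True
        # Both buckets full: evict a resident fingerprint and relocate it.
        i = random.choice((i1, i2))
        for _ in range(MAX_KICKS):
            victim = random.randrange(len(table[i]))
            fp, table[i][victim] = table[i][victim], fp
            i = index2(i, fp)
            if len(table[i]) < BUCKET_SIZE:
                table[i].append(fp)
                return True
        return False

    def lookup(item: str) -> bool:
        fp, i1 = fingerprint(item), index1(item)
        return fp in table[i1] or fp in table[index2(i1, fp)]

    insert("keyword:cloud")
    print(lookup("keyword:cloud"), lookup("keyword:absent"))  # True False (w.h.p.)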

    A knowledge regularized hierarchical approach for emotion cause analysis

    Emotion cause analysis, which aims to identify the reasons behind emotions, is a key topic in sentiment analysis. A variety of neural network models have been proposed recently; however, these models mostly focus on learning architectures over local textual information, ignoring discourse structure and prior knowledge, which play crucial roles in human text comprehension. In this paper, we propose a new method for emotion cause extraction that combines a hierarchical neural model with knowledge-based regularizations, incorporating discourse context information and constraining the parameters with a sentiment lexicon and common knowledge. The experimental results demonstrate that our proposed method achieves state-of-the-art performance on two public datasets in different languages (Chinese and English), outperforming a number of competitive baselines by at least 2.08% in F-measure.
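
    The general form of such a knowledge-based regularization can be sketched as a task loss plus a penalty term. The example below (Python/NumPy) adds a term that encourages clauses containing sentiment-lexicon words to receive higher cause scores; the lexicon mask, the weighting, and the loss form are illustrative assumptions, not the paper's exact formulation.

    # Hedged sketch: binary cross-entropy over clause-level cause scores
    # plus a lexicon-based regularization term.
    import numpy as np

    def regularized_loss(scores, labels, lexicon_mask, lam=0.1):
        """scores: predicted cause probabilities per clause, shape (n,)
        labels: gold 0/1 cause labels, shape (n,)
        lexicon_mask: 1 where a clause contains a sentiment-lexicon word
        lam: weight of the knowledge regularizer (assumed value)."""
        eps = 1e-12
        bce = -np.mean(labels * np.log(scores + eps)
                       + (1 - labels) * np.log(1 - scores + eps))
        # Knowledge term: penalize low scores on lexicon-marked clauses.
        knowledge = np.mean(lexicon_mask * (1.0 - scores))
        return bce + lam * knowledge

    scores = np.array([0.9, 0.2, 0.6])
    labels = np.array([1, 0, 1])
    lexicon_mask = np.array([1, 0, 1])
    print(regularized_loss(scores, labels, lexicon_mask))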

    CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model

    Code Large Language Models (Code LLMs) have gained significant attention in industry due to their wide range of applications across the full software engineering lifecycle. However, the effectiveness of existing models at understanding non-English inputs for multi-lingual code-related tasks is still far from well studied. This paper introduces CodeFuse-13B, an open-source pre-trained code LLM. It is specifically designed for code-related tasks with both English and Chinese prompts and supports over 40 programming languages. CodeFuse achieves its effectiveness by utilizing a high-quality pre-training dataset that is carefully filtered by program analyzers and optimized during the training process. Extensive experiments are conducted using real-world usage scenarios, the industry-standard benchmark HumanEval-x, and the specially designed CodeFuseEval for Chinese prompts. To assess the effectiveness of CodeFuse, we collected valuable human feedback from Ant Group's software development process, where CodeFuse has been successfully deployed. The results demonstrate that CodeFuse-13B achieves a HumanEval pass@1 score of 37.10%, positioning it as one of the top multi-lingual code LLMs of similar parameter size. In practical scenarios such as code generation, code translation, code commenting, and test case generation, CodeFuse performs better than other models when confronted with Chinese prompts.
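
    The HumanEval pass@k metric quoted above is commonly computed with the unbiased estimator from the original HumanEval/Codex evaluation: given n sampled solutions per problem of which c pass the unit tests, pass@k = 1 - C(n-c, k)/C(n, k). The sketch below shows that estimator; the sample counts are hypothetical and do not correspond to CodeFuse's reported setup.

    # Unbiased pass@k estimator used with HumanEval-style benchmarks.
    from math import comb

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Probability that at least one of k samples passes, given that
        c of the n generated samples are correct."""
        if n - c < k:
            return 1.0
        return 1.0 - comb(n - c, k) / comb(n, k)

    # Hypothetical example: 20 samples per problem, 7 correct -> pass@1 = 0.35.
    print(pass_at_k(n=20, c=7, k=1))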