1,191 research outputs found

    A PDTB-Styled End-to-End Discourse Parser

    We have developed a full discourse parser in the Penn Discourse Treebank (PDTB) style. Our trained parser first identifies all discourse and non-discourse relations, locates and labels their arguments, and then classifies their relation types. When appropriate, the attribution spans of these relations are also determined. We present a comprehensive evaluation from both component-wise and error-cascading perspectives. Comment: 15 pages, 5 figures, 7 tables
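    The abstract describes a cascaded pipeline: connective/relation identification, argument labeling, relation-type classification, and attribution detection. The Python sketch below only illustrates that cascade; every function name is a hypothetical placeholder, not the authors' implementation.

        # Hypothetical sketch of the cascaded PDTB-style parsing steps described
        # in the abstract; the step functions are stand-ins for trained components.
        def parse_discourse(text):
            relations = []
            for conn in identify_connectives(text):           # discourse vs. non-discourse usage
                arg1, arg2 = label_arguments(text, conn)      # locate and label the two argument spans
                sense = classify_sense(conn, arg1, arg2)      # relation type (sense)
                attrib = find_attribution(text, arg1, arg2)   # attribution span, when appropriate
                relations.append((conn, arg1, arg2, sense, attrib))
            return relations

        # Placeholder components (stubs), so the sketch runs end to end.
        def identify_connectives(text): return []
        def label_arguments(text, conn): return "", ""
        def classify_sense(conn, a1, a2): return "Expansion"
        def find_attribution(text, a1, a2): return None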

    Stochastic theta methods for random periodic solution of stochastic differential equations under non-globally Lipschitz conditions

    This work focuses on the numerical approximation of random periodic solutions of stochastic differential equations (SDEs). Under non-globally Lipschitz conditions, we prove the existence and uniqueness of random periodic solutions for the considered equations and for their numerical approximations generated by the stochastic theta (ST) methods with theta in (1/2, 1]. It is shown that the random periodic solution of each ST method converges strongly in the mean square sense to that of the SDEs for all step sizes. More precisely, the mean square convergence order is 1/2 for SDEs with multiplicative noise and 1 for SDEs with additive noise. Numerical results are finally reported to confirm these theoretical findings.
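    For context, a stochastic theta step for dX_t = f(X_t) dt + g(X_t) dW_t treats the drift implicitly: X_{n+1} = X_n + (1 - theta) f(X_n) h + theta f(X_{n+1}) h + g(X_n) dW_n. The sketch below is only a toy illustration of that update (the implicit equation is solved by a simple fixed-point iteration, an illustrative choice rather than the paper's construction), applied to an example with a non-globally Lipschitz drift and additive noise.

        import numpy as np

        def stochastic_theta_step(x, f, g, h, theta, dW, iters=50):
            """One ST step: x_new = x + (1-theta)*f(x)*h + theta*f(x_new)*h + g(x)*dW."""
            explicit_part = x + (1.0 - theta) * f(x) * h + g(x) * dW
            x_new = x                            # initial guess for the implicit solve
            for _ in range(iters):               # fixed-point iteration (illustrative only)
                x_new = explicit_part + theta * f(x_new) * h
            return x_new

        def simulate(x0, f, g, h, theta, n_steps, rng):
            x, path = x0, [x0]
            for _ in range(n_steps):
                dW = rng.normal(0.0, np.sqrt(h))           # Brownian increment over one step
                x = stochastic_theta_step(x, f, g, h, theta, dW)
                path.append(x)
            return np.array(path)

        # Toy example: cubic (non-globally Lipschitz) drift with additive noise.
        rng = np.random.default_rng(0)
        path = simulate(x0=1.0, f=lambda x: x - x**3, g=lambda x: 0.5,
                        h=0.01, theta=1.0, n_steps=1000, rng=rng)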

    Improving Biomedical Entity Linking with Retrieval-enhanced Learning

    Biomedical entity linking (BioEL) has achieved remarkable progress with the help of pre-trained language models. However, existing BioEL methods usually struggle to handle rare and difficult entities due to the long-tailed distribution. To address this limitation, we introduce a new scheme, kNN-BioEL, which provides a BioEL model with the ability to reference similar instances from the entire training corpus as clues for prediction, thus improving its generalization capabilities. Moreover, we design a contrastive learning objective with dynamic hard negative sampling (DHNS) that improves the quality of the retrieved neighbors during inference. Extensive experimental results show that kNN-BioEL outperforms state-of-the-art baselines on several datasets. Comment: Accepted by ICASSP 2024
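    As a rough illustration of the retrieval-enhanced idea (referencing similar training instances as clues at prediction time), the sketch below interpolates a model's entity scores with a distribution built from the k nearest training mentions. The function names, the softmax voting, and the interpolation weight are assumptions for illustration, not the paper's kNN-BioEL implementation.

        import numpy as np

        def knn_entity_distribution(query_emb, train_embs, train_labels,
                                    n_entities, k=8, temperature=0.1):
            """Distribution over entities from the k most similar training mentions."""
            sims = train_embs @ query_emb               # cosine similarities if rows are L2-normalized
            top = np.argsort(-sims)[:k]                 # indices of the k nearest neighbors
            weights = np.exp(sims[top] / temperature)
            weights /= weights.sum()
            dist = np.zeros(n_entities)
            for w, idx in zip(weights, top):
                dist[train_labels[idx]] += w            # each neighbor votes for its gold entity
            return dist

        def link_entity(model_probs, query_emb, train_embs, train_labels, lam=0.5, k=8):
            """Interpolate model probabilities with the kNN distribution (illustrative rule)."""
            knn_dist = knn_entity_distribution(query_emb, train_embs, train_labels,
                                               n_entities=len(model_probs), k=k)
            return int(np.argmax(lam * model_probs + (1.0 - lam) * knn_dist))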

    PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering

    In this paper, we focus on the problem of Medical Visual Question Answering (MedVQA), which is crucial for efficiently interpreting medical images with vital, clinically relevant information. Firstly, we reframe MedVQA as a generation task that naturally follows human-machine interaction, and we propose a generative model for medical visual understanding that aligns visual information from a pre-trained vision encoder with a large language model. Secondly, we establish a scalable pipeline to construct a large-scale medical visual question-answering dataset, named PMC-VQA, which contains 227k VQA pairs over 149k images covering various modalities and diseases. Thirdly, we pre-train our proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, e.g., VQA-RAD and SLAKE, outperforming existing work by a large margin. Additionally, we propose a manually verified test set that is significantly more challenging, on which even the best models struggle.
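    To make the "aligning a vision encoder with a large language model" step concrete, here is a schematic PyTorch sketch in which pooled image features are projected into the language model's embedding space and prepended to the question embeddings before answer generation. The module structure, dimensions, and token count are illustrative assumptions, not the PMC-VQA implementation.

        import torch
        import torch.nn as nn

        class VisualPrefixVQA(nn.Module):
            """Projects image features to 'visual tokens' prepended to the question."""
            def __init__(self, vision_dim=768, llm_dim=1024, n_visual_tokens=8):
                super().__init__()
                self.projector = nn.Linear(vision_dim, llm_dim * n_visual_tokens)
                self.n_visual_tokens, self.llm_dim = n_visual_tokens, llm_dim

            def forward(self, image_features, question_embeddings):
                # image_features: (batch, vision_dim); question_embeddings: (batch, seq, llm_dim)
                prefix = self.projector(image_features)
                prefix = prefix.view(-1, self.n_visual_tokens, self.llm_dim)
                # The concatenated sequence would be fed to the language model
                # (e.g., as inputs_embeds of a causal LM) to generate the answer.
                return torch.cat([prefix, question_embeddings], dim=1)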