
    BioNMT: A Biomedical Neural Machine Translation System

    To address the translation of specialized vocabulary in the biomedical field and to help biological researchers translate and understand foreign-language documents, we propose a semantic disambiguation model and external dictionaries to build a novel translation model for biomedical texts based on the Transformer. The proposed biomedical neural machine translation system (BioNMT) adopts a sequence-to-sequence translation framework built on deep neural networks. To construct a specialized vocabulary of biology and medicine, a hybrid corpus was obtained with a crawler system extracting from a universal corpus and a biomedical corpus. Experimental results showed that BioNMT, which combines a professional biological dictionary with the Transformer model, increased the bilingual evaluation understudy (BLEU) score by 14.14% and reduced perplexity by 40%. Compared with the Google and Baidu translation systems, BioNMT produced better paragraph-level translations and resolved the ambiguity of biomedical named entities, greatly improving translation quality.
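    As an illustration of the external-dictionary idea described in the abstract, the sketch below shows one plausible way to tag known biomedical terms with a preferred domain translation before handing the sentence to a generic Transformer translator. The dictionary entries and the <term> marker format are hypothetical assumptions for illustration, not the authors' implementation.

    ```python
    # A minimal sketch (not BioNMT's actual code) of dictionary-based
    # disambiguation: known biomedical terms are tagged with their preferred
    # domain translation so a downstream NMT model (or a post-editing step)
    # can force the domain-specific sense. All entries are hypothetical.
    BIOMED_DICT = {
        "cell": "细胞",     # biology sense, not "prison cell"
        "culture": "培养",  # cell culture, not "society"
    }

    def tag_biomedical_terms(sentence: str) -> str:
        """Wrap dictionary terms in markers carrying the target translation."""
        tagged = []
        for tok in sentence.split():
            key = tok.lower().strip(".,;")
            if key in BIOMED_DICT:
                tagged.append(f"<term target='{BIOMED_DICT[key]}'>{tok}</term>")
            else:
                tagged.append(tok)
        return " ".join(tagged)

    print(tag_biomedical_terms("The culture medium keeps the cell alive."))
    ```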

    FREDPC: A Feasible Residual Error-Based Density Peak Clustering Algorithm With the Fragment Merging Strategy

    Funding agencies: National Natural Science Foundation of China (10.13039/501100001809); Science and Technology Development Foundation of Jilin Province; Science Foundation of Education Department of Guangdong Province; Social Science Foundation of Education Department of Jilin Province; Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi’an University of Technology. Peer reviewed. Publisher PDF.

    DANet: Temporal Action Localization with Double Attention

    Temporal action localization (TAL) aims to predict action instance categories in videos and to identify their start and end times. However, existing Transformer-based backbones focus only on global or only on local features, resulting in a loss of information. In addition, both global and local self-attention mechanisms tend to average embeddings, thereby weakening the preservation of critical features. To address these two problems, we propose two attention mechanisms, multi-headed local self-attention (MLSA) and max-average pooling attention (MA), to extract local and global features simultaneously. In MA, max-pooling selects the most critical information from local clip embeddings instead of averaging them, and average-pooling aggregates global features. We use MLSA to model local temporal context. To strengthen the collaboration between MA and MLSA, we propose the double attention block (DABlock), comprising MA and MLSA. Finally, we propose the double attention network (DANet), composed of DABlocks and other advanced blocks. To evaluate DANet’s performance, we conduct extensive experiments on the TAL task. Experimental results demonstrate that DANet outperforms other state-of-the-art models on all datasets, and ablation studies confirm the effectiveness of the proposed MLSA and MA. Compared with backbones built on convolution and on a global Transformer, the DABlock composed of MLSA and MA performs better, improving overall average mAP by 8% and 0.5%, respectively.
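    To make the max-average pooling idea concrete, here is a minimal PyTorch sketch. The gated fusion and window size are illustrative assumptions rather than the paper's exact MA design: max-pooling picks the most salient feature in each local window of clip embeddings, average-pooling summarizes the global context, and a learned gate mixes the two.

    ```python
    import torch
    import torch.nn as nn

    class MaxAveragePoolingAttention(nn.Module):
        """Sketch of max-average pooling attention (assumed design, not the
        paper's exact MA block): local max-pooling preserves critical
        features, global average-pooling aggregates context."""

        def __init__(self, dim: int, window: int = 3):
            super().__init__()
            # Sliding max over the temporal axis keeps the most salient
            # feature in each local window instead of averaging it away.
            self.local_max = nn.MaxPool1d(window, stride=1, padding=window // 2)
            # Learned gate deciding how to mix local and global signals.
            self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, dim) clip embeddings.
            local = self.local_max(x.transpose(1, 2)).transpose(1, 2)
            global_avg = x.mean(dim=1, keepdim=True).expand_as(x)
            g = self.gate(torch.cat([local, global_avg], dim=-1))
            return g * local + (1 - g) * global_avg

    feats = torch.randn(2, 16, 64)  # 2 videos, 16 clips, 64-dim embeddings
    print(MaxAveragePoolingAttention(64)(feats).shape)  # torch.Size([2, 16, 64])
    ```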