
    The performance of the base (CNN+BiLSTM+MHSA) model with different class-weight calculation methods and MLSMOTE on the test set.

    The highest value is highlighted in bold. On all performance metrics, Base+CW (our model) is significantly better than the other methods. The mean ± standard deviation on 5-fold cross-validation is shown for each model. *, **, *** and **** mean that CNN+BiLSTM+MHSA (our model) is significantly better at P-value < 0.05, P-value < 0.01, P-value < 0.001 and P-value < 0.0001 (t-test), respectively. (DOCX)
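The table above compares class-weighting schemes for the imbalanced multi-label data. As an illustration of one common approach (inverse-frequency weighting; the function name and exact formula are assumptions, not necessarily the paper's method), a minimal sketch:

```python
from collections import Counter

def inverse_frequency_weights(label_sets, num_classes):
    """Per-class weights inversely proportional to label frequency.

    label_sets: one set of class indices per sample (multi-label).
    Illustrative scheme only; the paper compares several such methods.
    """
    counts = Counter(c for labels in label_sets for c in labels)
    n = len(label_sets)
    # weight = total samples / (num_classes * class count);
    # classes never seen in training fall back to a weight of 1.0.
    return {c: n / (num_classes * counts[c]) if counts[c] else 1.0
            for c in range(num_classes)}

labels = [{0}, {0, 1}, {0, 2}, {1}]
weights = inverse_frequency_weights(labels, 3)
print(weights)  # rarest class (2) gets the largest weight
```

Rare therapeutic-peptide classes thus contribute more to the loss, which is the usual motivation for class weighting on skewed label distributions.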

    The framework of PrMFTP.

    First, peptide sequences are encoded as numeric input vectors and converted into a fixed-size matrix through the embedding layer. Second, the DNN layer, a combination of multi-scale CNN and BiLSTM architectures, is used to capture sequence features. Third, a multi-head self-attention mechanism (MHSA) is used to make the model attend to the more important and discriminating sequence features for the prediction of multi-functional therapeutic peptides. Finally, the resulting feature matrix is fed into a classification layer, which scores the different therapeutic peptide classes to produce the predicted result.
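The multi-head self-attention step above can be sketched in NumPy. This is a generic scaled dot-product formulation with random weights standing in for learned parameters, not the paper's trained implementation; shapes and the head count are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, num_heads, rng):
    """Scaled dot-product multi-head self-attention over a sequence.

    X: (seq_len, d_model) feature matrix, e.g. the CNN+BiLSTM output.
    Projection weights are random here, purely for illustration.
    """
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        # Per-head query/key/value projections (learned in the real model).
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) * 0.1
                      for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Each sequence position attends over all positions.
        A = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
        heads.append(A @ V)
    # Concatenating the heads restores the (seq_len, d_model) shape.
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 64))  # e.g. 50 residues, 64-dim features
out = multi_head_self_attention(X, num_heads=8, rng=rng)
print(out.shape)  # (50, 64)
```

Because the attention weights are computed from the features themselves, positions carrying discriminative motifs can dominate the weighted sum, which is the property the caption attributes to the MHSA layer.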

    The performance of PrMFTP and its variants on the test set.

    The highest value is highlighted in bold. w/o is an abbreviation of "without". The mean ± standard deviation on 5-fold cross-validation is shown for each model. *, **, *** and **** mean that PrMFTP is significantly better at P-value < 0.05, P-value < 0.01, P-value < 0.001 and P-value < 0.0001 (t-test), respectively.

    Class weights calculated by different methods.

    (PDF)