LinearCoFold and LinearCoPartition: Linear-Time Algorithms for Secondary Structure Prediction of Interacting RNA molecules
Many ncRNAs function through RNA-RNA interactions, so fast and reliable RNA
structure prediction that accounts for these interactions is valuable. Some
existing tools are less accurate because they omit the competition between
intermolecular and intramolecular base pairs, or they focus on predicting the
binding region rather than the complete secondary structure of the two
interacting strands. Vienna RNAcofold, which reduces the problem to
classical single-sequence folding by concatenating the two strands, scales
cubically with the combined sequence length and is slow for long sequences. To
address these issues, we present LinearCoFold, which predicts the complete
minimum free energy structure of two strands in linear runtime, and
LinearCoPartition, which calculates the cofolding partition function and base
pairing probabilities in linear runtime. LinearCoFold and LinearCoPartition
follow the concatenation strategy of RNAcofold but are orders of magnitude
faster. For example, on a sequence pair with a combined length of
26,190 nt, LinearCoFold is 86.8x faster than RNAcofold MFE mode (0.6 minutes
vs. 52.1 minutes), and LinearCoPartition is 642.3x faster than RNAcofold
partition function mode (1.8 minutes vs. 1156.2 minutes). Unlike local
algorithms, LinearCoFold and LinearCoPartition are global cofolding
algorithms without restriction on base-pair length. Surprisingly, their
predictions achieve higher PPV and sensitivity for intermolecular base
pairs. Furthermore, we apply LinearCoFold to predict the RNA-RNA interaction
between the SARS-CoV-2 gRNA and human U4 snRNA, which has been studied
experimentally, and observe that LinearCoFold's prediction correlates
better with the wet-lab results.
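The concatenation strategy described above can be illustrated with a toy sketch. The real tools use a full nearest-neighbor thermodynamic model (and, in LinearCoFold's case, a linear-time beam-search approximation); the sketch below substitutes simple Nussinov-style base-pair maximization, and all names are illustrative rather than the authors' code:

```python
# Toy sketch of the concatenation strategy for cofolding, assuming a
# Nussinov-style base-pair maximization model instead of the real
# thermodynamic model used by RNAcofold/LinearCoFold.

def can_pair(a, b):
    # Watson-Crick pairs plus the G-U wobble pair.
    return {a, b} in ({"A", "U"}, {"C", "G"}, {"G", "U"})

def nussinov_pairs(seq, min_loop=3):
    # dp[i][j] = max number of non-crossing base pairs in seq[i..j].
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]  # case: position i stays unpaired
            for k in range(i + min_loop + 1, j + 1):
                if can_pair(seq[i], seq[k]):
                    left = dp[i + 1][k - 1]
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

def cofold(strand_a, strand_b):
    # Concatenate the two strands and fold as one sequence; a real
    # implementation would also disable the minimum hairpin-loop
    # constraint across the nick between the strands.
    return nussinov_pairs(strand_a + strand_b)
```

In this simplified setting, intermolecular pairs are simply pairs whose two positions fall on different sides of the concatenation point.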
Quantum-Inspired Support Vector Machine
Support vector machine (SVM) is a particularly powerful and flexible
supervised learning model that analyzes data for both classification and
regression, whose standard algorithms scale polynomially with the
dimension of the data space and the number of data points. To tackle the big-data
challenge, a quantum SVM algorithm was proposed, which is claimed to achieve
exponential speedup for least squares SVM (LS-SVM). Here, inspired by the
quantum SVM algorithm, we present a quantum-inspired classical algorithm for
LS-SVM. In our approach, an improved fast sampling technique, namely indirect
sampling, is proposed for sampling the kernel matrix and performing
classification. We first
consider the LS-SVM with a linear kernel, and then discuss the generalization
of our method to non-linear kernels. Theoretical analysis shows that our
algorithm can classify with arbitrary success probability in runtime
logarithmic in both the dimension of the data space and the number of data
points, provided the data matrix has low rank, low condition number, and high
dimension, matching the runtime of the quantum SVM.
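For context, the classical LS-SVM that both the quantum and quantum-inspired algorithms approximate reduces training to a single linear system. The sketch below shows that baseline formulation with a linear kernel (solved exactly with a dense solver, not with the sampling technique of the paper; the function names are illustrative):

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    # Least-squares SVM dual: solve the bordered linear system
    #   [ 0   1^T         ] [b]     [0]
    #   [ 1   K + I/gamma ] [alpha] [y]
    # where K is the kernel matrix and gamma the regularization parameter.
    n = X.shape[0]
    K = X @ X.T                           # linear kernel
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)         # exact solve; the paper samples instead
    return sol[0], sol[1:]                # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new):
    # Decision function f(x) = sum_i alpha_i <x_i, x> + b, thresholded at 0.
    return np.sign(X_new @ X_train.T @ alpha + b)
```

The quantum-inspired approach replaces the exact solve with sampling-based low-rank approximation of K, which is where the logarithmic runtime comes from.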
A Graph Isomorphism Network with Weighted Multiple Aggregators for Speech Emotion Recognition
Speech emotion recognition (SER) is an essential part of human-computer
interaction. In this paper, we propose an SER network based on a Graph
Isomorphism Network with Weighted Multiple Aggregators (WMA-GIN), which can
effectively handle the problem of information confusion when neighbouring
nodes' features are aggregated together in the GIN structure. Moreover, a
Full-Adjacent (FA) layer is adopted to alleviate the over-squashing problem,
which exists in all Graph Neural Network (GNN) structures, including GIN.
Furthermore, a multi-phase attention mechanism and multi-loss training strategy
are employed to avoid losing useful emotional information in the stacked
WMA-GIN layers. We evaluated the performance of our proposed WMA-GIN on the
popular IEMOCAP dataset. The experimental results show that WMA-GIN outperforms
other GNN-based methods and is comparable to some advanced non-graph-based
methods, achieving 72.48% weighted accuracy (WA) and 67.72% unweighted
accuracy (UA).
Comment: Accepted by Interspeech 202
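To make the aggregation idea concrete, the sketch below shows a vanilla GIN layer with sum aggregation, plus a hypothetical weighted combination of several aggregators in the spirit of WMA (the actual WMA-GIN design, weights, and MLP are the paper's; everything here is an illustrative assumption):

```python
import numpy as np

def gin_layer(H, A, W, eps=0.0):
    # Vanilla GIN update: h_v' = ReLU(W^T ((1 + eps) * h_v + sum_{u in N(v)} h_u))
    # H: (n, d) node features, A: (n, n) adjacency matrix, W: (d, d') weights.
    agg = (1.0 + eps) * H + A @ H       # sum aggregation over neighbours
    return np.maximum(agg @ W, 0.0)     # single linear layer standing in for the MLP

def wma_aggregate(H, A, weights):
    # Hypothetical weighted multiple aggregators: a weighted combination of
    # sum, mean, and max neighbour aggregations (details differ in the paper).
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    s = A @ H                           # sum aggregator
    m = s / deg                         # mean aggregator
    mx = np.where(A[:, :, None] > 0, H[None, :, :], -np.inf).max(axis=1)
    mx = np.where(np.isfinite(mx), mx, 0.0)   # max aggregator (0 for isolated nodes)
    return weights[0] * s + weights[1] * m + weights[2] * mx
```

Combining several aggregators lets the network distinguish neighbourhoods that a single sum would conflate, which is the "information confusion" the abstract refers to.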