Knowledge Rich Natural Language Queries over Structured Biological Databases
Increasingly, keyword, natural language, and NoSQL queries are being used for
information retrieval from traditional as well as non-traditional databases
such as web, document, image, GIS, legal, and health databases. While their
popularity is undeniable for obvious reasons, their engineering is far from
simple. For the most part, the semantics- and intent-preserving mapping of a
well-understood natural language query over a structured database schema to a
structured query language remains a difficult task, and research to tame the
complexity is intense. In this paper, we propose a multi-level knowledge-based
middleware that facilitates such mappings by separating the conceptual level
from the physical level. We augment these multi-level abstractions with a
concept reasoner and a query strategy engine to dynamically link arbitrary
natural language querying to well-defined structured queries. We demonstrate
the feasibility of our approach by presenting a Datalog-based prototype system,
called BioSmart, that can compute responses to arbitrary natural language
queries over arbitrary databases once a syntactic classification of the natural
language query is made.
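The idea of routing a syntactically classified query through a conceptual layer onto a physical schema can be sketched roughly as follows. This is an illustrative toy, not BioSmart's Datalog implementation; the names `CONCEPT_MAP`, `QUERY_TEMPLATES`, and `translate` are assumptions introduced here.

```python
# Toy sketch: a concept-level mapping separates the conceptual level (what the
# user asks about) from the physical level (tables and columns). All names here
# are hypothetical, not from the BioSmart system.

CONCEPT_MAP = {
    "genes": ("gene", "symbol"),       # concept -> (physical table, column)
    "proteins": ("protein", "name"),
}

QUERY_TEMPLATES = {
    # syntactic class of the NL query -> structured-query template
    "list_entities": "SELECT {col} FROM {table}",
    "filter_by_value": "SELECT {col} FROM {table} WHERE {col} = '{value}'",
}

def translate(query_class, concept, value=None):
    """Map a classified NL query onto the physical schema via the concept level."""
    table, col = CONCEPT_MAP[concept]
    return QUERY_TEMPLATES[query_class].format(table=table, col=col, value=value)

# e.g. "show me all genes" classified as list_entities over the 'genes' concept
print(translate("list_entities", "genes"))            # SELECT symbol FROM gene
print(translate("filter_by_value", "proteins", "p53"))
```

A concept reasoner would populate something like `CONCEPT_MAP` dynamically rather than hard-coding it, which is what lets the same middleware serve arbitrary databases.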
Deep fusion of multi-channel neurophysiological signal for emotion recognition and monitoring
How to fuse multi-channel neurophysiological signals for emotion recognition is an emerging hot research topic in the community of Computational Psychophysiology. However, prior feature-engineering-based approaches require extracting various domain-knowledge-related features at a high time cost. Moreover, traditional fusion methods cannot fully utilise the correlation information between different channels and frequency components. In this paper, we design a hybrid deep learning model in which a Convolutional Neural Network (CNN) is utilised to extract task-related features and to mine inter-channel and inter-frequency correlations, while a concatenated Recurrent Neural Network (RNN) integrates contextual information from the frame cube sequence. Experiments are carried out on a trial-level emotion recognition task on the DEAP benchmarking dataset. Experimental results demonstrate that the proposed framework outperforms classical methods on both emotional dimensions, Valence and Arousal.
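The "frame cube" input the CNN-then-RNN model consumes can be pictured as follows. This is a minimal data-layout sketch under assumed dimensions (the exact preprocessing in the paper is not reproduced); DEAP does record 32 EEG channels, but the band count and frame count below are illustrative.

```python
import numpy as np

# Illustrative layout: per-frame band powers from C channels and F frequency
# bands form a C x F "frame" (an image the CNN can scan for inter-channel and
# inter-frequency correlation); T consecutive frames form the cube sequence
# the RNN integrates over time. Values are random stand-ins for real features.
rng = np.random.default_rng(0)
C, F, T = 32, 5, 10                      # channels, bands, frames (assumed)
band_power = rng.standard_normal((T, C, F))

frame_cube = band_power[:, :, :, None]   # (T, C, F, 1): T frames for the RNN,
                                         # each a single-channel C x F image
print(frame_cube.shape)                  # (10, 32, 5, 1)
```

Arranging channels and bands on the two spatial axes is what lets ordinary 2-D convolutions mine correlations across both, instead of treating each channel-band feature independently.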
Attention-based CNN-LSTM and XGBoost hybrid model for stock prediction
The stock market plays an important role in economic development. Owing to the
complex volatility of the stock market, research on and prediction of stock
price changes can help investors avoid risk. The traditional time series model
ARIMA cannot describe nonlinearity and cannot achieve satisfactory results in
stock prediction. Since neural networks have strong nonlinear generalization
ability, this paper proposes an attention-based CNN-LSTM and XGBoost hybrid
model to predict the stock price. The model integrates the time series model,
Convolutional Neural Networks with an attention mechanism, the Long Short-Term
Memory network, and an XGBoost regressor in a non-linear relationship, and
improves the prediction accuracy. The model can fully mine the historical
information of the stock market over multiple periods. The stock data is first
preprocessed through ARIMA. Then, a deep learning architecture in a
pretraining-finetuning framework is adopted. The pre-training model is the
attention-based CNN-LSTM model built on a sequence-to-sequence framework. It
first uses convolution to extract deep features of the original stock data, and
then uses Long Short-Term Memory networks to mine long-term time series
features. Finally, the XGBoost model is adopted for fine-tuning. The results
show that the hybrid model is more effective and its prediction accuracy is
relatively high, which can help investors or institutions make decisions and
achieve the goals of expanding returns and avoiding risk. Source code is
available at https://github.com/zshicode/Attention-CLX-stock-prediction.
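The data flow of the pipeline (ARIMA-style preprocessing, then windowed sequences for the pretrained CNN-LSTM, then fine-tuning with the boosted regressor) can be sketched in miniature. The helper names `difference` and `sliding_windows` are assumptions for illustration, not taken from the released code.

```python
import numpy as np

# Hedged sketch of the pipeline's first two stages. Differencing is the "I"
# in ARIMA and renders the series more stationary before modelling; sliding
# windows are the (input, next-value) pairs a sequence model pretrains on.

def difference(series, d=1):
    """Order-d differencing, as in ARIMA preprocessing."""
    for _ in range(d):
        series = np.diff(series)
    return series

def sliding_windows(series, length):
    """Build (window, next-value) training pairs from a 1-D series."""
    X = np.stack([series[i:i + length] for i in range(len(series) - length)])
    y = series[length:]
    return X, y

prices = np.array([10.0, 10.5, 10.2, 10.8, 11.0, 10.9, 11.4])
stationary = difference(prices)        # 6 first-order differences
X, y = sliding_windows(stationary, 3)  # 3 window/target pairs
print(X.shape, y.shape)                # (3, 3) (3,)
```

In the full system, the CNN-LSTM would be pretrained on pairs like `(X, y)`, and its learned representations would then be handed to XGBoost for the fine-tuning stage.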
TTMFN: Two-stream Transformer-based Multimodal Fusion Network for Survival Prediction
Survival prediction plays a crucial role in assisting clinicians with the
development of cancer treatment protocols. Recent evidence shows that
multimodal data can help in the diagnosis of cancer disease and improve
survival prediction. Currently, deep learning-based approaches have experienced
increasing success in survival prediction by integrating pathological images
and gene expression data. However, most existing approaches overlook the
intra-modality latent information and the complex inter-modality correlations.
Furthermore, existing methods do not fully exploit the immense
representational capabilities of neural networks for feature aggregation, and
they disregard the importance of relationships between features. To address
these issues and enhance prediction performance, we propose a novel framework
named Two-stream Transformer-based Multimodal Fusion Network for survival
prediction (TTMFN), which integrates pathological images and gene expression
data. In TTMFN, we present a two-stream multimodal co-attention transformer
module to take full advantage of the complex relationships between different
modalities and the potential connections within the modalities. Additionally,
we develop a multi-head attention pooling approach to effectively aggregate
the feature representations of the two modalities. Experimental results on
four datasets from The Cancer Genome Atlas demonstrate that TTMFN achieves the
best or competitive performance compared to state-of-the-art methods in
predicting the overall survival of patients.
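Multi-head attention pooling, which collapses a variable-length set of per-modality features into one fixed-size vector, can be sketched as follows. The weights here are random stand-ins; TTMFN's learned parameters and exact formulation are not reproduced.

```python
import numpy as np

# Minimal sketch of multi-head attention pooling: each head scores every
# feature token, the scores are softmax-normalised over tokens, and the
# heads' weighted sums are concatenated into one fused vector.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(feats, W):
    """feats: (N, D) token features from one modality; W: (H, D) head weights.

    Returns a (H*D,) pooled representation."""
    scores = softmax(feats @ W.T, axis=0)   # (N, H): per-head weights over tokens
    pooled = scores.T @ feats               # (H, D): per-head weighted sums
    return pooled.reshape(-1)               # (H*D,) concatenated heads

rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 8))         # e.g. 6 patch or gene embeddings
W = rng.standard_normal((4, 8))             # 4 hypothetical heads
print(attention_pool(feats, W).shape)       # (32,)
```

The appeal over plain mean or max pooling is that each head can learn to emphasise a different subset of features, preserving relationships between them in the aggregate.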
Subject-Independent Emotion Recognition Based on Physiological Signals: A Three-Stage Decision Method
Background: Collaboration between humans and computers has become pervasive and ubiquitous; however, current computer systems are limited in that they fail to address the emotional component. An accurate understanding of human emotions is necessary for these computers to trigger proper feedback. Among the various emotional channels, physiological signals are synchronous with emotional responses; therefore, analyzing physiological changes is a recognized way to estimate human emotions. In this paper, a three-stage decision method is proposed to recognize four emotions based on physiological signals in the multi-subject context. Emotion detection is achieved by using a stage-divided strategy in which each stage deals with a fine-grained goal.
Methods: The decision method consists of three stages. During the training process, the initial stage transforms the mixed training subjects into separate groups, thus eliminating the effect of individual differences. The second stage categorizes the four emotions into two emotion pools in order to reduce recognition complexity. The third stage trains a classifier on the emotions in each emotion pool. During the testing process, a test trial is initially classified into a group, then into an emotion pool in the second stage, and an emotion is assigned to the test trial in the final stage. In this paper we consider two different ways of allocating the four emotions into two emotion pools. A comparative analysis is also carried out between the proposed method and other methods.
Results: An average recognition accuracy of 77.57% was achieved on the recognition of four emotions, with a best accuracy of 86.67% for recognizing the positive and excited emotion. Using differing ways of allocating the four emotions into two emotion pools, we found a difference in the effectiveness of a classifier at learning each emotion. When compared to other methods, the proposed method demonstrates a significant improvement in recognizing four emotions in the multi-subject context.
Conclusions: The proposed three-stage decision method solves a crucial issue, 'individual differences', in multi-subject emotion recognition, and overcomes the suboptimal performance of direct classification of multiple emotions. Our study supports the observation that the proposed method represents a promising methodology for recognizing multiple emotions in the multi-subject context.
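The control flow of the three-stage decision (group, then emotion pool, then emotion) can be illustrated with trivial threshold rules standing in for the trained classifiers. Everything below, including the field names and thresholds, is a hypothetical sketch, not the paper's actual models.

```python
# Illustrative three-stage decision flow. Each stage handles one fine-grained
# goal; simple threshold rules stand in for the stage classifiers.

def stage1_group(trial):
    """Stage 1: assign the trial to a subject group (handles individual differences)."""
    return "group_A" if trial["baseline"] < 0.5 else "group_B"

def stage2_pool(trial):
    """Stage 2: coarse decision between two emotion pools, e.g. by arousal."""
    return "high_arousal" if trial["arousal"] > 0.5 else "low_arousal"

def stage3_emotion(trial, pool):
    """Stage 3: fine-grained emotion inside the chosen pool, e.g. by valence."""
    if pool == "high_arousal":
        return "excited" if trial["valence"] > 0.5 else "angry"
    return "relaxed" if trial["valence"] > 0.5 else "sad"

def classify(trial):
    group = stage1_group(trial)                 # which per-group model to use
    pool = stage2_pool(trial)                   # which two-emotion pool
    return group, stage3_emotion(trial, pool)   # final emotion label

print(classify({"baseline": 0.3, "arousal": 0.9, "valence": 0.8}))
# ('group_A', 'excited')
```

The point of the staging is that each classifier only faces a binary (or small) decision, which is easier to learn than a direct four-way classification across all subjects at once.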