4 research outputs found

    Attentional Multi-Channel Convolution With Bidirectional LSTM Cell Toward Hate Speech Prediction

    No full text
    Online social networks (OSNs) facilitate real-time communication among their users but also open the door to challenging problems such as hate speech and fake news. This study addresses hate speech on OSNs and presents an automatic method to identify hateful messages. We introduce an attentional multi-channel convolutional BiLSTM network for the classification of hateful content. Our model uses existing word representation techniques in a multi-channel setting, with several filters of different kernel sizes capturing semantic relations over windows of varying width. The encoded representations from the multiple channels pass through an attention-aware stacked two-layer BiLSTM network. The BiLSTM output is weighted by an attention layer, concatenated, and fed through a dense layer; finally, an output layer with a sigmoid function classifies the text. We evaluate the model on three Twitter-related benchmark datasets using four evaluation metrics. In comparative evaluation, our model outperforms five state-of-the-art models and an equal number of baselines. An ablation study shows that removing the convolutional channels and the attention mechanism has the greatest impact on performance. An empirical analysis of different word representation techniques, optimization algorithms, activation functions, and batch sizes confirms the optimal values chosen for the presented model.
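
    A minimal PyTorch sketch of the architecture described in this abstract may help make the pipeline concrete: parallel convolutional channels with different kernel sizes, a stacked two-layer BiLSTM, an attention-weighted pooling step, and a sigmoid output. All layer sizes, kernel widths, and vocabulary parameters below are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class MultiChannelConvBiLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=300, kernel_sizes=(3, 4, 5),
                 num_filters=128, lstm_hidden=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One convolutional "channel" per kernel size captures n-gram-like
        # semantic relations at different window widths (sizes are assumptions).
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k, padding=k // 2) for k in kernel_sizes
        )
        # Stacked two-layer bidirectional LSTM over the concatenated channel outputs.
        self.bilstm = nn.LSTM(num_filters * len(kernel_sizes), lstm_hidden,
                              num_layers=2, batch_first=True, bidirectional=True)
        # Simple additive attention over time steps.
        self.attn = nn.Linear(2 * lstm_hidden, 1)
        self.dense = nn.Linear(2 * lstm_hidden, 64)
        self.out = nn.Linear(64, 1)  # sigmoid output: hateful vs. not hateful

    def forward(self, token_ids):
        x = self.embedding(token_ids)                  # (B, T, E)
        x = x.transpose(1, 2)                          # (B, E, T) for Conv1d
        # Trim each channel to the same length before concatenating.
        feats = [torch.relu(conv(x))[..., :token_ids.size(1)] for conv in self.convs]
        x = torch.cat(feats, dim=1).transpose(1, 2)    # (B, T, channels * filters)
        h, _ = self.bilstm(x)                          # (B, T, 2 * hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (weights * h).sum(dim=1)             # attention-weighted summary
        return torch.sigmoid(self.out(torch.relu(self.dense(context))))

model = MultiChannelConvBiLSTM()
print(model(torch.randint(0, 20000, (2, 40))).shape)  # torch.Size([2, 1])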

    Bridged-U-Net-ASPP-EVO and Deep Learning Optimization for Brain Tumor Segmentation

    No full text
    Brain tumor segmentation from Magnetic Resonance Images (MRI) is considered a big challenge due to the complexity of brain tumor tissues, and separating these tissues from healthy tissue is an even more tedious task when segmentation is performed manually by radiologists. In this paper, we present an experimental study that emphasizes the impact and effectiveness of deep learning design elements, such as optimizers and loss functions, on reaching an optimal deep learning solution for brain tumor segmentation. We evaluate our results on the most popular brain tumor datasets (MICCAI BraTS 2020 and RSNA-ASNR-MICCAI BraTS 2021). Furthermore, we introduce a new Bridged U-Net-ASPP-EVO architecture that exploits Atrous Spatial Pyramid Pooling to capture multi-scale information and segment tumors of different sizes, together with Evolving Normalization layers, squeeze-and-excitation residual blocks, and max-average pooling for downsampling. Two variants of this architecture were constructed (Bridged U-Net_ASPP_EVO v1 and Bridged U-Net_ASPP_EVO v2). Compared with other state-of-the-art models, these two variants achieved the best results, with average segmentation Dice scores of 0.84, 0.85, and 0.91 for variant 1 and 0.83, 0.86, and 0.92 for variant 2 on the Enhanced Tumor (ET), Tumor Core (TC), and Whole Tumor (WT) sub-regions, respectively, on the BraTS 2021 validation dataset.
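
    To illustrate two of the building blocks named in this abstract, here is a rough PyTorch sketch of an Atrous Spatial Pyramid Pooling (ASPP) module and a max-average pooling downsampling step. The channel counts, dilation rates, and pooling weights are illustrative assumptions, not the exact Bridged U-Net-ASPP-EVO configuration.

import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    """Parallel dilated 3D convolutions capture multi-scale context."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.project = nn.Conv3d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return torch.relu(self.project(torch.cat(feats, dim=1)))

class MaxAvgPool3D(nn.Module):
    """Downsampling that blends max pooling (sharp features) with average pooling (context)."""
    def __init__(self, kernel_size=2):
        super().__init__()
        self.max_pool = nn.MaxPool3d(kernel_size)
        self.avg_pool = nn.AvgPool3d(kernel_size)

    def forward(self, x):
        # Equal weighting is an assumption; a learned blend is also possible.
        return 0.5 * (self.max_pool(x) + self.avg_pool(x))

x = torch.randn(1, 16, 32, 32, 32)           # (batch, channels, D, H, W)
y = MaxAvgPool3D()(ASPP3D(16, 32)(x))
print(y.shape)                                # torch.Size([1, 32, 16, 16, 16])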

    U-Net-Based Models towards Optimal MR Brain Image Segmentation

    No full text
    Brain tumor segmentation from MRIs has always been a challenging task for radiologists; therefore, an automatic and generalized system to address this task is needed. Among the deep learning techniques used in medical imaging, U-Net-based variants are the models most frequently found in the literature for segmenting medical images across different modalities. The goal of this paper is therefore to examine the numerous advancements and innovations in the U-Net architecture, as well as recent trends, with the aim of highlighting the ongoing potential of U-Net to improve brain tumor segmentation. Furthermore, we provide a quantitative comparison of different U-Net architectures to highlight the performance and evolution of this network from an optimization perspective. In addition, we experiment with four U-Net architectures (3D U-Net, Attention U-Net, R2 Attention U-Net, and modified 3D U-Net) on the BraTS 2020 dataset for brain tumor segmentation to give a better overview of each architecture's performance in terms of Dice score and 95% Hausdorff distance. Finally, we analyze the limitations and challenges of medical image analysis to provide a critical discussion of the importance of developing new architectures from an optimization standpoint.
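
    As a small aside on the evaluation described here, the sketch below shows how the Dice score used to compare the U-Net variants can be computed over binary segmentation masks. The smoothing constant and the random example masks are illustrative assumptions.

import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example with two random binary masks (stand-ins for predicted and ground-truth labels)
rng = np.random.default_rng(0)
a = rng.random((128, 128)) > 0.5
b = rng.random((128, 128)) > 0.5
print(round(dice_score(a, b), 3))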

    Transformer architecture-based transfer learning for politeness prediction in conversation

    No full text
    Politeness is an essential part of a conversation. As in verbal communication, politeness also matters in textual conversations and social media posts; therefore, the automatic detection of politeness is a significant and relevant problem. The existing literature generally employs classical machine learning-based models, such as naive Bayes and Support Vector Machine-based models, for politeness prediction. This paper exploits the state-of-the-art (SOTA) transformer architecture and transfer learning for politeness prediction. The proposed model combines the strengths of context-incorporating large language models, a feed-forward neural network, and an attention mechanism for representation learning of natural language requests. The learned representation is then classified by a softmax function into polite, impolite, and neutral classes. We evaluate the presented model with two SOTA pre-trained large language models on two benchmark datasets. Our model outperformed the two SOTA and six baseline models, including two domain-specific transformer-based models, using both the BERT and RoBERTa language models. An ablation study shows that removing the feed-forward layer has the greatest impact on the presented model. The analysis also identifies batch size and the choice of optimization algorithm as parameters that noticeably affect model performance.
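
    A hedged sketch of the transfer-learning setup described in this abstract is given below: a pre-trained transformer encoder (BERT here, via the Hugging Face transformers library) followed by a small feed-forward head whose softmax output covers the polite, neutral, and impolite classes. The hidden sizes, dropout rate, and pooling choice are assumptions rather than the authors' exact configuration.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PolitenessClassifier(nn.Module):
    def __init__(self, backbone="bert-base-uncased", num_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        # Feed-forward head on top of the contextual representation (sizes are assumptions).
        self.head = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # [CLS]-style pooled token
        # Softmax is applied here for inference; for training with CrossEntropyLoss
        # one would return the raw logits instead.
        return torch.softmax(self.head(cls), dim=-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Could you please review this patch?"],
                  return_tensors="pt", padding=True, truncation=True)
model = PolitenessClassifier()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 3])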