14 research outputs found

    A Brief Analysis of Multimodal Medical Image Fusion Techniques

    No full text
    Image fusion has recently become one of the most promising fields in image processing, since it plays an essential role in applications such as medical diagnosis and the clarification of medical images. Multimodal Medical Image Fusion (MMIF) enhances the quality of medical images by combining two or more images from different modalities into a single fused image that is clearer than the originals. Choosing the MMIF technique that produces the best quality is one of the key problems in the assessment of image fusion techniques. This paper presents a complete survey of MMIF techniques, along with medical imaging modalities, the steps and levels of medical image fusion, and the MMIF assessment methodology. Common imaging modalities include Computed Tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Single Photon Emission Computed Tomography (SPECT). Medical image fusion techniques fall into several main categories, including spatial-domain methods, transform-domain fusion, fuzzy logic, morphological methods, and sparse representation methods. The MMIF levels are pixel-level, feature-level, and decision-level. Fusion quality evaluation metrics can be categorized as subjective/qualitative or objective/quantitative assessment methods. Furthermore, a detailed comparison of the results obtained by significant MMIF techniques is presented to highlight the pros and cons of each fusion technique.
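
    As a point of reference only (not taken from the survey itself), the sketch below shows the simplest spatial-domain, pixel-level fusion rule: a weighted per-pixel average of two co-registered modalities. The image names, sizes, and weight are illustrative assumptions.

```python
# Minimal sketch of pixel-level fusion by weighted averaging, assuming two
# co-registered grayscale images of equal size (e.g. a CT and an MRI slice).
import numpy as np

def fuse_pixel_average(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse two modalities with a per-pixel weighted average (w for img_a, 1-w for img_b)."""
    assert img_a.shape == img_b.shape, "images must be co-registered and equally sized"
    fused = w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Synthetic arrays standing in for CT/MRI slices
ct = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
mri = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
fused = fuse_pixel_average(ct, mri)
print(fused.shape, fused.dtype)
```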

    An optimized hybrid deep learning model based on word embeddings and statistical features for extractive summarization

    No full text
    Extractive summarization has recently gained significant attention as a sentence-level classification problem. Most current summarization methods rely on only one way of representing the sentences in a document (e.g., extracted features, word embeddings, or BERT embeddings). However, classification performance and summary quality can be improved by combining two ways of representing sentences. This paper presents a novel extractive text summarization method based on word embeddings and statistical features of a single document. Each sentence is encoded using a Convolutional Neural Network (CNN) over word embeddings and a Feed-Forward Neural Network (FFNN) over statistical features. The CNN and FFNN outputs are concatenated, and the sentence is classified using a Multilayer Perceptron (MLP). In addition, the hybrid model's parameters are optimized with the KerasTuner optimization technique to determine the most efficient hybrid model. The proposed method was evaluated on the standard Newsroom dataset. Experiments show that it effectively captures the document's semantic and statistical information and outperforms deep learning, machine learning, and state-of-the-art approaches, with scores of 78.64, 74.05, and 72.08 for ROUGE-1, ROUGE-2, and ROUGE-L, respectively.
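
    A hedged sketch of the described two-branch architecture follows: a CNN branch over word embeddings and an FFNN branch over statistical features, concatenated and classified by an MLP head. Vocabulary size, sequence length, layer widths, and feature count are assumptions, not the authors' exact configuration.

```python
# Sketch of a hybrid sentence classifier: CNN over word embeddings + FFNN over
# statistical features, merged and scored by an MLP (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, VOCAB, EMB_DIM, N_STATS = 50, 20000, 100, 8  # assumed sizes

# CNN branch: word-embedding representation of a sentence
tokens = layers.Input(shape=(MAX_LEN,), name="token_ids")
x = layers.Embedding(VOCAB, EMB_DIM)(tokens)
x = layers.Conv1D(128, 3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)

# FFNN branch: hand-crafted statistical features (position, length, TF-ISF, ...)
stats = layers.Input(shape=(N_STATS,), name="statistical_features")
y = layers.Dense(32, activation="relu")(stats)

# MLP head over the concatenated representations -> summary-worthiness score
merged = layers.Concatenate()([x, y])
z = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid", name="in_summary")(z)

model = Model(inputs=[tokens, stats], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```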

    Heart disease risk factors detection from electronic health records using advanced NLP and deep learning techniques

    No full text
    Heart disease remains the major cause of death, despite recent improvements in prediction and prevention. Risk factor identification is the main step in diagnosing and preventing heart disease, and automatically detecting heart disease risk factors in clinical notes can help with disease progression modeling and clinical decision-making. Many studies have attempted to detect heart disease risk factors, but none have identified all of them. These studies have proposed hybrid systems that combine knowledge-driven and data-driven techniques based on dictionaries, rules, and machine learning methods that require significant human effort. The National Center for Informatics for Integrating Biology and the Bedside (i2b2) organized a clinical natural language processing (NLP) challenge in 2014, with a track (track 2) focused on detecting heart disease risk factors in clinical notes over time. Clinical narratives provide a wealth of information that can be extracted using NLP and deep learning techniques. The objective of this paper is to improve on previous work in this area as part of the 2014 i2b2 challenge by identifying tags and attributes relevant to disease diagnosis, risk factors, and medications using stacked word embeddings. On the i2b2 heart disease risk factors challenge dataset, stacking embeddings, which combines various embeddings, yields a significant improvement. Our model achieved an F1 score of 93.66% by stacking BERT and character embeddings (CHARACTER-BERT embedding). The proposed model achieves significant results compared with all other models and systems we developed for the 2014 i2b2 challenge.
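
    As a hedged illustration of embedding stacking (using the Flair library's StackedEmbeddings utility, which may differ from the authors' exact pipeline), the sketch below concatenates contextual BERT embeddings with character-level embeddings for each token; the specific model names are assumptions.

```python
# Sketch of stacking BERT and character embeddings per token with Flair
# (assumed backbone: bert-base-uncased; the sentence is an invented example).
from flair.data import Sentence
from flair.embeddings import TransformerWordEmbeddings, CharacterEmbeddings, StackedEmbeddings

bert_emb = TransformerWordEmbeddings("bert-base-uncased")  # contextual word embeddings
char_emb = CharacterEmbeddings()                           # character-level embeddings

stacked = StackedEmbeddings([bert_emb, char_emb])

sentence = Sentence("Patient is a smoker with a history of hypertension and diabetes .")
stacked.embed(sentence)

for token in sentence:
    # Each token vector is the concatenation of the BERT and character embeddings
    print(token.text, token.embedding.shape)
```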

    A Robust and Efficient System to Detect Human Faces Based on Facial Features

    No full text
    Face detection is considered one of the most important issues in identification and authentication systems that use biometric features. It is not straightforward, since image appearance varies widely, and several challenges must be resolved to improve the detection process, including environmental constraints, device-specific constraints, and facial feature constraints. In this paper, we present a modified method that enriches face detection by combining Haar cascade files with skin detection, eye detection, and nose detection. The proposed system has been evaluated using the Wild database. The experimental results show that it can achieve a detection accuracy of up to 96%. We also compared the proposed system with other face detection systems, and its success rate is better than that of the considered systems.
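
    For orientation only, a minimal OpenCV sketch of the general approach follows: Haar cascades for the face and eyes, with a rough skin-color check as an extra cue. The file name, thresholds, and skin range are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: Haar-cascade face detection refined by an eye cascade and a crude
# skin-color ratio check (all thresholds are assumptions).
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("sample.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi_gray = gray[y:y + h, x:x + w]
    roi_bgr = img[y:y + h, x:x + w]

    # Require at least one eye inside the candidate face region
    eyes = eye_cascade.detectMultiScale(roi_gray)

    # Crude skin check in YCrCb space: fraction of pixels within a typical skin range
    ycrcb = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_ratio = cv2.countNonZero(skin) / float(w * h)

    if len(eyes) >= 1 and skin_ratio > 0.3:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
```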

    An Efficient Snort NIDSaaS based on Danger Theory and Machine Learning

    No full text
    A Network Intrusion Detection System (NIDS) is a hardware or software application that allows computer networks to detect, recognize, and avoid harmful activities that attempt to compromise the integrity, privacy, or accessibility of the network. NIDSs use two detection techniques, namely signature-based and anomaly-based detection. Signature-based intrusion detection relies on detecting the signatures of known attacks, whereas anomaly-based intrusion detection relies on detecting anomalous behaviours in the network. Snort is an open-source signature-based NIDS that can be used effectively to detect and prevent known network attacks. It uses a set of predefined signatures (rules) to trigger an alert when a network packet matches one of its rules. However, it fails to detect new attacks that have no signatures among its predefined rules, so its rules must be updated constantly. To overcome this deficiency, the present paper recommends using Danger Theory concepts, inspired by the biological immune system, together with a machine learning algorithm to automatically create new Snort rules that can detect new attacks. Deploying Snort NIDS as a Service (NIDSaaS) in cloud computing has also been suggested. Experimental results show that the proposed modifications to Snort improve its ability to detect new attacks.
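
    The following is a purely hypothetical sketch of what turning a flagged flow into a new Snort rule might look like; the field names, rule template, and SID are illustrative and are not the paper's actual rule-generation logic.

```python
# Hypothetical sketch: build a Snort rule string from attributes of a flow that the
# anomaly/Danger-Theory stage has flagged as dangerous (template is illustrative).
def make_snort_rule(proto: str, src: str, dst: str, dst_port: int,
                    payload_content: str, sid: int) -> str:
    return (
        f'alert {proto} {src} any -> {dst} {dst_port} '
        f'(msg:"Auto-generated rule for suspected attack"; '
        f'content:"{payload_content}"; sid:{sid}; rev:1;)'
    )

rule = make_snort_rule("tcp", "any", "$HOME_NET", 80, "/etc/passwd", 1000001)
print(rule)
# alert tcp any any -> $HOME_NET 80 (msg:"Auto-generated rule for suspected attack";
#   content:"/etc/passwd"; sid:1000001; rev:1;)
```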

    A New Chaotic-Based RGB Image Encryption Technique Using a Nonlinear Rotational 16 × 16 DNA Playfair Matrix

    No full text
    Due to great interest in the secure storage and transmission of color images, the need for an efficient and robust RGB image encryption technique has grown. RGB image encryption ensures the confidentiality of color images during storage and transmission. A large number of chaotic-based image encryption techniques have been proposed in the literature, but there is still a need for a technique that is robust, efficient, and secure against different kinds of attacks. In this paper, a novel RGB image encryption technique is proposed that encrypts individual pixels of RGB images using chaotic systems and 16 rounds of DNA encoding, transpositions, and substitutions. First, round keys are generated randomly using a logistic chaotic function. These keys are then used across the rounds to alter individual pixels using a nonlinear, randomly generated 16×16 DNA Playfair matrix. Experimental results show the robustness of the proposed technique against most attacks while reducing the time consumed for encryption and decryption. The quantitative metrics show its ability to maintain reference evaluation values while resisting statistical and differential attacks: the horizontal, vertical, and diagonal correlations are less than 0.01, and the NPCR and UACI are larger than 0.99 and 0.33, respectively. Finally, an NIST analysis is presented to evaluate the randomness of the proposed technique.
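
    As a general illustration of the kind of chaotic key stream described (not the paper's exact scheme), the sketch below iterates a logistic map and quantizes each value to a key byte; the parameters r, the seed, and the byte mapping are assumptions.

```python
# Minimal sketch of round-key generation with the logistic map x_{n+1} = r*x_n*(1-x_n),
# quantizing each chaotic value to one byte (parameters are assumed).
import numpy as np

def logistic_key_stream(seed: float, r: float, n_bytes: int) -> np.ndarray:
    x = seed
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)          # chaotic iteration in (0, 1)
        out[i] = int(x * 256) % 256    # quantize the orbit value to a byte
    return out

keys = logistic_key_stream(seed=0.4125, r=3.99, n_bytes=16)
print(keys)  # 16 pseudo-random round-key bytes derived from the chaotic orbit
```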

    Multiple Strategies Boosted Orca Predation Algorithm for Engineering Optimization Problems

    No full text
    This paper proposes an enhanced orca predation algorithm (OPA) called the Lévy flight orca predation algorithm (LFOPA). LFOPA improves OPA by integrating the Lévy flight (LF) strategy into the chasing phase of OPA and employing the greedy selection (GS) strategy at the end of each optimization iteration. This enhancement is made to avoid entrapment in local optima and to improve the quality of the acquired solutions. OPA is a novel, efficient population-based optimizer that surpasses other reliable optimizers; however, owing to the low diversity of orcas, it is prone to stalling at local optima in some scenarios. In this paper, LFOPA is proposed for addressing global and real-world optimization challenges. To investigate its validity, LFOPA is compared with seven robust optimizers, namely the improved multi-operator differential evolution algorithm (IMODE), the covariance matrix adaptation evolution strategy (CMA-ES), the gravitational search algorithm (GSA), the grey wolf optimizer (GWO), the moth-flame optimization algorithm (MFO), Harris hawks optimization (HHO), and the original OPA, on 10 unconstrained test functions from the 2020 IEEE Congress on Evolutionary Computation (CEC'20). Furthermore, four engineering design problems, namely the welded beam, the tension/compression spring, the pressure vessel, and the speed reducer, are solved using the proposed LFOPA to test its applicability. LFOPA is also employed to address node localization challenges in wireless sensor networks (WSNs) as an example of a real-world application. Results and tests of significance show that the proposed LFOPA performs much better than OPA and the other competitors, and its results on the node localization challenge are superior in terms of minimizing squared errors and localization errors.
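
    For illustration, the sketch below draws Lévy-flight steps with Mantegna's algorithm, the usual way LF perturbations are added to population-based metaheuristics; the beta value and the 0.01 scaling are common defaults, not necessarily the exact settings of LFOPA.

```python
# Sketch of Lévy-flight step generation (Mantegna's algorithm) and its use to
# perturb a candidate solution toward the current best (scaling is assumed).
import numpy as np
from math import gamma, sin, pi

def levy_step(dim: int, beta: float = 1.5) -> np.ndarray:
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

position = np.random.uniform(-10, 10, 5)   # a candidate "orca" position
best = np.zeros(5)                         # current best solution
new_position = position + 0.01 * levy_step(5) * (position - best)
print(new_position)
```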

    Predicting Coronavirus Pandemic in Real-Time Using Machine Learning and Big Data Streaming System

    No full text
    Twitter is a virtual social network where people share posts and opinions about current situations such as the coronavirus pandemic. It is considered one of the most significant sources of streaming data for machine learning research in terms of analysis, prediction, knowledge extraction, and opinion mining. Sentiment analysis is a text analysis method that has gained further significance with the emergence of social networks. This paper therefore introduces a real-time system for sentiment prediction on Twitter streaming data about the coronavirus pandemic. The proposed system aims to find the machine learning model that performs best for coronavirus sentiment analysis and then to use it in real time. The system consists of two components: an offline sentiment analysis component and an online prediction pipeline. For the offline component, a historical tweets dataset was collected between 23/01/2020 and 01/06/2020 and filtered by the #COVID-19 and #Coronavirus hashtags. Two feature extraction methods, n-grams and TF-IDF, were used to extract the dataset's essential features. Five standard machine learning algorithms were then trained and compared, namely decision tree, logistic regression, k-nearest neighbors, random forest, and support vector machine, to select the best model for the online prediction component. The online prediction pipeline was developed using the Twitter Streaming API, Apache Kafka, and Apache Spark. The experimental results indicate that the random forest (RF) model with unigram feature extraction achieves the best performance and is therefore used for sentiment prediction on Twitter streaming data about the coronavirus.
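
    A hedged sketch of the offline stage follows: TF-IDF features over unigrams feeding a random forest classifier, matching the best-performing configuration reported. The toy tweets and labels are placeholders, not the collected COVID-19 dataset.

```python
# Sketch: unigram TF-IDF features + random forest sentiment classifier
# (placeholder data; scikit-learn pipeline).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

tweets = ["Stay safe everyone, wash your hands",
          "Hospitals are overwhelmed, this is terrible",
          "Great news, recoveries are rising",
          "I lost my job because of the lockdown"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 1)),                 # unigram TF-IDF features
    RandomForestClassifier(n_estimators=100, random_state=42),
)
model.fit(tweets, labels)
print(model.predict(["vaccines bring hope"]))
```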

    Ensemble Learning Based on Hybrid Deep Learning Model for Heart Disease Early Prediction

    No full text
    Many epidemics have afflicted humanity throughout history, claiming many lives. Heart disease has become one of the deadliest diseases humanity confronts in the contemporary period, and the proliferation of poor habits such as smoking, overeating, and lack of physical activity has contributed to its rise. The deadly feature of heart disease, which has earned it the moniker of the “silent killer,” is that it frequently shows no apparent signs in advance. As a result, research is required to develop a promising model for the early identification of heart disease from simple data and symptoms. The aim of this paper is to propose a deep stacking ensemble model to enhance the performance of heart disease prediction. The proposed ensemble model integrates two optimized and pre-trained hybrid deep learning models with a Support Vector Machine (SVM) as the meta-learner. The first hybrid model, CNN-LSTM, integrates a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM); the second, CNN-GRU, integrates a CNN with a Gated Recurrent Unit (GRU). Recursive Feature Elimination (RFE) is also used for the feature selection optimization process. The proposed model has been optimized and tested using two different heart disease datasets. The proposed ensemble is compared with five machine learning models, namely Logistic Regression (LR), Random Forest (RF), K-Nearest Neighbors (K-NN), Decision Tree (DT), and Naïve Bayes (NB), as well as with the hybrid models. In addition, optimization techniques are used to optimize the ML, DL, and proposed models. The proposed model achieved the highest performance using the full feature set.
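
    To make the stacking principle concrete, the hedged sketch below feeds out-of-fold probability outputs from two base models into an SVM meta-learner; the base models here are simple stand-ins (the paper uses optimized CNN-LSTM and CNN-GRU networks), and the data is synthetic.

```python
# Sketch of a stacking ensemble with an SVM meta-learner: base-model probabilities
# become the meta-learner's features (placeholder data and stand-in base models).
import numpy as np
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X = np.random.rand(300, 13)                 # placeholder tabular heart-disease features
y = np.random.randint(0, 2, 300)            # placeholder labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base_1 = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
base_2 = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)

# Out-of-fold predictions avoid leaking training labels into the meta-learner
meta_train = np.column_stack([
    cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
    for b in (base_1, base_2)
])
meta_learner = SVC(kernel="rbf").fit(meta_train, y_tr)

# At test time, base models trained on the full training set feed the meta-learner
meta_test = np.column_stack([
    b.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for b in (base_1, base_2)
])
print("stacked accuracy:", meta_learner.score(meta_test, y_te))
```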

    Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence

    No full text
    Colon cancer was the third most common cancer type worldwide in 2020, when almost two million cases were diagnosed. Providing new, highly accurate techniques for detecting colon cancer therefore leads to early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacking deep learning integrates pretrained convolutional neural network (CNN) models with a meta-learner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated on the LC25000 and WCE binary and multi-class colon cancer image datasets. The results show that the stacking models recorded the highest performance on both datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98). Stacking-SVM achieved the highest performance compared with the existing models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a meta-learner on those outputs to produce better predictions than any single model. Black-box deep learning models are explained using explainable AI (XAI).
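
    As a hedged sketch of the stacking idea for images, the code below concatenates pooled features from two pretrained CNNs and trains an SVM meta-learner on them; the choice of backbones, image sizes, and the placeholder batch stand in for the paper's exact setup.

```python
# Sketch: features from two pretrained CNNs concatenated and fed to an SVM
# meta-learner (placeholder images stand in for LC25000/WCE data).
import numpy as np
from tensorflow.keras.applications import VGG16, DenseNet121
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_pre
from tensorflow.keras.applications.densenet import preprocess_input as dense_pre
from sklearn.svm import SVC

vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")
dense = DenseNet121(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    f1 = vgg.predict(vgg_pre(images.copy()), verbose=0)
    f2 = dense.predict(dense_pre(images.copy()), verbose=0)
    return np.hstack([f1, f2])

# Placeholder batch standing in for histopathology / capsule-endoscopy tiles
X_img = np.random.rand(8, 224, 224, 3) * 255
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])     # placeholder benign/malignant labels

meta_learner = SVC(kernel="linear").fit(extract_features(X_img), y)
print(meta_learner.predict(extract_features(X_img[:2])))
```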