
    Recognition Covid-19 cases using deep type-2 fuzzy neural networks based on chest X-ray image

    Today, the new coronavirus (Covid-19) has become a major global epidemic. Every day, a large proportion of the world's population is infected with the Covid-19 virus, and a significant proportion of those infected die as a result. Because of the virus's infectious nature, prompt diagnosis, treatment, and quarantine are critical. In this paper, an automated method for detecting Covid-19 from chest X-ray images based on deep learning networks is presented. The proposed deep learning network combines convolutional neural networks with a type-2 fuzzy activation function to deal with noise and uncertainty. Generative Adversarial Networks (GANs) were also used for data augmentation. Furthermore, the proposed network is resistant to Gaussian noise up to 10 dB. The final classification accuracy for the first scenario (healthy vs. Covid-19) and the second scenario (healthy, pneumonia, and Covid-19) is about 99% and 95%, respectively. In addition, the results in terms of accuracy, precision, sensitivity, and specificity compare favourably with recent research; for example, the proposed method achieves 100% sensitivity and 99% specificity in the first scenario. In medical applications, the proposed method could serve as a physician's assistant during patient treatment.
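The abstract does not spell out the form of the type-2 fuzzy activation. One common construction for an interval type-2 fuzzy unit blends a lower and an upper membership function (here, two sigmoids with different slopes); the sketch below is an illustrative assumption, not the paper's exact formula, and the parameter names (`lower_slope`, `upper_slope`, `alpha`) are hypothetical.

```python
import numpy as np

def sigmoid(x, slope):
    return 1.0 / (1.0 + np.exp(-slope * np.asarray(x, dtype=float)))

def it2_fuzzy_activation(x, lower_slope=0.5, upper_slope=2.0, alpha=0.5):
    """Interval type-2 fuzzy sigmoid (illustrative sketch).

    The footprint of uncertainty is bounded by two sigmoids with
    different slopes; `alpha` type-reduces the interval to a single
    output by weighting the lower and upper membership values.
    """
    lower = sigmoid(x, lower_slope)   # lower membership function
    upper = sigmoid(x, upper_slope)   # upper membership function
    return alpha * lower + (1.0 - alpha) * upper
```

Because the output is a convex combination of two bounded, monotone sigmoids, it stays in (0, 1) and remains monotone, so it can drop in wherever a standard sigmoid activation would be used.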

    Removing mixture of Gaussian and Impulse noise of images using sparse coding

    Real images contain several types of noise, and removing a mixture of them is a difficult problem; a typical case is Additive White Gaussian Noise (AWGN) mixed with Impulse Noise (IN). Many mixed-noise removal methods rely on an impulse-detection step that introduces artifacts at high noise levels. In this article, we propose an effective weighted approach to mixed-noise reduction, termed Weighted Encoding with Sparse Nonlocal Regularization (WESNR). The algorithm exploits the nonlocal self-similarity of the image within a sparse coding framework, using a pre-learned Principal Component Analysis (PCA) dictionary. Experimental results show that, in terms of both quantitative quality and visual quality, the proposed WESNR method achieves better PSNR than competing techniques.
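The core of weighted encoding is to solve a sparse coding problem in which a diagonal weight downplays pixels whose residuals look like impulse outliers. A minimal sketch of that idea, using iterative soft-thresholding (ISTA) with robust residual-based weights, might look as follows; the weighting scheme and parameter values here are simplified assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_ista(y, D, lam=0.1, n_iter=100):
    """Sketch of weighted encoding for mixed-noise removal.

    Approximately solves  min_a ||W^(1/2) (y - D a)||_2^2 + lam * ||a||_1
    by iterative soft-thresholding, where the diagonal weights W
    downweight entries with large residuals (likely impulse outliers).
    """
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # gradient step size
    for _ in range(n_iter):
        r = y - D @ a
        # Robust weights: impulse-corrupted entries get small weight.
        w = 1.0 / (1.0 + (r / (np.std(r) + 1e-8)) ** 2)
        a = a + step * (D.T @ (w * r))       # weighted gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a
```

The denoised patch is then `D @ a`; in the full method the weights and the nonlocal regularization interact, but the down-weighting of outlier residuals shown here is the mechanism that lets a single encoding handle AWGN and IN jointly.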

    Developing an efficient deep neural network for automatic detection of COVID-19 using chest X-ray images

    The novel coronavirus (COVID-19) could be described as the greatest human challenge of the 21st century. The development and transmission of the disease have increased mortality in all countries; therefore, a rapid diagnosis of COVID-19 is necessary to treat and control the disease. In this paper, a new method for the automatic identification of pneumonia (including COVID-19) is presented using a proposed deep neural network. In the proposed method, chest X-ray images are used to separate two to four classes in seven different scenarios built from the healthy, viral pneumonia, bacterial pneumonia, and COVID-19 classes. In the proposed architecture, Generative Adversarial Networks (GANs) are used together with a fusion of deep transfer learning and LSTM networks, without separate feature extraction/selection, to classify pneumonia. We achieved more than 90% accuracy for all scenarios except one, and 99% accuracy for separating COVID-19 from the healthy group. We also compared the proposed network with other deep transfer learning networks (including Inception-ResNet V2, Inception V4, VGG16, and MobileNet) that have recently been widely used in pneumonia detection studies. The results of the proposed network were very promising in terms of accuracy, precision, sensitivity, and specificity compared with the other deep transfer learning approaches. Given its high performance, the proposed method could be used during the treatment of patients.

    Automatic Detection of Driver Fatigue Based on EEG Signals Using a Developed Deep Neural Network

    In recent years, detecting driver fatigue has been a significant practical necessity and issue. Even though several investigations have been undertaken to examine driver fatigue, there are relatively few standard datasets for identifying it. Earlier investigations used conventional methods relying on manually engineered characteristics to assess driver fatigue; in any case, such approaches require prior domain knowledge for feature extraction, which can increase computational complexity. The current work proposes a driver fatigue detection system, a fundamental necessity for minimizing road accidents. Data from 11 people were gathered for this purpose, resulting in a comprehensive dataset prepared in accordance with previously published criteria. A deep convolutional neural network–long short-term memory (CNN–LSTM) network is designed and developed to extract characteristics from raw EEG data corresponding to the six active areas A, B, C, D, E (based on a single channel), and F. The study's findings reveal that the suggested deep CNN–LSTM network can learn features hierarchically from raw EEG data and attain greater precision than previous comparative approaches for two-stage driver fatigue categorization. Because of its precision and high speed, the suggested approach may be utilized to construct automatic fatigue detection systems.
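The front end of such a pipeline, segmenting a raw single-channel EEG record into overlapping windows and applying a learned 1-D convolution, can be sketched as below. The window length, overlap, and sampling rate are hypothetical illustrative values, not the paper's settings.

```python
import numpy as np

def segment(eeg, win, hop):
    """Split a raw single-channel EEG record into overlapping windows."""
    return np.stack([eeg[i:i + win]
                     for i in range(0, len(eeg) - win + 1, hop)])

def conv1d(x, kernel, stride=1):
    """Valid 1-D cross-correlation: the basic CNN feature-extraction step."""
    k = len(kernel)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(out_len)])

# Hypothetical numbers: 2 s windows at 128 Hz with 50% overlap.
fs, win = 128, 256
eeg = np.random.default_rng(0).standard_normal(fs * 10)  # 10 s of synthetic EEG
windows = segment(eeg, win, win // 2)                    # (n_windows, 256)
kernel = np.random.default_rng(1).standard_normal(8)     # one learned filter
features = np.stack([conv1d(w, kernel, stride=2) for w in windows])
```

In the full CNN–LSTM the feature maps from stacked convolutional layers would then be fed, window by window, into LSTM layers so that temporal dependencies across windows inform the fatigue/alert decision.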

    Developing a Deep Neural Network for Driver Fatigue Detection Using EEG Signals Based on Compressed Sensing

    In recent years, driver fatigue has become one of the main causes of road accidents. As a result, fatigue detection systems have been developed to warn drivers, and, among the available methods, EEG signal analysis is recognized as the most reliable way to detect driver fatigue. This study presents an automated system for two-stage classification of driver fatigue, based on EEG signals, that combines compressed sensing (CS) theory and deep neural networks (DNNs). First, CS theory is used to compress the recorded EEG data in order to reduce the computational load. Then, the compressed EEG data are fed into the proposed deep convolutional neural network for automatic feature extraction/selection and classification. The proposed network architecture includes seven convolutional layers together with three long short-term memory (LSTM) layers. For compression rates of 40, 50, 60, 70, 80, and 90%, the simulation results for a single-channel recording show accuracies of 95, 94.8, 94.6, 94.4, 94.4, and 92%, respectively. Furthermore, compared with previous methods, the proposed method improves the accuracy of two-stage driver fatigue classification and can be used to detect driver fatigue effectively.
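The compression step in compressed sensing is a single linear measurement, y = Φx, with far fewer rows in Φ than samples in x. A minimal sketch, assuming a random Gaussian measurement matrix (a standard CS choice; the paper may use a different Φ) and interpreting the quoted rates as the percentage of samples discarded:

```python
import numpy as np

def compress(x, ratio, seed=0):
    """Compressed-sensing measurement y = Phi @ x.

    `ratio` is the compression rate in percent: ratio=90 keeps only
    10% as many measurements as input samples. Phi is a random
    Gaussian measurement matrix, normalized by sqrt(m).
    """
    n = len(x)
    m = max(1, int(round(n * (1.0 - ratio / 100.0))))  # number of measurements
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return phi @ x

# A 1000-sample EEG window compressed at a 90% rate -> 100 measurements.
eeg_window = np.random.default_rng(1).standard_normal(1000)
y = compress(eeg_window, ratio=90)
```

In this design the downstream CNN never sees the raw signal: it learns to classify directly from the low-dimensional measurements y, which is what reduces the computational load.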

    Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network

    Understanding how the brain perceives input from the outside world is one of the great goals of neuroscience. Neural decoding helps us model the connection between brain activity and visual stimulation, and the reconstruction of images from brain activity can be achieved through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, i.e., the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map EEG signals to the visual saliency map of each image. The first part of the GDN-GAN consists of Chebyshev graph convolutional layers, whose input is a functional connectivity-based graph representation of the EEG channels. The output of the GDN part is fed into the GAN part to reconstruct the image saliency. The GDN-GAN is trained using the Google Colaboratory Pro platform, and saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are then used as initial weights to reconstruct the grayscale image stimuli, so the proposed network realizes image reconstruction from EEG signals.
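A Chebyshev graph convolutional layer filters a signal on the EEG-channel graph with a truncated Chebyshev polynomial of the rescaled graph Laplacian. The sketch below shows the forward pass of one such filter; the graph construction, filter order, and coefficients are illustrative assumptions rather than the GDN-GAN's actual trained values.

```python
import numpy as np

def chebyshev_graph_conv(x, L, theta):
    """Chebyshev spectral graph filter of order len(theta)-1 (assumes >= 2).

    x:     (n_nodes, n_features) signal on the EEG-channel graph.
    L:     graph Laplacian; internally rescaled to L_t = 2L/lmax - I
           so its spectrum lies in [-1, 1].
    theta: filter coefficients; output = sum_k theta[k] * T_k(L_t) @ x,
           with the Chebyshev recurrence T_k = 2 L_t T_{k-1} - T_{k-2}.
    """
    n = L.shape[0]
    lmax = np.linalg.eigvalsh(L).max()
    L_t = 2.0 * L / lmax - np.eye(n)
    t_prev, t_curr = x, L_t @ x            # T_0 x and T_1 x
    out = theta[0] * t_prev + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_next = 2.0 * L_t @ t_curr - t_prev
        out = out + theta[k] * t_next
        t_prev, t_curr = t_next, t_next if False else t_next  # advance recurrence
        t_prev, t_curr = t_curr, t_next
    return out
```

Because T_k(L_t) is a degree-k polynomial in the Laplacian, each output channel mixes information from nodes at most k hops away, which is what makes the filter localized on the EEG connectivity graph without ever computing an eigendecomposition per forward pass.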

    Automatically Identified EEG Signals of Movement Intention Based on CNN Network (End-To-End)

    Movement-based brain–computer interfaces (BCIs) rely heavily on the automatic identification of movement intent, and they allow patients with motor disorders to communicate with external devices. One of the difficulties in automatically detecting movement intention is the extraction and selection of discriminative features, which often increases computational complexity. This research introduces a novel method for automatically classifying two-class and three-class movement-intention scenarios using EEG data. In the proposed technique, the raw EEG input is applied directly to a convolutional neural network (CNN) without feature extraction or selection, an end-to-end approach that previous research has regarded as difficult. The proposed network design includes ten convolutional layers followed by two fully connected layers. Owing to its high accuracy, the approach could be employed in BCI applications.
