3 research outputs found
Ensembles of Convolutional Neural Networks models for pediatric pneumonia diagnosis
Pneumonia is a lung infection that accounts for 15% of childhood mortality, killing over
800,000 children under five worldwide every year. The disease is mainly caused by
viruses or bacteria. X-ray imaging analysis is one of the most widely used methods for
pneumonia diagnosis. These clinical images can be analyzed with machine learning
methods such as convolutional neural networks (CNNs), which learn to extract the
features critical for classification. However, the usability of these systems in
medicine is limited by their lack of interpretability: such models cannot generate
an explanation, understandable from a human perspective, of how they reached their
results. Another problem that hinders the adoption of this technology is the limited
amount of labeled data in many medical domains.
The main contributions of this work are twofold. The first is the design of a new
explainable artificial intelligence (XAI) technique based on combining the individual
heatmaps obtained from each model in the ensemble. This addresses the explainability
and interpretability problems of CNN "black boxes" by highlighting the areas of the
image most relevant to the classification. The second is the development of new
ensemble deep learning models for classifying chest X-rays that achieve highly
competitive results with small training datasets. We tested our ensemble model on a
small dataset of pediatric X-rays (950 samples) with low quality and high anatomical
variability (one of the biggest challenges). We also tested other strategies, such as
single CNNs trained from scratch and transfer learning using CheXNet. Our results show
that our ensemble model outperforms these strategies, obtaining highly competitive
results. Finally, we confirmed the robustness of our approach on another pneumonia
diagnosis dataset [1].
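The abstract does not specify the exact rule used to combine the per-model heatmaps, but a common choice for this kind of ensemble XAI technique is a (optionally weighted) average of the individual class-activation maps, normalised for overlay on the X-ray. The following is a minimal sketch under that assumption; the function name and the weighting scheme are illustrative, not taken from the paper.

```python
import numpy as np

def combine_heatmaps(heatmaps, weights=None):
    """Combine per-model class-activation heatmaps into one ensemble map.

    heatmaps: list of 2-D arrays (same shape), one per CNN in the ensemble.
    weights:  optional per-model weights (e.g. validation accuracy);
              defaults to a plain average.  This weighting scheme is an
              assumption, not the paper's stated method.
    """
    stack = np.stack(heatmaps)                      # (n_models, H, W)
    if weights is None:
        weights = np.ones(len(heatmaps))
    weights = np.asarray(weights, dtype=float)
    combined = np.tensordot(weights / weights.sum(), stack, axes=1)
    # Normalise to [0, 1] so the map can be overlaid on the X-ray.
    combined -= combined.min()
    rng = combined.max()
    return combined / rng if rng > 0 else combined
```

Averaging tends to suppress regions that only one model attends to, so the combined map highlights areas on which the ensemble agrees.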
Effective EEG analysis for advanced AI-driven motor imagery BCI systems
Developing effective signal processing for brain-computer interfaces (BCIs) and brain-machine interfaces (BMIs) involves factoring in three aspects of functionality: classification performance, execution time, and the number of data channels used. The contributions in this thesis are centered on these three issues. Contributions are focused on the classification of motor imagery (MI) data, which is generated during imagined movements. Typically, EEG time-series data is segmented for data augmentation or to mimic the buffering that happens in an online BCI. A multi-segment decision fusion approach is presented, which takes consecutive temporal segments of EEG data and uses decision fusion to boost classification performance. It was computationally lightweight and improved the performance of four conventional classifiers. Also, an analysis of the contributions of electrodes from different scalp regions is presented, and a subset of channels is recommended. Sparse learning (SL) classifiers have exhibited strong classification performance in the literature. However, they are computationally expensive. To reduce test-set execution times, a novel EEG classification pipeline, called GABSLEEG, is presented, consisting of a genetic algorithm (GA) module for channel selection and a dictionary-based SL module for classification. Subject-specific channel selection was carried out, in which channels are selected based on training data from the subject. Using the GA-recommended subset of EEG channels reduced execution time by 60% whilst preserving classification performance.
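The abstract describes multi-segment decision fusion only at a high level, so the sketch below illustrates one plausible realisation: each trial is split into consecutive temporal segments, each segment is classified independently, and the per-segment class probabilities are fused by averaging (soft voting). The function names and the soft-voting rule are assumptions for illustration, not the thesis's exact method.

```python
import numpy as np

def segment_trial(trial, n_segments):
    """Split one EEG trial (channels x samples) into consecutive
    temporal segments along the time axis."""
    return np.array_split(trial, n_segments, axis=1)

def fuse_decisions(segment_probs):
    """Fuse per-segment class-probability vectors by averaging
    (soft voting) and return the winning class index."""
    return int(np.argmax(np.mean(segment_probs, axis=0)))
```

In use, any conventional classifier (e.g. LDA or SVM) would produce the per-segment probabilities; fusing them lets the final decision draw on the whole trial while each classifier sees only a short window.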
Although subject-specific channel selection is widely used in the literature, effective subject-independent channel selection, in which channels are detected using data from other subjects, is an ideal aim because it leads to lower training latency and reduces the number of electrodes needed. A novel convolutional neural network (CNN)-based subject-independent channel
selection method is presented, called the integrated channel selection (ICS) layer. It performed on a par with or better than subject-specific channel selection. It was computationally efficient, operating 12-17 times faster than the GA channel
selection module. The ICS layer method was versatile, performing well with two different CNN architectures and datasets.
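The abstract does not detail the ICS layer's architecture. A common way to build a learnable channel-selection layer is to give each EEG channel a scalar weight trained with the rest of the network, then keep only the top-k channels by weight magnitude at inference. The toy class below sketches that idea; the class name, weighting scheme, and selection rule are hypothetical, not the thesis's design.

```python
import numpy as np

class ChannelSelectionLayer:
    """Toy stand-in for a learned channel-selection layer.

    Each EEG channel gets a scalar weight (here supplied directly; in a
    real network it would be learned by backpropagation).  At inference,
    the top-k channels by absolute weight are retained.
    """
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def forward(self, x):
        # x: (channels, samples); scale each channel by its weight so
        # low-weight channels contribute little to downstream layers.
        return x * self.weights[:, None]

    def top_channels(self, k):
        # Indices of the k channels with the largest |weight|, sorted.
        return sorted(np.argsort(-np.abs(self.weights))[:k].tolist())
```

Because the weights are trained on pooled data from many subjects, the selected subset transfers to unseen subjects, which is what makes the approach subject-independent.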