Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection using Chest X-ray
Pneumonia is a life-threatening lung disease caused by bacterial or viral
infection. It can be fatal if not treated in time, so early diagnosis is vital. The
aim of this paper is to automatically detect bacterial and viral pneumonia
using digital x-ray images. It provides a detailed report on advances made in
making accurate detection of pneumonia and then presents the methodology
adopted by the authors. Four pre-trained deep Convolutional Neural Networks
(CNNs), namely AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for
transfer learning. A total of 5,247 bacterial, viral, and normal chest X-ray
images underwent preprocessing, and the modified images were used to train the
transfer-learning-based classifiers. In this work, the authors have
reported three classification schemes: normal vs. pneumonia, bacterial vs.
viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The
classification accuracies for these three schemes were 98%, 95%, and 93.3%,
respectively, higher in each scheme than the accuracies reported in the
literature. Therefore, the proposed study can be useful in
helping radiologists diagnose pneumonia faster and can support rapid airport
screening of pneumonia patients.
Comment: 13 figures, 5 tables. arXiv admin note: text overlap with arXiv:2003.1314
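As a rough sketch of the transfer-learning idea this abstract describes (freeze a pretrained backbone and retrain only the classification head), the toy example below uses a fixed random projection as a stand-in for the frozen CNN features and trains a logistic-regression head on top. The data, dimensions, and learning rate are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip logits to avoid overflow in exp for extreme values.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def frozen_features(x, W):
    # Stand-in for a frozen pretrained backbone: fixed projection + ReLU,
    # never updated during training (the "transfer" part).
    return np.maximum(x @ W, 0.0)

# Synthetic two-class data standing in for "normal" vs "pneumonia" images.
n, d_in, d_feat = 200, 16, 32
X = np.vstack([rng.normal(-1.0, 1.0, (n, d_in)),
               rng.normal(+1.0, 1.0, (n, d_in))])
y = np.concatenate([np.zeros(n), np.ones(n)])

W_pre = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)  # frozen weights
F = frozen_features(X, W_pre)

# Trainable "head": logistic regression on the frozen features.
w, b = np.zeros(d_feat), 0.0
for _ in range(500):
    p = sigmoid(F @ w + b)
    w -= 0.1 * F.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = float(np.mean((sigmoid(F @ w + b) > 0.5) == y))
```

In the real pipeline the frozen features would come from AlexNet, ResNet18, DenseNet201, or SqueezeNet, and the trainable head would be the replaced final fully connected layer.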
An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair Control
Across the 34 developed and 156 developing countries, about 132 million
disabled people, constituting 1.86% of the world population, need a
wheelchair. Moreover, millions of people suffer from diseases related to motor
disabilities that leave them unable to produce controlled movement in any of
the limbs or even the head. This paper proposes a system that aids people with
motor disabilities by restoring their ability to move effectively and
effortlessly, without having to rely on others, by means of an eye-controlled
electric wheelchair. The system input was images of the user's eye, which were
processed to estimate the gaze direction and the wheelchair was moved
accordingly. To accomplish such a feat, four user-specific methods were
developed, implemented, and tested, all of which were based on a benchmark
database created by the authors. The first three techniques were automatic,
employed correlation, and were variants of template matching, while the last one
used convolutional neural networks (CNNs). Different metrics to quantitatively
evaluate the performance of each algorithm in terms of accuracy and latency
were computed, and an overall comparison is presented. The CNN exhibited the best
performance (i.e. 99.3% classification accuracy), and thus it was the model of
choice for the gaze estimator, which commands the wheelchair motion. The system
was evaluated carefully on 8 subjects, achieving 99% accuracy under changing
indoor and outdoor illumination conditions. This required modifying a motorized
wheelchair to adapt it to the predictions output by the gaze estimation
algorithm. The wheelchair control can bypass any decision made by the gaze
estimator and immediately halt its motion with the help of an array of
proximity sensors, if the measured distance goes below a well-defined safety
margin.
Comment: Accepted for publication in Sensors; 19 figures, 3 tables
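Since the first three gaze-estimation techniques are described as correlation-based variants of template matching, here is a minimal normalized-cross-correlation sketch of that idea. The templates, patch size, and noise level are invented for illustration and are unrelated to the authors' benchmark database.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def classify_gaze(eye, templates):
    """Pick the gaze direction whose template correlates best with the eye patch."""
    return max(templates, key=lambda k: ncc(eye, templates[k]))

rng = np.random.default_rng(1)
# Hypothetical per-user templates, one patch per gaze direction.
templates = {k: rng.normal(size=(12, 12)) for k in ("left", "center", "right")}
# A noisy observation of the "left" template stands in for a captured eye image.
eye = templates["left"] + 0.1 * rng.normal(size=(12, 12))
direction = classify_gaze(eye, templates)
```

A production system would match the templates at multiple positions within the eye image and would need per-user calibration, which is presumably the role of the user-specific benchmark database.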
Addressing cognitive bias in medical language models
There is increasing interest in the application of large language models (LLMs)
to the medical field, in part because of their impressive performance on
medical exam questions. While promising, exam questions do not reflect the
complexity of real patient-doctor interactions. In reality, physicians'
decisions are shaped by many complex factors, such as patient compliance,
personal experience, ethical beliefs, and cognitive bias. As a step toward
understanding this, we hypothesize that when LLMs are confronted with
clinical questions containing cognitive biases, they will yield significantly
less accurate responses compared to the same questions presented without such
biases. In this study, we developed BiasMedQA, a benchmark for evaluating
cognitive biases in LLMs applied to medical tasks. Using BiasMedQA we evaluated
six LLMs, namely GPT-4, Mixtral-8x70B, GPT-3.5, PaLM-2, Llama 2 70B-chat, and
the medically specialized PMC Llama 13B. We tested these models on 1,273
questions from the US Medical Licensing Exam (USMLE) Steps 1, 2, and 3,
modified to replicate common clinically relevant cognitive biases. Our analysis
revealed varying effects of these biases across the LLMs, with GPT-4 standing out for
its resilience to bias, in contrast to Llama 2 70B-chat and PMC Llama 13B,
which were disproportionately affected by cognitive bias. Our findings
highlight the critical need for bias mitigation in the development of medical
LLMs, pointing towards safer and more reliable applications in healthcare.
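The evaluation protocol the abstract describes, scoring the same questions with and without an injected bias cue, can be sketched as below. The model, questions, and bias cue here are toy stand-ins, not BiasMedQA data or any real LLM API.

```python
def accuracy(model_answer, questions):
    """Fraction of questions the model answers correctly."""
    return sum(model_answer(q["text"]) == q["gold"] for q in questions) / len(questions)

def evaluate_bias_effect(model_answer, questions, bias_cue):
    """Score the same items plain and with a bias cue prepended; the
    accuracy drop measures susceptibility to that cognitive bias."""
    plain = accuracy(model_answer, questions)
    biased = accuracy(model_answer,
                      [{**q, "text": bias_cue + " " + q["text"]} for q in questions])
    return plain, biased, plain - biased

# Toy model that is distracted by the cue on one of the two questions.
def toy_model(text):
    if "colleague suggested B" in text and "Q2" in text:
        return "B"  # swayed by the (wrong) suggestion
    return {"Q1": "A", "Q2": "C"}[text.split()[-1]]  # answers by question id

qs = [{"text": "Q1", "gold": "A"}, {"text": "Q2", "gold": "C"}]
plain, biased, drop = evaluate_bias_effect(toy_model, qs, "A colleague suggested B.")
```

The per-model accuracy drop is then compared across LLMs, which is how a model like GPT-4 can be identified as relatively resilient while others are disproportionately affected.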
A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images
Computerized brain tumor classification from reconstructed microwave brain (RMB) images is important for the examination and observation of the development of brain disease. In this paper, an eight-layered lightweight classifier model called microwave brain image network (MBINet), built on a self-organized operational neural network (Self-ONN), is proposed to classify RMB images into six classes. Initially, an experimental antenna-sensor-based microwave brain imaging (SMBI) system was implemented, and RMB images were collected to create an image dataset of 1,320 images in total: 300 non-tumor images, 215 images each for the single-malignant and single-benign tumor classes, 200 images each for the double-benign and double-malignant tumor classes, and 190 images for the combined single-benign and single-malignant tumor class. Then, image resizing and normalization techniques were used for preprocessing. Thereafter, augmentation techniques were applied to the dataset to produce 13,200 training images per fold for 5-fold cross-validation. The MBINet model was trained and achieved accuracy, precision, recall, F1-score, and specificity of 96.97%, 96.93%, 96.85%, 96.83%, and 97.95%, respectively, for six-class classification using original RMB images. The MBINet model was compared with four Self-ONNs, two vanilla CNNs, and the ResNet50, ResNet101, and DenseNet201 pre-trained models, and showed better classification outcomes (almost 98%). Therefore, the MBINet model can be used to reliably classify tumor(s) from RMB images in the SMBI system.
This work was supported by the Universiti Kebangsaan Malaysia project grant code DIP-2021-024, and by Grant NPRP12S-0227-190164 from the Qatar National Research Fund, a member of the Qatar Foundation, Doha, Qatar; the claims made herein are solely the responsibility of the authors. Open access publication is supported by the Qatar National Library.
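The five metrics reported for MBINet (accuracy and macro-averaged precision, recall, F1-score, and specificity) can all be derived from a multi-class confusion matrix, as sketched below. The matrix values are hypothetical, chosen only to illustrate the computation, and are not MBINet's results.

```python
import numpy as np

def macro_metrics(cm):
    """Accuracy plus macro-averaged precision, recall, F1, and specificity
    from a square confusion matrix (rows = true class, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class c but wrong
    fn = cm.sum(axis=1) - tp          # true class c but missed
    tn = cm.sum() - tp - fp - fn      # everything else
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": tp.sum() / cm.sum(),
        "precision": precision.mean(),
        "recall": recall.mean(),
        "f1": (2 * precision * recall / (precision + recall)).mean(),
        "specificity": (tn / (tn + fp)).mean(),
    }

# Hypothetical 6-class confusion matrix (non-tumor, single benign, single
# malignant, double benign, double malignant, combined): a perfect
# classifier here, purely for illustration.
cm = np.diag([60, 43, 43, 40, 40, 38])
m = macro_metrics(cm)
```

With real predictions, off-diagonal counts lower the per-class precision and recall, which is how the reported 96-97% figures would arise from the 5-fold cross-validation results.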