Deep Convolutional Neural Network with Image Processing Techniques and Resnet252v2 for Detection of Covid19 from X-Ray Images
Coronavirus disease 2019 (COVID-19), caused by the SARS-CoV-2 virus, has emerged as a highly contagious infection with a significant global impact. It has spread rapidly across regions, affecting a substantial number of individuals. Research findings indicate that this rapid, widespread transmission has made it difficult for healthcare professionals to diagnose the condition promptly and to implement effective containment measures. Automating the diagnostic procedure has therefore become a critical necessity: studies show that it significantly improves work efficiency while protecting healthcare workers from exposure to the virus. Medical image analysis is a rapidly growing research area that offers a promising, more precise solution to this problem. This research paper introduces a novel approach for predicting SARS-CoV-2 infection from chest radiography images.
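The paper's title names a ResNet-V2 backbone. As a rough illustration of that style of transfer learning (not the authors' exact architecture), the sketch below freezes a Keras ResNet50V2 pretrained on ImageNet and adds a small binary head for COVID-19 vs. normal chest X-rays; the variant, input size, preprocessing, and head are all assumptions.

```python
# Hypothetical transfer-learning sketch: frozen ResNet50V2 backbone plus a
# small binary classification head for chest X-rays. The ResNet variant,
# image size, and head layout are assumptions, not taken from the paper.
import tensorflow as tf

IMG_SIZE = (224, 224)

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained backbone for initial training

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(COVID-19)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(), "accuracy"])
```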
Medical image registration using unsupervised deep neural network: A scoping literature review
In medicine, image registration is vital for image-guided interventions and other clinical applications. It remains a difficult problem, but with the advent of machine learning, considerable progress in algorithmic performance has recently been made in medical image registration. Deep neural networks open up opportunities in several medical applications, such as performing image registration in less time and with high accuracy, which can play a key role in targeting tumors during surgery. The current study presents a comprehensive scoping review of the state-of-the-art literature on medical image registration based on unsupervised deep neural networks, encompassing the related studies published in this field to date. We summarize the latest developments and applications of unsupervised deep learning-based registration methods in the medical field. Fundamental concepts, techniques, statistical analyses from different viewpoints, novelties, and future directions are discussed in detail. We hope this review helps readers interested in the field gain deep insight into this exciting area.
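Many of the unsupervised methods surveyed share a common recipe: a network predicts a deformation field from a moving/fixed image pair, a differentiable resampler warps the moving image, and training minimizes an image-similarity term plus a smoothness penalty, with no ground-truth deformations required. A minimal 2-D sketch of that recipe, assuming a toy CNN and arbitrary loss weights (not any specific method from the review):

```python
# Minimal sketch of unsupervised deformable registration (VoxelMorph-style).
# The tiny CNN, image sizes, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))  # 2-channel displacement field

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(moving, flow):
    # Normalised identity grid plus the predicted displacement, then resample.
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).repeat(n, 1, 1, 1)
    return F.grid_sample(moving, grid + flow.permute(0, 2, 3, 1),
                         align_corners=True)

model = RegNet()
moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = model(moving, fixed)
warped = warp(moving, flow)
similarity = F.mse_loss(warped, fixed)                 # image similarity term
smoothness = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean() + \
             (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
loss = similarity + 0.1 * smoothness                   # weight is arbitrary here
```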
A Comprehensive Review on Cancer Detection and Classification using Medical Images by Machine Learning and Deep Learning Models
In day-to-day life, machine learning and deep learning play a vital role in healthcare applications, predicting diseases such as cancer, heart attack, mental illness, and Parkinson's disease. Among these, cancer is a life-threatening disease that can lead to death. The primary aim of this study is to give a quick overview of various cancers and a comprehensive overview of machine learning and deep learning techniques for the detection and classification of several types of cancer. The study concentrates on the significance of machine learning and deep learning in detecting various cancers from medical images. It also discusses machine learning and deep learning algorithms that lead to accurate classification of medical images, early diagnosis, and immediate treatment for patients, and explores methodologies that have been used to predict cancer from low-dose computed tomography in order to reduce cancer-related deaths. As the study narrows its focus to lung cancer, it examines the limitations of existing lung cancer detection models and highlights the need for deeper study of novel cancer detection algorithms. In addition, the review considers the role of data preparation in lung cancer detection and the potential of genetic markers in improving the accuracy of machine learning models. Overall, this study gives valuable suggestions for achieving higher accuracy in cancer detection and classification using machine learning and deep learning techniques.
A Survey of the Impact of Self-Supervised Pretraining for Diagnostic Tasks with Radiological Images
Self-supervised pretraining has been observed to be effective at improving
feature representations for transfer learning, leveraging large amounts of
unlabelled data. This review summarizes recent research into its usage in
X-ray, computed tomography, magnetic resonance, and ultrasound imaging,
concentrating on studies that compare self-supervised pretraining to fully
supervised learning for diagnostic tasks such as classification and
segmentation. The most pertinent finding is that self-supervised pretraining
generally improves downstream task performance compared to full supervision,
most prominently when unlabelled examples greatly outnumber labelled examples.
Based on the aggregate evidence, recommendations are provided for practitioners
considering using self-supervised learning. Motivated by limitations identified
in current research, directions and practices for future study are suggested,
such as integrating clinical knowledge with theoretically justified
self-supervised learning methods, evaluating on public datasets, growing the
modest body of evidence for ultrasound, and characterizing the impact of
self-supervised pretraining on generalization.
Comment: 32 pages, 6 figures, a literature survey submitted to BMC Medical Imaging
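For orientation, one of the most common pretraining objectives compared in such studies is SimCLR-style contrastive learning, where two augmented views of each unlabelled image are embedded and an NT-Xent loss pulls matching views together. A minimal sketch, with the encoder, batch size, and temperature as placeholders (the survey itself covers many other pretext tasks):

```python
# Sketch of the NT-Xent (SimCLR) contrastive loss over two views of a batch.
# Projections z1, z2 would come from an encoder + projection head (omitted).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # pairwise cosine similarity
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive for row i is its other augmented view, at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)        # dummy projections
loss = nt_xent(z1, z2)
```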
High-Resolution Conductivity Reconstruction by Electrical Impedance Tomography using Structure-Aware Hybrid-Fusion Learning
Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks
In this study, we investigated whether self-supervised pretraining could
produce a neural network feature extractor applicable to multiple
classification tasks in B-mode lung ultrasound analysis. When fine-tuning on
three lung ultrasound tasks, pretrained models resulted in an improvement of
the average across-task area under the receiver operating characteristic curve (AUC) by 0.032
and 0.061 on local and external test sets respectively. Compact nonlinear
classifiers trained on features outputted by a single pretrained model did not
improve performance across all tasks; however, they did reduce inference time
by 49% compared to serial execution of separate fine-tuned models. When
training using 1% of the available labels, pretrained models consistently
outperformed fully supervised models, with a maximum observed test AUC increase
of 0.396 for the task of view classification. Overall, the results indicate
that self-supervised pretraining is useful for producing initial weights for
lung ultrasound classifiers.
Comment: 10 pages, 5 figures, submitted to IEEE Access
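The 49% inference-time reduction comes from computing features once and sharing them across tasks. A hypothetical sketch of that arrangement (the backbone, head sizes, and task names are placeholders, not the paper's architecture): a single, notionally SSL-pretrained extractor runs once per frame and several compact heads consume the shared features.

```python
# Shared-backbone inference sketch: one feature extractor, several small heads.
# ResNet-18, 64-unit heads, and the task names are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)   # pretend SSL-pretrained weights are loaded here
backbone.fc = nn.Identity()                # expose 512-d features
backbone.eval()

heads = nn.ModuleDict({
    "view":     nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 4)),
    "lines":    nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2)),
    "effusion": nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 2)),
})

frame = torch.rand(1, 3, 224, 224)         # one B-mode frame (resized, 3-channel)
with torch.no_grad():
    features = backbone(frame)             # backbone runs once per frame
    logits = {task: head(features) for task, head in heads.items()}
```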
Multimodal Deep Dilated Convolutional Learning for Lung Disease Diagnosis
Accurate and timely identification of pulmonary disease is critical for effective therapeutic intervention. Computed tomography (CT), chest radiography (X-ray), and positron emission tomography (PET) scans are examples of traditional diagnostic methods that rely on single-modality imaging; however, these methods are not always accurate or sufficient. Current diagnostic techniques mostly prioritize the analysis of a single modality, which limits a holistic understanding of lung diseases and hinders both diagnostic accuracy and the ability to tailor therapies to individual patients. To address this gap, this paper presents a novel multimodal deep learning framework that effectively incorporates data from CT, X-ray, and PET scans, allowing features unique to each modality to be extracted. Fusion methods, such as late or early fusion, are used to capture synergistic information across modalities. Adding further convolutional neural network (CNN) layers and pooling operations improves the model's ability to obtain abstract representations, and fully connected layers are then used for classification. The model is trained with appropriate loss functions and optimized using gradient-based techniques. The proposed methodology shows a significant improvement in the accuracy of lung disease diagnosis compared to conventional single-modality methods.
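As a rough illustration of the late-fusion variant described above, the sketch below gives each modality (CT, X-ray, PET) its own small CNN branch, concatenates the branch features, and classifies with fully connected layers; branch depth, feature width, and class count are assumptions rather than the paper's configuration.

```python
# Late-fusion sketch: one CNN branch per modality, concatenated features,
# fully connected classifier. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())      # -> 32-d feature vector

class LateFusionNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.ct, self.xray, self.pet = branch(), branch(), branch()
        self.classifier = nn.Sequential(
            nn.Linear(3 * 32, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, ct, xray, pet):
        fused = torch.cat([self.ct(ct), self.xray(xray), self.pet(pet)], dim=1)
        return self.classifier(fused)

model = LateFusionNet()
logits = model(torch.rand(2, 1, 128, 128),          # CT slice
               torch.rand(2, 1, 128, 128),          # chest X-ray
               torch.rand(2, 1, 128, 128))          # PET slice
```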
CCTCOVID: COVID-19 detection from chest X-ray images using Compact Convolutional Transformers
COVID-19 is a disease caused by a novel coronavirus that attacks the upper respiratory tract and the lungs. Its person-to-person transmissibility is considerably rapid, and this has caused serious problems in nearly every facet of daily life. While some infected individuals remain completely asymptomatic, others exhibit mild to severe symptoms. In addition, thousands of deaths around the globe have made rapid detection of COVID-19 an urgent need. In practice, this is commonly done by screening medical images such as Computed Tomography (CT) and X-ray images. However, cumbersome clinical procedures and a large number of daily cases impose great challenges on medical practitioners. Deep learning-based approaches have demonstrated profound potential across a wide range of medical tasks. We therefore introduce a transformer-based method for automatically detecting COVID-19 from X-ray images using Compact Convolutional Transformers (CCT). Our extensive experiments demonstrate the efficacy of the proposed method, which achieves an accuracy of 99.22% and outperforms previous work.
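For orientation, a toy sketch of the Compact Convolutional Transformer idea: a convolutional tokenizer turns the image into a token sequence, a transformer encoder processes it, and attention-weighted sequence pooling feeds a linear classifier. All sizes below are illustrative and do not reflect the authors' configuration or their reported accuracy.

```python
# Toy CCT-style model: conv tokenizer -> transformer encoder -> sequence pooling.
# Dimensions, depth, and input size are illustrative assumptions only.
import torch
import torch.nn as nn

class TinyCCT(nn.Module):
    def __init__(self, dim=128, num_classes=2):
        super().__init__()
        self.tokenizer = nn.Sequential(              # convolutional tokenizer
            nn.Conv2d(1, dim, 7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(3, stride=2, padding=1))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.attn_pool = nn.Linear(dim, 1)           # sequence pooling weights
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.tokenizer(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens)
        weights = self.attn_pool(tokens).softmax(dim=1)        # (B, N, 1)
        return self.head((weights * tokens).sum(dim=1))

model = TinyCCT()
logits = model(torch.rand(2, 1, 64, 64))   # small grayscale X-rays for illustration
```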
Efficient Feature Selection and ML Algorithm for Accurate Diagnostics
Machine learning algorithms have been deployed in numerous optimization, prediction, and classification problems, which has made them attractive in fields such as computer networks and medical diagnosis. Although these algorithms achieve convincing results in these fields, they face numerous challenges when deployed on imbalanced datasets: they are often biased towards the majority class and hence unable to generalize the learning process. In addition, they cannot effectively deal with high-dimensional datasets. Moreover, conventional feature selection techniques based on attribute significance are ineffective for the majority of diagnosis applications. In this paper, feature selection is performed using the more effective Neighbourhood Components Analysis (NCA). During the classification process, an ensemble classifier comprising K-Nearest Neighbours (KNN), Naive Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM) is built, trained, and tested. Finally, cross-validation is carried out to evaluate the developed ensemble model. The results show that the proposed classifier has the best performance in terms of precision, recall, F-measure, and classification accuracy.
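A hypothetical scikit-learn sketch of the described pipeline: Neighbourhood Components Analysis as the supervised dimensionality-reduction step, a soft-voting ensemble of KNN, NB, DT, and SVM, and k-fold cross-validation. The stand-in dataset and all hyperparameters are assumptions, not the paper's setup.

```python
# NCA + voting-ensemble sketch with cross-validation, on a stand-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)          # placeholder diagnostic data

ensemble = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=5)),
                ("svm", SVC(probability=True))],
    voting="soft")

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=10, random_state=0)),
    ("clf", ensemble)])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```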
Evaluation of automated airway morphological quantification for assessing fibrosing lung disease
Abnormal airway dilatation, termed traction bronchiectasis, is a typical feature of idiopathic pulmonary fibrosis (IPF). Volumetric computed tomography (CT) imaging captures the loss of normal airway tapering in IPF. We postulated that automated quantification of airway abnormalities could provide estimates of IPF disease extent and severity. We propose AirQuant, an automated computational pipeline that takes an airway segmentation and CT image as input, systematically parcellates the airway tree into its lobes and generational branches, and derives airway structural measures from chest CT. Importantly, AirQuant prevents the occurrence of spurious airway branches by thick wave propagation and removes loops in the airway tree by graph search, overcoming limitations of existing airway skeletonisation algorithms. Tapering between airway segments (intertapering) and airway tortuosity computed by AirQuant were compared between 14 healthy participants and 14 IPF patients. Airway intertapering was significantly reduced in IPF patients, and airway tortuosity was significantly increased when compared to healthy controls. Differences were most marked in the lower lobes, conforming to the typical distribution of IPF-related damage. AirQuant is an open-source pipeline that avoids limitations of existing airway quantification algorithms and has clinical interpretability. Automated airway measurements may have potential as novel imaging biomarkers of IPF severity and disease extent.
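To make two of the airway measures concrete, the sketch below computes tortuosity (arc length divided by end-to-end distance of a branch centreline) and a simple intertapering ratio (relative drop in mean diameter between consecutive segments); the centreline representation and exact formulae are assumptions for illustration, and AirQuant's actual definitions and pipeline are more involved.

```python
# Illustrative airway-measure sketch: tortuosity and a simple intertapering ratio.
# Inputs are assumed centreline points and per-segment lumen diameters.
import numpy as np

def tortuosity(centreline: np.ndarray) -> float:
    """centreline: (N, 3) ordered points along one airway segment (mm)."""
    arc_length = np.linalg.norm(np.diff(centreline, axis=0), axis=1).sum()
    chord = np.linalg.norm(centreline[-1] - centreline[0])
    return arc_length / chord            # 1.0 for a perfectly straight branch

def intertapering(parent_diameters, child_diameters) -> float:
    """Relative decrease in mean lumen diameter from parent to child segment."""
    parent, child = np.mean(parent_diameters), np.mean(child_diameters)
    return (parent - child) / parent

segment = np.array([[0, 0, 0], [1, 0.2, 0], [2, 0.1, 0.3], [3, 0.0, 0.5]], float)
print(tortuosity(segment))
print(intertapering([4.0, 3.9, 3.8], [3.2, 3.1, 3.0]))
```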
