Segmentation of Pathology Images: A Deep Learning Strategy with Annotated Data
Cancer has significantly threatened human life and health for many years. In the clinic, histopathology image segmentation is the gold standard for evaluating patient prognosis and treatment outcome. Manually labelling tumour regions in hundreds of high-resolution histopathological images is time-consuming and expensive for pathologists. Recently, advances in hardware and computer vision have made deep-learning-based methods the mainstream approach to segmenting tumours automatically, significantly reducing the workload of pathologists. However, most current methods rely on large-scale labelled histopathological images. This research therefore studies label-effective tumour segmentation methods using deep-learning paradigms to relieve the annotation burden. Chapter 3 proposes an ensemble framework for fully-supervised tumour segmentation. The performance of an individually trained network is usually limited by the significant morphological variance in histopathological images. We propose a fully-supervised ensemble fusion model that uses both shallow and deep U-Nets, trained on images of different resolutions and on different subsets of images, for robust prediction of tumour regions. Noise elimination is achieved with Convolutional Conditional Random Fields. Two open datasets are used to evaluate the proposed method: the ACDC@LungHP challenge at ISBI2019 and the DigestPath challenge at MICCAI2019. With a Dice coefficient of 79.7%, the proposed method takes third place in ACDC@LungHP; in DigestPath 2019, it achieves a Dice coefficient of 77.3%. Well-annotated images are indispensable for training fully-supervised segmentation strategies. However, large-scale histopathology images are rarely annotated finely in clinical practice: labels are often of poor quality, or only a few images are manually marked by experts. Consequently, fully-supervised methods cannot perform well in these cases.
Chapter 4 proposes self-supervised contrastive learning for tumour segmentation. A self-supervised cancer segmentation framework is proposed to reduce label dependency. An innovative contrastive learning scheme is developed to represent tumour features from unlabelled images. Unlike a standard U-Net, the backbone is a patch-based segmentation network. Additionally, data augmentation and contrastive losses are applied to improve the discriminability of tumour features, and a Convolutional Conditional Random Field is used to smooth predictions and eliminate noise. Three labelled and fourteen unlabelled images are collected from a private skin cancer dataset called BSS. Experimental results show that the proposed method achieves better tumour segmentation performance than other popular self-supervised methods. However, when evaluated on the same public datasets as Chapter 3, the self-supervised method struggles with fine-grained segmentation around tumour boundaries compared with our supervised method. Chapter 5 proposes a sketch-based weakly-supervised tumour segmentation method. To segment tumour regions precisely from coarse annotations, a sketch-supervised method is proposed, comprising a dual CNN-Transformer network and a global normalised class activation map. The CNN-Transformer network simultaneously models global and local tumour features. With the global normalised class activation map, a gradient-based tumour representation can be obtained from the dual network's predictions. We invited experts to mark fine and coarse annotations in the private BSS and public PAIP2019 datasets to facilitate reproducible performance comparisons. On the BSS dataset, the proposed method achieves 76.686% IoU and 86.6% Dice scores, outperforming state-of-the-art methods. Additionally, the proposed method achieves a Dice gain of 8.372% compared with U-Net on the PAIP2019 dataset.
The thesis presents three approaches to segmenting cancers from histology images: fully-supervised, self-supervised, and weakly-supervised methods. This research effectively segments tumour regions based on histopathological annotations and well-designed modules, and comprehensively demonstrates label-effective automatic histopathological image segmentation. Experimental results show that our methods achieve state-of-the-art segmentation performance on private and public datasets. In the future, we plan to integrate more tumour feature representation technologies with other medical modalities and apply them to clinical research.
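The Dice coefficient reported throughout these chapters measures the overlap between a predicted and a ground-truth tumour mask. A minimal NumPy sketch (the function name and toy masks are illustrative, not taken from the thesis):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 overlapping pixels, 4 predicted, 4 ground-truth.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[0, :4] = 1        # predicted tumour pixels
target[0, 1:4] = 1     # ground-truth tumour pixels
target[1, 0] = 1
print(round(dice_coefficient(pred, target), 3))  # 0.75
```

A Dice of 1.0 means perfect overlap; the small epsilon guards against division by zero when both masks are empty.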
StabJGL: a stability approach to sparsity and similarity selection in multiple network reconstruction
In recent years, network models have gained prominence for their ability to
capture complex associations. In statistical omics, networks can be used to
model and study the functional relationships between genes, proteins, and other
types of omics data. If a Gaussian graphical model is assumed, a gene
association network can be determined from the non-zero entries of the inverse
covariance matrix of the data. Due to the high-dimensional nature of such
problems, integrative methods that leverage similarities between multiple
graphical structures have become increasingly popular. The joint graphical
lasso is a powerful tool for this purpose; however, the current AIC-based
selection criterion used to tune the network sparsities and similarities leads
to poor performance in high-dimensional settings. We propose stabJGL, which
equips the joint graphical lasso with a stable and accurate penalty parameter
selection approach that combines the notion of model stability with
likelihood-based similarity selection. The resulting method makes the powerful
joint graphical lasso available for use in omics settings, and outperforms the
standard joint graphical lasso, as well as state-of-the-art joint methods, in
terms of all performance measures we consider. Applying stabJGL to proteomic data from a pan-cancer study, we demonstrate the method's potential for novel discoveries. A user-friendly R package for stabJGL with tutorials is available on GitHub at https://github.com/Camiling/stabJGL.
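Under the Gaussian graphical model assumption described above, network edges correspond to the non-zero off-diagonal entries of the inverse covariance (precision) matrix. A minimal sketch of reading edges off a precision matrix (the 4-gene matrix and names are invented for illustration; stabJGL itself is an R package):

```python
import numpy as np

# A hypothetical 4-gene precision matrix (inverse covariance).
# A zero off-diagonal entry means conditional independence: no edge.
precision = np.array([
    [ 2.0, -0.8,  0.0,  0.0],
    [-0.8,  2.0, -0.5,  0.0],
    [ 0.0, -0.5,  2.0,  0.0],
    [ 0.0,  0.0,  0.0,  1.0],
])

genes = ["g1", "g2", "g3", "g4"]
edges = [(genes[i], genes[j])
         for i in range(len(genes))
         for j in range(i + 1, len(genes))
         if precision[i, j] != 0]
print(edges)  # [('g1', 'g2'), ('g2', 'g3')]
```

In practice the precision matrix is unknown and must be estimated from high-dimensional data, which is exactly where penalised estimators such as the (joint) graphical lasso come in.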
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, enabling several antennas to be laid in parallel to achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
Is attention all you need in medical image analysis? A review
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
Over recent years, CNNs have achieved performance gains in medical image
analysis (MIA). CNNs can efficiently model local pixel interactions and can be
trained on small-scale MI data. The main disadvantage of typical CNN models is that they
ignore global pixel relationships within images, which limits their
generalisation ability to understand out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer components
(Transf/Attention), which preserve the ability to model global relationships,
have been proposed as lighter alternatives to full Transformers. Recently,
there has been an increasing trend to cross-pollinate complementary
local-global properties from CNN and Transf/Attention architectures, which has
led to a new era of hybrid models. Recent years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive framework for analysing generalisation opportunities
of scientific and clinical impact, from which new data-driven domain
generalisation and adaptation methods can be stimulated.
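At the core of the Transformer components discussed above is scaled dot-product attention, which relates every position in an input to every other position. A minimal NumPy sketch (shapes, seed, and names are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n, n) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of value vectors

rng = np.random.default_rng(0)
n, d = 5, 8                                          # 5 tokens, embedding dim 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 8)
```

The (n, n) score matrix is what gives attention its global receptive field, and also its quadratic cost in sequence length, which motivates the lighter Transf/Attention variants the review discusses.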
Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives
Deep learning has demonstrated remarkable performance across various tasks in
medical imaging. However, these approaches primarily focus on supervised
learning, assuming that the training and testing data are drawn from the same
distribution. Unfortunately, this assumption may not always hold true in
practice. To address these issues, unsupervised domain adaptation (UDA)
techniques have been developed to transfer knowledge from a labeled domain to a
related but unlabeled domain. In recent years, significant advancements have
been made in UDA, resulting in a wide range of methodologies, including feature
alignment, image translation, self-supervision, and disentangled representation
methods, among others. In this paper, we provide a comprehensive literature
review of recent deep UDA approaches in medical imaging from a technical
perspective. Specifically, we categorize current UDA research in medical
imaging into six groups and further divide them into finer subcategories based
on the different tasks they perform. We also discuss the respective datasets
used in the studies to assess the divergence between the different domains.
Finally, we discuss emerging areas and provide insights and discussions on
future research directions to conclude this survey. Comment: Under review.
HistoPerm: A Permutation-Based View Generation Approach for Improving Histopathologic Feature Representation Learning
Deep learning has been effective for histology image analysis in digital
pathology. However, many current deep learning approaches require large,
strongly- or weakly-labeled images and regions of interest, which can be
time-consuming and resource-intensive to obtain. To address this challenge, we
present HistoPerm, a view generation method for joint embedding architectures
that enhances representation learning for histology images. HistoPerm permutes
augmented views of patches extracted from whole-slide histology images to
improve classification performance. We
evaluated the effectiveness of HistoPerm on two histology image datasets for
Celiac disease and Renal Cell Carcinoma, using three widely used joint
embedding architecture-based representation learning methods: BYOL, SimCLR, and
VICReg. Our results show that HistoPerm consistently improves patch- and
slide-level classification performance in terms of accuracy, F1-score, and AUC.
Specifically, for patch-level classification accuracy on the Celiac disease
dataset, HistoPerm boosts BYOL and VICReg by 8% and SimCLR by 3%. On the Renal
Cell Carcinoma dataset, patch-level classification accuracy is increased by 2%
for BYOL and VICReg, and by 1% for SimCLR. In addition, on the Celiac disease
dataset, models with HistoPerm outperform the fully-supervised baseline model
by 6%, 5%, and 2% for BYOL, SimCLR, and VICReg, respectively. For the Renal
Cell Carcinoma dataset, HistoPerm lowers the classification accuracy gap for
the models up to 10% relative to the fully-supervised baseline. These findings
suggest that HistoPerm can be a valuable tool for improving representation
learning of histopathology features when access to labeled data is limited and
can lead to whole-slide classification results that are comparable to or
superior to fully-supervised methods.
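The core HistoPerm idea of permuting augmented views across a batch can be sketched as follows: one branch of each joint-embedding pair is shuffled among samples that share a slide/class label, so pairs come from different patches of the same source. The batch layout, fraction parameter, and names below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def permute_views(view_b, labels, frac=0.5, seed=0):
    """Permute a fraction of the second-view batch among samples that
    share the same slide/class label (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    view_b = view_b.copy()
    n = len(view_b)
    chosen = rng.choice(n, size=int(frac * n), replace=False)
    for lab in np.unique(labels[chosen]):
        idx = chosen[labels[chosen] == lab]     # chosen samples of this label
        view_b[idx] = view_b[rng.permutation(idx)]  # shuffle views within label
    return view_b

# Toy batch: 6 "patch embeddings" (rows) with slide labels.
views = np.arange(6, dtype=float).reshape(6, 1)
labels = np.array([0, 0, 0, 1, 1, 1])
permuted = permute_views(views, labels, frac=1.0)
print(permuted.shape)  # (6, 1)
```

Because shuffling stays within a label group, each sample's permuted partner still depicts the same slide/class, which is what lets the joint-embedding loss keep a meaningful positive pair.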
LCDctCNN: Lung Cancer Diagnosis of CT scan Images Using CNN Based Model
Lung cancer is among the most deadly and life-threatening diseases in the
world. Early diagnosis and accurate treatment are necessary for lowering the
lung cancer mortality rate. Computerized tomography (CT) scan images are among
the most effective inputs for lung cancer detection using deep learning models.
In this article, we propose a Convolutional Neural Network (CNN)-based deep
learning framework for the early detection of lung cancer using CT scan
images. We also analyze other models, such as Inception V3, Xception, and
ResNet-50, for comparison with our proposed model. We compare the models using the metrics of
accuracy, Area Under Curve (AUC), recall, and loss. After evaluating the
models' performance, we observed that our CNN outperformed the other models and is promising compared to traditional methods, achieving an accuracy of 92%, an AUC of 98.21%, a recall of 91.72%, and a loss of 0.328. Comment: 8 pages, accepted by the 10th International Conference on Signal Processing and Integrated Networks (SPIN 2023).
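The convolution operation underlying all of the CNN models above slides a small kernel over the image to capture local pixel interactions. A minimal 2D "valid" convolution in NumPy (a naive illustration, not the paper's architecture):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2D cross-correlation with 'valid' padding (no border)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "scan"
edge_kernel = np.array([[1.0, -1.0]])             # horizontal gradient detector
print(conv2d_valid(image, edge_kernel).shape)     # (4, 3)
```

Real CNN frameworks implement the same idea with many learned kernels per layer and highly optimised (often GPU-based) loops rather than explicit Python iteration.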
Ensuring Access to Safe and Nutritious Food for All Through the Transformation of Food Systems
Bayesian networks for disease diagnosis: What are they, who has used them and how?
A Bayesian network (BN) is a probabilistic graph based on Bayes' theorem,
used to show dependencies or cause-and-effect relationships between variables.
BNs are widely applied in diagnostic processes since they allow the
incorporation of medical knowledge into the model while expressing uncertainty in
terms of probability. This systematic review presents the state of the art in
the applications of BNs in medicine in general and in the diagnosis and
prognosis of diseases in particular. Indexed articles from the last 40 years
were included. The studies generally used the typical measures of diagnostic
and prognostic accuracy: sensitivity, specificity, accuracy, precision, and the
area under the ROC curve. Overall, we found that disease diagnosis and
prognosis based on BNs can be successfully used to model complex medical
problems that require reasoning under conditions of uncertainty. Comment: 22 pages, 5 figures, 1 table; student's first PhD paper.
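As a minimal illustration of the probabilistic reasoning BNs support, Bayes' theorem updates a disease probability from a test result using exactly the sensitivity and specificity measures the review discusses (the prevalence and test figures below are invented for illustration):

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_pos_given_d = sensitivity            # true-positive rate
    p_pos_given_not_d = 1.0 - specificity  # false-positive rate
    p_pos = prior * p_pos_given_d + (1.0 - prior) * p_pos_given_not_d
    return prior * p_pos_given_d / p_pos

# 1% prevalence, 90% sensitive, 95% specific test:
print(round(posterior(0.01, 0.90, 0.95), 3))  # 0.154
```

Even a fairly accurate test yields a modest posterior at low prevalence; a full BN chains many such updates across a graph of symptoms, tests, and diseases.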
A scoping review of natural language processing of radiology reports in breast cancer
Various natural language processing (NLP) algorithms have been applied in the literature to analyze radiology reports pertaining to the diagnosis and subsequent care of cancer patients. Applications of this technology include cohort selection for clinical trials, population of large-scale data registries, and quality improvement in radiology workflows, including mammography screening. This scoping review is the first to examine such applications in the specific context of breast cancer. Of the 210 articles initially identified, 44 met our inclusion criteria. Extracted data elements included both clinical and technical details of studies that developed or evaluated NLP algorithms applied to free-text radiology reports of breast cancer. Our review illustrates an emphasis on diagnostic and screening applications over treatment or therapeutic ones, and describes growth in deep learning and transfer learning approaches in recent years, although rule-based approaches continue to be useful. Furthermore, we observe increased efforts in code and software sharing, but not in data sharing.