12 research outputs found
Synthetic Correlated Diffusion Imaging for Prostate Cancer Detection and Risk Assessment
Prostate cancer (PCa) is the second most common form of cancer among men worldwide and the most frequently diagnosed cancer among men in 112 countries. While the overall 5-year survival rate for prostate cancer is very high, prognosis varies considerably depending on how early PCa is diagnosed and how aggressively it develops. As such, PCa screening is critical for early detection and treatment. However, many PCas develop slowly and pose a minimal risk of PCa-related mortality, in which case treatment can be limited to active surveillance of tumour development. Over the last few decades, magnetic resonance imaging (MRI) has been used extensively for PCa screening and assessment. In particular, multi-parametric magnetic resonance imaging (mpMRI), where multiple MRI modalities are acquired, is commonly used for PCa imaging. However, the use of mpMRI requires radiologists to interpret multiple MRI images in parallel, resulting in increased inter-observer variability. This is especially true for radiologists with less experience interpreting prostate MRI images. In an effort to address these concerns, a computational MRI modality known as correlated diffusion imaging (CDI) was introduced, with initial results showing promise for CDI as a PCa screening tool. However, CDI is uncalibrated and strongly dependent on the underlying MRI protocols used to compute it, which leads to inconsistencies across different protocols and significant inter- and intra-patient variability. In this thesis, a computational MRI technique known as synthetic correlated diffusion imaging (CDIs) is introduced. CDIs extends CDI through the addition of synthetic DWI and per-patient calibration, thereby providing flexibility and consistency beyond that of CDI. Additionally, a gradient-based optimization framework is developed through which the parameters of CDIs may be optimized for downstream clinical tasks.
The proposed CDIs and optimization framework were evaluated against current standard MRI modalities using a clinical MRI dataset comprising 200 PCa patients. Through clinical interpretation by an experienced radiologist, CDIs was found to provide better tissue contrast between healthy, low-risk PCa, and high-risk PCa tissue than standard MRI modalities. This suggests that CDIs provides visual indications of PCa and PCa risk level, which may allow radiologists to draw more accurate and consistent conclusions from imaging alone. CDIs may also be used to guide prostate biopsies, potentially indicating better biopsy locations and reducing the number of biopsies required. Upon quantitative evaluation, CDIs achieved a voxel-level area under the receiver operating characteristic curve (AUC) of 0.8446 for separation of healthy and PCa tissue, representing an increase of 0.0315 (p<0.0001) over the best-performing standard MRI modality and indicating the potential of CDIs for PCa screening and diagnosis. Moreover, CDIs achieved a voxel-level AUC of 0.8530 for distinguishing between high- and low-risk cancers, representing an increase of 0.1590 (p<0.0001) over the best-performing standard MRI modality and indicating the potential of CDIs for PCa risk assessment. These results suggest that CDIs may improve voxel-level identification of PCa, which is valuable for PCa localization and segmentation. Furthermore, machine learning models trained on CDIs images can benefit from this improved voxel-level contrast, potentially achieving better diagnostic, prognostic, or segmentation performance than models trained on standard MRI images.
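The voxel-level AUC figures quoted above are standard receiver-operating-characteristic quantities computed per voxel; the sketch below illustrates the idea with a pure-NumPy Mann-Whitney formulation. The toy volume, mask, and function name are hypothetical, not the thesis's actual evaluation pipeline.

```python
import numpy as np

def voxel_level_auc(scores, labels):
    """Voxel-level AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen positive voxel scores higher than a randomly
    chosen negative voxel (ties count half)."""
    scores = np.ravel(np.asarray(scores, dtype=float))
    labels = np.ravel(np.asarray(labels)).astype(bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy 2x2x2 "volume" with hypothetical intensities and a tumour mask
vol = np.array([[[0.9, 0.1], [0.8, 0.2]], [[0.7, 0.3], [0.95, 0.05]]])
mask = np.array([[[1, 0], [1, 0]], [[1, 0], [1, 0]]])
print(voxel_level_auc(vol, mask))  # 1.0: every PCa voxel outscores every healthy voxel
```

In practice, an AUC near 0.5 would indicate no voxel-level separability, while values approaching 1.0 indicate the kind of tissue contrast reported for CDIs.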
COVIDNet-CT: Detection of COVID-19 from Chest CT Images using a Tailored Deep Convolutional Neural Network Architecture
The COVID-19 pandemic continues to have a tremendous impact on patients and healthcare systems around the world. To combat this disease, there is a need for effective screening tools to identify patients infected with COVID-19, and to this end, CT imaging has been proposed as a key screening method to complement RT-PCR testing. Early studies have reported abnormalities in chest CT images which are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from abnormalities caused by other lung conditions. Motivated by this, we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for detection of COVID-19 cases from chest CT images. We also introduce COVIDx-CT, a CT image dataset comprising 104,009 images across 1,489 patient cases. Finally, we leverage explainability to investigate the decision-making behaviour of COVIDNet-CT and ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images.
Cancer-Net PCa-Data: An Open-Source Benchmark Dataset for Prostate Cancer Clinical Decision Support using Synthetic Correlated Diffusion Imaging Data
The recent introduction of synthetic correlated diffusion (CDI) imaging
has demonstrated significant potential in the realm of clinical decision
support for prostate cancer (PCa). CDI is a new form of magnetic resonance
imaging (MRI) designed to characterize tissue through the joint
correlation of diffusion signal attenuation across different Brownian motion
sensitivities. Despite this performance improvement, CDI data for PCa has
not previously been made publicly available. In our commitment to advancing
research efforts for PCa, we introduce Cancer-Net PCa-Data, an open-source
benchmark dataset of volumetric CDI imaging data of PCa patients.
Cancer-Net PCa-Data consists of CDI volumetric images from a cohort of 200
patient cases, along with full annotations (gland masks, tumor masks,
and PCa diagnosis for each tumor). We also analyze the demographic and label
region diversity of Cancer-Net PCa-Data for potential biases. Cancer-Net
PCa-Data is the first-ever public dataset of CDI imaging data for PCa, and
is a part of the global open-source initiative dedicated to advancement in
machine learning and imaging research to aid clinicians in the global fight
against cancer.
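As a rough illustration of "joint correlation of diffusion signal attenuation across different Brownian motion sensitivities", the sketch below multiplies co-registered DWI acquisitions at several b-values and averages over a local window. This is a simplified, hypothetical reading for intuition only, not the actual CDI or CDIs formulation; the b-values, window size, and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def toy_cdi(dwi_by_bvalue, window=3):
    """Toy 'correlated diffusion' image: voxel-wise product of co-registered
    DWI signals acquired at different b-values (Brownian motion
    sensitivities), averaged over a local spatial window to capture joint
    local signal behaviour."""
    joint = np.ones_like(next(iter(dwi_by_bvalue.values())), dtype=float)
    for b in sorted(dwi_by_bvalue):  # multiply attenuation across b-values
        joint *= dwi_by_bvalue[b]
    return uniform_filter(joint, size=window)  # local averaging window
```

The intuition is that tissue whose signal stays jointly high (or jointly low) across many diffusion sensitivities produces a distinctive value in the combined image, which is what gives CDI its tissue contrast.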
COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 from Chest CT Images Through Bigger, More Diverse Learning
The COVID-19 pandemic continues to rage on, with multiple waves causing
substantial harm to health and economies around the world. Motivated by the use
of CT imaging at clinical institutes around the world as an effective
complementary screening method to RT-PCR testing, we introduced COVID-Net CT, a
neural network tailored for detection of COVID-19 cases from chest CT images as
part of the open source COVID-Net initiative. However, one potential limiting
factor is the restricted quantity and diversity of training data, given the
single-nation patient cohort used. In this study, we introduce COVID-Net CT-2,
enhanced deep neural
networks for COVID-19 detection from chest CT images trained on the largest
quantity and diversity of multinational patient cases in the research literature.
We introduce two new CT benchmark datasets, the largest comprising a
multinational cohort of 4,501 patients from at least 15 countries. We leverage
explainability to investigate the decision-making behaviour of COVID-Net CT-2,
with the results for select cases reviewed and reported on by two
board-certified radiologists with over 10 and 30 years of experience,
respectively. The COVID-Net CT-2 neural networks achieved accuracy, COVID-19
sensitivity, PPV, specificity, and NPV of 98.1%/96.2%/96.7%/99.0%/98.8% and
97.9%/95.7%/96.4%/98.9%/98.7%, respectively. Explainability-driven performance
validation shows that COVID-Net CT-2's decision-making behaviour is consistent
with radiologist interpretation by leveraging correct, clinically relevant
critical factors. The results are promising and suggest the strong potential of
deep neural networks as an effective tool for computer-aided COVID-19
assessment. While not a production-ready solution, we hope the open-source,
open-access release of COVID-Net CT-2 and benchmark datasets will continue to
enable researchers, clinicians, and citizen data scientists alike to build upon
them.
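The accuracy, sensitivity, PPV, specificity, and NPV values reported above are standard quantities derived from binary confusion-matrix counts; a minimal sketch of that derivation follows. The counts and function name are hypothetical, for illustration only, not the paper's test data.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening metrics from binary confusion-matrix counts
    (positive class = COVID-19)."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "ppv":         tp / (tp + fp),  # positive predictive value
        "specificity": tn / (tn + fp),  # true negative rate
        "npv":         tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts for illustration only
metrics = screening_metrics(tp=96, fp=3, tn=99, fn=4)
```

Reporting all five together matters for screening: a model can achieve high accuracy on an imbalanced test set while still having poor sensitivity or PPV for the positive class.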
COVIDx CXR-4: An Expanded Multi-Institutional Open-Source Benchmark Dataset for Chest X-ray Image-Based Computer-Aided COVID-19 Diagnostics
The global ramifications of the COVID-19 pandemic remain significant,
exerting persistent pressure on nations even three years after its initial
outbreak. Deep learning models have shown promise in improving COVID-19
diagnostics but require diverse and larger-scale datasets to improve
performance. In this paper, we introduce COVIDx CXR-4, an expanded
multi-institutional open-source benchmark dataset for chest X-ray image-based
computer-aided COVID-19 diagnostics. COVIDx CXR-4 expands significantly on the
previous COVIDx CXR-3 dataset by increasing the total patient cohort size by
greater than 2.66 times, resulting in 84,818 images from 45,342 patients across
multiple institutions. We provide extensive analysis on the diversity of the
patient demographic, imaging metadata, and disease distributions to highlight
potential dataset biases. To the best of the authors' knowledge, COVIDx CXR-4
is the largest and most diverse open-source COVID-19 CXR dataset and is made
publicly available as part of an open initiative to advance research to aid
clinicians in the fight against COVID-19.
COVIDx CXR-3: A Large-Scale, Open-Source Benchmark Dataset of Chest X-ray Images for Computer-Aided COVID-19 Diagnostics
More than two years after the beginning of the COVID-19 pandemic, this
crisis continues to devastate communities globally. The use of chest X-ray
(CXR) imaging as a complementary screening strategy to RT-PCR testing is not
only prevailing but has greatly increased due to its routine clinical use for
respiratory complaints. Thus far, many visual perception models have been
proposed for COVID-19 screening based on CXR imaging. Nevertheless, the
accuracy and the generalization capacity of these models are very much
dependent on the diversity and the size of the dataset they were trained on.
Motivated by this, we introduce COVIDx CXR-3, a large-scale benchmark dataset
of CXR images for supporting COVID-19 computer vision research. COVIDx CXR-3 is
composed of 30,386 CXR images from a multinational cohort of 17,026 patients
from at least 51 countries, making it, to the best of our knowledge, the most
extensive, most diverse COVID-19 CXR dataset in open access form. Here, we
provide comprehensive details on the various aspects of the proposed dataset
including patient demographics, imaging views, and infection types. The hope is
that COVIDx CXR-3 can assist scientists in advancing machine learning research
against both the COVID-19 pandemic and related diseases.Comment: 5 pages, MED-NeurIPS 2022 worksho
Cancer-Net PCa-Gen: Synthesis of Realistic Prostate Diffusion Weighted Imaging Data via Anatomic-Conditional Controlled Latent Diffusion
In Canada, prostate cancer is the most common form of cancer in men and
accounted for 20% of new cancer cases for this demographic in 2022. Due to
recent successes in leveraging machine learning for clinical decision support,
there has been significant interest in the development of deep neural networks
for prostate cancer diagnosis, prognosis, and treatment planning using
diffusion weighted imaging (DWI) data. A major challenge hindering widespread
adoption in clinical use is poor generalization of such networks due to
scarcity of large-scale, diverse, balanced prostate imaging datasets for
training such networks. In this study, we explore the efficacy of latent
diffusion for generating realistic prostate DWI data through the introduction
of an anatomic-conditional controlled latent diffusion strategy. To the best of
the authors' knowledge, this is the first study to leverage conditioning for
synthesis of prostate cancer imaging. Experimental results show that the
proposed strategy, which we call Cancer-Net PCa-Gen, enhances synthesis of
diverse prostate images through controllable tumour locations and better
anatomical and textural fidelity. These crucial features make it well-suited
for augmenting real patient data, enabling neural networks to be trained on a
more diverse and comprehensive data distribution. The Cancer-Net PCa-Gen
framework and sample images have been made publicly available at
https://www.kaggle.com/datasets/deetsadi/cancer-net-pca-gen-dataset as a part
of a global open-source initiative dedicated to accelerating advancement in
machine learning to aid clinicians in the fight against cancer.
MMRNet: Improving Reliability for Multimodal Object Detection and Segmentation for Bin Picking via Multimodal Redundancy
Recently, there has been tremendous interest in Industry 4.0 infrastructure
to address labor shortages in global supply chains. Deploying artificial
intelligence-enabled robotic bin picking systems in the real world has become
particularly important for reducing stress and physical demands of workers
while increasing speed and efficiency of warehouses. To this end, artificial
intelligence-enabled robotic bin picking systems may be used to automate order
picking, but with the risk of causing expensive damage during an abnormal event
such as sensor failure. As such, reliability becomes a critical factor for
translating artificial intelligence research into real-world applications and
products. In this paper, we propose a reliable object detection and
segmentation system with MultiModal Redundancy (MMRNet) for tackling object
detection and segmentation for robotic bin picking using data from different
modalities. This is the first system that introduces the concept of multimodal
redundancy to address sensor failure issues during deployment. In particular,
we realize the multimodal redundancy framework with a gate fusion module and
dynamic ensemble learning. Finally, we present a new label-free multi-modal
consistency (MC) score that utilizes the output from all modalities to measure
the overall system output reliability and uncertainty. Through experiments, we
demonstrate that in the event of a missing modality, our system provides much
more reliable performance than baseline models. We also demonstrate that
our MC score is a more reliable indicator of output quality at inference
time than model-generated confidence scores, which are often
over-confident.
COVIDx CT-3: A Large-scale, Multinational, Open-Source Benchmark Dataset for Computer-aided COVID-19 Screening from Chest CT Images
Computed tomography (CT) has been widely explored as a COVID-19 screening and
assessment tool to complement RT-PCR testing. To assist radiologists with
CT-based COVID-19 screening, a number of computer-aided systems have been
proposed; however, many proposed systems are built using CT data which is
limited in both quantity and diversity. Motivated to support efforts in the
development of machine learning-driven screening systems, we introduce COVIDx
CT-3, a large-scale multinational benchmark dataset for detection of COVID-19
cases from chest CT images. COVIDx CT-3 includes 431,205 CT slices from 6,068
patients across at least 17 countries, which to the best of our knowledge
represents the largest, most diverse dataset of COVID-19 CT images in
open-access form. Additionally, we examine the data diversity and potential
biases of the COVIDx CT-3 dataset, finding that significant geographic and
class imbalances remain despite efforts to curate data from a wide variety of
sources.