Object Counting with Deep Learning
This thesis explores various empirical aspects of deep learning, specifically convolutional-network-based models, for efficient object counting. First, we train moderately large convolutional networks from scratch on comparatively small datasets containing a few hundred samples, using conventional image-processing-based data augmentation. Then, we extend this approach to unconstrained, outdoor images using more advanced architectural concepts. Additionally, we propose an efficient, randomized data-augmentation strategy based on sub-regional pixel distributions for low-resolution images.
Next, the effectiveness of depth-to-space shuffling of feature elements for efficient segmentation is investigated for simpler problems such as binary segmentation, which is often required in the counting framework. This depth-to-space operation violates the basic assumption of encoder-decoder segmentation architectures; consequently, the encoder model can be trained as a sparsely connected graph. Nonetheless, our depth-to-space models achieve accuracy comparable to that of standard encoder-decoder architectures.
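Since the abstract only names the operation, a minimal NumPy sketch of the depth-to-space rearrangement may help; the block size `r`, the NCHW layout, and the function name are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def depth_to_space(x, r):
    """Rearrange feature elements (N, C*r*r, H, W) -> (N, C, H*r, W*r)."""
    n, crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(n, c, r, r, h, w)    # split channels into (c, r, r)
    x = x.transpose(0, 1, 4, 2, 5, 3)  # interleave blocks: (n, c, h, r, w, r)
    return x.reshape(n, c, h * r, w * r)

# Hypothetical feature map: 8 channels become 2 channels at 2x resolution.
x = np.arange(2 * 8 * 3 * 3, dtype=np.float32).reshape(2, 8, 3, 3)
y = depth_to_space(x, 2)  # shape (2, 2, 6, 6)
```

Each spatial position in the output pulls its value from a distinct input channel, which is why the operation can stand in for a learned upsampling decoder.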
After that, the subtleties arising from the lack of localization information in the conventional scalar count loss for one-look models are illustrated. Without using additional annotations, a possible solution is proposed based on regularizing a network-generated heatmap via a weak, subsidiary loss. Models trained with this auxiliary loss alongside the conventional loss perform much better than their baseline counterparts, both qualitatively and quantitatively. Lastly, the intricacies of tiled prediction for high-resolution images are studied in detail, and a simple and effective trick of eliminating the normalization factor in an existing computational block is demonstrated. All of the approaches employed here are thoroughly benchmarked against previous state-of-the-art approaches across multiple heterogeneous datasets for object counting.
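The abstract does not specify the form of the subsidiary loss, so the following is only a hypothetical sketch of combining a scalar count loss with a weak heatmap term; the mass-matching regularizer, the weight `lam`, and all names are assumptions.

```python
import numpy as np

def count_loss(pred_count, true_count):
    # Conventional scalar count loss (squared error).
    return (pred_count - true_count) ** 2

def heatmap_loss(heatmap, true_count):
    # Hypothetical weak regularizer: push total heatmap mass toward the
    # true count so activations concentrate on the counted objects.
    return abs(float(heatmap.sum()) - true_count)

def total_loss(pred_count, heatmap, true_count, lam=0.1):
    # The auxiliary term is weighted weakly relative to the main loss.
    return count_loss(pred_count, true_count) + lam * heatmap_loss(heatmap, true_count)

# Toy example: heatmap mass already matches the true count of 8.
loss = total_loss(7.5, np.full((4, 4), 0.5), 8.0)
```

The point of such a term is that it needs no extra annotations: the same scalar count label supervises both the prediction and the spatial layout of the heatmap.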
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
The rise of deep learning in today's applications has entailed an increasing need to explain the model's decisions beyond prediction performance in order to foster trust and accountability. Recently, the field of explainable AI (XAI)
has developed methods that provide such explanations for already trained neural
networks. In computer vision tasks such explanations, termed heatmaps,
visualize the contributions of individual pixels to the prediction. So far, XAI methods and their heatmaps have mainly been validated qualitatively via human-based assessment, or evaluated through auxiliary proxy tasks such as pixel perturbation, weak object localization, or randomization tests. Due to the
lack of an objective and commonly accepted quality measure for heatmaps, it was
debatable which XAI method performs best and whether explanations can be
trusted at all. In the present work, we tackle the problem by proposing a
ground truth based evaluation framework for XAI methods based on the CLEVR
visual question answering task. Our framework provides a (1) selective, (2)
controlled and (3) realistic testbed for the evaluation of neural network
explanations. We compare ten different explanation methods, resulting in new
insights into the quality and properties of XAI methods, sometimes contradicting conclusions from previous comparative studies. The CLEVR-XAI
dataset and the benchmarking code can be found at
https://github.com/ahmedmagdiosman/clevr-xai. (37 pages, 9 tables, 2 figures, plus a 14-page appendix.)
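As an illustration of what ground-truth-based heatmap evaluation can look like, here is a minimal sketch of a mass-style metric: the fraction of positive relevance falling inside a ground-truth object mask. The function name and exact normalization are assumptions, not necessarily the paper's definitions.

```python
import numpy as np

def relevance_mass_accuracy(heatmap, gt_mask):
    # Fraction of the total positive relevance that falls inside the
    # ground-truth region; 1.0 means the explanation is fully on-object.
    r = np.clip(heatmap, 0.0, None)
    return float(r[gt_mask].sum() / r.sum())

# Toy 2x2 heatmap: 80% of the relevance lies on the diagonal object mask.
heatmap = np.array([[0.4, 0.1],
                    [0.1, 0.4]])
gt_mask = np.array([[True, False],
                    [False, True]])
score = relevance_mass_accuracy(heatmap, gt_mask)  # ~0.8
```

With per-object masks available (as in a synthetic VQA setting), such a score can rank explanation methods objectively instead of relying on human inspection.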
Validation of machine learning models to detect amyloid pathologies across institutions.
Semi-quantitative scoring schemes like the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) are the most commonly used method in Alzheimer's disease (AD) neuropathology practice. Computational approaches based on machine learning have recently generated quantitative scores for whole slide images (WSIs) that are highly correlated with human-derived semi-quantitative scores, such as those of CERAD, for Alzheimer's disease pathology. However, the robustness of such models has yet to be tested in different cohorts. To validate previously published machine learning algorithms using convolutional neural networks (CNNs) and determine whether pathological heterogeneity may alter algorithm-derived measures, 40 cases from the Goizueta Emory Alzheimer's Disease Center brain bank displaying an array of pathological diagnoses (including AD with and without Lewy body disease (LBD) and/or TDP-43-positive inclusions) and levels of Aβ pathologies were evaluated. Furthermore, to provide deeper phenotyping, amyloid burden in gray matter vs. whole tissue was compared, and quantitative CNN scores for both correlated significantly with CERAD-like scores. Quantitative scores also showed clear stratification between AD pathologies with or without additional diagnoses (including LBD and TDP-43 inclusions) and cases with no significant neurodegeneration (control cases), as well as by NIA-Reagan scoring criteria. Specifically, the concomitant-diagnosis group of AD + TDP-43 showed significantly greater CNN scores for cored plaques than the AD group. Finally, we report that whole-tissue computational scores correlate better with CERAD-like categories than computational scores from the field of view with the densest pathology, which is the standard of practice in neuropathological assessment per CERAD guidelines.
Together, these findings validate CNN models as robust to cohort variation and provide additional proof of concept for future studies to incorporate machine learning algorithms into neuropathological practice.
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Current learning machines have successfully solved hard application problems,
reaching high accuracy and displaying seemingly "intelligent" behavior. Here we
apply recent techniques for explaining decisions of state-of-the-art learning
machines and analyze various tasks from computer vision and arcade games. This
showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance-evaluation metrics can fail to distinguish these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral
Relevance Analysis that provides a practically effective way of characterizing
and validating the behavior of nonlinear learning machines. This helps to
assess whether a learned model indeed delivers reliably for the problem that it
was conceived for. Furthermore, our work intends to add a voice of caution to
the ongoing excitement about machine intelligence and pledges to evaluate and
judge some of these recent successes in a more nuanced manner. (Accepted for publication in Nature Communications.)
Single-cell analysis of ER-positive breast cancer treated with letrozole and ribociclib
Breast cancer is the most widespread cancer in the world, accounting for 25% of all female cancers. There is high inter- and intra-tumor heterogeneity in breast cancer, which makes it challenging to optimize treatment for the individual patient. In recent years, the role of immune infiltration in tumor carcinogenesis and pathophysiology has been increasingly recognized. It has therefore become a priority to understand the interactions and cooperation between immune and cancer cells. Despite thorough attempts to match treatment options with clinicopathological features such as histological classification, grade, stage, biomarkers, molecular subtypes, and intrinsic subtypes, many patients show resistance to treatment. One attempt to overcome treatment resistance is the emergence of combinatorial treatment, meaning treating patients with two drugs at the same time.
CDK4/6 inhibitors are anti-cancer drugs that inhibit cell growth and have shown promising results in combination with aromatase inhibitors for breast cancer patients with hormone-receptor-positive disease. This drug combination is not yet approved in Norway as standard neoadjuvant treatment. The NeoLetRib clinical trial gives patients access to the combination of an aromatase inhibitor and a CDK4/6 inhibitor. The study also offers the opportunity to investigate potential biomarkers for more personalized treatment, identify novel predictive biomarkers, and assess how the tumor microenvironment changes during treatment.
Single-cell analysis is the method we used to capture each cell's transcriptome in the tumor microenvironment. We performed scRNA-seq of breast cancer biopsies from patients enrolled in the NeoLetRib clinical trial before neoadjuvant treatment and after 21 days. This study shows that five cellular subtypes, including Tregs and four monocyte subtypes, had a significant proportional change. These cell types have been associated with the promotion of a proinflammatory microenvironment and may be associated with tumor progression.
Counting and Locating High-Density Objects Using Convolutional Neural Network
This paper presents a Convolutional Neural Network (CNN) approach for
counting and locating objects in high-density imagery. To the best of our
knowledge, this is the first object counting and locating method based on a
feature map enhancement and a Multi-Stage Refinement of the confidence map. The
proposed method was evaluated on two counting datasets: tree and car. For the tree dataset, our method returned a mean absolute error (MAE) of 2.05, a root-mean-squared error (RMSE) of 2.87, and a coefficient of determination (R²) of 0.986. For the car dataset (CARPK and PUCPR+), our method was superior to state-of-the-art methods. On these datasets, our approach achieved an MAE of 4.45 and 3.16, an RMSE of 6.18 and 4.39, and an R² of
0.975 and 0.999, respectively. The proposed method is suitable for dealing with
high object density, returning state-of-the-art performance for counting and locating objects. (15 pages, 10 figures, 8 tables.)
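The reported MAE, RMSE, and R² follow from the standard definitions applied to per-image counts; a short sketch, with illustrative array values that are not the paper's data:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error over per-image counts.
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    # Root-mean-squared error penalizes large count errors more heavily.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # Coefficient of determination: 1.0 indicates a perfect fit.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y_true = np.array([1.0, 2.0, 3.0, 4.0])  # toy ground-truth counts
y_pred = np.array([2.0, 2.0, 3.0, 4.0])  # toy predicted counts
```

Reporting all three together is informative because MAE reflects typical error, RMSE highlights outliers, and R² measures how much of the count variance the model explains.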
EZH2 modifies sunitinib resistance in renal cell carcinoma by kinome reprogramming
Acquired and intrinsic resistance to receptor tyrosine kinase inhibitors (RTKi) represents a major hurdle in improving the management of clear cell renal cell carcinoma (ccRCC). Recent reports suggest that drug resistance is driven by tumor adaptation via epigenetic mechanisms that activate alternative survival pathways. The histone methyltransferase EZH2 is frequently altered in many cancers, including ccRCC. To evaluate its role in ccRCC resistance to RTKi, we established and characterized a spontaneously metastatic, patient-derived xenograft (PDX) model that is intrinsically resistant to the RTKi sunitinib but not to the VEGF therapeutic antibody bevacizumab. Sunitinib maintained its anti-angiogenic and anti-metastatic activity but lost its direct anti-tumor effects due to kinome reprogramming, which resulted in suppression of pro-apoptotic and cell-cycle regulatory target genes. Modulating EZH2 expression or activity suppressed phosphorylation of certain RTKs, restoring the anti-tumor effects of sunitinib in models of acquired or intrinsically resistant ccRCC. Overall, our results highlight EZH2 as a rational target for therapeutic intervention in sunitinib-resistant ccRCC as well as a predictive marker for RTKi response in this disease.

This research was funded by Roswell Park Cancer Institute's Cancer Center Support Grant from the National Cancer Institute, NIH P30CA016056 (RP), and a generous donation by Richard and Deidre Turner (RP). This investigation was conducted in part in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR020128-01 from the National Center for Research Resources, National Institutes of Health.