Exploring Context with Deep Structured models for Semantic Segmentation
State-of-the-art semantic image segmentation methods are mostly based on
training deep convolutional neural networks (CNNs). In this work, we propose to
improve semantic segmentation with the use of contextual information. In
particular, we explore `patch-patch' context and `patch-background' context in
deep CNNs. We formulate deep structured models by combining CNNs and
Conditional Random Fields (CRFs) for learning the patch-patch context between
image regions. Specifically, we formulate CNN-based pairwise potential
functions to capture semantic correlations between neighboring patches.
Efficient piecewise training of the proposed deep structured model is then
applied in order to avoid repeated expensive CRF inference during the course of
back propagation. For capturing the patch-background context, we show that a
network design with traditional multi-scale image inputs and sliding pyramid
pooling is very effective for improving performance. We perform comprehensive
evaluation of the proposed method. We achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets. In particular, we report a record intersection-over-union score on one of these datasets.
Comment: 16 pages. Accepted to IEEE T. Pattern Analysis & Machine Intelligence, 2017. Extended version of arXiv:1504.0101
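The patch-patch context idea above, a CRF whose pairwise potentials relate neighboring image regions, can be illustrated with a minimal energy computation. The sketch below is not the paper's implementation: it uses one shared pairwise cost table (the paper predicts edge-specific potentials with a CNN), and all numbers are toy values.

```python
import numpy as np

def crf_energy(labels, unary, pairwise, edges):
    """Energy of a labeling under a CRF with unary and pairwise potentials.

    labels:   (N,) integer label per patch/node
    unary:    (N, L) cost of assigning each label to each node
    pairwise: (L, L) cost table for labels on adjacent nodes
              (a single shared table here; the paper predicts these
              per edge with a CNN)
    edges:    list of (i, j) index pairs for neighboring patches
    """
    e = unary[np.arange(len(labels)), labels].sum()
    for i, j in edges:
        e += pairwise[labels[i], labels[j]]
    return e

# Toy example: 3 patches in a chain, 2 labels.
unary = np.array([[0.1, 2.0], [1.5, 0.2], [0.3, 1.0]])
pairwise = np.array([[0.0, 1.0], [1.0, 0.0]])  # Potts-style smoothness
edges = [(0, 1), (1, 2)]
labels = np.array([0, 0, 0])
print(crf_energy(labels, unary, pairwise, edges))  # 0.1 + 1.5 + 0.3 = 1.9
```

Piecewise training, as the abstract notes, fits such potentials without running full CRF inference inside every backpropagation step; at test time the energy is minimized over labelings.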
Domain-adapted driving scene understanding with uncertainty-aware and diversified generative adversarial networks
Funding Information: This work was supported by Fisheries Innovation & Sustainability (FIS) and the U.K. Department for Environment, Food & Rural Affairs (DEFRA) under grant numbers FIS039 and FIS045A.
Uncertainty quantification in prostate segmentation
Prostate cancer, a significant global health challenge, necessitates innovative diagnostic solutions. Despite the invaluable role of Magnetic Resonance Imaging (MRI), challenges persist in analysis due to time-intensive tasks and inter-reader variability. Accurate prostate segmentation is critical for diagnosis, influencing clinical decisions and further testing choices.
Traditionally, Convolutional Neural Networks (CNNs) have been employed for automated segmentation tasks, but the manual assessment of segmentation quality remains a crucial bottleneck. This research shifts the paradigm by exploring statistical approaches, specifically focusing on Conformal Prediction (CP), to evaluate the quality of prostate segmentation. Clinically relevant metrics, including Dice Score, relative volume difference, efficiency, and validity, are employed for quantitative assessment and comparison. The conformal classifier demonstrates robustness across diverse datasets. Nearest-Neighbor interpolation ensures image resizing uniformity, and patient-centric data splitting with Region of Interest (ROI) extraction enhances the model's focus.
The work we present is an innovative approach in prostate cancer segmentation using conformal prediction. It focuses on quantifying uncertainties in segmentation and evaluates segmentation quality through the Dice Score and RVD metrics. The study stands out for its high validity and efficiency, achieving percentages ranging from 94.24% to 99.34% on external datasets. This approach significantly enhances the diagnostic accuracy in prostate cancer detection via MRI analysis, showcasing the potential of integrating conformal classification in medical imaging to improve precision in clinical diagnostics.
This research advances prostate cancer diagnosis methodologies, emphasizing the novel application of conformal prediction for quantifying the quality of segmentations obtained by other deep learning models. The findings underscore the importance of precise segmentation quality assessment, emphasizing the significance of specific metrics in evaluating the proposed statistical approach for quality control in prostate cancer diagnosis.
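The split-conformal idea behind this quality-control approach can be sketched in a few lines. Everything below is a hypothetical illustration: the nonconformity scores (here imagined as something like 1 − Dice on cases with known quality), the calibration values, and the significance level are toy assumptions, not the thesis's actual measure.

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """Split-conformal p-value: how typical is the test nonconformity
    score relative to a held-out calibration set?"""
    n = len(cal_scores)
    return (np.sum(cal_scores >= test_score) + 1) / (n + 1)

def conforms(cal_scores, test_score, alpha=0.1):
    """Accept the test segmentation as valid at miscoverage level alpha;
    otherwise flag it for manual review."""
    return conformal_p_value(cal_scores, test_score) > alpha

# Toy calibration set of nonconformity scores from segmentations
# whose quality was verified by a reader.
cal = np.array([0.05, 0.08, 0.10, 0.12, 0.15, 0.18, 0.20, 0.25, 0.30])
print(conforms(cal, 0.12))   # typical score -> accepted
print(conforms(cal, 0.95))   # extreme score -> flagged for review
```

The appeal of this construction is its distribution-free validity guarantee: at level alpha, no more than roughly an alpha fraction of genuinely good segmentations are flagged, regardless of the underlying segmentation model.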
An Informative Path Planning Framework for Active Learning in UAV-based Semantic Mapping
Unmanned aerial vehicles (UAVs) are frequently used for aerial mapping and
general monitoring tasks. Recent progress in deep learning enabled automated
semantic segmentation of imagery to facilitate the interpretation of
large-scale complex environments. Commonly used supervised deep learning for
segmentation relies on large amounts of pixel-wise labelled data, which is
tedious and costly to annotate. The domain-specific visual appearance of aerial
environments often prevents the usage of models pre-trained on publicly
available datasets. To address this, we propose a novel general planning
framework for UAVs to autonomously acquire informative training images for
model re-training. We leverage multiple acquisition functions and fuse them
into probabilistic terrain maps. Our framework combines the mapped acquisition
function information into the UAV's planning objectives. In this way, the UAV
adaptively acquires informative aerial images to be manually labelled for model
re-training. Experimental results on real-world data and in a photorealistic
simulation show that our framework maximises model performance and drastically
reduces labelling efforts. Our map-based planners outperform state-of-the-art
local planning. Comment: 18 pages, 24 figures
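The core loop described above, fusing several acquisition functions into a probabilistic terrain map and steering the UAV toward informative cells, can be sketched as follows. This is a minimal illustration under assumed names and toy values, not the paper's planner: the two acquisition maps, the equal weights, and the greedy waypoint rule are all simplifying assumptions.

```python
import numpy as np

def fuse_acquisition_maps(maps, weights):
    """Weighted fusion of per-cell acquisition values (e.g. model
    uncertainty, image novelty) into one probabilistic terrain map."""
    maps = np.asarray(maps, dtype=float)
    w = np.asarray(weights, dtype=float)
    fused = np.tensordot(w, maps, axes=1)
    return fused / fused.sum()  # normalise to a probability map

def next_waypoint(fused_map):
    """Greedy stand-in planner: fly to the highest-value cell.
    (The paper optimises full paths, not single cells.)"""
    return np.unravel_index(np.argmax(fused_map), fused_map.shape)

# Toy 2x2 terrain with two acquisition functions.
uncertainty = np.array([[0.1, 0.9], [0.2, 0.3]])
novelty     = np.array([[0.4, 0.1], [0.7, 0.2]])
fused = fuse_acquisition_maps([uncertainty, novelty], [0.5, 0.5])
print(next_waypoint(fused))  # cell with the most informative imagery
```

Images captured at the chosen cells would then be labelled and used for model re-training, closing the active-learning loop.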
MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for multiple uncertainty types and tasks
Predictive uncertainty estimation is essential for safe deployment of Deep
Neural Networks in real-world autonomous systems. However, disentangling the
different types and sources of uncertainty is non-trivial for most datasets,
especially since there is no ground truth for uncertainty. In addition, while
adverse weather conditions of varying intensities can disrupt neural network
predictions, they are usually under-represented in both training and test sets
in public datasets. We attempt to mitigate these shortcomings and introduce the MUAD
dataset (Multiple Uncertainties for Autonomous Driving), consisting of 10,413
realistic synthetic images with diverse adverse weather conditions (night, fog,
rain, snow), out-of-distribution objects, and annotations for semantic
segmentation, depth estimation, object, and instance detection. MUAD allows us to
better assess the impact of different sources of uncertainty on model
performance. We conduct a thorough experimental study of this impact on several
baseline Deep Neural Networks across multiple tasks, and release our dataset to
allow researchers to benchmark their algorithms methodically in adverse
conditions. More visualizations and the download link for MUAD are available at
https://muad-dataset.github.io/.Comment: Accepted at BMVC 202
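A standard way to turn a segmentation network's output into the kind of per-pixel uncertainty such a benchmark evaluates is predictive entropy. The sketch below is a generic baseline, not MUAD's evaluation code; the array shapes and toy probabilities are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Per-pixel predictive entropy of a semantic-segmentation output.

    probs: (C, H, W) class probabilities, e.g. averaged over an
    ensemble or MC-dropout samples. High entropy flags uncertain
    pixels, such as out-of-distribution objects or fog/rain-degraded
    regions of the kind MUAD contains.
    """
    return -np.sum(probs * np.log(probs + eps), axis=0)

# A confident pixel next to a maximally uncertain one (2 classes).
probs = np.array([[[0.99, 0.5]],
                  [[0.01, 0.5]]])   # shape (C=2, H=1, W=2)
h = predictive_entropy(probs)
print(h)  # near zero for the first pixel, log(2) ~= 0.693 for the second
```

Thresholding such an entropy map gives a simple detector for the adverse-condition and out-of-distribution pixels that MUAD annotates, which is what makes the dataset useful for comparing uncertainty methods.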