
    Context encoding enables machine learning-based quantitative photoacoustics

    Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. While photoacoustic (PA) imaging is a novel modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. In this paper, we introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information about the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context encoding (CE)-qPAI enables highly accurate and robust quantification of the local fluence, and thereby of the optical absorption, from PA images. Comment: under review JB
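    The core idea can be sketched with toy data: voxel-wise feature vectors are regressed to fluence, and absorption then follows from the qPAI relation p0 = Gamma * mu_a * fluence (Grueneisen parameter set to 1 here). The features, the random-forest regressor and all numbers are stand-ins for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_voxels, n_features = 1000, 8
features = rng.random((n_voxels, n_features))  # stand-in context features
fluence = 0.1 + features[:, 0]                 # toy ground-truth fluence
p0 = 0.5 * fluence                             # initial pressure for mu_a = 0.5

# Learn the fluence from the voxel-based feature vectors ...
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(features, fluence)

# ... then deduce the optical absorption from the measured signal.
fluence_hat = model.predict(features)
mu_a_hat = p0 / fluence_hat
print(round(float(np.median(mu_a_hat)), 2))
```

With a well-fit fluence model, the recovered absorption clusters around the true value of 0.5.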

    Hyperspectral Camera Selection for Interventional Health-care

    Hyperspectral imaging (HSI) is an emerging modality in health-care applications for disease diagnosis, tissue assessment and image-guided surgery. Tissue reflectances captured by an HSI camera encode physiological properties including oxygenation and blood volume fraction. Optimal camera properties such as filter responses depend crucially on the application, and choosing a suitable HSI camera for a research project and/or a clinical problem is not straightforward. We propose a generic framework for quantitative and application-specific performance assessment of HSI cameras and optical subsystems without the need for any physical setup. Based on user input about the camera characteristics and the properties of the target domain, our framework quantifies the performance of the given camera configuration using large amounts of simulated data and a user-defined metric. The application of the framework to commercial camera selection and band selection in the context of oxygenation monitoring in interventional health-care demonstrates its integration into the design workflow of an engineer. The advantage of being able to test the desired configuration without the need to purchase expensive components may save system engineers valuable resources.
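    The assessment loop might look roughly as follows: project simulated tissue spectra through a candidate camera's filter-response matrix, recover the functional parameter of interest, and score the configuration with a user-defined metric (here, mean absolute error). All spectra, filters and the linear estimator are hypothetical toy choices, not the framework's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths, n_samples = 100, 500
oxygenation = rng.random(n_samples)                  # target parameter per sample
basis = rng.random((n_wavelengths, 2))               # toy chromophore spectra
spectra = basis @ np.stack([oxygenation, 1.0 - oxygenation])  # (wavelengths, samples)

def score_camera(filter_responses):
    """User-defined metric: MAE of a linear estimator on this camera's bands (lower is better)."""
    bands = filter_responses @ spectra               # simulated camera measurements
    coef, *_ = np.linalg.lstsq(bands.T, oxygenation, rcond=None)
    return float(np.mean(np.abs(bands.T @ coef - oxygenation)))

cam_a = rng.random((3, n_wavelengths))               # 3-band candidate camera
cam_b = rng.random((16, n_wavelengths))              # 16-band candidate camera
print(score_camera(cam_a) <= score_camera(cam_b) + 1e-9)
```

Candidate configurations can then be ranked by the metric without purchasing any hardware.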

    Tract orientation mapping for bundle-specific tractography

    While the major white matter tracts are of great interest to numerous studies in neuroscience and medicine, their manual dissection in larger cohorts from diffusion MRI tractograms is time-consuming, requires expert knowledge and is hard to reproduce. Tract orientation mapping (TOM) is a novel concept that facilitates bundle-specific tractography based on a learned mapping from the original fiber orientation distribution function (fODF) peaks to a list of tract orientation maps (also abbreviated TOM). Each TOM represents one of the known tracts, with each voxel containing no more than one orientation vector. TOMs can act as a prior or even as direct input for tractography. We use an encoder-decoder fully convolutional neural network architecture to learn the required mapping. In comparison to previous concepts for the reconstruction of specific bundles, the presented one avoids various cumbersome processing steps like whole-brain tractography, atlas registration or clustering. We compare it to four state-of-the-art bundle recognition methods on 20 different bundles in a total of 105 subjects from the Human Connectome Project. Results are anatomically convincing even for difficult tracts, while reaching low angular errors, unprecedented runtimes and top accuracy values (Dice). Our code and our data are openly available. Comment: Accepted at MICCAI 201

    Estimation of blood oxygenation with learned spectral decoloring for quantitative photoacoustic imaging (LSD-qPAI)

    One of the main applications of photoacoustic (PA) imaging is the recovery of functional tissue properties, such as blood oxygenation (sO2). This is typically achieved by linear spectral unmixing of relevant chromophores from multispectral photoacoustic images. Despite the progress that has been made towards quantitative PA imaging (qPAI), most sO2 estimation methods yield poor results in realistic settings. In this work, we tackle the challenge by employing learned spectral decoloring for quantitative photoacoustic imaging (LSD-qPAI) to obtain quantitative estimates of blood oxygenation. LSD-qPAI computes sO2 directly from pixel-wise initial pressure spectra Sp0, which are vectors composed of the initial pressure at the same spatial location over all recorded wavelengths. Initial results suggest that LSD-qPAI is able to obtain accurate sO2 estimates directly from multispectral photoacoustic measurements in silico and plausible estimates in vivo. Comment: 5 page
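    The linear spectral unmixing baseline mentioned above can be shown in a few lines: solve p0(lambda) = eps_HbO2(lambda)*c_HbO2 + eps_Hb(lambda)*c_Hb by least squares over the recorded wavelengths, then take sO2 = c_HbO2 / (c_HbO2 + c_Hb). The extinction values here are toy numbers, not literature coefficients, and this is the baseline LSD-qPAI aims to improve on, not the learned method itself.

```python
import numpy as np

# Rows = wavelengths, columns = [eps_HbO2, eps_Hb] (toy values).
eps = np.array([[2.0, 6.0],
                [4.0, 3.0],
                [7.0, 1.5]])
c_true = np.array([0.8, 0.2])      # true concentrations -> sO2 = 0.8
p0_spectrum = eps @ c_true         # pixel-wise initial pressure spectrum

# Unmix by least squares and derive oxygenation from the concentrations.
c_hat, *_ = np.linalg.lstsq(eps, p0_spectrum, rcond=None)
so2 = c_hat[0] / c_hat.sum()
print(round(float(so2), 2))  # → 0.8
```

In realistic settings the wavelength-dependent fluence distorts p0, which is exactly why such linear unmixing degrades and a learned decoloring is proposed.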

    Towards whole-body CT Bone Segmentation

    Bone segmentation from CT images is a task that has been worked on for decades. It is an important ingredient in several diagnostic and treatment-planning approaches and is relevant to various diseases. As high-quality manual and semi-automatic bone segmentation is very time-consuming, a reliable and fully automatic approach would be of great interest in many scenarios. In this publication, we propose a U-Net-inspired architecture to address the task using deep learning. We evaluated the approach on whole-body CT scans of patients suffering from multiple myeloma. As the disease decomposes the bone, an accurate segmentation is of utmost importance for the evaluation of bone density, disease staging and localization of focal lesions. The method was evaluated on an in-house data set of 6000 2D image slices taken from 15 whole-body CT scans, achieving a Dice score of 0.96 and an IoU of 0.94. Comment: Accepted conference paper at BVM 201
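    For reference, the two reported overlap metrics, Dice score and IoU, are computed as below (shown on toy binary masks, not the paper's data):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
print(round(dice(pred, gt), 2), round(iou(pred, gt), 2))  # → 0.67 0.5
```

Dice is always at least as large as IoU on the same masks, which is worth remembering when comparing the 0.96 / 0.94 figures.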

    Direct White Matter Bundle Segmentation using Stacked U-Nets

    The state-of-the-art method for automatically segmenting white matter bundles in diffusion-weighted MRI is tractography in conjunction with streamline cluster selection. This process involves long chains of processing steps which are not only computationally expensive but also complex to set up and tedious with respect to quality control. Direct bundle segmentation methods treat the task as a traditional image segmentation problem. While they have so far not delivered competitive results, they can potentially mitigate many of the mentioned issues. We present a novel supervised approach for direct tract segmentation that shows major performance gains. It builds upon a stacked U-Net architecture which is trained on manual bundle segmentations from Human Connectome Project subjects. We evaluate our approach in vivo as well as in silico using the ISMRM 2015 Tractography Challenge phantom dataset. We achieve human segmentation performance and a major performance gain over previous pipelines. We show how the learned spatial priors efficiently guide the segmentation even at lower image qualities with little quality loss.

    Task Fingerprinting for Meta Learning in Biomedical Image Analysis

    Shortage of annotated data is one of the greatest bottlenecks in biomedical image analysis. Meta learning studies how learning systems can increase in efficiency through experience and could thus evolve into an important concept to overcome data sparsity. However, the core capability of meta learning-based approaches is the identification of similar previous tasks given a new task - a challenge largely unexplored in the biomedical imaging domain. In this paper, we address the problem of quantifying task similarity with a concept that we refer to as task fingerprinting. The concept involves converting a given task, represented by imaging data and corresponding labels, into a fixed-length vector representation. In fingerprint space, different tasks can be directly compared irrespective of their data set sizes, types of labels or specific resolutions. An initial feasibility study in the field of surgical data science (SDS) with 26 classification tasks from various medical and non-medical domains suggests that task fingerprinting could be leveraged for both (1) selecting appropriate data sets for pretraining and (2) selecting appropriate architectures for a new task. Task fingerprinting could thus become an important tool for meta learning in SDS and other fields of biomedical image analysis. Comment: Medical Image Computing and Computer Assisted Interventions (MICCAI) 202
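    Once tasks live in a shared fingerprint space, the retrieval step is straightforward: rank previous tasks by similarity to the new task's fingerprint. The sketch below uses random vectors as stand-in fingerprints and cosine similarity as the comparison; the actual fingerprint extraction and similarity measure of the paper are not modeled.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two fingerprint vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
# Library of fingerprints from previously seen tasks (stand-in vectors).
library = {f"task_{i}": rng.normal(size=64) for i in range(5)}
# A new task whose fingerprint nearly matches task_3.
new_task = library["task_3"] + 0.01 * rng.normal(size=64)

ranked = sorted(library, key=lambda k: cosine(library[k], new_task), reverse=True)
print(ranked[0])  # → task_3
```

The top-ranked previous task would then be the candidate source for pretraining data or an architecture choice.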

    No New-Net

    In this paper we demonstrate the effectiveness of a well-trained U-Net in the context of the BraTS 2018 challenge. This endeavour is particularly interesting given that researchers are currently besting each other with architectural modifications intended to improve segmentation performance. We instead focus on the training process, arguing that a well-trained U-Net is hard to beat. Our baseline U-Net, which has only minor modifications and is trained with a large patch size and a Dice loss function, indeed achieved competitive Dice scores on the BraTS 2018 validation data. By incorporating additional measures such as region-based training, additional training data, a simple postprocessing technique and a combination of loss functions, we obtain Dice scores of 77.88, 87.81 and 80.62, and Hausdorff distances (95th percentile) of 2.90, 6.03 and 5.08 for the enhancing tumor, whole tumor and tumor core, respectively, on the test data. This setup achieved rank two in BraTS 2018, with more than 60 teams participating in the challenge.
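    A minimal numpy stand-in for the soft Dice loss used in training (the actual framework implementation and the choice of smoothing term differ in practice):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice; pred holds predicted foreground probabilities."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

target = np.array([1.0, 1.0, 0.0, 0.0])
print(round(soft_dice_loss(target, target), 4))        # perfect prediction → 0.0
print(round(soft_dice_loss(1.0 - target, target), 4))  # worst case → 1.0
```

Because the loss is computed over the whole overlap rather than per pixel, it directly counteracts the foreground/background class imbalance typical of tumor segmentation.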

    Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge

    Quantitative analysis of brain tumors is critical for clinical decision making. While manual segmentation is tedious, time-consuming and subjective, the task is at the same time very challenging for automatic segmentation methods. In this paper we present our most recent effort on developing a robust segmentation algorithm in the form of a convolutional neural network. Our network architecture was inspired by the popular U-Net and has been carefully modified to maximize brain tumor segmentation performance. We use a Dice loss function to cope with class imbalances and extensive data augmentation to successfully prevent overfitting. Our method beats the current state of the art on BraTS 2015, is one of the leading methods on the BraTS 2017 validation set (Dice scores of 0.896, 0.797 and 0.732 for whole tumor, tumor core and enhancing tumor, respectively) and achieves very good Dice scores on the test set (0.858 for whole, 0.775 for core and 0.647 for enhancing tumor). We furthermore take part in the survival prediction subchallenge by training an ensemble of a random forest regressor and multilayer perceptrons on shape features describing the tumor subregions. Our approach achieves an accuracy of 52.6%, a Spearman correlation coefficient of 0.496 and a mean squared error of 209607 on the test set.
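    The survival-prediction ensemble can be sketched as averaging a random forest and a multilayer perceptron fitted to shape features; the features, targets and hyperparameters below are toy stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))                                 # stand-in shape features
y = X @ np.array([3.0, -1.0, 2.0, 0.5, 1.0]) + 100.0     # toy survival target (days)

# Fit both regressors on the same features ...
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X, y)

# ... and average their predictions to form the ensemble output.
y_hat = 0.5 * (rf.predict(X) + mlp.predict(X))
print(y_hat.shape)
```

Averaging a tree ensemble with a neural regressor is a common way to hedge between the two model families' error modes.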

    OR-UNet: an Optimized Robust Residual U-Net for Instrument Segmentation in Endoscopic Images

    Segmentation of endoscopic images is an essential processing step for computer- and robot-assisted interventions. The Robust-MIS challenge provides the largest dataset of annotated endoscopic images to date, with 5983 manually annotated images. Here we describe OR-UNet, our optimized robust residual 2D U-Net for endoscopic image segmentation. As the name implies, the network makes use of residual connections in the encoder. It is trained with the sum of Dice and cross-entropy loss and with deep supervision. During training, extensive data augmentation is used to increase robustness. In an 8-fold cross-validation on the training images, our model achieved a mean (median) Dice score of 87.41 (94.35). We use the eight models from the cross-validation as an ensemble on the test set.
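    The combined training objective, the sum of soft Dice and cross-entropy, can be sketched on toy per-pixel probabilities as follows (a numpy illustration, not the actual deep-supervision training code):

```python
import numpy as np

def dice_ce_loss(prob, target, eps=1e-6):
    """Sum of soft Dice loss and binary cross-entropy over flattened pixels."""
    dice = 1.0 - (2.0 * (prob * target).sum() + eps) / (prob.sum() + target.sum() + eps)
    ce = -np.mean(target * np.log(prob + eps)
                  + (1.0 - target) * np.log(1.0 - prob + eps))
    return dice + ce

prob = np.array([0.9, 0.8, 0.1, 0.2])    # predicted instrument probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # binary instrument mask
print(round(dice_ce_loss(prob, target), 3))  # → 0.314
```

Combining the two terms is a popular compromise: the Dice term handles class imbalance while the cross-entropy term provides smooth per-pixel gradients.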