
    Automatic quantification of tumour hypoxia from multi-modal microscopy images using weakly-supervised learning methods

    In recently published clinical trial results, hypoxia-modified therapies have been shown to provide better outcomes for cancer patients than standard cancer treatments. The development and validation of these hypoxia-modified therapies depend on an effective way of measuring tumor hypoxia, but a standardized measurement is currently unavailable in clinical practice. Different types of manual measurements have been proposed in clinical research, but in this paper we focus on a recently published approach that quantifies the number and proportion of hypoxic regions using high-resolution (immuno-)fluorescence (IF) and hematoxylin and eosin (HE) stained images of a histological specimen of a tumor. We introduce new machine learning-based methodologies to automate this measurement, where the main challenge is that the clinical annotations available for training consist only of the total number of normoxic, chronically hypoxic, and acutely hypoxic regions, without any indication of their location in the image. This therefore represents a weakly-supervised structured output classification problem, where training is based on a high-order loss function formed by the norm of the difference between the manual and estimated annotations mentioned above. We propose four methodologies to solve this problem: 1) a naive method that uses a majority classifier applied to the nodes of a fixed grid placed over the input images; 2) a baseline method based on a structured output learning formulation that relies on a fixed grid placed over the input images; 3) an extension of this baseline based on a latent structured output learning formulation that uses a graph that is flexible in terms of the number and positions of nodes; and 4) a pixel-wise labeling based on a fully convolutional neural network. Using a data set of 89 weakly annotated pairs of IF and HE images from eight tumors, we show that the quantitative results of methods (3) and (4) are equally competitive and superior to those of the naive (1) and baseline (2) methods. All proposed methodologies show high correlation with the clinical annotations.
    Gustavo Carneiro, Tingying Peng, Christine Bayer, and Nassir Navab
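
    As a rough illustration of the weak-supervision idea described above, the following is a minimal sketch (assuming a PyTorch-style pixel-wise classifier, e.g. the fully convolutional variant) of a count-based loss that compares estimated and manually annotated region counts per class. The function name, the soft-counting step via an assumed average region size, and the choice of the L2 norm are illustrative assumptions, not the authors' exact high-order formulation.

    ```python
    # Sketch of a count-based weak-supervision loss (illustrative, not the paper's exact loss).
    import torch

    def weak_count_loss(pixel_logits, target_counts, region_size):
        """Penalize the norm of the difference between estimated and manual region counts.

        pixel_logits:  (B, C, H, W) raw class scores per pixel
                       (C = normoxic, chronically hypoxic, acutely hypoxic).
        target_counts: (B, C) weakly annotated number of regions per class.
        region_size:   assumed average region area in pixels, used to turn
                       soft pixel assignments into an approximate region count.
        """
        probs = torch.softmax(pixel_logits, dim=1)      # per-pixel class probabilities
        soft_area = probs.sum(dim=(2, 3))               # expected number of pixels per class
        est_counts = soft_area / region_size            # differentiable surrogate for region counts
        # Norm of the difference between estimated and manual annotations (here: L2).
        return torch.linalg.vector_norm(est_counts - target_counts, ord=2, dim=1).mean()
    ```

    In this sketch the region count is approximated from the expected labeled area, which keeps the loss differentiable; the structured output and latent-graph methods in the paper instead reason over grid or graph nodes.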

    Creating a platform for the democratisation of Deep Learning in microscopy

    One of the major technological success stories of the last decade has been the advent of deep learning (DL), which has touched almost every aspect of modern life since a breakthrough performance in an image recognition challenge in 2012. The bioimaging community quickly recognised the prospect of automatically making sense of image data with near-human performance as potentially ground-breaking. In the decade since, hundreds of publications have used this technology to tackle many problems in image analysis, such as labelling or counting cells, identifying cells or organelles of interest in large image datasets, and removing noise or improving the resolution of images. However, the adoption of DL tools in large parts of the bioimaging community has been slow, and many tools have remained in the hands of developers. In this project, I have identified key barriers that have prevented many bioimage analysts and microscopists from accessing existing DL technology in their field and have, in collaboration with colleagues, developed the ZeroCostDL4Mic platform, which aims to address these barriers. This project is inspired by the observation that technology has its greatest impact in science when it becomes ubiquitous, that is, when its use becomes essential to address the community’s questions. This work represents one of the first attempts to make DL tools accessible for bioimage analysis in a transparent, code-free, and affordable manner, unlocking the full potential of DL through its democratisation for the bioimaging community.