98 research outputs found

    A Computer-Assisted Colorization Algorithm based on Topological Difference

    Get PDF
    Region-based approaches to cel painting typically use shape similarity and topology relations between regions of consecutive animation frames. This paper presents a new colorization algorithm based on topological differences defined over a hierarchical graph of adjacent regions, which allows an almost fully automatic colorization process. The paper also discusses other attributes that improve the solution of the image association problem.
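    A minimal sketch of the kind of region-graph matching the abstract describes: each frame is assumed to be already segmented into labeled regions, a region adjacency graph is built per frame, and regions of consecutive frames are paired by a simple (adjacency degree, relative area) signature. This simplified criterion stands in for the paper's hierarchical topological difference; all names are illustrative.

        import numpy as np

        def adjacency(labels):
            """Return {region: set of neighbouring regions} from a labeled image."""
            adj = {int(r): set() for r in np.unique(labels)}
            right = labels[:, :-1] != labels[:, 1:]
            down = labels[:-1, :] != labels[1:, :]
            for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
                adj[int(a)].add(int(b)); adj[int(b)].add(int(a))
            for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
                adj[int(a)].add(int(b)); adj[int(b)].add(int(a))
            return adj

        def match_regions(prev_labels, next_labels):
            """Greedily pair regions of two frames by (degree, relative area) signature."""
            def signature(labels):
                adj = adjacency(labels)
                areas = {int(r): int((labels == r).sum()) for r in np.unique(labels)}
                return {r: (len(adj[r]), areas[r] / labels.size) for r in adj}
            sig_prev, sig_next = signature(prev_labels), signature(next_labels)
            pairs, used = {}, set()
            for r, (deg, area) in sig_prev.items():
                best = min((c for c in sig_next if c not in used),
                           key=lambda c: abs(sig_next[c][0] - deg) + abs(sig_next[c][1] - area),
                           default=None)
                if best is not None:
                    pairs[r] = best
                    used.add(best)
            return pairs  # colours of prev_labels regions can then be propagated to next_labels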

    Doctor of Philosophy

    Get PDF
    Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent staining. The technique also resolves finely detailed structures in 3D, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is a demand from biologists for a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone mapping and overlays. To facilitate analyses of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior in Graphics Processing Unit (GPU) framebuffer loops and generates random colorizations for different structures in single-channel confocal data. The results from our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
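    A minimal sketch of the two-dimensional transfer function idea mentioned above: opacity is looked up from (intensity, gradient magnitude) rather than intensity alone, which helps emphasise boundary structures in confocal volumes. The box-shaped lookup and parameter names are illustrative assumptions, not FluoRender's actual implementation.

        import numpy as np

        def gradient_magnitude(volume):
            gx, gy, gz = np.gradient(volume.astype(np.float32))
            return np.sqrt(gx**2 + gy**2 + gz**2)

        def apply_2d_transfer_function(volume, lo=0.2, hi=0.9, grad_min=0.05):
            """Return per-voxel opacity: high where intensity lies in [lo, hi] AND the
            local gradient magnitude exceeds grad_min (a boundary-emphasising lookup)."""
            v = volume.astype(np.float32)
            v = (v - v.min()) / (v.max() - v.min() + 1e-8)     # normalise intensity
            g = gradient_magnitude(v)
            g = g / (g.max() + 1e-8)                           # normalise gradient magnitude
            return np.where((v >= lo) & (v <= hi) & (g >= grad_min), v, 0.0)

        # Each channel of a multichannel confocal stack would get its own transfer
        # function; the per-channel colours are then intermixed during compositing.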

    Divide-and-Rule: Self-Supervised Learning for Survival Analysis in Colorectal Cancer

    Full text link
    With the long-term rise in the incidence of colorectal cancer (CRC), there is an urgent clinical need to improve risk stratification. The conventional pathology report is usually limited to only a few histopathological features, while most of the tumor microenvironment features used to describe patterns of aggressive tumor behavior are ignored. In this work, we aim to learn histopathological patterns within cancerous tissue regions that can be used to improve prognostic stratification for colorectal cancer. To do so, we propose a self-supervised learning method that jointly learns a representation of tissue regions and a clustering metric to obtain their underlying patterns. These histopathological patterns are then used to represent the interaction between complex tissues and to predict clinical outcomes directly. We furthermore show that the proposed approach can benefit from linear predictors to avoid overfitting in patient outcome predictions. To this end, we introduce a new well-characterized clinicopathological dataset, comprising a retrospective cohort of 374 patients with their survival time and treatment information. Histomorphological clusters obtained by our method are evaluated by training survival models. The experimental results demonstrate statistically significant patient stratification, and our approach outperforms state-of-the-art deep clustering methods.
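    A minimal sketch of the downstream evaluation step described above: tissue-region embeddings (assumed to be precomputed, e.g. by the self-supervised encoder) are clustered, each patient is represented by the fraction of regions falling in each cluster, and a Cox proportional-hazards model is fitted on the survival data. The use of plain k-means and all variable names are illustrative assumptions, not the authors' exact pipeline.

        import numpy as np
        import pandas as pd
        from sklearn.cluster import KMeans
        from lifelines import CoxPHFitter

        def patient_cluster_histograms(embeddings, patient_ids, n_clusters=8, seed=0):
            """Cluster region embeddings and return one normalised histogram per patient."""
            labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(embeddings)
            rows = {}
            for pid, lab in zip(patient_ids, labels):
                rows.setdefault(pid, np.zeros(n_clusters))[lab] += 1
            return pd.DataFrame({pid: h / h.sum() for pid, h in rows.items()}).T

        def fit_survival_model(histograms, survival_time, event_observed):
            """survival_time / event_observed: pandas Series indexed by the same patient ids."""
            df = histograms.copy()
            df.columns = [f"cluster_{c}" for c in df.columns]
            df["time"] = survival_time
            df["event"] = event_observed
            cph = CoxPHFitter(penalizer=0.1)
            cph.fit(df, duration_col="time", event_col="event")
            return cph  # cph.print_summary() reports a hazard ratio per cluster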

    An evaluation of partial differential equations based digital inpainting algorithms

    Get PDF
    Partial differential equations (PDEs) have been used to model various phenomena and tasks in different scientific and engineering endeavours. This thesis is devoted to modelling image inpainting by numerical implementations of certain PDEs. The main objectives of image inpainting are to reconstruct damaged parts and to fill in regions in which data/colour information is missing. Different automatic and semi-automatic approaches to image inpainting have been developed, including PDE-based, texture synthesis-based, exemplar-based, and hybrid approaches. Various challenges remain unresolved in reconstructing large missing regions and/or missing areas with highly textured surroundings. Our main aim is to address such challenges by developing new advanced schemes, with particular focus on using PDEs of different orders to preserve the continuity of textural and geometric information in the surroundings of missing regions.

    We first investigated the problem of partial colour restoration in an image region whose greyscale channel is intact. A PDE-based solution is known that is modelled as minimising the total variation of gradients in the different colour channels. We extend the applicability of this model to partial inpainting in other 3-channel colour spaces (such as RGB, where information is missing in two of the colour channels), simply by exploiting the known linear/affine relationships between different colour models in the derivation of a modified PDE solution, obtained by the Euler-Lagrange minimisation of the corresponding gradient Total Variation (TV). We also developed two TV models on the relations between the greyscale and colour channels, using the Laplacian operator and the directional derivatives of gradients. The corresponding Euler-Lagrange minimisation yields two new PDEs of different orders for partial colourisation. We implemented these solutions in both the spatial and frequency domains, and we measure the success of these models by evaluating known image quality measures in inpainted regions for sufficiently large datasets and scenarios. The results reveal that our schemes compare well with existing algorithms, but inpainting large regions remains a challenge.

    Secondly, we investigate the Total Inpainting (TI) problem, where all colour channels are missing in an image region. Reviewing and implementing existing PDE-based total inpainting methods reveals that high-order PDEs, applied to each colour channel separately, perform well but are influenced by the size of the region and the quantity of texture surrounding it. Here we developed a TI scheme that benefits from our partial inpainting approach and applies two PDE methods to recover the missing regions in the image: we extract the (Y, Cb, Cr) representation of the image outside the missing region, apply the above PDE methods to reconstruct the missing region in the luminance channel (Y), and then use the colourisation method to recover the missing (Cb, Cr) colours in the region. We shall demonstrate that, compared to existing TI algorithms, our proposed method (using two PDE methods) performs well when tested on large datasets of natural and face images. Furthermore, this helps in understanding the impact of the texture in the surrounding areas on inpainting and opens new research directions.
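    A minimal sketch of PDE-based inpainting in the spirit described above: pixel values inside the missing region are evolved by a discrete diffusion (heat-equation) update until they agree with the surrounding data. The full TV and high-order schemes replace this Laplacian with the Euler-Lagrange operator of the chosen functional; this simplified version is for illustration only.

        import numpy as np

        def diffusion_inpaint(image, mask, n_iter=2000, dt=0.2):
            """image: 2D float array; mask: boolean array, True where data is missing."""
            u = image.astype(np.float32).copy()
            u[mask] = u[~mask].mean()                  # crude initialisation of the hole
            for _ in range(n_iter):
                lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
                u[mask] += dt * lap[mask]              # update only inside the hole
            return u

        # For colour images the same update is applied per channel. In the partial
        # colourisation case described above, only the chroma channels would be
        # evolved, with the intact luminance channel entering through the (omitted)
        # coupling terms of the full model.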
    Thirdly, we investigate existing Exemplar-Based Inpainting (EBI) methods that do not use PDEs but simultaneously propagate texture and structure into the missing region by finding similar patches within the rest of the image and copying them onto the boundary of the missing region. The order of patch propagation is determined by a priority function, and the similarity is determined by matching criteria. We shall exploit recently emerging Topological Data Analysis (TDA) tools to create innovative EBI schemes, referred to as TEBI. TDA studies the shapes of data/objects to quantify image texture in terms of connectivity and closeness properties of certain data landmarks. Such quantifications help determine the appropriate size of patch propagation, and will be used to modify the patch propagation priority function using the geometrical properties of the curvature of isophotes, and to improve the patch matching criteria by calculating correlation coefficients in the spatial, gradient and Laplacian domains. The performance of this TEBI method will be tested on natural dataset images, resulting in improved inpainting when compared with other EBI methods.

    Fourthly, recent hybrid inpainting techniques are reviewed, and a number of highly performing innovative hybrid techniques that combine high-order PDE methods with the TEBI method for the simultaneous rebuilding of missing texture and structure regions in an image are proposed. Such a hybrid scheme first decomposes the image into texture and structure components; the missing regions in these components are then recovered by TEBI and PDE-based methods respectively. The performance of our hybrid schemes will be compared with two existing hybrid algorithms.

    Fifthly, we turn our attention to inpainting large missing regions and develop an innovative inpainting scheme that uses the concept of seam carving to reduce this problem to that of inpainting a smaller missing region, which can be dealt with efficiently using the inpainting schemes developed above. Seam carving resizes images based on content awareness, for both reduction and expansion, without affecting image regions that carry rich information. The missing region of the seam-carved version is recovered by the TEBI method, the original image size is restored by adding back the removed seams, and the missing parts of the added seams are then repaired using a high-order PDE inpainting scheme. The benefits of this approach in dealing with large missing regions are demonstrated.

    The extensive performance testing of the developed inpainting methods shows that they significantly outperform existing inpainting methods on such challenging tasks. However, the performance is still not acceptable when recovering large missing regions in images with high texture and structure, and hence we identify remaining challenges to be investigated in the future. We shall also extend our work by investigating recently developed deep learning based image/video colourisation, with the aim of overcoming its limitations and shortcomings. Finally, we describe our ongoing research into using TDA to detect the recently growing and serious malicious use of inpainting to create fake images/videos.
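    A minimal sketch of the seam-carving step used in the fifth contribution: a vertical seam of minimal gradient energy is found by dynamic programming and removed, shrinking the image (and hence the missing region) before inpainting. The full scheme also records the removed seams so they can be re-inserted and repaired afterwards; that bookkeeping is omitted here and all function names are illustrative.

        import numpy as np

        def energy(gray):
            gx = np.abs(np.gradient(gray, axis=1))
            gy = np.abs(np.gradient(gray, axis=0))
            return gx + gy

        def find_vertical_seam(gray):
            """Return one column index per row forming the minimal-energy vertical seam."""
            e = energy(gray.astype(np.float32))
            h, w = e.shape
            cost = e.copy()
            for i in range(1, h):
                left = np.r_[np.inf, cost[i - 1, :-1]]
                right = np.r_[cost[i - 1, 1:], np.inf]
                cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
            seam = np.empty(h, dtype=int)
            seam[-1] = int(np.argmin(cost[-1]))
            for i in range(h - 2, -1, -1):
                j = seam[i + 1]
                lo, hi = max(j - 1, 0), min(j + 2, w)
                seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
            return seam

        def remove_vertical_seam(img, seam):
            """Delete one pixel per row along the seam (works for 2D or HxWxC images)."""
            h, w = img.shape[:2]
            keep = np.ones((h, w), dtype=bool)
            keep[np.arange(h), seam] = False
            return img[keep].reshape(h, w - 1, *img.shape[2:])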

    A Comprehensive Overview of Computational Nuclei Segmentation Methods in Digital Pathology

    Full text link
    In the cancer diagnosis pipeline, digital pathology plays an instrumental role in the identification, staging, and grading of malignant areas on biopsy tissue specimens. High-resolution histology images are subject to high variance in appearance, arising either from the acquisition devices or from the H&E staining process. Nuclei segmentation is an important task, as it detects nuclei against the background tissue and gives rise to the topology, size, and count of nuclei, which are determinant factors for cancer detection. Yet it is a fairly time-consuming task for pathologists, with reportedly high subjectivity. Computer Aided Diagnosis (CAD) tools empowered by modern Artificial Intelligence (AI) models enable the automation of nuclei segmentation, which can reduce the subjectivity in analysis and the reading time. This paper provides an extensive review, beginning from earlier works that use traditional image processing techniques and reaching up to modern approaches following the Deep Learning (DL) paradigm. Our review also focuses on the weak supervision aspect of the problem, motivated by the fact that annotated data is scarce. At the end, the advantages of different models and types of supervision are thoroughly discussed. Furthermore, we try to extrapolate and envision how future research lines will potentially evolve, so as to minimize the need for labeled data while maintaining high performance. Future methods should emphasize efficient and explainable models with a transparent underlying process so that physicians can trust their output.
    Comment: 47 pages, 27 figures, 9 tables
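    A minimal sketch of the classical, non-learning pipeline that the review's earlier works build on: Otsu thresholding to separate nuclei from background, followed by a distance-transform watershed to split touching nuclei. Parameter values are illustrative; real H&E pipelines also add stain normalisation and morphology-based cleanup.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import filters, measure, segmentation
        from skimage.feature import peak_local_max

        def segment_nuclei(gray):
            """gray: 2D float image with dark nuclei on a lighter tissue background."""
            mask = gray < filters.threshold_otsu(gray)       # nuclei are darker than tissue
            distance = ndi.distance_transform_edt(mask)
            peaks = peak_local_max(distance, min_distance=7, labels=mask)  # one marker per nucleus
            markers = np.zeros_like(mask, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            labels = segmentation.watershed(-distance, markers, mask=mask)
            return labels, measure.regionprops(labels)       # instance map + per-nucleus stats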

    Learning the link between Albedo and reflectance: Machine learning-based prediction of hyperspectral bands from CTX images

    Get PDF
    The instruments of the Mars Reconnaissance Orbiter (MRO) provide a large quantity and variety of imaging data for investigations of the Martian surface. Among others, the hyperspectral Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) captures visible to infrared reflectance across several hundred spectral bands. However, Mars is only partially covered by targeted CRISM observations at full spectral and spatial resolution; in fact, less than one percent of the Martian surface is imaged in this way. In contrast, the Context Camera (CTX) onboard the MRO delivers images with a higher spatial resolution, and its image data cover almost the entire Martian surface. In this work, we examine to what extent machine learning systems can learn the relation between morphology, albedo and spectral composition. To this end, a dataset of 67 CRISM-CTX image pairs is created, and different deep neural networks are trained for the pixel-wise prediction of CRISM bands solely based on the albedo information of a CTX image. The trained models enable us to estimate spectral bands across large areas without existing CRISM data and to predict the spectral composition of any CTX image. The predictions are qualitatively similar to the ground-truth spectra and are also able to recover finer-grained details, such as dunes or small craters.
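    A minimal sketch of the pixel-wise regression set-up described above: a small fully convolutional network maps a single-channel CTX albedo patch to a stack of predicted CRISM-like spectral bands. The architecture, patch size and band count are illustrative assumptions, not the authors' exact model.

        import torch
        import torch.nn as nn

        class AlbedoToSpectra(nn.Module):
            def __init__(self, n_bands=60):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, n_bands, 1),       # per-pixel spectrum, one channel per band
                )

            def forward(self, ctx_patch):            # (B, 1, H, W) albedo in [0, 1]
                return self.net(ctx_patch)           # (B, n_bands, H, W) predicted reflectance

        # Training would minimise a per-pixel loss against co-registered CRISM cubes:
        model = AlbedoToSpectra()
        loss_fn = nn.MSELoss()
        pred = model(torch.rand(2, 1, 128, 128))
        loss = loss_fn(pred, torch.rand(2, 60, 128, 128))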

    Learning to segment fetal brain tissue from noisy annotations

    Full text link
    Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage. Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation. However, effective training of a deep learning model for this task requires a large number of training images to represent the rapid development of the transient fetal brain structures. On the other hand, manual multi-label segmentation of a large number of 3D images is prohibitive. To address this challenge, we segmented 272 training images, covering 19-39 gestational weeks, using an automatic multi-atlas segmentation strategy based on deformable registration and probabilistic atlas fusion, and manually corrected large errors in those segmentations. Since this process generated a large training dataset with noisy segmentations, we developed a novel label smoothing procedure and a loss function to train a deep learning model with smoothed noisy segmentations. Our proposed methods properly account for the uncertainty in tissue boundaries. We evaluated our method on 23 manually segmented test images from a separate set of fetuses. Results show that our method achieves an average Dice similarity coefficient of 0.893 and 0.916 for the transient structures of younger and older fetuses, respectively. Our method generated results that were significantly more accurate than several state-of-the-art methods, including nnU-Net, which achieved the closest results to our method. Our trained model can serve as a valuable tool to enhance the accuracy and reproducibility of fetal brain analysis in MRI.
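    A minimal sketch, in PyTorch, of the two ingredients described above: (i) soft targets obtained by smoothing a one-hot noisy segmentation, so that voxels near tissue boundaries carry less confident labels, and (ii) a soft Dice loss trained against those targets. The plain Gaussian smoothing stands in for the paper's boundary-aware procedure and is an illustrative simplification.

        import torch
        import torch.nn.functional as F

        def smooth_labels(one_hot, kernel_size=5, sigma=1.0, floor=0.05):
            """one_hot: (B, C, D, H, W) hard noisy labels -> softened targets."""
            # Build a separable 3D Gaussian kernel and blur each class channel.
            coords = torch.arange(kernel_size) - kernel_size // 2
            g = torch.exp(-coords.float() ** 2 / (2 * sigma ** 2))
            g = g / g.sum()
            kernel = g[:, None, None] * g[None, :, None] * g[None, None, :]
            kernel = kernel.expand(one_hot.shape[1], 1, *kernel.shape)
            soft = F.conv3d(one_hot.float(), kernel, padding=kernel_size // 2,
                            groups=one_hot.shape[1])
            soft = soft.clamp(min=floor)                   # keep a small floor everywhere
            return soft / soft.sum(dim=1, keepdim=True)    # renormalise to a distribution

        def soft_dice_loss(logits, soft_targets, eps=1e-6):
            probs = torch.softmax(logits, dim=1)
            dims = (0, 2, 3, 4)
            intersection = (probs * soft_targets).sum(dims)
            denom = probs.sum(dims) + soft_targets.sum(dims)
            return 1.0 - (2 * intersection + eps) / (denom + eps)  # per-class; .mean() to reduce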
    • …