
    Hierarchical super-regions and their applications to biological volume segmentation

    Advances in biological imaging technology have made it possible to image sub-cellular samples at unprecedented resolution. Using tomographic reconstruction, biological researchers can now obtain volumetric reconstructions of whole cells in a near-native state with cryo-soft X-ray tomography, or of even smaller sub-cellular regions with cryo-electron tomography. These technologies allow visualisation, exploration and analysis of very exciting biological samples; however, they do not come without challenges. Poor signal-to-noise ratio, low contrast and other sample-preparation and reconstruction artefacts make these 3D datasets a great challenge for the image processing and computer vision communities. With no previously available annotations (the biological relevance of the datasets means they are not publicly available) and scarce prior research in the field, (semi-)automatic segmentation of these datasets tends to fail. In order to bring the state of the art in computer vision closer to the biological community and overcome these difficulties, we build towards a semi-automatic segmentation framework. To do so, we will first introduce superpixels: groups of adjacent pixels that share similar characteristics, reducing whole images to a few regions that still preserve the important information of the image. Superpixels have been used in the recent literature to speed up object detection, tracking and scene parsing systems. The reduced representation of the image with a few regions allows faster processing in the subsequent algorithms applied over them. Two novel superpixel algorithms will be presented, introducing with them what we call a Super-Region Hierarchy: similar regions agglomerated hierarchically. We will show that exploiting this hierarchy in both directions (bottom-up and top-down) helps improve the quality of the superpixels and generalise them to images of large dimensionality. Superpixels will then be extended to 3D (as supervoxels), resulting in variants of the two new algorithms ready to be applied to large biological volumes. We will show that representing biological volumes with supervoxels not only dramatically reduces the computational complexity of the analysis (as billions of voxels can be accurately represented with a few thousand supervoxels), but also improves the accuracy of the analysis itself, by reducing the effect of noisy local neighbourhoods when grouping voxel features within supervoxels. These regions are only as powerful as the features that represent them, and thus an in-depth discussion of biological features and grouping methods will lead the way to our first interactive segmentation model, which gathers contextual information from super-regions and hierarchical segmentation layers to allow segmentation of large regions of the volume with little user input (in the form of annotations or scribbles). To further improve the interactive segmentation model, a novel algorithm will be presented to extract the most representative (or relevant) sub-volumes from a 3D dataset, since the lack of training data is one of the main reasons automatic approaches fail. We will show that by serving small sub-volumes to the user to be segmented and applying Active Learning to select the next best sub-volume, the number of user interactions needed to completely segment a 3D volume is dramatically reduced.
    A novel classifier based on Random Forests will be presented to better exploit these regions of known shape. To finish, SuRVoS will be introduced: a novel, fully functional and publicly available workbench based on the work presented here. It is a software tool that brings most of the ideas, problem formulations and algorithms together in a single user interface, allowing a user to interactively segment arbitrary volumetric datasets in an intuitive and easy-to-use manner. We have then covered the topics from data representation to segmentation of biological volumes, and provide a software tool that will hopefully help close the gap between biological imaging and computer vision, allowing annotations (or ground truth, as it is known in computer vision) to be generated much more quickly, with the aim of gathering a large biological segmentation database for use in future large-scale, fully automatic projects.
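    As a minimal, hedged illustration of the supervoxel idea described above (not the thesis's novel algorithms), the sketch below uses scikit-image's SLIC on a 3D volume and pools simple per-supervoxel intensity statistics, reducing millions of voxels to a few thousand feature vectors. Array names and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the thesis algorithms): reduce a 3D volume to
# supervoxels with scikit-image's SLIC, then pool per-supervoxel intensity
# statistics so later classifiers work on a few thousand regions instead
# of millions of voxels.
import numpy as np
from skimage.segmentation import slic

def supervoxel_features(volume, n_supervoxels=2000, compactness=0.1):
    """volume: 3D float array (e.g. a tomographic reconstruction)."""
    # SLIC on a single-channel 3D volume; labels are 0..K-1.
    labels = slic(volume, n_segments=n_supervoxels, compactness=compactness,
                  channel_axis=None, start_label=0)
    flat_lab = labels.ravel()
    flat_int = volume.ravel()
    counts = np.bincount(flat_lab)
    # Per-supervoxel mean and variance of intensity as toy descriptors.
    mean = np.bincount(flat_lab, weights=flat_int) / counts
    var = np.bincount(flat_lab, weights=flat_int ** 2) / counts - mean ** 2
    feats = np.stack([mean, var], axis=1)          # shape (K, 2)
    return labels, feats

# Example: a noisy synthetic volume stands in for real cryo-SXT data.
vol = np.random.rand(64, 64, 64).astype(np.float32)
labels, feats = supervoxel_features(vol)
print(labels.max() + 1, "supervoxels,", feats.shape[1], "features each")
```

    In practice the thesis's own super-region algorithms and richer regional descriptors would take the place of SLIC and these toy statistics.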

    Adaptation of an RIA application developed in Flex to an HTML5 application

    Babelium Project (hereafter also BP) is an open-source project whose main goal is to promote online language learning. To that end, using a variety of technologies, a Rich Internet Application (RIA) was created that allows users to practise languages through oral practice. The application offers several modules where, thanks to video streaming, users can record or evaluate exercises collaboratively. Babelium Project started out as an application developed with the Flex technology stack, a set of Adobe technologies that compiles into a Flash-based application (in this case, a web application). However, with the recent arrival of HTML5, everything suggests that Adobe will abandon Flex in the future to focus its efforts on developing solutions for this new technology. For that reason this project was born, with the aim of migrating Babelium Project to HTML5, a set of technologies with a great future and wide adoption on the web. My work on this project has mainly consisted of analysing the feasibility of, and an appropriate process for, migrating a sizeable project developed in Flex to the set of technologies that make up HTML5, using the Babelium project as a test case. The main phases of the project were: analysis of the state of HTML5, feasibility analysis, selection of a set of technologies for the migration, development of migration patterns, and, finally, migration of Babelium using those technologies and following those patterns.

    Fast global interactive volume segmentation with regional supervoxel descriptors

    In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition), as they provide global regularization for multi-class problems within an energy minimization framework. These models have yet to find impact in biomedical imaging due to the complexity of training and the slow inference in 3D images caused by the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier that refines more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are then refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input to our algorithm. Further results demonstrate the effectiveness of our method in fully extracting different organelles from the cell volume with another few seconds of user interaction. © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
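    The following sketch illustrates, under stated assumptions, the scribble-to-volume flow described above: sklearn's ExtraTreesClassifier stands in for the paper's Contextual Extremely Random Forests, the supervoxel descriptors are assumed to be a precomputed matrix, and the final MRF refinement is omitted. Function and variable names are illustrative.

```python
# Hedged sketch of scribble-driven supervoxel classification, with sklearn's
# ExtraTreesClassifier standing in for Contextual Extremely Random Forests
# and without the final MRF refinement described in the paper.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def classify_supervoxels(feats, scribble_labels):
    """feats: (K, D) regional descriptors, one row per supervoxel.
    scribble_labels: (K,) int array, -1 for unannotated supervoxels."""
    annotated = scribble_labels >= 0
    clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
    clf.fit(feats[annotated], scribble_labels[annotated])
    return clf.predict_proba(feats)        # class likelihoods per supervoxel

def hierarchical_refine(feats, coarse_proba, fine_scribbles):
    """Second layer of the label hierarchy: coarse class likelihoods are
    appended as extra features before training on the finer label set."""
    stacked = np.hstack([feats, coarse_proba])
    return classify_supervoxels(stacked, fine_scribbles)
```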

    SMURFS: superpixels from multi-scale refinement of super-regions

    Recent applications in computer vision have come to rely on superpixel segmentation as a pre-processing step for higher-level vision tasks, such as object recognition, scene labelling or image segmentation. Here, we present a new algorithm, Superpixels from MUlti-scale ReFinement of Super-regions (SMURFS), which not only obtains state-of-the-art superpixels, but can also be applied hierarchically to form what we call n-th order super-regions. In essence, starting from a uniformly distributed set of super-regions, the algorithm iteratively alternates graph-based split and merge optimization schemes, which yield superpixels that better represent the image. The split step is performed over the pixel grid to separate large super-regions into smaller superpixels. The merging process, conversely, is performed over the superpixel graph to create 2nd-order super-regions (super-segments). Iterative refinement over the two scales of regions allows the algorithm to achieve better over-segmentation results than current state-of-the-art methods, as experimental results on the public Berkeley Segmentation Dataset (BSD500) show.
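    Below is a schematic outline (not the published SMURFS implementation) of the alternating refinement loop the abstract describes; the split and merge optimisations are passed in as placeholder callables, since the actual graph-based formulations are beyond this sketch.

```python
# Schematic outline of the SMURFS-style alternating refinement loop.
# `split_step` and `merge_step` are placeholder callables standing in for
# the graph-based split and merge optimisations described in the abstract.
def multiscale_refinement(image, initial_regions, split_step, merge_step,
                          n_iters=5):
    regions = initial_regions          # uniformly distributed super-regions
    superpixels = initial_regions
    for _ in range(n_iters):
        # Split: operate on the pixel grid, cutting large super-regions
        # into smaller superpixels that follow image boundaries.
        superpixels = split_step(image, regions)
        # Merge: operate on the superpixel graph, grouping similar
        # superpixels into 2nd-order super-regions (super-segments).
        regions = merge_step(image, superpixels)
    return superpixels, regions
```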

    SurReal: enhancing Surgical simulation Realism using style transfer

    Surgical simulation is an increasingly important element of surgical education. Using simulation can be a means to address some of the significant challenges in developing surgical skills with limited time and resources. The photo-realistic fidelity of simulations is a key feature that can improve the experience and the transfer ratio of trainees. In this paper, we demonstrate how we can enhance the visual fidelity of existing surgical simulation by performing style transfer of multi-class labels from real surgical video onto synthetic content. We demonstrate our approach on simulations of cataract surgery using real data labels from an existing public dataset. Our results highlight the feasibility of the approach, as well as the possibility of extending this technique to incorporate additional temporal constraints and of applying it to different applications.
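    As a hedged illustration of the underlying style-transfer mechanism, the sketch below shows a generic Gram-matrix style loss in PyTorch that matches feature statistics of synthetic frames to those of real surgical video; it is not the paper's multi-class, label-conditioned method.

```python
# Generic Gram-matrix style loss as a minimal illustration of the
# style-transfer mechanism; the paper's multi-class, label-conditioned
# variant is more involved than this sketch.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """feat: (B, C, H, W) feature maps from a pretrained CNN layer."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(synthetic_feats, real_feats):
    """Match feature statistics of synthetic frames to real surgical video.
    Both arguments are lists of feature maps from corresponding layers."""
    return sum(F.mse_loss(gram_matrix(s), gram_matrix(r))
               for s, r in zip(synthetic_feats, real_feats))
```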

    Data-centric multi-task surgical phase estimation with sparse scene segmentation

    PURPOSE: Surgical workflow estimation techniques aim to divide a surgical video into temporal segments based on predefined surgical actions or objectives, which can be of different granularity, such as steps or phases. Potential applications range from real-time intra-operative feedback to automatic post-operative reports and analysis. A common approach in the literature for performing automatic surgical phase estimation is to decouple the problem into two stages: feature extraction from a single frame and temporal feature fusion. This approach is performed in two stages due to computational restrictions when processing large spatio-temporal sequences. METHODS: The majority of existing works focus on pushing the performance solely through temporal model development. Differently, we follow a data-centric approach and propose a training pipeline that enables models to maximise the usage of existing datasets, which are generally used in isolation. Specifically, we use the dense phase annotations available in Cholec80, and the sparse scene (i.e., instrument and anatomy) segmentation annotations available in CholecSeg8k for less than 5% of the overlapping frames. We propose a simple multi-task encoder that effectively fuses both streams, when available, based on their importance, and jointly optimises them to perform accurate phase prediction. RESULTS AND CONCLUSION: We show that with a small fraction of scene segmentation annotations, a relatively simple model can obtain results comparable to previous state-of-the-art and more complex architectures when evaluated in similar settings. We hope that this data-centric approach can encourage new research directions where data, and how to use it, plays an important role alongside model development.
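    A minimal sketch of the kind of multi-task setup described above is given below, assuming PyTorch: one shared frame encoder with a phase head and a segmentation head, where the segmentation loss is only applied to the small fraction of frames that carry masks. The architecture, class counts and tensor shapes are illustrative assumptions, not the paper's exact model.

```python
# Illustrative multi-task frame model: one shared encoder, a phase head and
# a segmentation head; the segmentation loss is only computed on the small
# fraction of frames that actually carry masks. Defaults assume Cholec80's
# 7 phases and CholecSeg8k's 13 scene classes; everything else is a sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskFrameModel(nn.Module):
    def __init__(self, n_phases=7, n_seg_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.phase_head = nn.Linear(64, n_phases)
        self.seg_head = nn.Conv2d(64, n_seg_classes, 1)

    def forward(self, x):                      # x: (B, 3, H, W)
        feats = self.encoder(x)                # (B, 64, H/4, W/4)
        phase = self.phase_head(feats.mean(dim=(2, 3)))
        seg = self.seg_head(feats)
        return phase, seg

def joint_loss(phase_logits, seg_logits, phase_gt, seg_gt, has_mask):
    """has_mask: (B,) bool, True only for frames with segmentation masks;
    seg_gt is assumed to be (B, H/4, W/4) long, at the seg_logits resolution."""
    loss = F.cross_entropy(phase_logits, phase_gt)
    if has_mask.any():
        loss = loss + F.cross_entropy(seg_logits[has_mask], seg_gt[has_mask])
    return loss
```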

    DeepPhase: Surgical Phase Recognition in CATARACTS Videos

    Automated surgical workflow analysis and understanding can assist surgeons to standardize procedures and enhance post-surgical assessment and indexing, as well as interventional monitoring. Video-based computer-assisted interventional (CAI) systems can perform workflow estimation by recognizing surgical instruments and linking them to an ontology of procedural phases. In this work, we adopt a deep learning paradigm to detect surgical instruments in cataract surgery videos, which in turn feed a surgical phase inference recurrent network that encodes the temporal aspects of phase steps within the phase classification. Our models achieve results comparable to the state of the art for surgical tool detection and phase recognition, with accuracies of 99% and 78% respectively.
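    The sketch below illustrates the general CNN-plus-recurrent-network recipe described above, assuming PyTorch and a torchvision ResNet-18 as the per-frame feature extractor; the backbone choice, single LSTM layer and sizes are assumptions, not the published DeepPhase architecture.

```python
# Minimal sketch of the CNN-then-RNN recipe: per-frame features (here from a
# torchvision ResNet-18) feed an LSTM that classifies the surgical phase at
# each time step. Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PhaseRNN(nn.Module):
    def __init__(self, n_phases, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # expose 512-d frame features
        self.backbone = backbone
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_phases)

    def forward(self, clip):                   # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))   # (B*T, 512)
        feats = feats.view(b, t, -1)
        out, _ = self.rnn(feats)               # temporal fusion over the clip
        return self.classifier(out)            # per-frame phase logits
```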

    Volume segmentation and analysis of biological materials using SuRVoS (Super-region Volume Segmentation) workbench

    Segmentation is the process of isolating specific regions or objects within an imaged volume so that further study can be undertaken on these areas of interest. When considering the analysis of complex biological systems, the segmentation of three-dimensional image data is a time-consuming and labor-intensive step. With the increased availability of many imaging modalities and automated data collection schemes, moving from data to knowledge poses an increased challenge for the modern experimental biologist. This publication describes the use of the SuRVoS Workbench, a program designed to address these issues by providing methods to semi-automatically segment complex biological volumetric data. Three datasets of differing magnification and imaging modalities are presented here, each highlighting a different segmentation strategy within SuRVoS. Phase contrast X-ray tomography (microCT) of the fruiting body of a plant is used to demonstrate segmentation using model training, cryo electron tomography (cryoET) of human platelets is used to demonstrate segmentation using super- and megavoxels, and cryo soft X-ray tomography (cryoSXT) of a mammalian cell line is used to demonstrate the label splitting tools. Strategies and parameters for each data type are also presented. By blending a selection of semi-automatic processes into a single interactive tool, SuRVoS provides several benefits. The overall time to segment volumetric data is reduced by a factor of five when compared to manual segmentation, a mainstay in many image processing fields. This is a significant saving, given that full manual segmentation can take weeks of effort. Additionally, subjectivity is addressed through the use of computationally identified boundaries, and by splitting complex collections of objects by their calculated properties rather than on a case-by-case basis.
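    As a small illustration of the idea behind label splitting (not the SuRVoS tool itself), the sketch below breaks a binary label into connected components with SciPy and relabels only those whose computed size exceeds a threshold; the function name and threshold are illustrative assumptions.

```python
# Illustration of the idea behind label splitting: break a single binary
# label into connected components and keep each sufficiently large object
# as its own label, based on a calculated property (here, voxel count).
import numpy as np
from scipy import ndimage as ndi

def split_label_by_size(mask, min_voxels=500):
    """mask: 3D boolean array for one segmented class."""
    components, n = ndi.label(mask)            # one id per connected object
    sizes = ndi.sum(mask, components, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1
    # Relabel so each kept object gets its own consecutive label id.
    out = np.zeros_like(components)
    for new_id, old_id in enumerate(keep, start=1):
        out[components == old_id] = new_id
    return out
```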

    SuRVoS: Super-Region Volume Segmentation workbench

    Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets