
    Evaluating and Improving 4D-CT Image Segmentation for Lung Cancer Radiotherapy

    Lung cancer is a high-incidence disease with low survival despite surgical advances and concurrent chemo-radiotherapy strategies. Image-guided radiotherapy offers effective treatment measures; however, respiratory motion poses significant challenges for imaging, treatment planning, and radiation delivery. 4D-CT imaging can improve the image quality of thoracic target volumes affected by respiratory motion. 4D-CT-based treatment planning strategies require highly accurate anatomical segmentation of tumour volumes for radiotherapy treatment plan optimization. Variable segmentation of tumour volumes contributes significantly to uncertainty in radiotherapy planning, because the exact shape of the lesion is unknown and the variability is difficult to quantify. As image segmentation is one of the earliest tasks in the radiotherapy process, its inherent geometric uncertainties propagate to subsequent stages, potentially jeopardizing patient outcomes. This work therefore assesses segmentation-related geometric uncertainties in 4D-CT-based lung cancer radiotherapy and suggests strategies for mitigating them at the pre- and post-treatment planning stages.

    Nucleus segmentation : towards automated solutions

    Single nucleus segmentation is a frequent challenge in microscopy image processing, since it is the first step of many quantitative data analysis pipelines. The quality of tracking single cells, extracting features, or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held with the aim of improving segmentation, and recent years have brought significant advances: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution or benchmarking platform exists. We review the most recent single-cell segmentation tools and provide an interactive method browser to select the most appropriate solution.

    Semi-automated learning strategies for large-scale segmentation of histology and other big bioimaging stacks and volumes

    Labelled high-resolution datasets are becoming increasingly common and necessary in different areas of biomedical imaging. Examples include serial histology and ex-vivo MRI for atlas building, OCT for studying the human brain, and micro X-ray for tissue engineering. Labelling such datasets typically requires manual delineation of a very detailed set of regions of interest on a large number of sections or slices. This process is tedious, time-consuming, not reproducible, and rather inefficient due to the high similarity of adjacent sections. In this thesis, I explore the potential of a semi-automated slice-level segmentation framework and a suggestive region-level framework, which aim to speed up the segmentation of big bioimaging datasets. The thesis includes two well-validated, published, and widely used novel methods and one algorithm which did not yield an improvement over the current state-of-the-art. The slice-wise method, SmartInterpol, consists of a probabilistic model for semi-automated segmentation of stacks of 2D images, in which the user manually labels a sparse set of sections (e.g., one every n sections) and lets the algorithm complete the segmentation for the other sections automatically. The proposed model integrates in a principled manner two families of segmentation techniques that have been very successful in brain imaging: multi-atlas segmentation and convolutional neural networks. Labelling every structure on a sparse set of slices is not necessarily optimal, so I also introduce a region-level active learning framework which requires the labeller to annotate one region of interest on one slice at a time. The framework exploits partial annotations, weak supervision, and realistic estimates of class- and section-specific annotation effort in order to greatly reduce the time it takes to produce accurate segmentations for large histological datasets. Although both frameworks were created targeting histological datasets, they have been successfully applied to other big bioimaging datasets, reducing labelling effort by up to 60-70% without compromising accuracy.
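    To make the sparse-slice idea concrete, the following minimal Python sketch completes a stack in which only every n-th slice has been manually annotated, by copying each unlabelled slice's annotation from its nearest annotated neighbour. This is only a toy baseline for the completion step under assumed data layouts; the function name and array shapes are illustrative, and the published SmartInterpol model instead combines multi-atlas registration with convolutional neural networks.

    import numpy as np

    def complete_sparse_labels(labels, step):
        # labels: (num_slices, H, W) integer array in which only every
        # `step`-th slice carries manual annotations; the rest are ignored.
        num_slices = labels.shape[0]
        annotated = np.arange(0, num_slices, step)       # slices the user labelled
        completed = np.empty_like(labels)
        for z in range(num_slices):
            nearest = annotated[np.argmin(np.abs(annotated - z))]
            completed[z] = labels[nearest]               # nearest-slice propagation
        return completed

    # Example: a 30-slice stack annotated on every 5th slice.
    stack = np.zeros((30, 64, 64), dtype=np.int32)
    stack[::5, 20:40, 20:40] = 1                         # pretend manual labels
    full_labels = complete_sparse_labels(stack, step=5)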

    Accurate and budget-efficient text, image, and video analysis systems powered by the crowd

    Crowdsourcing systems empower individuals and companies to outsource labor-intensive tasks that cannot currently be solved by automated methods and are expensive to tackle by domain experts. Crowdsourcing platforms are traditionally used to provide training labels for supervised machine learning algorithms. Crowdsourced tasks are distributed among internet workers who typically have a range of skills and knowledge, differing previous exposure to the task at hand, and biases that may influence their work. This inhomogeneity of the workforce makes the design of accurate and efficient crowdsourcing systems challenging. This dissertation presents solutions to improve existing crowdsourcing systems in terms of accuracy and efficiency. It explores crowdsourcing tasks in two application areas: political discourse and annotation of biomedical and everyday images. The first part of the dissertation investigates how workers' behavioral factors and their unfamiliarity with data can be leveraged by crowdsourcing systems to control quality. Through studies that involve familiar and unfamiliar image content, the thesis demonstrates the benefit of explicitly accounting for a worker's familiarity with the data when designing annotation systems powered by the crowd. The thesis next presents Crowd-O-Meter, a system that automatically predicts the vulnerability of crowd workers to believing "fake news" in text and video. The second part of the dissertation explores the reversed relationship between machine learning and crowdsourcing by incorporating machine learning techniques for quality control of crowdsourced end products. In particular, it investigates whether machine learning can be used to improve the quality of crowdsourced results while also respecting budget constraints. The thesis proposes an image analysis system called ICORD that utilizes behavioral cues of the crowd worker, augmented by automated evaluation of image features, to dynamically infer the quality of a worker-drawn outline of a cell in a microscope image. ICORD determines the need to seek additional annotations from other workers in a budget-efficient manner. Next, the thesis proposes a budget-efficient machine learning system that uses fewer workers to analyze easy-to-label data and more workers for data that require extra scrutiny. The system learns a mapping from data features to the number of allocated crowd workers for two case studies: sentiment analysis of Twitter messages and segmentation of biomedical images. Finally, the thesis uncovers the potential of hybrid crowd-algorithm methods by describing an interactive system for cell tracking in time-lapse microscopy videos, based on a prediction model that determines when automated cell tracking algorithms fail and human interaction is needed to ensure accurate tracking.
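    As a rough illustration of the budget-allocation idea, the sketch below trains a classifier to predict whether a single worker is likely to label an item correctly, then assigns more workers to items predicted to be hard. All feature and variable names are hypothetical and the data is synthetic; the dissertation's actual system learns its mapping from real task features and worker behaviour, which are not reproduced here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in data: per-item features and whether a single worker
    # alone matched the expert label (both entirely hypothetical).
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))
    easy_for_one_worker = (X_train[:, 0] > 0).astype(int)

    difficulty_model = LogisticRegression().fit(X_train, easy_for_one_worker)

    def allocate_workers(features, min_workers=1, max_workers=5):
        # Fewer workers for items predicted easy, more for hard ones.
        p_easy = difficulty_model.predict_proba(features)[:, 1]
        crew = min_workers + (1.0 - p_easy) * (max_workers - min_workers)
        return np.round(crew).astype(int)

    print(allocate_workers(rng.normal(size=(10, 3))))    # e.g. [1 5 2 ...]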

    Combining crowd worker, algorithm, and expert efforts to find boundaries of objects in images

    While traditional approaches to image analysis have typically relied upon either manual annotation by experts or purely algorithmic approaches, the rise of crowdsourcing now provides a new source of human labor to create training data or perform computations at run-time. Given this richer design space, how should we utilize algorithms, crowds, and experts to better annotate images? To answer this question for the important task of finding the boundaries of objects or regions in images, I focus on image segmentation, an important precursor to solving a variety of fundamental image analysis problems, including recognition, classification, tracking, registration, retrieval, and 3D visualization. The first part of the work includes a detailed analysis of the relative strengths and weaknesses of three different approaches to demarcating object boundaries in images: by experts, by crowdsourced laymen, and by automated computer vision algorithms. The second part of the work describes three hybrid system designs that integrate computer vision algorithms and crowdsourced laymen to demarcate boundaries in images. Experiments revealed that the hybrid system designs yielded more accurate results than relying on algorithms or crowd workers alone and could yield segmentations that are indistinguishable from those created by biomedical experts. To encourage a community-wide effort to continue developing methods and systems for image-based studies that can have a real and measurable impact benefiting society at large, the datasets and code are publicly shared (http://www.cs.bu.edu/~betke/BiomedicalImageSegmentation/).
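    One common way to score such expert/crowd/algorithm comparisons is region overlap between a candidate boundary and the expert reference once both are rasterised as binary masks. The short sketch below computes intersection-over-union for hypothetical crowd and algorithm masks; it illustrates only one possible metric and is not taken from the dissertation's evaluation code.

    import numpy as np

    def iou(mask_a, mask_b):
        # Intersection-over-union (Jaccard index) of two binary masks.
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return inter / union if union else 1.0

    # Hypothetical rasterised boundaries for one image.
    expert = np.zeros((128, 128), dtype=bool); expert[30:90, 30:90] = True
    crowd = np.zeros((128, 128), dtype=bool);  crowd[28:92, 32:88] = True
    algo = np.zeros((128, 128), dtype=bool);   algo[35:80, 35:95] = True

    for name, mask in [("crowd", crowd), ("algorithm", algo)]:
        print(f"{name} vs expert IoU: {iou(expert, mask):.3f}")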

    A proposed framework for consensus-based lung tumour volume auto-segmentation in 4D computed tomography imaging.

    This work aims to propose and validate a framework for tumour volume auto-segmentation based on ground-truth estimates derived from multi-physician input contours, in order to expedite 4D-CT-based lung tumour volume delineation. 4D-CT datasets of ten non-small cell lung cancer (NSCLC) patients were manually segmented by six physicians. Multi-expert ground truth (GT) estimates were constructed using the STAPLE algorithm for the gross tumour volume (GTV) on all respiratory phases. Next, using a deformable model-based method, the multi-expert GT on each individual phase of the 4D-CT dataset was propagated to all other phases, providing auto-segmented GTVs and motion-encompassing internal gross target volumes (IGTVs) based on GT estimates (STAPLE) from each respiratory phase of the 4D-CT dataset. Accuracy assessment of the auto-segmentation employed graph cuts for 3D shape reconstruction and point-set registration-based analysis, yielding volumetric and distance-based measures. STAPLE-based auto-segmented GTV accuracy ranged from (81.51 ± 1.92)% to (97.27 ± 0.28)% volumetric overlap with the estimated ground truth. IGTV auto-segmentation showed significantly improved accuracy with reduced variance for all patients, ranging from 90.87% to 98.57% volumetric overlap with the ground truth volume. Additional metrics supported these observations with statistical significance. The accuracy of the auto-segmentation was shown to be largely independent of the choice of initial propagation phase. IGTV construction based on auto-segmented GTVs within the 4D-CT dataset provided accurate and reliable target volumes compared to manual segmentation-based GT estimates. While inter- and intra-observer effects were largely mitigated, the proposed segmentation workflow is more complex than current clinical practice and requires further development.
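    The STAPLE step can be illustrated with a stripped-down, binary expectation-maximisation sketch: each physician's contour is treated as a vector of voxel votes, and the algorithm alternates between estimating a soft consensus volume and re-estimating each rater's sensitivity and specificity. The sketch below uses synthetic votes, a single global prior, and no spatial regularisation, so it is only a conceptual illustration of the idea behind the published algorithm, not the framework's implementation.

    import numpy as np

    def staple_binary(decisions, prior=0.5, n_iters=50):
        # decisions: (num_raters, num_voxels) array of 0/1 votes.
        R, N = decisions.shape
        p = np.full(R, 0.9)   # initial sensitivities
        q = np.full(R, 0.9)   # initial specificities
        for _ in range(n_iters):
            # E-step: probability that each voxel belongs to the true structure.
            a = prior * np.prod(np.where(decisions == 1, p[:, None], 1 - p[:, None]), axis=0)
            b = (1 - prior) * np.prod(np.where(decisions == 0, q[:, None], 1 - q[:, None]), axis=0)
            w = a / (a + b + 1e-12)
            # M-step: re-estimate each rater's performance against the soft consensus.
            p = (decisions * w).sum(axis=1) / (w.sum() + 1e-12)
            q = ((1 - decisions) * (1 - w)).sum(axis=1) / ((1 - w).sum() + 1e-12)
        return w, p, q

    # Six simulated physicians voting on a flattened tumour volume.
    rng = np.random.default_rng(1)
    truth = (rng.random(10000) < 0.2).astype(int)
    votes = np.array([np.where(rng.random(10000) < 0.95, truth, 1 - truth) for _ in range(6)])
    consensus, sens, spec = staple_binary(votes)
    gtv_estimate = consensus > 0.5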

    Bioimage informatics in the context of Drosophila research

    Modern biological research relies heavily on microscopic imaging. The advanced genetic toolkit of Drosophila makes it possible to label molecular and cellular components with an unprecedented level of specificity, necessitating the application of the most sophisticated imaging technologies. Imaging in Drosophila spans all scales, from single molecules to entire populations of adult organisms, and from electron microscopy to live imaging of developmental processes. As imaging approaches become more complex and ambitious, there is an increasing need for quantitative, computer-mediated image processing and analysis to make sense of the imagery. Bioimage informatics is an emerging research field that covers all aspects of biological image analysis, from data handling, through processing, to quantitative measurements, analysis, and data presentation. Some of the most advanced, large-scale projects, combining cutting-edge imaging with complex bioimage informatics pipelines, are realized in the Drosophila research community. In this review, we discuss current research in biological image analysis specifically relevant to the type of systems-level image datasets that are uniquely available for the Drosophila model system. We focus on how state-of-the-art computer vision algorithms are impacting the ability of Drosophila researchers to analyze biological systems in space and time. We pay particular attention to how these algorithmic advances from computer science are made usable to practicing biologists through open-source platforms, and how biologists can themselves participate in their further development.

    Extracting complex lesion phenotypes in

