
    Signaling local non-credibility in an automatic segmentation pipeline

    The advancing technology for automatic segmentation of medical images should be accompanied by techniques that inform the user of the local credibility of the results. To the extent that this technology produces clinically acceptable segmentations for a significant fraction of cases, there is a risk that the clinician will assume every result is acceptable. In the less frequent case where segmentation fails, we are concerned that, unless alerted by the computer, the user would still put the result to clinical use. By alerting the user to the location of a likely segmentation failure, we allow her to apply limited validation and editing resources where they are most needed. We propose an automated method to signal suspected non-credible regions of the segmentation, triggered by statistical outliers of the local image match function. We apply this test to m-rep segmentations of the bladder and prostate in CT images, using a local image match computed by PCA on regional intensity quantile functions. We validate these results by correlating the non-credible regions with regions whose surface distance to a reference segmentation exceeds 5.5 mm for the bladder and 6 mm for the prostate. Varying the outlier threshold level produced a receiver operating characteristic curve with an area under the curve of 0.89 for the bladder and 0.92 for the prostate. Based on these preliminary results, our method is able to predict local segmentation failures and shows potential for validation in an automatic segmentation pipeline.
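
    The credibility test described above can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: the helper name mahalanobis_scores, the synthetic quantile functions, and the choice of a Mahalanobis distance in PCA space as the outlier statistic are all assumptions made for the example.

    ```python
    # Hypothetical sketch of an outlier-based credibility test.
    # Assumptions (not from the paper): regional quantile functions are
    # precomputed as rows of `train_qfs` (credible training cases) and
    # `case_qfs` (regions of a new case), and reference failure labels
    # come from a surface-distance threshold.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.metrics import roc_auc_score

    def mahalanobis_scores(train_qfs, case_qfs, n_components=5):
        """Score each region by how far its quantile function lies from
        the training distribution in PCA space (larger = less credible)."""
        pca = PCA(n_components=n_components).fit(train_qfs)
        z = pca.transform(case_qfs)                   # project new regions
        var = pca.explained_variance_                 # per-component variance
        return np.sqrt(((z ** 2) / var).sum(axis=1))  # Mahalanobis distance

    # Toy data: 200 training regions, 50 test regions, 64-point quantile functions
    rng = np.random.default_rng(0)
    train_qfs = np.sort(rng.normal(size=(200, 64)), axis=1)
    case_qfs = np.sort(rng.normal(size=(50, 64)), axis=1)
    case_qfs[:10] += 1.5                              # simulate failed regions

    scores = mahalanobis_scores(train_qfs, case_qfs)
    failed = np.array([1] * 10 + [0] * 40)            # reference labels
    print("AUC:", roc_auc_score(failed, scores))      # implicit threshold sweep
    ```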

    Effectiveness of Visualisations for Detection of Errors in Segmentation of Blood Vessels

    Vascular disease diagnosis often requires a precise segmentation of the vessel lumen. When 3D imaging (Magnetic Resonance Angiography, MRA, or Computed Tomography Angiography, CTA) is available, this can be done automatically, but occasional errors are inevitable, so the segmentation has to be checked by clinicians. This requires appropriate visualisation techniques. A number of visualisation techniques exist, but there have been few user studies comparing the alternatives. In this study we examine how users interact with several basic visualisations when performing a visual search task: checking the correctness of vascular segmentations of MRA data. These visualisations are direct volume rendering (DVR), isosurface rendering, and curved planar reformatting (CPR). Additionally, we examine whether visual highlighting of potential errors can help the user find them, so the fourth visualisation we examine is DVR with visual highlighting. Our main findings are that CPR is fastest but has a higher error rate, and that there are no significant differences between the other three visualisations. We did find that visual highlighting led to slower performance in early trials, suggesting that users learned to ignore the highlights.

    Resting state EEG biomarkers in translational neuroscience


    A multiscale orchestrated computational framework to reveal emergent phenomena in neuroblastoma

    Neuroblastoma is a complex and aggressive type of cancer that affects children. Current treatments involve a combination of surgery, chemotherapy, radiotherapy, and stem cell transplantation, but treatment outcomes vary due to the heterogeneous nature of the disease. Computational models have been used to analyse data, simulate biological processes, and predict disease progression and treatment outcomes. While continuum cancer models capture the overall behaviour of tumours and agent-based models represent the complex behaviour of individual cells, multiscale models represent interactions at different organisational levels, providing a more comprehensive understanding of the system. In 2018, the PRIMAGE consortium was formed to build a cloud-based decision support system for neuroblastoma, including a multiscale model for patient-specific simulations of disease progression. In this work we have developed this multiscale model, which incorporates data such as the patient's tumour geometry, cellularity, vascularization, genetics, and type of chemotherapy treatment, and integrated it into an online platform that runs the simulations on a high-performance computing cluster using Onedata and Kubernetes technologies. This infrastructure will allow clinicians to optimise treatment regimens and reduce the number of costly and time-consuming clinical trials. This manuscript outlines the model architecture, data workflow, hypotheses, and resources employed in developing this challenging framework.
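
    As a rough illustration of the multiscale idea described above (a continuum field coupled to cell-level agents), the toy loop below alternates a coarse nutrient-diffusion step with per-cell consumption and division rules. It is a minimal sketch under invented assumptions (grid size, rates, and the division rule), not the PRIMAGE model.

    ```python
    # Illustrative multiscale coupling loop, not the PRIMAGE implementation:
    # a coarse continuum nutrient field drives agent-level cell behaviour,
    # and the agents' consumption feeds back into the field.
    import numpy as np

    rng = np.random.default_rng(1)
    nutrient = np.ones((50, 50))                 # continuum scale: nutrient field
    cells = rng.integers(0, 50, size=(200, 2))   # agent scale: cell positions

    def diffuse(field, rate=0.2):
        """One explicit diffusion step with no-flux padding (continuum update)."""
        padded = np.pad(field, 1, mode="edge")
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * field)
        return field + rate * lap

    for step in range(100):
        nutrient = diffuse(nutrient)
        # Agent update: each cell consumes nutrient; well-fed cells may divide.
        for x, y in cells:
            nutrient[x, y] = max(0.0, nutrient[x, y] - 0.05)
            if nutrient[x, y] > 0.5 and rng.random() < 0.01:
                cells = np.vstack([cells, [(x + 1) % 50, y]])  # daughter cell

    print("cells after 100 steps:", len(cells))
    ```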

    Data Science Methods for Analyzing Nanomaterial Images and Videos

    A large amount of nanomaterial characterization data has been routinely collected using electron microscopes and stored in image or video formats. A bottleneck in making effective use of these image/video data is the lack of sophisticated data science methods capable of unlocking valuable material-pertinent information buried in the raw data. To address this problem, the research in this dissertation begins with understanding the physical mechanisms behind the process of interest, in order to determine why generic methods fall short. It then designs and improves image processing and statistical modeling tools to address the practical challenges. Specifically, this dissertation consists of two main tasks: extracting useful information from images or videos of nanomaterials captured by electron microscopes, and designing analytical methods for modeling and monitoring the dynamic growth of nanoparticles. For the first task, a two-pipeline framework is proposed that fuses two kinds of image information for nanoscale object detection, accurately identifying and measuring nanoparticles in transmission electron microscope (TEM) images with high noise and low contrast. For the second task, analyzing nanoparticle growth, this dissertation develops dynamic nonparametric models for the estimation of time-varying probability density functions (PDFs). Unlike simple summary statistics, a PDF contains fuller information about the nanoscale objects of interest. By characterizing the dynamic changes of the PDF as the nanoparticles grow into different sizes and morph into different shapes, the proposed nonparametric methods can analyze an in situ TEM video to delineate growth stages in a retrospective analysis, or track the nanoparticle growth process in a prospective analysis. The resulting analytic methods have applications beyond the nanoparticle growth process, such as image-based process control in additive manufacturing.
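
    A bare-bones version of the second task's idea, estimating a time-varying PDF and flagging stage transitions, might look like the sketch below. The per-frame Gaussian KDE and the L1 change signal are stand-ins chosen for illustration rather than the dissertation's estimator, and the frame data are synthetic.

    ```python
    # Illustrative sketch: track how the size distribution of particles
    # evolves by fitting a kernel density estimate per video frame and
    # flagging large frame-to-frame changes as candidate stage boundaries.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    grid = np.linspace(0, 20, 200)               # particle diameter grid (nm)

    # Synthetic video: particle sizes drift upward, with a jump at frame 30
    frames = [rng.normal(5 + 0.05 * t + (3 if t >= 30 else 0), 1.0, size=150)
              for t in range(60)]

    pdfs = np.array([gaussian_kde(sizes)(grid) for sizes in frames])

    # L1 distance between consecutive PDFs as a simple change signal
    dx = grid[1] - grid[0]
    change = np.abs(np.diff(pdfs, axis=0)).sum(axis=1) * dx
    print("largest change between frames", int(np.argmax(change)),
          "and", int(np.argmax(change)) + 1)
    ```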

    Pushing the Boundaries of Biomolecule Characterization through Deep Learning

    The importance of studying biological molecules in living organisms can hardly be overstated, as they regulate crucial processes in living matter of all kinds. Their ubiquitous nature makes them relevant for disease diagnosis, drug development, and our fundamental understanding of the complex systems of biology. However, due to their small size, they scatter too little light on their own to be directly visible and available for study. Thus, it is necessary to develop characterization methods which enable their elucidation even in the regime of very faint signals. Optical systems, utilizing the relatively low intrusiveness of visible light, constitute one such approach to characterization. However, the optical systems currently capable of analyzing single molecules in the nano-sized regime require the species of interest either to be tagged with visible labels, such as fluorescent markers, or to be chemically restrained on a surface for analysis. Ergo, there are effectively no methods for characterizing very small biomolecules under naturally relevant conditions through unobtrusive probing. Nanofluidic Scattering Microscopy, a method introduced in this thesis, bridges this gap by enabling real-time, label-free size-and-weight determination of freely diffusing molecules directly in small nano-sized channels. However, the molecule signals are so faint, and the background noise so complex with high spatial and temporal variation, that standard methods of data analysis are incapable of elucidating the molecules' relevant properties in any but the least challenging conditions. To remedy the weak signal and realize the method's full potential, this thesis focuses on the development of a versatile deep-learning-based computer-vision platform to overcome the bottleneck of data analysis. We find that said platform offers considerably increased speed, accuracy, precision, and limit of detection compared to standard methods, achieving a lower detection limit than any other currently available method of label-free optical characterization. In this regime, hitherto elusive species of biomolecules become accessible for study, potentially opening up entirely new avenues of biological research. These results, along with many others in the context of deep learning for optical microscopy in biological applications, suggest that deep learning is likely to be pivotal in solving the complex image analysis problems of the present and enabling new regimes of study within microscopy-based research in the near future.
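
    To make the role of such a deep-learning platform concrete, the sketch below trains a tiny convolutional network to suppress noise around faint, line-like signals. The architecture, the synthetic training pair, and the hyperparameters are illustrative assumptions, not the platform described in the thesis.

    ```python
    # Schematic sketch only: a small convolutional network of the kind that
    # could learn to enhance faint single-molecule signals against a noisy
    # background. Everything here is an illustrative assumption.
    import torch
    import torch.nn as nn

    class FaintSignalEnhancer(nn.Module):
        """Maps a noisy microscopy frame to a background-suppressed signal map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )
        def forward(self, x):
            return self.net(x)

    model = FaintSignalEnhancer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Synthetic training pair: faint line-like traces buried in strong noise
    clean = torch.zeros(8, 1, 64, 64)
    clean[:, :, 32, 16:48] = 1.0                  # molecule traces
    noisy = clean * 0.1 + 0.5 * torch.randn_like(clean)

    for step in range(200):                       # brief illustrative training
        opt.zero_grad()
        loss = loss_fn(model(noisy), clean)
        loss.backward()
        opt.step()
    print("final loss:", float(loss))
    ```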