
    3D mesh processing using GAMer 2 to enable reaction-diffusion simulations in realistic cellular geometries

    Recent advances in electron microscopy have enabled the imaging of single cells in 3D at nanometer length scale resolutions. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. Enabling such simulations requires watertight meshing of electron micrograph images into 3D volume meshes, which can then form the basis of computer simulations of such processes using numerical techniques such as the Finite Element Method. In this paper, we describe the use of our recently rewritten mesh processing software, GAMer 2, to bridge the gap between poorly conditioned meshes generated from segmented micrographs and boundary-marked tetrahedral meshes that are compatible with simulation. We demonstrate the application of a workflow using GAMer 2 to a series of electron micrographs of neuronal dendrite morphology explored at three different length scales and show that the resulting meshes are suitable for finite element simulations. This work is an important step towards making physical simulations of biological processes in realistic geometries routine. Innovations in algorithms to reconstruct and simulate cellular length scale phenomena based on emerging structural data will enable realistic physical models and advance discovery at the interface of geometry and cellular processes. We posit that a new frontier at the intersection of computational technologies and single cell biology is now open. Comment: 39 pages, 14 figures. High resolution figures and supplemental movies available upon request.
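    As a toy illustration of the mesh-conditioning step described above, the sketch below applies uniform Laplacian smoothing to a triangulated surface. This is a generic operation on assumed example data, not GAMer 2's API or its feature-preserving conditioning algorithms.

```python
# Illustrative sketch (not GAMer 2): one basic surface-conditioning operation,
# uniform Laplacian smoothing of a triangle mesh.
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.5, iterations=10):
    """Move each vertex toward the centroid of its neighbors.

    vertices : (N, 3) float array of vertex positions
    faces    : (M, 3) int array of triangle vertex indices
    lam      : relaxation factor in (0, 1]
    """
    # Build vertex adjacency from triangle edges.
    neighbors = [set() for _ in range(len(vertices))]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))

    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nbrs)].mean(axis=0) if nbrs else v[i]
                              for i, nbrs in enumerate(neighbors)])
        v += lam * (centroids - v)
    return v

# Example: smooth a small octahedron-like surface.
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
faces = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                  [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])
print(laplacian_smooth(verts, faces).round(3))
```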

    Estimation of Wood Pulp Fiber Species Composition From Microscopy Images Using Computer Vision

    Pulp mills and papermakers require careful control of input raw materials. The paper pulp composition, consisting of blends of different wood fiber types, affects multiple final product properties in interacting ways and impacts process operating conditions. Manual estimation of composition by classification and counting under a microscope is time consuming, repetitive, and error-prone, and fibers are not always identifiable. Using a dataset of 359,840 fibers from 12,690 images of either hardwood or softwood fibers from 423 microscopy slides, with data partitioned by slide into 60% training, 20% validation, and 20% testing splits, and a sequence of principal components analysis, Gaussian mixture, image analysis, and convolutional neural network models, this work demonstrates a system capable of processing 4.92-megapixel, three-channel color microscopy images at a rate of 30 seconds per image on a 4 GB NVIDIA Jetson Nano, with a fiber-segment-level test accuracy of 91%. The variation in accuracy between slides is statistically significant and follows a beta-binomial distribution, which determines the number of slides required for confident estimation of the actual process mixture composition; the described implementation requires 10 slides for a 90% interval of ±3.25% of the estimated composition. Additionally, anomalous cotton fibers, not present in the training data, are correctly identified with a rate of 33% false negatives and 5% false positives. The entire process is visualized, enhancing interpretability and understanding of fundamental fiber structures. The complete system enables papermakers and pulp mills to improve control of the input concentrations of component fibers and to adjust corresponding operating conditions appropriately to achieve desired properties. Studying the classification results, we identify the influence of confounding factors in our data: changing from one slide to the next influences not only the species of fiber, but also the observation conditions, such as illumination, imaging, and slide preparation. Then, by simulating a dataset of microscopy slides in which the influence of such confounders is not present, we demonstrate that it is not the simplicity of the objects of interest that limits the use of high-capacity models for learning; rather, we hypothesize the presence of an easily learnable feature that varies from slide to slide and is detectable among many objects from the same slide. Mitigating this feature could greatly improve learning of otherwise relevant but subtle fiber features.
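    The slide-level aggregation described above can be sketched with a small Monte Carlo experiment: per-slide accuracy is drawn from a beta distribution (giving beta-binomial counts), and the 90% interval on the estimated composition narrows as slides are added. The beta parameters, fiber counts, and mixture fraction below are assumptions for illustration, not the paper's fitted values.

```python
# Illustrative Monte Carlo sketch of beta-binomial slide-to-slide variation
# and its effect on the composition-estimate interval (assumed parameters).
import numpy as np

rng = np.random.default_rng(0)

def interval_halfwidth(n_slides, a=40.0, b=4.0,
                       fibers_per_slide=800, n_sim=5000):
    """90% Monte Carlo interval half-width of the estimated softwood fraction."""
    true_fraction = 0.7          # assumed true softwood share of the mixture
    estimates = np.empty(n_sim)
    for i in range(n_sim):
        # Per-slide accuracy drawn from a beta prior -> beta-binomial counts.
        acc = rng.beta(a, b, size=n_slides)
        # A fiber is labeled softwood if it is softwood and classified
        # correctly, or hardwood and misclassified (symmetric errors assumed).
        p_label = acc * true_fraction + (1 - acc) * (1 - true_fraction)
        labeled = rng.binomial(fibers_per_slide, p_label).sum()
        estimates[i] = labeled / (n_slides * fibers_per_slide)
    lo, hi = np.quantile(estimates, [0.05, 0.95])
    return (hi - lo) / 2

for n in (1, 5, 10, 20):
    print(f"{n:2d} slides: +/- {100 * interval_halfwidth(n):.2f}%")
```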

    Sub-pixel Registration in Computational Imaging and Applications to Enhancement of Maxillofacial CT Data

    In computational imaging, data acquired by sampling the same scene or object at different times or from different orientations result in images in different coordinate systems. Registration is a crucial step in order to be able to compare, integrate, and fuse the data obtained from different measurements. Tomography is the method of imaging a single plane or slice of an object. A Computed Tomography (CT) scan, also known as a CAT scan (Computed Axial Tomography scan), is a helical tomography, which traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays, which are ionizing radiation; although the actual dose is typically low, repeated scans should be limited. In dentistry, and implant dentistry in particular, there is a need for 3D visualization of internal anatomy. The internal visualization is mainly based on CT scanning technologies. The most important technological advancement that has dramatically enhanced the clinician's ability to diagnose, treat, and plan dental implants has been the CT scan. Advanced 3D modeling and visualization techniques permit highly refined and accurate assessment of the CT scan data. However, in addition to imperfections of the instrument and the imaging process, it is not uncommon to encounter other unwanted artifacts in the form of bright regions, flares, and erroneous pixels due to dental bridges, metal braces, etc. Currently, removing and cleaning up the data from acquisition backscattering imperfections and unwanted artifacts is performed manually, which is only as good as the experience level of the technician. The process is also error-prone, since the editing needs to be performed image by image. We address some of these issues by proposing novel registration methods and using stone-cast models of the patient's dental imprint as reference ground truth data. Stone-cast models were originally used by dentists to make complete or partial dentures. The CT scan of such stone-cast models can be used to automatically guide the cleaning of patients' CT scans from defects or unwanted artifacts, and also as an automatic segmentation system for the outliers of the CT scan data without the use of stone-cast models. The segmented data is subsequently used to clean the data from artifacts using a newly proposed 3D inpainting approach.
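    For the registration component, one generic sub-pixel technique is upsampled phase cross-correlation; the sketch below uses scikit-image on a synthetic translation and illustrates sub-pixel alignment in general, not the registration methods proposed in the dissertation.

```python
# Illustrative sketch of sub-pixel translational registration via upsampled
# phase cross-correlation (generic technique, synthetic data).
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(1)
reference = ndimage.gaussian_filter(rng.random((256, 256)), sigma=3)

true_shift = (4.25, -2.6)                      # sub-pixel misalignment (rows, cols)
moving = ndimage.shift(reference, true_shift)  # simulated misaligned slice

# upsample_factor=100 resolves the translation to 1/100 of a pixel.
shift, error, _ = phase_cross_correlation(reference, moving,
                                           upsample_factor=100)
print("estimated shift:", shift)               # matches true_shift (library sign convention)

# Per the library's documented convention, applying the estimated shift
# re-aligns `moving` with `reference`.
registered = ndimage.shift(moving, shift)
```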

    DUDE-Seq: Fast, Flexible, and Robust Denoising for Targeted Amplicon Sequencing

    We consider the correction of errors from nucleotide sequences produced by next-generation targeted amplicon sequencing. Next-generation sequencing (NGS) platforms can provide a great deal of sequencing data thanks to their high throughput, but the associated error rates tend to be high. Denoising in high-throughput sequencing has thus become a crucial process for boosting the reliability of downstream analyses. Our methodology, named DUDE-Seq, is derived from a general setting of reconstructing finite-valued source data corrupted by a discrete memoryless channel and effectively corrects substitution and homopolymer indel errors, the two major types of sequencing errors in most high-throughput targeted amplicon sequencing platforms. Our experimental studies with real and simulated datasets suggest that the proposed DUDE-Seq not only outperforms existing alternatives in terms of error-correction capability and time efficiency, but also boosts the reliability of downstream analyses. Further, the flexibility of DUDE-Seq enables its robust application to different sequencing platforms and analysis pipelines by simple updates of the noise model. DUDE-Seq is available at http://data.snu.ac.kr/pub/dude-seq
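    The "general setting" referred to above is the Discrete Universal DEnoiser (DUDE) rule for a discrete memoryless channel. The sketch below implements that generic two-pass rule for a substitution-only channel over the DNA alphabet; it omits the homopolymer-indel handling and platform-specific noise models that DUDE-Seq adds.

```python
# Minimal sketch of the generic DUDE rule for a substitution-only discrete
# memoryless channel over the DNA alphabet (illustrative, not DUDE-Seq itself).
import numpy as np
from collections import defaultdict

ALPHABET = "ACGT"
A = len(ALPHABET)
IDX = {s: i for i, s in enumerate(ALPHABET)}

def dude_denoise(z, Pi, k=2, Lam=None):
    """Two-pass Discrete Universal DEnoiser.

    z   : noisy sequence (string over ACGT)
    Pi  : A x A channel matrix, Pi[a, b] = P(observe b | true a)
    k   : one-sided context length
    Lam : A x A loss matrix (defaults to Hamming loss)
    """
    if Lam is None:
        Lam = 1.0 - np.eye(A)
    Pi_inv_T = np.linalg.inv(Pi).T

    # Pass 1: count noisy symbols by their two-sided context.
    counts = defaultdict(lambda: np.zeros(A))
    for i in range(k, len(z) - k):
        ctx = (z[i - k:i], z[i + 1:i + k + 1])
        counts[ctx][IDX[z[i]]] += 1

    # Pass 2: apply the DUDE estimation rule at every interior position.
    out = list(z)
    for i in range(k, len(z) - k):
        ctx = (z[i - k:i], z[i + 1:i + k + 1])
        m = counts[ctx]
        zi = IDX[z[i]]
        # Estimated loss of outputting x:  m^T Pi^{-T} (Lam[:, x] * Pi[:, zi])
        scores = [m @ Pi_inv_T @ (Lam[:, x] * Pi[:, zi]) for x in range(A)]
        out[i] = ALPHABET[int(np.argmin(scores))]
    return "".join(out)

# Example: 1% uniform substitution channel.
p = 0.01
Pi = np.full((A, A), p / 3) + (1 - p - p / 3) * np.eye(A)
print(dude_denoise("ACGTACGTTACGTACGAACGTACGT", Pi, k=2))
```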

    Modeling and Development of Iterative Reconstruction Algorithms in Emerging X-ray Imaging Technologies

    Many new and promising X-ray-based biomedical imaging technologies have emerged over the last two decades. Five different novel X-ray-based imaging technologies are discussed in this dissertation: differential phase-contrast tomography (DPCT), grating-based phase-contrast tomography (GB-PCT), spectral CT (K-edge imaging), cone-beam computed tomography (CBCT), and in-line X-ray phase-contrast (XPC) tomosynthesis. For each imaging modality, one or more specific problems that prevent it from being effectively or efficiently employed in clinical applications are discussed. First, to mitigate the long data-acquisition times and large radiation doses associated with the use of analytic reconstruction methods in DPCT, we analyze the numerical and statistical properties of two classes of discrete imaging models that form the basis for iterative image reconstruction. Second, to improve image quality in grating-based phase-contrast tomography, we incorporate second-order statistical properties of the object property sinograms, including correlations between them, into the formulation of an advanced multi-channel (MC) image reconstruction algorithm, which reconstructs three object properties simultaneously. We developed an advanced algorithm based on the proximal point algorithm and the augmented Lagrangian method to rapidly solve the MC reconstruction problem. Third, to mitigate image artifacts that arise from reduced-view and/or noisy decomposed sinogram data in K-edge imaging, we exploited the inherent sparseness of typical K-edge objects and incorporated the statistical properties of the decomposed sinograms to formulate two penalized weighted least-squares (PWLS) problems: one with a total variation (TV) penalty and one with a weighted sum of a TV penalty and an l1-norm penalty with a wavelet sparsifying transform. We employed a fast iterative shrinkage/thresholding algorithm (FISTA) and a splitting-based FISTA to solve these two PWLS problems. Fourth, to enable advanced iterative algorithms to obtain better diagnostic images and accurate patient-positioning information for CBCT in image-guided radiation therapy within a few minutes, two accelerated variants of FISTA for PLS-based image reconstruction are proposed. The algorithm acceleration is obtained by replacing the original gradient-descent step with a sub-problem that is solved by use of the ordered-subset concept (OS-SART). In addition, we also present efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units (GPUs). Finally, we employed our accelerated version of FISTA to deal with the incomplete (and often noisy) data inherent to in-line XPC tomosynthesis, which combines the concepts of tomosynthesis and in-line XPC imaging to exploit the advantages of both for biological imaging applications. We also investigate the depth-resolution properties of XPC tomosynthesis and demonstrate that its z-resolution properties are superior to those of conventional absorption-based tomosynthesis. To investigate all of these proposed strategies and new algorithms across the different imaging modalities, we conducted computer-simulation studies and studies with real experimental data. The proposed reconstruction methods will facilitate the clinical or preclinical translation of these emerging imaging methods.
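    As a reference point for the FISTA-based solvers mentioned above, the sketch below shows generic FISTA applied to a toy penalized weighted least-squares problem with an l1 penalty. The system matrix, weights, and penalty are synthetic stand-ins, not the dissertation's imaging models.

```python
# Minimal sketch of generic FISTA for  min_x f(x) + g(x)  with smooth f and
# proximable g, applied to a toy PWLS-l1 problem (illustrative only).
import numpy as np

def fista(grad_f, prox_g, L, x0, n_iter=200):
    """FISTA with constant step 1/L (L = Lipschitz constant of grad_f)."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)   # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy PWLS-l1 problem:  min_x 0.5 * ||A x - b||_W^2 + beta * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 3.0]
W = np.diag(rng.uniform(0.5, 1.5, 80))               # statistical weights
b = A @ x_true + 0.05 * rng.standard_normal(80)
beta = 0.5

grad_f = lambda x: A.T @ (W @ (A @ x - b))
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - beta * s, 0.0)
L = np.linalg.norm(A.T @ W @ A, 2)                   # Lipschitz constant

x_hat = fista(grad_f, prox_g, L, np.zeros(40))
print(np.round(x_hat[[3, 17, 29]], 2))               # approx. recovers x_true
```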

    Enhanced information extraction in the multi-energy x-ray tomography for security

    Thesis (Ph.D.)--Boston University. X-ray Computed Tomography (CT) is an effective nondestructive technology widely used for medical diagnosis and security. In CT, three-dimensional images of the interior of an object are generated based on its X-ray attenuation. Conventional CT is performed with a single energy spectrum, and materials can only be differentiated based on an averaged measure of the attenuation. Multi-Energy CT (MECT) methods have been developed to provide more information about the chemical composition of the scanned material using multiple energy-selective measurements of the attenuation. The existing literature on MECT is mostly focused on differentiation between body tissues and other medical applications. The problems in security are more challenging due to the larger range of materials and threats that may be encountered, and objects may appear in high clutter and in different forms of concealment. Thus, the information extracted by methods from the medical domain may not be optimal for the detection of explosives, and improved performance is desired. In this dissertation, learning and adaptive model-based methods are developed to address the challenges of multi-energy material discrimination for security. First, the fundamental information contained in the X-ray attenuation versus energy curves of materials is studied. For this purpose, a database of these curves for a set of explosive and non-explosive compounds was created. The dimensionality and span of the curves are estimated, and their space is shown to be larger than two-dimensional, contrary to what is typically assumed. In addition, optimized feature selection methods are developed and applied to the curves, and it is demonstrated that detection performance may be improved by using more than two features and by using features different from the standard photoelectric and Compton coefficients. Second, several MECT reconstruction methods are studied and compared. This includes a new structure-preserving inversion technique which can mitigate metal artifacts and provide precise object localization in the estimated parameter images. Finally, a learning-based MECT framework for joint material classification and segmentation is developed, which can produce accurate material labels in the presence of metal and clutter. The methods are tested on simulated and real multi-energy data, and it is shown that they outperform previously published MECT techniques.
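    The "standard photoelectric and Compton coefficients" mentioned above come from the classical two-basis (Alvarez-Macovski-style) fit of an attenuation-versus-energy curve; the sketch below performs that baseline fit on a synthetic curve. The curve and coefficients are made up for illustration and are not from the dissertation's database.

```python
# Illustrative sketch of the standard photoelectric + Compton two-basis fit
# to an attenuation-vs-energy curve (synthetic data, assumed coefficients).
import numpy as np

def klein_nishina(E_keV):
    """Klein-Nishina energy dependence used in the two-basis model."""
    a = E_keV / 510.975                      # photon energy / electron rest energy
    t1 = (1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
    t2 = np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a)**2
    return t1 + t2

def photoelectric(E_keV):
    return 1.0 / E_keV**3                    # approximate E^-3 dependence

energies = np.linspace(30, 120, 50)          # keV, a typical CT range

# Synthetic "measured" curve: a mixture of the two bases plus noise.
rng = np.random.default_rng(0)
mu = 4e4 * photoelectric(energies) + 0.18 * klein_nishina(energies)
mu += 0.002 * rng.standard_normal(energies.size)

# Least-squares estimate of the photoelectric and Compton coefficients.
B = np.column_stack([photoelectric(energies), klein_nishina(energies)])
coeffs, *_ = np.linalg.lstsq(B, mu, rcond=None)
print("photoelectric, Compton coefficients:", coeffs)
```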

    Algorithms for enhanced artifact reduction and material recognition in computed tomography

    Computed tomography (CT) imaging provides a non-destructive means to examine the interior of an object, which makes it a valuable tool in medical and security applications. The variety of materials seen in security applications is greater than in medical applications. Factors such as clutter, the presence of dense objects, and closely placed items in a bag or parcel add to the difficulty of material recognition in security applications. Metal and dense objects create image artifacts which degrade image quality and deteriorate recognition accuracy. Conventional CT machines scan the object using single-source or dual-source spectra and reconstruct the effective linear attenuation coefficient of voxels in the image, which may not provide sufficient information to identify the occupying materials. In this dissertation, we provide algorithmic solutions to enhance CT material recognition, with a set of algorithms that accommodate different classes of CT machines. First, we provide a metal artifact reduction algorithm for conventional CT machines which perform measurements using a single X-ray source spectrum. Compared to previous methods, our algorithm is robust to severe metal artifacts and accurately reconstructs the regions that are in proximity to metal. Second, we propose a novel joint segmentation and classification algorithm for dual-energy CT machines which extends prior work to capture spatial correlation in material X-ray attenuation properties. We show that the classification performance of our method surpasses that of the prior work. Third, we propose a new framework for reconstruction and classification using a recently developed class of CT machines known as spectral CT. Spectral CT uses multiple energy windows to scan the object, and thus captures data across a higher energy dimension per detector. Our reconstruction algorithm extracts essential features from the measured data by using spectral decomposition. We explore the effect of using different transforms in performing the measurement decomposition, and we develop a new basis transform which encapsulates the sufficient information of the data and provides high classification accuracy. Furthermore, we extend our framework to perform the task of explosive detection and show that it achieves high detection accuracy and is robust to noise and variations. Lastly, we propose a combined algorithm for spectral CT which jointly reconstructs images and labels each region in the image, and we offer a tractable optimization method to solve the proposed discrete tomography problem. We show that our method outperforms the prior work in terms of both reconstruction quality and classification accuracy.
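    For context on the metal artifact problem discussed above, the sketch below implements the classical sinogram-interpolation baseline for metal artifact reduction on a synthetic phantom using scikit-image's radon/iradon; it is a textbook baseline, not the more robust method proposed in the dissertation.

```python
# Sketch of classical sinogram-interpolation metal artifact reduction
# (a common baseline; phantom and thresholds are illustrative choices).
import numpy as np
from skimage.transform import radon, iradon

# Phantom: soft-tissue disk with a small, highly attenuating metal insert.
N = 128
yy, xx = np.mgrid[:N, :N]
phantom = 1.0 * ((xx - 64)**2 + (yy - 64)**2 < 40**2)
phantom[(xx - 80)**2 + (yy - 60)**2 < 5**2] = 50.0      # "metal"

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=theta)

# 1) Naive reconstruction and metal segmentation by thresholding.
recon0 = iradon(sino, theta=theta)
metal_mask = recon0 > 10.0

# 2) Metal trace = sinogram bins whose rays intersect the metal.
trace = radon(metal_mask.astype(float), theta=theta) > 0.1

# 3) Replace the traced bins by linear interpolation along each view.
sino_corr = sino.copy()
bins = np.arange(sino.shape[0])
for v in range(sino.shape[1]):
    bad = trace[:, v]
    if bad.any() and not bad.all():
        sino_corr[bad, v] = np.interp(bins[bad], bins[~bad], sino[~bad, v])

# 4) Reconstruct from the corrected sinogram (metal region re-inserted later).
recon_mar = iradon(sino_corr, theta=theta)
```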