
    A workflow for the automatic segmentation of organelles in electron microscopy image stacks.

    Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime.
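    A minimal sketch of the slice-wise idea described above, assuming nothing about the published pipeline beyond what the abstract states: each 2D slice is segmented independently (here with a toy threshold-plus-size-filter stand-in for the real per-organelle classifiers) and the slices are processed in parallel, so no 3D morphology assumptions are needed. All function names and parameters are illustrative.

```python
# Sketch: slice-wise 2D segmentation of a 3D EM stack, parallelized over slices.
# The per-slice "segmenter" here is a placeholder (intensity threshold plus
# small-object removal); the published workflow uses trained 2D classifiers.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy import ndimage

def segment_slice(img2d, min_size=50):
    """Toy 2D segmenter: threshold, then discard small connected components."""
    mask = img2d > img2d.mean() + img2d.std()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))

def segment_stack(stack, workers=8):
    """Segment every z-slice independently; no 3D morphology assumptions."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        masks = list(pool.map(segment_slice, stack))
    return np.stack(masks, axis=0)

if __name__ == "__main__":
    stack = np.random.rand(16, 256, 256)     # stand-in SBEM sub-volume (z, y, x)
    organelle_masks = segment_stack(stack)
    print(organelle_masks.shape, organelle_masks.dtype)
```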

    3D Face Reconstruction from Light Field Images: A Model-free Approach

    Reconstructing 3D facial geometry from a single RGB image has recently attracted wide research interest. However, it remains an ill-posed problem, and most methods rely on prior models, which undermines the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPIs) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) uses a densely connected architecture to learn accurate 3D facial curves from low-resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprise over 11 million EPIs/curves. The estimated facial curves are merged into a single point cloud, to which a surface is fitted to obtain the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions, and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to the recent state of the art.
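    A sketch of the curve-merging step only, under the simplifying assumption that the horizontal and vertical networks each yield a per-pixel depth estimate and that a basic pinhole model suffices for back-projection; FaceLFnet itself and the surface fitting are not reproduced here, and all names and numbers are illustrative.

```python
# Sketch: merge per-row and per-column depth estimates (one 3D curve per EPI)
# into a single point cloud, as in the curve-by-curve reconstruction described
# above. The depth maps are random stand-ins for network outputs; the pinhole
# back-projection and focal lengths are illustrative assumptions.
import numpy as np

def backproject(depth, fx=500.0, fy=500.0):
    """Pinhole back-projection of an (H, W) depth map to (H*W, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    cx, cy = w / 2.0, h / 2.0
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# depth recovered row-wise from horizontal EPIs and column-wise from vertical EPIs
depth_h = 1.0 + 0.05 * np.random.rand(128, 128)
depth_v = 1.0 + 0.05 * np.random.rand(128, 128)

cloud = np.vstack([backproject(depth_h), backproject(depth_v)])
print(cloud.shape)   # merged point cloud; a surface is fitted to this afterwards
```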

    High-performance generalized tensor operations: A compiler-oriented approach

    The efficiency of tensor contraction is of great importance. Compilers cannot optimize it well enough to come close to the performance of expert-tuned implementations. All existing approaches that provide competitive performance require optimized external code. We introduce a compiler optimization that reaches the performance of optimized BLAS libraries without the need for an external implementation or automatic tuning. Our approach provides competitive performance across hardware architectures and can be generalized to deliver the same benefits for algebraic path problems. By making fast linear algebra kernels available to everyone, we expect productivity increases when optimized libraries are not available.
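    The performance gap the paper targets can be seen in the classic lowering of a tensor contraction to a single matrix multiplication via transpose and reshape (often called TTGT), which is roughly what tuned BLAS-backed libraries exploit. The numpy sketch below only illustrates that algebraic equivalence; it is not the paper's compiler transformation.

```python
# Illustration: lowering the contraction C[a,b,i] = sum_k A[a,k,b] * B[k,i]
# to one GEMM via transpose + reshape (the TTGT idea). The paper's contribution
# is a compiler optimization that reaches this level of performance without
# external BLAS calls; this sketch only shows the underlying algebraic mapping.
import numpy as np

a, b, k, i = 32, 24, 48, 16
A = np.random.rand(a, k, b)
B = np.random.rand(k, i)

# Reference: direct einsum contraction
C_ref = np.einsum('akb,ki->abi', A, B)

# TTGT: permute A so the contracted index k is innermost, flatten the free
# indices, perform a single (a*b, k) x (k, i) matrix multiply, restore the shape.
A_mat = A.transpose(0, 2, 1).reshape(a * b, k)
C = (A_mat @ B).reshape(a, b, i)

print(np.allclose(C, C_ref))   # True
```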

    Adorym: A multi-platform generic x-ray image reconstruction framework based on automatic differentiation

    We describe and demonstrate an optimization-based x-ray image reconstruction framework called Adorym. Our framework provides a generic forward model, allowing one code framework to be used for a wide range of imaging methods, ranging from near-field holography to fly-scan ptychographic tomography. By using automatic differentiation for optimization, Adorym has the flexibility to refine experimental parameters, including probe positions, multiple hologram alignment, and object tilts. It is written with strong support for parallel processing, allowing large datasets to be processed on high-performance computing systems. We demonstrate its use on several experimental datasets to show improved image quality through parameter refinement.
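    A minimal sketch of the automatic-differentiation idea, not Adorym's actual API: a differentiable forward model maps an object estimate (plus one illustrative nuisance parameter, an unknown illumination scale) to predicted data, and a gradient-based optimizer refines both jointly. The real framework supports far richer forward models and refinable parameters such as probe positions and tilts.

```python
# Sketch: autodiff-based reconstruction with joint parameter refinement.
# The forward model and the "illumination scale" parameter are toy stand-ins.
import torch

torch.manual_seed(0)
n = 64
true_obj = torch.rand(n, n)
true_scale = 1.3                                   # unknown experimental parameter

def forward(obj, scale):
    """Toy forward model: scaled object -> far-field intensity pattern."""
    field = torch.complex(scale * obj, torch.zeros_like(obj))
    spec = torch.fft.fft2(field)
    return spec.real ** 2 + spec.imag ** 2

measured = forward(true_obj, true_scale)           # simulated measurement

obj = torch.full((n, n), 0.5, requires_grad=True)  # object to reconstruct
scale = torch.tensor(1.0, requires_grad=True)      # parameter refined jointly

opt = torch.optim.Adam([obj, scale], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((forward(obj, scale) - measured) ** 2)
    loss.backward()                                # gradients via autodiff
    opt.step()

print(loss.item())   # the loss drops as object and scale are refined together
```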

    Big data analytics in computational biology and bioinformatics

    Big data analytics in computational biology and bioinformatics refers to an array of operations including biological pattern discovery, classification, prediction, inference, clustering, and data mining in the cloud, among others. This dissertation addresses big data analytics by investigating two important operations, namely pattern discovery and network inference. The dissertation starts by focusing on biological pattern discovery at a genomic scale. Research reveals that the secondary structure in non-coding RNA (ncRNA) is more conserved during evolution than its primary nucleotide sequence. Using a covariance model approach, the stems and loops of an ncRNA secondary structure are represented as a statistical image against which an entire genome can be efficiently scanned for matching patterns. The covariance model approach is then further extended, in combination with a structural clustering algorithm and a random forests classifier, to perform genome-wide searches for similarities in ncRNA tertiary structures. The dissertation then presents methods for gene network inference. Vast bodies of genomic data containing gene and protein expression patterns are now available for analysis. One challenge is to apply efficient methodologies to uncover more knowledge about cellular functions, since very little is known about how genes regulate cellular activities. A gene regulatory network (GRN) can be represented by a directed graph in which each node is a gene and each edge or link is a regulatory effect that one gene has on another gene. By evaluating gene expression patterns, researchers perform in silico data analyses in systems biology, in particular GRN inference, where "reverse engineering" is used to predict how a system works from its output alone. Many algorithmic and statistical approaches have been developed to computationally reverse engineer biological systems. However, there are no known bioinformatics tools capable of performing perfect GRN inference. Here, extensive experiments are conducted to evaluate and compare recent bioinformatics tools for inferring GRNs from time-series gene expression data. Standard performance metrics for these tools on both simulated and real data sets are generally low, suggesting that further efforts are needed to develop more reliable GRN inference tools. It is also observed that using multiple tools together can help identify true regulatory interactions between genes, a finding consistent with those reported in the literature. Finally, the dissertation discusses and presents a framework for parallelizing GRN inference methods using Apache Hadoop in a cloud environment.
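    A small sketch of the multi-tool observation above, assuming each inference tool emits a gene-by-gene matrix of edge confidences (the matrices below are random stand-ins): rank-averaging the scores and keeping the top-ranked edges gives a simple consensus network. The function and threshold are illustrative, not taken from the dissertation.

```python
# Sketch: combine edge-confidence matrices from several GRN inference tools by
# rank averaging, then keep the highest-ranked edges as a consensus network.
import numpy as np
from scipy.stats import rankdata

def consensus_network(score_matrices, top_k=100):
    """Rank-average edge scores from several tools; return the top_k edges."""
    flat_ranks = [rankdata(m.ravel()) for m in score_matrices]  # higher score -> higher rank
    mean_rank = np.mean(flat_ranks, axis=0)
    genes = score_matrices[0].shape[0]
    order = np.argsort(mean_rank)[::-1][:top_k]                 # best consensus edges first
    return np.column_stack(np.unravel_index(order, (genes, genes)))

tool_outputs = [np.random.rand(50, 50) for _ in range(3)]       # stand-ins for tool scores
edges = consensus_network(tool_outputs)
print(edges[:5])   # each row: (regulator index, target index)
```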

    Efficient Computing for Three-Dimensional Quantitative Phase Imaging

    Quantitative Phase Imaging (QPI) is a powerful imaging technique for measuring the refractive index distribution of transparent objects such as biological cells and optical fibers. The quantitative, non-invasive approach of QPI provides preeminent advantages in biomedical applications and the characterization of optical fibers. Tomographic Deconvolution Phase Microscopy (TDPM) is a promising 3D QPI method that combines diffraction tomography, deconvolution, and through-focal scanning with object rotation to achieve isotropic spatial resolution. However, due to the large data size, 3D TDPM has the drawback of requiring extensive computation power and time. To overcome this shortcoming, CPU/GPU parallel computing and application-specific embedded systems can be utilized. In this research, OpenMP Tasking and CUDA Streaming with Unified Memory (TSUM) is proposed to speed up the tomographic angle computations in 3D TDPM. TSUM leverages CPU multithreading and GPU computing on a System on a Chip (SoC) with unified memory. Unified memory eliminates data transfer between CPU and GPU memories, which is a major bottleneck in GPU computing. This research presents a speedup of 3D TDPM with TSUM for a large dataset and demonstrates the potential of TSUM for realizing real-time 3D TDPM.
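    The computation TSUM accelerates is naturally parallel over tomographic angles. The sketch below illustrates only that angle-level parallelism with a CPU thread pool and a toy Fourier-domain filter; the thesis's actual OpenMP-task/CUDA-stream implementation on unified memory cannot be expressed in plain Python, and all names here are illustrative.

```python
# Sketch: each projection angle of a tomographic dataset can be processed
# independently, so the work distributes cleanly across workers. CPU-only
# illustration of the pattern; TSUM itself overlaps CPU tasks with CUDA streams.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def filter_projection(proj):
    """Apply a simple ramp filter to one projection in the Fourier domain."""
    ramp = np.abs(np.fft.fftfreq(proj.shape[-1]))
    return np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

def process_all_angles(sinogram, workers=8):
    """Filter every angle in parallel; reconstruction would follow."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.stack(list(pool.map(filter_projection, sinogram)))

sinogram = np.random.rand(180, 256)   # 180 angles x 256 detector pixels
filtered = process_all_angles(sinogram)
print(filtered.shape)
```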

    Efficient transfer entropy analysis of non-stationary neural time series

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage, and modification. The measure of information transfer in particular, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these observations, available estimators assume stationarity of the processes so that observations can be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that deals with the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the most computationally demanding aspects of the ensemble method. We test the performance and robustness of our implementation on data from simulated stochastic processes and demonstrate the method's applicability to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscientific data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
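    A toy illustration of the ensemble principle, not the estimator used in the paper: transfer entropy at a single time index t is the conditional mutual information I(Y_t ; X_{t-1} | Y_{t-1}), and pooling observations across trials rather than over time removes the stationarity requirement. For simplicity the sketch uses a plug-in estimator on binarized data, whereas the paper relies on a nearest-neighbour estimator for continuous data.

```python
# Sketch: estimate transfer entropy TE(X->Y, t) across a trial ensemble at a
# fixed time index t, instead of pooling a single realization over time.
import numpy as np

def plugin_cmi(a, b, c):
    """Plug-in conditional mutual information I(a;b|c) for small discrete arrays."""
    cmi = 0.0
    for sa, sb, sc in set(zip(a, b, c)):
        p_abc = np.mean((a == sa) & (b == sb) & (c == sc))
        p_c = np.mean(c == sc)
        p_ac = np.mean((a == sa) & (c == sc))
        p_bc = np.mean((b == sb) & (c == sc))
        cmi += p_abc * np.log2(p_abc * p_c / (p_ac * p_bc))
    return cmi

def ensemble_te(x_trials, y_trials, t):
    """TE from X to Y at time t, estimated over the trial ensemble."""
    y_t   = (y_trials[:, t]     > 0).astype(int)   # binarize for the plug-in estimator
    x_tm1 = (x_trials[:, t - 1] > 0).astype(int)
    y_tm1 = (y_trials[:, t - 1] > 0).astype(int)
    return plugin_cmi(y_t, x_tm1, y_tm1)

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 100))                                   # 500 trials x 100 time points
y = np.roll(x, 1, axis=1) + 0.5 * rng.standard_normal((500, 100))     # Y driven by past X
print(ensemble_te(x, y, t=50))   # > 0, reflecting the X -> Y coupling
```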

    A Microfluidic Digital Melt Platform for Sensitive Biomarker Analysis and Parallelized Profiling of Molecular Heterogeneity

    Variability in gene regulation is a fundamental characteristic of biology, allowing cellular adaptation in many states, such as development, stress response, and survival. In early disease onset, genetic and epigenetic variability permit the formation of multiple cellular phenotypes. In cancer, increased cellular plasticity ultimately results in the foundation of a tumor with the phenotypic alterations necessary to dynamically adapt, proliferate, metastasize, and acquire therapeutic resistance throughout the course of the disease. One prominent form of cellular regulation is DNA methylation, an epigenetic chemical modification that can alter gene expression. Hypermethylation-induced silencing is known to occur early in tumorigenesis, often in precursor phases of the disease. Furthermore, tumors have been shown to undergo epigenetic reprogramming throughout disease progression. In light of these observations, methylation heterogeneity may serve as a novel biomarker for early cancer detection. Early detection of cancer remains challenging, as symptoms often manifest in later stages and current screening techniques often lack the requisite sensitivity and specificity. To maximize effectiveness, routine screening techniques should be noninvasive, simple, and unbiased. To this end, liquid biopsies (e.g., blood samples) containing cellular debris, such as tumor-derived cell-free DNA in the plasma, are ideally suited to routine screening. However, detection of tumor-derived molecules in plasma is challenging, as they are often rare and may be eclipsed by a high background of molecules from healthy cells. Thus, a sensitive platform capable of quantifying epigenetic heterogeneity could uncover new insights and improve early detection. In this dissertation, I present a microfluidic digital melt platform for facile, highly sensitive detection and molecule-by-molecule profiling. The platform is applied to the quantification of epiallelic heterogeneity. Digitization of rare molecules into thousands of microchambers, followed by parallelized sequencing interrogation through high-resolution melt, enables an order-of-magnitude higher sensitivity than current techniques and provides insight into new intermolecular characteristics. I also demonstrate how this platform may be modified to complement and improve the sensing capabilities of existing commercial technologies. Finally, I validate the potential clinical utility of this platform through detection of methylation heterogeneity in complex clinical samples for noninvasive screening applications. The technical capabilities and operational simplicity of this platform facilitate adoption by other laboratories and offer potential clinical utility. This system may offer new insights into the mechanisms of epigenetic regulation in pathogenesis and potentially improve early diagnosis.
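    A schematic of the per-chamber melt analysis, with synthetic curves standing in for real fluorescence data: each chamber's melting temperature is taken as the peak of -dF/dT, and chambers are then grouped by melting temperature to count rare variant (e.g. differentially methylated) molecules against the background population. The curves, cutoff, and counts are illustrative assumptions, not calibrated values from the platform.

```python
# Sketch: extract a melting temperature (Tm) from each microchamber's
# fluorescence-vs-temperature curve and count chambers whose Tm indicates a
# variant molecule. All curves and thresholds here are synthetic.
import numpy as np

temps = np.linspace(65.0, 95.0, 300)

def melt_tm(fluorescence):
    """Tm = temperature at the peak of the negative melt-curve derivative."""
    return temps[np.argmax(-np.gradient(fluorescence, temps))]

def synthetic_curve(tm, noise=0.01):
    """Sigmoidal melt curve centred at tm, with a little noise."""
    return 1.0 / (1.0 + np.exp((temps - tm) / 0.8)) + noise * np.random.randn(temps.size)

# 1,000 chambers: most hold background-like molecules, a few hold variants
chamber_tms = np.r_[np.full(990, 78.0), np.full(10, 84.0)]
curves = np.array([synthetic_curve(tm) for tm in chamber_tms])

measured_tms = np.array([melt_tm(c) for c in curves])
variant_count = int(np.sum(measured_tms > 81.0))   # illustrative Tm cutoff
print(f"{variant_count} of {len(curves)} chambers classified as variant")
```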

    Fourier ptychography: current applications and future promises

    Traditional imaging systems exhibit a well-known trade-off between the resolution and the field of view of their captured images. Typical cameras and microscopes can either “zoom in” and image at high resolution, or they can “zoom out” to see a larger area at lower resolution, but can rarely achieve both effects simultaneously. In this review, we present details about a relatively new procedure termed Fourier ptychography (FP), which addresses the above trade-off to produce gigapixel-scale images without requiring any moving parts. To accomplish this, FP captures multiple low-resolution, large field-of-view images and computationally combines them in the Fourier domain into a high-resolution, large field-of-view result. Here, we present details about the various implementations of FP and highlight its demonstrated advantages to date, such as aberration recovery, phase imaging, and 3D tomographic reconstruction, to name a few. After providing some basics about FP, we list important details for successful experimental implementation, discuss its relationship with other computational imaging techniques, and point to the latest advances in the field while highlighting persisting challenges.
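    A compact sketch of the FP principle described above, with simplified assumptions (a square spectral window, a fixed 3x3 grid of illumination angles, no aberration or pupil recovery): each low-resolution capture constrains a different sub-region of the object's Fourier spectrum, and iteratively enforcing the measured amplitudes while keeping the estimated phases stitches the sub-regions into one high-resolution complex image.

```python
# Sketch: Fourier ptychography forward simulation and iterative recovery on a
# synthetic complex object. Window shape, shift grid, and sizes are simplified.
import numpy as np

hi_res, lo_res = 128, 32              # high-resolution grid and low-resolution (camera) grid
centre = hi_res // 2
obj = np.random.rand(hi_res, hi_res) * np.exp(1j * np.random.rand(hi_res, hi_res))
F_true = np.fft.fftshift(np.fft.fft2(obj))

# each illumination angle selects a different sub-region of the object spectrum
offsets = [(dy, dx) for dy in (-20, 0, 20) for dx in (-20, 0, 20)]

def window(off):
    cy, cx = centre + off[0], centre + off[1]
    return (slice(cy - lo_res // 2, cy + lo_res // 2),
            slice(cx - lo_res // 2, cx + lo_res // 2))

# simulate the low-resolution amplitude measurements
measurements = [np.abs(np.fft.ifft2(np.fft.ifftshift(F_true[window(o)]))) for o in offsets]

# iteratively stitch the high-resolution spectrum back together
F_est = np.zeros((hi_res, hi_res), dtype=complex)
for _ in range(20):
    for off, meas in zip(offsets, measurements):
        sl = window(off)
        low_field = np.fft.ifft2(np.fft.ifftshift(F_est[sl]))
        low_field = meas * np.exp(1j * np.angle(low_field))   # keep phase, enforce measured amplitude
        F_est[sl] = np.fft.fftshift(np.fft.fft2(low_field))

recovered = np.fft.ifft2(np.fft.ifftshift(F_est))             # high-res complex (amplitude + phase) estimate
print(recovered.shape)
```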