
    Linear-Time Poisson-Disk Patterns

    We present an algorithm for generating Poisson-disk patterns in O(N) time for N points. The method is based on a grid of regions, each of which can contain no more than one point in the final pattern, and uses an explicit model of point arrival times under a uniform Poisson process.
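
    The paper's explicit arrival-time construction is not reproduced here. As a rough illustration of the same grid idea (cells sized so each can hold at most one accepted point, making conflict tests constant-time), below is a minimal sketch of standard grid-accelerated Poisson-disk sampling in the style of Bridson's algorithm; all names and parameters are illustrative, not the paper's.

        import math, random

        def poisson_disk(width, height, r, k=30):
            # Background grid with cell side r/sqrt(2): each cell can hold
            # at most one accepted point, so a conflict test touches O(1) cells.
            cell = r / math.sqrt(2)
            cols, rows = math.ceil(width / cell), math.ceil(height / cell)
            grid = [[None] * cols for _ in range(rows)]

            def fits(p):
                ci, cj = int(p[0] / cell), int(p[1] / cell)
                for j in range(max(cj - 2, 0), min(cj + 3, rows)):
                    for i in range(max(ci - 2, 0), min(ci + 3, cols)):
                        q = grid[j][i]
                        if q is not None and math.dist(p, q) < r:
                            return False
                return True

            p = (random.uniform(0, width), random.uniform(0, height))
            grid[int(p[1] / cell)][int(p[0] / cell)] = p
            active, samples = [p], [p]
            while active:
                base = random.choice(active)
                for _ in range(k):  # k candidate darts around an active point
                    t = random.uniform(0, 2 * math.pi)
                    rho = random.uniform(r, 2 * r)
                    p = (base[0] + rho * math.cos(t), base[1] + rho * math.sin(t))
                    if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                        grid[int(p[1] / cell)][int(p[0] / cell)] = p
                        active.append(p)
                        samples.append(p)
                        break
                else:  # no candidate fit: retire this point
                    active.remove(base)
            return samples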

    Feature preserving smoothing of 3D surface scans

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004. Includes bibliographical references (p. 63-70).
    With the increasing use of geometry scanners to create 3D models, there is a rising need for effective denoising of data captured with these devices. This thesis presents new methods for smoothing scanned data, based on extensions of the bilateral filter to 3D. The bilateral filter is a non-linear, edge-preserving image filter; its extension to 3D leads to an efficient, feature-preserving filter for a wide class of surface representations, including points and "polygon soups."
    by Thouis Raymond Jones. S.M.
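
    As an illustration of the core idea (not the thesis's exact formulation), here is a minimal sketch of a bilateral-style filter for oriented points: each point is displaced along its normal by a weighted average of neighbor offsets, with weights falling off in both spatial distance and offset magnitude, so large offsets (features) are preserved. The Gaussian kernels and all parameter names are assumptions.

        import numpy as np

        def bilateral_point_filter(points, normals, neighbors, sigma_c, sigma_s):
            # points:    (N, 3) positions; normals: (N, 3) unit normals
            # neighbors: neighbors[i] is an index array of points near i
            # sigma_c:   spatial falloff; sigma_s: offset ("signal") falloff
            out = points.copy()
            for i, nbrs in enumerate(neighbors):
                q = points[nbrs]
                d = np.linalg.norm(q - points[i], axis=1)   # spatial distance
                h = (q - points[i]) @ normals[i]            # signed offset along the normal
                w = np.exp(-d**2 / (2 * sigma_c**2)) * np.exp(-h**2 / (2 * sigma_s**2))
                if w.sum() > 0:
                    # large offsets get tiny weights, so sharp features survive
                    out[i] = points[i] + normals[i] * (w @ h) / w.sum()
            return out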

    Predicting gene function from images of cells

    Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 107-118).
    This dissertation shows that biologically meaningful predictions can be made by analyzing images of cells. In particular, groups of related genes and their biological functions can be predicted using images from large gene-knockdown experiments. Our analysis methods focus on measuring individual cells in images from large gene-knockdown screens, using these measurements to classify cells according to phenotype, and scoring each gene according to how reduction in its expression affects phenotypes. To enable this approach, we introduce methods for correcting biases in cell images, segmenting individual cells in images, modeling the distribution of cells showing a phenotype of interest within a screen, scoring gene knockdowns according to their effect on a phenotype, and using existing biological knowledge to predict the underlying biological meaning of a phenotype and, by extension, the function of the genes that most strongly affect that phenotype. We repeat this analysis for multiple phenotypes, extracting for each a set of genes related through that phenotype, along with predictions for the biology of each phenotype. We apply our methods to a large gene-knockdown screen in human cells, validating them on known phenotypes as well as identifying and characterizing several new cellular phenotypes that have not been previously studied.
    by Thouis Raymond Jones. Sc.D.
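
    As a hedged sketch of one step in such a pipeline, the scoring of knockdowns against control cells might look like the following: each cell gets a classifier score for a phenotype of interest, and each gene is scored by how far its cells' mean score shifts from the control distribution. The z-score rule and all names here are illustrative, not the dissertation's exact method.

        import numpy as np

        def score_knockdowns(phenotype_prob, cell_gene, control_mask):
            # phenotype_prob: (n_cells,) classifier score for the phenotype
            # cell_gene:      (n_cells,) identifier of the gene knocked down
            # control_mask:   boolean mask marking cells from control wells
            mu = phenotype_prob[control_mask].mean()
            sd = phenotype_prob[control_mask].std(ddof=1)
            scores = {}
            for gene in np.unique(cell_gene[~control_mask]):
                cells = phenotype_prob[cell_gene == gene]
                # mean shift of this knockdown's cells, in control-s.d. units
                scores[gene] = (cells.mean() - mu) / (sd / np.sqrt(len(cells)))
            return scores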

    Dual channel rank-based intensity weighting for quantitative co-localization of microscopy images

    BACKGROUND: Accurate quantitative co-localization is a key parameter in the context of understanding the spatial co-ordination of molecules and therefore their function in cells. Existing co-localization algorithms consider either the presence of co-occurring pixels or correlations of intensity in regions of interest. Depending on the image source, and the algorithm selected, the co-localization coefficients determined can be highly variable, and often inaccurate. Furthermore, the choice of whether co-occurrence or correlation is the best approach for quantifying co-localization remains controversial.
    RESULTS: We have developed a novel algorithm to quantify co-localization that improves on and addresses the major shortcomings of existing co-localization measures. This algorithm uses a non-parametric ranking of pixel intensities in each channel, and the difference in ranks of co-localizing pixel positions in the two channels is used to weight the coefficient. This weighting is applied to co-occurring pixels, thereby efficiently combining both co-occurrence and correlation. Tests with synthetic data sets show that the algorithm is sensitive to both co-occurrence and correlation at varying levels of intensity. Analysis of biological data sets demonstrates that this new algorithm offers high sensitivity, and that it is capable of detecting subtle changes in co-localization, exemplified by studies on a well-characterized cargo protein that moves through the secretory pathway of cells.
    CONCLUSIONS: This algorithm provides a novel way to efficiently combine co-occurrence and correlation components in biological images, thereby generating an accurate measure of co-localization. This approach of rank weighting of intensities also eliminates the need for manual thresholding of the image, which is often a cause of error in co-localization quantification. We envisage that this tool will facilitate the quantitative analysis of a wide range of biological data sets, including high resolution confocal images, live cell time-lapse recordings, and high-throughput screening data sets.
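
    To make the rank-difference weighting concrete, here is a hedged sketch of how such a coefficient might be computed per channel; the exact coefficient in the paper may differ, and the simple "both channels above zero" co-occurrence test stands in for whatever signal-presence rule the method actually uses.

        import numpy as np
        from scipy.stats import rankdata

        def rank_weighted_coloc(ch1, ch2):
            # ch1, ch2: equal-shape single-channel intensity images
            a, b = ch1.ravel().astype(float), ch2.ravel().astype(float)
            r1, r2 = rankdata(a), rankdata(b)
            # weight is 1 when a pixel has the same intensity rank in both
            # channels and falls toward 0 as the ranks diverge
            w = (len(a) - np.abs(r1 - r2)) / len(a)
            cooc = (a > 0) & (b > 0)   # co-occurrence: signal in both channels
            c1 = np.sum(w[cooc] * a[cooc]) / np.sum(a)
            c2 = np.sum(w[cooc] * b[cooc]) / np.sum(b)
            return c1, c2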

    Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images

    Automated sample preparation and electron microscopy enable acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a 27,000 μm³ volume of brain tissue, a cube of 30 μm in each dimension, corresponding to 1000 consecutive image sections. We also introduce Mojo, a proofreading tool that includes semi-automated correction of merge errors based on sparse user scribbles.
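
    A minimal sketch of the first stage, training a random forest on sparse annotations: the simple multi-scale feature stack below is a stand-in for the pipeline's actual features, and the label scheme and function names are assumptions for illustration only.

        import numpy as np
        from scipy import ndimage
        from sklearn.ensemble import RandomForestClassifier

        def pixel_features(img, sigmas=(1, 2, 4)):
            # Per-pixel feature stack: Gaussian smoothings and gradient
            # magnitudes at several scales.
            feats = [ndimage.gaussian_filter(img, s) for s in sigmas]
            feats += [ndimage.gaussian_gradient_magnitude(img, s) for s in sigmas]
            return np.stack(feats, axis=-1)

        def train_from_scribbles(img, scribbles):
            # scribbles: 0 = unlabeled, 1 = membrane, 2 = cell interior
            F = pixel_features(img.astype(float))
            labeled = scribbles > 0
            clf = RandomForestClassifier(n_estimators=100)
            clf.fit(F[labeled], scribbles[labeled])
            prob = clf.predict_proba(F.reshape(-1, F.shape[-1]))[:, 0]
            return prob.reshape(img.shape)   # per-pixel membrane probability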

    Non-Iterative, Feature-Preserving Mesh Smoothing

    With the increasing use of geometry scanners to create 3D models, there is a rising need for fast and robust mesh smoothing to remove inevitable noise in the measurements. While most previous work has favored diffusion-based iterative techniques for feature-preserving smoothing, we propose a radically different approach, based on robust statistics and local first-order predictors of the surface. The robustness of our local estimates allows us to derive a non-iterative, feature-preserving filtering technique applicable to arbitrary "triangle soups". We demonstrate its simplicity of implementation and its efficiency, which make it an excellent solution for smoothing large, noisy, and non-manifold meshes.
    Singapore-MIT Alliance (SMA)
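
    A minimal sketch in the spirit of the method, under stated simplifications: each triangle's plane acts as a first-order predictor that projects a vertex to a predicted position, and predictions are blended with Gaussian weights on distance and on prediction offset, so predictors from across a sharp feature contribute negligibly. For brevity this sketch loops over all triangles per vertex and omits the paper's spatial binning and normal mollification.

        import numpy as np

        def smooth_vertices(verts, tris, sigma_c, sigma_s):
            # per-triangle centroid, unit normal, and area
            p0, p1, p2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
            cent = (p0 + p1 + p2) / 3.0
            n = np.cross(p1 - p0, p2 - p0)
            area = 0.5 * np.linalg.norm(n, axis=1)
            n = n / (2.0 * area)[:, None]                 # normalize

            out = np.empty_like(verts)
            for i, x in enumerate(verts):
                off = np.einsum('ij,ij->i', x - cent, n)  # signed distance to each plane
                pred = x - off[:, None] * n               # projection onto each plane
                d = np.linalg.norm(cent - x, axis=1)
                w = (area * np.exp(-d**2 / (2 * sigma_c**2))
                          * np.exp(-off**2 / (2 * sigma_s**2)))
                out[i] = (w[:, None] * pred).sum(axis=0) / w.sum()
            return out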