    Denoising 3D microscopy images of cell nuclei using shape priors on an anisotropic grid

    This paper presents a new multiscale method to denoise three-dimensional images of cell nuclei. The specificity of this method is its awareness of the noise distribution and object shapes. It combines a multiscale representation called the Isotropic Undecimated Wavelet Transform (IUWT) with a nonlinear transform, a statistical test, and a variational method to retrieve spherical shapes in the image. Beyond extending an existing 2D approach to a 3D problem, our algorithm takes the sampling grid dimensions into account. We compare our method to the two algorithms from which it is derived on a representative image analysis task and show that it is superior to both of them: it brings a slight improvement in signal-to-noise ratio and a significant improvement in cell detection.
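
    For readers unfamiliar with the IUWT, the sketch below shows the core of the starlet ("à trous") decomposition the paper builds on, applied to a 3D volume, with a simple hard threshold standing in for the paper's statistical test and variational step. The B3-spline kernel and the k-sigma rule are common defaults assumed here, not the authors' exact choices.

```python
# Minimal sketch of a 3D Isotropic Undecimated Wavelet Transform (starlet),
# with an illustrative hard-thresholding denoiser. Assumptions: B3-spline
# scaling kernel, k-sigma threshold with a robust median noise estimate.
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0  # B3-spline scaling kernel

def atrous_kernel(level):
    """Dilate the B3 kernel by inserting 2**level - 1 zeros between taps."""
    k = np.zeros(4 * 2**level + 1)
    k[:: 2**level] = B3
    return k

def iuwt3d(volume, n_scales=4):
    """Decompose a 3D volume into wavelet detail planes + a coarse residual."""
    c = volume.astype(float)
    planes = []
    for j in range(n_scales):
        k = atrous_kernel(j)
        smooth = c
        for axis in range(3):              # separable 3D smoothing
            smooth = convolve1d(smooth, k, axis=axis, mode='reflect')
        planes.append(c - smooth)          # detail coefficients at scale j
        c = smooth
    return planes, c                       # reconstruction: sum(planes) + c

def denoise(volume, n_scales=4, k_sigma=3.0):
    planes, coarse = iuwt3d(volume, n_scales)
    out = coarse
    for w in planes:
        sigma = np.median(np.abs(w)) / 0.6745                 # robust noise level
        out += np.where(np.abs(w) > k_sigma * sigma, w, 0.0)  # hard threshold
    return out
```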

    AnyStar: Domain randomized universal star-convex 3D instance segmentation

    Star-convex shapes arise across bio-microscopy and radiology in the form of nuclei, nodules, metastases, and other units. Existing instance segmentation networks for such structures train on densely labeled instances for each dataset, which requires substantial and often impractical manual annotation effort. Further, significant reengineering or finetuning is needed when presented with new datasets and imaging modalities due to changes in contrast, shape, orientation, resolution, and density. We present AnyStar, a domain-randomized generative model that simulates synthetic training data of blob-like objects with randomized appearance, environments, and imaging physics to train general-purpose star-convex instance segmentation networks. As a result, networks trained using our generative model do not require annotated images from unseen datasets. A single network trained on our synthesized data accurately segments, in 3D, C. elegans and P. dumerilii nuclei in fluorescence microscopy, mouse cortical nuclei in micro-CT, zebrafish brain nuclei in EM, and placental cotyledons in human fetal MRI, all without any retraining, finetuning, transfer learning, or domain adaptation. Code is available at https://github.com/neel-dey/AnyStar.
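
    The domain-randomization idea can be made concrete with a small sketch: synthesize labeled, blob-like instances with randomized shape, contrast, blur, and noise, and train on those pairs instead of annotated images. Every distribution and parameter range below is an illustrative guess, not the generative model released by the authors.

```python
# Hedged sketch of domain-randomized synthetic training data: random ellipsoid
# "nuclei" with randomized imaging physics. All ranges are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def random_blob_volume(shape=(64, 64, 64), n_blobs=30, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    labels = np.zeros(shape, dtype=np.int32)
    zz, yy, xx = np.indices(shape)
    for i in range(1, n_blobs + 1):
        c = rng.uniform(8, np.array(shape) - 8)   # random center
        radii = rng.uniform(3, 8, size=3)         # random ellipsoid radii
        d = (((zz - c[0]) / radii[0]) ** 2 +
             ((yy - c[1]) / radii[1]) ** 2 +
             ((xx - c[2]) / radii[2]) ** 2)
        labels[(d < 1) & (labels == 0)] = i       # star-convex blob instance
    # randomized "imaging physics": contrast, smoothing, additive noise
    img = rng.uniform(0.5, 1.5) * (labels > 0).astype(float)
    img = gaussian_filter(img, sigma=rng.uniform(0.5, 2.0))
    img += rng.normal(0, rng.uniform(0.01, 0.2), size=shape)
    return img, labels  # training pair: image + instance label map
```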

    Computational Framework For Neuro-Optics Simulation And Deep Learning Denoising

    The application of machine learning techniques to microscopic image restoration has shown superior performance. However, the development of such techniques has been hindered by the demand for large datasets and the lack of ground truth. To address these challenges, this study introduces a computer simulation model that accurately captures the neural anatomic volume, fluorescence light transport within the tissue volume, and the photon collection process of microscopic imaging sensors. The primary goal of this simulation is to generate realistic image data for training and validating machine learning models. One notable aspect of this study is the incorporation of a machine learning denoiser into the simulation, which significantly accelerates the overall simulation process while improving the quality of the generated images. This approach addresses the limitations of data availability and ground-truth annotation, offering a practical and efficient solution for microscopic image restoration and opening new possibilities for training and validating machine learning models in this domain.
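
    The kind of forward model such a simulator needs at the sensor stage can be sketched briefly: Poisson shot noise on the collected photons plus Gaussian read noise, which yields (noisy, clean) training pairs that real microscopy cannot provide. The function and parameter values below are assumptions for illustration, not the study's calibrated model.

```python
# Minimal sketch of a fluorescence sensor noise model for generating
# supervised (noisy, clean) pairs. Parameter values are assumptions.
import numpy as np

def simulate_sensor(clean, photons_per_unit=200.0, read_sigma=2.0, rng=None):
    """clean: nonnegative array of relative fluorescence intensities."""
    if rng is None:
        rng = np.random.default_rng()
    expected = clean * photons_per_unit          # expected photon counts
    shot = rng.poisson(expected).astype(float)   # photon shot noise
    noisy = shot + rng.normal(0.0, read_sigma, size=clean.shape)  # read noise
    return noisy / photons_per_unit              # back to intensity units

# Usage with a hypothetical simulator output:
# clean = render_neural_volume(...)   # clean simulated volume (assumed)
# noisy = simulate_sensor(clean)      # realistic training input
```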

    Computer Vision Approaches for Mapping Gene Expression onto Lineage Trees

    This project concerns the early development of living organisms, a period accompanied by dynamic morphogenetic events: the number of cells increases, cells change shape, and cell fates are specified. To capture these dynamic morphological changes, one can employ a form of microscopy imaging such as Selective Plane Illumination Microscopy (SPIM), which offers single-cell resolution across time and hence allows observing the positions, velocities, and trajectories of most cells in a developing embryo. Unfortunately, the dynamic genetic activity which underlies these morphological changes and influences cellular fate decisions is captured only as static snapshots, and often requires processing (sequencing or imaging) multiple distinct individuals. To set the stage for characterizing the factors which influence cellular fate, one must bring the data arising from these static snapshots of multiple individuals and the SPIM data characterizing the morphological changes of other distinct individual(s) into the same frame of reference. In this project, a computational pipeline is established which maps data from these various imaging modalities and specimens to a canonical frame of reference. This pipeline relies on three core building blocks: instance segmentation, tracking, and registration. In this dissertation, I introduce EmbedSeg, my solution for instance segmentation of 2D and 3D (volume) image data. Next, I introduce LineageTracer, my solution for tracking cells in time-lapse (2D+t, 3D+t) recordings. Finally, I introduce PlatyMatch, my solution for registering volumes. Errors from these building blocks accumulate, producing noisy estimates of gene expression for the digitized cells in the canonical frame of reference. These noisy estimates are processed to infer the underlying hidden state using a Hidden Markov Model (HMM) formulation. Lastly, wider dissemination of these methods requires an effective visualization strategy, details of which are also discussed in the dissertation. The pipeline was designed with imaging volume data in mind, but can easily be extended to incorporate other data modalities such as single-cell RNA sequencing (scRNA-Seq), if available (more details are provided in the Discussion chapter). The methods elucidated in this dissertation provide a fertile playground for future experiments and analyses; some such potential experiments, and current weaknesses of the computational pipeline, are also discussed in the Discussion chapter.
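
    To make the HMM step concrete, the sketch below decodes a hidden on/off expression state from noisy per-cell observations with Viterbi decoding. The two-state model and its transition and emission probabilities are illustrative assumptions, not the dissertation's exact formulation.

```python
# Illustrative Viterbi decoding of a noisy gene-expression track.
# States and probabilities below are assumed values for demonstration.
import numpy as np

def viterbi(obs, trans, emit, prior):
    """obs: sequence of observation symbols; returns most likely state path."""
    n_states = trans.shape[0]
    T = len(obs)
    logp = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(prior) + np.log(emit[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logp[t - 1] + np.log(trans[:, s])  # best predecessor
            back[t, s] = np.argmax(scores)
            logp[t, s] = scores[back[t, s]] + np.log(emit[s, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):                       # backtrack
        path.append(back[t, path[-1]])
    return path[::-1]

# states: 0 = gene off, 1 = gene on; observations: noisy 0/1 expression calls
trans = np.array([[0.9, 0.1], [0.1, 0.9]])   # states tend to persist in time
emit  = np.array([[0.8, 0.2], [0.3, 0.7]])   # observation noise model
print(viterbi([0, 1, 0, 1, 1, 1], trans, emit, prior=np.array([0.5, 0.5])))
```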

    Made to measure: An introduction to quantifying microscopy data in the life sciences

    Images are at the core of most modern biological experiments and are used as a major source of quantitative information. Numerous algorithms are available to process images and make them more amenable to measurement. Yet the nature of the quantitative output that is useful for a given biological experiment depends uniquely on the question being investigated. Here, we discuss the three main types of information that can be extracted from microscopy data: intensity, morphology, and object counts or categorical labels. For each, we describe where it comes from, how it can be measured, and what may affect the relevance of these measurements in downstream data analysis. Acknowledging that what makes a measurement 'good' ultimately comes down to the biological question being investigated, this review aims to provide readers with a toolkit to challenge how they quantify their own data and to be critical of conclusions drawn from quantitative bioimage analysis experiments.
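
    As a concrete illustration, all three measurement types can be pulled from a segmented 2D image in a few lines of scikit-image; the Otsu threshold used for segmentation here is an arbitrary choice for the sketch, not a recommendation from the review.

```python
# Illustrative extraction of the three measurement types (intensity,
# morphology, counts) from a 2D grayscale image using scikit-image.
import numpy as np
from skimage import filters, measure

def quantify(image):
    """image: 2D grayscale array; returns per-object measurements."""
    mask = image > filters.threshold_otsu(image)   # simple segmentation
    labels = measure.label(mask)                   # connected-component objects
    props = measure.regionprops(labels, intensity_image=image)
    return {
        "count": len(props),                                   # object counts
        "mean_intensity": [p.mean_intensity for p in props],   # intensity
        "area": [p.area for p in props],                       # morphology
        "eccentricity": [p.eccentricity for p in props],       # morphology
    }
```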

    Improving the Tractography Pipeline: on Evaluation, Segmentation, and Visualization

    Recent advances in tractography allow connectomes to be constructed in vivo, with applications in, for example, brain tumor surgery and the understanding of brain development and diseases. The large size of the data produced by these methods leads to a variety of problems, including how to evaluate tractography outputs, the development of faster processing algorithms for tractography and clustering, and the development of advanced visualization methods for verification and exploration. This thesis presents several advances in these fields. First, an evaluation of the robustness to noise of multiple commonly used tractography algorithms is presented, employing a Monte Carlo simulation of measurement noise on a constructed ground-truth dataset. As a result of this evaluation, evidence for the robustness of global tractography is found, and algorithmic sources of uncertainty are identified. The second contribution is a fast clustering algorithm for tractography data based on k-means and vector fields for representing the flow of each cluster. It is demonstrated that this algorithm can handle large tractography datasets due to its linear time and memory complexity, and that it can effectively integrate interrupted fibers that would be rejected as outliers by other algorithms. Furthermore, a visualization for the exploration of structural connectomes is presented. It uses illustrative rendering techniques for efficient presentation of connecting fiber bundles in context in anatomical space, with visual hints employed to improve the perception of spatial relations. Finally, a visualization method with application to the exploration and verification of probabilistic tractography is presented, improving on the previously presented Fiber Stippling technique. It is demonstrated that the method is able to show multiple overlapping tracts in context and correctly presents crossing fiber configurations.
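
    As a rough illustration of the clustering backbone, the sketch below resamples each streamline to a fixed number of points and applies plain k-means to the flattened coordinates; the thesis's vector-field cluster representation and its handling of interrupted fibers are not reproduced here.

```python
# Simplified stand-in for streamline clustering: arc-length resampling of each
# fiber followed by plain k-means on flattened coordinates (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

def resample(streamline, n_points=20):
    """Resample an (N, 3) polyline to n_points equally spaced in arc length."""
    seg = np.linalg.norm(np.diff(streamline, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]                                     # normalize arc length to [0, 1]
    targets = np.linspace(0.0, 1.0, n_points)
    return np.stack(
        [np.interp(targets, t, streamline[:, d]) for d in range(3)], axis=1)

def cluster_fibers(streamlines, n_clusters=50, n_points=20):
    """streamlines: list of (N_i, 3) arrays; returns a cluster id per fiber."""
    X = np.stack([resample(s, n_points).ravel() for s in streamlines])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```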