
    Veni Vidi Vici, A Three-Phase Scenario For Parameter Space Analysis in Image Analysis and Visualization

    Automatic analysis of enormous sets of images is a critical task in the life sciences. It faces many challenges: algorithms are highly parameterized, significant human input is intertwined with the process, and there is no standard meta-visualization approach. This paper proposes an alternative, iterative approach for optimizing input parameters that saves time by minimizing user involvement and helps in understanding the workflow of algorithms and discovering new ones. The main focus is on developing an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and the corresponding outputs. The technique is implemented in a prototype called Veni Vidi Vici, or "I came, I saw, I conquered." The strategy is inspired by the mathematical formulas for numbering computable functions and is developed atop ImageJ, a scientific image processing program. A case study is presented to investigate the proposed framework. Finally, the paper explores potential future issues in applying the proposed approach to parameter space analysis in visualization.
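    To make the enumeration idea concrete, the sketch below shows one way a sampled parameter grid can be flattened so that each combination gets a single index, one analysis run, and one recorded output for later visualization. The parameter names, ranges, and toy analysis pipeline are illustrative assumptions, not the Veni Vidi Vici implementation (which is built on ImageJ).

```python
import numpy as np
from itertools import product
from scipy import ndimage

# Assumed parameter samples (illustrative only).
param_grid = {
    "sigma":     [1.0, 2.0, 4.0],    # Gaussian smoothing width
    "threshold": [0.2, 0.4, 0.6],    # relative intensity threshold
}
names = list(param_grid)
combos = list(product(*param_grid.values()))      # index <-> parameter tuple

def analyze(image, sigma, threshold):
    """Toy pipeline: smooth, threshold, count connected objects."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    binary = smoothed > threshold * smoothed.max()
    _, n_objects = ndimage.label(binary)
    return n_objects

def sweep(image):
    """Run the pipeline once per enumerated parameter combination."""
    rows = []
    for index, values in enumerate(combos):
        params = dict(zip(names, values))
        rows.append({"index": index, **params, "objects": analyze(image, **params)})
    return rows                                   # table for a scatter / parallel-coordinates view

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.random((128, 128))                 # stand-in for a microscopy image
    for row in sweep(demo):
        print(row)
```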

    Serial optical coherence microscopy for label-free volumetric histopathology

    The observation of histopathology using an optical microscope is an essential procedure for the examination of tissue biopsies or surgically excised specimens in biological and clinical laboratories. However, slide-based microscopic pathology is not suitable for visualizing large-scale tissue and native 3D organ structure because of its sampling limitations and shallow imaging depth. Here, we demonstrate a serial optical coherence microscopy (SOCM) technique that offers label-free, high-throughput, and large-volume imaging of ex vivo mouse organs. A 3D histopathology of whole mouse brain and kidney, including blood vessel structure, is reconstructed by combining deep-tissue optical imaging with serial sectioning. Our results demonstrate that SOCM has unique advantages, as it can visualize both native 3D structures and quantitative regional volumes without the introduction of any contrast agents.
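    As a minimal sketch of the serial-sectioning idea (not the authors' SOCM reconstruction code), the snippet below simply stacks the per-section image stacks along the cutting axis to assemble a whole-organ volume; the section count and dimensions are made up for illustration.

```python
import numpy as np

def assemble_volume(section_stacks):
    """Concatenate per-section z-stacks (z, y, x) into one whole-organ volume.

    Assumes each physical section was already imaged as a z-stack of equal
    x-y size, so the stacks can simply be joined along the cutting axis.
    """
    return np.concatenate(section_stacks, axis=0)

# Example with synthetic data: ten sections of 50 slices each.
sections = [np.zeros((50, 256, 256), dtype=np.float32) for _ in range(10)]
volume = assemble_volume(sections)        # shape (500, 256, 256)
```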

    Ptychographic X-ray computed tomography of extended colloidal networks in food emulsions

    As a main structural level in colloidal food materials, extended colloidal networks are important for texture and rheology. By obtaining the 3D microstructure of the network, macroscopic mechanical properties of the material can be inferred. However, this approach is hampered by the lack of suitable non-destructive 3D imaging techniques with submicron resolution. We present results of quantitative ptychographic X-ray computed tomography applied to a palm-kernel-oil-based oil-in-water emulsion. The measurements were carried out at ambient pressure and temperature. The 3D structure of the extended colloidal network of fat globules was obtained with a resolution of around 300 nm. Through image analysis of the network structure, the fat globule size distribution was computed and compared to previous findings. In further support, the reconstructed electron density values were within 4% of reference values.
    Comment: 19 pages, 4 figures, to be published in Food Structure
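    One plausible way to derive a globule size distribution from such a reconstruction is sketched below: segment the fat phase of the tomogram, split the connected network with a distance-transform watershed, and convert the resulting volumes to equivalent spherical diameters. The threshold, peak spacing, and 300 nm voxel size are assumptions for illustration, not the authors' exact analysis.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

VOXEL_NM = 300.0                     # assumed isotropic voxel size in nm

def globule_diameters(density, fat_threshold):
    """Estimate equivalent spherical diameters (nm) of fat globules."""
    fat = density > fat_threshold                     # binary fat-phase mask
    dist = ndimage.distance_transform_edt(fat)        # distance to the water phase
    peaks = peak_local_max(dist, min_distance=3, labels=fat)
    markers = np.zeros_like(dist, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=fat)      # split the network into globules
    volumes = np.bincount(labels.ravel())[1:]         # voxels per globule
    # equivalent spherical diameter: d = (6 V / pi)^(1/3), converted to nm
    return np.cbrt(6.0 * volumes / np.pi) * VOXEL_NM
```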

    Three-dimensional Image Processing of Identifying Toner Particle Centroids

    Powder-based 3D printed products are composed of fine particles. The structure formed by the particles in the powder is expected to affect the performance of the final products constructed from them (Finney, 1970; Dinsmore, 2001; Chang, 2015; Patil, 2015). A prior study by Patil (2015) demonstrated a method for determining the centroids and radii of spherical particles and consequently reconstructed the structure formed by the particles. Patil’s method used a confocal laser scanning microscope to capture a stack of cross-sections of fluorescent toner particles and Matlab image analysis tools to determine the particle centroid positions and radii. Patil identified each particle centroid’s XY coordinates and particle radius layer by layer, in what is called the “frame-by-frame” method, where the Z-position of the particle centroid was estimated by comparing the radius change across layers. This thesis extends Patil’s work by automatically locating particle centroids in 3D space. The researcher built an algorithm, named the “3D particle sighting method,” for processing the same stacks of two-dimensional images that Patil used. The algorithm first created a three-dimensional image matrix and then convolved it with a 3D kernel to locate local maxima, which pinpointed the centroid locations of the particles. This method treated the stack of images as a 3D image matrix, and the convolution operation automatically located the particle centroids. Treating Patil’s results as the ground truth, the average distance between the particle centroids identified by Patil’s method and by the automated method was 1.02 microns (+/- 0.93 microns). Since the diameter of the particles is around 10 microns, this error is small compared to the particle size, and the results of the 3D particle sighting method are acceptable. In addition, the automated method needed one fifth of the processing time of Patil’s frame-by-frame method.
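    A simplified sketch of the 3D particle sighting idea is shown below (this is not the thesis code): the 2D frames are stacked into a 3D matrix, convolved with a spherical kernel roughly matching the particle size, and local maxima of the response are taken as centroid candidates. The kernel radius and the response cut-off are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def spherical_kernel(radius):
    """Binary ball-shaped kernel of the given radius (in voxels)."""
    r = int(radius)
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (x**2 + y**2 + z**2 <= r**2).astype(float)

def find_centroids(slices, particle_radius=5):
    """Stack 2D frames into a 3D matrix and locate particle centroids."""
    volume = np.stack(slices, axis=0).astype(float)        # frames -> 3D image matrix
    response = ndimage.convolve(volume, spherical_kernel(particle_radius))
    # a voxel is a centroid candidate if it is the maximum within its neighborhood
    local_max = response == ndimage.maximum_filter(response, footprint=np.ones((3, 3, 3)))
    strong = response > 0.5 * response.max()               # suppress background maxima
    return np.argwhere(local_max & strong)                  # (z, y, x) centroid voxels
```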

    Visualizing Structures in Confocal Microscopy Datasets Through Clusterization: A Case Study on Bile Ducts

    Three-dimensional datasets from biological tissues have become more common with the evolution of confocal microscopy. Hepatology researchers have used confocal microscopy to investigate the microanatomy of bile ducts. Bile ducts are complex tubular tissues consisting of many juxtaposed microstructures with distinct characteristics. Since confocal images are difficult to segment because of the noise introduced during specimen preparation, traditional quantitative analyses used on medical datasets are difficult to perform on confocal microscopy data and require extensive user intervention. Thus, the visual exploration and analysis of bile ducts pose a challenge in hepatology research, requiring different methods. This paper investigates the application of unsupervised machine learning to extract relevant structures from confocal microscopy datasets representing bile ducts. Our approach consists of pre-processing, clustering, and 3D visualization. For clustering, we explore the density-based spatial clustering of applications with noise (DBSCAN) algorithm, using gradient information to guide the clustering. We obtained a better visualization of the most prominent vessels and internal structures.
    Aiming at a better result than previous works, we employed heuristics from the literature to determine appropriate parameters for the clustering. We added steps to be performed before the clustering phase: one for pre-processing the volumetric dataset and another for analyzing candidate features to guide the clustering. In this latter aspect, we provide an interesting contribution: we explored the gradient magnitude as a feature that allowed relevant information to be extracted by the density-based spatial clustering. Besides the fact that DBSCAN allows easy detection of noise points, an interesting result for both datasets was that the first and largest cluster found as significant for the visualization represents the structure of interest. In the red channel, this cluster represents the most prominent vessels, while in the green channel, the peribiliary glands were made more evident.
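    A condensed sketch of such a pipeline is given below, assuming a single-channel volume: denoise, compute the gradient magnitude, cluster the foreground voxels with DBSCAN using position plus gradient magnitude as the feature vector, and keep the largest cluster for rendering. The eps, min_samples, intensity cut-off, and feature scaling are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import DBSCAN

def largest_structure(channel, intensity_cut=0.2, eps=3.0, min_samples=20):
    """Return a boolean mask of the largest DBSCAN cluster in one channel."""
    volume = ndimage.median_filter(channel.astype(float), size=3)   # denoise
    grad = ndimage.gaussian_gradient_magnitude(volume, sigma=1.0)   # gradient feature

    coords = np.argwhere(volume > intensity_cut * volume.max())     # foreground voxels
    grad_feat = grad[tuple(coords.T)]
    # spatial coordinates plus scaled gradient magnitude as the feature vector
    features = np.column_stack([coords, grad_feat / (grad_feat.max() + 1e-9) * eps])

    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    if (labels >= 0).any():
        keep = labels == np.bincount(labels[labels >= 0]).argmax()  # largest non-noise cluster
    else:
        keep = np.zeros(len(labels), dtype=bool)

    mask = np.zeros(channel.shape, dtype=bool)
    mask[tuple(coords[keep].T)] = True
    return mask                                                     # voxels to render in 3D
```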

    Fast fluorescence microscopy for imaging the dynamics of embryonic development

    Live imaging has gained a pivotal role in developmental biology, since it increasingly allows real-time observation of cell behavior in intact organisms. Microscopes that can capture the dynamics of ever-faster biological events, fluorescent markers optimal for in vivo imaging, and adapted reconstruction and analysis programs that complete the data flow all contribute to this success. Focusing on temporal resolution, we discuss how fast imaging can be achieved with minimal prejudice to spatial resolution, photon count, or the ability to analyze images reliably and automatically. In particular, we show how integrated approaches to imaging that combine bright fluorescent probes, fast microscopes, and custom post-processing techniques can address the kinetics of biological systems at multiple scales. Finally, we discuss remaining challenges and opportunities for further advances in this field.