621 research outputs found
Visualization and Correction of Automated Segmentation, Tracking and Lineaging from 5-D Stem Cell Image Sequences
Results: We present an application that enables the quantitative analysis of
multichannel 5-D (x, y, z, t, channel) and large montage confocal fluorescence
microscopy images. The image sequences show stem cells together with blood
vessels, enabling quantification of the dynamic behaviors of stem cells in
relation to their vascular niche, with applications in developmental and cancer
biology. Our application automatically segments, tracks, and lineages the image
sequence data and then allows the user to view and edit the results of
automated algorithms in a stereoscopic 3-D window while simultaneously viewing
the stem cell lineage tree in a 2-D window. Using the GPU to store and render
the image sequence data enables a hybrid computational approach. An
inference-based approach utilizing user-provided edits to automatically correct
related mistakes executes interactively on the system CPU while the GPU handles
3-D visualization tasks. Conclusions: By exploiting commodity computer gaming
hardware, we have developed an application that can be run in the laboratory to
facilitate rapid iteration through biological experiments. There is a pressing
need for visualization and analysis tools for 5-D live cell image data. We
combine accurate unsupervised processes with an intuitive visualization of the
results. Our validation interface allows for each data set to be corrected to
100% accuracy, ensuring that downstream data analysis is accurate and
verifiable. Our tool is the first to combine all of these aspects, leveraging
the synergies obtained by utilizing validation information from stereo
visualization to improve the low level image processing tasks.Comment: BioVis 2014 conferenc
Feature-preserving image restoration and its application in biological fluorescence microscopy
This thesis presents a new investigation of image restoration and its application to
fluorescence cell microscopy. The first part of the work is to develop advanced image
denoising algorithms to restore images from noisy observations by using a novel feature-preserving
diffusion approach. I have applied these algorithms to different types of
images, including biometric, biological and natural images, and demonstrated their
superior performance for noise removal and feature preservation, compared to several
state-of-the-art methods. In the second part of my work, I explore a novel, simple and
inexpensive super-resolution restoration method for quantitative microscopy in cell
biology. In this method, a super-resolution image is restored, through an inverse process,
by using multiple diffraction-limited (low) resolution observations, which are acquired
from conventional microscopes whilst translating the sample parallel to the image plane,
hence referred to as translation microscopy (TRAM). A key to this new development is the
integration of a robust feature detector, developed in the first part, to the inverse process
to restore high resolution images well above the diffraction limit in the presence of strong
noise. TRAM is a post-image acquisition computational method and can be implemented
with any microscope. Experiments show a nearly 7-fold increase in lateral spatial
resolution in noisy biological environments, delivering multi-colour image resolution of
~30 nm.
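The TRAM inverse process is more sophisticated than plain shift-and-add, but the core idea of fusing sub-pixel-shifted low-resolution observations onto a finer grid can be sketched as follows. This is a toy illustration with a hypothetical function name, assuming known integer shifts on the high-resolution grid and no blur model:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor):
    """Naive multi-frame super-resolution: place each low-res frame's
    pixels onto an upsampled grid at its known sub-pixel shift, then
    average overlapping contributions.

    frames : list of (h, w) arrays
    shifts : list of (dy, dx) offsets, in high-res pixels
    factor : integer upsampling factor
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.arange(h) * factor + dy
        xs = np.arange(w) * factor + dx
        hi[np.ix_(ys, xs)] += frame       # scatter samples onto fine grid
        count[np.ix_(ys, xs)] += 1
    count[count == 0] = 1                 # avoid division by zero in gaps
    return hi / count
```

A real TRAM reconstruction additionally deconvolves the point-spread function and, as the thesis emphasizes, couples the inversion to a robust feature detector to survive strong noise.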
TOMOBFLOW: feature-preserving noise filtering for electron tomography
Background: Noise filtering techniques are needed in electron tomography to allow proper interpretation of datasets. Standard linear filtering techniques are characterized by a tradeoff between the amount of noise reduced and the blurring of the features of interest. On the other hand, sophisticated anisotropic nonlinear filtering techniques allow noise reduction with good preservation of structures. However, these techniques are computationally intensive and difficult to tune to the problem at hand. Results: TOMOBFLOW is a program for noise filtering with capabilities for preserving biologically relevant information. It is an efficient implementation of the Beltrami flow, a nonlinear filtering method that locally tunes the strength of the smoothing according to an edge indicator based on geometric properties. Because the method has no free parameters that are hard to tune, TOMOBFLOW is a user-friendly filtering program equipped with the power of diffusion-based filtering methods. Furthermore, TOMOBFLOW can handle different types and formats of images, making it useful for electron tomography in particular and bioimaging in general. Conclusion: TOMOBFLOW allows efficient noise filtering of bioimaging datasets with preservation of the features of interest, thereby yielding data better suited for post-processing, visualization and interpretation. It is available at http://www.ual.es/%7ejjfdez/SW/tomobflow.html.
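The Beltrami flow underlying TOMOBFLOW can be illustrated with a minimal explicit scheme for 2-D grayscale images; this is a sketch of the general method, not the program's actual optimized implementation, and the `dt` and `beta` defaults are illustrative assumptions:

```python
import numpy as np

def beltrami_flow(img, n_iter=20, dt=0.1, beta=1.0):
    """Explicit Beltrami-flow smoothing of a 2-D grayscale image.
    The edge indicator 1/sqrt(1 + beta^2 |grad u|^2) weakens diffusion
    across strong gradients, so edges are preserved while flat regions
    are smoothed."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u)                       # axis 0 = rows, axis 1 = cols
        g = np.sqrt(1.0 + beta**2 * (gx**2 + gy**2))  # metric-based edge indicator
        # divergence of (grad u / g), then scale by 1/g
        div = np.gradient(gx / g, axis=1) + np.gradient(gy / g, axis=0)
        u += dt * div / g
    return u
```

With a small step size the iteration behaves like ordinary diffusion in flat areas (where g ≈ 1) and nearly halts across edges (where g is large), which is exactly the parameter-free selectivity the abstract highlights.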
A CANDLE for a deeper in-vivo insight
A new Collaborative Approach for eNhanced Denoising under Low-light Excitation (CANDLE) is introduced for the processing of 3D laser scanning multiphoton microscopy images. CANDLE is designed to be robust for low signal-to-noise ratio (SNR) conditions typically encountered when imaging deep in scattering biological specimens. Based on an optimized non-local means filter involving the comparison of filtered patches, CANDLE locally adapts the amount of smoothing in order to deal with the noise inhomogeneity inherent to laser scanning fluorescence microscopy images. An extensive validation on synthetic data, images acquired on microspheres and in vivo images is presented. These experiments show that the CANDLE filter obtained competitive results compared to a state-of-the-art method and a locally adaptive optimized non-local means filter, especially under low SNR conditions (PSNR < 8 dB). Finally, the deeper imaging capabilities enabled by the proposed filter are demonstrated on deep tissue in vivo images of neurons and fine axonal processes in the Xenopus tadpole brain.

We want to thank Florian Luisier for providing a free plugin of his PureDenoise filter. We also want to thank Markku Makitalo for providing the code of their OVST. This study was supported by the Canadian Institutes of Health Research (CIHR, MOP-84360 to DLC and MOP-77567 to ESR) and Cda (CECR)-Gevas-OE016. MM holds a fellowship from the Deutscher Akademischer Austausch Dienst (DAAD) and a McGill Principal's Award. ESR is a tier 2 Canada Research Chair. This work has been partially supported by the Spanish Health Institute Carlos III through the RETICS Combiomed, RD07/0067/2001. This work benefited from the use of ImageJ.

Coupé, P.; Munz, M.; Manjón Herrera, JV.; Ruthazer, ES.; Collins, DL. (2012). A CANDLE for a deeper in-vivo insight. Medical Image Analysis. 16(4):849-864. https://doi.org/10.1016/j.media.2012.01.002
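For orientation, the classical non-local means scheme that CANDLE builds on can be sketched as follows. This is a deliberately naive implementation with illustrative parameter defaults, not the optimized, filtered-patch CANDLE variant itself:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Minimal non-local means: each pixel becomes a weighted average of
    pixels in a search window, with weights set by the similarity of the
    surrounding patches. Patches that look alike (even far apart) get
    high weight, so repeating structure is preserved while noise averages
    out."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    p = np.pad(img.astype(float), pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = p[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num = den = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = p[ci + di - pr:ci + di + pr + 1,
                             cj + dj - pr:cj + dj + pr + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h**2)
                    num += w * p[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```

CANDLE's contributions sit on top of this scheme: comparing pre-filtered patches for robustness at very low SNR and locally adapting the smoothing parameter to the spatially varying noise of laser scanning microscopy.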
Filter-Based Probabilistic Markov Random Field Image Priors: Learning, Evaluation, and Image Analysis
Markov random fields (MRF) based on linear filter responses are one of the most popular forms for modeling image priors due to their rigorous probabilistic interpretations and versatility in various applications. In this dissertation, we propose an application-independent method to quantitatively evaluate MRF image priors using model samples. To this end, we developed an efficient auxiliary-variable Gibbs sampler for a general class of MRFs with flexible potentials. We found that the popular pairwise and high-order MRF priors capture image statistics quite roughly and exhibit poor generative properties. We further developed new learning strategies and obtained high-order MRFs that capture the statistics of the inbuilt features well, thus being true maximum-entropy models, along with other important statistical properties of natural images, outlining the capabilities of MRFs. We suggest a multi-modal extension of MRF potentials which not only allows training more expressive priors, but also helps to reveal more insights into MRF variants, based on which we are able to train compact, fully convolutional restricted Boltzmann machines (RBM) that can model visual repetitive textures even better than more complex and deep models.
The learned high-order MRFs allow us to develop new methods for various real-world image analysis problems. For denoising of natural images and deconvolution of microscopy images, the MRF priors are employed in a purely generative setting. We propose efficient sampling-based methods to infer Bayesian minimum mean squared error (MMSE) estimates, which substantially outperform maximum a-posteriori (MAP) estimates and can compete with state-of-the-art discriminative methods. For non-rigid registration of live cell nuclei in time-lapse microscopy images, we propose a global optical flow-based method. The statistics of noise in fluorescence microscopy images are studied to derive an adaptive weighting scheme for increasing model robustness. High-order MRFs are also employed to train image filters for extracting important features of cell nuclei, and the deformations of the nuclei are then estimated in the learned feature spaces. The developed method outperforms previous approaches in terms of both registration accuracy and computational efficiency.
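The contrast between sampling-based MMSE and MAP estimation can be illustrated on a toy scalar Gaussian model, where the posterior mean is available in closed form and the Monte Carlo average can be checked against it. With MRF priors the same average would instead be taken over Gibbs samples; all names and defaults below are illustrative:

```python
import numpy as np

def mmse_from_samples(y, prior_mu=0.0, prior_var=1.0, noise_var=0.25,
                      n_samples=5000, rng=None):
    """Sampling-based MMSE estimate for the toy model
    y = x + noise, x ~ N(prior_mu, prior_var), noise ~ N(0, noise_var).
    The MMSE estimate is the posterior mean, approximated here by
    averaging posterior samples. Returns (Monte Carlo estimate,
    exact posterior mean) so the two can be compared."""
    rng = rng or np.random.default_rng(0)
    # Gaussian conjugacy gives the posterior in closed form
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mu = post_var * (prior_mu / prior_var + y / noise_var)
    samples = rng.normal(post_mu, np.sqrt(post_var), size=n_samples)
    return samples.mean(), post_mu
```

In this symmetric Gaussian case MAP and MMSE coincide; with the heavy-tailed, multi-modal MRF posteriors of the dissertation they differ markedly, which is why the sampling-based posterior mean outperforms the MAP point estimate.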
A flexible and accurate total variation and cascaded denoisers-based image reconstruction algorithm for hyperspectrally compressed ultrafast photography
Hyperspectrally compressed ultrafast photography (HCUP) based on compressed
sensing and the time- and spectrum-to-space mappings can simultaneously realize
the temporal and spectral imaging of non-repeatable or difficult-to-repeat
transient events passively in a single exposure. It possesses an incredibly
high frame rate of tens of trillions of frames per second and a sequence depth
of several hundred, and plays a revolutionary role in single-shot ultrafast
optical imaging. However, due to the ultra-high data compression ratio induced
by the extremely large sequence depth as well as the limited fidelities of
traditional reconstruction algorithms over the reconstruction process, HCUP
suffers from a poor image reconstruction quality and fails to capture fine
structures in complex transient scenes. To overcome these restrictions, we
propose a flexible image reconstruction algorithm based on the total variation
(TV) and cascaded denoisers (CD) for HCUP, named the TV-CD algorithm. It
applies the TV denoising model cascaded with several advanced deep
learning-based denoising models in the iterative plug-and-play alternating
direction method of multipliers framework, which can preserve the image
smoothness while utilizing the deep denoising networks to obtain more prior
information, thus addressing the sparse representation problem common to local
similarity and motion compensation. Both simulation and experimental results show that the
proposed TV-CD algorithm can effectively improve the image reconstruction
accuracy and quality of HCUP, and further promote the practical applications of
HCUP in capturing high-dimensional complex physical, chemical and biological
ultrafast optical scenes. Comment: 25 pages, 5 figures and 1 table
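The plug-and-play ADMM structure described above, with a cascade of denoisers serving as the prior step, can be sketched generically. This skeleton uses a plain gradient step for data fidelity and accepts arbitrary denoiser callables; it illustrates the framework, not the authors' actual TV-CD implementation, and the step size and `rho` are illustrative assumptions:

```python
import numpy as np

def pnp_admm(y, forward, adjoint, denoisers, n_iter=30, rho=1.0):
    """Skeleton of plug-and-play ADMM for the linear model y = A x.
    forward/adjoint implement A and A^T; `denoisers` is a cascade of
    callables applied in sequence as the prior (proximal) step -- in
    TV-CD, a TV model followed by learned deep denoisers."""
    x = adjoint(y)              # crude initialization from the data
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # x-update: gradient step on ||A x - y||^2 plus the ADMM coupling
        grad = adjoint(forward(x) - y) + rho * (x - z + u)
        x = x - 0.1 * grad
        # z-update: plug-and-play prior, applied as a denoiser cascade
        z = x + u
        for d in denoisers:
            z = d(z)
        u = u + x - z           # dual variable update
    return x
```

The appeal of the plug-and-play formulation is exactly what the abstract exploits: the TV step enforces smoothness analytically, while each learned denoiser in the cascade injects richer image priors without changing the outer optimization loop.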
Deep learning approach to Fourier ptychographic microscopy
Convolutional neural networks (CNNs) have gained tremendous success in
solving complex inverse problems. The aim of this work is to develop a novel
CNN framework to reconstruct video sequence of dynamic live cells captured
using a computational microscopy technique, Fourier ptychographic microscopy
(FPM). The unique feature of the FPM is its capability to reconstruct images
with both wide field-of-view (FOV) and high resolution, i.e. a large
space-bandwidth-product (SBP), by taking a series of low resolution intensity
images. For live cell imaging, a single FPM frame contains thousands of cell
samples with different morphological features. Our idea is to fully exploit the
statistical information provided by this large spatial ensemble so as to make
predictions in a sequential measurement, without using any additional temporal
dataset. Specifically, we show that it is possible to reconstruct high-SBP
dynamic cell videos by a CNN trained only on the first FPM dataset captured at
the beginning of a time-series experiment. Our CNN approach reconstructs a
12800×10800-pixel phase image in only ~25 seconds, a 50× speedup compared
to the model-based FPM algorithm. In addition, the CNN further reduces the
required number of images in each time frame by ~6×. Overall, this
significantly improves the imaging throughput by reducing both the acquisition
and computational times. The proposed CNN is based on the conditional
generative adversarial network (cGAN) framework. Additionally, we also exploit
transfer learning so that our pre-trained CNN can be further optimized to image
other cell types. Our technique demonstrates a promising deep learning approach
to continuously monitor large live-cell populations over an extended time and
gather useful spatial and temporal information with sub-cellular resolution.
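The FPM measurement process that the CNN inverts can be illustrated with a toy forward model: each oblique illumination shifts the object's spectrum, the objective pupil low-pass filters it, and the camera records the intensity of the resulting low-resolution field. The sketch below (idealized coherent model, hypothetical parameters) shows why a series of low-resolution images jointly encodes a large space-bandwidth product:

```python
import numpy as np

def fpm_forward(obj, pupil_radius, shifts):
    """Toy Fourier ptychography forward model. For each illumination
    angle (expressed as a spectrum shift in pixels), apply a circular
    pupil mask to the shifted object spectrum and record the intensity
    of the low-resolution field seen by the camera."""
    F = np.fft.fftshift(np.fft.fft2(obj))      # centered object spectrum
    n = obj.shape[0]
    yy, xx = np.mgrid[:n, :n] - n // 2
    imgs = []
    for dy, dx in shifts:
        # pupil mask centered at the illumination-dependent offset
        mask = (yy - dy) ** 2 + (xx - dx) ** 2 <= pupil_radius ** 2
        low = np.fft.ifft2(np.fft.ifftshift(F * mask))
        imgs.append(np.abs(low) ** 2)          # camera records intensity only
    return imgs
```

Each shift exposes a different patch of the spectrum through the same small pupil; stitching those patches back together (while also recovering the lost phase) is the inverse problem that both the model-based FPM algorithm and the cGAN approach solve.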
Computational Framework For Neuro-Optics Simulation And Deep Learning Denoising
The application of machine learning techniques in microscopic image restoration has shown superior performance. However, the development of such techniques has been hindered by the demand for large datasets and the lack of ground truth. To address these challenges, this study introduces a computer simulation model that accurately captures the neural anatomic volume, fluorescence light transportation within the tissue volume, and the photon collection process of microscopic imaging sensors. The primary goal of this simulation is to generate realistic image data for training and validating machine learning models. One notable aspect of this study is the incorporation of a machine learning denoiser into the simulation, which accelerates the computational efficiency of the entire process. By reducing noise levels in the generated images, the denoiser significantly enhances the simulation's performance, allowing for faster and more accurate modeling and analysis of microscopy images. This approach addresses the limitations of data availability and ground truth annotation, offering a practical and efficient solution for microscopic image restoration. The integration of a machine learning denoiser within the simulation significantly accelerates the overall simulation process, while improving the quality of the generated images. This advancement opens new possibilities for training and validating machine learning models in microscopic image restoration, overcoming the challenges of large datasets and the lack of ground truth.