Two-photon imaging and analysis of neural network dynamics
The glow of a starry night sky, the smell of a freshly brewed cup of coffee
or the sound of ocean waves breaking on the beach are representations of the
physical world that have been created by the dynamic interactions of thousands
of neurons in our brains. How the brain mediates perceptions, creates thoughts,
stores memories and initiates actions remains one of the most profound puzzles
in biology, if not all of science. A key to a mechanistic understanding of how
the nervous system works is the ability to analyze the dynamics of neuronal
networks in the living organism in the context of sensory stimulation and
behaviour. Dynamic brain properties have been fairly well characterized on the
microscopic level of individual neurons and on the macroscopic level of whole
brain areas largely with the help of various electrophysiological techniques.
However, our understanding of the mesoscopic level comprising local populations
of hundreds to thousands of neurons (so-called 'microcircuits') remains
comparatively poor. In large part, this has been due to the technical
difficulties involved in recording from large networks of neurons with
single-cell spatial resolution and near-millisecond temporal resolution in the
brain of living animals. In recent years, two-photon microscopy has emerged as
a technique which meets many of these requirements and thus has become the
method of choice for the interrogation of local neural circuits. Here, we
review the state of research in the field of two-photon imaging of neuronal
populations, covering the topics of microscope technology, suitable fluorescent
indicator dyes, staining techniques, and in particular analysis techniques for
extracting relevant information from the fluorescence data. We expect that
functional analysis of neural networks using two-photon imaging will help to
decipher fundamental operational principles of neural microcircuits.
Comment: 36 pages, 4 figures, accepted for publication in Reports on Progress
in Physics
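A core step in the analysis pipelines this review covers is converting raw fluorescence traces into relative signal changes (ΔF/F). As a minimal illustration (not the review's own code; the function name, window size, and percentile are illustrative choices), a sliding-percentile baseline estimate in Python might look like:

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20, window=200):
    """Compute dF/F for a fluorescence trace using a sliding
    percentile baseline. This is a common, simple choice; real
    pipelines often add neuropil correction and spike deconvolution."""
    trace = np.asarray(trace, dtype=float)
    n = trace.size
    f0 = np.empty(n)
    for i in range(n):
        lo = max(0, i - window)
        hi = min(n, i + window + 1)
        f0[i] = np.percentile(trace[lo:hi], baseline_percentile)
    return (trace - f0) / f0
```

The percentile baseline tracks slow drift while remaining insensitive to brief calcium transients, so transient peaks stand out as positive ΔF/F deflections.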
Whole-brain vasculature reconstruction at the single capillary level
The distinct organization of the brain’s vascular network ensures that it is adequately supplied with oxygen and nutrients. However, despite this fundamental role, a detailed reconstruction of the brain-wide vasculature at the capillary level has remained elusive, because even the best available techniques yield insufficient image quality. Here, we demonstrate a novel approach that improves vascular demarcation by combining CLARITY with a vascular staining approach that can fill the entire blood vessel lumen, together with light-sheet fluorescence microscopy. This method significantly improves image contrast, particularly at depth, thereby allowing reliable application of automatic segmentation algorithms, which play an increasingly important role in high-throughput imaging of the terabyte-sized datasets now routinely produced. Furthermore, our novel method is compatible with endogenous fluorescence, thus allowing simultaneous investigation of vasculature and genetically targeted neurons. We believe our new method will be valuable for future brain-wide investigations of the capillary network.
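The automatic segmentation step mentioned above can, in the simplest case, reduce to intensity thresholding once vessel lumina are uniformly filled with stain. A toy sketch (the function name and fallback threshold are illustrative assumptions; production pipelines typically use tubularity filters or learned models):

```python
import numpy as np

def segment_vessels(volume, threshold=None):
    """Toy intensity-threshold segmentation of a stained vessel
    volume. Returns a boolean mask of voxels above threshold.
    If no threshold is given, use mean + 1 std as a crude cutoff."""
    volume = np.asarray(volume, dtype=float)
    if threshold is None:
        threshold = volume.mean() + volume.std()
    return volume > threshold
```

The point of the improved contrast reported above is precisely that such simple global criteria become usable at depth, where scattering would otherwise blur the vessel/background separation.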
Deep learning approach to Fourier ptychographic microscopy
Convolutional neural networks (CNNs) have gained tremendous success in
solving complex inverse problems. The aim of this work is to develop a novel
CNN framework to reconstruct video sequences of dynamic live cells captured
using a computational microscopy technique, Fourier ptychographic microscopy
(FPM). The unique feature of the FPM is its capability to reconstruct images
with both wide field-of-view (FOV) and high resolution, i.e. a large
space-bandwidth product (SBP), by taking a series of low-resolution intensity
images. For live cell imaging, a single FPM frame contains thousands of cell
samples with different morphological features. Our idea is to fully exploit the
statistical information provided by these large spatial ensembles to make
predictions in a sequential measurement, without using any additional temporal
dataset. Specifically, we show that it is possible to reconstruct high-SBP
dynamic cell videos by a CNN trained only on the first FPM dataset captured at
the beginning of a time-series experiment. Our CNN approach reconstructs a
12800×10800-pixel phase image in only ~25 seconds, a 50× speedup compared
to the model-based FPM algorithm. In addition, the CNN further reduces the
required number of images in each time frame by ~6×. Overall, this
significantly improves the imaging throughput by reducing both the acquisition
and computational times. The proposed CNN is based on the conditional
generative adversarial network (cGAN) framework. We further propose a mixed
loss function that combines the standard image-domain loss with a weighted
Fourier-domain loss, which leads to improved reconstruction of the
high-frequency information. Additionally, we exploit transfer learning so
that our pre-trained CNN can be further optimized to image other cell types.
Our technique demonstrates a promising deep learning approach to continuously
monitor large live-cell populations over an extended time and gather useful
spatial and temporal information with sub-cellular resolution.
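One ingredient reported for this cGAN-based reconstruction is a mixed loss that combines a standard image-domain loss with a weighted Fourier-domain loss, so that high-frequency content is explicitly penalized. A minimal NumPy sketch of that idea (the function name, weight, and choice of L1 norms are illustrative assumptions; the actual training objective would be implemented in a deep-learning framework):

```python
import numpy as np

def mixed_loss(pred, target, fourier_weight=0.1):
    """Image-domain L1 loss plus a weighted Fourier-domain L1 loss.
    The Fourier term penalizes errors in high-frequency content;
    `fourier_weight` balances the two terms and is an illustrative
    hyperparameter, not a value from the paper."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    image_loss = np.mean(np.abs(pred - target))
    # Compare the 2-D discrete Fourier transforms of both images.
    fourier_loss = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return image_loss + fourier_weight * fourier_loss
```

Because the DFT is linear, the Fourier term is zero exactly when the images match; weighting it against the pixel-domain term lets training trade off global fidelity against fine detail.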