Improving spatial resolution of confocal Raman microscopy by super-resolution image restoration
A new super-resolution image restoration method for confocal Raman microscopy (SRIR-RAMAN) is proposed to improve its spatial resolution. The method recovers the high spatial frequencies lost by the confocal system using Poisson-MAP super-resolution image restoration, thereby improving spatial resolution and enabling super-resolution imaging. Simulation analyses and experimental results indicate that SRIR-RAMAN improves the spatial resolution by 65%, reaching 200 nm on the same confocal Raman microscopy system. The method can provide a new tool for high-spatial-resolution micro-probe structure detection in physical chemistry, materials science, biomedical science, and other areas.
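The Poisson-MAP restoration cited above is closely related to Richardson-Lucy deconvolution. Below is a minimal NumPy/SciPy sketch of that iteration, assuming a known point-spread function; it illustrates the idea only and is not the authors' implementation (a true MAP variant would add a prior term to each update).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Poisson maximum-likelihood deconvolution (Richardson-Lucy iteration).

    A Poisson-MAP variant (as in SRIR-RAMAN) would add a regularizing prior
    to each update; this sketch shows only the likelihood-driven step.
    """
    image = np.asarray(image, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                      # normalize the point-spread function
    psf_mirror = psf[::-1, ::-1]               # flipped PSF for the correction step
    estimate = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)        # compare data with re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```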
Effects of spatial resolution
Studies of the effects of spatial resolution on the extraction of geologic information are woefully lacking, but these effects can be examined in two general categories: the detection of spatial features per se, and the influence of the instantaneous field of view (IFOV) on the definition of spectral signatures and on general mapping ability.
Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution.
We demonstrate lensfree holographic microscopy on a chip, achieving approximately 0.6 μm spatial resolution, corresponding to a numerical aperture of approximately 0.5, over a large field of view of approximately 24 mm². By using partially coherent illumination from a large aperture (approximately 50 μm), we acquire lower-resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel-shifting-based super-resolution algorithm to recover much higher-resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire active area of the sensor chip, which, owing to the unit magnification, is also the imaging field of view (24 mm²). We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, and Caenorhabditis elegans.
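The sub-pixel-shifting idea can be illustrated with a generic shift-and-add reconstruction. The NumPy sketch below, an illustration rather than the authors' holographic algorithm, registers low-resolution frames with known sub-pixel shifts onto a finer grid and averages the accumulated samples.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place low-resolution frames with known sub-pixel shifts onto a finer
    grid and average the accumulated samples.

    frames : sequence of (H, W) arrays (low-resolution captures)
    shifts : sequence of (dy, dx) sub-pixel shifts, in low-res pixel units
    factor : integer upsampling factor of the high-resolution grid
    """
    H, W = frames[0].shape
    acc = np.zeros((H * factor, W * factor))
    hits = np.zeros_like(acc)
    ys, xs = np.mgrid[0:H, 0:W]
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-resolution grid position for every low-resolution sample.
        hy = np.clip(np.round((ys + dy) * factor).astype(int), 0, H * factor - 1)
        hx = np.clip(np.round((xs + dx) * factor).astype(int), 0, W * factor - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(hits, (hy, hx), 1.0)
    return acc / np.maximum(hits, 1.0)         # avoid division by zero in empty cells
```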
Nanoscale Infrared Imaging Analysis of Carbonaceous Chondrites to Understand Organic-Mineral Interactions During Aqueous Alteration
Organic matter in carbonaceous chondrites is distributed in the fine-grained matrix. To understand the pre- and post-accretion history of this organic matter and its association with surrounding minerals, microscopic techniques are mandatory. Infrared (IR) spectroscopy is a useful technique, but its spatial resolution is limited to a few micrometers by the diffraction limit. In this study, we applied a high-spatial-resolution IR imaging method, based on an atomic force microscope (AFM) whose tip detects the thermal expansion of a sample caused by absorption of infrared radiation, to the CM2 carbonaceous chondrites Murchison and Bells. We confirmed that this technique permits organic analysis of the meteorite samples at 30 nm spatial resolution. The IR imaging results are consistent with the previously reported association of organic matter and phyllosilicates, but at much higher spatial resolution. The observed heterogeneous distributions of the functional groups of organic matter reveal its association with minerals in meteorite samples at 30 nm spatial resolution by IR spectroscopy.
Optimal Population Codes for Space: Grid Cells Outperform Place Cells
Rodents use two distinct neuronal coordinate systems to estimate their position: place fields in the hippocampus and grid fields in the entorhinal cortex. Whereas place cells spike at only one particular spatial location, grid cells fire at multiple sites that correspond to the points of an imaginary hexagonal lattice. We study how to best construct place and grid codes, taking the probabilistic nature of neural spiking into account. Which spatial encoding properties of individual neurons confer the highest resolution when decoding the animal’s position from the neuronal population response? A priori, estimating a spatial position from a grid code could be ambiguous, as regular periodic lattices possess translational symmetry. The solution to this problem requires lattices for grid cells with different spacings; the spatial resolution crucially depends on choosing the right ratios of these spacings across the population. We compute the expected error in estimating the position both in the asymptotic limit, using Fisher information, and for low spike counts, using maximum likelihood estimation. Achieving high spatial resolution and covering a large range of space in a grid code leads to a trade-off: the best grid code for spatial resolution is built of nested modules with different spatial periods, one inside the other, whereas maximizing the spatial range requires distinct spatial periods that are pairwise incommensurate. Optimizing the spatial resolution predicts two grid cell properties that have been experimentally observed. First, short lattice spacings should outnumber long lattice spacings. Second, the grid code should be self-similar across different lattice spacings, so that the grid field always covers a fixed fraction of the lattice period. If these conditions are satisfied and the spatial “tuning curves” for each neuron span the same range of firing rates, then the resolution of the grid code easily exceeds that of the best possible place code with the same number of neurons.
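For reference, the Fisher-information argument mentioned above rests on the textbook expression for a population of independent Poisson-spiking neurons with tuning curves f_i(x) observed over a time window T, together with the Cramér-Rao bound on the squared decoding error (a standard result, not a derivation from the paper):

```latex
J(x) = T \sum_{i} \frac{f_i'(x)^2}{f_i(x)}, \qquad
\left\langle (\hat{x} - x)^2 \right\rangle \;\ge\; \frac{1}{J(x)}
```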
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is their low spatial resolution, which stems from the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
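As an illustration of the kind of convolutional model used for such enhancement, the PyTorch sketch below defines a small SRCNN-style network; the layer sizes and single-channel input are generic assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SimpleSRNet(nn.Module):
    """Minimal SRCNN-style network: feature extraction, non-linear mapping,
    and reconstruction, applied to an already-upsampled (e.g. bicubic) view."""

    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

# Example: enhance one upsampled sub-aperture view of a light field.
net = SimpleSRNet()
restored = net(torch.randn(1, 1, 64, 64))      # (batch, channel, height, width)
```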
