Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. These techniques share the common objective of inferring a
latent sharp image from one or several corresponding blurry images; blind
deblurring techniques must additionally estimate an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images in complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how they handle the ill-posedness, a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel spatially variant and hard
to obtain. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods and practical
issues, as well as a discussion of promising future directions, is also
presented.

Comment: 53 pages, 17 figures
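The non-blind setting described above, where the blur kernel is known, can be illustrated with a minimal frequency-domain Wiener deconvolution sketch. This is an illustration of one classical approach only, not a method from the survey; the function name `wiener_deblur`, the box kernel, and the regularization constant `k` are all assumptions made for this example.

```python
import numpy as np

def wiener_deblur(blurry, kernel, k=0.01):
    # Non-blind deconvolution: the blur kernel is assumed known, and we
    # invert the blur in the frequency domain.
    H = np.fft.fft2(kernel, s=blurry.shape)
    G = np.fft.fft2(blurry)
    # Wiener filter H* / (|H|^2 + k): the constant k regularizes the
    # inversion at frequencies where |H| is small, mitigating the
    # ill-posedness of the problem.
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# Toy example: blur a random image with a 5x5 box kernel (circular
# convolution via FFT), then restore it with the known kernel.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurry = np.real(np.fft.ifft2(np.fft.fft2(kernel, s=sharp.shape) * np.fft.fft2(sharp)))
restored = wiener_deblur(blurry, kernel, k=1e-6)
```

Blind deblurring, by contrast, must estimate `kernel` itself from the blurry observation, which is where the ill-posedness discussed in the review becomes most severe.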
Geobase Information System Impacts on Space Image Formats
As Geobase Information Systems increase in number, size and complexity, the format compatibility of satellite remote sensing data becomes increasingly important. Because of the vast and continually increasing quantity of data available from remote sensing systems, the utility of these data is increasingly dependent on the degree to which their formats facilitate, or hinder, their incorporation into Geobase Information Systems. Merging satellite data into a geobase system requires that both share a compatible geographic referencing system. Greater acceptance of satellite data by the user community will be facilitated if the data are in a form that corresponds most readily to existing geobase data structures. The conference addressed a number of specific topics and made recommendations.
CA1-projecting subiculum neurons facilitate object-place learning.
Recent anatomical evidence suggests a functionally significant back-projection pathway from the subiculum to the CA1. Here we show that the afferent circuitry of CA1-projecting subicular neurons is biased by inputs from CA1 inhibitory neurons and the visual cortex, but lacks input from the entorhinal cortex. Efferents of the CA1-projecting subiculum neurons also target the perirhinal cortex, an area strongly implicated in object-place learning. We identify a critical role for CA1-projecting subicular neurons in object-location learning and memory, and show that this projection modulates place-specific activity of CA1 neurons and their responses to displaced objects. Together, these experiments reveal a novel pathway by which cortical inputs, particularly those from the visual cortex, reach the hippocampal output region CA1. Our findings also implicate this circuitry in the formation of complex spatial representations and learning of object-place associations.
A model of ganglion axon pathways accounts for percepts elicited by retinal implants.
Degenerative retinal diseases such as retinitis pigmentosa and macular degeneration cause irreversible vision loss in more than 10 million people worldwide. Retinal prostheses, now implanted in over 250 patients worldwide, electrically stimulate surviving cells in order to evoke neuronal responses that are interpreted by the brain as visual percepts ('phosphenes'). However, instead of seeing focal spots of light, current implant users perceive highly distorted phosphenes that vary in shape both across subjects and electrodes. We characterized these distortions by asking users of the Argus retinal prosthesis system (Second Sight Medical Products Inc.) to draw electrically elicited percepts on a touchscreen. Using ophthalmic fundus imaging and computational modeling, we show that elicited percepts can be accurately predicted by the topographic organization of optic nerve fiber bundles in each subject's retina, successfully replicating visual percepts ranging from 'blobs' to oriented 'streaks' and 'wedges' depending on the retinal location of the stimulating electrode. This provides the first evidence that activation of passing axon fibers accounts for the rich repertoire of phosphene shapes commonly reported in psychophysical experiments, which can severely distort the quality of the generated visual experience. Overall, our findings argue for more detailed modeling of biological detail across neural engineering applications.
A platform for brain-wide imaging and reconstruction of individual neurons
The structure of axonal arbors controls how signals from individual neurons are routed within the mammalian brain. However, the arbors of very few long-range projection neurons have been reconstructed in their entirety, as axons with diameters as small as 100 nm arborize in target regions dispersed over many millimeters of tissue. We introduce a platform for high-resolution, three-dimensional fluorescence imaging of complete tissue volumes that enables the visualization and reconstruction of long-range axonal arbors. This platform relies on a high-speed two-photon microscope integrated with a tissue vibratome and a suite of computational tools for large-scale image data. We demonstrate the power of this approach by reconstructing the axonal arbors of multiple neurons in the motor cortex across a single mouse brain.
Synthetic recording and in situ readout of lineage information in single cells
Reconstructing the lineage relationships and dynamic event histories of individual cells within their native spatial context is a long-standing challenge in biology. Many biological processes of interest occur in optically opaque or physically inaccessible contexts, necessitating approaches other than direct imaging. Here, we describe a new synthetic system that enables cells to record lineage information and event histories in the genome in a format that can be subsequently read out in single cells in situ. This system, termed Memory by Engineered Mutagenesis with Optical In situ Readout (MEMOIR), is based on a set of barcoded recording elements termed scratchpads. The state of a given scratchpad can be irreversibly altered by Cas9-based targeted mutagenesis, and read out in single cells through multiplexed single-molecule RNA fluorescence in situ hybridization (smFISH). To demonstrate a proof of principle of MEMOIR, we engineered mouse embryonic stem (ES) cells to contain multiple scratchpads and other recording components. In these cells, scratchpads were altered in a progressive and stochastic fashion as cells proliferated. Analysis of the final states of scratchpads in single cells in situ enabled reconstruction of the lineage trees of cell colonies. Combining analysis of endogenous gene expression with lineage reconstruction in the same cells further allowed inference of the dynamic rates at which ES cells switch between two gene expression states. Finally, using simulations, we showed how parallel MEMOIR systems operating in the same cell can enable recording and readout of dynamic cellular event histories. MEMOIR thus provides a versatile platform for information recording and in situ, single-cell readout across diverse biological systems.
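The recording principle (irreversible, stochastic scratchpad edits that accumulate over divisions, with relatedness inferred from shared edits) can be sketched in a toy simulation. All names and parameters below are illustrative assumptions, not details of the actual MEMOIR implementation.

```python
import random

def simulate_memoir(generations=4, n_scratchpads=10, p_mut=0.3, seed=1):
    # Toy model: each cell carries barcoded scratchpads. At every division,
    # each still-unmodified scratchpad is irreversibly edited with
    # probability p_mut, recording a random allele for that edit event.
    rng = random.Random(seed)
    cells = [([None] * n_scratchpads, "")]  # (scratchpad states, lineage path)
    for _ in range(generations):
        next_gen = []
        for states, path in cells:
            for branch in "01":
                child = []
                for s in states:
                    if s is None and rng.random() < p_mut:
                        child.append(rng.randrange(10**6))  # new allele
                    else:
                        child.append(s)  # edits are irreversible
                next_gen.append((child, path + branch))
        cells = next_gen
    return cells

def shared_edits(a, b):
    # Cells that inherited the same edit from a common ancestor appear
    # related; counting shared alleles approximates lineage distance.
    return sum(1 for x, y in zip(a, b) if x is not None and x == y)

cells = simulate_memoir()
siblings = shared_edits(cells[0][0], cells[1][0])   # paths "0000" vs "0001"
cousins = shared_edits(cells[0][0], cells[-1][0])   # paths "0000" vs "1111"
```

In this sketch, siblings share every edit made along their common four-division history, while cells that diverged at the first division share essentially none, which is the signal a lineage-reconstruction procedure exploits.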