    Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors

    Many advances have been made in the field of computer vision. Several recent research trends have focused on mimicking human vision by using stereo vision systems. In multi-camera systems, a calibration process is usually implemented to improve the accuracy of the results. However, these systems generate a large amount of data to be processed; a powerful computer is therefore required and, in many cases, the processing cannot be done in real time. Neuromorphic Engineering attempts to create bio-inspired systems that mimic the information processing that takes place in the human brain. This information is encoded using pulses (or spikes), and the resulting systems are much simpler (in computational operations and resources), which allows them to perform similar tasks with much lower power consumption; such processing can therefore be implemented on specialized hardware that operates in real time. In this work, a bio-inspired stereo vision system is presented, and a calibration mechanism for it is implemented and evaluated using several tests. The result is a novel calibration technique for a neuromorphic stereo vision system, implemented on specialized hardware (an FPGA, Field-Programmable Gate Array), which achieves low latency and real-time operation in stand-alone systems.
    Ministerio de Economía y Competitividad TEC2016-77785-P
    Ministerio de Economía y Competitividad TIN2016-80644-
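
    As an illustration of how such spike-based stereo data might be handled in software, the sketch below pairs events from a rectified DVS stereo pair by row, polarity, and temporal proximity. This is only a minimal sketch of event-level stereo correspondence, not the authors' FPGA calibration mechanism; the event layout (x, y, timestamp in microseconds, polarity) and all thresholds are assumptions.

```python
# Minimal sketch: temporal matching of events from a rectified DVS stereo pair.
# Hypothetical event layout: (x, y, t_us, polarity); rectification means
# corresponding events share a row. Thresholds below are arbitrary.

def match_events(left, right, max_dt_us=500, max_disparity=40):
    """Pair left/right events sharing row, polarity, and a close timestamp."""
    matches = []
    for xl, yl, tl, pl in left:
        # Candidate right events: same row and polarity, within the time window.
        cands = [e for e in right
                 if e[1] == yl and e[3] == pl and abs(e[2] - tl) <= max_dt_us]
        if not cands:
            continue
        # Take the temporally closest candidate with a plausible disparity.
        xr, yr, tr, pr = min(cands, key=lambda e: abs(e[2] - tl))
        disparity = xl - xr
        if 0 <= disparity <= max_disparity:
            matches.append(((xl, yl), (xr, yr), disparity))
    return matches

# Example: two nearly simultaneous events on the same row match with disparity 3.
print(match_events([(10, 5, 1000, 1)], [(7, 5, 1200, 1)]))
```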

    Molecular profiling of resident and infiltrating mononuclear phagocytes during rapid adult retinal degeneration using single-cell RNA sequencing.

    Neuroinflammation commonly accompanies neurodegeneration, but the specific roles of resident and infiltrating immune cells during degeneration remain controversial. Much of the difficulty in assessing myeloid cell-specific functions during disease progression arises from the inability to clearly distinguish between activated microglia and bone marrow-derived monocytes and macrophages in various stages of differentiation and activation within the central nervous system. Using an inducible model of photoreceptor cell death, we investigated the prevalence of infiltrating monocytes and macrophage subpopulations after the initiation of degeneration in the mouse retina. In vivo retinal imaging revealed infiltration of CCR2+ leukocytes across retinal vessels and into the parenchyma within 48 hours of photoreceptor degeneration. Immunohistochemistry and flow cytometry confirmed and characterized these leukocytes as CD11b+CD45+ cells. Single-cell mRNA sequencing of the entire CD11b+CD45+ population revealed the presence of resting microglia, activated microglia, monocytes, and macrophages, as well as 12 distinct subpopulations within these four major cell classes. Our results demonstrate a previously immeasurable degree of molecular heterogeneity in the innate immune response to cell-autonomous degeneration within the central nervous system and highlight the necessity of unbiased high-throughput and high-dimensional molecular techniques like scRNAseq for understanding the complex and changing landscape of immune responders during disease progression.
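
    The abstract does not state which analysis pipeline produced the 12 subpopulations, but an unbiased scRNA-seq clustering workflow of the kind described typically follows the steps sketched below using Scanpy; the input path and every parameter here are hypothetical.

```python
# Hypothetical scRNA-seq clustering outline (Scanpy); not the study's actual pipeline.
import scanpy as sc

adata = sc.read_10x_mtx("cd11b_cd45_sorted/")        # sorted CD11b+CD45+ cells (path assumed)
sc.pp.filter_cells(adata, min_genes=200)             # drop low-quality cells/empty droplets
sc.pp.normalize_total(adata, target_sum=1e4)         # sequencing-depth normalization
sc.pp.log1p(adata)                                   # variance-stabilizing transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000) # focus on informative genes
sc.pp.pca(adata, n_comps=50)                         # linear dimensionality reduction
sc.pp.neighbors(adata)                               # k-NN graph in PCA space
sc.tl.leiden(adata, resolution=1.0)                  # graph clustering -> candidate subpopulations
sc.tl.umap(adata)                                    # 2-D embedding for visual inspection
```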

    VEGF guides angiogenic sprouting utilizing endothelial tip cell filopodia

    Vascular endothelial growth factor (VEGF-A) is a major regulator of blood vessel formation and function. It controls several processes in endothelial cells, such as proliferation, survival, and migration, but it is not known how these are coordinately regulated to produce more complex morphogenetic events, such as tubular sprouting, fusion, and network formation. We show here that VEGF-A controls angiogenic sprouting in the early postnatal retina by guiding filopodial extension from specialized endothelial cells situated at the tips of the vascular sprouts. The tip cells respond to VEGF-A only by guided migration; the proliferative response to VEGF-A occurs in the sprout stalks. These two cellular responses are both mediated by agonistic activity of VEGF-A on VEGF receptor 2. Whereas tip cell migration depends on a gradient of VEGF-A, proliferation is regulated by its concentration. Thus, vessel patterning during retinal angiogenesis depends on the balance between two different qualities of the extracellular VEGF-A distribution, which regulate distinct cellular responses in defined populations of endothelial cells.
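
    The dissociation between gradient-driven migration and concentration-driven proliferation can be pictured with a toy calculation: the same VEGF-A profile yields two different cues depending on whether a cell reads its slope or its level. The profile and numbers below are invented for illustration only.

```python
# Toy illustration: one VEGF-A field, two readouts. All numbers are arbitrary.
import numpy as np

x = np.linspace(0.0, 1.0, 101)        # distance ahead of the vascular front (a.u.)
vegf = np.exp(3.0 * x)                # assumed exponential VEGF-A profile

migration_cue = np.gradient(vegf, x)  # tip cells read the slope (direction + strength)
proliferation_cue = vegf              # stalk cells read the local level

# A uniform bath (zero gradient everywhere) would abolish guided migration
# while still driving proliferation, matching the dissociation reported above.
```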

    Dampening Spontaneous Activity Improves the Light Sensitivity and Spatial Acuity of Optogenetic Retinal Prosthetic Responses

    Retinitis pigmentosa is a progressive retinal dystrophy that causes irreversible visual impairment and blindness. Retinal prostheses currently represent the only clinically available vision-restoring treatment, but the quality of vision returned remains poor. Recently, it has been suggested that the pathological spontaneous hyperactivity present in dystrophic retinas may contribute to the poor quality of vision returned by retinal prosthetics by reducing the signal-to-noise ratio of prosthetic responses. Here, we investigated to what extent blocking this hyperactivity can improve optogenetic retinal prosthetic responses. We recorded activity from channelrhodopsin-expressing retinal ganglion cells in retinal wholemounts in a mouse model of retinitis pigmentosa. Sophisticated stimuli, inspired by those used in clinical visual assessment, were used to assess the light sensitivity, contrast sensitivity, and spatial acuity of optogenetic responses; in all cases these were improved after blocking spontaneous hyperactivity using meclofenamic acid, a gap junction blocker. Our results suggest that this approach significantly improves the quality of vision returned by retinal prosthetics, paving the way for novel clinical applications. Moreover, the improvements in sensitivity achieved by blocking spontaneous hyperactivity may extend the dynamic range of optogenetic retinal prostheses, allowing them to be used at lower light intensities such as those encountered in everyday life.
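
    The signal-to-noise argument can be sketched numerically: light-evoked spikes must be detected against the baseline of spontaneous firing, so lowering that baseline raises the detectability of the same evoked response. The firing rates below are invented, and Poisson spiking is an assumption, not a claim about the recorded data.

```python
# Back-of-envelope SNR sketch: evoked response vs. spontaneous baseline.
import numpy as np

def response_snr(evoked_hz, baseline_hz, trials=50, seed=0):
    """SNR of the trial-averaged evoked rate against baseline variability."""
    rng = np.random.default_rng(seed)
    baseline = rng.poisson(baseline_hz, trials)            # spontaneous counts
    evoked = rng.poisson(evoked_hz + baseline_hz, trials)  # stimulus trials
    return (evoked.mean() - baseline.mean()) / baseline.std()

print(response_snr(evoked_hz=20, baseline_hz=30))  # hyperactive dystrophic retina
print(response_snr(evoked_hz=20, baseline_hz=5))   # after dampening hyperactivity
```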

    Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification

    Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue function and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques exploit the same principle: the sO2-dependent spectral contrast of haemoglobin. Traditional approaches for quantifying sO2 often rely on analytical models that are fitted to the spectral measurements. In practice, these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide an uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variabilities as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than those of the LSF. For the first time, we have demonstrated en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep learning approach for in vivo non-invasive label-free optical oximetry.
    R01 CA224911 - NCI NIH HHS; R01 CA232015 - NCI NIH HHS; R01 NS108464 - NINDS NIH HHS; R21 EY029412 - NEI NIH HHS
    Accepted manuscript
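
    For context, the traditional LSF baseline that DSL is compared against can be sketched as linear unmixing of the two haemoglobin extinction spectra; the function below is a generic sketch of that idea, not the paper's implementation, and real tabulated extinction coefficients would have to be substituted for the placeholder arrays.

```python
# Sketch of least-squares sO2 fitting: measured spectrum ~ c1*eps_HbO2 + c2*eps_Hb.
import numpy as np

def fit_so2(measured, eps_hbo2, eps_hb):
    """Estimate sO2 by unmixing a measured attenuation spectrum (LSF baseline)."""
    A = np.stack([eps_hbo2, eps_hb], axis=1)       # (n_wavelengths, 2) design matrix
    (c1, c2), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return c1 / (c1 + c2)                          # oxygenated fraction

# Placeholder spectra (real extinction tables would go here).
eps_hbo2 = np.array([1.0, 0.6, 0.3, 0.5])
eps_hb = np.array([0.4, 0.7, 0.9, 0.6])
print(fit_so2(0.8 * eps_hbo2 + 0.2 * eps_hb, eps_hbo2, eps_hb))  # recovers ~0.8
```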

    DART: Distribution Aware Retinal Transform for Event-based Cameras

    We introduce a generic visual descriptor, termed the distribution aware retinal transform (DART), that encodes structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection, and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-features classification framework, and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) to overcome the low-sample problem in the one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) to achieve tracker robustness, the scale and rotation equivariance of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to yield a high intersection-over-union score against augmented ground truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain.
    Comment: 12 pages, revision submitted to TPAMI in Nov 201
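
    The log-polar encoding at the heart of DART can be illustrated with a small histogram sketch: event offsets around a centre are binned by log-radius and angle, which is what gives such descriptors their tolerance to scale and rotation. The bin counts, radius, and normalization below are guesses for illustration, not the published parameters.

```python
# Illustrative log-polar event histogram in the spirit of DART (parameters assumed).
import numpy as np

def log_polar_descriptor(events_xy, center, n_rings=8, n_wedges=16, r_max=31.0):
    """Histogram event offsets around `center` into log-radius x angle bins."""
    d = np.asarray(events_xy, dtype=float) - np.asarray(center, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])                   # angle in [-pi, pi]
    keep = (r > 0) & (r <= r_max)                          # drop the centre and far events
    ring = (np.log1p(r[keep]) / np.log1p(r_max) * n_rings).astype(int)
    ring = np.clip(ring, 0, n_rings - 1)                   # log-spaced radial bins
    wedge = ((theta[keep] + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    hist = np.zeros((n_rings, n_wedges))
    np.add.at(hist, (ring, wedge), 1.0)                    # accumulate event counts
    return hist / max(hist.sum(), 1.0)                     # L1-normalized descriptor
```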

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of those changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
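
    The event stream described above has a very simple structure, which the sketch below makes concrete: each event carries pixel coordinates, a timestamp, and a polarity, and a naive way to visualize a stream is to sum signed polarities over a time window. Field names and dtypes are generic choices, not any particular sensor's format.

```python
# Minimal sketch of an event stream and naive accumulation into a frame.
import numpy as np

# One event: pixel location (x, y), timestamp in microseconds, polarity (+1/-1).
events = np.array([(12, 40, 1_000, +1), (13, 40, 1_250, +1), (12, 41, 1_400, -1)],
                  dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "i1")])

def accumulate(events, shape, t0, t1):
    """Sum signed polarities over the window [t0, t1) into an image."""
    frame = np.zeros(shape, dtype=np.int32)
    win = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (win["y"], win["x"]), win["p"])       # handles repeated pixels
    return frame

print(accumulate(events, shape=(64, 64), t0=0, t1=2_000)[40, 12])  # -> 1
```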