Image-Dependent Spatial Shape-Error Concealment
Existing spatial shape-error concealment techniques are broadly based upon either parametric curves that exploit geometric information concerning a shape's contour, or object shape statistics using a combination of Markov random fields and maximum a posteriori estimation. Both categories are, to some extent, able to mask errors caused by information loss, provided the shape is considered independently of the image/video. However, they palpably do not afford the best solution in applications where shape is used as metadata to describe image and video content. This paper presents a novel image-dependent spatial shape-error concealment (ISEC) algorithm that uses both image and shape information by employing the established rubber-band contour detecting function, with the novel enhancement of automatically determining the optimal width of the band to achieve superior error concealment. Experimental results corroborate, both qualitatively and numerically, the enhanced performance of the new ISEC strategy compared with established techniques.
Sparse visual models for biologically inspired sensorimotor control
Given the importance of using resources efficiently in the competition for survival, it is reasonable to think that natural evolution has discovered efficient cortical coding strategies for representing natural visual information. Sparse representations have intrinsic advantages in terms of fault-tolerance and low-power consumption potential, and can therefore be attractive for robot sensorimotor control with powerful dispositions for decision-making. Inspired by the mammalian brain and its visual ventral pathway, we present in this paper a hierarchical sparse coding network architecture that extracts visual features for use in sensorimotor control. Testing with natural images demonstrates that this sparse coding facilitates processing and learning in subsequent layers. Previous studies have shown how the responses of complex cells could be sparsely represented by a higher-order neural layer. Here we extend sparse coding to each network layer, showing that detailed modeling of earlier stages in the visual pathway enhances the characteristics of the receptive fields developed in subsequent stages. The resulting network is more dynamic, with richer and more biologically plausible input and output representations.
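The sparse representations this abstract builds on are commonly inferred by minimizing a reconstruction error plus an L1 penalty. As a minimal sketch (not the paper's actual network), one layer of sparse coding can be computed with ISTA, the standard iterative soft-thresholding algorithm; the dictionary `D` and penalty `lam` here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code a minimizing ||x - D a||^2 / 2 + lam * ||a||_1
    via ISTA (iterative soft-thresholding). D has one dictionary atom
    per column; x is the input (e.g., an image patch)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

Because the L1 term drives most coefficients exactly to zero, the resulting code is sparse in the sense the abstract describes: only a few atoms are active per input, which is what makes downstream learning cheaper.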
Discovering Neuronal Cell Types and Their Gene Expression Profiles Using a Spatial Point Process Mixture Model
Cataloging the neuronal cell types that comprise circuitry of individual
brain regions is a major goal of modern neuroscience and the BRAIN initiative.
Single-cell RNA sequencing can now be used to measure the gene expression
profiles of individual neurons and to categorize neurons based on their gene
expression profiles. While the single-cell techniques are extremely powerful
and hold great promise, they are currently still labor intensive, have a high
cost per cell, and, most importantly, do not provide information on spatial
distribution of cell types in specific regions of the brain. We propose a
complementary approach that uses computational methods to infer the cell types
and their gene expression profiles through analysis of brain-wide single-cell
resolution in situ hybridization (ISH) imagery contained in the Allen Brain
Atlas (ABA). We measure the spatial distribution of neurons labeled in the ISH
image for each gene and model it as a spatial point process mixture, whose
mixture weights are given by the cell types which express that gene. By fitting
a point process mixture model jointly to the ISH images, we infer both the
spatial point process distribution for each cell type and their gene expression
profile. We validate our predictions of cell type-specific gene expression
profiles using single cell RNA sequencing data, recently published for the
mouse somatosensory cortex. Jointly with the gene expression profiles, cell
features such as cell size, orientation, intensity and local density level are
inferred per cell type.
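The core modeling idea above is that the intensity of labeled cells for a gene is a weighted sum of per-cell-type intensities, with weights given by each type's expression of that gene. A minimal sketch of that mixture, with hypothetical Gaussian-bump intensities standing in for the fitted per-type point process components:

```python
import numpy as np

def gaussian_intensity(center, scale, amp=1.0):
    """Toy per-cell-type spatial intensity: an isotropic Gaussian bump.
    Stands in for the fitted lambda_t(s) of one cell type."""
    center = np.asarray(center, dtype=float)
    return lambda p: amp * np.exp(-np.sum((p - center) ** 2, axis=1)
                                  / (2 * scale ** 2))

def gene_intensity(points, type_intensities, expression):
    """Mixture intensity for one gene at spatial locations `points` (n, 2):
    lambda_g(s) = sum_t pi_{g,t} * lambda_t(s), where pi_{g,t} is the
    (nonnegative) expression of gene g by cell type t."""
    vals = np.stack([lam(points) for lam in type_intensities])  # (T, n)
    return np.asarray(expression, dtype=float) @ vals           # (n,)
```

Under this view, a gene expressed by only one type contributes labeled cells only where that type's intensity is high, which is what lets joint fitting across many ISH images disentangle both the spatial distributions and the expression profiles.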
Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a
variety of models including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing
dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that the above variety of approaches can all be
unified into a single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
were strongly constrained by the input statistics and preprocessing, but
exhibited only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities such as auditory models
or V2 development leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
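The unifying rule the abstract names has a compact generic form: the weight update is proportional to the input times a nonlinear function of the postsynaptic activation, Δw ∝ f(w·x) x. A minimal sketch for a single neuron, with an illustrative cubic nonlinearity and a norm-constraint stabilizer (both choices are assumptions for the example, not prescriptions from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_hebbian(X, n_steps=2000, eta=0.01, f=lambda u: u ** 3):
    """Train one neuron's weight vector with the nonlinear Hebbian rule
    dw = eta * f(w @ x) * x, renormalizing w after each step so the
    weights stay bounded. X holds one input pattern per row."""
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        x = X[rng.integers(len(X))]    # sample one input pattern
        u = w @ x                      # postsynaptic activation
        w += eta * f(u) * x            # nonlinear Hebbian update
        w /= np.linalg.norm(w)         # norm constraint (stabilizer)
    return w
```

Different choices of `f` recover different classical models (e.g., an expansive nonlinearity on whitened inputs behaves like projection pursuit / ICA-style learning), which is the sense in which the rule unifies the approaches listed above.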
- …