
    Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery

    Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding, this overcompleteness poses a computational challenge both for shaping the signal-processing channels and for using the large, sparse representations efficiently. We argue that high levels of overcompleteness become computationally tractable by imposing sparsity on synaptic activity, and we show that such structural sparsity can be facilitated by a statistics-based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations and can therefore be significantly compressed. Atypical parts, on the other hand, represent local features and are the actual subjects of sparse coding. When applied to natural images, our decomposition-based sparse coding model efficiently forms overcomplete codes, yielding both center-surround and oriented filters similar to those observed in the retina and the primary visual cortex, respectively. We therefore hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision.
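
    As an illustration of the two-stage architecture described in the abstract, the following Python sketch (not the authors' code) splits a patch matrix into a low-rank "typical" part and a sparse "atypical" part, and sparse-codes only the latter. The function name `decompose_then_sparse_code` is illustrative, scikit-learn's MiniBatchDictionaryLearning is a stand-in for the paper's sparse coding stage, and `rpca` refers to the RPCA sketch given further below.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def decompose_then_sparse_code(X, rpca, n_atoms=256):
    """X: (n_patches, n_pixels) matrix of image patches.
    rpca: a callable returning a low-rank/sparse split of X
    (see the RPCA sketch below)."""
    # Typical part L captures large-scale correlations (compressible);
    # atypical part S carries the local features to be sparse coded.
    L, S = rpca(X)
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_algorithm='omp')
    codes = learner.fit_transform(S)   # sparse-code the atypical part only
    return L, learner.components_, codes
```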

    RPCA pseudo-code.

    $\mathcal{S}_\tau[x] = \operatorname{sgn}(x)\max(|x| - \tau,\, 0)$ denotes the shrinkage (soft-thresholding) operator, acting on matrices componentwise. For a matrix $X$, $\mathcal{D}_\tau(X) = U\mathcal{S}_\tau(\Sigma)V^{*}$ denotes the singular value thresholding operator, where $X = U\Sigma V^{*}$ is the singular value decomposition.
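
    A minimal NumPy sketch of these two operators, together with the standard inexact augmented Lagrangian iteration for Principal Component Pursuit (following Candès et al., 2011); the parameter defaults and tolerances are assumptions, not values taken from this paper:

```python
import numpy as np

def shrink(M, tau):
    # Soft-thresholding (shrinkage) operator S_tau, applied componentwise.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    # Singular value thresholding operator D_tau: shrink the singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(X, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit via inexact ALM; returns low-rank L
    and sparse S with X ~ L + S. Default lam = 1/sqrt(max(n, m)) and the
    mu heuristic follow Candès et al. (2011)."""
    n, m = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n, m))
    if mu is None:
        mu = n * m / (4.0 * np.abs(X).sum())
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(max_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(X - L + Y / mu, lam / mu)    # sparse update
        residual = X - L - S
        Y += mu * residual                      # dual variable update
        if np.linalg.norm(residual) <= tol * np.linalg.norm(X):
            break
    return L, S
```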

    Different basis types of RPCA preprocessing and Sparse Coding.

    Sample receptive fields are scaled into the range [0,1]. (A) No RPCA: columns of the learned dictionary. (B) Receptive fields learned after PCA pre-filtering: features show wavy, global structure. (C) Features ('global filters') of the low-dimensional signal (dimension = 17). (D) Reverse correlation of the full-rank sparsified signal yields stereotypical DoG-like filters with symmetric 2D structure. The figure shows the profile of the central section; at higher parameter values the negative basin around the peak gets deeper. (E) Randomly selected sparse coding filter sets at several over-completeness values. With increasing values the filters get smaller and more localized (i.e. cleaner). (F) For comparison, a set of sparse coding filters and the corresponding linear approximations (normalized reverse correlation) are shown.

    The pseudocode of the Subspace Pursuit method.

    The goal is to represent the input with minimal reconstruction error using only a small subset of the basis vectors [23]. SP differs from other iterative greedy methods in the incremental refinement of the selected basis subset. First, a representation is generated with the help of the full basis set (using pseudoinverse computations). During the iteration, basis vectors are selected based on the amplitude of the corresponding coordinates of the representation. The resulting residual (the difference between the original input and the approximation obtained by projecting the representation onto the input space) is then projected back to the representation space and another set of basis vectors is chosen. The two selected subsets are then fused (expansion) and the resulting expanded set is used again to project the original input onto the representation space. Finally, a new set of basis vectors is selected by the amplitude of the corresponding coordinates of the projection (shrinkage). Iteration stops when the norm of the residual no longer decreases. Notation: a sub-matrix of the dictionary is formed from the columns whose indices appear in the selected index set; the index set of the largest-amplitude components of a vector is used in the selection steps. A sketch of the procedure is given below.
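
    A NumPy sketch of Subspace Pursuit as described in the caption (after Dai & Milenkovic [23]); the function and variable names are illustrative, and the stopping rule follows the caption rather than any tuned implementation:

```python
import numpy as np

def subspace_pursuit(Phi, y, K, max_iter=100):
    """Phi: (m, n) matrix whose columns are basis vectors; y: (m,) input;
    K: target sparsity. Returns a K-sparse coefficient vector."""
    def ls_fit(idx):
        # Least-squares (pseudoinverse) projection onto selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        return coef, y - Phi[:, idx] @ coef

    # Initialization: K columns most correlated with the input.
    T = np.argsort(-np.abs(Phi.T @ y))[:K]
    coef, residual = ls_fit(T)
    best = np.linalg.norm(residual)
    for _ in range(max_iter):
        # Expansion: fuse current set with K columns matching the residual.
        T_exp = np.union1d(T, np.argsort(-np.abs(Phi.T @ residual))[:K])
        coef_exp, _ = ls_fit(T_exp)
        # Shrinkage: keep the K largest-amplitude coordinates, then refit.
        T_new = T_exp[np.argsort(-np.abs(coef_exp))[:K]]
        coef_new, res_new = ls_fit(T_new)
        if np.linalg.norm(res_new) >= best:   # residual no longer decreases
            break
        T, coef, residual = T_new, coef_new, res_new
        best = np.linalg.norm(res_new)
    x = np.zeros(Phi.shape[1])
    x[T] = coef
    return x
```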