18 research outputs found

    Information Flow in Color Appearance Neural Networks

    Color Appearance Models are biological networks that consist of a cascade of linear+nonlinear layers which modify the linear measurements at the retinal photoreceptors, leading to an internal (nonlinear) representation of color that correlates with psychophysical experience. The basic layers of these networks are: (1) chromatic adaptation (normalization of the mean and covariance of the color manifold), (2) a change to opponent color channels (a PCA-like rotation in color space), and (3) saturating nonlinearities that yield perceptually Euclidean color representations (similar to dimension-wise equalization). The Efficient Coding Hypothesis argues that these transforms should emerge from information-theoretic goals. If this hypothesis holds in color vision, the question is: what is the coding gain due to the different layers of the color appearance networks? In this work, a representative family of Color Appearance Models is analyzed in terms of how the redundancy among the chromatic components is modified along the network and how much information is transferred from the input data to the noisy response. The proposed analysis relies on data and methods that were not available before: (1) new colorimetrically calibrated scenes under different CIE illuminations for a proper evaluation of chromatic adaptation, and (2) new statistical tools, based on Gaussianization, to estimate multivariate information-theoretic quantities between multidimensional sets. Results confirm that the Efficient Coding Hypothesis holds for current color vision models and identify the psychophysical mechanisms critically responsible for the gains in information transfer: opponent channels and their nonlinear nature are more important than chromatic adaptation at the retina.
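
    A minimal Python sketch of the three layers analyzed above, applied to hypothetical tristimulus data, may help fix ideas. The cascade below (a von Kries-like scaling, a PCA rotation to opponent-like channels, a tanh saturation) and the Gaussian surrogate for redundancy are illustrative assumptions, not the models or the Gaussianization-based estimators used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical retinal (L, M, S) samples: correlated, positive "tristimulus" data.
lms = np.abs(rng.multivariate_normal(
    mean=[0.6, 0.5, 0.3],
    cov=[[0.040, 0.030, 0.010],
         [0.030, 0.035, 0.012],
         [0.010, 0.012, 0.020]],
    size=5000))

# (1) Chromatic adaptation: von Kries-like scaling by the channel means.
adapted = lms / lms.mean(axis=0)

# (2) Opponent channels: PCA-like rotation of the (centered) adapted space.
centered = adapted - adapted.mean(axis=0)
_, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
opponent = centered @ eigvecs

# (3) Saturating nonlinearity (tanh as a stand-in for the perceptual one).
response = np.tanh(opponent)

# Crude redundancy surrogate: total correlation under a Gaussian assumption
# (the paper uses Gaussianization-based estimators of the true quantities).
def gaussian_total_correlation(x):
    c = np.cov(x, rowvar=False)
    return 0.5 * (np.sum(np.log(np.diag(c))) - np.linalg.slogdet(c)[1])

# Note: the diagonal scaling in (1) leaves this surrogate unchanged, and the
# saturation in (3) acts mostly on the marginals, so in this toy example the
# drop in redundancy comes from the opponent rotation in (2).
for name, stage in [("LMS", lms), ("adapted", adapted),
                    ("opponent", opponent), ("nonlinear", response)]:
    print(f"{name:9s} total correlation = {gaussian_total_correlation(stage):.4f}")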

    Psychophysics of Artificial Neural Networks Questions Classical Hue Cancellation Experiments

    We show that classical hue cancellation experiments lead to human-like opponent curves even if the task is done by trivial (identity) artificial networks. Specifically, human-like opponent spectral sensitivities always emerge in artificial networks as long as (i) the retina converts the input radiation into any tristimulus-like representation, and (ii) the post-retinal network solves the standard hue cancellation task, i.e. the network looks for the weights of the cancelling lights such that every monochromatic stimulus plus the weighted cancelling lights matches a grey reference in the (arbitrary) color representation used by the network. In fact, the specific cancellation lights (and not the network architecture) are the key to obtaining human-like curves: results show that the classical choice of lights is the one that leads to the best (most human-like) result, and any other choice leads to progressively different spectral sensitivities. We show this in two ways: through artificial psychophysics using a range of networks with different architectures and a range of cancellation lights, and through a change-of-basis theoretical analogy of the experiments. This suggests that the opponent curves of the classical experiment are just a by-product of the front-end photoreceptors and of a very specific experimental choice, and do not inform about the downstream color representation. In fact, the architecture of the post-retinal network (signal recombination or internal color space) seems irrelevant for the emergence of the curves in the classical experiment. This result in artificial networks questions the conventional interpretation of the classical result in humans by Jameson and Hurvich.
    Comment: 17 pages, 7 figures
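
    As a sketch of how the hue cancellation task reduces to a linear problem under an identity network, the Python snippet below solves, for each wavelength, for the cancelling-light weights that bring a monochromatic stimulus plus the weighted cancelling lights to a grey reference. The Gaussian cone-like sensitivities, the cancelling wavelengths and the grey scaling are hypothetical placeholders, not the colorimetric data used in the paper.

import numpy as np

wl = np.arange(400, 701, 5).astype(float)          # wavelengths in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical tristimulus-like front end (a stand-in for real cone fundamentals).
cones = np.stack([gauss(565, 50), gauss(540, 45), gauss(445, 30)])    # 3 x n_wl

def tristimulus(spectrum):
    return cones @ spectrum

# Cancelling lights: narrow-band spectra at illustrative wavelengths.
cancel_wls = [430, 490, 560, 610]
C = np.stack([tristimulus(gauss(c, 5)) for c in cancel_wls], axis=1)  # 3 x 4

# Grey reference: a (scaled) flat spectrum seen through the same front end.
grey = tristimulus(np.ones_like(wl)) / len(wl)

# Identity "post-retinal network": matching is done in the tristimulus space itself.
weights = np.zeros((len(wl), len(cancel_wls)))
for i in range(len(wl)):
    mono = np.zeros(len(wl))
    mono[i] = 1.0                                   # monochromatic stimulus
    t = tristimulus(mono)
    # Solve  t + C w = grey  for w in the least-squares sense.
    weights[i], *_ = np.linalg.lstsq(C, grey - t, rcond=None)

# Plotting weights[:, j] against wl yields opponent-like cancellation curves
# even though no opponent recombination was built into the "network".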

    Sequential Learning of Principal Curves: Summarizing Data Streams on the Fly

    When confronted with massive data streams, summarizing data with dimension reduction methods such as PCA raises theoretical and algorithmic difficulties. Principal curves act as a nonlinear generalization of PCA, and the present paper proposes a novel algorithm to automatically and sequentially learn principal curves from data streams. We show that our procedure is supported by regret bounds with optimal sublinear remainder terms. A greedy local search implementation (called slpc, for Sequential Learning Principal Curves) that incorporates both sleeping experts and multi-armed bandit ingredients is presented, along with its regret computation and its performance on synthetic and real-life data.
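
    The toy Python sketch below illustrates only the general idea of summarizing a stream on the fly with a polygonal curve updated point by point; it is not the slpc algorithm, and it omits the sleeping-experts/bandit machinery and the regret analysis entirely.

import numpy as np

rng = np.random.default_rng(1)

def stream(n):
    """Hypothetical stream: noisy points along a half circle, served one by one."""
    for _ in range(n):
        t = rng.uniform(0, np.pi)
        yield np.array([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=2)

# A polygonal curve with a fixed number of vertices (slpc adapts this online).
n_vertices = 10
vertices = np.column_stack([np.linspace(-1.0, 1.0, n_vertices),
                            np.zeros(n_vertices)])
counts = np.ones(n_vertices)

for x in stream(5000):
    k = int(np.argmin(np.linalg.norm(vertices - x, axis=1)))   # nearest vertex
    counts[k] += 1
    lr = 1.0 / counts[k]                                        # decaying step size
    vertices[k] += lr * (x - vertices[k])                       # pull it toward x
    # Light smoothing keeps neighbouring vertices chained into a curve.
    for j in (k - 1, k + 1):
        if 0 <= j < n_vertices:
            vertices[j] += 0.5 * lr * (x - vertices[j])

# 'vertices', read in order, now traces an approximate principal curve of the stream.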

    Divisive Normalization from Wilson-Cowan Dynamics

    Divisive Normalization and the Wilson-Cowan equations are influential models of neural interaction and saturation [Carandini and Heeger Nat.Rev.Neurosci. 2012; Wilson and Cowan Kybernetik 1973]. However, they have not yet been analytically related. In this work we show that Divisive Normalization can be obtained from the Wilson-Cowan model. Specifically, assuming that Divisive Normalization is the steady-state solution of the Wilson-Cowan differential equation, we find that the kernel that controls neural interactions in Divisive Normalization depends on the Wilson-Cowan kernel but also has a signal-dependent contribution. A standard stability analysis of a Wilson-Cowan model with the parameters obtained from our relation shows that the Divisive Normalization solution is a stable node. This stability demonstrates the consistency of our steady-state assumption and is in line with the straightforward use of Divisive Normalization with time-varying stimuli. The proposed theory provides a physiological foundation (a relation to a dynamical network with fixed wiring among neurons) for the functional arguments that have been made for the need of signal-dependent Divisive Normalization [e.g. in Coen-Cagli et al., PLoS Comp.Biol. 2012]. Moreover, this theory explains the modifications that had to be introduced ad hoc in the Gaussian kernels of Divisive Normalization in [Martinez et al. Front. Neurosci. 2019] to reproduce contrast responses. The proposed relation implies that the Wilson-Cowan dynamics also reproduce visual masking and subjective image distortion metrics, which up to now had been explained mainly via Divisive Normalization. Finally, this relation makes it possible to apply to Divisive Normalization the methods that have so far been developed for dynamical systems such as Wilson-Cowan networks.
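
    A toy Python sketch of the central idea, that Divisive Normalization can be read as the steady state of a recurrent dynamical system, is given below. The dynamics and the Gaussian interaction kernel are illustrative assumptions; the exact correspondence with the Wilson-Cowan kernel (including its signal-dependent term) is derived in the paper.

import numpy as np

n = 20                      # number of "neurons"
beta = 0.1                  # semisaturation constant of the normalization
idx = np.arange(n)
# Illustrative Gaussian interaction kernel (not the kernel derived in the paper).
H = 0.2 * np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)

rng = np.random.default_rng(2)
e = np.abs(rng.normal(1.0, 0.5, size=n))    # toy excitatory drive

# Euler-integrate the recurrent dynamics  dr/dt = e - r * (beta + H r)
# until they settle; the fixed point satisfies r_i = e_i / (beta + (H r)_i),
# i.e. a recurrent form of Divisive Normalization.
r = np.zeros(n)
dt = 0.01
for _ in range(20000):
    r = r + dt * (e - r * (beta + H @ r))

dn = e / (beta + H @ r)
print("max |r - e/(beta + H r)| =", np.max(np.abs(r - dn)))    # should be ~0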