
    The algorithm detects the number of longer-wavelength-sensitive cone classes and correctly classifies individual cones for a range of trichromatic mosaic parameters.

    <p>(<b>A</b>) The fraction of cones correctly classified for various combinations of L∶M cone ratio and M cone λ<sub>max</sub> value, when the number of longer-wavelength cone classes (L and M) was assumed to be 2. The S cone proportion was held at 6%, and S cones were given a λ<sub>max</sub> value of 420.7 nm in all simulations. The L cone λ<sub>max</sub> value was 558.9 nm. Each cell in the plot represents the aggregate results of three simulations, each with a different mosaic and each shown a different random draw of 2 million natural image patches. Accuracies are reported as the average of the accuracies for L and M cones, each calculated separately. For example, if the algorithm correctly classified 354 of 354 L cones and 1 of 22 M cones in a mosaic with an L∶M ratio of 16∶1, the overall accuracy reported here would be 52% (the mean of 354/354 and 1/22) rather than 94% (the fraction of all cones correctly classified). (<b>B</b>) The number of simulations (of three) for each L∶M ratio and M cone λ<sub>max</sub> value in which the algorithm correctly detected that there were two longer-wavelength-sensitive cone classes. The results for individual simulations are given in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003652#pcbi.1003652.s004" target="_blank">Figure S4</a>.</p>
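The per-class accuracy averaging used in this figure can be sketched in a few lines; the function name and inputs below are illustrative, not from the paper's code.

```python
def balanced_accuracy(correct_by_class, total_by_class):
    """Mean of the per-class accuracies (the reporting scheme in the caption)."""
    fractions = [c / t for c, t in zip(correct_by_class, total_by_class)]
    return sum(fractions) / len(fractions)

# The caption's example: 354 of 354 L cones and 1 of 22 M cones correct.
balanced = balanced_accuracy([354, 1], [354, 22])  # about 0.52, reported as 52%
pooled = (354 + 1) / (354 + 22)                    # about 0.94, not reported
```

Averaging per-class accuracies prevents the majority L class from masking failure on the rare M class.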

    Unsupervised Learning of Cone Spectral Classes from Natural Images

    <div><p>The first step in the evolution of primate trichromatic color vision was the expression of a third cone class not present in ancestral mammals. This observation motivates a fundamental question about the evolution of any sensory system: how is it possible to detect and exploit the presence of a novel sensory class? We explore this question in the context of primate color vision. We present an unsupervised learning algorithm capable of both detecting the number of spectral cone classes in a retinal mosaic and learning the class of each cone using the inter-cone correlations obtained in response to natural image input. The algorithm's ability to classify cones is in broad agreement with experimental evidence about functional color vision for a wide range of mosaic parameters, including those characterizing dichromacy, typical trichromacy, anomalous trichromacy, and possible tetrachromacy.</p></div>
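As a rough illustration of the inter-cone correlation signal the abstract describes, the sketch below builds a synthetic response matrix in which each "cone" carries one of two latent signals plus noise; all names and parameters are invented for the example, not drawn from the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mosaic responses: rows are image patches, columns are cones.
# Two hidden cone "classes" are faked by assigning each column a latent signal.
n_patches, n_cones = 5000, 40
latent = rng.standard_normal((n_patches, 2))
classes = rng.integers(0, 2, size=n_cones)          # hidden class of each cone
responses = latent[:, classes] + 0.5 * rng.standard_normal((n_patches, n_cones))

# Inter-cone correlation matrix: cones of the same class correlate strongly,
# which is the structure the unsupervised algorithm exploits.
corr = np.corrcoef(responses, rowvar=False)         # shape (n_cones, n_cones)
```

Real cones of the same class correlate because nearby image regions are similar and identical spectral sensitivities sample them identically.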

    Multidimensional scaling allows classification of cones for a typical trichromatic retinal mosaic.

    <p>(<b>A</b>) 3D embeddings of the correlation matrix of the mosaic from <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003652#pcbi-1003652-g002" target="_blank">Figure 2A</a>. Each point represents a single cone and is colored red, green, or blue for L, M, or S respectively, according to its actual identity in the mosaic. The 3D embeddings shown here and in other figures in this paper are oriented so that the same plane of the representational space described in the text is shown (first dimension horizontal, second dimension vertical). The absolute units on these axes are not meaningful, because MDS solutions are determined only up to a relative-distance-preserving transformation. (<b>B</b>) The same 3D embedding shown in <b>A</b>, zoomed in on the embedding of the L and M cones only. (<b>C</b>) The 3D embedding of the L and M cones from <b>A</b> after flattening. (<b>D</b>) A histogram of the positions of the embedding from <b>C</b> (<i>i.e.</i>, after flattening); best-fit skew normals are shown in red and green. Rotating animations that show the three-dimensional structure of the embeddings are available online (<a href="http://color.psych.upenn.edu/supplements/receptorlearning" target="_blank">http://color.psych.upenn.edu/supplements/receptorlearning</a>).</p>
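The embeddings in this figure come from multidimensional scaling of the inter-cone correlation matrix. Below is a minimal classical (Torgerson) MDS sketch; converting correlation to dissimilarity as 1 - corr and the toy block-structured matrix are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def classical_mds(dissim, n_components=3):
    """Classical (Torgerson) MDS of a symmetric dissimilarity matrix."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J             # double-centered Gram matrix
    w, v = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]     # keep the largest eigenvalues
    w_top = np.clip(w[idx], 0.0, None)           # drop tiny negative eigenvalues
    return v[:, idx] * np.sqrt(w_top)

# Toy stand-in for the cone correlation matrix: two tight "classes" of cones.
corr = np.full((6, 6), 0.2)
corr[:3, :3] = 0.9
corr[3:, 3:] = 0.9
np.fill_diagonal(corr, 1.0)
embedding = classical_mds(1.0 - corr)            # shape (6, 3)
```

In the embedding, cones of the same class land close together, so class structure becomes a clustering problem in the low-dimensional space.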

    The algorithm detects dichromatic and tetrachromatic retinal mosaics.

    <p>On the left are embeddings of the dichromatic retinal mosaic: (<b>A</b>) the full embedding; (<b>B</b>) the embedding zoomed in on just the L cones; (<b>C</b>) the flattened L cone embedding; and (<b>D</b>) a histogram of the positions of the flattened L cone embedding with the best fit of the single detected skew normal. On the right are embeddings of the tetrachromatic retinal mosaic: (<b>E</b>) the full embedding; (<b>F</b>) the embedding zoomed in on just the L, M, and anomalous (A) cones; (<b>G</b>) the flattened L, M, and A cone embedding; and (<b>H</b>) a histogram of the coordinates of the flattened L, M, and A cone embedding with the best-fit mixture of the detected skew normals. Note that the units on Panels <b>A</b>, <b>B</b>, <b>C</b>, <b>E</b>, <b>F</b>, and <b>G</b> are arbitrary, as MDS does not produce meaningful units, but rather yields a relative-distance-preserving embedding. Spectral sensitivity curves for L, M, and S cones are shown in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003652#pcbi-1003652-g002" target="_blank">Figure 2A</a>. Anomalous A cones were given λ<sub>max</sub> values of 545 nm, and the tetrachromatic retinal mosaic had an L∶M∶A ratio of 1∶1∶1. L, A, M, and S cones are colored red, yellow, green, and blue, respectively.</p>
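Detecting the number of cone classes amounts to model selection on the 1-D histogram of flattened embedding positions. The sketch below fits plain Gaussian mixtures with a small hand-rolled EM and compares BIC across candidate class counts; the paper fits skew normals, so Gaussians here are an explicit simplification, and all data are synthetic.

```python
import numpy as np

def gmm_bic_1d(x, k, n_iter=200):
    """BIC of a k-component 1-D Gaussian mixture fit by a crude EM sketch."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)      # quantile initialization
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)  # E-step: responsibilities
        nk = resp.sum(axis=0)
        pi = nk / len(x)                               # M-step: update parameters
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.maximum(
            np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 1e-3)
    loglik = np.log(dens.sum(axis=1)).sum()
    return -2.0 * loglik + (3 * k - 1) * np.log(len(x))

# Synthetic flattened-embedding positions: two overlapping cone "classes".
rng = np.random.default_rng(1)
positions = np.concatenate([rng.normal(-1.5, 0.4, 300),
                            rng.normal(+1.5, 0.4, 300)])
n_classes = min((1, 2, 3), key=lambda k: gmm_bic_1d(positions, k))
```

The detected component count plays the role of the number of longer-wavelength cone classes in panels D and H.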

    Natural image correlations are highly regular in both space and spectrum.

    <p>(<b>A</b>) An RGB rendering of a hyperspectral image taken from the natural image database described by Chakrabarti and Zickler <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003652#pcbi.1003652-Chakrabarti1" target="_blank">[12]</a>. The code used to render this figure is included in our GitHub repository (<a href="https://github.com/DavidBrainard/ReceptorLearning/" target="_blank">https://github.com/DavidBrainard/ReceptorLearning/</a>). (<b>B</b>) Image correlations for images from our combined database. Correlation is plotted as a function of distance (pixels). Each curve represents a different wavelength separation: the black curve represents no wavelength difference, while the yellow curve represents a difference of 320 nm (<i>i.e.</i>, the 400 nm channel correlated with the 720 nm channel).</p>
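The distance-and-wavelength correlation analysis can be sketched on a synthetic cube; the smooth random field, band count, and noise model below are assumptions standing in for real hyperspectral data.

```python
import numpy as np

def shifted_band_correlation(cube, band_i, band_j, shift):
    """Pearson correlation between band_i and band_j at a horizontal offset
    of `shift` pixels (shift >= 1); a minimal sketch of the panel-B analysis."""
    a = cube[:, :-shift, band_i].ravel()
    b = cube[:, shift:, band_j].ravel()
    return np.corrcoef(a, b)[0, 1]

# Synthetic "hyperspectral" cube: a smooth spatial field whose bands decorrelate
# progressively with band separation (mimicking wavelength separation).
rng = np.random.default_rng(2)
base = np.cumsum(np.cumsum(rng.standard_normal((64, 64)), axis=0), axis=1)
base = (base - base.mean()) / base.std()
bands = np.stack([base + 0.3 * i * rng.standard_normal((64, 64))
                  for i in range(8)], axis=-1)      # shape (rows, cols, bands)

r_near = shifted_band_correlation(bands, 0, 0, 1)   # same band, 1 px apart
r_far = shifted_band_correlation(bands, 0, 7, 1)    # distant bands, 1 px apart
```

As in the figure, correlation stays high at small spatial offsets and falls as the band (wavelength) separation grows.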

    The retinotopic organization of visual cortex as measured and modeled.

    <p>(<b>A</b>) The polar angle map of a subject from our 10° dataset, shown on an inflated left hemisphere. (<b>B</b>) The eccentricity map of the subject from panel A, shown on an inflated right hemisphere. (<b>C</b>) The algebraic model of retinotopic organization. V1, V2, and V3 are colored white, light gray, and dark gray, respectively. (<b>D</b>) The cortical surface atlas space (<i>fsaverage_sym</i>) from the occipital pole after flattening to the 2D surface. The Hinds V1 border <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003538#pcbi.1003538-Hinds1" target="_blank">[7]</a> is indicated by the dashed black line, and the algebraic model of retinotopic organization used in registration is plotted with the 0°, 90°, and 180° polar angle lines colored according to the legend and the 10° and 90° eccentricity lines dashed and colored white. Shown are the Calcarine Sulcus (CaS), the Parietal-occipital Sulcus (PoS), the Lingual Sulcus (LiS), the Inferior Occipital Sulcus (IOS), the Collateral Sulcus (CoS), the posterior Collateral Sulcus (ptCoS), the Inferior Temporal Sulcus (ITS), and the Occipital Pole (OP).</p>

    Errors by visual area for dataset D<sub>10°</sub>.

    <p><sup>a</sup> Errors are calculated in a typical leave-one-out fashion in which each subject is compared to the prediction found using all other subjects; all significant vertices between 1.25° and 8.75° of eccentricity are included, and the reported errors represent the median over all vertices from all subjects.</p><p><sup>b</sup> Median absolute leave-one-out error between expected and observed values of all vertices.</p><p><sup>c</sup> Median signed leave-one-out error (expected value minus observed value) of all vertices.</p><p><sup>d</sup> Median absolute leave-one-out error, as calculated by predicting the polar angle and eccentricity of the left-out subject from the confidence-weighted mean of all other subjects.</p><p><sup>e</sup> Median absolute error between observed values and those predicted by the algebraic model of retinotopy prior to any registration.</p><p><sup>f</sup> Median absolute error between observed values from two identical 20-minute scans.</p><p>Vertices for which the <i>F</i>-statistic of the polar angle and eccentricity assignments was below 5 were discarded.</p>
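The leave-one-out scheme in note (a) can be sketched as follows; the synthetic subject maps and noise level are invented, and real maps also carry the per-vertex confidence weights of note (d), which this sketch omits.

```python
import numpy as np

def loo_median_abs_error(values):
    """Median absolute leave-one-out error: each subject's map is compared
    with the unweighted mean of the remaining subjects' maps."""
    values = np.asarray(values)                      # (n_subjects, n_vertices)
    n = values.shape[0]
    total = values.sum(axis=0)
    errors = []
    for s in range(n):
        prediction = (total - values[s]) / (n - 1)   # mean of the other subjects
        errors.append(np.abs(prediction - values[s]))
    return np.median(np.concatenate(errors))

# Hypothetical maps: 19 subjects, 200 vertices, noisy copies of a shared truth.
rng = np.random.default_rng(3)
truth = rng.uniform(0.0, 90.0, size=200)
maps = truth + rng.normal(0.0, 5.0, size=(19, 200))
err = loo_median_abs_error(maps)
```

The median over all vertices from all subjects matches the aggregation described in the table notes.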

    Correction of Distortion in Flattened Representations of the Cortical Surface Allows Prediction of V1-V3 Functional Organization from Anatomy

    <div><p>Several domains of neuroscience offer map-like models that link location on the cortical surface to properties of sensory representation. Within cortical visual areas V1, V2, and V3, algebraic transformations can relate position in the visual field to the retinotopic representation on the flattened cortical sheet. A limit to the practical application of this structure-function model is that the cortex, while topologically a two-dimensional surface, is curved. Flattening of the curved surface to a plane unavoidably introduces local geometric distortions that are not accounted for in idealized models. Here, we show that this limitation is overcome by correcting the geometric distortion induced by cortical flattening. We use a mass-spring-damper simulation to create a registration between functional MRI retinotopic mapping data of visual areas V1, V2, and V3 and an algebraic model of retinotopy. This registration is then applied to the flattened cortical surface anatomy to create an anatomical template that is linked to the algebraic retinotopic model. This registered cortical template can be used to accurately predict the location and retinotopic organization of these early visual areas from cortical anatomy alone. Moreover, we show that prediction accuracy is maintained when extrapolating beyond the range of data used to inform the model, indicating that the registration reflects the retinotopic organization of visual cortex. We provide code for the mass-spring-damper technique, which has general utility for the registration of cortical structure and function beyond the visual cortex.</p></div>
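As a schematic of the mass-spring-damper idea (not the authors' implementation, which is provided with the paper), the sketch below relaxes a 1-D chain of unit masses joined by zero-rest-length springs, with its endpoints pinned to anchors; damped integration drives the interior nodes to an even spacing, analogous to how the simulation pulls the flattened mesh into register with anchor points from the algebraic model.

```python
import numpy as np

def relax_chain(n=11, anchors=(0.0, 10.0), k=1.0, damping=0.9,
                dt=0.1, n_steps=2000):
    """Relax a 1-D chain of unit masses joined by springs, with the two
    endpoints pinned to anchor positions (schematic only; the real method
    acts on a 2-D cortical mesh with data-driven anchors)."""
    x = np.linspace(0.0, 1.0, n)        # deliberately bad initial spacing
    x[0], x[-1] = anchors
    v = np.zeros(n)
    for _ in range(n_steps):
        force = np.zeros(n)
        stretch = np.diff(x)            # spring extension between neighbours
        force[:-1] += k * stretch       # each node pulled toward the next node
        force[1:] -= k * stretch        # equal and opposite reaction force
        v = damping * (v + dt * force)  # damped semi-implicit Euler step
        x += dt * v
        x[0], x[-1] = anchors           # endpoints stay pinned
        v[0] = v[-1] = 0.0
    return x

relaxed = relax_chain()                 # interior nodes spread evenly
```

At equilibrium each interior node sits midway between its neighbours, so the chain converges to even spacing between the anchors.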

    Polar angle organization.

    <p>(<b>A</b>) The mean weighted aggregate polar angle map of all subjects in dataset D<sub>10°</sub> shown in the cortical surface atlas space. (<b>B</b>) The mean weighted aggregate polar angle map from panel A shown in the corrected topology following MSD warping. A line plot of the algebraic model to which the MSD simulation registered the functional data is shown over the functional data. (<b>C</b>) The polar angle template plotted on the <i>fsaverage_sym</i> pial surface. This template was calculated by converting the prediction of polar angle from the idealized model, as applied to vertices in the corrected topology, back to the <i>fsaverage_sym</i> atlas. (<b>D</b>) Median absolute leave-one-out polar angle error for all vertices with predicted eccentricities between 1.25° and 8.75°, shown in the <i>fsaverage_sym</i> atlas space. This error was calculated by comparing the predicted polar angle generated from each subset of 18 of the 19 subjects in the 10° dataset to the observed polar angle of the remaining subject. The median absolute overall leave-one-out error is 10.93° (<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003538#pcbi-1003538-t001" target="_blank">Tab. 1</a>). The highest errors occur near the foveal confluence and at the dorsal border of V3. (<b>E</b>) Absolute leave-one-out error of the polar angle prediction across all regions (V1, V2, and V3), plotted according to the predicted polar angle value. The thin gray line represents the median error, while the thick black line shows a 5th-order polynomial fit to the median error. The dashed lines demarcate similar fits to the upper and lower error quartiles. Error plots for individual regions are given in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003538#pcbi.1003538.s001" target="_blank">Fig. S1</a>.</p>
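Aggregating polar angle across subjects requires circular averaging, since polar angle is periodic; the sketch below shows a confidence-weighted circular mean (the weighting scheme is an assumption for illustration, not taken from the paper's code).

```python
import numpy as np

def weighted_circular_mean(angles_deg, weights):
    """Confidence-weighted mean of angular data. Angles are averaged as unit
    vectors rather than raw numbers; the result lies in (-180, 180]."""
    theta = np.deg2rad(np.asarray(angles_deg, dtype=float))
    w = np.asarray(weights, dtype=float)
    s = (w * np.sin(theta)).sum()
    c = (w * np.cos(theta)).sum()
    return np.rad2deg(np.arctan2(s, c))

# Angles straddling the wrap-around point average to 0, not to 180.
mean_angle = weighted_circular_mean([350.0, 10.0], [1.0, 1.0])
```

A naive arithmetic mean of 350° and 10° would give 180°, the opposite direction, which is why circular statistics are needed for the aggregate maps.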

    Eccentricity organization.

    <p>(<b>A</b>) The mean weighted aggregate eccentricity map of all subjects in dataset D<sub>10°</sub> shown in the <i>fsaverage_sym</i> cortical atlas space. (<b>B</b>) The mean weighted aggregate eccentricity map from panel A shown in the corrected topology following MSD warping. A line plot of the algebraic model to which the MSD simulation registered the functional data is shown. (<b>C</b>) The eccentricity template plotted on the <i>fsaverage_sym</i> pial surface. This template was calculated by converting the prediction of eccentricity from the algebraic model, as applied to vertices in the corrected topology, back to the <i>fsaverage_sym</i> topology. (<b>D</b>) Median absolute leave-one-out eccentricity error for all vertices with predicted eccentricities between 1.25° and 8.75°, shown in the <i>fsaverage_sym</i> atlas space. This error was calculated by comparing the predicted eccentricity generated from each subset of 18 of the 19 subjects in the 10° dataset to the observed eccentricity of the remaining subject. The median absolute overall leave-one-out error is 0.41° (<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003538#pcbi-1003538-t001" target="_blank">Tab. 1</a>). The highest errors occur near the outer eccentricity border of our stimulus. (<b>E</b>) Absolute leave-one-out error of the eccentricity prediction across all regions (V1, V2, and V3), plotted according to the predicted eccentricity value. Error plots for individual regions are given in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003538#pcbi.1003538.s002" target="_blank">Fig. S2</a>. (<b>F</b>) The mean weighted aggregate eccentricity map of all subjects in dataset D<sub>20°</sub> shown in the cortical patch corrected by MSD warping to the D<sub>10°</sub> dataset. Although this dataset includes eccentricities beyond those used to discover the corrected topology, the 20° aggregate data are in good (although not perfect) agreement with the prediction.</p>
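The panel-E style summary, median absolute error binned by the predicted value with a 5th-order polynomial fit through the binned medians, can be sketched as below; the synthetic errors and bin choices are illustrative only.

```python
import numpy as np

# Hypothetical per-vertex predicted values and absolute errors.
rng = np.random.default_rng(4)
predicted = rng.uniform(0.0, 180.0, size=4000)
abs_error = np.abs(rng.normal(10.0 + 0.05 * predicted, 5.0))

# Median error within bins of the predicted value (the thin line in panel E)...
bins = np.linspace(0.0, 180.0, 19)
centers = 0.5 * (bins[:-1] + bins[1:])
medians = np.array([np.median(abs_error[(predicted >= lo) & (predicted < hi)])
                    for lo, hi in zip(bins[:-1], bins[1:])])

# ...and a 5th-order polynomial fit to the binned medians (the thick line).
coeffs = np.polyfit(centers, medians, deg=5)
fit = np.polyval(coeffs, centers)
```

The same recipe, applied to the upper and lower quartiles instead of the median, yields the dashed quartile curves described in the captions.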