
    Fragment-Based Learning of Visual Object Categories in Non-Human Primates

    When we perceive a visual object, we implicitly or explicitly associate it with an object category we know. Recent research has shown that the visual system can use local, informative image fragments of a given object, rather than the whole object, to classify it into a familiar category. We have previously reported, using human psychophysical studies, that when subjects learn new object categories using whole objects, they incidentally learn informative fragments, even when not required to do so. However, the neuronal mechanisms by which we acquire and use informative fragments, as well as category knowledge itself, have remained unclear. Here we describe the methods by which we adapted the relevant human psychophysical methods to awake, behaving monkeys and replicated key previous psychophysical results. This establishes awake, behaving monkeys as a useful system for future neurophysiological studies not only of informative fragments in particular, but also of object categorization and category learning in general.

    Effect of M-scaling on categorization performance in Experiment 3.

    <p>The animals performed a fragment-based categorization task where the sample stimulus in each trial was a fragment and the test stimulus was a whole object. All stimuli were presented at an eccentricity of 5° in the lower right quadrant. The performance of the animals for Main fragments is shown as a function of the fragment size. The performance for the Control fragments was at chance levels, as expected (not shown). For any given size, all stimuli, including all whole objects and fragments, were magnified by the same scaling factor. See <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#s4" target="_blank">Materials and Methods</a> for details. The <i>open arrow</i> denotes the original size of the fragment (<i>i.e</i>., without magnification). The <i>filled blue arrow</i> denotes the M-scaled size appropriate for 5°. The <i>dotted horizontal lines</i> denote the performance of either animal when the stimuli were viewed foveally at their original, unmagnified size. Typical objects from the Main class and Control class are shown in the bottom right inset, with three selected Main fragments highlighted by <i>blue squares</i> on the object from the Main class.</p>
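The M-scaling referred to in this caption can be illustrated with a common linear cortical-magnification model, in which a foveal stimulus of size S0 is magnified to S(E) = S0 · (1 + E/E2) at eccentricity E. This is a generic sketch, not the paper's actual scaling function, and the constant `e2` below is a placeholder value:

```python
# Hedged sketch: a generic linear cortical-magnification scaling rule.
# The constant e2 (the eccentricity at which magnification halves) is a
# placeholder; the actual value depends on species and visual area.
def m_scaled_size(original_size_deg, eccentricity_deg, e2=0.75):
    """Return the size to which a foveal stimulus should be magnified at
    the given eccentricity so that it covers a comparable cortical area."""
    return original_size_deg * (1.0 + eccentricity_deg / e2)
```

With these placeholder numbers, a 1° foveal fragment would be magnified several-fold at the 5° eccentricity used in the experiment.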

    Changes in categorization performance as a function of training.

    <p>Two animals fully trained in the categorization task learned two hitherto unknown categories (<i>icons at top left</i>). The percentages of correct trials during successive blocks of 50 trials each are shown for each animal. The <i>dashed line</i> represents chance level performance (50% correct). The training shown took 38 and 42 min from beginning to end, for Monkey 00 and Monkey 01, respectively. These learning durations mean that one can, in principle, carry out simultaneous microelectrode recordings of neuronal activity. See text for details.</p>

    The trial paradigm.

    <p>The animal performed a delayed same-different categorization task while maintaining fixation. After the animal established fixation on a central fixation target (‘+’) within an imaginary ±0.5° window (dashed square in far left frame), two eccentric stimuli were presented sequentially, each followed by a delay. After the second delay, the fixation target was turned off, at which time the animal indicated whether or not the two stimuli belonged to the same category by making a direct saccade to an appropriate saccade target (small blue squares in the far left frame). During the training phase of the study, both stimuli in a given trial were whole objects. This figure shows a non-matching trial during training. Trials during the testing phase were identical (not shown), except that one of the stimuli in each trial was a fragment presented as a partial view of an object behind a light gray opaque occluder with a corresponding hole in it; the other stimulus in the trial was a whole object. In Experiments 1, 2 and 3, the fragment was presented as the first stimulus in each trial. See <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#s4" target="_blank">Materials and Methods</a> for details.</p>
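The trial sequence described in this caption can be summarized as an ordered list of epochs plus a decision rule; the epoch labels below are illustrative, and no timing parameters from the paper are assumed:

```python
# Hedged sketch of the delayed same-different trial structure described
# above. Epoch order follows the caption; names are illustrative labels.
TRIAL_EPOCHS = [
    "fixation",        # acquire fixation on '+' within the ±0.5° window
    "sample",          # first eccentric stimulus (a fragment in Expts. 1-3)
    "delay_1",
    "test",            # second eccentric stimulus (a whole object)
    "delay_2",
    "choice_saccade",  # fixation target off; saccade to a choice target
]

def correct_target(sample_category, test_category):
    """The decision rule: saccade to the 'same' target iff the two
    stimuli belong to the same category, else to the 'different' target."""
    return "same" if sample_category == test_category else "different"
```

The figure's non-matching example corresponds to `correct_target` returning `"different"`.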

    Performance in Experiment 2.

    <p>Each bar shows the average performance (±SEM) of the animals with a given fragment. The <i>dotted blue line</i> denotes 50%, or chance level performance. The <i>light red lines</i> denote the mean (solid line) and ±SEM (dashed lines) performance of the animals during the respective last four blocks of training. Panels (<b>A</b>), (<b>B</b>) and (<b>C</b>) show the responses of the animals in the Main task (<i>i.e</i>., Z <i>vs</i>. X) using Main, Control and IPControl fragments respectively. With each Main fragment, the performance was significantly above chance (binomial tests, <i>p</i><0.05), and indistinguishable from the performance using whole objects (binomial tests, <i>p</i>>0.05, data not shown).</p>
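The binomial tests against chance mentioned here can be sketched as an exact one-sided test of the number of correct trials against a 50% chance rate; this is a generic illustration of the statistic, not the paper's analysis code:

```python
from math import comb

def binomial_p_above_chance(n_correct, n_trials, p_chance=0.5):
    """One-sided exact binomial test: the probability of observing at
    least n_correct successes in n_trials if the true success rate were
    the chance level p_chance. Small values indicate above-chance
    performance."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )
```

For example, a perfect 10-of-10 block under a 50% chance rate yields p = 0.5¹⁰ ≈ 0.001.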

    Performance over the course of testing in Experiment 2.

    <p>The performance of the animals during the testing sessions is shown. Each data point is averaged from 120 trials from each animal (60 trials each of Main and Control fragments), randomly interleaved with each other. Note that the performance is lower than the average response for Main fragments (<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone-0015444-g007" target="_blank">Fig. 7A</a>), because the responses were averaged across Main and Control fragments. Similar results were obtained when the data were analyzed separately for Main, Control or IPControl fragments in this experiment and in Experiment 1 (3-way ANOVA, testing blocks × fragment number × fragment type; <i>p</i><0.05 for testing blocks).</p>

    Mutual Information of Individual Fragments in Experiment.

    <p>Mutual Information of Individual Fragments in Experiment.</p>
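In the fragment-based framework, the mutual information of an individual fragment is typically computed between a binary fragment-presence variable and the class label. The sketch below shows that standard computation under that assumption; it is not the paper's actual implementation:

```python
from math import log2

def fragment_mutual_information(counts):
    """Mutual information I(F; C) in bits between a binary
    fragment-presence variable F and a binary class label C.
    `counts` maps joint outcomes {(f, c): n} for f, c in {0, 1}."""
    total = sum(counts.values())
    # Marginal distributions of fragment presence and class label.
    pf = {f: sum(n for (ff, _), n in counts.items() if ff == f) / total
          for f in (0, 1)}
    pc = {c: sum(n for (_, cc), n in counts.items() if cc == c) / total
          for c in (0, 1)}
    mi = 0.0
    for (f, c), n in counts.items():
        if n == 0:
            continue  # 0 * log(0) is taken as 0
        p = n / total
        mi += p * log2(p / (pf[f] * pc[c]))
    return mi
```

A fragment whose presence is independent of the class carries 0 bits; one that perfectly predicts a binary class carries 1 bit.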

    Performance of the animals in Experiment 1.

    <p>Each bar shows the average percentage (±SEM) of trials in which the animals classified a given object as belonging to class X given the corresponding fragment. The <i>dotted blue line</i> denotes 50%, or chance level performance. The <i>light red lines</i> denote the mean (solid line) and ±SEM (dashed lines) performance of the animals during the respective last four blocks of training. Panels (<b>A</b>), (<b>B</b>) and (<b>C</b>) show the responses of the animals in the Main task (<i>i.e</i>., X <i>vs</i>. Y) using Main, Control and IPControl fragments respectively. With each Main fragment, the performance was significantly above chance (binomial tests, <i>p</i><0.05), and indistinguishable from the performance using whole objects (binomial tests, <i>p</i>>0.05, data not shown).</p>

    Naturalistic shape classes generated by virtual phylogenesis (VP).

    <p>(<b>A</b>) The VP algorithm for generating naturalistic shape classes. This algorithm simulates biological evolution, in that shape characteristics evolve as random heritable variations are differentially propagated through successive generations <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone.0015444-Hegd1" target="_blank">[12]</a>, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone.0015444-Bart2" target="_blank">[13]</a>. Note that the differences between, as well as within, the categories arise spontaneously and randomly during VP, rather than as a result of externally imposed rules, such as the fragment selection process or any other classification scheme. The bottom of the evolutionary cascade denotes the three shape classes used in many of the experiments in this study. See refs. <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone.0015444-Hegd1" target="_blank">[12]</a>, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone.0015444-Bart2" target="_blank">[13]</a> for additional examples of shape classes. (<b>B</b>) Shape variations within and across classes X, Y and Z as visualized by a metric multi-dimensional scaling (MDS) plot. Each data point represents one object from a given class (inset). MDS arranges the data points so that similar points cluster together and dissimilar points are dispersed, providing a principled representation of the relevant classes (for details, see refs. <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone.0015444-Kruskal1" target="_blank">[48]</a>, <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0015444#pone.0015444-Duda1" target="_blank">[49]</a>).</p>
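The core VP idea, random heritable variations propagated through successive generations, can be sketched as a toy simulation. The vector "shape" parameterization and mutation scale below are placeholders, not the paper's actual shape-generation procedure:

```python
import random

def virtual_phylogenesis(n_generations=3, n_children=2, n_params=8, seed=0):
    """Toy sketch of VP: each 'shape' is a parameter vector; every
    generation, each shape spawns children carrying small random heritable
    mutations, so within- and between-class differences arise
    spontaneously rather than from externally imposed rules."""
    rng = random.Random(seed)
    # A single random ancestor shape.
    generation = [[rng.uniform(-1, 1) for _ in range(n_params)]]
    for _ in range(n_generations):
        generation = [
            [p + rng.gauss(0, 0.1) for p in parent]  # heritable mutation
            for parent in generation
            for _ in range(n_children)
        ]
    return generation  # leaves of the evolutionary cascade
```

With the defaults, three generations of binary branching yield eight leaf shapes; subtrees of the cascade play the role of shape classes.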