
    A sample sequence of horizontal occlusions (top) and vertical occlusions (bottom).

    All of the occlusion portions shown here correspond to the activation values above the threshold of the second best unit (see the last row of Figure 12).
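    As an illustration only, the following minimal sketch shows one way a sequence of horizontal and vertical occlusions of an image could be generated; the band width, step count, fill value and function name are assumptions, not taken from the paper.

```python
import numpy as np

def occlusion_sequence(image, n_steps=5, axis=0, fill=0.0):
    """Generate a sequence of occlusions of `image`.

    axis=0 slides a horizontal occluding band down the image;
    axis=1 slides a vertical occluding band across it. The band width,
    step count and fill value are illustrative assumptions.
    """
    h, w = image.shape[:2]
    size = h if axis == 0 else w
    band = size // n_steps                      # width of the occluding band
    frames = []
    for k in range(n_steps):
        occluded = image.copy()
        start, stop = k * band, min((k + 1) * band, size)
        if axis == 0:
            occluded[start:stop, :] = fill      # horizontal occlusion
        else:
            occluded[:, start:stop] = fill      # vertical occlusion
        frames.append(occluded)
    return frames

# Example: horizontal (top row of the figure) and vertical (bottom row) occlusions
img = np.random.rand(128, 128)
horizontal = occlusion_sequence(img, axis=0)
vertical = occlusion_sequence(img, axis=1)
```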

    Statistics of correlation coefficients.

    First column: correlations between responses of different filters at the same location. Second column: correlations between responses of the same filter at different locations. Third column: correlations between responses of different filters at different locations. The distance between locations is pixels on the original image space. First row: results on the S1 layer in Figure 2. Second to fourth rows: results on the C1 layer in Figure 2 with max pooling, average pooling and square pooling, respectively, where the pooling ratio . Fifth row: mean of the absolute values of correlation coefficients with respect to the pooling ratio , where the open circles, asterisks and squares denote max pooling, average pooling and square pooling, respectively.
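    For concreteness, a minimal sketch of the three pooling operators (max, average, square) and of the first correlation statistic is given below. The block-wise pooling layout, the reading of square pooling as the root of summed squares, and all names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def pool(responses, ratio, mode="max"):
    """Pool an (H, W) response map over non-overlapping ratio x ratio blocks
    with max, average, or square pooling (assumed here to be the square
    root of the summed squares)."""
    h, w = responses.shape
    h, w = h - h % ratio, w - w % ratio
    blocks = responses[:h, :w].reshape(h // ratio, ratio, w // ratio, ratio)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    if mode == "average":
        return blocks.mean(axis=(1, 3))
    if mode == "square":
        return np.sqrt((blocks ** 2).sum(axis=(1, 3)))
    raise ValueError(mode)

def corr_different_filters_same_location(maps):
    """Mean absolute correlation between responses of different filters
    at the same location (the statistic in the first column)."""
    flat = np.stack([m.ravel() for m in maps])          # filters x locations
    c = np.corrcoef(flat)
    off_diag = c[~np.eye(len(maps), dtype=bool)]
    return np.abs(off_diag).mean()

# Example with random stand-ins for S1 response maps of four filters
s1_maps = [np.random.randn(64, 64) for _ in range(4)]
c1_maps = [pool(m, ratio=4, mode="max") for m in s1_maps]
print(corr_different_filters_same_location(c1_maps))
```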

    Representation for general categories.

    First row: most selective units for the four categories. Second row: ROC curves of these units for identifying the corresponding categories. Horizontal axis: false positive rate. Vertical axis: true positive rate. Third to sixth rows: images that induced the highest responses of the four units shown in the first row, respectively. The number above each image is the response value of the corresponding unit.
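    The ROC curves in the second row can be traced by sweeping a threshold over a unit's responses. The sketch below illustrates the standard computation; the function name and example data are assumptions, not the paper's code.

```python
import numpy as np

def roc_curve(responses, labels):
    """ROC for using a unit's response to identify a category.

    `responses`: 1-D array of unit responses, one per image.
    `labels`: 1 if the image belongs to the category, else 0.
    Sweeping the threshold from high to low traces out
    (false positive rate, true positive rate) pairs.
    """
    order = np.argsort(-responses)              # sort by decreasing response
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                      # true positives at each threshold
    fp = np.cumsum(1 - labels)                  # false positives at each threshold
    tpr = tp / labels.sum()
    fpr = fp / (len(labels) - labels.sum())
    return fpr, tpr

# Example with random stand-ins for responses of one selective unit
resp = np.random.rand(200)
lab = (np.random.rand(200) < 0.25).astype(int)
fpr, tpr = roc_curve(resp, lab)
```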

    Illustration of the first two layers of HMAX.

    The subscripts denote filter labels and the superscripts denote positions. Max pooling is only applied over positions.
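    A minimal sketch of this pooling step, assuming non-overlapping spatial neighbourhoods: each C1 value is the maximum of S1 responses of the same filter over nearby positions, and filters are never mixed. The array layout and pool size are illustrative assumptions.

```python
import numpy as np

def c1_from_s1(s1, pool_size):
    """Max pooling over positions only: each C1 unit takes the maximum of
    S1 responses of the *same* filter within a pool_size x pool_size
    neighbourhood; responses of different filters are never combined.

    `s1` has shape (n_filters, H, W).
    """
    n_filters, h, w = s1.shape
    h, w = h - h % pool_size, w - w % pool_size
    blocks = s1[:, :h, :w].reshape(
        n_filters, h // pool_size, pool_size, w // pool_size, pool_size)
    return blocks.max(axis=(2, 4))              # max over positions per filter

# Example: 4 S1 filter maps of size 64x64 pooled down to 16x16 C1 maps
s1 = np.random.randn(4, 64, 64)
c1 = c1_from_s1(s1, pool_size=4)
print(c1.shape)  # (4, 16, 16)
```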

    Illustration of sparse HMAX with six layers.


    Visualization of S2 bases (bottom) and S3 bases (top) learned on the Caltech-101 dataset.

    From left to right, the columns display results on images from four categories: faces-easy, car-side, elephant and ibis, respectively.

    Visualization of S3 bases learned on images from mixed categories of the Caltech-101 dataset: faces-easy, car-side, elephant and ibis.


    Visualization of S1 bases (left), S2 bases (middle) and S3 bases (right) learned on the Kyoto dataset.


    Classification accuracy of the L2-regularized HMAX with respect to different values of the regularization parameter on the Caltech-101 dataset.

    The curve shows the average results over ten random splits of train/test samples and the error bars show the standard deviations. The x-axis is on a log scale.
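    The sweep behind such a curve can be sketched as follows, using scikit-learn's L2-regularized logistic regression as a stand-in for the classifier on top of HMAX features; the grid of regularization values, the 50/50 splits and the classifier choice are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def accuracy_vs_regularization(features, labels, lambdas, n_splits=10):
    """Mean/std classification accuracy over random train/test splits for
    each value of an L2 regularization parameter (log-spaced grid).

    LogisticRegression's C is the inverse of the regularization strength,
    so C = 1 / lambda.
    """
    means, stds = [], []
    for lam in lambdas:
        accs = []
        for seed in range(n_splits):
            x_tr, x_te, y_tr, y_te = train_test_split(
                features, labels, test_size=0.5, random_state=seed)
            clf = LogisticRegression(C=1.0 / lam, max_iter=1000).fit(x_tr, y_tr)
            accs.append(clf.score(x_te, y_te))
        means.append(np.mean(accs))
        stds.append(np.std(accs))
    return np.array(means), np.array(stds)

# Example with random stand-ins for top-layer HMAX features of 200 images
feats = np.random.randn(200, 50)
labs = np.random.randint(0, 4, size=200)
grid = np.logspace(-3, 3, 7)                    # log-scale grid, as in the figure
mean_acc, std_acc = accuracy_vs_regularization(feats, labs, grid)
```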