
    Deep Pyramidal Residual Networks

    Deep convolutional neural networks (DCNNs) have shown remarkable performance in image classification tasks in recent years. Generally, deep neural network architectures are stacks consisting of a large number of convolutional layers, and they perform downsampling along the spatial dimension via pooling to reduce memory usage. Concurrently, the feature map dimension (i.e., the number of channels) is sharply increased at downsampling locations, which is essential to ensure effective performance because it increases the diversity of high-level attributes. This also applies to residual networks and is very closely related to their performance. In this research, instead of sharply increasing the feature map dimension at units that perform downsampling, we gradually increase the feature map dimension at all units to involve as many locations as possible. This design, which is discussed in depth together with our new insights, has proven to be an effective means of improving generalization ability. Furthermore, we propose a novel residual unit capable of further improving the classification accuracy with our new network architecture. Experiments on benchmark CIFAR-10, CIFAR-100, and ImageNet datasets have shown that our network architecture has superior generalization ability compared to the original residual networks. Code is available at https://github.com/jhkim89/PyramidNet. Comment: Accepted to CVPR 2017
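    The contrast the abstract draws between a sharp, stage-wise channel increase and a gradual, per-unit increase can be sketched with a small channel-schedule function. This is an illustrative sketch, not the authors' released code; `pyramidal_channels`, `stepwise_channels`, and the parameter names (`base`, `alpha` for the total widening budget) are assumptions for illustration.

    ```python
    def pyramidal_channels(base: int, alpha: int, n_units: int) -> list[int]:
        """Channel count after each of n_units residual units (pyramidal rule).

        Each unit k adds alpha * k / n_units channels on top of the base width,
        so the width grows a little at every unit and the total widening over
        the whole network is exactly alpha.
        """
        return [int(base + alpha * k / n_units) for k in range(1, n_units + 1)]


    def stepwise_channels(base: int, n_units: int, stage_len: int) -> list[int]:
        """Conventional ResNet-style schedule: the width doubles only at the
        start of each downsampling stage and is constant within a stage."""
        return [base * (2 ** ((k - 1) // stage_len)) for k in range(1, n_units + 1)]


    # With a base width of 16 and a widening budget alpha = 48 over 12 units,
    # the pyramidal schedule ends at 64 channels, matching where the
    # step-wise schedule ends, but spreads the increase over every unit.
    print(pyramidal_channels(16, 48, 12))
    print(stepwise_channels(16, 12, 4))
    ```

    The point of the gradual schedule is that no single unit bears the abrupt dimension jump, so every residual unit contributes to enlarging the feature dimension.
    
    
    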

    A Counterexample to Cover's 2P Conjecture on Gaussian Feedback Capacity

    We provide a counterexample to Cover's conjecture that the feedback capacity $C_\textrm{FB}$ of an additive Gaussian noise channel under power constraint $P$ is no greater than the nonfeedback capacity $C$ of the same channel under power constraint $2P$, i.e., $C_\textrm{FB}(P) \le C(2P)$. Comment: 2 pages, submitted to IEEE Transactions on Information Theory
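    For context, the conjectured inequality is trivially satisfied for white (memoryless) Gaussian noise, where feedback does not increase capacity at all; writing $N$ for the noise power, a one-line check (standard facts, not taken from the paper) is:

    ```latex
    C_\textrm{FB}(P) = C(P) = \tfrac{1}{2}\log\!\left(1 + \tfrac{P}{N}\right)
    \le \tfrac{1}{2}\log\!\left(1 + \tfrac{2P}{N}\right) = C(2P).
    ```

    A counterexample must therefore involve Gaussian noise with memory, where feedback can strictly increase capacity.
    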