
    Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations

    The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning and to provide a canonical measure of model complexity, the RKHS norm, which controls both the stability and the generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.
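    The sketch below is not the paper's kernel construction; it only illustrates the kind of spectral-norm-based capacity measure that the abstract relates its RKHS norm to, by taking the product of each layer's top singular value as a rough complexity proxy for a toy network with homogeneous (ReLU) activations. The use of PyTorch and the particular toy architecture are assumptions.

```python
# Hedged sketch (not the paper's construction): a rough capacity proxy in the
# spirit of spectral-norm-based complexity measures, computed as the product
# of the largest singular values of each layer's reshaped weight matrix.
import torch
import torch.nn as nn

def spectral_norm_proxy(model: nn.Module) -> float:
    """Product of per-layer top singular values for Conv2d/Linear weights."""
    product = 1.0
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.detach()
            # Flatten conv kernels to (out_channels, in_channels * kh * kw);
            # the top singular value of this matrix is a common proxy for the
            # layer's operator norm.
            sigma_max = torch.linalg.svdvals(w.flatten(1)).max().item()
            product *= sigma_max
    return product

if __name__ == "__main__":
    # Toy CNN with positively homogeneous (ReLU) activations, purely illustrative.
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(32 * 32 * 32, 10),
    )
    print(f"spectral-norm capacity proxy: {spectral_norm_proxy(net):.3e}")
```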

    CIFAR-10: KNN-based Ensemble of Classifiers

    In this paper, we study the performance of different classifiers on the CIFAR-10 dataset and build an ensemble of classifiers to reach better performance. We show that, on some CIFAR-10 classes, K-Nearest Neighbors (KNN) and a Convolutional Neural Network (CNN) are mutually exclusive, and thus yield higher accuracy when combined. We reduce KNN overfitting using Principal Component Analysis (PCA) and ensemble it with a CNN to increase accuracy. Our approach improves our best CNN model from 93.33% to 94.03%.
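    As a minimal sketch of the kind of pipeline the abstract describes, the code below applies PCA before KNN and then soft-votes the KNN class probabilities with a CNN's softmax outputs. The synthetic arrays stand in for CIFAR-10 images and for a separately trained CNN's predictions, and the equal weighting is an assumption, not the authors' exact scheme.

```python
# Hedged sketch (not the authors' exact pipeline): PCA-compressed KNN combined
# with CNN class probabilities by simple probability averaging.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 1000, 200, 10

# Stand-ins for CIFAR-10: 32x32x3 images flattened to 3072-dim vectors.
X_train = rng.random((n_train, 3072), dtype=np.float32)
y_train = rng.integers(0, n_classes, n_train)
X_test = rng.random((n_test, 3072), dtype=np.float32)

# PCA reduces dimensionality before KNN, which the abstract reports
# also curbs KNN overfitting.
pca = PCA(n_components=100).fit(X_train)
knn = KNeighborsClassifier(n_neighbors=7).fit(pca.transform(X_train), y_train)
knn_proba = knn.predict_proba(pca.transform(X_test))

# Placeholder for the softmax outputs of a separately trained CNN.
cnn_proba = rng.dirichlet(np.ones(n_classes), size=n_test)

# Simple soft-voting ensemble; the 50/50 weighting is an assumption.
ensemble_pred = np.argmax(0.5 * knn_proba + 0.5 * cnn_proba, axis=1)
print(ensemble_pred[:10])
```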

    A convolutional neural network based deep learning methodology for recognition of partial discharge patterns from high voltage cables

    It is a great challenge to differentiate partial discharge (PD) induced by different types of insulation defects in high-voltage cables. Some types of PD signals have very similar characteristics and are particularly difficult to differentiate, even for the most experienced specialists. To overcome this challenge, a convolutional neural network (CNN)-based deep learning methodology for PD pattern recognition is presented in this paper. First, PD testing for five types of artificial defects in ethylene-propylene-rubber cables is carried out in a high-voltage laboratory to generate signals containing PD data. Second, 3500 sets of PD transient pulses are extracted, and 33 kinds of PD features are established. The third stage applies a CNN to the data; the typical CNN architecture and the key factors that affect CNN-based pattern recognition accuracy are described, including the number of network layers, the convolutional kernel size, the activation function, and the pooling method. This paper presents a flowchart of the CNN-based PD pattern recognition method and an evaluation on 3500 sets of PD samples. Finally, the CNN-based pattern recognition results are shown, and the proposed method is compared with two more traditional analysis methods, i.e., the support vector machine (SVM) and the back-propagation neural network (BPNN). The results show that the proposed CNN method achieves higher pattern recognition accuracy than SVM and BPNN, and that it is especially effective for recognizing PD types whose signals are highly similar, which makes it suitable for industrial applications.
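    The following is a minimal sketch of the kind of classifier the abstract evaluates: a small 1-D CNN mapping a 33-dimensional PD feature vector to one of five defect classes. The layer counts, kernel sizes, and pooling choices here are assumptions for illustration (the abstract names exactly these as tunable factors), not the paper's reported architecture.

```python
# Hedged sketch (architecture details are assumptions, not the paper's exact
# network): a small 1-D CNN classifying PD feature vectors into five
# artificial-defect classes, of the kind compared against SVM and BPNN.
import torch
import torch.nn as nn

N_FEATURES = 33   # per the abstract: 33 PD features per sample
N_CLASSES = 5     # five artificial defect types

class PDNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),   # pooling method is one of the factors discussed
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (N_FEATURES // 4), N_CLASSES)

    def forward(self, x):              # x: (batch, 1, N_FEATURES)
        h = self.features(x)
        return self.classifier(h.flatten(1))

if __name__ == "__main__":
    model = PDNet()
    dummy = torch.randn(8, 1, N_FEATURES)   # stand-in for 8 PD feature vectors
    print(model(dummy).shape)                # torch.Size([8, 5])
```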