
    Hyperparameter Optimization Of Deep Convolutional Neural Networks Architectures For Object Recognition

    Recent advances in Convolutional Neural Networks (CNNs) have achieved promising results on difficult deep learning tasks. However, the success of a CNN depends on finding an architecture that fits a given problem. Hand-crafting an architecture is a challenging, time-consuming process that requires expert knowledge and effort, owing to the large number of architectural design choices. In this dissertation, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework can explore a CNN architecture's numerous design choices efficiently and also supports effective distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably well against the state of the art on the CIFAR-10 and CIFAR-100 datasets. Our approach plays a significant role in increasing the depth, reducing stride sizes, and constraining some convolutional layers to not be followed by pooling layers, in order to find a CNN architecture that produces high recognition performance. Moreover, we evaluate the effectiveness of reducing the size of the training set for CNNs using a variety of instance selection methods to speed up training, and we study how these methods impact classification accuracy. Many instance selection methods require a long run-time to obtain a representative subset of the dataset, especially if the training set is large and has a high dimensionality. One example of these algorithms is Random Mutation Hill Climbing (RMHC). We improve RMHC so that it performs faster than the original algorithm while achieving the same accuracy.
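
    As a rough illustration of the optimization loop described in the abstract, the sketch below uses SciPy's Nelder-Mead implementation to search over a few CNN hyperparameters under a combined objective. The functions `train_and_evaluate` and `feature_map_information`, the particular hyperparameters searched, and the weighting `alpha` are placeholder assumptions, not the dissertation's actual framework.

    ```python
    # Minimal sketch (not the dissertation's implementation): optimizing a handful of
    # CNN hyperparameters with the Nelder-Mead method from SciPy. The two scoring
    # functions below are hypothetical placeholders standing in for full CNN training
    # and the deconvnet-based feature-map information score.
    import numpy as np
    from scipy.optimize import minimize

    def train_and_evaluate(num_layers, filters, kernel_size):
        """Placeholder: train a CNN with these hyperparameters, return validation error rate."""
        # A real implementation would build and train the network here.
        return 0.5 / (num_layers * filters) + 0.01 * kernel_size

    def feature_map_information(num_layers, filters, kernel_size):
        """Placeholder: score how informative the learnt feature maps are (e.g. via deconvnet)."""
        return 1.0 / (1.0 + filters)

    def objective(x, alpha=0.5):
        # Nelder-Mead works on continuous vectors, so round to valid architecture choices.
        num_layers = max(1, int(round(x[0])))
        filters    = max(1, int(round(x[1])))
        kernel     = max(1, int(round(x[2])))
        error = train_and_evaluate(num_layers, filters, kernel)
        info  = feature_map_information(num_layers, filters, kernel)
        # Combined objective: low error rate and high feature-map information are both rewarded.
        return error + alpha * (1.0 - info)

    x0 = np.array([3.0, 32.0, 3.0])   # initial guess: 3 conv layers, 32 filters, 3x3 kernels
    result = minimize(objective, x0, method="Nelder-Mead",
                      options={"maxiter": 50, "xatol": 0.5, "fatol": 1e-3})
    print("best hyperparameters (rounded):", np.round(result.x).astype(int))
    print("best objective value:", result.fun)
    ```

    Because Nelder-Mead is derivative-free, each simplex step only needs objective evaluations, which is why the discrete architecture choices are handled here by simple rounding inside the objective.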

    Learning the Structure of Deep Architectures Using L1 Regularization

    We present a method that formulates the selection of the structure of a deep architecture as a penalized, discriminative learning problem. Up to now, the structure of deep architectures has been fixed by hand, and only the weights are learned using discriminative learning. Our work is a first attempt towards a more formal method of deep structure selection. We consider architectures consisting only of fully-connected layers, and our approach relies on diagonal matrices inserted between subsequent layers. By including an L1 norm of the diagonal entries of these matrices as a regularization penalty, we force the diagonals to be sparse, thereby selecting the effective number of rows (respectively, columns) of the corresponding layer's (next layer's) weight matrix. We carry out experiments on a standard dataset and show that our method succeeds in selecting the structure of deep architectures with multiple layers. One variant of our architecture results in a feature vector of size as small as 36, while retaining very high image classification performance.
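
    The sketch below illustrates the core idea in PyTorch, as a minimal assumed formulation rather than the paper's code: a learnable diagonal is inserted after a fully-connected layer, and its L1 norm is added to the training loss so that hidden units can be driven to zero and pruned. The layer sizes, regularization strength `lam`, and pruning threshold are illustrative assumptions.

    ```python
    # Minimal sketch (assumed PyTorch formulation): L1-penalized diagonal scaling
    # between two fully-connected layers for structure selection.
    import torch
    import torch.nn as nn

    class DiagonalSelectMLP(nn.Module):
        def __init__(self, in_dim=784, hidden=512, out_dim=10):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            # One learnable scale per hidden unit: the "diagonal matrix" between layers.
            self.diag = nn.Parameter(torch.ones(hidden))
            self.fc2 = nn.Linear(hidden, out_dim)

        def forward(self, x):
            h = torch.relu(self.fc1(x))
            h = h * self.diag          # multiply by the diagonal; zero entries prune units
            return self.fc2(h)

        def l1_penalty(self):
            return self.diag.abs().sum()

    model = DiagonalSelectMLP()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    lam = 1e-3                         # illustrative regularization strength

    # One illustrative training step on random data standing in for a real dataset.
    x = torch.randn(64, 784)
    y = torch.randint(0, 10, (64,))
    loss = criterion(model(x), y) + lam * model.l1_penalty()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Units whose diagonal entry is (near) zero can be removed, shrinking the layer:
    # this corresponds to selecting the effective rows/columns of the adjacent weight matrices.
    kept = (model.diag.abs() > 1e-2).sum().item()
    print(f"effective hidden units after this step: {kept} / 512")
    ```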