41,361 research outputs found

    Large scale image classification and object detection

    Dissertation supervisor: Dr. Tony X. Han. Includes vita.
    Significant advances in image classification and object detection research have been achieved in the past decade. Deep convolutional neural networks have exhibited superior performance in many visual recognition tasks, including image classification, object detection, and scene labeling, owing to their large learning capacity and resistance to overfitting. However, learning a robust deep CNN model for object recognition remains challenging because image classification and object detection are severely unbalanced, large-scale problems. In this dissertation, we aim to improve the performance of image classification and object detection algorithms with deep convolutional neural networks through the following strategies. First, we introduce the Deep Neural Pattern, a local feature densely extracted from an image of arbitrary resolution using a well-trained deep convolutional neural network. Second, we propose a latent CNN framework, which automatically selects the most discriminative region in the image to reduce the effect of irrelevant regions. Third, we develop a new combination scheme for multiple CNNs, the Latent Model Ensemble, to overcome the local-minima problem of CNNs. Fourth, a weakly supervised CNN framework, referred to as Multiple Instance Learning Convolutional Neural Networks, is developed to relax strict labeling requirements. Finally, a novel residual-network architecture, Residual networks of Residual networks, is constructed to improve the optimization of very deep convolutional neural networks. All the proposed algorithms are validated by thorough experiments and show solid accuracy on large-scale object detection and recognition benchmarks.
    Includes bibliographical references (pages 105-119).
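    Of the techniques listed, the Residual networks of Residual networks idea lends itself to a short illustration. Below is a minimal PyTorch sketch, assuming the core idea is an extra shortcut spanning a whole group of residual blocks in addition to the usual per-block shortcuts; all class names, layer choices, and hyperparameters here are illustrative, not taken from the dissertation.

    ```python
    import torch
    import torch.nn as nn

    class BasicBlock(nn.Module):
        """A standard residual block with an inner (block-level) shortcut."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            return torch.relu(x + self.body(x))  # inner shortcut

    class RoRStage(nn.Module):
        """A group of residual blocks wrapped in an extra outer shortcut,
        sketching a 'residual networks of residual networks' structure."""
        def __init__(self, channels, num_blocks=3):
            super().__init__()
            self.blocks = nn.Sequential(
                *(BasicBlock(channels) for _ in range(num_blocks))
            )

        def forward(self, x):
            return x + self.blocks(x)  # outer, stage-level shortcut

    x = torch.randn(2, 64, 32, 32)
    print(RoRStage(64)(x).shape)  # torch.Size([2, 64, 32, 32])
    ```

    The stage-level shortcut gives gradients a shorter path across many blocks, which is the kind of optimization aid the abstract attributes to the architecture.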

    Recurrent Segmentation for Variable Computational Budgets

    State-of-the-art systems for semantic image segmentation use feed-forward pipelines with fixed computational costs. Building an image segmentation system that works across a range of computational budgets is challenging and time-intensive, as new architectures must be designed and trained for every computational setting. To address this problem, we develop a recurrent neural network that successively improves prediction quality with each iteration. Importantly, the RNN may be deployed across a range of computational budgets by merely running the model for a variable number of iterations. We find that this architecture is uniquely suited to efficiently segmenting videos. By exploiting the segmentation of past frames, the RNN can perform video segmentation at similar quality but reduced computational cost compared to state-of-the-art image segmentation methods. When applied to static images in the PASCAL VOC 2012 and Cityscapes segmentation datasets, the RNN traces out a speed-accuracy curve that saturates near the performance of state-of-the-art segmentation methods.
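    As a concrete illustration of the iterate-to-refine idea, here is a minimal PyTorch sketch (not the authors' architecture): each iteration consumes the image features together with the current logits and emits an additive correction, so the computational budget is chosen at inference time via the iteration count, and a previous frame's logits can warm-start the next frame. All module names and sizes are hypothetical.

    ```python
    import torch
    import torch.nn as nn

    class RecurrentSegmenter(nn.Module):
        """Sketch: successively refine segmentation logits; cost scales
        with the number of refinement iterations run at inference."""
        def __init__(self, in_channels=3, num_classes=21, hidden=32):
            super().__init__()
            self.num_classes = num_classes
            self.encode = nn.Conv2d(in_channels, hidden, 3, padding=1)
            self.refine = nn.Sequential(
                nn.Conv2d(hidden + num_classes, hidden, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, num_classes, 3, padding=1),
            )

        def forward(self, image, iterations=3, prev_logits=None):
            feats = self.encode(image)
            b, _, h, w = image.shape
            logits = (prev_logits if prev_logits is not None
                      else torch.zeros(b, self.num_classes, h, w,
                                       device=image.device))
            for _ in range(iterations):      # the computational budget
                delta = self.refine(torch.cat([feats, logits], dim=1))
                logits = logits + delta      # additive refinement
            return logits

    model = RecurrentSegmenter()
    frame = torch.randn(1, 3, 64, 64)
    full = model(frame, iterations=4)                    # full budget
    cheap = model(frame, iterations=1, prev_logits=full)  # video warm-start
    ```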

    Learning Transferable Architectures for Scalable Image Recognition

    Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with its own parameters, to design a convolutional architecture named the "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, NASNet achieves a state-of-the-art 2.4% error rate. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS, a 28% reduction in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, the accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet achieves 74% top-1 accuracy, which is 3.1% better than equivalently sized, state-of-the-art models for mobile platforms. Finally, the features learned by NASNet, used with the Faster-RCNN framework, surpass the state of the art by 4.0%, achieving 43.1% mAP on the COCO dataset.
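    The ScheduledDropPath regularizer mentioned above can be sketched briefly. Assuming, as the abstract implies, that it is path dropout whose drop probability is ramped up over the course of training, a hedged PyTorch version might look like this; the linear ramp and the default final probability are illustrative assumptions, not values from the paper.

    ```python
    import torch
    import torch.nn as nn

    class ScheduledDropPath(nn.Module):
        """Drop an entire path (per example) with a probability that ramps
        linearly from 0 to `final_drop_prob` over training. Sketch only;
        `progress` must be updated externally each optimization step."""
        def __init__(self, final_drop_prob=0.3):
            super().__init__()
            self.final_drop_prob = final_drop_prob
            self.progress = 0.0  # fraction of training completed, in [0, 1]

        def forward(self, x):
            if not self.training:
                return x
            keep_prob = 1.0 - self.final_drop_prob * self.progress
            # One Bernoulli draw per example; rescale so the expected
            # activation magnitude matches evaluation mode.
            mask = torch.rand(x.shape[0], 1, 1, 1, device=x.device) < keep_prob
            return x * mask.to(x.dtype) / keep_prob

    drop = ScheduledDropPath()
    drop.train()
    for step in range(1000):
        drop.progress = step / 1000       # linear schedule over training
        path_out = drop(torch.randn(8, 16, 8, 8))
    ```

    Ramping the drop probability keeps early training stable while applying stronger regularization later, which is consistent with the generalization benefit the abstract reports.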