Image recognition, semantic segmentation and photo adjustment using deep neural networks

Abstract

Deep Neural Networks (DNNs) have proven to be effective models for solving various problems in computer vision. Multi-Layer Perceptron Networks, Convolutional Neural Networks and Recurrent Neural Networks are representative examples of DNNs in the setting of supervised learning. The key ingredients in the successful development of DNN-based models include, but are not limited to, task-specific designs of network architecture, discriminative feature representation learning and scalable training algorithms. In this thesis, we describe a collection of DNN-based models to address three challenging computer vision tasks, namely large-scale visual recognition, image semantic segmentation and automatic photo adjustment. For each task, the network architecture is carefully designed on the basis of the nature of the task. For large-scale visual recognition, we design a hierarchical Convolutional Neural Network to fully exploit a semantic hierarchy among visual categories. The resulting model can be viewed as an ensemble of specialized classifiers. We improve state-of-the-art results at an affordable increase in computational cost. For image semantic segmentation, we integrate convolutional layers with novel spatially recurrent layers to incorporate global context into the prediction process. The resulting hybrid network is capable of learning improved feature representations, which lead to more accurate region recognition and boundary localization. Combined with a post-processing step involving a fully-connected conditional random field, our hybrid network achieves new state-of-the-art results on a large benchmark dataset. For automatic photo adjustment, we take a data-driven approach to learn the underlying color transforms from manually enhanced examples. We formulate the learning problem as a regression task, which can be approached with a Multi-Layer Perceptron network.
We concatenate global contextual features, local contextual features and pixel-wise features, and feed them into the deep network. State-of-the-art results are achieved on datasets with both global and local stylized adjustments.
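The photo-adjustment formulation above can be sketched as a simple feed-forward regression: contextual and pixel-wise features are concatenated into one input vector, and an MLP maps it to an output color. The following is a minimal illustrative sketch, not the thesis implementation; the feature dimensions, two-layer architecture and random weights are assumptions for demonstration (in practice the weights would be learned from manually enhanced examples).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature dimensions (illustrative only, not from the thesis).
D_GLOBAL, D_LOCAL, D_PIXEL = 8, 6, 3   # global context, local context, per-pixel
D_IN = D_GLOBAL + D_LOCAL + D_PIXEL
D_HIDDEN, D_OUT = 16, 3                # regress an adjusted RGB color per pixel

# Randomly initialized two-layer MLP; training would fit these weights by
# regression against the manually enhanced target colors.
W1 = rng.standard_normal((D_IN, D_HIDDEN)) * 0.1
b1 = np.zeros(D_HIDDEN)
W2 = rng.standard_normal((D_HIDDEN, D_OUT)) * 0.1
b2 = np.zeros(D_OUT)

def predict_color(global_feat, local_feat, pixel_feat):
    """Concatenate contextual and pixel-wise features, regress an output color."""
    x = np.concatenate([global_feat, local_feat, pixel_feat])
    h = np.maximum(W1.T @ x + b1, 0.0)   # ReLU hidden layer
    return W2.T @ h + b2                 # linear output (regression)

# One pixel's features -> predicted adjusted color (3 channels).
y = predict_color(rng.standard_normal(D_GLOBAL),
                  rng.standard_normal(D_LOCAL),
                  rng.standard_normal(D_PIXEL))
print(y.shape)  # (3,)
```

Because the network is applied per pixel, local adjustments emerge naturally: pixels with different contextual features receive different color transforms.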
