
    Efficient Deep Feature Learning and Extraction via StochasticNets

    Deep neural networks are a powerful tool for feature learning and extraction given their ability to model high-level abstractions in highly complex data. One area worth exploring in feature learning and extraction using deep neural networks is efficient neural connectivity formation for faster feature learning and extraction. Motivated by findings of stochastic synaptic connectivity formation in the brain as well as the brain's uncanny ability to efficiently represent information, we propose the efficient learning and extraction of features via StochasticNets, where sparsely-connected deep neural networks are formed via stochastic connectivity between neurons. To evaluate the feasibility of such a deep neural network architecture for feature learning and extraction, we train deep convolutional StochasticNets to learn abstract features using the CIFAR-10 dataset, and extract the learned features from images to perform classification on the SVHN and STL-10 datasets. Experimental results show that features learned using deep convolutional StochasticNets, with fewer neural connections than conventional deep convolutional neural networks, can achieve classification accuracy better than or comparable to that of conventional deep neural networks: a relative test error decrease of ~4.5% for classification on the STL-10 dataset and ~1% for classification on the SVHN dataset. Furthermore, it was shown that the deep features extracted using deep convolutional StochasticNets can provide comparable classification accuracy even when only 10% of the training data is used for feature learning. Finally, it was also shown that significant gains in feature extraction speed can be achieved in embedded applications using StochasticNets. As such, StochasticNets allow for faster feature learning and extraction while achieving better or comparable accuracy.
    Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1508.0546
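    As a rough illustration of the stochastic connectivity formation described in this abstract, the sketch below forms a sparsely-connected layer by keeping each synapse with an independent probability. The layer sizes, the connection probability p_connect, and the NumPy-based dense formulation are illustrative assumptions, not the paper's exact construction (which also applies to convolutional layers).

        import numpy as np

        def stochastic_layer(n_in, n_out, p_connect=0.5, rng=None):
            """Form a sparsely-connected layer: each synapse is realized with probability p_connect."""
            rng = np.random.default_rng() if rng is None else rng
            weights = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
            mask = rng.random((n_in, n_out)) < p_connect  # stochastic connectivity formation
            return weights * mask, mask

        def forward(x, weights):
            """Forward pass through the sparsely-connected layer with a ReLU activation."""
            return np.maximum(x @ weights, 0.0)

        # Example: a layer with roughly half the connections of a conventional dense layer.
        w, mask = stochastic_layer(256, 128, p_connect=0.5, rng=np.random.default_rng(0))
        x = np.random.default_rng(1).random((4, 256))
        print(mask.mean(), forward(x, w).shape)  # ~0.5 realized connectivity, output shape (4, 128)

    In practice the realized mask is fixed after formation, so the resulting network simply has fewer connections to evaluate, which is where the reported feature extraction speedups would come from.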

    Randomly-connected Non-Local Conditional Random Fields

    Structural data modeling is an important field of research. Structural data are combinations of latent variables that are related to each other, and incorporating these relations into a model, and exploiting them to obtain robust estimates, remains an open area of research. Several approaches model such relations, including Markov chain models and random field frameworks. Random fields specify the relations among random variables in terms of probability distributions. Markov random fields are generative models that represent the prior distribution over random variables, whereas conditional random fields (CRFs) are discriminative models that directly compute the posterior probability of the random variables given the observations. CRFs are among the most powerful frameworks for image modeling. However, practical CRFs typically have edges only between nearby nodes; utilizing more interactions and more expressive relations among nodes makes these methods impractical for large-scale applications due to the high computational complexity. Nevertheless, studies have demonstrated that incorporating long-range interactions improves modeling accuracy and addresses the short-boundary bias problem to some extent. Recent work has shown that fully connected CRFs can be made tractable by defining specific potential functions. Although these frameworks provide algorithms to efficiently handle fully connected interactions and relatively dense random fields, the question of whether fully connected interactions are actually useful in modeling remains unanswered. To the best of our knowledge, no research has addressed this question; prior work has focused instead on introducing tractable approaches that utilize all connectivity interactions. This research analyzes this question and attempts to provide an answer, demonstrating to what extent long-range connections are useful. Motivated by the answer to this question, a novel framework is proposed to tackle the computational complexity of fully connected random fields without requiring specific potential functions. Inspired by random graph theory and sampling methods, this thesis introduces a new clique structure called stochastic cliques. Stochastic cliques dynamically specify the range of effective connections, converting a conditional random field (CRF) into a randomly-connected CRF. The randomly-connected CRF (RCRF) is a marriage between random graphs and random fields, benefiting from the advantages of fully connected graphs while maintaining computational tractability. To address the limitations of the RCRF, the proposed stochastic clique structure is utilized in a deep structural approach, the deep structure randomly-connected conditional random field (DRCRF), in which various ranges of connectivity are obtained in a hierarchical framework to contain the computational complexity while utilizing long-range interactions. In this thesis, the concept of randomly-connected non-local conditional random fields is explored to address the smoothness issues of local random fields. To demonstrate the effectiveness of the proposed approaches, they are compared with state-of-the-art methods on the interactive image segmentation problem. A comprehensive analysis is carried out on different datasets under both noiseless and noisy conditions. The results show that the proposed methods can compete with state-of-the-art algorithms on the interactive image segmentation problem.
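    As a rough sketch of how a stochastic clique structure could decide which pairwise connections are realized, the snippet below keeps each candidate edge with a probability that decays with spatial distance, so nearby nodes are almost always connected while only a random subset of long-range edges survives. The Gaussian decay and the gamma and sigma parameters are illustrative assumptions, not the thesis's exact acceptance probability.

        import numpy as np

        def stochastic_cliques(n_nodes, coords, gamma=0.5, sigma=3.0, rng=None):
            """Sample a randomly-connected neighborhood structure over the nodes.

            Each candidate pair (i, j) is kept with a probability that decays with
            spatial distance, yielding mostly short-range edges plus a sparse set
            of long-range connections.
            """
            rng = np.random.default_rng() if rng is None else rng
            edges = []
            for i in range(n_nodes):
                for j in range(i + 1, n_nodes):
                    d = np.linalg.norm(coords[i] - coords[j])
                    p_keep = gamma * np.exp(-(d ** 2) / (2.0 * sigma ** 2))
                    if rng.random() < p_keep:
                        edges.append((i, j))
            return edges

        # Example: nodes on a 10x10 grid; most kept edges are short-range,
        # with an occasional long-range connection surviving the sampling.
        coords = np.array([(r, c) for r in range(10) for c in range(10)], dtype=float)
        edges = stochastic_cliques(100, coords, rng=np.random.default_rng(0))
        print(len(edges))

    The resulting edge set would then define the pairwise potentials of the randomly-connected CRF, in place of either a fixed local neighborhood or a fully connected graph.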