    Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity

    Current deep-learning models for object recognition are known to be heavily biased toward texture, whereas human visual systems are known to be biased toward shape and structure. What design principles in human visual systems could have led to this difference? How could we introduce more shape bias into deep learning models? In this paper, we report that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network. We found that enforcing the sparse coding constraint using a non-differentiable Top-K operation can lead to the emergence of structural encoding in neurons in convolutional neural networks, resulting in a smooth decomposition of objects into parts and subparts and endowing the networks with shape bias. We demonstrated this emergence of shape bias and its functional benefits for different network structures with various datasets. For object recognition convolutional neural networks, the shape bias leads to greater robustness against distraction from style and pattern changes. For image synthesis with generative adversarial networks, the emergent shape bias leads to more coherent and decomposable structures in the synthesized images. Ablation studies suggest that sparse codes tend to encode structure, whereas more distributed codes tend to favor texture. Our code is hosted at the GitHub repository \url{https://github.com/Crazy-Jack/nips2023_shape_vs_texture}.
    Comment: Published at NeurIPS 2023 (Oral)
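
    As a concrete illustration of the kind of constraint the abstract describes, below is a minimal PyTorch sketch of a channel-wise Top-K sparsity layer. The layer name, the choice of dimension, and the value of k are illustrative assumptions, not the paper's exact implementation.

        import torch

        class TopKSparsity(torch.nn.Module):
            # Illustrative sketch: keep only the k largest channel activations at each
            # spatial location and zero out the rest (k and placement are assumptions).
            def __init__(self, k: int):
                super().__init__()
                self.k = k

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x has shape (batch, channels, height, width)
                topk_vals, _ = x.topk(self.k, dim=1)    # k largest values per location
                threshold = topk_vals[:, -1:, :, :]     # the k-th largest value
                return x * (x >= threshold).to(x.dtype) # hard, non-differentiable mask

    Such a layer would typically sit after a convolution's activation in a CNN; the hard thresholding mask is what makes the operation non-differentiable, as the abstract notes.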

    Scalable Nonlinear Embeddings for Semantic Category-based Image Retrieval

    We propose a novel algorithm for supervised discriminative distance learning by nonlinearly embedding vectors into a low-dimensional Euclidean space. We work in the challenging setting where supervision is given as constraints on similar and dissimilar pairs during training. The proposed method is derived by an approximate kernelization of a linear Mahalanobis-like distance metric learning algorithm and can also be seen as a kernel neural network. The number of model parameters and the test-time evaluation complexity of the proposed method are O(dD), where D is the dimensionality of the input features and d is the dimension of the projection space. This is in contrast to the usual kernelization methods, whose complexity scales linearly with the number of training examples. We propose a stochastic gradient-based learning algorithm that makes the method scalable (w.r.t. the number of training examples) while remaining nonlinear. We train the method with up to half a million training pairs of 4096-dimensional CNN features. We give empirical comparisons with relevant baselines on seven challenging datasets for the task of low-dimensional semantic category-based image retrieval.
    Comment: ICCV 2015 preprint
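
    To make the O(dD) claim concrete, here is a generic PyTorch sketch of a nonlinear embedding trained by SGD with a contrastive pairwise loss on similar/dissimilar pairs. The Linear-then-Tanh network, the margin, d = 128, and the loss form are assumptions standing in for the paper's approximate kernelization, not its exact formulation.

        import torch

        D, d = 4096, 128  # input CNN feature dimension; projection dimension (d assumed)

        # The projection holds O(dD) parameters, matching the abstract's complexity claim.
        embed = torch.nn.Sequential(
            torch.nn.Linear(D, d),
            torch.nn.Tanh(),  # nonlinearity standing in for the approximate kernel map
        )
        optimizer = torch.optim.SGD(embed.parameters(), lr=0.01)

        def pairwise_hinge_loss(x1, x2, similar, margin=1.0):
            # Pull similar pairs together; push dissimilar pairs at least `margin` apart.
            dist = (embed(x1) - embed(x2)).pow(2).sum(dim=1).add(1e-12).sqrt()
            loss_sim = dist.pow(2)
            loss_dis = (margin - dist).clamp(min=0).pow(2)
            return torch.where(similar, loss_sim, loss_dis).mean()

        # One SGD step on a mini-batch of labeled pairs (x1, x2, similar: bool tensor):
        # loss = pairwise_hinge_loss(x1, x2, similar); loss.backward(); optimizer.step()

    Because each update touches only a mini-batch of pairs, training cost grows with the number of pairs seen, not with a kernel matrix over all training examples, which is the scalability point the abstract makes.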

    Butterfly Image Classification Using Color Quantization Method on HSV Color Space and Local Binary Pattern

    Many methods have been developed for image research; image classification, which extracts new information from images, is widely used in fields such as health and agriculture. Various methods are used and developed to obtain better results, and combining several methods is part of this study's contribution. This study combines the results of color feature extraction with texture feature extraction. Color feature extraction uses a color quantization method in the HSV color space, yielding 72 features; texture feature extraction uses the local binary pattern, yielding 256 features. Merging the two results produces a combined set of 328 features, which is then used for classification. The butterfly image classification achieves an accuracy of 72%. Performance testing of these results gives precision, recall and f-measure values of 76%, 72% and 74%, respectively.
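
    The feature pipeline described above can be sketched in a few lines of Python with OpenCV and scikit-image. The 8x3x3 HSV bin split (8 * 3 * 3 = 72) is one plausible quantization consistent with the stated feature count; the study's exact binning is not given, and the function name is hypothetical.

        import cv2
        import numpy as np
        from skimage.feature import local_binary_pattern

        def extract_features(bgr_image):
            # HSV color quantization: 8 hue x 3 saturation x 3 value bins = 72 features
            # (assumed split; only the total of 72 comes from the abstract).
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            color_hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 3, 3],
                                      [0, 180, 0, 256, 0, 256]).flatten()  # 72 bins

            # Basic 8-neighbor LBP on the grayscale image yields 256 possible codes.
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            lbp = local_binary_pattern(gray, P=8, R=1, method="default")
            lbp_hist, _ = np.histogram(lbp, bins=256, range=(0, 256))  # 256 features

            # Concatenation gives the 72 + 256 = 328 features reported in the abstract.
            return np.concatenate([color_hist, lbp_hist]).astype(np.float32)

    The resulting 328-dimensional vectors would then be fed to any standard classifier; the abstract does not specify which classifier the study used.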