2 research outputs found

    End-to-End Efficient Representation Learning via Cascading Combinatorial Optimization

    We develop hierarchically quantized, efficient embedding representations for similarity-based search and show that this representation not only achieves state-of-the-art search accuracy but also yields several orders of magnitude speedup during inference. The idea is to quantize the representation hierarchically, so that the quantization granularity is greatly increased while accuracy is maintained and computational complexity stays low. We also show that the problem of finding the optimal sparse compound hash code respecting the hierarchical structure can be solved in polynomial time via minimum-cost flow in an equivalent flow network. This allows us to train the method end-to-end in a mini-batch stochastic gradient descent setting. Our experiments on the CIFAR-100 and ImageNet datasets show state-of-the-art search accuracy while providing several orders of magnitude search speedup over exhaustive linear search over the dataset.
    Comment: Accepted and to appear at CVPR 201
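    The coarse-to-fine idea behind the speedup can be sketched with a toy two-level index: a coarse quantizer routes a query to one cell, and only that cell is scanned exhaustively. This is a hypothetical illustration in plain Python (the names `kmeans`, `search`, and the data are made up here), not the paper's learned compound hash codes or its minimum-cost-flow training step:

    ```python
    import random

    def l2(a, b):
        # squared Euclidean distance between two vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def kmeans(points, k, iters=10):
        # plain Lloyd's k-means: assign points, then recompute centroids
        centroids = random.sample(points, k)
        for _ in range(iters):
            buckets = [[] for _ in range(k)]
            for p in points:
                buckets[min(range(k), key=lambda i: l2(p, centroids[i]))].append(p)
            centroids = [
                [sum(col) / len(b) for col in zip(*b)] if b else centroids[i]
                for i, b in enumerate(buckets)
            ]
        return centroids, buckets

    # build a two-level index: 16 coarse cells over 1000 random 8-d vectors
    random.seed(0)
    data = [[random.random() for _ in range(8)] for _ in range(1000)]
    centroids, cells = kmeans(data, 16)

    def search(q):
        # level 1: pick the nearest non-empty coarse cell (~16 comparisons)
        c = min((i for i in range(len(centroids)) if cells[i]),
                key=lambda i: l2(q, centroids[i]))
        # level 2: exhaustive scan only inside that cell (~1000/16 comparisons
        # on average, instead of 1000 for a full linear scan)
        return min(cells[c], key=lambda p: l2(q, p))

    result = search(data[42])
    ```

    Deepening the hierarchy (quantizing again inside each cell) multiplies the effective granularity while keeping the per-query cost low, which is the source of the orders-of-magnitude speedup the abstract claims.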

    flexgrid2vec: Learning Efficient Visual Representations Vectors

    We propose flexgrid2vec, a novel approach for image representation learning. Existing visual representation methods suffer from several issues, including highly intensive computation, the risk of losing in-depth structural information, and specificity to certain shapes or objects. flexgrid2vec converts an image into a low-dimensional feature vector. We represent each image with a graph of flexible, unique node locations and edge distances. flexgrid2vec is a multi-channel GCN that learns features of the most representative image patches. We investigate both spectral and non-spectral implementations of the GCN node embedding. Specifically, we implement flexgrid2vec with different node-aggregation methods, such as vector summation, concatenation, and normalisation with eigenvector centrality. We compare the performance of flexgrid2vec with a set of state-of-the-art visual representation learning models on binary and multi-class image classification tasks. Although we use imbalanced, small, low-resolution datasets, flexgrid2vec shows stable and outstanding results against well-known baselines. flexgrid2vec achieves 96.23% on CIFAR-10, 83.05% on CIFAR-100, 94.50% on STL-10, 98.8% on ASIRRA and 89.69% on the COCO dataset.
    Comment: 13 page
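    Two of the node-aggregation variants the abstract lists (vector summation and concatenation) can be illustrated on a toy patch graph. The graph, features, and function names below are hypothetical stand-ins, not flexgrid2vec's actual implementation:

    ```python
    # toy graph: 4 patch nodes, adjacency list by node index
    adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
    feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0], 3: [0.5, 0.5]}

    def agg_sum(n):
        # vector-summation aggregation: elementwise sum of neighbour features,
        # output dimension equals the input feature dimension
        out = [0.0] * len(feats[n])
        for m in adj[n]:
            out = [a + b for a, b in zip(out, feats[m])]
        return out

    def agg_concat(n):
        # concatenation aggregation: the node's own feature joined with the
        # summed neighbour features, doubling the embedding dimension
        return feats[n] + agg_sum(n)

    print(agg_sum(0))     # [1.0, 2.0]
    print(agg_concat(0))  # [1.0, 0.0, 1.0, 2.0]
    ```

    The trade-off sketched here is the usual one: summation keeps the embedding compact, while concatenation preserves the distinction between a node's own feature and its neighbourhood at the cost of a wider vector.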