    RICH AND EFFICIENT VISUAL DATA REPRESENTATION

    Increasing the size of the training data has proven very effective in many computer vision tasks. Using large-scale image datasets (e.g. ImageNet) with simple learning techniques (e.g. linear classifiers), one can achieve state-of-the-art performance in object recognition compared to sophisticated learning techniques trained on smaller image sets. Semantic search on visual data has become very popular: there are billions of images on the internet, and the number grows every day. Dealing with large-scale image sets is demanding in itself. They take so much memory that processing them with complex algorithms on single-CPU machines becomes impossible. Finding an efficient image representation can be key to attacking this problem. Efficiency alone, however, is not enough for image understanding: the representation should also be comprehensive and rich in semantic information. In this proposal we develop an approach to computing binary codes that provide a rich and efficient image representation. We demonstrate several tasks in which binary features can be very effective, and show how binary features can speed up large-scale image classification. We present techniques to learn the binary features from supervised image sets with different types of semantic supervision (class labels, textual descriptions), and we pose several problems that are central to finding and using efficient image representations.
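    The abstract does not specify how its binary codes are learned, but the core efficiency argument can be illustrated with a generic stand-in: random-hyperplane hashing, where high-dimensional real features are reduced to short binary codes and compared by Hamming distance. The hyperplane construction below is an assumption for illustration only, not the proposal's supervised learning method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def binarize(features, hyperplanes):
        """Sign-of-projection binary codes (random-hyperplane hashing sketch)."""
        return (features @ hyperplanes.T > 0).astype(np.uint8)

    def hamming_distance(a, b):
        """Number of bit positions where two binary codes disagree."""
        return int(np.count_nonzero(a != b))

    # Toy setup: compress 2048-dim float features into 64-bit codes.
    dim, bits = 2048, 64
    hyperplanes = rng.standard_normal((bits, dim))

    x = rng.standard_normal(dim)
    y = x + 0.01 * rng.standard_normal(dim)   # near-duplicate of x
    z = rng.standard_normal(dim)              # unrelated vector

    code_x, code_y, code_z = (binarize(v, hyperplanes) for v in (x, y, z))

    # Near-duplicates land close in Hamming space; unrelated vectors do not.
    assert hamming_distance(code_x, code_y) < hamming_distance(code_x, code_z)
    ```

    A 64-bit code occupies 8 bytes versus 8 KB for a 2048-dim float64 vector, which is the memory saving that makes large-scale search and classification tractable on a single machine.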

    ViCE: Visual Concept Embedding Discovery and Superpixelization

    Recent self-supervised computer vision methods have demonstrated performance equal or superior to supervised methods, opening the way for AI systems to learn visual representations from practically unlimited data. However, these methods are classification-based and thus ineffective for learning the dense feature maps required for unsupervised semantic segmentation. This work presents a method to effectively learn dense, semantically rich visual concept embeddings applicable to high-resolution images. We introduce superpixelization as a means to decompose images into a small set of visually coherent regions, allowing efficient learning of dense semantics by swapped prediction. The expressiveness of our dense embeddings is demonstrated by significantly improving on the SOTA representation-quality benchmarks on COCO (+16.27 mIoU) and Cityscapes (+19.24 mIoU) for both low- and high-resolution images.