
    Optimizing E-Commerce Product Classification Using Transfer Learning

    The global e-commerce market is growing rapidly, at roughly 23% per year. In 2017 there were 1.66 billion retail e-commerce users and worldwide sales amounted to 2.3 trillion US dollars, with e-retail revenues projected to reach 4.88 trillion USD by 2021. With the immense popularity that e-commerce has gained over the past few years comes the responsibility to deliver relevant results and a rich user experience. To do this, it is essential that the products on an e-commerce website be organized correctly into their respective categories. Misclassification of products leads to irrelevant results for users, which not only reflects badly on the website but can also lose customers. With e-commerce sites now opening their portals as platforms for third-party merchants to sell their products as well, maintaining consistency in product categorization becomes difficult, so automating this process is of great value. Automating on the basis of text alone can lead to discrepancies, since the website, its various merchants, and its users may all use different terminologies for a product and its category; using images therefore becomes a plausible solution. Images are best handled with deep learning in the form of convolutional neural networks. This is a computationally expensive task, and in order to keep the accuracy of a traditional convolutional neural network while reducing the hours it takes to train, this project uses a technique called transfer learning. Transfer learning refers to reusing the knowledge gained on one task for another, so that the new model does not need to be trained from scratch, reducing training time. This project uses product images belonging to five categories from an e-commerce platform and develops an algorithm that can accurately classify products into their respective categories while taking as little time as possible. The goal is first to test the performance of transfer learning against traditional convolutional networks, and then to apply transfer learning to the downloaded dataset and assess its accuracy and the time taken to classify test data that the model has never seen before.
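    A minimal sketch of the kind of transfer-learning setup this abstract describes, assuming PyTorch/torchvision, an ImageNet-pretrained ResNet50, and an ImageFolder-style directory of product images split into five category subfolders; the paths, batch size, and epoch count are illustrative assumptions, not the project's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # five product categories, as in the abstract

# Standard ImageNet preprocessing so the pretrained weights stay meaningful.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset location with one subfolder per category.
train_data = datasets.ImageFolder("data/products/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Load a CNN pretrained on ImageNet and freeze its convolutional backbone,
# so only the small replacement head is trained; this is what cuts training time
# relative to training a traditional CNN from scratch.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

    The same pipeline with the backbone unfrozen (and trained from random initialization) gives the "traditional convolutional network" baseline that the abstract proposes to compare against.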

    Improving Transfer Learning for Use in Multi-Spectral Data

    Recently, NASA and the European Space Agency have made observational satellite images public, primarily to foster research among university students and corporations alike. Sentinel is a European Space Agency programme that plans to place a series of seven satellites in low Earth orbit to observe land and sea patterns, and huge datasets from the Sentinel programme have recently been released. Many advancements have been made in the field of computer vision in the last decade. Krizhevsky, Sutskever and Hinton (2012) revolutionized the field of image analysis by training deep neural networks, using convolutions to obtain high accuracy on ImageNet ILSVRC, a colour image dataset of more than one million images. Convolutional neural network (CNN) architectures have undergone much improvement since then. One CNN model, ResNet or Residual Network (He, Zhang, Ren & Sun, 2015), has seen particularly wide adoption owing to its processing speed and high accuracy. ResNet is widely used for transferring features learned on ImageNet ILSVRC tasks to other image classification or object detection tasks. This concept is known in deep learning as transfer learning: a classifier is trained on a bigger, more complex task and the learning is then transferred to a smaller, more specific task. Transfer learning can often lead to good performance on new, smaller tasks, and this approach has given state-of-the-art results in several image classification problems and even in object detection (Dai, Li, He, & Sun, 2016). The real problem is that not all computer vision tasks involve regular RGB images consisting of only the red, green, and blue bands. For example, medical image analysis deals mostly with greyscale images, while most remote sensing images collected by satellites consist of multispectral bands. Transferring features learned on ImageNet ILSVRC tasks to these fields might give higher accuracy than training from scratch, but it is a fundamentally mismatched approach. Thus, there is a need for network models that can learn from single-channel or multispectral images and transfer features seamlessly to similar domains with smaller datasets. This thesis presents a study in multispectral image analysis using multiple ways of transferring features. Transfer learning is performed using one ResNet50 model trained on RGB images and another ResNet50 model trained on greyscale images alone; the dataset used to pretrain these models is a combination of images from ImageNet (Deng, Dong, Socher, Li, Li, & Fei-Fei, 2009) and EuroSAT (Helber, Bischke, Dengel, & Borth, 2017). ResNet50 was chosen because it performs very well in image processing and transfer learning, outperforming traditional techniques, while still not being computationally prohibitive to train in the context of this work. An attempt is made to classify different land-cover classes in multispectral images taken by the Sentinel-2A satellite. A key challenge of this dataset is its small number of samples, which means a CNN classifier trained from scratch on it would be highly inaccurate and overfitted.
This thesis focuses on improving the accuracy of this classifier using transfer learning, with performance measured after fine-tuning the baseline ResNet50 models described above. The experimental results show that fine-tuning the greyscale (single-channel) ResNet50 model improves accuracy slightly more than fine-tuning the RGB-trained ResNet50 model, although neither achieves outstanding results, owing to limited computational power and the small dataset available for training a large computer vision network like ResNet50. This work is a contribution towards improving classification in the domain of multispectral images typically captured by satellites. No baseline model is currently available for transferring features to single-channel or multispectral domains in the way pretrained models are available for RGB images. The contribution of this work is to build such a classifier for the multispectral domain and to extend the state of the art in these computer vision domains.
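    A minimal sketch of one way to adapt a pretrained ResNet50 to multispectral input before fine-tuning, assuming PyTorch/torchvision; the 13-band Sentinel-2 input, the 10 land-cover classes, and the weight-replication heuristic are illustrative assumptions, not the thesis's exact recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_BANDS = 13    # Sentinel-2 multispectral bands (assumed)
NUM_CLASSES = 10  # land-cover classes, e.g. EuroSAT's ten (assumed)

# Start from a pretrained ResNet50 (RGB-pretrained here; a greyscale-pretrained
# checkpoint could be loaded the same way if available).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Replace the 3-channel RGB stem with a conv layer that accepts all spectral bands.
old_conv = model.conv1
new_conv = nn.Conv2d(NUM_BANDS, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding,
                     bias=False)

with torch.no_grad():
    # One common heuristic: average the pretrained RGB filters into a single
    # "greyscale" filter and copy it to every band (rescaled), so pretrained
    # low-level features such as edges and textures carry over to the
    # multispectral input.
    grey_filters = old_conv.weight.mean(dim=1, keepdim=True)  # shape (64, 1, 7, 7)
    new_conv.weight.copy_(grey_filters.repeat(1, NUM_BANDS, 1, 1) / NUM_BANDS)

model.conv1 = new_conv
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new land-cover head

# Fine-tune the whole network at a small learning rate on the small labelled
# multispectral dataset, rather than training from scratch.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
```

    A greyscale-pretrained stem already has single-channel filters, so the replication step above becomes a straight copy per band, which is one intuition for why the thesis finds greyscale pretraining transfers slightly better to multispectral data.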

    Bayesian Joint Modelling for Object Localisation in Weakly Labelled Images

    Abstract—We address the problem of localisation of objects as bounding boxes in images and videos with weak labels. This weakly supervised object localisation problem has been tackled in the past using discriminative models where each object class is localised independently from other classes. In this paper, a novel framework based on Bayesian joint topic modelling is proposed, which differs significantly from the existing ones in that: (1) All foreground object classes are modelled jointly in a single generative model that encodes multiple object co-existence so that “explaining away” inference can resolve ambiguity and lead to better learning and localisation. (2) Image backgrounds are shared across classes to better learn varying surroundings and “push out” objects of interest. (3) Our model can be learned with a mixture of weakly labelled and unlabelled data, allowing the large volume of unlabelled images on the Internet to be exploited for learning. Moreover, the Bayesian formulation enables the exploitation of various types of prior knowledge to compensate for the limited supervision offered by weakly labelled data, as well as Bayesian domain adaptation for transfer learning. Extensive experiments on the PASCAL VOC, ImageNet and YouTube-Object video datasets demonstrate the effectiveness of our Bayesian joint model for weakly supervised object localisation.
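    A toy, hypothetical illustration of the "explaining away" effect that joint modelling exploits, not the paper's actual topic model: two candidate object classes (here "car" and "bus") can each produce similar visual evidence, and once one cause is confirmed, the posterior of the other drops. All class names and probabilities are made up for illustration.

```python
# Priors over whether each object is present in the image (assumed numbers).
P_car, P_bus = 0.3, 0.3

# P(evidence | car present, bus present): either object can generate the
# shared visual evidence (e.g. a large metallic shape in the scene).
P_e = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.9, (False, False): 0.05,
}

def posterior_car(bus_known=None):
    """P(car | evidence), optionally also conditioning on whether a bus is present."""
    num = den = 0.0
    for car in (True, False):
        for bus in (True, False):
            if bus_known is not None and bus != bus_known:
                continue
            p = ((P_car if car else 1 - P_car)
                 * (P_bus if bus else 1 - P_bus)
                 * P_e[(car, bus)])
            den += p
            if car:
                num += p
    return num / den

print(posterior_car())                 # P(car | evidence)      ~ 0.57, raised above the 0.3 prior
print(posterior_car(bus_known=True))   # P(car | evidence, bus) ~ 0.32: the bus "explains away" the evidence
```

    Modelling all foreground classes jointly lets this kind of inference happen automatically across co-occurring objects, which is one reason the joint generative model outperforms per-class discriminative localisation.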