
    Visual Co-occurrence Learning using Denoising Autoencoders

    Modern recommendation systems leverage recent advances in deep neural networks to provide better recommendations. Beyond making accurate recommendations to individual users, we are interested in recommending items that are complementary to a set of other items. More specifically, given a user query containing items from different categories, we seek to recommend one or more items from our inventory based on latent representations of their visual appearance. For this purpose, we use a denoising autoencoder (DAE). We investigate the capacity of DAEs to remove noise from corrupted inputs by predicting their uncorrupted counterparts, and show that, with a suitable corruption process, they can serve as regular prediction models. Furthermore, we experimentally measure two of their properties: their capacity to predict any potentially missing variable from their inputs, and their ability to predict multiple missing variables simultaneously given a limited amount of information. Finally, we experiment with using DAEs to recommend fashion items that are jointly fashionable with a user query. Latent representations of the items in the query are fed into a DAE to predict the latent representation of the ideal item to recommend; this ideal item is then matched to a real item from our inventory, which we recommend to the user.
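    The recommendation mechanism described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the item latent vectors, network sizes, corruption scheme, and synthetic "outfit" data are all assumptions. The corruption process zeroes out one whole category's latent vector per training example, so the DAE learns to predict any missing item from the others; at query time, the predicted latent is matched to the nearest real inventory item.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: n_cat item categories, each item represented by a
    # d-dimensional visual latent vector (stand-in for real CNN embeddings).
    n_cat, d = 4, 8
    x_dim, h_dim = n_cat * d, 32

    # Toy inventory: 20 candidate items per category.
    inventory = [rng.normal(size=(20, d)) for _ in range(n_cat)]

    def sample_outfits(n):
        # Toy co-occurrence: items in one outfit share a common style vector.
        style = rng.normal(size=(n, d))
        return np.concatenate(
            [style + 0.1 * rng.normal(size=(n, d)) for _ in range(n_cat)], axis=1)

    # Single-hidden-layer DAE parameters.
    W1 = rng.normal(scale=0.1, size=(x_dim, h_dim)); b1 = np.zeros(h_dim)
    W2 = rng.normal(scale=0.1, size=(h_dim, x_dim)); b2 = np.zeros(x_dim)

    def forward(x):
        h = np.tanh(x @ W1 + b1)
        return h, h @ W2 + b2

    # Training: corrupt by masking one category, reconstruct the full outfit.
    lr, batch = 0.05, 64
    for step in range(2000):
        x = sample_outfits(batch)
        corrupt = x.copy()
        drop = rng.integers(0, n_cat, size=batch)
        for i, c in enumerate(drop):
            corrupt[i, c * d:(c + 1) * d] = 0.0
        h, out = forward(corrupt)
        err = out - x                                # reconstruction error
        gW2 = h.T @ err / batch; gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)             # backprop through tanh
        gW1 = corrupt.T @ dh / batch; gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    def recommend(query, missing_cat):
        """Mask the missing category, predict its latent, match to inventory."""
        q = query.copy()
        q[missing_cat * d:(missing_cat + 1) * d] = 0.0
        _, out = forward(q[None, :])
        pred = out[0, missing_cat * d:(missing_cat + 1) * d]
        dists = np.linalg.norm(inventory[missing_cat] - pred, axis=1)
        return int(np.argmin(dists))
    ```

    The same masking scheme extends naturally to the multiple-missing-variable setting mentioned in the abstract: zeroing several categories at once during training teaches the DAE to fill in more than one item from whatever information remains.
    
    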