    Leveraging the multimodal information from video content for video recommendation

    Since the popularisation of media streaming, video streaming services have continually acquired new video content to realise its potential profit. Newly added content must therefore be handled appropriately so that it can be recommended to suitable users. This dissertation addresses the new-item cold-start problem by exploring the potential of various deep learning features to provide video recommendations. The deep learning features investigated capture visual appearance, as well as audio and motion information, from video content. Different fusion methods are also explored to evaluate how well these feature modalities can be combined to fully exploit the complementary information they capture. Experiments on a real-world video dataset for movie recommendations show that deep learning features outperform hand-crafted features. In particular, recommendations generated with deep learning audio features and action-centric deep learning features are superior to those generated with Mel-frequency cepstral coefficient (MFCC) and state-of-the-art improved dense trajectory (iDT) features. It was also found that combining various deep learning features with textual metadata and hand-crafted features yields a significant improvement in recommendations, compared to combining only deep learning and hand-crafted features.

    Dissertation (MEng (Computer Engineering))--University of Pretoria, 2021.
    The MultiChoice Research Chair of Machine Learning at the University of Pretoria
    UP Postgraduate Masters Research bursary
    Electrical, Electronic and Computer Engineering
    MEng (Computer Engineering)
    Unrestricted
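    The cold-start approach the abstract describes can be illustrated with a minimal sketch: fuse per-modality content features for a new item and rank catalogue items by similarity. This is not the dissertation's actual pipeline; all dimensions, the random stand-in features, and the concatenation ("early") fusion choice are illustrative assumptions.

    ```python
    import numpy as np

    # Illustrative stand-ins for extracted content features; the dissertation's
    # visual, audio, and motion deep learning features would go here instead.
    rng = np.random.default_rng(0)

    def l2_normalise(x):
        """Normalise each row to unit length so cosine similarity reduces to a dot product."""
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    def fuse(*modalities):
        """Early fusion: L2-normalise each modality, then concatenate along the feature axis."""
        return np.concatenate([l2_normalise(m) for m in modalities], axis=-1)

    # 100 catalogue items with three modalities (e.g. visual, audio, motion).
    visual = rng.normal(size=(100, 512))
    audio = rng.normal(size=(100, 128))
    motion = rng.normal(size=(100, 256))
    catalogue = fuse(visual, audio, motion)

    # A new (cold-start) item has no interaction history, but its content
    # features can be fused in exactly the same way...
    new_item = fuse(rng.normal(size=(1, 512)),
                    rng.normal(size=(1, 128)),
                    rng.normal(size=(1, 256)))

    # ...so it can be related to existing items by cosine similarity.
    scores = l2_normalise(catalogue) @ l2_normalise(new_item).T
    nearest = np.argsort(-scores.ravel())[:5]  # five most similar catalogue items
    ```

    Concatenation is only one of the fusion strategies the abstract alludes to; late fusion (combining per-modality similarity scores) is an equally common alternative.
    
    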