3 research outputs found

    Graph-Driven Generative Models for Heterogeneous Multi-Task Learning

    We propose a novel graph-driven generative model that unifies multiple heterogeneous learning tasks within the same framework. The proposed model is based on the fact that heterogeneous learning tasks, which correspond to different generative processes, often rely on data with a shared graph structure. Accordingly, our model combines a graph convolutional network (GCN) with multiple variational autoencoders, thus embedding the nodes of the graph (i.e., samples for the tasks) in a uniform manner while specializing their organization and usage to different tasks. With a focus on healthcare applications (tasks), including clinical topic modeling, procedure recommendation, and admission-type prediction, we demonstrate that our method successfully leverages information across different tasks, boosting performance in all tasks and outperforming existing state-of-the-art approaches. Comment: Accepted by AAAI-2020.
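    The architecture described above pairs one shared graph encoder with per-task variational heads. Below is a minimal sketch of that idea in PyTorch; the layer sizes, two-layer GCN depth, and class names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a shared GCN encoder feeding per-task VAE heads.
# Dimensions and depth are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SharedGCNEncoder(nn.Module):
    """Embeds all graph nodes once; every task reuses these embeddings."""
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, emb_dim)

    def forward(self, x, adj_norm):
        # x: node features (N, in_dim); adj_norm: normalized adjacency (N, N)
        h = torch.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

class TaskVAEHead(nn.Module):
    """One variational head per task, specializing the shared embedding."""
    def __init__(self, emb_dim, lat_dim, out_dim):
        super().__init__()
        self.mu = nn.Linear(emb_dim, lat_dim)
        self.logvar = nn.Linear(emb_dim, lat_dim)
        self.decode = nn.Linear(lat_dim, out_dim)

    def forward(self, emb):
        mu, logvar = self.mu(emb), self.logvar(emb)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decode(z), mu, logvar  # mu/logvar feed each head's KL term
```

    In training, each head would contribute a task-specific reconstruction loss plus a KL regularizer, summed across tasks so the shared encoder benefits from every task's signal.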

    Short-Video Marketing in E-commerce: Analyzing and Predicting Consumer Response

    This study analyzes and predicts consumer viewing responses to e-commerce short videos (ESVs). We first construct a large-scale ESV dataset that contains 23,001 ESVs across 40 product categories. The dataset consists of consumer response labels, in terms of average viewing duration, and human-annotated ESV content attributes. Using the constructed dataset and a mixed-effects model, we find that product description, product demonstration, pleasure, and aesthetics are the four key determinants of ESV viewing duration. Furthermore, we design a content-based multimodal-multitask framework to predict consumer viewing responses to ESVs. We propose an information distillation module to extract the shared, special, and conflicted information from ESV multimodal features. Additionally, we employ a hierarchical multitask classification module to capture feature-level and label-level dependencies. We conduct extensive experiments to evaluate the prediction performance of our proposed framework. Taken together, our paper provides theoretical and methodological contributions to the IS and related literature.
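    For the determinant analysis, a mixed-effects regression of viewing duration on the annotated content attributes is the natural fit. The sketch below uses statsmodels and is an illustration under assumptions: the column names (viewing_duration, product_description, etc.), the CSV file, and the random intercept per product category are hypothetical, not the study's actual specification.

```python
# Hedged sketch of a mixed-effects analysis of ESV viewing duration.
# All column names and the grouping variable are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("esv_annotations.csv")  # hypothetical file: one row per ESV

model = smf.mixedlm(
    "viewing_duration ~ product_description + product_demonstration"
    " + pleasure + aesthetics",          # the four key determinants named above
    data=df,
    groups=df["product_category"],       # random intercept per product category
)
print(model.fit().summary())
```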

    Learning Perceptual Embeddings With Two Related Tasks For Joint Predictions Of Media Interestingness And Emotions

    Integrating media elements from various mediums, multimedia can express complex information in a neat and compact way. Early studies have linked different sensory presentations in multimedia with the perception of human-like concepts. Yet the richness of information in multimedia makes understanding and predicting user perceptions of multimedia content a challenging task for both the machine and the human mind. This paper presents a novel multi-task feature extraction method for accurate prediction of user perceptions of multimedia content. Unlike conventional feature extraction algorithms, which focus on perfecting a single task, the proposed model recognizes the commonality between different perceptions (e.g., interestingness and emotional impact) and attempts to jointly optimize the performance of all tasks through the uncovered commonality features. Using both a media interestingness dataset and a media emotion dataset for user perception prediction, the proposed model simultaneously characterizes the individualities of each task and captures the commonalities shared by both, achieving better prediction accuracy than competing algorithms on real-world datasets from two related tasks: the MediaEval 2017 Predicting Media Interestingness Task and the MediaEval 2017 Emotional Impact of Movies Task.
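    A minimal way to realize this kind of joint optimization is a shared feature extractor (the commonality) with one lightweight head per task (the individuality), trained on a summed loss. The sketch below assumes PyTorch; the feature sizes, head shapes, and equal loss weighting are illustrative assumptions rather than the paper's method.

```python
# Sketch only: shared commonality features with per-task heads and a joint loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

shared = nn.Sequential(nn.Linear(512, 128), nn.ReLU())  # commonality feature extractor
interest_head = nn.Linear(128, 1)                       # interestingness (binary, assumed)
emotion_head = nn.Linear(128, 2)                        # e.g., valence/arousal (assumed)

opt = torch.optim.Adam(
    [*shared.parameters(), *interest_head.parameters(), *emotion_head.parameters()],
    lr=1e-3,
)

def joint_step(x_int, y_int, x_emo, y_emo):
    """One optimization step over a batch from each task; equal weights assumed."""
    loss = F.binary_cross_entropy_with_logits(
        interest_head(shared(x_int)).squeeze(-1), y_int
    ) + F.mse_loss(emotion_head(shared(x_emo)), y_emo)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```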