168 research outputs found

    Compulsive Smartphone Use: The Roles of Flow, Reinforcement Motives, and Convenience

    With its rapidly growing penetration, the smartphone has become highly prevalent in recent years, and compulsive smartphone use has emerged as a rising concern. Given that research on compulsive smartphone use is scarce in the information systems literature, this paper aims to identify its significant determinants and thereby enrich theoretical development in this area. In particular, we incorporate flow, reinforcement motives (i.e., instant gratification and mood regulation), and convenience in the research model to examine their influences on compulsive smartphone use. We conduct an empirical online survey with 384 valid responses to assess the model. The findings show that flow and reinforcement motives have direct and significant effects on compulsive use. Convenience affects compulsive use indirectly through flow, while flow further mediates the effects of reinforcement motives on compulsive use. Implications for both research and practice are offered.
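
    As a rough illustration of the mediation structure the abstract describes (convenience affecting compulsive use indirectly through flow), the following is a minimal regression-based mediation check on synthetic survey-style scores. It is a toy sketch, not the authors' measurement or structural model; the variable names and coefficients are made up for illustration only.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in scores for a Baron-Kenny style mediation check of
# the hypothesized path: convenience -> flow -> compulsive use.
rng = np.random.default_rng(1)
n = 384                                                # sample size from the abstract
convenience = rng.normal(size=n)
flow = 0.5 * convenience + rng.normal(size=n)          # mediator (illustrative effect)
compulsive = 0.6 * flow + rng.normal(size=n)           # outcome (illustrative effect)

# Path a: convenience predicts flow.
a = sm.OLS(flow, sm.add_constant(convenience)).fit()
# Paths b and c': compulsive use regressed on flow and convenience jointly.
bc = sm.OLS(compulsive, sm.add_constant(np.column_stack([flow, convenience]))).fit()

print("a  (convenience -> flow):", round(a.params[1], 3))
print("b  (flow -> compulsive | convenience):", round(bc.params[1], 3))
print("c' (direct convenience effect):", round(bc.params[2], 3))
```

    A small b with a near-zero c' would be consistent with the abstract's claim that flow carries the effect of convenience on compulsive use.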

    Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues

    Building an intelligent dialogue system that can select a proper response according to a multi-turn context is a highly challenging task. Existing studies focus on building a context-response matching model with various neural architectures or PLMs and typically learn with a single response prediction task. These approaches overlook many potential training signals contained in dialogue data, which might be beneficial for context understanding and yield better features for response prediction. Moreover, responses retrieved by dialogue systems supervised in the conventional way still face critical challenges, including incoherence and inconsistency. To address these issues, in this paper we propose learning a context-response matching model with auxiliary self-supervised tasks designed for dialogue data on top of pre-trained language models. Specifically, we introduce four self-supervised tasks, including next session prediction, utterance restoration, incoherence detection, and consistency discrimination, and jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner. In this way, the auxiliary tasks can guide the learning of the matching model toward a better local optimum and a more proper response selection. Experimental results on two benchmarks indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection in retrieval-based dialogues, and our model achieves new state-of-the-art results on both datasets. Comment: 10 pages.
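
    A minimal sketch of the multi-task setup the abstract describes: a shared PLM encoder with a response-matching head plus auxiliary classification heads, trained on a weighted sum of losses. The model class, head names, loss weights, and batch format are assumptions for illustration, not the authors' released implementation; utterance restoration is a token-level task and is omitted here for brevity.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskMatcher(nn.Module):
    """Shared PLM encoder with a main matching head and auxiliary
    self-supervised heads (illustrative sketch only)."""
    def __init__(self, plm_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm_name)
        hidden = self.encoder.config.hidden_size
        self.match_head = nn.Linear(hidden, 2)        # context-response matching
        self.nsp_head = nn.Linear(hidden, 2)          # next session prediction
        self.incoherence_head = nn.Linear(hidden, 2)  # incoherence detection
        self.consistency_head = nn.Linear(hidden, 2)  # consistency discrimination

    def forward(self, input_ids, attention_mask, head):
        # Use the [CLS] representation for sequence-level classification.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return head(cls)

def multi_task_step(model, batches, weights=(1.0, 0.5, 0.5, 0.5)):
    """Combine the main matching loss with weighted auxiliary losses.
    `batches` is a list of (input_ids, attention_mask, labels) tuples,
    one per task, in the same order as the heads below."""
    ce = nn.CrossEntropyLoss()
    heads = [model.match_head, model.nsp_head,
             model.incoherence_head, model.consistency_head]
    loss = torch.tensor(0.0)
    for (ids, mask, labels), head, w in zip(batches, heads, weights):
        logits = model(ids, mask, head)
        loss = loss + w * ce(logits, labels)
    return loss
```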

    Knowledge from Large-Scale Protein Contact Prediction Models Can Be Transferred to the Data-Scarce RNA Contact Prediction Task

    RNA, whose functionality is largely determined by its structure, plays an important role in many biological activities. Predicting the pairwise structural proximity between the nucleotides of an RNA sequence can characterize the structural information of the RNA. Historically, this problem has been tackled by machine learning models using expert-engineered features and trained on scarce labeled datasets. Here, we find that the knowledge learned by a protein-coevolution Transformer-based deep neural network can be transferred to the RNA contact prediction task. As protein datasets are orders of magnitude larger than those for RNA contact prediction, our findings and the resulting framework greatly reduce the data scarcity bottleneck. Experiments confirm that RNA contact prediction through transfer learning from a publicly available protein model is greatly improved. Our findings indicate that the learned structural patterns of proteins can be transferred to RNAs, opening up potential new avenues for research. Comment: Minor revision. The code is available at https://github.com/yiren-jian/CoT-RNA-Transfe
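
    A hedged sketch of the transfer-learning recipe the abstract outlines: freeze a contact-prediction backbone pretrained on protein data and fine-tune a lightweight head on the scarce RNA contact data. The backbone interface, `pair_dim` attribute, feature shapes, and data loader are placeholders, not the authors' released code (see the linked repository for that).

```python
import torch
import torch.nn as nn

class ContactHead(nn.Module):
    """Small 2D head mapping the backbone's pairwise features to
    RNA contact probabilities (illustrative)."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, pair_feats):                       # (B, C, L, L)
        return torch.sigmoid(self.conv(pair_feats)).squeeze(1)  # (B, L, L)

def finetune_on_rna(backbone, rna_loader, epochs=10, lr=1e-4):
    """Freeze the protein-pretrained backbone and train only the new
    head on the small RNA contact dataset."""
    for p in backbone.parameters():
        p.requires_grad = False                          # keep protein knowledge fixed
    head = ContactHead(in_channels=backbone.pair_dim)    # pair_dim: assumed attribute
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        for rna_feats, contact_map in rna_loader:        # hypothetical batch format
            pair_feats = backbone(rna_feats)             # reuse pretrained pairwise features
            loss = bce(head(pair_feats), contact_map.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```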

    Deep Hashing Based Fusing Index Method for Large-Scale Image Retrieval

    Hashing has been widely deployed for Approximate Nearest Neighbor (ANN) search in large-scale image retrieval to address the problems of storage and retrieval efficiency. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash code learning with deep neural networks. Even though deep hashing has shown better performance than traditional hashing methods with handcrafted features, the compact hash code learned by a single deep hashing network may not fully represent an image. In this paper, we propose a novel hashing indexing method, called the Deep Hashing based Fusing Index (DHFI), to generate a more compact hash code with stronger expression ability and discrimination capability. In our method, we train two deep hashing subnetworks with different architectures and fuse the hash codes they generate into a unified representation of each image. Experiments on two real datasets show that our method can outperform state-of-the-art image retrieval methods.
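
    A minimal sketch of the fusion idea described above, assuming each subnetwork already outputs a binary code per image: concatenating the two codes and ranking by Hamming distance is one plausible reading of the fused index, not the paper's exact indexing procedure. The code lengths and random stand-in data are illustrative.

```python
import numpy as np

def fuse_codes(codes_a, codes_b):
    """Concatenate binary codes (0/1 arrays) from two deep hashing
    subnetworks into one fused code per image."""
    return np.concatenate([codes_a, codes_b], axis=1)

def hamming_rank(query_code, db_codes, top_k=10):
    """Rank database images by Hamming distance to the fused query code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)[:top_k]

# Example with random stand-in codes: 48-bit and 64-bit subnetwork outputs.
rng = np.random.default_rng(0)
db_a = rng.integers(0, 2, size=(1000, 48))
db_b = rng.integers(0, 2, size=(1000, 64))
db = fuse_codes(db_a, db_b)
query = fuse_codes(rng.integers(0, 2, size=(1, 48)),
                   rng.integers(0, 2, size=(1, 64)))[0]
print(hamming_rank(query, db, top_k=5))
```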