    Impacting Factors of Postgraduates’ Behavioral Intention and Satisfaction in Using Online Learning in Chengdu University

    Purpose: The study aims to investigate the factors impacting behavioral intention and satisfaction of postgraduate students using online learning, based on the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), and the Information Systems Success (ISS) model. Research design, data and methodology: A quantitative method was applied, distributing a questionnaire to 500 students of Chengdu University, China. Judgmental sampling, stratified random sampling, and convenience sampling were used as sampling techniques. Prior to data collection, the item-objective congruence (IOC) index and Cronbach's alpha reliability test were used to validate the instrument. For the data analysis, confirmatory factor analysis (CFA) and structural equation modeling (SEM) were employed to measure factor loadings, reliability, validity, and goodness-of-fit indices. Results: Behavioral intention had the strongest significant effect on satisfaction; social influence, perceived ease of use, effort expectancy, and perceived usefulness significantly affected behavioral intention, in descending order of effect size. Additionally, perceived ease of use significantly affected perceived usefulness. In contrast, the relationship between self-efficacy and behavioral intention was not supported. Conclusions: Academic researchers and school leaders can apply the important factors impacting behavioral intention and satisfaction when selecting an online learning system to meet students' needs and learning objectives.
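    The abstract's reliability check (Cronbach's alpha) is a standard closed-form computation over item scores. A minimal sketch on hypothetical 5-point Likert data (the score matrix below is illustrative, not the study's data):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point Likert responses: 5 respondents x 3 items
    scores = np.array([
        [4, 5, 4],
        [3, 3, 4],
        [5, 5, 5],
        [2, 3, 2],
        [4, 4, 5],
    ])
    alpha = cronbach_alpha(scores)  # ≈ 0.92 for this toy matrix
    ```

    Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the threshold such instrument-validation steps typically apply.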

    Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos

    Despite rapid advances in face recognition, there remains a clear gap between the performance of still-image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of curating diverse large-scale video datasets. This paper addresses both of those challenges through an image-to-video feature-level domain adaptation approach that learns discriminative video frame representations. The framework utilizes large-scale unlabeled video data to reduce the gap between the domains while transferring discriminative knowledge from large-scale labeled still images. Given a face recognition network that is pretrained in the image domain, the adaptation is achieved by (i) distilling knowledge from the network to a video adaptation network through feature matching, (ii) performing feature restoration through synthetic data augmentation, and (iii) learning a domain-invariant feature through a domain adversarial discriminator. We further improve performance through a discriminator-guided feature fusion that boosts high-quality frames while suppressing those degraded by video domain-specific factors. Experiments on the YouTube Faces and IJB-A datasets demonstrate that each module contributes to our feature-level domain adaptation framework and substantially improves video face recognition performance to achieve state-of-the-art accuracy. We demonstrate qualitatively that the network learns to suppress diverse artifacts in videos such as pose, illumination, or occlusion without being explicitly trained for them. Comment: accepted for publication at International Conference on Computer Vision (ICCV) 201
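    Two of the abstract's ingredients, the feature-matching distillation objective of step (i) and the discriminator-guided fusion of per-frame features, can be sketched in a few lines. This is a toy numpy illustration of the general idea only, not the paper's implementation: the squared-error matching loss, the softmax weighting, and all dimensions are assumptions.

    ```python
    import numpy as np

    def feature_matching_loss(student_feats: np.ndarray,
                              teacher_feats: np.ndarray) -> float:
        """Mean squared distance between the adapted (student) embeddings and
        the pretrained image network's (teacher) embeddings."""
        return float(np.mean((student_feats - teacher_feats) ** 2))

    def discriminator_guided_fusion(frame_feats: np.ndarray,
                                    quality_logits: np.ndarray) -> np.ndarray:
        """Fuse per-frame embeddings into one video descriptor, up-weighting
        frames a (hypothetical) quality discriminator scores highly."""
        w = np.exp(quality_logits - quality_logits.max())
        w /= w.sum()                                   # softmax weights over frames
        fused = (w[:, None] * frame_feats).sum(axis=0)
        return fused / np.linalg.norm(fused)           # unit-length descriptor

    rng = np.random.default_rng(0)
    feats = rng.normal(size=(4, 8))          # 4 frames, 8-D embeddings (toy sizes)
    logits = np.array([2.0, -1.0, 0.5, -3.0])
    video_desc = discriminator_guided_fusion(feats, logits)
    ```

    The fused descriptor leans toward the frames with the highest quality logits, which mirrors the abstract's claim that fusion boosts high-quality frames while suppressing degraded ones.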