564 research outputs found

    Depth Based Permutation Test For General Differences In Two Multivariate Populations

    Given two p-dimensional data sets, it is of interest to test whether they come from a common population distribution. We propose a practical, effective, and easy-to-implement procedure for this testing problem: a permutation test based on the depth of an observation relative to a population distribution. The proposed test is shown to be consistent. A small Monte Carlo simulation was conducted to evaluate the power of the proposed test, and the test is applied to several numerical examples.

    The Economic Consequences of Financial Misreporting: Evidence from Employee Responses

    This study investigates the economic consequences of financial misreporting as reflected in employee responses. Specifically, we examine two employee reactions: (1) withdrawing their human capital and (2) reducing their holdings of employer stock, in both the misreporting period and the post-restatement period. We find an increase in employee turnover and a decrease in employee holdings of employer stock in the post-restatement period (the restatement effect), and some evidence that employees begin to react during the period of misreporting (the misreporting effect). We also find some evidence that the misreporting effect varies with employee tenure in the misreporting period, and that the restatement effect varies with the severity of misreporting in the post-restatement period. We further show that our results are not driven by labor demand, an increased likelihood of executive turnover, declining stock prices, or internal control weakness disclosures, and that they are robust to matched-sample estimation. Overall, our study provides evidence of the human capital costs of financial misreporting to misreporting firms, shedding new light on the negative consequences of accounting failures.

    A Graph-Neural-Network-Based Social Network Recommendation Algorithm Using High-Order Neighbor Information

    Social-network-based recommendation algorithms leverage rich social network information to alleviate the problem of data sparsity and boost recommendation performance. However, traditional social-network-based recommendation algorithms either ignore high-order collaborative signals or consider only the first-order collaborative signal when learning users' and items' latent representations, resulting in suboptimal recommendation performance. In this paper, we propose a graph neural network (GNN)-based social recommendation model that utilizes the GNN framework to capture high-order collaborative signals in the process of learning the latent representations of users and items. Specifically, we formulate the representations of entities, i.e., users and items, by stacking multiple embedding propagation layers to recursively aggregate multi-hop neighborhood information on both the user–item interaction graph and the social network graph. Hence, the collaborative signals hidden in both graphs are explicitly injected into the final representations of entities. Moreover, we ease the training process of the proposed model and alleviate overfitting by adopting a lightweight GNN framework that retains only the neighborhood aggregation component and abandons the feature transformation and nonlinear activation components. Experimental results on two real-world datasets show that our proposed GNN-based social recommendation method outperforms state-of-the-art recommendation algorithms.
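    The lightweight propagation rule described above (aggregation only, no feature transform or nonlinearity) can be sketched as follows, assuming LightGCN-style symmetric normalization; the paper's exact normalization, layer count, and fusion of the two graphs may differ, and all names here are illustrative.

```python
# Sketch of lightweight GNN embedding propagation (illustrative).
import numpy as np

def normalize_adj(adj):
    """Symmetrically normalized adjacency: D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate(embeddings, adj_norm, n_layers=3):
    """Stack embedding-propagation layers with no feature transformation
    or nonlinear activation, then average the per-layer outputs so that
    multi-hop (high-order) neighborhood signals enter the final embedding."""
    layers = [embeddings]
    for _ in range(n_layers):
        layers.append(adj_norm @ layers[-1])  # one hop of aggregation
    return np.mean(layers, axis=0)
```

    In a two-graph model, the same propagation could be run on both the user–item interaction graph and the social graph, with the resulting embeddings combined downstream.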

    Graph Neural Networks Boosted Personalized Tag Recommendation Algorithm

    Personalized tag recommender systems recommend a set of tags for items based on users' historical behaviors and play an important role in collaborative tagging systems. However, traditional personalized tag recommendation methods cannot guarantee that the collaborative signal hidden in the interactions among entities is effectively encoded when learning the representations of entities, resulting in insufficient expressive capacity for characterizing the preferences or attributes of entities. In this paper, we propose a graph-neural-networks-boosted personalized tag recommendation model, which integrates graph neural networks into the pairwise interaction tensor factorization model. Specifically, we consider two types of interaction graphs (i.e., the user–tag interaction graph and the item–tag interaction graph) that are derived from the tag assignments. For each interaction graph, we exploit graph neural networks to capture the collaborative signal encoded in the graph and integrate it into the learning of entity representations by transmitting and assembling the representations of entity neighbors along the interaction graphs. In this way, we explicitly capture the collaborative signal, resulting in rich and meaningful representations of entities. Experimental results on real-world datasets show that our proposed model outperforms traditional tag recommendation models.
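    The pairwise interaction tensor factorization (PITF) backbone mentioned above scores a (user, item, tag) triple as the sum of a user–tag term and an item–tag term; a minimal sketch follows, where the GNN-refined embeddings would simply replace the raw embeddings. Names and dimensions are illustrative.

```python
# Sketch of PITF-style tag scoring and ranking (illustrative).
import numpy as np

def pitf_score(user_emb, item_emb, tag_user_emb, tag_item_emb):
    """Score a (user, item, tag) triple as the sum of the user-tag and
    item-tag pairwise interactions; higher scores rank the tag earlier."""
    return user_emb @ tag_user_emb + item_emb @ tag_item_emb

def recommend_tags(user_emb, item_emb, tag_user_embs, tag_item_embs, k=3):
    """Return indices of the top-k tags for the given user-item pair,
    scoring all tags at once with matrix-vector products."""
    scores = tag_user_embs @ user_emb + tag_item_embs @ item_emb
    return np.argsort(scores)[::-1][:k]
```

    Note that each tag keeps two embeddings, one for interacting with users and one for interacting with items, which is what makes the factorization "pairwise".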

    Human Action Recognition Using Hybrid Deep Evolving Neural Networks


    The Clinical Relevance of Serum NDKA, NMDA, PARK7, and UFDP Levels with Phlegm-Heat Syndrome and Treatment Efficacy Evaluation of Traditional Chinese Medicine in Acute Ischemic Stroke

    Following internationally established Patient-Reported Outcome (PRO) methods based on patient reports, and referring to U.S. Food and Drug Administration (FDA) guidance, scholars developed a stroke PRO instrument consistent with China's national conditions, through which stroke patients' self-reported experience was introduced into the clinical efficacy evaluation system for stroke. The "Ischemic Stroke TCM Syndrome Factor Diagnostic Scale (ISTSFDS)" and the "Ischemic Stroke TCM Syndrome Factor Evaluation Scale (ISTSFES)" were developed under the Major State Basic Research Development Program of China (973 Program) (number 2003CB517102). The ISTSFDS helps classify and diagnose TCM syndromes reasonably and objectively through the use of syndrome factors. Six syndrome factors are included in the ISTSFDS and ISTSFES: internal-wind syndrome, internal-fire syndrome, phlegm-dampness syndrome, blood-stasis syndrome, qi-deficiency syndrome, and yin-deficiency syndrome. A TCM syndrome factor is considered present if its ISTSFDS score is greater than or equal to 10. In our study, we recruited patients with phlegm-heat syndrome, defined as meeting the ISTSFDS diagnoses of both "phlegm-dampness" and "internal-fire." The ISTSFES was used to assess syndrome severity; in our study it assessed the severity of phlegm-heat syndrome (phlegm-heat syndrome score = phlegm-dampness syndrome score + internal-fire syndrome score).
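    The two scoring rules stated above are simple enough to state as code: a syndrome factor is deemed present when its ISTSFDS score reaches 10, and phlegm-heat severity on the ISTSFES is the sum of the two component scores. Function names are illustrative, not from the scales themselves.

```python
# Sketch of the ISTSFDS/ISTSFES scoring rules stated in the abstract.

def syndrome_present(istsfds_score, threshold=10):
    """A TCM syndrome factor is considered present if its ISTSFDS score
    is greater than or equal to the threshold (10 in the study)."""
    return istsfds_score >= threshold

def phlegm_heat_severity(phlegm_dampness_score, internal_fire_score):
    """Phlegm-heat severity per ISTSFES: the sum of the phlegm-dampness
    and internal-fire syndrome scores."""
    return phlegm_dampness_score + internal_fire_score
```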

    Alternative Pseudo-Labeling for Semi-Supervised Automatic Speech Recognition

    When labeled data is insufficient, semi-supervised learning with the pseudo-labeling technique can significantly improve the performance of automatic speech recognition. However, pseudo-labels are often noisy, containing numerous incorrect tokens, and taking noisy labels as ground truth in the loss function results in suboptimal performance. Previous works attempted to mitigate this issue by either filtering out the noisiest pseudo-labels or improving the overall quality of pseudo-labels. While these methods are effective to some extent, it is unrealistic to entirely eliminate incorrect tokens in pseudo-labels. In this work, we propose a novel framework named alternative pseudo-labeling to tackle the issue of noisy pseudo-labels from the perspective of the training objective. The framework comprises several components. First, a generalized CTC loss function is introduced to handle noisy pseudo-labels by accepting alternative tokens at the positions of incorrect tokens. Applying this loss function in pseudo-labeling requires detecting the incorrect tokens in the predicted pseudo-labels; we adopt a confidence-based error detection method that identifies incorrect tokens by comparing their confidence scores with a given threshold, which requires the confidence scores to be discriminative. Hence, the second proposed technique is a contrastive CTC loss function that widens the confidence gap between correctly and incorrectly predicted tokens, thereby improving the error detection ability. Additionally, obtaining satisfactory performance with confidence-based error detection typically requires extensive threshold tuning; instead, we propose an automatic thresholding method that uses labeled data as a proxy for determining the threshold, sparing the pain of manual tuning.
    Comment: Accepted by IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 202
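    The confidence-based error detection and automatic thresholding steps described above can be sketched as follows; the generalized and contrastive CTC losses themselves are not reproduced here, the selection criterion for the threshold is assumed to be token-level detection accuracy on the labeled proxy data, and all names are illustrative.

```python
# Sketch of confidence-based error detection with automatic thresholding.
import numpy as np

def flag_incorrect(confidences, threshold):
    """Mark tokens whose confidence falls below the threshold as likely
    incorrect; the generalized CTC loss would accept alternative tokens
    at exactly these positions."""
    return confidences < threshold

def auto_threshold(labeled_conf, labeled_is_error, candidates=None):
    """Pick the threshold that best separates correct from incorrect
    tokens on labeled data, used as a proxy for the unlabeled set."""
    if candidates is None:
        candidates = np.linspace(0.0, 1.0, 101)
    best_t, best_acc = 0.5, -1.0
    for t in candidates:
        acc = np.mean((labeled_conf < t) == labeled_is_error)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```

    The key idea is that the threshold is never tuned by hand: it is fit once on labeled data, where token correctness is known, and then applied to pseudo-labels.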

    Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey

    With the urgent demand for generalized deep models, many pre-trained big models have been proposed, such as BERT, ViT, and GPT. Inspired by the success of these models in single domains (such as computer vision and natural language processing), multi-modal pre-trained big models have also drawn increasing attention in recent years. In this work, we give a comprehensive survey of these models and hope this paper provides new insights and helps new researchers track the most cutting-edge works. Specifically, we first introduce the background of multi-modal pre-training by reviewing conventional deep learning and pre-training works in natural language processing, computer vision, and speech. Then, we introduce the task definition, key challenges, and advantages of multi-modal pre-trained models (MM-PTMs), and discuss MM-PTMs with a focus on data, objectives, network architectures, and knowledge-enhanced pre-training. After that, we introduce the downstream tasks used for the validation of large-scale MM-PTMs, including generative, classification, and regression tasks. We also give visualization and analysis of the model parameters and results on representative downstream tasks. Finally, we point out possible research directions for this topic that may benefit future works. In addition, we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models: https://github.com/wangxiao5791509/MultiModal_BigModels_Survey
    Comment: Accepted by Machine Intelligence Researc