The Impact of Corporate Governance on Earnings Management: Empirical Evidence from Listed Companies in Hong Kong
This paper studies the relationship between corporate governance structure and earnings management among listed companies in Hong Kong. The sample comprises governance-structure information and financial data for 911 companies listed on the Hong Kong Stock Exchange over the three-year period 2014 to 2016. The modified Jones model was applied to measure the level of discretionary accruals, and the absolute value of discretionary accruals was then used to represent the level of earnings management. The paper concentrates on the impact of seven corporate governance factors on earnings management, estimated by regression on three panel data models. Four of these factors were found to be significantly correlated with the level of earnings management: board independence, audit committee size, and the frequency of audit committee meetings had a significant negative relationship with earnings management, while the meeting frequency of the board of directors showed a significant positive correlation with earnings management. The main contribution of this paper is to extend the body of knowledge on the impact of corporate governance on earnings management among listed companies in Hong Kong, and to offer several practical implications.
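The estimation step the abstract describes can be sketched as follows. This is a minimal illustration of the standard modified Jones model, not code from the paper: total accruals scaled by lagged assets are regressed on the usual three regressors, and the residuals are the discretionary accruals whose absolute value proxies for earnings management. The function name and array-based interface are our own assumptions.

```python
import numpy as np

def discretionary_accruals(ta, rev_change, rec_change, ppe, assets_lag):
    """Estimate |discretionary accruals| with the modified Jones model.

    All inputs are 1-D arrays over firm-year observations:
      ta          -- total accruals
      rev_change  -- change in revenues (dREV)
      rec_change  -- change in receivables (dREC)
      ppe         -- gross property, plant and equipment
      assets_lag  -- lagged total assets A_{t-1} (the scaling variable)
    """
    # Regressors of the modified Jones model, each scaled by A_{t-1}
    X = np.column_stack([
        1.0 / assets_lag,                        # 1 / A_{t-1}
        (rev_change - rec_change) / assets_lag,  # (dREV - dREC) / A_{t-1}
        ppe / assets_lag,                        # PPE / A_{t-1}
    ])
    y = ta / assets_lag                          # scaled total accruals

    # OLS fit; the fitted part is the "non-discretionary" accruals
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef                     # discretionary accruals

    # The paper uses the absolute value as the earnings-management level
    return np.abs(residuals)
```

In practice the model is usually estimated per industry-year rather than on the pooled panel; this sketch fits a single cross-section for brevity.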
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting
Recent studies have revealed that grammatical error correction methods in the
sequence-to-sequence paradigm are vulnerable to adversarial attack, and simply
utilizing adversarial examples in the pre-training or post-training process can
significantly enhance the robustness of GEC models to certain types of attack
without suffering too much performance loss on clean data. In this paper, we
further conduct a thorough robustness evaluation of cutting-edge GEC methods
against four different types of adversarial attacks and propose a simple yet
very effective Cycle Self-Augmenting (CSA) method accordingly. By leveraging the
augmenting data from the GEC models themselves in the post-training process and
introducing regularization data for cycle training, our proposed method can
effectively improve the model robustness of well-trained GEC models with only a
few more training epochs as an extra cost. More concretely, further training on
the regularization data can prevent the GEC models from over-fitting on
easy-to-learn samples and thus can improve the generalization capability and
robustness towards unseen data (adversarial noise/samples). Meanwhile, the
self-augmented data can provide more high-quality pseudo pairs to improve model
performance on the original testing data. Experiments on four benchmark
datasets and seven strong models indicate that our proposed training method can
significantly enhance robustness against four types of attacks without using
purposely built adversarial examples in training. Evaluation results on clean
data further confirm that our proposed CSA method significantly improves the
performance of four baselines and yields nearly comparable results with other
state-of-the-art models. Our code is available at
https://github.com/ZetangForward/CSA-GEC
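The self-augmenting loop the abstract describes can be illustrated with a toy sketch. This is our own schematic, not the paper's implementation: a trivial dictionary-based corrector stands in for a trained seq2seq GEC model, sentences the model leaves unchanged become regularization (identity) pairs that discourage over-correction, and sentences it changes become self-augmented pseudo pairs. The function names are assumptions for illustration.

```python
def correct(model, sentence):
    """Toy stand-in for a GEC model: word-level dictionary corrections."""
    return " ".join(model.get(word, word) for word in sentence.split())

def cycle_self_augment(model, unlabeled, rounds=1):
    """Sketch of the Cycle Self-Augmenting idea.

    Each round, the current model labels unlabeled sentences to build
    pseudo pairs. Unchanged sentences yield identity pairs used as
    regularization data (guarding against over-fitting on easy samples);
    changed sentences yield self-augmented (source, correction) pairs.
    """
    augmented, regularization = [], []
    for _ in range(rounds):
        for src in unlabeled:
            hyp = correct(model, src)
            if hyp == src:
                regularization.append((src, src))  # keep-as-is signal
            else:
                augmented.append((src, hyp))       # pseudo correction pair
        # A real implementation would fine-tune the model on
        # augmented + regularization here before the next cycle;
        # the toy dictionary model is static, so we only collect pairs.
    return augmented, regularization
```

For example, with `model = {"teh": "the"}` the sentence "teh cat" produces the pseudo pair ("teh cat", "the cat"), while an already-clean sentence contributes an identity pair to the regularization set.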