109 research outputs found

    The Effect Of Tax Rate Change On Dividend Payout

    President Bush’s 2003 tax cut has revived the topic of dividend policy. Dividend payout depends on many factors, such as earnings, size, and growth, in addition to the tax rate. To study the effect of a change in tax rates on dividends, we need to control for the other factors that may affect them. Following the approach of Fama and French (2001), we sort our sample firms along three characteristics (profitability, investment opportunity, and size) and estimate the average dividend forecast errors for four groups within each characteristic. We find size to be the most important factor related to dividends when taxes are not taken into account. In addition, the empirical evidence suggests that profitability is the only factor related to dividends once tax rates are included. In other words, the more profitable firms are, the more likely they are to pay higher dividends as applicable tax rates decline.
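    The following is a minimal sketch of this kind of sorting, not the paper's code: it assumes a pandas DataFrame with hypothetical column names (size, profitability, investment_opportunity, dividend_forecast_error) and simply groups firms into four quartiles on each characteristic, averaging the forecast error within each group.

    ```python
    import numpy as np
    import pandas as pd

    # Toy data standing in for the firm panel; values are random placeholders.
    rng = np.random.default_rng(0)
    firms = pd.DataFrame({
        "size": rng.lognormal(mean=6.0, sigma=1.0, size=1000),        # e.g. market cap
        "profitability": rng.normal(0.08, 0.05, size=1000),           # e.g. earnings / assets
        "investment_opportunity": rng.normal(1.5, 0.4, size=1000),    # e.g. market-to-book
        "dividend_forecast_error": rng.normal(0.0, 0.02, size=1000),  # actual minus predicted payout
    })

    # For each characteristic, sort firms into four groups (quartiles) and
    # average the dividend forecast error within each group.
    for col in ["profitability", "investment_opportunity", "size"]:
        groups = pd.qcut(firms[col], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
        print(f"\nAverage dividend forecast error by {col} quartile:")
        print(firms.groupby(groups, observed=True)["dividend_forecast_error"].mean())
    ```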

    Building Better Li Metal Anodes in Liquid Electrolyte: Challenges and Progress

    Li metal has been widely recognized as a promising anode candidate for high-energy-density batteries. However, the inherent limitations of Li metal, namely its low Coulombic efficiency and dendrite formation, keep it far from practical application. In short, the low Coulombic efficiency shortens the cycle life of Li metal batteries, while the dendrite issue raises safety concerns. Thanks to the great efforts of the research community, a wealth of fundamental understanding, as well as approaches for mitigating the safety issues of Li metal anodes, has been extensively explored. In this Review, Li electrochemical deposition behaviors are systematically summarized, and recent progress in electrode design and electrolyte system optimization is reviewed. Finally, we discuss the future directions, opportunities, and challenges of Li metal anodes.
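    As a back-of-the-envelope illustration of why Coulombic efficiency (CE) matters so much for cycle life (a generic relation, not a result from this Review): if a fixed fraction (1 - CE) of the cycled Li inventory is lost each cycle and the cell carries no excess Li reservoir, the inventory remaining after n cycles is roughly CE**n.

    ```python
    import math

    def cycles_until(ce: float, retention: float = 0.8) -> int:
        """Cycles until the Li inventory falls to `retention` of its initial value,
        assuming a constant Coulombic efficiency `ce` and no excess Li reservoir."""
        return math.floor(math.log(retention) / math.log(ce))

    for ce in (0.99, 0.995, 0.999, 0.9999):
        print(f"CE = {ce:.4f} -> about {cycles_until(ce)} cycles to 80% Li inventory")
    ```

    Under these simplifying assumptions, even a CE of 99.9% exhausts a fifth of the Li inventory within a few hundred cycles, which is the sense in which low Coulombic efficiency shortens cycle life.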

    MELA: Multilingual Evaluation of Linguistic Acceptability

    Recent benchmarks for Large Language Models (LLMs) have mostly focused on application-driven tasks such as complex reasoning and code generation, and this has led to a scarcity of purely linguistic evaluation of LLMs. Against this background, we introduce Multilingual Evaluation of Linguistic Acceptability -- MELA, the first multilingual benchmark on linguistic acceptability, with 48K samples covering 10 languages from a diverse set of language families. We establish baselines of commonly used LLMs along with supervised models, and conduct cross-lingual transfer and multi-task learning experiments with XLM-R. In pursuit of multilingual interpretability, we analyze the weights of fine-tuned XLM-R to explore the possibility of identifying transfer difficulty between languages. Our results show that ChatGPT benefits substantially from in-context examples but still lags behind fine-tuned XLM-R, while the performance of GPT-4 is on par with fine-tuned XLM-R even in the zero-shot setting. Cross-lingual and multi-task learning experiments show that, unlike for semantic tasks, in-language training data is crucial for acceptability judgements. Layerwise probing results indicate that the upper layers of XLM-R become a task-specific but language-agnostic region for multilingual acceptability judgment. We also introduce the concept of conflicting weight, which could be a potential indicator of the difficulty of cross-lingual transfer between languages. Our data will be available at https://github.com/sjtu-compling/MELA. Comment: Work in progress.
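    A minimal sketch of the supervised baseline named above, assuming the Hugging Face transformers library; the checkpoint, example sentences, and label convention are illustrative only and not MELA's released code. It sets up an XLM-R sequence classifier for binary acceptability judgements.

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "xlm-roberta-base"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
    model.eval()

    sentences = [
        "The cat sat on the mat.",   # presumably acceptable
        "Cat the mat on sat the.",   # presumably unacceptable
    ]

    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, return_tensors="pt")
        logits = model(**batch).logits          # shape: (batch, 2)
        probs = torch.softmax(logits, dim=-1)   # assumed order: [unacceptable, acceptable]

    # The classification head is untrained here, so scores are uninformative
    # until the model is fine-tuned on acceptability data such as MELA.
    for sent, p in zip(sentences, probs):
        print(f"P(acceptable) = {p[1]:.2f}  {sent}")
    ```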

    KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo

    Supervised multi-view stereo (MVS) methods have achieved remarkable progress in terms of reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth. In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed KD-MVS, which mainly consists of self-supervised teacher training and distillation-based student training. Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency. We then distill the knowledge of the teacher model to the student model through probabilistic knowledge transfer. With the supervision of validated knowledge, the student model is able to outperform its teacher by a large margin. Extensive experiments performed on multiple datasets show that our method can even outperform supervised methods.
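    Below is a hedged sketch of what a distillation objective of this kind can look like in PyTorch; the tensor shapes, masking scheme, and loss form are assumptions for illustration, not the authors' implementation. The student's per-pixel probability volume over depth hypotheses is pulled toward the teacher's via a KL divergence, restricted to pixels whose teacher depth passed validation.

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_probs, valid_mask):
        """KL(teacher || student) over a depth-hypothesis probability volume.

        student_logits: (B, D, H, W) raw student scores over D depth hypotheses
        teacher_probs:  (B, D, H, W) teacher probabilities (already normalized)
        valid_mask:     (B, H, W) 1 where the teacher's depth passed validation
        """
        log_student = F.log_softmax(student_logits, dim=1)
        kl = F.kl_div(log_student, teacher_probs, reduction="none").sum(dim=1)  # (B, H, W)
        return (kl * valid_mask).sum() / valid_mask.sum().clamp(min=1)

    # Toy usage with random tensors just to show the shapes involved.
    B, D, H, W = 2, 32, 16, 16
    loss = distillation_loss(torch.randn(B, D, H, W),
                             torch.softmax(torch.randn(B, D, H, W), dim=1),
                             torch.ones(B, H, W))
    print(loss.item())
    ```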