
    Effects of long-term exercise during childhood and of detraining on lipid accumulation in metabolism-related organs

    Hiroshima University (広島大学), Doctor of Philosophy in Health Sciences (doctoral thesis)

    Regulating an Extracurricular Programme on Swimming Safety and Drowning Prevention for Students in Grades 6 and 7 in Bac Lieu City, Bac Lieu Province, Vietnam

    The paper employs traditional sports research methodologies: document review, interviews, pedagogical observation, and pedagogical testing. Based on theoretical, practical, pedagogical, and scientific principles, the authors construct an extracurricular programme in swimming and drowning-prevention skills for students in grades 6 and 7 in Bac Lieu City, Bac Lieu Province. The programme is designed to help students identify danger, understand the causes of drowning and how to prevent it, and thereby develop self-defense and drowning-prevention skills. In addition to these skills, the study team's programme teaches lower secondary pupils how to use buoys, ropes, towels, and poles, and how to swim 25 meters of breaststroke.

    Exotic States Emerged By Spin-Orbit Coupling, Lattice Modulation and Magnetic Field in Lieb Nano-ribbons

    Lieb nano-ribbons with spin-orbit coupling, lattice modulation, and a magnetic field are studied exactly. They are constructed from the Lieb lattice with two open boundaries along one direction. The interplay between the spin-orbit coupling, the lattice modulation, and the magnetic field gives rise to various exotic ground states. Under certain conditions on the spin-orbit coupling, the lattice modulation, the magnetic field, and the filling, the ground state becomes half-metallic or half-topological. In the half-metallic ground state, one spin component is metallic while the other spin component is insulating. In the half-topological ground state, one spin component is topological while the other spin component is topologically trivial. The model exhibits a very rich phase diagram.
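
    As a rough orientation rather than the paper's exact model, a generic tight-binding Hamiltonian of this kind combines nearest-neighbour hopping, an intrinsic spin-orbit term on next-nearest-neighbour bonds, a site-dependent lattice modulation, and a Peierls phase from the magnetic field; every symbol below (t, lambda, nu_ij, Delta_i, phi_ij) is a generic placeholder, not a quantity taken from the abstract:

        H = -t \sum_{\langle i,j \rangle, \sigma} e^{i\phi_{ij}} c^{\dagger}_{i\sigma} c_{j\sigma}
            + i\lambda \sum_{\langle\langle i,j \rangle\rangle, \sigma} \sigma\, \nu_{ij}\, e^{i\phi_{ij}} c^{\dagger}_{i\sigma} c_{j\sigma}
            + \sum_{i,\sigma} \Delta_i\, c^{\dagger}_{i\sigma} c_{i\sigma}

    Here t is the nearest-neighbour hopping, lambda the intrinsic spin-orbit coupling with bond chirality nu_ij = +-1, Delta_i a periodic on-site modulation, phi_ij the Peierls phase encoding the magnetic flux, and sigma = +-1 the spin label. Because the spin-orbit term enters with opposite sign for the two spin species, the two spin sectors can end up in different phases, which is one way "half-metallic" or "half-topological" states of the kind described above can arise.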

    Does BLEU Score Work for Code Migration?

    Statistical machine translation (SMT) is a fast-growing sub-field of computational linguistics. To date, the most popular automatic metric for measuring SMT quality is the BiLingual Evaluation Understudy (BLEU) score. Recently, SMT together with the BLEU metric has been applied to a software engineering task named code migration. (In)validating the use of the BLEU score could advance the research and development of SMT-based code migration tools. Unfortunately, there is no study that approves or disapproves the use of the BLEU score for source code. In this paper, we conducted an empirical study on the BLEU score to (in)validate its suitability for the code migration task, focusing on whether it reflects the semantics of source code. We use human judgment as the ground truth for the semantic correctness of the migrated code. Our empirical study demonstrates that BLEU does not reflect translation quality, owing to its weak correlation with the semantic correctness of translated code. We provide counter-examples showing that BLEU is ineffective at comparing translation quality between SMT-based models. Because of BLEU's ineffectiveness for the code migration task, we propose an alternative metric, RUBY, which considers lexical, syntactical, and semantic representations of source code. We verified that RUBY achieves a higher correlation coefficient with the semantic correctness of migrated code, 0.775 compared with 0.583 for the BLEU score. We also confirmed the effectiveness of RUBY in reflecting changes in the translation quality of SMT-based translation models. With these advantages, RUBY can be used to evaluate SMT-based code migration models.
    Comment: 12 pages, 5 figures, ICPC '19 Proceedings of the 27th International Conference on Program Comprehension
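
    As a small, self-contained illustration of the kind of mismatch the abstract describes (not code or data from the paper), the Python sketch below scores two hypothetical Java translations against the same reference with sentence-level BLEU: one is semantically wrong but lexically close, the other semantically equivalent but worded differently. The token lists are invented, and only the nltk BLEU implementation is assumed.

        # Illustration only: BLEU rewards n-gram overlap, not semantic correctness.
        from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

        # Hypothetical tokenized Java reference (invented for this example).
        reference = "int n = list . size ( ) ;".split()

        # Semantically wrong candidate: calls length() on a List, but differs by one token.
        wrong = "int n = list . length ( ) ;".split()

        # Semantically equivalent candidate, written with an extra temporary variable.
        correct = "int size = list . size ( ) ; int n = size ;".split()

        smooth = SmoothingFunction().method1
        print("BLEU, semantically wrong:  ", sentence_bleu([reference], wrong, smoothing_function=smooth))
        print("BLEU, semantically correct:", sentence_bleu([reference], correct, smoothing_function=smooth))
        # The wrong candidate typically scores higher, echoing the paper's point that
        # n-gram overlap alone does not capture source-code semantics.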

    Enhance Incomplete Utterance Restoration by Joint Learning Token Extraction and Text Generation

    This paper introduces a model for incomplete utterance restoration (IUR). Unlike prior studies that work only on extraction or abstraction datasets, we design a simple but effective model that works in both IUR scenarios. Our design simulates the nature of IUR, in which omitted tokens from the context contribute to restoration. From this, we construct a Picker that identifies the omitted tokens. To support the Picker, we design two label-creation methods (soft and hard labels), which work even when the omitted tokens are not annotated. The restoration is then done by a Generator, with the help of the Picker, through joint learning. Promising results on four benchmark datasets covering extraction and abstraction scenarios show that our model outperforms pretrained T5 and non-generative language-model methods in both rich and limited training-data settings. The code will also be made available.
    Comment: This is an early version of the paper accepted at NAACL 2022. It includes 10 pages and 2 figures.
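
    As a rough sketch of what hard-label creation for omitted tokens could look like (an assumption based on the abstract, not the authors' released code), one could mark every context token that appears in the restored utterance but not in the incomplete one; the function name, tokenization, and example below are invented for illustration.

        # Hypothetical hard-label creation for a Picker-style model (illustration only).
        from typing import List

        def hard_labels(context: List[str], incomplete: List[str], restored: List[str]) -> List[int]:
            """Label a context token 1 if it was plausibly omitted, else 0.

            A token counts as 'omitted' when it occurs in the restored utterance
            but not in the incomplete utterance, so restoration must have pulled
            it in from the context.
            """
            omitted = set(restored) - set(incomplete)
            return [1 if tok in omitted else 0 for tok in context]

        # Tiny invented example: restoration recovers "Tokyo" from the context.
        context = ["I", "visited", "Tokyo", "last", "year"]
        incomplete = ["how", "was", "it"]
        restored = ["how", "was", "Tokyo"]
        print(hard_labels(context, incomplete, restored))  # -> [0, 0, 1, 0, 0]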