52,809 research outputs found

    Superplastic Bulging of Fine-Grained Zirconia

    Full text link
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/65850/1/j.1151-2916.1990.tb06585.x.pd

    Giant isotope effect and spin state transition induced by oxygen isotope exchange in (Pr_{1-x}Sm_x)_{0.7}Ca_{0.3}CoO_3

    Full text link
    We systematically investigate the oxygen isotope effect in (Pr_{1-x}Sm_x)_{0.7}Ca_{0.3}CoO_3, which shows a crossover with x from a ferromagnetic metal to an insulator with a spin-state transition. A striking feature is that the oxygen isotope effect on the ferromagnetic transition is negligible in the metallic phase, while replacing ^{16}O with ^{18}O leads to a giant up-shift of the spin-state transition temperature (T_s) in the insulating phase; in particular, T_s shifts from 36 to 54 K, with isotope exponent \alpha_S = -4.7, for the sample with x = 0.175. A metal-insulator transition is induced by oxygen isotope exchange in the sample with x = 0.172, which lies close to the insulating phase. The contrasting behaviors observed in the two phases can be well explained by the occurrence of static Jahn-Teller distortions in the insulating phase and their absence in the metallic phase. Comment: 4 pages, 5 figures
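    For orientation, the isotope exponent quoted above is conventionally defined from the dependence of the transition temperature on the isotope mass; the following is a minimal statement of that standard definition (our gloss, not taken from the paper's text, and the effective oxygen mass M_eff after partial exchange is an assumption):

    \[
      \alpha_S \;=\; -\,\frac{\mathrm{d}\ln T_s}{\mathrm{d}\ln M_O}
      \;\approx\; -\,\frac{\ln\!\bigl(T_s^{(18)}/T_s^{(16)}\bigr)}{\ln\!\bigl(M_{\mathrm{eff}}/M_{16}\bigr)}
    \]

    With T_s rising from 36 K to 54 K, the magnitude of \alpha_S depends on M_eff, i.e. on the ^{16}O -> ^{18}O exchange fraction actually achieved; complete substitution (M_eff = 18) would give \alpha_S ≈ -3.4, so the quoted -4.7 presumably reflects the measured exchange level.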

    Accelerating and Improving AlphaZero Using Population Based Training

    Full text link
    AlphaZero has been very successful in many games. Unfortunately, it still consumes a huge amount of computing resources, the majority of which is spent in self-play. Hyperparameter tuning exacerbates the training cost, since each hyperparameter configuration requires its own training run, during which it generates its own self-play records; as a result, multiple runs are usually needed to cover different hyperparameter configurations. This paper proposes using population based training (PBT) to tune hyperparameters dynamically and to improve playing strength during training. Another significant advantage is that this method requires only a single run, at a small additional time cost: the time for generating self-play records remains unchanged, although the time for optimization increases relative to the standard AlphaZero training algorithm. In our experiments on 9x9 Go, the PBT method achieves a higher win rate than the baselines, each trained individually with its own hyperparameter configuration. For 19x19 Go, PBT also yields improvements in playing strength. Specifically, the PBT agent obtains up to a 74% win rate against ELF OpenGo, an open-source state-of-the-art AlphaZero program using a neural network of comparable capacity, whereas a saturated non-PBT agent achieves a 47% win rate against ELF OpenGo under the same circumstances. A rough sketch of the PBT loop follows. Comment: accepted by AAAI 2020 as an oral presentation; in this version, supplementary materials are added
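    As a rough illustration of the PBT loop the abstract describes, here is a minimal, self-contained Python sketch: a population of members, each with its own hyperparameters, is periodically ranked; poor performers copy a top performer's hyperparameters (exploit) and perturb them (explore). The toy train/evaluate functions, the population size, and the hyperparameters (lr, c_puct) are illustrative assumptions, not the authors' actual AlphaZero setup.

    import math
    import random

    POP_SIZE = 8        # number of concurrent training runs (assumption)
    GENERATIONS = 20    # number of exploit/explore intervals (assumption)

    def make_member():
        """A member bundles hyperparameters with its current fitness."""
        return {
            "lr": 10 ** random.uniform(-4, -1),   # learning rate
            "c_puct": random.uniform(0.5, 3.0),   # MCTS exploration constant
            "score": 0.0,
        }

    def train_step(member):
        # Placeholder for one interval of self-play plus network
        # optimization. In a real AlphaZero pipeline this is where
        # self-play records are generated and consumed; PBT leaves
        # this part of the algorithm unchanged.
        pass

    def evaluate(member):
        # Toy stand-in for a win-rate evaluation; it peaks at
        # lr = 1e-2 and c_puct = 1.5.
        return (-(math.log10(member["lr"]) + 2) ** 2
                - (member["c_puct"] - 1.5) ** 2)

    def exploit_and_explore(member, population):
        """Bottom-quartile members copy a top-quartile member's
        hyperparameters (exploit) and then perturb them (explore)."""
        ranked = sorted(population, key=lambda m: m["score"], reverse=True)
        quart = max(1, len(ranked) // 4)
        if member in ranked[-quart:]:
            source = random.choice(ranked[:quart])
            member["lr"] = source["lr"] * random.choice([0.8, 1.2])
            member["c_puct"] = source["c_puct"] * random.choice([0.8, 1.2])

    population = [make_member() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        for m in population:
            train_step(m)
            m["score"] = evaluate(m)
        for m in population:
            exploit_and_explore(m, population)

    best = max(population, key=lambda m: m["score"])
    print(f"best lr={best['lr']:.4g}, c_puct={best['c_puct']:.2f}")

    Because every member trains in the same run and the ranking step only redirects hyperparameters, the self-play cost stays that of a single run, which is the efficiency argument the abstract makes.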