Superplastic Bulging of Fine-Grained Zirconia
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/65850/1/j.1151-2916.1990.tb06585.x.pd
Maximum a Posteriori Adaptation of Network Parameters in Deep Models
We present a Bayesian approach to adapting the parameters of a well-trained context-dependent deep-neural-network hidden Markov model (CD-DNN-HMM) to improve automatic speech recognition performance. Given the abundance of DNN parameters and only a limited amount of adaptation data, the effectiveness of the adapted DNN model can often be compromised. We formulate maximum a posteriori (MAP) adaptation of the parameters of a specially designed CD-DNN-HMM with an augmented linear hidden network connected to the output tied states, or senones, and compare it to the previously proposed feature-space MAP linear regression. Experimental evidence on the 20,000-word open-vocabulary Wall Street Journal task demonstrates the feasibility of the proposed framework. In supervised adaptation, the proposed MAP adaptation approach provides more than 10% relative error reduction and consistently outperforms conventional transformation-based methods. Furthermore, we present an initial attempt to generate hierarchical priors that improve adaptation efficiency and effectiveness with limited adaptation data by exploiting similarities among senones.
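
Below is a minimal sketch of the general MAP-adaptation idea, assuming a Gaussian prior centered on the well-trained (speaker-independent) weights. The single linear layer, its shapes, and the prior weight tau are illustrative assumptions, not the paper's exact augmented-linear-hidden-network formulation.

    # Hedged sketch: MAP adaptation as task loss plus a Gaussian prior
    # penalty pulling the adapted weights toward the well-trained ones.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in for the well-trained layer feeding the senone outputs.
    prior_layer = nn.Linear(64, 128)
    adapted_layer = nn.Linear(64, 128)
    adapted_layer.load_state_dict(prior_layer.state_dict())  # start at the prior mean

    tau = 10.0  # prior strength: larger tau keeps the model closer to the prior
    opt = torch.optim.SGD(adapted_layer.parameters(), lr=1e-2)
    ce = nn.CrossEntropyLoss()

    # Tiny synthetic adaptation set (features and senone labels).
    x = torch.randn(32, 64)
    y = torch.randint(0, 128, (32,))

    for _ in range(100):
        opt.zero_grad()
        loss = ce(adapted_layer(x), y)
        # Gaussian prior adds 0.5 * tau * ||theta - theta_prior||^2.
        for p, p0 in zip(adapted_layer.parameters(), prior_layer.parameters()):
            loss = loss + 0.5 * tau * ((p - p0.detach()) ** 2).sum()
        loss.backward()
        opt.step()

With little adaptation data, a large tau effectively keeps the model at the speaker-independent weights; with more data, the likelihood term dominates and the adapted weights move further from the prior.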
Accelerating and Improving AlphaZero Using Population Based Training
AlphaZero has been very successful in many games. Unfortunately, it still consumes a huge amount of computing resources, the majority of which is spent on self-play. Hyperparameter tuning exacerbates the training cost, since each hyperparameter configuration requires its own training run, during which it generates its own self-play records; as a result, multiple runs are usually needed to cover different hyperparameter configurations. This paper proposes using population based training (PBT) to tune hyperparameters dynamically and to improve playing strength during training. Another significant advantage is that the method requires only a single run, at a small additional time cost: the time for generating self-play records remains unchanged, although the time for optimization increases relative to the standard AlphaZero training algorithm. In our experiments on 9x9 Go, the PBT method achieves a higher win rate than the baselines, each trained individually with its own hyperparameter configuration. On 19x19 Go, PBT likewise improves playing strength: the PBT agent obtains up to a 74% win rate against ELF OpenGo, an open-source state-of-the-art AlphaZero program using a neural network of comparable capacity, whereas a saturated non-PBT agent achieves a 47% win rate under the same circumstances.

Comment: accepted by AAAI 2020 as an oral presentation. In this version, supplementary materials are added.
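
Below is a minimal sketch of PBT's periodic exploit/explore step, with a toy quadratic objective standing in for AlphaZero's optimization and self-play. The population size, the member fields lr and w, and the perturbation factors are illustrative assumptions, not the paper's setup.

    # Hedged sketch: population based training on a toy objective f(w) = w^2.
    import random

    random.seed(0)

    def train_step(m):
        # One gradient step (gradient of w^2 is 2w); stands in for a round
        # of AlphaZero optimization plus self-play generation.
        m["w"] -= m["lr"] * 2.0 * m["w"]

    def fitness(m):
        # Stands in for measured playing strength (e.g., evaluation win rate).
        return -abs(m["w"])

    population = [{"lr": random.uniform(0.05, 1.0), "w": random.gauss(0.0, 1.0)}
                  for _ in range(8)]

    for step in range(30):
        for m in population:
            train_step(m)
        if step % 5 == 4:  # periodic exploit/explore
            ranked = sorted(population, key=fitness, reverse=True)
            for loser in ranked[-2:]:
                winner = random.choice(ranked[:2])
                loser["w"] = winner["w"]  # exploit: copy a top member's parameters
                loser["lr"] = winner["lr"] * random.choice([0.8, 1.25])  # explore

    best = max(population, key=fitness)
    print(f"best lr {best['lr']:.3f}, |w| {abs(best['w']):.2e}")

Because hyperparameters are perturbed while weights are inherited, the population amortizes tuning into a single run, consistent with the paper's point that only the optimization time grows while the self-play time is unchanged.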
Giant isotope effect and spin state transition induced by oxygen isotope exchange in (…)
We systematically investigate the oxygen isotope effect in (…), which shows a crossover with x from a ferromagnetic metal to an insulator with a spin-state transition. A striking feature is that the oxygen isotope effect on the ferromagnetic transition is negligible in the metallic phase, whereas replacing $^{16}$O with $^{18}$O leads to a giant up-shift of the spin-state transition temperature in the insulating phase: for the sample with x=0.175 it shifts from 36 to 54 K. A metal-insulator transition is induced by oxygen isotope exchange in the x=0.172 sample, which lies close to the insulating phase. The contrasting behaviors observed in the two phases can be well explained by the occurrence of static Jahn-Teller distortions in the insulating phase and their absence in the metallic phase.

Comment: 4 pages, 5 figures