Giant current-driven domain wall mobility in (Ga,Mn)As
We study theoretically hole current-driven domain wall dynamics in (Ga,Mn)As.
We show that the spin-orbit coupling causes significant hole reflection at the
domain wall, even in the adiabatic limit when the wall is much thicker than the
Fermi wavelength, resulting in spin accumulation and mistracking between
current-carrying spins and the domain wall magnetization. This increases the
out-of-plane non-adiabatic spin transfer torque and consequently the
current-driven domain wall mobility by three to four orders of magnitude.
Trends and magnitude of the calculated domain wall current mobilities agree
with experimental findings. Comment: Final version accepted by Physical Review Letters.
Selection for Seed Size and Coleoptile Length in Timothy (Phleum Pratense L.)
The intention of this study was to show the effect of selection for seed weight and coleoptile length on morphology and agronomically important characters in timothy (Phleum pratense L.). Two cycles of selection increased the seed weight as well as the length of coleoptile and root. The emergence from deep sowing in sand and in the field was insignificantly increased, whereas the percentage stand and the dry matter yield were decreased, albeit insignificantly. Inbreeding and linkage effects were considered possible causes for this.
MAP- and MLE-Based Teaching
Imagine a learner L who tries to infer a hidden concept from a collection of
observations. Building on the work [4] of Ferri et al., we assume the learner
to be parameterized by priors P(c) and by c-conditional likelihoods P(z|c)
where c ranges over all concepts in a given class C and z ranges over all
observations in an observation set Z. L is called a MAP-learner (resp. an
MLE-learner) if it thinks of a collection S of observations as a random sample
and returns the concept with the maximum a posteriori probability (resp. the
concept which maximizes the c-conditional likelihood of S). Depending on
whether L assumes that S is obtained from ordered or unordered sampling, and
from sampling with or without replacement, we can distinguish four different
sampling modes. Given a target concept c in C, a teacher for a MAP-learner L
aims at finding a smallest collection of observations that causes L to return
c. This approach leads in a natural manner to various notions of a MAP- or
MLE-teaching dimension of a concept class C. Our main results are: We show that
this teaching model has some desirable monotonicity properties. We clarify how
the four sampling modes are related to each other. As for the (important!)
special case, where concepts are subsets of a domain and observations are
0,1-labeled examples, we obtain some additional results. First of all, we
characterize the MAP- and MLE-teaching dimension associated with an optimally
parameterized MAP-learner graph-theoretically. From this central result, some
other ones are easy to derive. It is shown, for instance, that the MLE-teaching
dimension is either equal to the MAP-teaching dimension or exceeds the latter
by 1. It is shown furthermore that these dimensions can be bounded from above
by the so-called antichain number, the VC-dimension and related combinatorial
parameters. Moreover, they can be computed in polynomial time.
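The MAP-learner and its teacher can be illustrated with a small sketch. This is not the paper's construction, only a toy instance under the i.i.d. sampling mode (unordered, with replacement): the priors, likelihoods, concept names, and the brute-force `teach` search are all made up for illustration.

```python
from itertools import combinations_with_replacement

# Hypothetical toy instance: three concepts with priors P(c) and
# per-observation likelihoods P(z|c) over the observation set {"a", "b"}.
priors = {"c1": 0.5, "c2": 0.3, "c3": 0.2}
likelihoods = {
    "c1": {"a": 0.9, "b": 0.1},
    "c2": {"a": 0.5, "b": 0.5},
    "c3": {"a": 0.2, "b": 0.8},
}

def map_learner(S):
    """Return the concept maximizing P(c) * prod_{z in S} P(z|c).

    Observations in S are treated as an i.i.d. sample, so the
    c-conditional likelihood of S factorizes into a product.
    """
    def score(c):
        p = priors[c]
        for z in S:
            p *= likelihoods[c][z]
        return p
    return max(priors, key=score)

def teach(target, max_size=5):
    """Smallest multiset of observations making the MAP-learner return target.

    Brute-force search over multisets of increasing size; returns None
    if no teaching set of size at most max_size exists.
    """
    observations = sorted(likelihoods[target])
    for k in range(max_size + 1):
        for S in combinations_with_replacement(observations, k):
            if map_learner(S) == target:
                return list(S)
    return None
```

In this toy instance the empty sample already teaches `c1` (it has the largest prior), while the other concepts need observations that outweigh the prior; the size of the set returned by `teach` plays the role of a per-concept teaching complexity, whose maximum over the class would correspond to a MAP-teaching dimension.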