Finite size corrections to scaling in high Reynolds number turbulence
We study analytically and numerically the corrections to scaling in
turbulence which arise due to the finite ratio of the outer scale of
turbulence to the viscous scale, i.e., they are due to finite-size
effects such as anisotropic forcing or boundary conditions at large scales. We
find that the deviations $\delta\zeta_m$ from the classical Kolmogorov scaling of the velocity moments $\langle |u(\mathbf{k})|^m \rangle \propto k^{-\zeta_m}$
decrease like . Our numerics employ a
reduced wave vector set approximation for which the small scale structures are
not fully resolved. Within this approximation we do not find independent
anomalous scaling within the inertial subrange. If anomalous scaling in the
inertial subrange can be verified in the large limit, this supports the
suggestion that small scale structures should be responsible, originating from
viscosity either in the bulk (vortex tubes or sheets) or from the boundary
layers (plumes or swirls).
Examples of the Zeroth Theorem of the History of Physics
The zeroth theorem of the history of science (enunciated by E. P. Fischer)
and widely known in the mathematics community as Arnol'd's Principle (decreed
by M. V. Berry), states that a discovery (rule, regularity, insight) named
after someone (often) did not originate with that person. I present five
examples from physics: the Lorentz condition defining the Lorentz gauge of the
electromagnetic potentials; the Dirac delta function δ(x); the Schumann
resonances of the earth-ionosphere cavity; the Weizsacker-Williams method of
virtual quanta; the BMT equation of spin dynamics. I give illustrated thumbnail
sketches of both the true and reputed discoverers and quote from their
"discovery" publications.Comment: 36 pages, 8 figures. Small revisions, added material and references -
Arnol'd's law, Emil Wiechert. Submitted to Am. J. Phys.
Mass splittings of nuclear isotopes in chiral soliton approach
The differences of the masses of nuclear isotopes with atomic numbers between
~10 and ~30 can be described within the chiral soliton approach in
satisfactory agreement with data. Rescaling of the model is necessary for this
purpose - decrease of the Skyrme constant by about 30%, providing the "nuclear
variant" of the model. The asymmetric term in Weizsaecker-Bethe- Bacher mass
formula for nuclei can be obtained as the isospin dependent quantum correction
to the nucleus energy. Some predictions for the binding energies of neutron-rich
nuclides are made in this way, e.g., from Be-16 and B-19 to Ne-31 and
Na-32. Neutron-rich nuclides with high values of isospin are unstable relative
to strong interactions. The SK4 (Skyrme) variant of the model, as well as the SK6
variant (a 6th-order term in chiral derivatives in the Lagrangian as soliton
stabilizer), are considered, and the rational map approximation is used to
describe multiskyrmions.Comment: 16 pages, 10 tables, 2 figures. Figures are added and a few misprints
are removed. Submitted to Phys. Atom. Nucl. (Yad. Fiz.)
Necessary Optimality Conditions for Higher-Order Infinite Horizon Variational Problems on Time Scales
We obtain Euler-Lagrange and transversality optimality conditions for
higher-order infinite horizon variational problems on a time scale. The new
necessary optimality conditions improve the classical results both in the
continuous and discrete settings: our results seem new and interesting even in
the particular cases when the time scale is the set of real numbers or the set
of integers.Comment: This is a preprint of a paper whose final and definite form will
appear in Journal of Optimization Theory and Applications (JOTA). Paper
submitted 17-Nov-2011; revised 24-March-2012 and 10-April-2012; accepted for
publication 15-April-2012.
DC and AC Josephson effects with superfluid Fermi atoms across a Feshbach resonance
We show that both DC and AC Josephson effects with superfluid Fermi atoms in
the BCS-BEC crossover can be described at zero temperature by a nonlinear
Schrodinger equation (NLSE). By comparing our NLSE with mean-field extended BCS
calculations, we find that the NLSE is reliable on the BEC side of the
crossover up to the unitarity limit. The NLSE can be used for weakly linked
atomic superfluids also on the BCS side of the crossover by taking the
tunneling energy as a phenomenological parameter.Comment: 8 pages, 4 figures, presented at the Scientific Seminar on Physics of
Cold Trapped Atoms, 17th International Laser Physics Workshop (Trondheim,
June 30 - July 4, 2008).
Photon and Z induced heavy charged lepton pair production at a hadron supercollider
We investigate the pair production of charged heavy leptons via
photon-induced processes at the proposed CERN Large Hadron Collider (LHC).
Using effective photon and Z approximations, rates are given for
production due to fusion and fusion for the cases of
inelastic, elastic and semi-elastic collisions. These are compared with
the corresponding rates for production via the gluon fusion and Drell-Yan
mechanisms. Various and differential luminosities
for collisions are also presented.Comment: 22 pages, RevTex 3.0, 6 uuencoded and compressed postscript figures
included. Reference to one paper changed from the original preprint number to
the published version. Everything else unchanged.
On the exchange of intersection and supremum of sigma-fields in filtering theory
We construct a stationary Markov process with trivial tail sigma-field and a
nondegenerate observation process such that the corresponding nonlinear
filtering process is not uniquely ergodic. This settles in the negative a
conjecture of the author in the ergodic theory of nonlinear filters arising
from an erroneous proof in the classic paper of H. Kunita (1971), wherein an
exchange of intersection and supremum of sigma-fields is taken for granted.Comment: 20 pages
Hadronic Cross-sections in two photon Processes at a Future Linear Collider
In this note we address the issue of measurability of the hadronic
cross-sections at a future photon collider as well as for the two-photon
processes at a future high energy linear collider. We extend, to
higher energy, our previous estimates of the accuracy with which the γγ
cross-section needs to be measured, in order to distinguish between different
theoretical models of energy dependence of the total cross-sections. We show
that the necessary precision to discriminate among these models is indeed
possible at future linear colliders in the Photon Collider option. Further we
note that even in the option a measurement of the hadron production
cross-section via γγ processes, with an accuracy necessary to allow
discrimination between different theoretical models, should be possible. We
also comment briefly on the implications of these predictions for hadronic
backgrounds at the future TeV energy collider CLIC.Comment: 20 pages, 5 figures, LaTeX. Added an acknowledgement.
Turbulent Control of the Star Formation Efficiency
Supersonic turbulence plays a dual role in molecular clouds: On one hand, it
contributes to the global support of the clouds, while on the other it promotes
the formation of small-scale density fluctuations, identifiable with clumps and
cores. Within these, the local Jeans length is reduced, and collapse
ensues if it becomes smaller than the clump size and the magnetic support
is insufficient (i.e., the core is "magnetically supercritical"); otherwise,
the clumps do not collapse and are expected to re-expand and disperse on a few
free-fall times. This case may correspond to a fraction of the observed
starless cores. The star formation efficiency (SFE, the fraction of the cloud's
mass that ends up in collapsed objects) is smaller than unity because the mass
contained in collapsing clumps is smaller than the total cloud mass. However,
in non-magnetic numerical simulations with realistic Mach numbers and
turbulence driving scales, the SFE is still larger than observational
estimates. The presence of a magnetic field, even if magnetically
supercritical, appears to further reduce the SFE, but by reducing the
probability of core formation rather than by delaying the collapse of
individual cores, as was formerly thought. Precise quantification of these
effects as a function of global cloud parameters is still needed.Comment: Invited review for the conference "IMF@50: the Initial Mass Function
50 Years Later", to be published by Kluwer Academic Publishers, eds. E.
Corbelli, F. Palla, and H. Zinnecker.
Stellar structure and compact objects before 1940: Towards relativistic astrophysics
Since the mid-1920s, different strands of research used stars as "physics
laboratories" for investigating the nature of matter under extreme densities
and pressures, impossible to realize on Earth. To trace this process, this paper
follows the evolution of the concept of a dense core in stars, which was
important both for an understanding of stellar evolution and as a testing
ground for the fast-evolving field of nuclear physics. In spite of the divide
between physicists and astrophysicists, some key actors working in the
cross-fertilized soil of overlapping but different scientific cultures
formulated models and tentative theories that gradually evolved into more
realistic and structured astrophysical objects. These investigations culminated
in the first contact with general relativity in 1939, when J. Robert
Oppenheimer and his students George Volkoff and Hartland Snyder systematically
applied the theory to the dense core of a collapsing neutron star. This
pioneering application of Einstein's theory to an astrophysical compact object
can be regarded as a milestone in the path eventually leading to the emergence
of relativistic astrophysics in the early 1960s.Comment: 83 pages, 4 figures, submitted to the European Physical Journal