Dynamical effects of a one-dimensional multibarrier potential of finite range
We discuss the properties of a large number N of one-dimensional (bounded)
locally periodic potential barriers in a finite interval. We show that the
transmission coefficient, the scattering cross section \sigma, and the
resonances of \sigma depend sensitively upon the ratio of the total spacing
to the total barrier width. We also show that a time dependent wave packet
passing through the system of potential barriers rapidly spreads and deforms, a
criterion suggested by Zaslavsky for chaotic behaviour. Computing the spectrum
by imposing (large) periodic boundary conditions, we find a Wigner-type
distribution. We investigate also the S-matrix poles; many resonances occur for
certain values of the relative spacing between the barriers in the potential.
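The sensitivity to the spacing-to-width ratio can be explored numerically. Below is a minimal sketch (not the paper's code) of the standard transfer-matrix method for N identical rectangular barriers spread over a fixed interval; the barrier height, energies, and units (hbar = 2m = 1) are illustrative assumptions.

```python
import numpy as np

def transmission(E, N, V0, total_width, total_spacing):
    """Transmission coefficient |T|^2 through N identical rectangular
    barriers of height V0 spread over a fixed interval, via 2x2
    transfer matrices. Units: hbar = 2m = 1, so k = sqrt(E)."""
    b = total_width / N                              # width of one barrier
    s = total_spacing / (N - 1) if N > 1 else 0.0    # gap between barriers
    k = np.sqrt(E + 0j)                              # wavenumber outside
    q = np.sqrt(E - V0 + 0j)                         # wavenumber inside

    def interface(k1, k2):
        # plane-wave amplitude matching at a potential step
        return 0.5 * np.array([[1 + k2 / k1, 1 - k2 / k1],
                               [1 - k2 / k1, 1 + k2 / k1]])

    def free(kk, d):
        # free propagation over a distance d
        return np.array([[np.exp(1j * kk * d), 0],
                         [0, np.exp(-1j * kk * d)]])

    cell = interface(k, q) @ free(q, b) @ interface(q, k)
    M = cell.copy()
    for _ in range(N - 1):
        M = M @ free(k, s) @ cell
    return float(1.0 / np.abs(M[0, 0]) ** 2)
```

Sweeping `total_spacing` while holding `total_width` fixed probes exactly the spacing-to-width ratio the abstract describes; resonances show up as peaks of `transmission` as a function of E or of the ratio.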
Drift rate control of a Brownian processing system
A system manager dynamically controls a diffusion process Z that lives in a
finite interval [0,b]. Control takes the form of a negative drift rate \theta
that is chosen from a fixed set A of available values. The controlled process
evolves according to the differential relationship dZ=dX-\theta(Z) dt+dL-dU,
where X is a (0,\sigma) Brownian motion, and L and U are increasing processes
that enforce a lower reflecting barrier at Z=0 and an upper reflecting barrier
at Z=b, respectively. The cumulative cost process increases according to the
differential relationship d\xi =c(\theta(Z)) dt+p dU, where c(\cdot) is a
nondecreasing cost of control and p>0 is a penalty rate associated with
displacement at the upper boundary. The objective is to minimize long-run
average cost. This problem is solved explicitly, which allows one to also solve
the following, essentially equivalent formulation: minimize the long-run
average cost of control subject to an upper bound constraint on the average
rate at which U increases. The two special problem features that allow an
explicit solution are the use of a long-run average cost criterion, as opposed
to a discounted cost criterion, and the lack of state-related costs other than
boundary displacement penalties. The application of this theory to power
control in wireless communication is discussed.
Comment: Published at http://dx.doi.org/10.1214/105051604000000855 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
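The controlled dynamics are straightforward to simulate. The following sketch (an illustration under stated assumptions, not the paper's explicit solution) discretizes dZ = dX - \theta(Z) dt + dL - dU with an Euler scheme, implements the reflecting barriers by projection, and accumulates the cost d\xi = c(\theta(Z)) dt + p dU; the horizon, step size, and example policy are arbitrary choices.

```python
import numpy as np

def simulate_avg_cost(theta, c, b=1.0, p=2.0, sigma=1.0,
                      T=500.0, dt=1e-3, seed=0):
    """Euler scheme for dZ = dX - theta(Z) dt + dL - dU on [0, b], where
    L and U are the reflection (local-time) processes at 0 and b.
    Returns the simulated long-run average of c(theta(Z)) dt + p dU."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    z, cost = 0.5 * b, 0.0
    for _ in range(n):
        z += -theta(z) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if z < 0.0:              # lower barrier: dL pushes back to 0, no cost
            z = 0.0
        if z > b:                # upper barrier: dU = overshoot, penalized at p
            cost += p * (z - b)
            z = b
        cost += c(theta(z)) * dt # running cost of control
    return cost / T
```

A constant policy such as `theta = lambda z: 0.5` with quadratic control cost `c = lambda th: th**2` illustrates the trade-off the paper optimizes: a stronger negative drift raises the control cost but reduces displacement at the upper boundary.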
Fast Structuring of Radio Networks for Multi-Message Communications
We introduce collision free layerings as a powerful way to structure radio
networks. These layerings can replace hard-to-compute BFS-trees in many
contexts while having an efficient randomized distributed construction. We
demonstrate their versatility by using them to provide near optimal distributed
algorithms for several multi-message communication primitives.
Designing efficient communication primitives for radio networks has a rich
history that began 25 years ago when Bar-Yehuda et al. introduced fast
randomized algorithms for broadcasting and for constructing BFS-trees. Their
BFS-tree construction time was O(D log^2 n) rounds, where D is the network
diameter and n is the number of nodes. Since then, the complexity of a
broadcast has been resolved to be Theta(D log(n/D) + log^2 n) rounds. On the other hand, BFS-trees have been used as a crucial building
block for many communication primitives and their construction time remained a
bottleneck for these primitives.
We introduce collision free layerings that can be used in place of BFS-trees
and we give a randomized construction of these layerings that runs in nearly
broadcast time, that is, w.h.p. in O(D log(n/D) + log^{2+\epsilon} n) rounds for
any constant \epsilon > 0. We then use these
layerings to obtain: (1) A randomized algorithm for gathering k messages
running w.h.p. in O(k + D log(n/D) + log^{2+\epsilon} n) rounds. (2) A randomized
k-message broadcast algorithm running w.h.p. in
O(k log n + D log(n/D) + log^{2+\epsilon} n) rounds. These
algorithms are optimal up to the small difference in the additive
poly-logarithmic term between log^2 n and log^{2+\epsilon} n. Moreover, they imply the
first optimal O(n log n) round randomized gossip algorithm.
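The collision model behind these bounds can be illustrated with the classic Decay routine of Bar-Yehuda et al. for single-message broadcast. The sketch below is a toy simulation under standard assumptions (synchronous rounds; a node receives only a collision-free transmission); it is not the layering construction from the paper.

```python
import math
import random

def decay_broadcast(adj, source, max_rounds=10000, seed=1):
    """Toy simulation of the Decay protocol of Bar-Yehuda et al.: in each
    round every informed node transmits with probability
    2^-(round mod ceil(log2 n)); an uninformed node receives only when
    EXACTLY one of its neighbors transmits (collisions destroy both)."""
    n = len(adj)
    rng = random.Random(seed)
    logn = max(1, math.ceil(math.log2(n)))
    informed = {source}
    for r in range(max_rounds):
        p = 2.0 ** -(r % logn)
        transmitting = {v for v in informed if rng.random() < p}
        newly = set()
        for v in adj:
            if v not in informed:
                hits = sum(1 for u in adj[v] if u in transmitting)
                if hits == 1:              # collision-free reception
                    newly.add(v)
        informed |= newly
        if len(informed) == n:
            return r + 1                   # rounds until fully informed
    return None
```

On a path graph the informed frontier has exactly one informed neighbor, so rounds with transmission probability 1 always advance it; on denser graphs the geometrically decaying probabilities are what break collisions.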
Efficiency at optimal work from finite reservoirs: a probabilistic perspective
We revisit the classic thermodynamic problem of maximum work extraction from
two arbitrary sized hot and cold reservoirs, modelled as perfect gases.
Assuming ignorance about the extent to which the process has advanced, which
implies ignorance about the final temperatures, we quantify the prior
information about the process and assign a prior distribution to the unknown
temperature(s). This requires that we also take into account the temperature
values which are regarded to be unphysical in the standard theory, as they lead
to a contradiction with the physical laws. Instead, in our formulation, such
values appear to be consistent with the given prior information and hence are
included in the inference. We derive estimates of the efficiency at optimal
work from the expected values of the final temperatures, and show that these
values match with the exact expressions in the limit when any one of the
reservoirs is very large compared to the other. For other relative sizes of the
reservoirs, we suggest a weighting procedure over the estimates from two valid
inference procedures that generalizes the procedure suggested earlier in [J.
Phys. A: Math. Theor. {\bf 46}, 365002 (2013)]. Thus a mean estimate for
efficiency is obtained which agrees with the optimal performance to a high
accuracy.
Comment: 14 pages, 6 figures
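For orientation, the textbook benchmark behind these estimates is easy to state for two equal-sized reservoirs: reversible (maximum-work) extraction from perfect-gas reservoirs of heat capacity C at temperatures T1 > T2 drives both to the common temperature sqrt(T1 T2), giving efficiency 1 - sqrt(T2/T1). The sketch below computes this standard result; it is not the paper's Bayesian weighting procedure.

```python
import math

def optimal_work(C, T1, T2):
    """Maximum (reversible) work from two equal finite reservoirs of
    constant heat capacity C at temperatures T1 > T2.  Reversibility
    (zero total entropy change) forces the common final temperature
    Tf = sqrt(T1 * T2)."""
    Tf = math.sqrt(T1 * T2)
    W = C * (T1 + T2 - 2.0 * Tf)   # energy balance over both reservoirs
    Qh = C * (T1 - Tf)             # heat drawn from the hot reservoir
    eta = W / Qh                   # simplifies to 1 - sqrt(T2/T1)
    return Tf, W, eta
```

For example, `optimal_work(1.0, 400.0, 100.0)` returns Tf = 200, W = 100, and efficiency 0.5, i.e. 1 - sqrt(100/400), consistent with the large-reservoir limit discussed in the abstract.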
Reply to a Commentary "Asking photons where they have been without telling them what to say"
Interesting objections to conclusions of our experiment with nested
interferometers raised by Salih in a recent Commentary are analysed and
refuted.
Comment: Published version (Frontiers in Physics), replying to the revised
version of the Commentary.
Quaternion normalization in additive EKF for spacecraft attitude determination
This work introduces, examines, and compares several quaternion normalization algorithms, which are shown to be an effective stage in the application of the additive extended Kalman filter (EKF) to spacecraft attitude determination based on vector measurements. Two new normalization schemes are introduced. They are compared with one another and with the known brute-force normalization scheme, and their efficiency is examined. Simulated satellite data are used to demonstrate the performance of all three schemes. A fourth scheme is suggested for future research. Although the schemes were tested for spacecraft attitude determination, the conclusions are general: they hold for attitude determination of any three-dimensional body whenever vector measurements are used, an additive EKF performs the estimation, and the quaternion specifies the attitude.
Quaternion normalization in spacecraft attitude determination
Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. 
However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them to the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
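The algebraic difference between the two filter conventions is compact enough to sketch. The snippet below (an illustration, not the paper's filters) shows the additive update, which adds the estimated 4-tuple error and then renormalizes, next to the multiplicative update, which composes the a-priori estimate with the estimated error rotation; the quaternion ordering [w, x, y, z] is an assumption.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2])

def additive_update(q_prior, dq, normalize=True):
    """AEKF-style update: dq is an algebraic 4-tuple difference;
    brute-force normalization restores the unit norm."""
    q = q_prior + dq
    return q / np.linalg.norm(q) if normalize else q

def multiplicative_update(q_prior, dq):
    """MEKF-style update: dq is itself a (small) rotation quaternion
    composed with the a-priori estimate."""
    q = quat_mult(q_prior, dq)
    return q / np.linalg.norm(q)
```

The brute-force normalization inside `additive_update` is the baseline scheme the paper compares against; the schemes it introduces differ in how, and in which filter stage, the unit-norm constraint is enforced.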
Solid-state electronic spin coherence time approaching one second
Solid-state electronic spin systems such as nitrogen-vacancy (NV) color
centers in diamond are promising for applications of quantum information,
sensing, and metrology. However, a key challenge for such solid-state systems
is to realize a spin coherence time that is much longer than the time for
quantum spin manipulation protocols. Here we demonstrate an improvement of more
than two orders of magnitude in the spin coherence time (T_2) of NV centers
compared to previous measurements: T_2 \approx 0.6 s at 77 K, which enables
coherent NV spin manipulations before decoherence. We employed
dynamical decoupling pulse sequences to suppress NV spin decoherence due to
magnetic noise, and found that T_2 is limited to approximately half of the
longitudinal spin relaxation time (T_1) over a wide range of temperatures,
which we attribute to phonon-induced decoherence. Our results apply to
ensembles of NV spins and do not depend on the optimal choice of a specific NV,
which could advance quantum sensing, enable squeezing and many-body
entanglement in solid-state spin ensembles, and open a path to simulating a
wide range of driven, interaction-dominated quantum many-body Hamiltonians.
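The qualitative mechanism, dynamical decoupling filtering out slow magnetic noise, can be reproduced in a toy Monte Carlo. The sketch below uses illustrative parameters, ideal instantaneous pi-pulses, and Ornstein-Uhlenbeck frequency noise standing in for the spin bath; it shows coherence improving as the number of CPMG pulses grows, and omits the phonon-induced T_1 ceiling reported in the paper.

```python
import numpy as np

def cpmg_coherence(n_pulses, t_total, tau_c=1.0, sigma_b=1.5,
                   dt=0.01, n_traj=400, seed=0):
    """Monte Carlo coherence <cos(phi)> of a qubit dephased by
    Ornstein-Uhlenbeck frequency noise b(t) under an n_pulses CPMG
    sequence; each ideal pi-pulse flips the sign of subsequent phase
    accumulation."""
    rng = np.random.default_rng(seed)
    steps = int(t_total / dt)
    # CPMG pulse times t_total * (k - 1/2) / n_pulses, k = 1..n_pulses
    pulse_steps = {int(round(steps * (k - 0.5) / n_pulses))
                   for k in range(1, n_pulses + 1)}
    phases = np.zeros(n_traj)
    b = rng.normal(0.0, sigma_b, n_traj)     # start in the stationary state
    sign = 1.0
    for s in range(steps):
        if s in pulse_steps:
            sign = -sign                     # pi-pulse refocuses slow noise
        b += -b * dt / tau_c \
             + sigma_b * np.sqrt(2.0 * dt / tau_c) * rng.normal(size=n_traj)
        phases += sign * b * dt
    return float(np.mean(np.cos(phases)))
```

With these defaults a single echo barely helps because the refocusing interval exceeds the noise correlation time, whereas 16 pulses space the intervals well below it and recover most of the coherence.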
Denaturation of Circular DNA: Supercoil Mechanism
The denaturation transition which takes place in circular DNA is analyzed by
extending the Poland-Scheraga model to include the winding degrees of freedom.
We consider the case of a homopolymer whereby the winding number of the double
stranded helix, released by a loop denaturation, is absorbed by
\emph{supercoils}. We find that as in the case of linear DNA, the order of the
transition is determined by the loop exponent c. However, the first-order
transition displayed by the PS model for c > 2 in linear DNA is replaced by a
continuous transition with arbitrarily high order as c approaches 2, while
the second-order transition found in the linear case in the regime 1 < c \le 2
disappears. In addition, our analysis reveals that melting under fixed linking
number is a \emph{condensation transition}, where the condensate is a
macroscopic loop which appears above the critical temperature.
Comment: 9 pages, 4 figures
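The role of the loop exponent can be made concrete through the loop-class generating function of the PS model, in which a loop of length l carries weight z^l / l^c (a polylogarithm when summed). In the standard analysis the transition is first order when the mean loop length, proportional to the sum of z^l / l^{c-1}, stays finite as z approaches 1 (c > 2), and continuous when it diverges (c at most 2). The truncated-sum sketch below is a generic illustration of that criterion, not the paper's supercoil calculation.

```python
def loop_sums(c, z, lmax=100000):
    """Truncated polylog sums for the Poland-Scheraga loop weight
    z^l / l^c: returns (Li_c(z), Li_{c-1}(z)), i.e. the loop partition
    sum and a quantity proportional to the mean loop length."""
    li_c = sum(z ** l / l ** c for l in range(1, lmax + 1))
    li_cm1 = sum(z ** l / l ** (c - 1) for l in range(1, lmax + 1))
    return li_c, li_cm1
```

At z = 1 the first sum converges to the Riemann zeta value for any c > 1, while the mean-loop-length sum converges only for c > 2: `loop_sums(2.5, 1.0)` stays bounded, whereas `loop_sums(1.5, 1.0)` keeps growing as the cutoff increases, which is the divergence that changes the order of the transition.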