Active repositioning of storage units in Robotic Mobile Fulfillment Systems
In our work we focus on Robotic Mobile Fulfillment Systems in e-commerce
distribution centers. These systems were designed to increase pick rates by
employing mobile robots that bring movable storage units (so-called pods) to pick
and replenishment stations as needed and return them to the storage area afterwards.
One advantage of this approach is that repositioning of inventory can be done
continuously, even during pick and replenishment operations. This is primarily
accomplished by bringing a pod to a storage location different from the one it
was fetched from, a process we call passive pod repositioning. Additionally,
this can be done by explicitly bringing a pod from one storage location to
another, a process we call active pod repositioning. In this work, we introduce
the first mechanisms for the latter technique and conduct a simulation-based
experiment to give first insights into their effects.
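To make the distinction concrete, here is a minimal Python sketch of what an active repositioning rule could look like. It is not one of the mechanisms introduced in the paper: the names (`Pod`, `slot_distance`, `active_move`) and the greedy pick-frequency rule are illustrative assumptions only.

```python
# Toy sketch (not the paper's mechanism): rank storage slots by distance to the
# stations and, when robots are idle, actively move the pod whose pick frequency
# is most out of line with the quality of its current slot.
from dataclasses import dataclass

@dataclass
class Pod:
    pod_id: int
    pick_frequency: float   # recent picks per hour (assumed known)
    location: int           # index into slot_distance

# Hypothetical layout: slot_distance[i] = travel distance from slot i to the stations.
slot_distance = [2.0, 3.5, 5.0, 7.5, 9.0, 12.0]

def passive_target(free_slots):
    """Passive repositioning: drop a returning pod at the closest free slot."""
    return min(free_slots, key=lambda s: slot_distance[s])

def active_move(pods, free_slots):
    """Active repositioning: pick the busiest pod sitting in a slot worse than
    the best currently free slot, and propose moving it there."""
    if not free_slots:
        return None
    best_free = passive_target(free_slots)
    candidates = [p for p in pods if slot_distance[p.location] > slot_distance[best_free]]
    if not candidates:
        return None
    pod = max(candidates, key=lambda p: p.pick_frequency)
    return pod.pod_id, pod.location, best_free

pods = [Pod(0, 12.0, 4), Pod(1, 1.5, 0), Pod(2, 7.0, 5)]
print(active_move(pods, free_slots=[1, 2]))   # e.g. move pod 0 from slot 4 to slot 1
```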
Risk, cohabitation and marriage
This paper introduces imperfect information, learning, and risk aversion in a two-sided matching model. The model provides a theoretical framework for the commonly occurring phenomenon of cohabitation followed by marriage, and is consistent with empirical findings on these institutions. The paper has three major results. First, individuals set higher standards for marriage than for cohabitation. When the true worth of a cohabiting partner is revealed, some cohabiting unions are converted into marriage while others are not. Second, individuals cohabit within classes. Third, the premium that compensates individuals for the higher risk involved in marriage over a cohabiting partnership is derived. This premium can be decomposed into two parts. The first part is a function of the individual's level of risk aversion, while the second part is a function of the difference in risk between marriage and cohabitation.
Signature of Inverse Compton emission from blazars
Blazars are classified into high, intermediate and low energy peaked sources
based on the location of their synchrotron peak. This lies in infra-red/optical
to ultra-violet bands for low and intermediate peaked blazars. The transition
from synchrotron to inverse Compton emission falls in the X-ray bands for such
sources. We present the spectral and timing analysis of 14 low and intermediate
energy peaked blazars observed with XMM-Newton spanning 31 epochs. Parametric
fits to the X-ray spectra help constrain the possible location of the transition
from the high energy end of the synchrotron emission to the low energy end of the
inverse Compton emission. In seven sources in our sample, we infer such a transition
and constrain the break energy in the range 0.6-10 keV. The Lomb-Scargle
periodogram is used to estimate the power spectral density (PSD) shape. It is
well described by a power law in a majority of the light curves, with the index,
ranging here between 0.01 and 1.12, flatter than the general expectation for AGN,
possibly due to short observation durations resulting in an absence of long term
trends. A toy model involving synchrotron self-Compton (SSC) and external Compton
(EC; disk, broad line region, torus) mechanisms is used to estimate magnetic field
strengths of 0.03-0.88 G in sources displaying the energy break and to infer a
prominent EC contribution. Variability timescales shorter than the synchrotron
cooling timescale imply steeper PSD slopes, which are inferred in these sources.
Comment: 24 pages, 6 Tables, 13 figures, Accepted for MNRAS
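As a rough illustration of the PSD estimation step, the sketch below computes a Lomb-Scargle periodogram with astropy for a synthetic, unevenly sampled light curve and fits a power law to it in log-log space. The synthetic data and the simple least-squares slope fit are assumptions for illustration, not the authors' analysis pipeline.

```python
# Sketch: estimate a power-law PSD slope from an unevenly sampled light curve
# with the Lomb-Scargle periodogram (synthetic data; illustrative only).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 40e3, 500))           # observation times in seconds
flux = 1.0 + 0.1 * np.sin(2 * np.pi * t / 5e3) + 0.05 * rng.standard_normal(t.size)

# Periodogram over frequencies resolvable within the observation window.
frequency, power = LombScargle(t, flux).autopower(nyquist_factor=0.5)

# Fit P(f) ~ f^(-alpha) by least squares in log-log space (crude slope estimate).
mask = (frequency > 0) & (power > 0)
slope, intercept = np.polyfit(np.log10(frequency[mask]), np.log10(power[mask]), 1)
print(f"PSD power-law index alpha ~ {-slope:.2f}")
```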
Dirac neutrinos and anomaly-free discrete gauge symmetries
Relying on Dirac neutrinos allows an infinity of anomaly-free discrete gauge
symmetries to be imposed on the Supersymmetric Standard Model, some of which
are GUT-compatible.
Comment: 24 pages, minor changes, existence of flipped discrete gauge symmetries is pointed out
Computing Teichmüller Maps between Polygons
By the Riemann mapping theorem, one can bijectively map the interior of an
$n$-gon to that of another $n$-gon conformally. However, (the boundary
extension of) this mapping need not necessarily map the vertices of one polygon
to those of the other. In this case, one wants to find the ``best'' mapping between
these polygons, i.e., one that minimizes the maximum angle distortion (the
dilatation) over \textit{all} points in the source polygon. From complex analysis
such maps are known to exist and are unique. They are called extremal
quasiconformal maps, or Teichmüller maps.
Although there are many efficient ways to compute or approximate conformal
maps, there is currently no such algorithm for extremal quasiconformal maps.
This paper studies the problem of computing extremal quasiconformal maps both
in the continuous and discrete settings.
We provide the first constructive method to obtain the extremal
quasiconformal map in the continuous setting. Our construction is via an
iterative procedure that is proven to converge quickly to the unique extremal
map. To get to within $\varepsilon$ of the dilatation of the extremal map, our
method uses a bounded number of iterations. Every step of the iteration
involves convex optimization and solving differential equations, and guarantees
a decrease in the dilatation. Our method uses a reduction of the polygon
mapping problem to that of the punctured sphere problem, thus solving a more
general problem.
We also discretize our procedure. We provide evidence for the fact that the
discrete procedure closely follows the continuous construction and is therefore
expected to converge quickly to a good approximation of the extremal
quasiconformal map.
Comment: 28 pages, 6 figures
Inner product computation for sparse iterative solvers on distributed supercomputers
Recent years have witnessed that iterative Krylov methods are not suitable for distributed supercomputers without re-design, because of their intensive global communications. It is well accepted that re-engineering Krylov methods for a prescribed computer architecture is necessary and important to achieve higher performance and scalability. This paper focuses on simple and practical ways to re-organize Krylov methods and improve their performance on current heterogeneous distributed supercomputers. In contrast with most current software development for Krylov methods, which usually focuses on efficient matrix-vector multiplications, the paper focuses on the way inner products are computed on supercomputers and explains why inner product computation on current heterogeneous distributed supercomputers is crucial for scalable Krylov methods. A communication complexity analysis shows how inner product computation can become the performance bottleneck of (inner) product-type iterative solvers on distributed supercomputers due to global communications. Principles for reducing such global communications are discussed. The importance of minimizing communications is demonstrated by experiments using up to 900 processors, carried out on a Dawning 5000A, one of the fastest and earliest heterogeneous supercomputers in the world. Both the analysis and the experiments indicate that inner product computation is very likely to be the most challenging kernel for inner-product-based iterative solvers to achieve exascale performance.
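The communication argument can be illustrated with a small mpi4py sketch (an illustration of the idea, not the paper's code): several inner products are first reduced one by one, each triggering its own global collective, and then grouped so that all partial sums travel in a single Allreduce, which is the kind of reorganization the paper advocates.

```python
# Sketch: compute k inner products with ONE global reduction instead of k.
# Run with e.g. `mpiexec -n 4 python grouped_dots.py` (mpi4py assumed available).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
n_local, k = 100_000, 3                      # local vector length, number of dot products

rng = np.random.default_rng(comm.Get_rank())
x = rng.standard_normal(n_local)
ys = [rng.standard_normal(n_local) for _ in range(k)]

# Naive version: k separate blocking all-reduces (k rounds of global communication).
naive = [comm.allreduce(float(np.dot(x, y)), op=MPI.SUM) for y in ys]

# Grouped version: pack all local partial sums and reduce them in one collective.
local = np.array([np.dot(x, y) for y in ys])
grouped = np.empty_like(local)
comm.Allreduce(local, grouped, op=MPI.SUM)

if comm.Get_rank() == 0:
    print("naive  :", naive)
    print("grouped:", grouped)               # same values, one global reduction
```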
Optimal CUR Matrix Decompositions
The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$
matrix $C$ with a subset of $c < n$ columns of $A$, together with an $r \times n$
matrix $R$ with a subset of $r < m$ rows of $A$, as well as a $c \times r$
low-rank matrix $U$ such that the matrix $CUR$ approximates the matrix $A$,
that is, $\|A - CUR\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2$, where $\|\cdot\|_F$
denotes the Frobenius norm and $A_k$ is the best matrix of rank $k$ constructed
via the SVD. We present input-sparsity-time and deterministic algorithms for
constructing such a CUR decomposition where $c = O(k/\epsilon)$, $r = O(k/\epsilon)$,
and $\mathrm{rank}(U) = k$. Up to constant factors, our algorithms are
simultaneously optimal in $c$, $r$, and $\mathrm{rank}(U)$.
Comment: small revision in lemma 4.
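For orientation, the sketch below builds a basic (non-optimal) CUR approximation in NumPy: columns and rows are sampled uniformly at random and $U$ is taken as $C^{+} A R^{+}$, which minimizes the Frobenius error for the chosen $C$ and $R$. The uniform sampling and the resulting quality are illustrative assumptions; the paper's algorithms achieve the optimal bounds stated above.

```python
# Sketch: a basic CUR approximation A ~ C U R with uniformly sampled columns/rows
# and U = pinv(C) @ A @ pinv(R). Illustrative only -- not the paper's optimal algorithm.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 200, 150, 5
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))   # rank-k test matrix
A += 0.01 * rng.standard_normal((m, n))                         # small noise

c = r = 4 * k                                  # number of sampled columns/rows
cols = rng.choice(n, size=c, replace=False)
rows = rng.choice(m, size=r, replace=False)

C = A[:, cols]                                 # m x c: subset of columns of A
R = A[rows, :]                                 # r x n: subset of rows of A
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # c x r: best U (Frobenius) for fixed C, R

# Compare against the best rank-k approximation A_k from the SVD.
err_cur = np.linalg.norm(A - C @ U @ R, "fro")
u, s, vt = np.linalg.svd(A, full_matrices=False)
A_k = (u[:, :k] * s[:k]) @ vt[:k, :]
err_svd = np.linalg.norm(A - A_k, "fro")
print(f"||A - CUR||_F = {err_cur:.4f}   vs   ||A - A_k||_F = {err_svd:.4f}")
```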
