
    Active repositioning of storage units in Robotic Mobile Fulfillment Systems

    In our work we focus on Robotic Mobile Fulfillment Systems in e-commerce distribution centers. These systems were designed to increase pick rates by employing mobile robots that bring movable storage units (so-called pods) to pick and replenishment stations as needed, and back to the storage area afterwards. One advantage of this approach is that repositioning of inventory can be done continuously, even during pick and replenishment operations. This is primarily accomplished by bringing a pod to a storage location different from the one it was fetched from, a process we call passive pod repositioning. Additionally, it can be done by explicitly bringing a pod from one storage location to another, a process we call active pod repositioning. In this work we introduce the first mechanisms for the latter technique and conduct a simulation-based experiment to give first insights into their effects.
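    The abstract does not spell out the mechanisms themselves, so the following is a purely illustrative sketch of what an active repositioning rule could look like (the greedy heuristic, data layout, and all names below are assumptions, not the authors' method): during robot idle time, move the most frequently picked pod to the free storage location closest to a pick station.

        import math

        # Hypothetical data model: pods with pick frequencies and grid
        # positions, free storage slots, and one pick station (illustrative).

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def active_reposition(pods, free_slots, station):
            """Greedy toy rule: relocate the hottest pod to the free slot
            nearest the pick station, if that shortens its travel distance."""
            pod = max(pods, key=lambda p: p["pick_freq"])
            slot = min(free_slots, key=lambda s: dist(s, station))
            if dist(slot, station) < dist(pod["pos"], station):
                return pod["name"], slot  # a move order for an idle robot
            return None  # no improving move available

        pods = [{"name": "pod-A", "pick_freq": 17, "pos": (9, 9)},
                {"name": "pod-B", "pick_freq": 3, "pos": (2, 2)}]
        print(active_reposition(pods, [(1, 1), (6, 6)], station=(0, 0)))
        # ('pod-A', (1, 1)): bring the hot pod next to the station

    Passive repositioning, by contrast, happens for free whenever a pod returning from a station is simply parked in a different slot.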

    Risk, cohabitation and marriage

    This paper introduces imperfect information, learning, and risk aversion in a two-sided matching model. The model provides a theoretical framework for the commonly occurring phenomenon of cohabitation followed by marriage, and is consistent with empirical findings on these institutions. The paper has three major results. First, individuals set higher standards for marriage than for cohabitation. When the true worth of a cohabiting partner is revealed, some cohabiting unions are converted into marriage while others are not. Second, individuals cohabit within classes. Third, the premium that compensates individuals for the higher risk involved in marriage over a cohabiting partnership is derived. This premium can be decomposed into two parts. The first part is a function of the individual's level of risk aversion, while the second part is a function of the difference in risk between marriage and cohabitation.
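    As a purely schematic illustration of the stated decomposition (the notation below is an expository assumption, not the paper's), the marriage premium $\pi$ can be written as

        $\pi = \underbrace{f(\rho)}_{\text{risk aversion}} + \underbrace{g(\sigma_M - \sigma_C)}_{\text{risk difference}}$

    where $\rho$ is the individual's level of risk aversion and $\sigma_M - \sigma_C$ is the difference in risk between marriage and cohabitation.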

    Signature of Inverse Compton emission from blazars

    Blazars are classified into high, intermediate and low energy peaked sources based on the location of their synchrotron peak. This lies in the infra-red/optical to ultra-violet bands for low and intermediate peaked blazars, so the transition from synchrotron to inverse Compton emission falls in the X-ray bands for such sources. We present the spectral and timing analysis of 14 low and intermediate energy peaked blazars observed with XMM-Newton spanning 31 epochs. Parametric fits to the X-ray spectra help constrain the possible location of the transition from the high energy end of the synchrotron emission to the low energy end of the inverse Compton emission. In seven sources in our sample, we infer such a transition and constrain the break energy in the range 0.6-10 keV. The Lomb-Scargle periodogram is used to estimate the power spectral density (PSD) shape. It is well described by a power law in a majority of light curves, with the index ranging between 0.01 and 1.12, flatter than the general expectation for AGN, possibly because the short observation durations miss long term trends. A toy model involving synchrotron self-Compton (SSC) and external Compton (EC; disk, broad line region, torus) mechanisms is used to estimate magnetic field strengths of 0.03-0.88 G in sources displaying the energy break and to infer a prominent EC contribution. The variability timescale being shorter than the synchrotron cooling timescale implies the steeper PSD slopes inferred in these sources. Comment: 24 pages, 6 tables, 13 figures, accepted for MNRAS.
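    The PSD estimation step can be reproduced generically. The sketch below (a minimal example on synthetic data, not the authors' pipeline; astropy's LombScargle is assumed available) computes the periodogram of an unevenly sampled light curve and fits a power law $P(f) \propto f^{-\alpha}$ by linear regression in log-log space.

        import numpy as np
        from astropy.timeseries import LombScargle

        # Toy light curve: unevenly sampled random walk, which has a steep
        # red-noise-like PSD (illustrative stand-in for an X-ray light curve).
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 40_000, 500))   # observation times, seconds
        flux = np.cumsum(rng.normal(size=t.size))

        freq, power = LombScargle(t, flux).autopower()

        # P(f) ~ f^(-alpha) is a straight line in log-log space.
        mask = power > 0
        slope, _ = np.polyfit(np.log10(freq[mask]), np.log10(power[mask]), 1)
        print(f"PSD power-law index: {-slope:.2f}")
        # the abstract reports indices of 0.01-1.12 across their sample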

    Dirac neutrinos and anomaly-free discrete gauge symmetries

    Relying on Dirac neutrinos allows an infinity of anomaly-free discrete gauge symmetries to be imposed on the Supersymmetric Standard Model, some of which are GUT-compatible. Comment: 24 pages, minor changes, existence of flipped discrete gauge symmetries is pointed out.

    Computing Teichm\"{u}ller Maps between Polygons

    By the Riemann mapping theorem, one can bijectively map the interior of an $n$-gon $P$ to that of another $n$-gon $Q$ conformally. However, (the boundary extension of) this mapping need not map the vertices of $P$ to those of $Q$. In this case, one wants to find the "best" mapping between these polygons, i.e., one that minimizes the maximum angle distortion (the dilatation) over all points in $P$. From complex analysis such maps are known to exist and to be unique. They are called extremal quasiconformal maps, or Teichm\"{u}ller maps. Although there are many efficient ways to compute or approximate conformal maps, there is currently no such algorithm for extremal quasiconformal maps. This paper studies the problem of computing extremal quasiconformal maps in both the continuous and discrete settings. We provide the first constructive method to obtain the extremal quasiconformal map in the continuous setting. Our construction is via an iterative procedure that is proven to converge quickly to the unique extremal map. To get to within $\epsilon$ of the dilatation of the extremal map, our method uses $O(1/\epsilon^{4})$ iterations. Every step of the iteration involves convex optimization and solving differential equations, and guarantees a decrease in the dilatation. Our method uses a reduction of the polygon mapping problem to the punctured sphere problem, thus solving a more general problem. We also discretize our procedure and provide evidence that the discrete procedure closely follows the continuous construction and is therefore expected to converge quickly to a good approximation of the extremal quasiconformal map. Comment: 28 pages, 6 figures.
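    The abstract gives only the high-level shape of the procedure, so the skeleton below is purely structural: the improve_step stub stands in for one round of convex optimization plus a differential-equation solve, which is the paper's actual content, and all names are illustrative assumptions. It shows the iterate-while-dilatation-decreases loop together with the $O(1/\epsilon^{4})$ iteration budget; for $\epsilon = 0.1$ that budget is $10^4$ iterations.

        def extremal_qc_map(initial_map, eps, improve_step):
            """Structural sketch of the iteration described in the abstract.
            `improve_step` must return a candidate map of strictly smaller
            dilatation, mirroring the guaranteed decrease per iteration."""
            budget = int(1 / eps ** 4)      # O(1/eps^4) iterations suffice
            current = initial_map
            for _ in range(budget):
                candidate = improve_step(current)
                if candidate.dilatation >= current.dilatation:
                    break                   # no further decrease: converged
                current = candidate
            return current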

    Inner product computation for sparse iterative solvers on distributed supercomputers

    Recent years have witnessed that iterative Krylov methods are not suitable for distributed supercomputers without re-design, because of their intensive global communications. It is well accepted that re-engineering Krylov methods for a prescribed computer architecture is necessary and important to achieve higher performance and scalability. This paper focuses on simple and practical ways to re-organize Krylov methods and improve their performance on current heterogeneous distributed supercomputers. In contrast with most current software development for Krylov methods, which usually focuses on efficient matrix-vector multiplications, this paper focuses on how inner products are computed on supercomputers and explains why inner product computation on current heterogeneous distributed supercomputers is crucial for scalable Krylov methods. Communication complexity analysis shows how inner product computation can become the performance bottleneck of (inner) product-type iterative solvers on distributed supercomputers due to global communications. Principles for reducing such global communications are discussed. The importance of minimizing communications is demonstrated by experiments using up to 900 processors, carried out on a Dawning 5000A, one of the fastest and earliest heterogeneous supercomputers in the world. Both the analysis and the experiments indicate that inner product computation is very likely to be the most challenging kernel for inner product-based iterative solvers at exascale.
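    The communication argument can be made concrete with a small sketch (assuming mpi4py and NumPy; this is a generic illustration, not the paper's code). Each global inner product is one allreduce, i.e. one latency-bound global synchronization, so stacking several local partial dot products into a single buffer replaces k allreduces with one at no extra bandwidth cost:

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        def fused_dots(local_vectors, local_rhs):
            """Compute several global inner products with one allreduce.
            Naive Krylov implementations issue one allreduce per dot
            product; fusing them cuts the number of global syncs."""
            partial = np.array([v @ local_rhs for v in local_vectors])
            result = np.empty_like(partial)
            comm.Allreduce(partial, result, op=MPI.SUM)  # one global sync
            return result

        # Example: two reductions of a CG-like iteration in a single sync.
        rng = np.random.default_rng(comm.Get_rank())
        p, r = rng.normal(size=1000), rng.normal(size=1000)
        dot_pr, dot_rr = fused_dots([p, r], r)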

    Optimal CUR Matrix Decompositions

    The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$ matrix $C$ with a subset of $c < n$ columns of $A$, together with an $r \times n$ matrix $R$ with a subset of $r < m$ rows of $A$, as well as a $c \times r$ low-rank matrix $U$ such that the matrix $CUR$ approximates the matrix $A$, that is, $\|A - CUR\|_F^2 \le (1+\epsilon) \|A - A_k\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm and $A_k$ is the best rank-$k$ approximation of $A$ constructed via the SVD. We present input-sparsity-time and deterministic algorithms for constructing such a CUR decomposition where $c = O(k/\epsilon)$, $r = O(k/\epsilon)$, and $\mathrm{rank}(U) = k$. Up to constant factors, our algorithms are simultaneously optimal in $c$, $r$, and $\mathrm{rank}(U)$. Comment: small revision in lemma 4.
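    A minimal numerical sketch of the objects involved (uniform column/row sampling serves as a stand-in for the paper's optimal selection, and, unlike the paper's construction, the core $U$ here is not constrained to rank $k$): for fixed $C$ and $R$, the Frobenius-optimal core is $U = C^{+} A R^{+}$, and the resulting error can be compared against the SVD truncation $A_k$.

        import numpy as np

        rng = np.random.default_rng(1)
        m, n, k, c, r = 60, 50, 5, 15, 15
        A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))  # rank k
        A += 0.01 * rng.normal(size=(m, n))                    # small noise

        cols = rng.choice(n, size=c, replace=False)  # uniform sampling:
        rows = rng.choice(m, size=r, replace=False)  # a naive stand-in
        C, R = A[:, cols], A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # optimal core for C, R

        # Compare against the best rank-k approximation A_k from the SVD.
        u, s, vt = np.linalg.svd(A, full_matrices=False)
        A_k = (u[:, :k] * s[:k]) @ vt[:k]
        err = np.linalg.norm(A - C @ U @ R, "fro")
        opt = np.linalg.norm(A - A_k, "fro")
        print(f"||A - CUR||_F / ||A - A_k||_F = {err / opt:.2f}")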