
    Block-Structured Supermarket Models

    Supermarket models are a class of parallel queueing networks with an adaptive control scheme that play a key role in the study of resource management in areas such as computer networks, manufacturing systems, and transportation networks. When the arrival processes are non-Poisson and the service times are non-exponential, the analysis of such supermarket models remains limited, interesting, and challenging. This paper describes a supermarket model with non-Poisson inputs, Markovian Arrival Processes (MAPs), and non-exponential service times, phase-type (PH) distributions, and provides a generalized matrix-analytic method that is combined for the first time with the operator semigroup and the mean-field limit. In analyzing this more general supermarket model, the paper obtains the following new results and advances: (1) a detailed probability analysis for setting up an infinite-dimensional system of differential vector equations satisfied by the expected fraction vector, where "the invariance of environment factors" is established as an important result; (2) the introduction of the phase-type structure into the operator semigroup and the mean-field limit, from which a Lipschitz condition is obtained by means of a unified matrix-differential algorithm; (3) the use of the matrix-analytic method to compute the fixed point, which leads to performance computation for this system. Finally, numerical examples illustrate how the performance measures of this supermarket model depend on the non-Poisson inputs and the non-exponential service times. The results of this paper thus shed new light on the influence of non-Poisson inputs and of non-exponential service times on the performance measures of more general supermarket models.
    Comment: 65 pages; 7 figures
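    The mean-field system the abstract describes can be conveyed in its simplest special case, Poisson arrivals with exponential unit-rate service, rather than the paper's MAP/PH setting: the classical power-of-d-choices ODEs are integrated to their fixed point. The function name and parameters below are illustrative only, a sketch and not the paper's method.

```python
# Mean-field sketch of the classical power-of-d-choices supermarket model
# with Poisson arrivals (rate lam < 1) and exponential unit-rate service;
# the paper's MAP/PH setting generalizes these scalar rates to matrices.
# s[k] is the expected fraction of queues holding at least k jobs.

def mean_field_fixed_point(lam=0.5, d=2, K=20, dt=0.01, steps=20000):
    s = [1.0] + [0.0] * K                  # s[0] = 1 by convention
    for _ in range(steps):
        nxt = s[:]
        for k in range(1, K):
            arrivals = lam * (s[k - 1] ** d - s[k] ** d)  # join shortest of d sampled queues
            services = s[k] - s[k + 1]                    # unit-rate departures
            nxt[k] = s[k] + dt * (arrivals - services)
        s = nxt
    return s

s = mean_field_fixed_point()
# the fixed point has the doubly exponential tail s[k] = lam**((d**k - 1)/(d - 1))
```

    The doubly exponential tail of the fixed point is what makes the adaptive (join-the-shortest-of-d) scheme so much better than random assignment, whose tail decays only geometrically.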

    The visibility based Tapered Gridded Estimator (TGE) for the redshifted 21-cm power spectrum

    We present the improved visibility-based Tapered Gridded Estimator (TGE) for the power spectrum of the diffuse sky signal. The visibilities are gridded to reduce the computation, and tapered through a convolution to suppress the contribution from the outer regions of the telescope's field of view. The TGE also internally estimates the noise bias, and subtracts this out to give an unbiased estimate of the power spectrum. An earlier version of the 2D TGE for the angular power spectrum C_ℓ is improved and then extended to obtain the 3D TGE for the power spectrum P(k) of the 21-cm brightness temperature fluctuations. Analytic formulas are also presented for predicting the variance of the binned power spectrum. The estimator and its variance predictions are validated using simulations of 150 MHz GMRT observations. We find that the estimator accurately recovers the input model for the 1D spherical power spectrum P(k) and the 2D cylindrical power spectrum P(k_⊥, k_∥), and the predicted variance is also in reasonably good agreement with the simulations.
    Comment: 19 pages, 13 figures. Accepted for publication in MNRAS. The definitive version will be available at http://mnrasl.oxfordjournals.org
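    The internal noise-bias subtraction can be illustrated with a toy sketch: squaring a sum of visibilities and discarding the self terms leaves only cross-correlations, whose expectation is free of (uncorrelated) noise power. This is a hypothetical minimal illustration, not the estimator's actual gridding, tapering, or normalization; all names and numbers are made up.

```python
import numpy as np

# Toy noise-bias subtraction: each visibility = common signal s + own noise.
# |sum v_i|^2 mixes cross terms (signal only, in expectation) with N self
# terms |v_i|^2 that carry the noise power; subtracting the self terms
# leaves a noise-unbiased power estimate.
rng = np.random.default_rng(0)
sig_pow, noise_pow, N, trials = 2.0, 5.0, 32, 2000

def unbiased_power(vis):
    total = abs(vis.sum()) ** 2           # cross terms + noisy self terms
    self_terms = (abs(vis) ** 2).sum()    # the self terms hold the noise bias
    return (total - self_terms) / (N * N - N)

estimates = []
for _ in range(trials):
    s = complex(rng.normal(0, np.sqrt(sig_pow / 2)),
                rng.normal(0, np.sqrt(sig_pow / 2)))            # common sky signal
    noise = (rng.normal(0, np.sqrt(noise_pow / 2), N)
             + 1j * rng.normal(0, np.sqrt(noise_pow / 2), N))   # per-visibility noise
    estimates.append(unbiased_power(np.full(N, s) + noise))

mean_est = float(np.mean(estimates))  # ~ sig_pow, even though noise_pow >> sig_pow
```

    Even with noise power 2.5 times the signal power, the averaged estimate recovers the signal power, which is the sense in which the TGE's internal noise-bias estimate makes the power spectrum unbiased.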

    Mixing Hardware and Software Reversibility for Speculative Parallel Discrete Event Simulation

    Speculative parallel discrete event simulation requires support for reversing processed events, also called state recovery, when causal inconsistencies are revealed. In this article we present an approach where state recovery relies on a mix of hardware- and software-based techniques. We exploit the Hardware Transactional Memory (HTM) support offered by Intel Haswell CPUs to process events as in-memory transactions, which are committed only after their causal consistency is verified. At the same time, we exploit an innovative software-based reversibility technique, fully relying on transparent software instrumentation targeting x86/ELF objects, which enables undoing the side effects of events with no actual backward re-computation. Each thread within our speculative processing engine dynamically (on a per-event basis) selects which recovery mode to rely on (hardware vs. software) depending on varying runtime dynamics. The latter are captured by a lightweight analytic model indicating to what extent the HTM support (which pays no instrumentation cost) is efficient, and beyond what level of event parallelism its performance starts to degrade, e.g., due to excessive data conflicts while manipulating causality metadata within HTM-based transactions. We have released our implementation as open source software and provide experimental results assessing its effectiveness. © Springer International Publishing Switzerland 2016
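    The software-reversibility side can be sketched with an undo log that records overwritten values, so an event's side effects can be reversed without backward re-computation. The article achieves this transparently via x86/ELF instrumentation; the class below is a hypothetical plain-Python analogue, not the article's implementation.

```python
class ReversibleState:
    """Undo-log reversibility sketch: record old values before each write."""
    _MISSING = object()   # sentinel: key did not exist before the write

    def __init__(self, data):
        self.data = dict(data)
        self.undo_log = []

    def write(self, key, value):
        old = self.data.get(key, self._MISSING)
        self.undo_log.append((key, old))   # save the value being overwritten
        self.data[key] = value

    def rollback(self):
        while self.undo_log:               # restore in reverse order
            key, old = self.undo_log.pop()
            if old is self._MISSING:
                del self.data[key]
            else:
                self.data[key] = old

    def commit(self):
        self.undo_log.clear()              # event verified: drop undo info

state = ReversibleState({"x": 1})
state.write("x", 99)                       # speculative event side effects
state.write("y", 7)
state.rollback()                           # causal inconsistency: reverse them
```

    A per-event mode selector, as in the article, would choose between this logging (which pays an instrumentation cost on every write) and an HTM transaction (which pays nothing until a data conflict aborts it), based on the observed conflict rate.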

    The Power of Adaptivity in Quantum Query Algorithms

    Motivated by limitations on the depth of near-term quantum devices, we study the depth-computation trade-off in the query model, where the depth corresponds to the number of adaptive query rounds and the computation per layer corresponds to the number of parallel queries per round. We achieve the strongest known separation between quantum algorithms with r versus r−1 rounds of adaptivity. We do so by using the k-fold Forrelation problem introduced by Aaronson and Ambainis (SICOMP'18). For k = 2r, this problem can be solved using an r-round quantum algorithm with only one query per round, yet we show that any (r−1)-round quantum algorithm needs an exponential (in the number of qubits) number of parallel queries per round. Our results are proven using the Fourier-analytic machinery developed in recent works on quantum-classical separations. The key new component in our result is a set of bounds on the Fourier weights of quantum query algorithms with a bounded number of rounds of adaptivity. These may be of independent interest, as they distinguish the polynomials that arise from such algorithms from arbitrary bounded polynomials of the same degree.
    Comment: 35 pages, 9 figures
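    For concreteness, the k = 2 case of Forrelation measures how correlated one ±1-valued function is with the Walsh-Hadamard (Fourier) transform of another. A small sketch following the standard definition, phi(f, g) = 2^(−3n/2) · Σ_{x,y} f(x)(−1)^{x·y} g(y); the function names are illustrative, not from the paper.

```python
import numpy as np

def walsh_hadamard(v):
    """Unnormalized fast Walsh-Hadamard transform (length must be 2**n)."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b          # butterfly: sums ...
            v[i + h:i + 2 * h] = a - b  # ... and differences
        h *= 2
    return v

def forrelation2(f, g):
    """phi(f, g) = 2^{-3n/2} sum_{x,y} f(x) (-1)^{x.y} g(y)."""
    n = int(np.log2(len(f)))
    return float(np.dot(walsh_hadamard(f), np.asarray(g, dtype=float))) / 2 ** (1.5 * n)

# For f = g = all-ones on n = 3 bits, WH(f) = (2^n, 0, ..., 0),
# so phi = 2^n / 2^{3n/2} = 2^{-n/2}.
phi = forrelation2(np.ones(8), np.ones(8))
```

    The k-fold version nests k − 1 such transforms; an r-round quantum algorithm can probe this structure with one query per round, which is exactly the adaptivity the lower bound shows cannot be traded for parallelism.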

    Boltzmann sampling of unlabelled structures

    Boltzmann models from statistical physics, combined with methods from analytic combinatorics, give rise to efficient algorithms for the random generation of unlabelled objects. The resulting algorithms generate, in an unbiased manner, discrete configurations that may have nontrivial symmetries, and they do so by means of real-arithmetic computations. We present a collection of construction rules for such samplers, which applies to a wide variety of combinatorial classes, including integer partitions, necklaces, unlabelled functional graphs, dictionaries, series-parallel circuits, term trees, and acyclic molecules obeying a variety of constraints. Under an abstract real-arithmetic computation model, the algorithms are, for many classical structures, of linear complexity provided a small tolerance is allowed on the size of the object drawn. As opposed to many of their discrete competitors, the resulting programs routinely make it possible to generate random objects of sizes in the range 10⁴–10⁶.
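    The flavor of such samplers can be conveyed by a toy subcritical Boltzmann-style sampler for plane binary trees (B = Z + Z·B²): emit a leaf with probability p, otherwise recurse on two subtrees. The unlabelled samplers of the paper add Pólya operators on top of this recursive scheme; the sketch below is illustrative only and is not the paper's construction.

```python
import random

# Boltzmann-style sampler for plane binary trees, B = Z + Z*B*B:
# leaf with probability p, else an internal node with two recursive
# subtrees. For p > 1/2 the expected size is finite: E[size] = 1/(2p - 1).
def sample_tree_size(p, rng):
    if rng.random() < p:
        return 1                                       # leaf
    return 1 + sample_tree_size(p, rng) + sample_tree_size(p, rng)

rng = random.Random(42)
sizes = [sample_tree_size(0.7, rng) for _ in range(20000)]
mean_size = sum(sizes) / len(sizes)   # close to 1/(2*0.7 - 1) = 2.5
```

    Tuning p toward the critical value 1/2 pushes the expected size up, which is how Boltzmann samplers target large objects; exact-size generation is then obtained by rejecting draws outside the tolerated size window.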

    Transit timing to first order in eccentricity

    Characterization of transiting planets with transit timing variations (TTVs) requires understanding how to translate the observed TTVs into masses and orbital elements of the planets. This can be challenging in multi-planet transiting systems, but fortunately these systems tend to be nearly plane-parallel and of low eccentricity. Here we present a novel derivation of analytic formulae for TTVs that are accurate to first order in the planet-star mass ratios and in the orbital eccentricities. These formulae are accurate in proximity to first-order resonances, as well as away from resonance, and compare well with more computationally expensive N-body integrations in the low-eccentricity, low-mass-ratio regime when applied to simulated and to actual multi-transiting Kepler planet systems. We make code available for implementing these formulae.
    Comment: Revised to match published version; associated code may be found at https://github.com/ericagol/TTVFaste
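    As background to the technique, a TTV is the residual of an observed transit time about the best-fit linear ephemeris t_n = t0 + n·P. A minimal sketch of that first step on synthetic data; all numbers are made up for illustration and are unrelated to the paper's analytic formulae.

```python
import numpy as np

# Synthetic transit times: linear ephemeris plus a sinusoidal TTV signal
# (near-resonant perturbations produce roughly sinusoidal TTVs).
n_epoch = np.arange(30)
P, t0 = 10.0, 5.0                                     # period (days), epoch zero
ttv_in = 0.01 * np.sin(2 * np.pi * n_epoch / 7.0)     # injected TTV signal
times = t0 + n_epoch * P + ttv_in

slope, intercept = np.polyfit(n_epoch, times, 1)      # best-fit linear ephemeris
ttv_out = times - (intercept + slope * n_epoch)       # residuals = recovered TTVs
```

    Analytic formulae like the paper's then map the amplitude and phase of such residuals to planet masses and eccentricities, replacing a full N-body fit in the low-eccentricity, low-mass-ratio regime.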

    An efficient industrial big-data engine

    Current trends in industrial systems opt for the use of different big-data engines as a means to process huge amounts of data that cannot be processed with an ordinary infrastructure. An industrial infrastructure has to face a large number of issues, including challenges such as the definition of efficient architecture setups for different applications and the definition of specific models for industrial analytics. In this context, the article explores the development of a medium-size big-data engine (i.e., an implementation) able to improve performance in map-reduce computing by splitting the analytic into different segments that may be processed by the engine in parallel using a hierarchical model. This facility reduces the end-to-end computation time across segments, whose results are then merged with those of other segments after parallel processing. This setup increases the performance of current clusters, improving I/O operations remarkably, as empirical results revealed.
    Work partially supported by "Distributed Java Infrastructure for Real-Time Big-data" (CAS14/00118), eMadrid (S2013/ICE-2715), HERMES-SMARTDRIVER (TIN2013-46801-C4-2-R), and AUDACity (TIN2016-77158-C4-1-R).
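    The split-process-merge idea can be sketched with a word-count-style map-reduce over segments. This is a hypothetical illustration using a local thread pool, not the article's engine, which targets a real cluster; all names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Map step: process one segment of the data independently.
def process_segment(segment):
    counts = {}
    for word in segment.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Reduce step: merge the per-segment partial results.
def merge(partials):
    total = {}
    for part in partials:
        for word, c in part.items():
            total[word] = total.get(word, 0) + c
    return total

segments = ["a b a", "b c", "a c c"]          # the analytic split into segments
with ThreadPoolExecutor(max_workers=3) as pool:
    result = merge(pool.map(process_segment, segments))
```

    Processing the segments in parallel and merging afterward is what shortens the end-to-end time; a hierarchical variant, as the article describes, would merge partial results in stages rather than all at once.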