
    TTL Approximations of the Cache Replacement Algorithms LRU(m) and h-LRU

    Computer system and network performance can be significantly improved by caching frequently used information. When the cache size is limited, the cache replacement algorithm has an important impact on the effectiveness of caching. In this paper we introduce time-to-live (TTL) approximations to determine the cache hit probability of two classes of cache replacement algorithms: h-LRU and LRU(m). These approximations only require the requests to be generated according to a general Markovian arrival process (MAP). This includes phase-type renewal processes and the IRM model as special cases. We provide both numerical and theoretical support for the claim that the proposed TTL approximations are asymptotically exact. In particular, we show that the transient hit probability converges to the solution of a set of ODEs (under the IRM model), where the fixed point of the set of ODEs corresponds to the TTL approximation. We use this approximation and trace-based simulation to compare the performance of h-LRU and LRU(m). First, we show that they perform alike, while the latter requires less work when a hit/miss occurs. Second, we show that, as opposed to LRU, h-LRU and LRU(m) are sensitive to the correlation between consecutive inter-request times. Last, we study cache partitioning. In all tested cases, the hit probability improved by partitioning the cache into different parts, each dedicated to a particular content provider. However, the gain is limited and the optimal partition sizes are very sensitive to the problem's parameters.
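    The basic TTL approximation for plain LRU under the IRM model (often called the Che approximation), which the paper's h-LRU and LRU(m) approximations generalize, can be sketched as follows. The Zipf popularity profile and cache size below are illustrative choices, not values from the paper: one solves a fixed-point equation for a characteristic time T, after which each item behaves as if cached with its own TTL equal to T.

```python
import math

def che_characteristic_time(rates, cache_size, tol=1e-10):
    """Solve sum_i (1 - exp(-r_i * T)) = cache_size for T by bisection.
    Assumes cache_size < number of items."""
    occupancy = lambda T: sum(1.0 - math.exp(-r * T) for r in rates)
    lo, hi = 0.0, 1.0
    while occupancy(hi) < cache_size:   # grow the bracket until it contains T
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lru_hit_probability(rates, cache_size):
    """Overall hit probability under IRM: a request for item i (rate r_i) hits
    iff the previous request for i arrived within the characteristic time T."""
    T = che_characteristic_time(rates, cache_size)
    total = sum(rates)
    return sum(r / total * (1.0 - math.exp(-r * T)) for r in rates)

# Hypothetical workload: Zipf(0.8) popularity over 1000 items, cache of 100
rates = [1.0 / (i + 1) ** 0.8 for i in range(1000)]
print(lru_hit_probability(rates, 100))
```

    Because popular items are requested far more often than their share of the cache, the resulting hit probability is well above the naive 100/1000 = 0.1.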

    Lattice Green's Functions of the Higher-Dimensional Face-Centered Cubic Lattices

    We study the face-centered cubic lattice (fcc) in up to six dimensions. In particular, we are concerned with lattice Green's functions (LGF) and return probabilities. Computer algebra techniques, such as the method of creative telescoping, are used for deriving an ODE for a given LGF. For the four- and five-dimensional fcc lattices, we give rigorous proofs of the ODEs that were conjectured by Guttmann and Broadhurst. Additionally, we find the ODE of the LGF of the six-dimensional fcc lattice, a result that was not believed to be achievable with current computer hardware. Comment: 16 pages, final version.
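    The return probabilities studied here can be illustrated by direct simulation on the ordinary three-dimensional fcc lattice (the paper's exact results concern dimensions four to six and use computer algebra, not simulation). A random walk steps to one of the 12 nearest neighbours; truncating walks at a finite length gives a Monte Carlo lower-bound estimate of the true return probability:

```python
import itertools
import random

# The 12 nearest-neighbour steps of the 3D fcc lattice:
# exactly two coordinates are +/-1 and one is 0.
STEPS = [v for v in itertools.product((-1, 0, 1), repeat=3)
         if sum(abs(c) for c in v) == 2]
assert len(STEPS) == 12

def estimate_return_probability(walks=10000, max_steps=200, seed=1):
    """Monte Carlo estimate of the probability that a simple random walk
    on the 3D fcc lattice returns to the origin within max_steps steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        x = y = z = 0
        for _ in range(max_steps):
            dx, dy, dz = rng.choice(STEPS)
            x, y, z = x + dx, y + dy, z + dz
            if x == y == z == 0:
                returned += 1
                break
    return returned / walks

print(estimate_return_probability())
```

    Since the walk is transient in three and more dimensions, the return probability is strictly below one; the LGF evaluated at the origin gives it exactly, which is what the ODEs in the paper make computable in higher dimensions.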

    Resource allocation policies for service provisioning systems

    This thesis is concerned with maximising the hosting efficiency of service provisioning systems consisting of clusters or networks of servers. The tools employed are those of probabilistic modelling, optimization and simulation. First, a system where the servers in a cluster may be switched dynamically and preemptively from one kind of work to another is examined. The demand consists of two job types joining separate queues, with different arrival and service characteristics, and also different relative importance represented by appropriate holding costs. Switching a server from queue i to queue j incurs a cost, which may be monetary or may involve a period of unavailability. The optimal switching policy is obtained numerically by solving a dynamic programming equation. Two heuristic policies, one static and one dynamic, are evaluated by simulation and compared to the optimal policy. The dynamic heuristic is shown to perform well over a range of parameters, including changes in demand. The model, analysis and evaluation are then generalized to an arbitrary number, M, of job types.

    Next, the problem of how best to structure and control a distributed computer system containing many processors is considered. The performance trade-offs associated with different tree structures are evaluated approximately by applying appropriate queueing models. It is shown that, for a given set of parameters and job distribution policy, there is an optimal tree structure that minimizes the overall average response time. This is obtained numerically through comparison of average response times. A simple heuristic policy is shown to perform well under certain conditions.

    The last model addresses the trade-offs between reliability and performance. A number of servers, each of which goes through alternating periods of being operative and inoperative, offer services to an incoming stream of demands. The objective is to evaluate and optimize performance and cost metrics. A large real-life data set containing information about server breakdowns is analyzed first. The results indicate that the durations of the operative periods are not distributed exponentially. However, hyperexponential distributions are found to be a good fit for the observed data. A model based on these distributions is then formulated, and is solved exactly using the method of spectral expansion. A simple approximation which is accurate for heavily loaded systems is also proposed. The results of a number of numerical experiments are reported.
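    The finding that operative periods are hyperexponential rather than exponential amounts to observing a coefficient of variation (CV) above one, which a single exponential cannot produce. A minimal sketch, with hypothetical parameters rather than the thesis's fitted values, mixes frequent short periods with occasional very long ones:

```python
import random
import statistics

def sample_hyperexp(p, rate1, rate2, rng):
    """Two-phase hyperexponential: with probability p draw Exp(rate1),
    otherwise Exp(rate2)."""
    rate = rate1 if rng.random() < p else rate2
    return rng.expovariate(rate)

rng = random.Random(42)
# Hypothetical mix: 90% short operative periods (mean 1), 10% long ones (mean 20)
data = [sample_hyperexp(0.9, 1.0, 0.05, rng) for _ in range(100000)]
mean = statistics.fmean(data)
cv = statistics.pstdev(data) / mean
print(f"mean={mean:.2f}  CV={cv:.2f}")
```

    An exponential distribution always has CV exactly 1, so an empirical CV well above 1, as here, is the signature that motivates a hyperexponential fit.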

    Non-acyclicity of coset lattices and generation of finite groups


    Transient Analysis of Large-scale Stochastic Service Systems

    The transient analysis of large-scale systems is often difficult even when the systems belong to the simplest M/M/n type of queues. To address analytical difficulties, previous studies have been conducted under various asymptotic regimes by suitably accelerating parameters, thereby establishing some useful mathematical frameworks and giving insights into important characteristics and intuitions. However, some studies show significant limitations when used to approximate real service systems: (i) they are more relevant to steady-state analysis; (ii) they emphasize proofs of convergence results rather than numerical methods to obtain system performance; and (iii) they provide only one set of limit processes regardless of actual system size. Attempting to overcome the drawbacks of previous studies, this dissertation studies the transient analysis of large-scale service systems with time-dependent parameters. The research goal is to develop a methodology that provides accurate approximations based on a technique called uniform acceleration, utilizing the theory of strong approximations. We first investigate and discuss the possible inaccuracy of limit processes obtained from employing the technique. As a solution, we propose adjusted fluid and diffusion limits that are specifically designed to approximate large, finite-sized systems. We find that the adjusted limits significantly improve the quality of approximations and hold asymptotic exactness as well. Several numerical results provide evidence of the effectiveness of the adjusted limits. We study both a call center, which is a canonical example of large-scale service systems, and an emerging peer-based Internet multimedia service network known as P2P. Based on our findings, we introduce a possible extension to systems which show non-Markovian behavior that is unaddressed by the uniform acceleration technique. We incorporate the denseness of phase-type distributions into the derivation of limit processes. The proposed method offers great potential to accurately approximate performance measures of non-Markovian systems with less computational burden.
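    The fluid (law-of-large-numbers) limit underlying this kind of analysis can be sketched for an M(t)/M/n queue: the scaled number in system q(t) satisfies dq/dt = lambda(t) - mu * min(q, n). The sinusoidal demand profile and parameters below are hypothetical, chosen only so the system is temporarily overloaded, as in a call centre's daily peak:

```python
import math

def fluid_limit(lam, mu, n, q0=0.0, t_end=24.0, dt=0.001):
    """Euler integration of the deterministic fluid approximation of an
    M(t)/M/n queue: min(q, n) servers are busy at fluid level q."""
    q, t = q0, 0.0
    path = [(t, q)]
    for _ in range(int(t_end / dt)):
        q += dt * (lam(t) - mu * min(q, n))
        q = max(q, 0.0)
        t += dt
        path.append((t, q))
    return path

# Hypothetical sinusoidal demand over a 24-hour cycle; peak rate 120 exceeds
# total service capacity n * mu = 100, so a backlog builds during the peak.
lam = lambda t: 90.0 + 30.0 * math.sin(2 * math.pi * t / 24.0)
path = fluid_limit(lam, mu=1.0, n=100)
print(max(q for _, q in path))
```

    During the overloaded hours the fluid level climbs well above the number of servers and then drains once demand falls back below capacity; the dissertation's adjusted limits refine exactly this kind of deterministic path for large but finite systems.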

    On the throughput optimization in large-scale batch-processing systems

    We analyse a data-processing system with clients producing jobs which are processed in batches by parallel servers; the system throughput critically depends on the batch size and a corresponding sub-additive speedup function. In practice, throughput optimization relies on numerical searches for the optimal batch size, a process that can take multiple days in existing commercial systems. In this paper, we model the system in terms of a closed queueing network; a standard Markovian analysis yields the optimal throughput in time. Our main contribution is a mean-field model of the system for the regime where the system size is large. We show that the mean-field model has a unique, globally attractive stationary point which can be found in closed form and which characterizes the asymptotic throughput of the system as a function of the batch size. Using this expression we find the asymptotically optimal throughput in time. Numerical settings from a large commercial system reveal that this asymptotic optimum is accurate in practical finite regimes.
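    The batch-size trade-off can be illustrated with a toy model; everything below (the assembly-time formula, the speedup exponent, and all parameters) is a hypothetical sketch, not the paper's closed-queueing-network model. Larger batches exploit the sub-additive speedup k**alpha but take longer to assemble when fewer clients are free, so throughput peaks at an interior batch size that a simple numerical scan finds:

```python
def throughput(k, n_clients=50, think_rate=1.0, overhead=2.0, alpha=0.7):
    """Toy closed-system throughput for batch size k (1 <= k < n_clients):
    assembling a batch takes k / (think_rate * (n_clients - k)) time units
    (fewer free clients means slower assembly), and serving it takes
    overhead + k / k**alpha, where k**alpha is a sub-additive speedup."""
    assemble = k / (think_rate * (n_clients - k))
    serve = overhead + k / k ** alpha
    return k / (assemble + serve)

# Brute-force search over batch sizes, the kind of scan the paper's
# closed-form mean-field expression makes unnecessary.
best_k = max(range(1, 50), key=throughput)
print(best_k, round(throughput(best_k), 3))
```

    Tiny batches waste the fixed overhead and forgo the speedup, while batches near the client population starve the server during assembly; the optimum lies strictly in between, mirroring the unimodal throughput curve the paper characterizes in closed form.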