Separation of timescales in a two-layered network
We investigate a computer network consisting of two layers occurring in, for
example, application servers. The first layer incorporates the arrival of jobs
at a network of multi-server nodes, which we model as a many-server Jackson
network. At the second layer, active servers at these nodes now act as
customers who are served by a common CPU. Our main result shows a separation of
time scales in heavy traffic: the main source of randomness occurs at the
(aggregate) CPU layer; the interactions between different types of nodes at the
other layer are shown to converge to a fixed point at a faster time scale; this
also yields a state-space collapse property. Apart from these fundamental
insights, we also obtain an explicit approximation for the joint law of the
number of jobs in the system, which is provably accurate for heavily loaded
systems and performs numerically well for moderately loaded systems. The
obtained results for the model under consideration can be applied to
thread-pool dimensioning in application servers, while the technique seems
applicable to other layered systems too.
Comment: 8 pages, 2 figures, 1 table, ITC 24 (2012)
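The model described above is concrete enough to prototype. The sketch below is a minimal simulation of a single node of such a two-layer system, not the paper's model or analysis: jobs arrive at a node with K servers, and the active servers share one CPU via processor sharing. All parameters (lam, mu, K, horizon) are made-up illustrative values; the script reports the time-average number of jobs in the system, the kind of quantity the heavy-traffic approximation targets.

import random

def simulate(lam=5.0, mu=6.0, K=10, horizon=10_000.0, seed=1):
    """Time-average number of jobs in a toy two-layer node (illustrative only)."""
    rng = random.Random(seed)
    t, waiting, area = 0.0, 0, 0.0
    active = []                          # remaining CPU work of in-service jobs
    next_arrival = rng.expovariate(lam)

    while t < horizon:
        # Layer 2: active servers share the CPU equally, so each receives
        # rate mu / len(active) and the job with the least work departs first.
        rate = mu / len(active) if active else 0.0
        t_depart = min(active) / rate if active else float("inf")
        is_arrival = (next_arrival - t) <= t_depart
        dt = (next_arrival - t) if is_arrival else t_depart

        area += (waiting + len(active)) * dt       # integrate number in system
        active = [w - rate * dt for w in active]   # deplete work at shared rate
        t += dt
        if t >= horizon:
            break

        if is_arrival:
            # Layer 1: a new job takes a free server, or waits if all K are busy.
            if len(active) < K:
                active.append(rng.expovariate(1.0))   # unit-mean CPU work
            else:
                waiting += 1
            next_arrival = t + rng.expovariate(lam)
        else:
            # A departure: drop the finished job and admit a waiting one, if any.
            active = [w for w in active if w > 1e-12]
            if waiting and len(active) < K:
                waiting -= 1
                active.append(rng.expovariate(1.0))

    return area / horizon

if __name__ == "__main__":
    print("time-average number of jobs:", round(simulate(), 2))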
The evaluation of computer performance by means of state-dependent queueing network models
Construction of a support tool for the design of the activity structures based computer system architectures
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This thesis is a rapprochement of diverse design concepts, brought to bear upon the computer system
engineering problem of identification and control of highly constrained multiprocessing (HCM)
computer machines. It contributes to the area of meta/general systems methodology, and brings
new insight into the design formalisms and results afforded by bringing together various design
concepts that can be used for the construction of highly constrained computer system architectures.
A unique point of view is taken by assuming the process of identification and control of HCM
computer systems to be the process generated by the Activity Structures Methodology (ASM).
Research in ASM emerged from neuroscience research, aiming to provide
techniques for combining the diverse knowledge sources that capture the 'deep knowledge' of this
application field in an effective, formal, and computer-representable form. To apply the ASM design
guidelines in the realm of distributed computer system design, we provide new design definitions
for the identification and control of such machines in terms of realisations. These realisation definitions
characterise the various classes of the identification and control problem. The classes covered
consist of:
1. the identification of the designer activities,
2. the identification and control of the machine's distributed structures of behaviour,
3. the identification and control of the conversational environment activities (i.e. the randomised/
adaptive activities and interactions of both the user and the machine environments),
4. the identification and control of the substrata needed for the realisation of the machine, and
5. the identification of the admissible design data, both user-oriented and machine-oriented,
that can force the conversational environment to act in a self-regulating
manner.
All extant results are considered in this context, allowing the development of both necessary
conditions for machine identification, in terms of the distributed behaviours and substrata
structures of the unknown machine, and sufficient conditions, in terms of experiments on the unknown
machine, for achieving self-regulating behaviour.
We provide a detailed description of the design and implementation of the support software tool
which can be used for aiding the process of constructing effective HCM computer systems, based
on various classes of identification and control. The design data of a highly constrained system, the
NUKE, are used to verify the tool logic as well as the various identification and control procedures.
Possible extensions, as well as future work implied by the results, are considered.
Power series approximations for two-class generalized processor sharing systems
We develop power series approximations for a discrete-time queueing system with two parallel queues and one processor. If both queues are nonempty, a customer of queue 1 is served with probability beta, and a customer of queue 2 is served with probability 1-beta. If one of the queues is empty, a customer of the other queue is served with probability 1. We first describe the generating function U(z_1, z_2) of the stationary queue lengths in terms of a functional equation, and show how to solve this using the theory of boundary value problems. Then, we propose to use the same functional equation to obtain a power series for U(z_1, z_2) in beta. The first coefficient of this power series corresponds to the priority case beta=0, which allows for an explicit solution. All higher coefficients are expressed in terms of the priority case. Accurate approximations for the mean stationary queue lengths are obtained by combining truncated power series and Padé approximation.
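As a rough illustration of the service discipline described above (not the paper's analysis), the short simulation below estimates the mean stationary queue lengths directly. The arrival probabilities a1 and a2, the weight beta, and the within-slot ordering (serve first, then admit arrivals) are illustrative assumptions, so the numbers it prints are only indicative of what the truncated power series plus Padé approximation reproduces analytically.

import random

def gps_two_queue(a1=0.3, a2=0.4, beta=0.6, slots=500_000, seed=7):
    """Estimate mean queue lengths under the two-class GPS rule sketched above."""
    rng = random.Random(seed)
    q1 = q2 = 0
    sum1 = sum2 = 0
    for _ in range(slots):
        # One service opportunity per slot: weighted choice if both queues
        # are nonempty, otherwise serve whichever queue has customers.
        if q1 and q2:
            if rng.random() < beta:
                q1 -= 1
            else:
                q2 -= 1
        elif q1:
            q1 -= 1
        elif q2:
            q2 -= 1
        # Bernoulli arrivals at the end of the slot.
        if rng.random() < a1:
            q1 += 1
        if rng.random() < a2:
            q2 += 1
        sum1 += q1
        sum2 += q2
    return sum1 / slots, sum2 / slots

if __name__ == "__main__":
    m1, m2 = gps_two_queue()
    print(f"mean queue 1 ~ {m1:.2f}, mean queue 2 ~ {m2:.2f}")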
Performance controls for distributed telecommunication services
As the Internet and Telecommunications domains merge, open telecommunication service architectures such as TINA, PARLAY and PINT are becoming prevalent. Distributed Computing is a common engineering component in these technologies and promises to bring improvements to the scalability, reliability and flexibility of telecommunications service delivery systems. This distributed approach to service delivery introduces new performance concerns. As service logic is decomposed into software components and distributed across network resources, significant additional resource loading is incurred due to inter-node communications. This fact makes the choice of distribution of components in the network and the distribution of load between these components critical design and operational issues which must be resolved to guarantee a high level of service for the customer and a profitable network for the service operator.
Previous research in the computer science domain has addressed optimal placement of components from the perspectives of minimising run time, minimising communications costs or balancing of load between network resources. This thesis proposes a more extensive optimisation model which, we argue, is more useful for addressing concerns pertinent to the telecommunications domain. The model focuses on providing optimal throughput and profitability of network resources and on overload protection, whilst allowing flexibility in terms of the cost of installation of component copies and differentiation in the treatment of service types, in terms of fairness to the customer and profitability to the operator. Both static (design-time) component distribution and dynamic (run-time) load distribution algorithms are developed using Linear and Mixed Integer Programming techniques. An efficient, but sub-optimal, run-time solution, employing Market-based control, is also proposed.
The performance of these algorithms is investigated using a simulation model of a distributed service platform, which is based on TINA service components interacting with the Intelligent Network through gateways. Simulation results are verified using Layered Queuing Network analytic modelling. Results show significant performance gains over simpler methods of performance control and demonstrate how trade-offs in network profitability, fairness and network cost are possible.
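To make the optimisation flavour concrete, here is a toy linear program in the spirit of, but far simpler than, the thesis's model: it splits service load across nodes to maximise revenue-weighted throughput subject to per-node capacity and per-type demand. All node counts, costs, capacities and revenues are invented for illustration, and scipy.optimize.linprog merely stands in for whatever LP/MIP solver the thesis used.

import numpy as np
from scipy.optimize import linprog

# Toy instance (all numbers hypothetical): x[i, j] = rate of service type j
# handled at node i.  Maximise revenue-weighted throughput subject to node
# processing capacity and offered demand per service type.
n_nodes, n_types = 3, 2
revenue  = np.array([1.0, 2.5])          # profit per served request, per type
cost     = np.array([[1.0, 1.8],         # processing cost per request of each
                     [1.2, 1.5],         # type at each node
                     [0.9, 2.0]])
capacity = np.array([100.0, 80.0, 120.0])    # per-node processing budget
demand   = np.array([60.0, 50.0])            # offered load per service type

# Decision vector x is flattened row-major: x[i * n_types + j].
c = -np.tile(revenue, n_nodes)               # linprog minimises, so negate

# Capacity constraints: sum_j cost[i, j] * x[i, j] <= capacity[i]
A_cap = np.zeros((n_nodes, n_nodes * n_types))
for i in range(n_nodes):
    A_cap[i, i * n_types:(i + 1) * n_types] = cost[i]

# Demand constraints: sum_i x[i, j] <= demand[j]
A_dem = np.zeros((n_types, n_nodes * n_types))
for j in range(n_types):
    A_dem[j, j::n_types] = 1.0

res = linprog(c,
              A_ub=np.vstack([A_cap, A_dem]),
              b_ub=np.concatenate([capacity, demand]),
              bounds=(0, None))
print("carried load per (node, type):")
print(res.x.reshape(n_nodes, n_types).round(1))
print("total revenue:", round(-res.fun, 1))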
Performance measurement and analysis of large filestores
PhD thesis. Performance measurements of two large time-sharing computer systems
are presented, with emphasis on their disk filestores. Similarities
of process behaviour are found in the measured systems and another system
reported in the literature. Individual processes make i/o requests in
sequences, or bursts. Burst lengths have a mean of two with a large
variance; within a burst, file i/o requests are spatially sequential in
intent and are temporally related.
Characterizations of these behaviour patterns form the basis of a
methodology for filestore evaluation and design. Descriptions of spatial
and temporal load are abstracted from software traces without loss
of any performance factor; these descriptions are inputs to a statistical
model of the processes in the environment of the filestore. The filestore
is represented by a simulation queuing model. The method specifies the
inputs to the composite model and describes the calibration of outputs
to match observable outputs. A model is built by this method, and validated
for different loads.
The model is used for three evaluation experiments. The effect of disk request scheduling
is not statistically significant; the effects of filestore layout and disk capacity are
highly significant; disks with fast-access areas are shown to improve
performance by taking advantage of spatial accessing patterns. The limits
of performance of a novel filestore equipped with a cache store are
explored to determine guidelines for this new design. Modest improvements
resulting from this design are shown to produce a considerable improvement
in overall system performance.
The Science Research Council; The University of Newcastle upon Tyne
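The workload characterisation above (bursts with a mean length of two and spatially sequential requests within a burst) lends itself to a simple synthetic trace generator. The sketch below is illustrative only: it uses geometric burst lengths as one convenient choice matching the reported mean of two (the thesis reports a large variance, which a heavier-tailed choice would capture better), and the block count and other parameters are arbitrary.

import random

def synthetic_io_trace(n_bursts=10_000, p_end=0.5, n_blocks=1 << 20, seed=3):
    """Generate a toy file-i/o trace with the burst structure described above:
    geometric burst lengths (mean 1 / p_end = 2) and spatially sequential
    block addresses within each burst.  Parameters are illustrative, not
    taken from the thesis."""
    rng = random.Random(seed)
    trace = []
    for _ in range(n_bursts):
        start = rng.randrange(n_blocks)       # each burst starts at a random block
        length = 1
        while rng.random() >= p_end:          # geometric burst length, mean 2
            length += 1
        trace.extend(start + k for k in range(length))   # sequential within burst
    return trace

if __name__ == "__main__":
    print(synthetic_io_trace(n_bursts=5)[:12])   # a few short sequential runs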
Model-driven development of data intensive applications over cloud resources
The proliferation of sensors in recent years has generated large amounts of raw data, forming data streams that need to be processed. In many cases, cloud resources are used for such processing, exploiting their flexibility, but these sensor streaming applications often need to support operational and control actions with real-time and low-latency requirements that go beyond the cost-effective and flexible solutions supported by existing cloud frameworks, such as Apache Kafka, Apache Spark Streaming, or Map-Reduce Streams. In this paper, we describe a model-driven and stepwise-refinement methodological approach for streaming applications executed over clouds. The central role is assigned to a set of Petri net models for specifying functional and non-functional requirements. They support model reuse, and a way to combine formal analysis, simulation, and approximate computation of minimal and maximal bounds on non-functional requirements when the problem is either mathematically or computationally intractable. We show how our proposal can assist developers in their design and implementation decisions from a performance perspective. Our methodology supports performance analysis across all stages of the engineering process: we can (i) analyse how an application can be mapped onto cloud resources, and (ii) obtain key performance indicators, including throughput or economic cost, so that developers are assisted in their development tasks and in their decision making. To illustrate our approach, we make use of the pipelined wavefront array.
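To make the Petri-net angle concrete, the following toy token-game simulation models a two-stage streaming pipeline (ingest, then process, with a small pool of CPU tokens). The net, its initial marking and the random firing policy are illustrative assumptions, not the paper's models or its analysis and simulation machinery.

import random

# Net structure: each transition consumes one token from every input place
# and produces one token in every output place (all weights are 1).
places = {"raw": 5, "buffer": 0, "done": 0, "cpu": 2}    # initial marking
transitions = {
    "ingest":  {"in": ["raw"],           "out": ["buffer"]},
    "process": {"in": ["buffer", "cpu"], "out": ["done", "cpu"]},
}

def enabled(t):
    return all(places[p] > 0 for p in transitions[t]["in"])

def fire(t):
    for p in transitions[t]["in"]:
        places[p] -= 1
    for p in transitions[t]["out"]:
        places[p] += 1

rng = random.Random(0)
firings = 0
while any(enabled(t) for t in transitions):
    fire(rng.choice([t for t in transitions if enabled(t)]))
    firings += 1

print("final marking:", places, "after", firings, "firings")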
A system for the simulation of hardware to software allocation and performance evaluation