Optimisation of server selection for maximising utility in Erlang-loss systems
This paper addresses the server selection problem in Erlang-loss systems (ELS). We propose a novel approach that incorporates probabilistic modelling to reflect the practical scenario in which user arrivals vary over time. The proposed framework comprises three stages: i) developing a new method for server selection based on the M/M/n/n queuing model with probabilistic arrivals; ii) combining the server allocation results with further research on utility-maximising server selection to optimise system performance; and iii) designing a heuristic approach to solve the resulting optimisation problem efficiently. Simulation results show that, by using this framework, Internet Service Providers (ISPs) can significantly improve QoS, and hence revenue, through optimal server allocation in their data centre networks.
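The M/M/n/n model referred to above is the classical Erlang-B setting. As a concrete illustration (a minimal sketch of the underlying blocking model, not the paper's probabilistic-arrival framework), the blocking probability and a hypothetical server-selection helper can be written as:

```python
def erlang_b(n, a):
    """Blocking probability of an M/M/n/n (Erlang-loss) system with n
    servers and offered load a Erlangs, via the numerically stable
    recurrence B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def min_servers(a, target):
    """Smallest server count whose blocking probability meets a QoS
    target -- a hypothetical helper illustrating server selection."""
    n = 1
    while erlang_b(n, a) > target:
        n += 1
    return n
```

For example, with an offered load of 1 Erlang, five servers suffice to keep blocking below 1%.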
Just Queuing: Policy-Based Scheduling Mechanism for Packet Switching Networks
The pervasiveness of the Internet and its applications has led to growing user demand for more services at economical prices. The diversity of Internet traffic requires classification and prioritisation, since some traffic deserves more attention, with less delay and loss, than others. Current scheduling mechanisms are exposed to a trade-off between three major properties, namely fairness, complexity and protection. The question therefore remains how to improve fairness and protection with a less complex implementation. This research is designed to enhance the scheduling mechanism by sustaining the fairness and protection properties with simplicity of implementation, and hence higher service quality, particularly for real-time applications. Extra elements are applied to the main fairness equation to improve the fairness property. This research adopts the restricted-charge policy, which enforces the protection of normal users. In terms of the complexity property, the genetic algorithm has the advantage of holding the fitness score of each queue in separate storage space, which potentially minimises the complexity of the algorithm. The integration of conceptual, analytical and experimental approaches verifies the efficiency of the proposed mechanism. The mechanism is validated by emulation, and the validation experiments involve real router flow data. The evaluation results showed fair bandwidth distribution similar to the popular Weighted Fair Queuing (WFQ) mechanism. Furthermore, better protection was exhibited in the results compared with WFQ and two other scheduling mechanisms. The complexity of the proposed mechanism reached O(log(n)), which is considered potentially low. The mechanism is limited to wired networks, and hence future work could adapt it to mobile ad-hoc networks or other wireless networks.
Moreover, further improvements could enhance its deployment in virtual-circuit switching networks such as asynchronous transfer mode (ATM) networks.
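The WFQ baseline that the abstract compares against, together with the quoted O(log(n)) complexity class, can be sketched as follows. This is an illustrative plain-WFQ scheduler built on a heap, not the thesis's genetic-algorithm mechanism; the class and method names are my own:

```python
import heapq

class WFQScheduler:
    """Minimal weighted-fair-queueing sketch: each arriving packet gets
    a virtual finish time F = max(V, F_prev) + size/weight, and the
    packet with the smallest F is served next. The heap gives O(log n)
    enqueue/dequeue, the complexity class quoted in the abstract."""

    def __init__(self):
        self.heap = []       # (finish_time, seq, flow_id, size)
        self.finish = {}     # last finish time per flow
        self.vtime = 0.0     # system virtual time
        self.seq = 0         # tie-breaker so tuples never compare ids

    def enqueue(self, flow_id, size, weight):
        start = max(self.vtime, self.finish.get(flow_id, 0.0))
        f = start + size / weight
        self.finish[flow_id] = f
        heapq.heappush(self.heap, (f, self.seq, flow_id, size))
        self.seq += 1

    def dequeue(self):
        f, _, flow_id, size = heapq.heappop(self.heap)
        self.vtime = f
        return flow_id, size
```

A flow with twice the weight finishes equal-sized packets at half the virtual time, so it is served first: the fairness property the evaluation measures.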
Job-shop scheduling with approximate methods
Stochastic scheduling and workload allocation : QoS support and profitable brokering in computing grids
Abstract: The Grid can be seen as a collection of services, each of which performs some functionality. Users of the Grid seek to use combinations of these services to perform the overall task they need to achieve. In general this can be seen as a set of services with a workflow document describing how these services should be combined. The user may also have certain constraints on the workflow operations, such as execution time or cost to the user, specified in the form of a Quality of Service (QoS) document. The users submit their workflow to a brokering service along with the QoS document. The brokering service's task is to map any given workflow to a subset of the Grid services, taking the QoS and the state of the Grid (service availability and performance) into account. We propose an approach for generating constraint equations describing the workflow, the QoS requirements and the state of the Grid. This set of equations may be solved using Mixed-Integer Linear Programming (MILP), which is the traditional method. We further develop a novel 2-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and adapting the selection of services during the lifetime of the workflow. We present experimental results comparing our approaches, showing that the 2-stage stochastic programming approach performs consistently better than other traditional approaches. Next we address workload allocation techniques for Grid workflows in a multi-cluster Grid. We model individual clusters as M/M/k queues and obtain a numerical solution for missed deadlines (failures) of tasks of Grid workflows. We also present an efficient algorithm for obtaining workload allocations of clusters. Next we model individual cluster resources as G/G/1 queues and solve an optimisation problem that minimises QoS requirement violations, provides QoS guarantees and outperforms reservation-based scheduling algorithms.
Both approaches are evaluated through experimental simulation, and the results confirm that the proposed workload allocation strategies, combined with traditional scheduling algorithms, perform considerably better in terms of satisfying the QoS requirements of Grid workflows than scheduling algorithms that do not employ such workload allocation techniques. Next we develop a novel method for Grid brokers that aims at maximising profit whilst satisfying end-user needs with a sufficient guarantee in a volatile utility Grid. We develop a 2-stage stochastic MILP which is capable of dealing with the volatile nature of the Grid and obtaining cost bounds that ensure that end-user cost is minimised or satisfied and the broker's profit is maximised with sufficient guarantee. These bounds help brokers know beforehand whether the budget limits of end-users can be satisfied and, if not, to obtain appropriate future leases from service providers. Experimental results confirm the efficacy of our approach.
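The M/M/k cluster model admits a standard numerical route to deadline misses. As a sketch (using the Erlang-C waiting-time tail as the miss probability, which is my simplifying assumption rather than the thesis's exact formulation):

```python
import math

def erlang_c(k, a):
    """Erlang-C formula: probability that an arrival must wait in an
    M/M/k queue with offered load a = lam/mu, requiring a < k."""
    s = sum(a**j / math.factorial(j) for j in range(k))
    last = a**k / (math.factorial(k) * (1 - a / k))
    return last / (s + last)

def p_wait_exceeds(lam, mu, k, t):
    """P(waiting time > t) = C(k, a) * exp(-(k*mu - lam) * t) for the
    M/M/k queue -- used here as a proxy for the probability that a
    task misses a deadline of t."""
    a = lam / mu
    return erlang_c(k, a) * math.exp(-(k * mu - lam) * t)
```

For k = 1 this reduces to the familiar M/M/1 result P(wait) = lam/mu, a quick sanity check on the formula.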
Revenue maximization problems in commercial data centers
PhD Thesis
As IT systems become more important every day, one of the main concerns is that users may
face major problems and eventually incur major costs if computing systems do not meet the expected
performance requirements: customers expect reliability and performance guarantees, while
underperforming systems lose revenue. Even with the adoption of data centers as the hub of
IT organizations and providers of business efficiencies, the problems are not over, because it is extremely
difficult for service providers to meet the promised performance guarantees in the face of
unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs),
contracts that specify a level of performance that must be met and compensations in case of failure.
In this thesis I will address some of the performance problems arising when IT companies sell
the service of running ‘jobs’ subject to Quality of Service (QoS) constraints. In particular, the aim
is to improve the efficiency of service provisioning systems by allowing them to adapt to changing
demand conditions.
First, I will define the problem in terms of a utility function to maximize. Two different models
are analyzed: one for single jobs and one suited to session-based traffic. Then,
I will introduce an autonomic model for service provision. The architecture consists of a set of
hosted applications that share a certain number of servers. The system collects demand and performance
statistics and estimates traffic parameters. These estimates are used by management policies
which implement dynamic resource allocation and admission algorithms. Results from a number of
experiments show that the performance of these heuristics is close to optimal.
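The dynamic resource allocation step described above can be illustrated with a minimal greedy sketch, assuming concave per-application utility functions (the thesis's actual policies also use estimated traffic parameters and admission control; the function name below is my own):

```python
def greedy_allocate(total_servers, utilities):
    """Greedy marginal-gain allocation: repeatedly hand the next server
    to the hosted application whose utility increases the most. This
    greedy rule is optimal when each utility is concave in its server
    count. `utilities` is a list of callables u_i(n)."""
    alloc = [0] * len(utilities)
    for _ in range(total_servers):
        # marginal utility of one more server for each application
        gains = [u(alloc[i] + 1) - u(alloc[i]) for i, u in enumerate(utilities)]
        best = max(range(len(gains)), key=gains.__getitem__)
        alloc[best] += 1
    return alloc
```

With two applications whose utilities are 10*sqrt(n) and 5*sqrt(n), three servers split 2/1: the higher-revenue application gets more capacity, but diminishing returns still hand one server to the other.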
Generalised Radio Resource Sharing Framework for Heterogeneous Radio Networks
Recent years have seen a significant interest in quantitative measurements of licensed
and unlicensed spectrum use. Several research groups, companies and regulatory bodies
have conducted studies of varying times and locations with the aim of capturing the
overall utilisation rate of spectrum. These studies have shown that large amounts of allocated
spectrum are under-utilised, creating so-called "spectrum holes" and resulting in a
waste of valuable frequency resources. To satisfy the increasing demand for
spectrum resources and to improve spectrum utilisation, dynamic spectrum
sharing (DSS) is proposed in the literature along with cognitive radio networks (CRNs).
DSS and CRNs have been studied from many perspectives, for example spectrum sensing
to identify idle channels has been under the microscope to improve detection probability.
As well as spectrum sensing, DSS performance analysis remains an important
topic in moving towards better spectrum utilisation to meet the exponential growth of
traffic demand. In this dissertation we have studied both techniques to achieve different
objectives, such as enhancing the probability of detection and spectrum utilisation.
In order to improve spectrum sensing decisions, we have proposed a cooperative spectrum
sensing scheme which takes propagation conditions into consideration. The
proposed location-aware scheme shows improved performance over the conventional hard
combination scheme, highlighting the requirement for location awareness in cognitive
radio networks (CRNs).
Due to exponentially growing wireless applications and services, traffic demand is
increasing rapidly. To cope with such growth, wireless network operators seek radio
resource cooperation strategies for their users with the highest possible grade of service
(GoS). However, it is difficult to fathom the potential benefits of such cooperation, thus
we propose a set of analytical models for DSS to analyse the blocking probability gain and
degradation for operators. The thesis focuses on examining the performance gains that
DSS can entail, in different scenarios. A number of dynamic spectrum sharing scenarios
are proposed. The proposed models focus on measuring the blocking probability of
secondary network operators as a trade-off against a marginal increase in the blocking
probability of a primary network in return for monetary rewards. We derived the global
balance equation and an explicit expression of the blocking probability for each model.
The robustness of the proposed analytical models is evaluated under different scenarios
by considering varying traffic intensities, different network sizes and adding reserved
resources (or pooled capacity). The results show that the blocking probabilities can
be reduced significantly with the proposed analytical DSS models in comparison to the
existing local spectrum access schemes.
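As a toy instance of such a comparison (a complete-sharing loss model, not one of the thesis's specific DSS scenarios), the stationary distribution of two operators pooling C channels is product-form, so blocking can be computed by direct enumeration rather than by solving the global balance equations numerically:

```python
import math

def sharing_blocking(C, a1, a2):
    """Blocking probability when two operators completely share C
    channels (each call occupies one channel). The loss network has
    the product-form stationary distribution
        pi(i, j)  proportional to  a1**i / i! * a2**j / j!,  i + j <= C,
    and both classes are blocked exactly in states with i + j = C."""
    norm = 0.0   # normalisation constant over all feasible states
    full = 0.0   # probability mass on fully occupied states
    for i in range(C + 1):
        for j in range(C - i + 1):
            p = a1**i / math.factorial(i) * a2**j / math.factorial(j)
            norm += p
            if i + j == C:
                full += p
    return full / norm
```

With C = 2 pooled channels and one Erlang per operator, blocking is 0.4, versus 0.5 when each operator keeps a single dedicated channel (Erlang-B with n = 1, a = 1): the kind of gain over local spectrum access that the proposed models quantify.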
In addition to the sharing models, we further assume that the secondary operator aims
to borrow spectrum bandwidth from primary operators when more spectrum resources are
available for borrowing than the actual demand, considering a merchant mode. Two
optimisation models are proposed using stochastic optimisation, in which the secondary operator (i) spends the minimum amount of money to achieve the target
GoS assuming an unrestricted budget, or (ii) gains the maximum amount of profit to
achieve the target GoS assuming a restricted budget. Results obtained from each model
are then compared with results derived from algorithms in which spectrum borrowings
were random. The comparisons showed that the gain obtained from our proposed
stochastic optimisation model is significantly higher than that of the heuristic counterparts.
A post-optimisation performance analysis of the operators, in the form of an analysis of
blocking probability in various scenarios, is carried out to determine the probable
performance gain and degradation of the secondary and primary operators, respectively.
We mathematically model the sharing-agreement scenario and derive the closed-form
solution of the blocking probabilities for each operator. Results show how the secondary
and primary operators perform in terms of blocking probability under various offered
loads and sharing capacity.
The simulation results demonstrate that in most trading windows, the proposed optimal
algorithms outperform their heuristic counterparts. When we consider 80 cells,
the proposed profit maximisation algorithm results in a 33.3% gain in net profit for the
secondary operators as well as facilitating 2.35% more resources than the heuristic
approach. In addition, the cost minimisation algorithm results in a 46.34% gain over the
heuristic algorithm when considering the same number of cells (80).
On the development life cycle of distributed functional applications: a case study
[Abstract] In a world where technology plays a major and ever-increasing role,
efforts devoted to developing better software are never too great. Both industry
and academia are well aware of this, and keep working to face the new
problems and challenges that arise, more efficiently and effectively each time.
Companies show their interest in cutting-edge methods, techniques, and tools,
especially when they are backed up with empirical results that show practical
benefits. On the other hand, academia is more than ever aware of real-world
problems, and it is succeeding in connecting its research efforts to actual case
studies.
This thesis follows this trend, as it presents a study of software
application development based on a real case. As its main novelty and contribution,
the integral process of software development is addressed from the
functional paradigm point of view. In contrast with the traditional imperative
paradigm, the functional paradigm represents not only a different way of developing
applications, but also a distinct manner of thinking about software
itself. This work goes through the characteristics and properties that functional
technology gives to both software and its development process, from
the early analysis and design development phases, up to the final and no less
critical verification and validation stages. In particular, the strengths and opportunities
that emerge in the broad field of testing, thanks to the use of the
functional paradigm, are explored in depth.
From the analysis of this process being put into practice in a real software
development experience, we draw conclusions about the suitability of applying
a functional approach to complex domains. At the same time, we extract
a reusable engineering methodology for doing so.
- …