Detecting Markov Chain Instability: A Monte Carlo Approach
We devise a Monte Carlo based method for detecting whether a non-negative Markov chain is stable for a given set of parameter values. More precisely, for a given subset of the parameter space, we develop an algorithm that can decide whether the set has a subset of positive Lebesgue measure for which the Markov chain is unstable. The approach is based on a variant of simulated annealing, and consequently only mild assumptions are needed to obtain performance guarantees.
The theoretical underpinnings of our algorithm are based on a result stating
that the stability of a set of parameters can be phrased in terms of the
stability of a single Markov chain that searches the set for unstable
parameters. Our framework leads to a procedure that can perform statistically rigorous tests for instability, which has been extensively tested on several examples of standard and non-standard queueing networks.
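To make the structure concrete, here is a minimal sketch in the spirit of the method, not the authors' algorithm: a toy non-negative chain whose drift depends on a parameter theta, and a simulated-annealing-style search for parameter values at which the chain's long-run growth rate appears positive. All names and parameter values are illustrative.

```python
# Illustrative sketch only, not the authors' algorithm: a toy chain
# X_{k+1} = max(X_k + U_k, 0) with U_k ~ Uniform(-1, theta), unstable
# exactly when its mean drift (theta - 1)/2 is positive, searched by a
# simulated-annealing-style walk over the parameter interval.
import math
import random

def drift_statistic(theta, n_steps=5000):
    """Simulate the toy chain and return X_n / n, which concentrates on a
    positive value iff the chain is unstable at this parameter."""
    x = 0.0
    for _ in range(n_steps):
        x = max(x + random.uniform(-1.0, theta), 0.0)
    return x / n_steps

def search_unstable(low, high, n_iters=200, temp=1.0, cooling=0.99):
    """Search [low, high] for a parameter with positive drift statistic."""
    theta = random.uniform(low, high)
    stat = drift_statistic(theta)
    best = (theta, stat)
    for _ in range(n_iters):
        cand = min(max(theta + random.gauss(0.0, temp), low), high)
        cand_stat = drift_statistic(cand)
        # Always accept improvements; accept worse moves with a Boltzmann
        # probability that shrinks as the temperature cools.
        if cand_stat >= stat or random.random() < math.exp((cand_stat - stat) / temp):
            theta, stat = cand, cand_stat
        if stat > best[1]:
            best = (theta, stat)
        temp *= cooling
    return best  # a clearly positive statistic suggests instability nearby

print(search_unstable(0.0, 3.0))
```

In the actual procedure the search is itself a single Markov chain whose stability encodes the answer, which is what yields the rigorous guarantees; the sketch only conveys the search-plus-drift-statistic structure.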
A Correction Term for the Covariance of Renewal-Reward Processes with Multivariate Rewards
We consider a renewal-reward process with multivariate rewards. Such a process is constructed from an i.i.d. sequence of time periods, to each of which there is associated a multivariate reward vector. The rewards in each time period may depend on each other and on the period length, but not on the other time periods. Rewards are accumulated to form a vector-valued process that exhibits jumps in all coordinates simultaneously, and only at renewal epochs. We derive an asymptotically exact expression for the covariance function (over time) of the rewards, which is used to refine a central limit theorem for the vector of rewards. As illustrated by a numerical example, this refinement can yield improved accuracy, especially for moderate time horizons.
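For orientation, the classical leading-order result that the paper refines is the standard regenerative-process central limit theorem; the paper's constant-order correction term to the covariance is not reproduced here.

```latex
% Standard regenerative-process CLT (leading order only); the paper's
% contribution is a constant-order correction, not reproduced here.
\[
  \mathbf{R}(t)=\sum_{i=1}^{N(t)}\mathbf{R}_i,\qquad
  \frac{\mathbf{R}(t)-t\,\boldsymbol{\mu}_R/\mu_\tau}{\sqrt{t}}
  \;\xrightarrow{d}\;\mathcal{N}(\mathbf{0},\,\Sigma),\qquad
  \Sigma=\frac{1}{\mu_\tau}\operatorname{Cov}\!\Bigl(\mathbf{R}_1-\frac{\tau_1}{\mu_\tau}\,\boldsymbol{\mu}_R\Bigr),
\]
```

where the $(\tau_i, \mathbf{R}_i)$ are the i.i.d. period lengths and reward vectors, $N(t)$ is the associated renewal process, $\mu_\tau = \mathbb{E}\tau_1$, and $\boldsymbol{\mu}_R = \mathbb{E}\mathbf{R}_1$.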
Analyzing large frequency disruptions in power systems using large deviations theory
We propose a method for determining the most likely cause, in terms of
conventional generator outages and renewable fluctuations, of power system
frequency reaching a predetermined level that is deemed unacceptable to the
system operator. Our parsimonious model of system frequency incorporates
primary and secondary control mechanisms, and supposes that conventional
outages occur according to a Poisson process and renewable fluctuations follow
a diffusion process. We utilize a large deviations theory based approach that
outputs the most likely cause of a large excursion of frequency from its
desired level. These results yield the insight that current levels of renewable
power generation do not significantly increase system vulnerability in terms of
frequency deviations relative to conventional failures. However, for a large range of model parameters, such vulnerabilities may arise as renewable penetration increases.
Comment: Accepted to PMAPS 2020 (the 16th International Conference on Probabilistic Methods Applied to Power Systems).
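As a rough illustration of this class of model (the dynamics and parameter values below are assumptions, not the paper's), one can simulate the frequency deviation as a jump diffusion: proportional primary control, integral secondary control, Poisson-distributed outage jumps, and a Brownian term for renewable fluctuations.

```python
# Illustrative sketch only: frequency deviation under primary (proportional)
# and secondary (integral) control, driven by Poisson generator outages and
# Brownian renewable fluctuations, integrated with an Euler scheme.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 100.0, 0.01           # horizon (s) and Euler step
k_p, k_i = 0.8, 0.05          # primary and secondary control gains
sigma = 0.02                  # renewable fluctuation intensity
lam, jump = 0.01, -0.3        # outage rate (per s) and per-outage power loss

n = int(T / dt)
f = np.zeros(n)               # frequency deviation from nominal
s = 0.0                       # integral (secondary control) state
for k in range(1, n):
    outages = rng.poisson(lam * dt)
    s += f[k - 1] * dt
    drift = -k_p * f[k - 1] - k_i * s
    f[k] = (f[k - 1] + drift * dt
            + sigma * np.sqrt(dt) * rng.standard_normal()
            + jump * outages)

print("largest excursion from nominal:", np.abs(f).max())
```

The large deviations analysis in the paper asks which combination of the two noise sources most cheaply produces a given excursion; the simulation above only shows the forward dynamics.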
Ranking transmission lines by overload probability using the empirical rate function
We develop a non-parametric procedure for ranking transmission lines in a
power system according to the probability that they will overload due to
stochastic renewable generation or demand-side load fluctuations, and compare
this procedure to several benchmark approaches. Using the IEEE 39-bus test
network we provide evidence that our approach, which statistically estimates
the rate function for each line, is highly promising relative to alternative
methods which count overload events or use incorrect parametric assumptions.
Comment: Accepted to PMAPS 2020 (the 16th International Conference on Probabilistic Methods Applied to Power Systems).
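A toy sketch of one way such an empirical-rate-function ranking could look (this is our illustrative reading of the idea, not the paper's procedure): estimate, for each line, the exponential decay rate of the block-average overload probability, and rank lines by that rate, with a smaller rate meaning overload is more likely.

```python
# Hypothetical sketch: rank lines by an empirical large-deviations rate
# estimated from block averages of per-line flow samples.
import numpy as np

def empirical_rate(samples, limit, block=50):
    """Estimate I(limit) ~ -(1/B) * log P(block mean of flow > limit)."""
    blocks = samples[: len(samples) // block * block].reshape(-1, block)
    p_hat = np.mean(blocks.mean(axis=1) > limit)
    if p_hat == 0.0:
        return np.inf  # no block-level overloads observed in the data
    return -np.log(p_hat) / block

rng = np.random.default_rng(1)
flows = {  # toy per-line flow samples (MW), e.g. from a power-flow model
    "line_1": rng.normal(80, 10, 100_000),
    "line_2": rng.normal(70, 18, 100_000),
}
limits = {"line_1": 120.0, "line_2": 120.0}
ranking = sorted(flows, key=lambda l: empirical_rate(flows[l], limits[l]))
print(ranking)  # most overload-prone line first
```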
Optimisation of stochastic networks with blocking: a functional-form approach
This paper introduces a class of stochastic networks with blocking, motivated
by applications arising in cellular network planning, mobile cloud computing,
and spare parts supply chains. Blocking results in lost revenue due to
customers or jobs being permanently removed from the system. We are interested
in striking a balance between mitigating blocking by increasing service
capacity, and maintaining low costs for service capacity. This problem is
further complicated by the stochastic nature of the system. Owing to the
complexity of the system there are no analytical results available that formulate and solve the relevant optimisation problem in closed form.
Traditional simulation-based methods may work well for small instances, but the
associated computational costs are prohibitive for networks of realistic size.
We propose a hybrid functional-form based approach for finding the optimal resource allocation, combining the speed of an analytical approach with the accuracy of simulation-based optimisation. The key insight is to replace the computationally expensive gradient estimation in simulation optimisation with a closed-form analytical approximation that is calibrated using a single simulation run. We develop two implementations of this approach and conduct extensive computational experiments on complex examples to show that it can substantially improve system performance. We also provide evidence that our approach has substantially lower computational costs than stochastic approximation.
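A minimal sketch of the hybrid loop, assuming an Erlang B formula as a stand-in for the paper's functional form and a single simulated blocking estimate as the calibration data (both are our assumptions, not the paper's method):

```python
# Minimal sketch of the hybrid idea: calibrate a closed-form blocking
# approximation to one simulation run, then search over capacity using the
# cheap surrogate instead of re-simulating gradients at every step.

def erlang_b(c, a):
    """Erlang B blocking probability for c servers and offered load a."""
    b = 1.0
    for k in range(1, int(c) + 1):
        b = a * b / (k + a * b)
    return b

def surrogate_cost(c, a, revenue, cost, scale):
    # scale is the single calibration constant fitted to one simulation run
    return revenue * scale * erlang_b(c, a) + cost * c

def optimise(a=50.0, revenue=100.0, cost=1.0, sim_blocking_at_c0=0.12, c0=50):
    # sim_blocking_at_c0 stands in for a blocking estimate from one run of a
    # full network simulation at the initial allocation c0.
    scale = sim_blocking_at_c0 / erlang_b(c0, a)
    c = c0
    while True:  # greedy descent on the integer-valued surrogate
        here = surrogate_cost(c, a, revenue, cost, scale)
        if surrogate_cost(c + 1, a, revenue, cost, scale) < here:
            c += 1
        elif c > 1 and surrogate_cost(c - 1, a, revenue, cost, scale) < here:
            c -= 1
        else:
            return c

print(optimise())
```

The point of this structure is that only one expensive simulation run is needed; every subsequent search step evaluates the closed-form surrogate.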
Transient provisioning and performance evaluation for cloud computing platforms: A capacity value approach
User demand on the computational resources of cloud computing platforms varies over time. These variations in demand can be predictable or unpredictable, resulting in 'bursty' fluctuations in demand. Furthermore, demand can arrive in batches, and users whose demands are not met can be impatient. We demonstrate how to compute the expected revenue loss over a finite time horizon in the presence of all these model characteristics through the use of matrix analytic methods. We then illustrate how to use this knowledge to make frequent short-term provisioning decisions: transient provisioning. It is seen that taking each of the characteristics of fluctuating user demand (predictable, unpredictable, batch) into account can result in a substantial reduction of losses. Moreover, our transient provisioning framework allows a wide variety of system behaviors to be modeled and gives simple expressions for expected revenue loss which are straightforward to evaluate numerically.
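As a simplified illustration of the transient computation (the model below is a plain M/M/c/K queue; the paper's framework additionally handles batches, impatience, and modulated demand), the expected number of blocked jobs over a finite horizon can be read off the transient distribution of the Markov chain:

```python
# Simplified sketch: expected blocked jobs on [0, T] for an M/M/c/K queue,
# computed by stepping the transient law with a fixed matrix exponential.
import numpy as np
from scipy.linalg import expm

lam, mu, c, K, T = 8.0, 1.0, 5, 10, 24.0

# Generator of the birth-death chain on states 0..K.
Q = np.zeros((K + 1, K + 1))
for i in range(K):
    Q[i, i + 1] = lam                   # arrival while not full
    Q[i + 1, i] = min(i + 1, c) * mu    # service completion
np.fill_diagonal(Q, -Q.sum(axis=1))

# E[blocked on [0, T]] = lam * integral of P(state = K at t) dt,
# approximated here by a Riemann sum over the transient distribution.
dt = 0.1
P_dt = expm(Q * dt)
pt = np.zeros(K + 1)
pt[0] = 1.0                             # start empty
loss = 0.0
for _ in range(int(T / dt)):
    loss += lam * pt[K] * dt
    pt = pt @ P_dt
print("expected blocked jobs:", loss)
```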
Human and mouse essentiality screens as a resource for disease gene discovery
The identification of causal variants in sequencing studies remains a considerable challenge that can be partially addressed by new gene-specific knowledge. Here, we integrate measures of how essential a gene is to supporting life, as inferred from viability and phenotyping screens performed on knockout mice by the International Mouse Phenotyping Consortium and essentiality screens carried out on human cell lines. We propose a cross-species gene classification across the Full Spectrum of Intolerance to Loss-of-function (FUSIL) and demonstrate that genes in five mutually exclusive FUSIL categories have differing biological properties. Most notably, Mendelian disease genes, particularly those associated with developmental disorders, are highly overrepresented among genes non-essential for cell survival but required for organism development. After screening developmental disorder cases from three independent disease sequencing consortia, we identify potentially pathogenic variants in genes not previously associated with rare diseases. We therefore propose FUSIL as an efficient approach for disease gene discovery.
Discovery of causal variants for monogenic disorders has been facilitated by whole exome and genome sequencing, but these technologies do not provide a diagnosis for all patients. Here, the authors propose a Full Spectrum of Intolerance to Loss-of-Function (FUSIL) categorization that integrates gene essentiality information to aid disease gene discovery.
Modelling complex stochastic systems: approaches to management and stability
This thesis is about coping with variability in outcomes for complex stochastic systems. We focus on systems where jobs arrive randomly throughout time to utilise resources for a random amount of time before departure. The systems we investigate are primarily concerned with the communication and storage of data.

The thesis is partitioned into two parts. The first part studies systems where congestion leads to jobs waiting for service (queueing systems) and the second part considers systems where congestion leads to losses due to departures before provision of service (loss systems). For queueing systems, we are mainly interested in the management objective of ensuring that the expected time a job must wait before entering is finite, a property known as stability. Finite waiting times occur naturally for loss systems due to the balking behaviour of jobs in response to congestion, and so our attention in this case turns to the more ambitious goal of managing systems in such a way that the number of lost jobs is minimised.

Each part consists of an introductory chapter providing background knowledge, which is followed by three chapters containing original research. In both parts we progress through these chapters by first applying traditional analytical approaches to novel models, and then developing novel simulation-based approaches for models which are out of reach of traditional approaches.

We begin our research on queueing networks in Part 1 by considering a network of infinite-server queues with the special feature that, triggered by specific events, the network population vector may undergo a linear transformation. We use moment generating functions to obtain expressions for transient and stationary moments of the queue size vector and characterise the set of parameters for which the system is stable. A variety of systems fit in the framework developed, such as networks of retrial queues, networks in which jobs can be rerouted when links fail, and storage systems.

In the next chapter of Part 1 we study the recently introduced Queue-Proportional Rate Allocation scheduling algorithm for multihop radio networks. The main contribution is a proof using fluid limit techniques to show that a natural generalisation of this policy, which allows weighting of packets at each link to reflect nonhomogeneous priorities, retains the maximal stability property. We also state a conjecture that in heavy traffic the diffusion-scaled workload process of the network converges weakly to a reflected Brownian motion, and that in this weak limit the vector of queue lengths is always proportional to the traffic arrival rate vector.

We conclude Part 1 by devising a simulation-based method for detecting whether a non-negative Markov chain is unstable for a given set of parameter values. More precisely, for a given subset of the parameter space, we develop an algorithm that can decide whether the set has a subset of positive Lebesgue measure for which the Markov chain is unstable. The approach is based on a variant of simulated annealing, and consequently only mild assumptions are needed to obtain rigorous performance guarantees. Our framework leads to a procedure that can perform statistically rigorous tests for instability, which has been extensively tested using several examples of standard and non-standard queueing networks.

We begin our investigation of loss systems in Part 2 by considering a finite-capacity Erlang B model that alternates between active and inactive states according to a two-state modulating Markov process. Jobs arrive to the system as a Poisson process but are blocked from entry when the system is at capacity or inactive. We use Laplace transforms to derive expressions for the revenue lost during short-term planning horizons. These expressions can be used to assess alternative system designs.

In the next chapter of Part 2 we develop a sophisticated loss-system-type model for cloud computing systems. User demand on the computational resources of cloud computing platforms varies over time. These variations in the arrival process can be predictable or unpredictable, resulting in time-varying and 'bursty' demand fluctuations. Furthermore, jobs can arrive in batches, and users whose demands are not met can be impatient. We demonstrate how to compute the expected revenue loss over a finite time horizon in the presence of all these model characteristics using matrix analytic methods. It is seen that taking these characteristics of fluctuating user demand into account can result in a substantial reduction of losses.

We conclude Part 2 by developing an optimisation framework for a model applicable to mobile cloud edge computing systems. Our model is a stochastic network with blocking: jobs attempt to be processed sequentially at nodes in a network but are lost when they attempt to access a node that is at capacity. The problem is mathematically intractable in general and time-consuming to solve using standard simulation methods. Our novel method combines simulation with analytical approximations to quickly obtain high-quality solutions. We extensively test our approach using several complex models.
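As a small illustration of the modulated Erlang loss model from Part 2 (all parameters illustrative; the thesis derives Laplace-transform expressions rather than simulating), lost revenue over a finite horizon can be estimated by a short discrete-event simulation:

```python
# Hedged sketch of the modulated loss model described above: an Erlang B
# system alternating between active and inactive states; arrivals are
# blocked when the system is at capacity or inactive.
import random

def simulate_lost_revenue(T=100.0, lam=5.0, mu=1.0, capacity=4,
                          on_rate=0.5, off_rate=0.1, revenue_per_job=1.0):
    t, busy, active, lost = 0.0, 0, True, 0
    while True:
        rates = [lam, busy * mu, off_rate if active else on_rate]
        t += random.expovariate(sum(rates))
        if t >= T:
            break
        u = random.uniform(0, sum(rates))
        if u < rates[0]:                     # arrival
            if active and busy < capacity:
                busy += 1
            else:
                lost += 1                    # blocked: full or inactive
        elif u < rates[0] + rates[1]:        # service completion
            busy -= 1
        else:                                # modulating state flips
            active = not active
    return revenue_per_job * lost

est = sum(simulate_lost_revenue() for _ in range(1000)) / 1000
print("estimated lost revenue over horizon:", est)
```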