Importance Sampling Simulation of Population Overflow in Two-node Tandem Networks
In this paper we consider the application of importance sampling in simulations of Markovian tandem networks in order to estimate the probability of rare events, such as network population overflow. We propose a heuristic methodology to obtain a good approximation to the 'optimal' state-dependent change of measure (importance sampling distribution). Extensive experimental results on 2-node tandem networks are very encouraging, yielding asymptotically efficient estimates (with bounded relative error) in settings where no state-independent importance sampling technique is known to be efficient. The methodology avoids the costly optimization involved in other recently proposed approaches to approximating the 'optimal' state-dependent change of measure. Moreover, the insight gained from the heuristic suggests its applicability to larger networks and more general topologies.
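To make the change-of-measure idea concrete, here is a minimal sketch of state-independent importance sampling for overflow in a single M/M/1 queue, using the classic interchange of arrival and service rates. This is the textbook baseline such papers improve upon, not the state-dependent heuristic proposed here, and all parameters are illustrative.

```python
# Minimal sketch: importance sampling for buffer overflow in a single M/M/1
# queue. We estimate P(queue reaches level L before emptying | start at 1)
# by simulating the embedded random walk under the classic "swap lambda and
# mu" change of measure and reweighting with the likelihood ratio.
import random

def overflow_prob(lam=0.3, mu=1.0, L=30, n_runs=100_000, seed=0):
    rng = random.Random(seed)
    p = lam / (lam + mu)                 # original up-step probability
    q = mu / (lam + mu)                  # IS up-step probability (swapped)
    est = 0.0
    for _ in range(n_runs):
        level, lr = 1, 1.0
        while 0 < level < L:
            if rng.random() < q:         # sample under the tilted measure
                level += 1
                lr *= p / q              # reweight an up step
            else:
                level -= 1
                lr *= (1 - p) / (1 - q)  # reweight a down step
        if level == L:                   # overflow happened
            est += lr
    return est / n_runs

# Compare with the exact gambler's-ruin value (1 - mu/lam) / (1 - (mu/lam)**L).
print(overflow_prob())
```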
Secure Distributed Dynamic State Estimation in Wide-Area Smart Grids
The smart grid is a large, complex network with a myriad of vulnerabilities, usually operated in adversarial settings and regulated based on estimated system states. In this study, we propose a novel, highly secure distributed dynamic state estimation mechanism for wide-area (multi-area) smart grids composed of geographically separated subregions, each supervised by a local control center. We first propose a distributed state estimator that, assuming regular system operation, achieves near-optimal performance based on local Kalman filters and the exchange of necessary information between local centers. To enhance security, we further propose to (i) protect the network database and the network communication channels against attacks and data manipulations via a blockchain (BC)-based system design, where the BC operates on the peer-to-peer network of local centers, (ii) locally detect measurement anomalies in real time to eliminate their effects on the state estimation process, and (iii) detect misbehaving (hacked or faulty) local centers in real time via a distributed trust management scheme over the network. We provide theoretical guarantees regarding the false alarm rates of the proposed detection schemes, where the false alarms can be easily controlled. Numerical studies illustrate that the proposed mechanism offers reliable state estimation under regular system operation, timely and accurate detection of anomalies, and good state recovery performance in case of anomalies.
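For context, the local estimators build on standard Kalman filtering. The following is a generic textbook predict-update step, not the paper's distributed, blockchain-secured design; the system matrices and measurement are illustrative placeholders.

```python
# Generic Kalman filter step (the local building block the abstract refers
# to). A, C, Q, R and the measurement y are illustrative placeholders.
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    # Predict.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement y.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# One illustrative step with a 2-state random-walk model.
A = np.eye(2); C = np.eye(2)
Q = 0.01 * np.eye(2); R = 0.1 * np.eye(2)
x1, P1 = kalman_step(np.zeros(2), np.eye(2), np.array([0.2, -0.1]), A, C, Q, R)
print(x1)
```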
Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models
Stochastic simulation has been widely applied in many domains. More recently, however, the rapid surge of sophisticated problems such as safety evaluation of intelligent systems has posed various challenges to conventional statistical methods. Motivated by these challenges, in this thesis, we develop novel methodologies with theoretical guarantees and numerical applications to tackle them from different perspectives.
In particular, our work falls into two areas: (1) rare-event estimation (Chapters 2 to 5), where we develop approaches to estimating the probabilities of rare events via simulation; and (2) model calibration (Chapters 6 and 7), where we aim to calibrate the simulation model so that it is close to reality.
In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery in large deviations and sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstration of our approach.
In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to account for all of them via mixing to achieve efficiency. We argue that the traditional analysis recipe can suffer from intrinsic looseness when relative error is used as the efficiency criterion, and we propose the new notion to tighten this gap. In particular, we show that under the standard Gärtner-Ellis large deviations regime, an importance sampling scheme that uses only the most significant dominating points is sufficient to attain this efficiency notion.
In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivates a closer look at the naive estimator. In this chapter we construct confidence intervals for the target probability from this naive estimator using various techniques, and then analyze their validity and tightness, quantified respectively by the coverage probability and the relative half-width.
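As a concrete illustration of such interval constructions, here is a sketch of two standard binomial confidence intervals (Wilson and Clopper-Pearson) applied to a crude Monte Carlo hit count; these are generic textbook constructions, not necessarily the exact ones analyzed in the chapter.

```python
# Two standard confidence intervals for a rare-event probability estimated
# by crude Monte Carlo: k hits observed in n trials.
import math
from scipy.stats import beta

def wilson_ci(k, n, z=1.96):
    p = k / n
    c = z * z / n
    mid = (p + c / 2) / (1 + c)
    half = z * math.sqrt(p * (1 - p) / n + c / (4 * n)) / (1 + c)
    return mid - half, mid + half

def clopper_pearson_ci(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(wilson_ci(3, 10**6))            # e.g. 3 hits in a million samples
print(clopper_pearson_ci(3, 10**6))
```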
In Chapter 5, we propose the use of extreme value analysis, in particular the peak-over-threshold method popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data to which we fit a generalized Pareto distribution. We test this idea on several numerical examples. The results show that, in the absence of efficient variance reduction schemes, it appears to offer potential benefits for enhancing crude Monte Carlo estimates.
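A minimal sketch of the peak-over-threshold idea on simulation output: fit a generalized Pareto distribution to exceedances over a threshold and extrapolate the tail. The toy lognormal data and the threshold choice are illustrative assumptions.

```python
# Peak-over-threshold on crude Monte Carlo output: fit a generalized Pareto
# distribution (GPD) to exceedances over a threshold u and extrapolate
# P(X > t) for t beyond the bulk of the data.
import numpy as np
from scipy.stats import genpareto, lognorm

rng = np.random.default_rng(0)
x = lognorm.rvs(s=1.0, size=100_000, random_state=rng)  # "simulation output"

u = np.quantile(x, 0.99)          # threshold: top 1% of samples
exc = x[x > u] - u                # exceedances over u
c, _, scale = genpareto.fit(exc, floc=0)

def tail_prob(t):
    # P(X > t) ~= P(X > u) * P(GPD exceedance > t - u) for t > u
    return (exc.size / x.size) * genpareto.sf(t - u, c, scale=scale)

print(tail_prob(50.0))            # extrapolated rare-tail estimate
```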
In Chapter 6, we investigate a framework for developing calibration schemes in parametric settings that satisfies rigorous frequentist statistical guarantees via a basic notion we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature-extraction-then-aggregation approach to construct these sets for multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to the calibration of a limit order book market simulator.
In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling, whose main computational machinery amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, and numerical performance including parameter calibration and downstream decision and risk evaluation tasks.
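To illustrate what a "sampled linear program" can look like in this spirit, here is a hedged sketch: maximizing a failure probability over reweightings of Monte Carlo samples subject to a moment constraint. The scenario data, failure set, and moment bound are hypothetical placeholders, not the thesis's actual formulation.

```python
# A sampled LP in the distributionally-robust spirit: maximize the
# probability of a failure set over reweightings q of Monte Carlo samples
# x_i, subject to a first-moment constraint |E_q[x]| <= 0.1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                 # sampled scenarios
fail = (x > 2.0).astype(float)            # indicator of the failure set

res = linprog(
    c=-fail,                              # linprog minimizes, so negate
    A_ub=np.vstack([x, -x]), b_ub=[0.1, 0.1],   # moment constraint
    A_eq=np.ones((1, x.size)), b_eq=[1.0],      # q is a distribution
    bounds=(0, None),                           # q >= 0
)
print(-res.fun)                           # worst-case failure probability
```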
Methods for performance evaluation of networks: fast simulation of loss systems and analysis of Internet congestion control
Performance evaluation of modern telecommunication networks by means of mathematical modeling frequently leads to situations where an exact analytical solution, although available, is difficult to evaluate computationally. In this thesis, two such problems are studied, and in each case a different approach for easing the computational burden is developed.
The first part of the thesis considers the problem of evaluating blocking probabilities in loss systems, which are often used as models for the call-scale behavior of modern networks. In this case, the solution has a well-known analytical expression, but in practice it cannot be used for computing the blocking probabilities due to the prohibitive size of the system's state space. One can instead use simulation to obtain estimates of the blocking probabilities. For increasing the efficiency of the simulation, i.e., for reducing the variance of the estimates, several novel and increasingly efficient methods are presented. Noticeable variance reductions are obtained by applying the method of conditional expectations. However, even greater variance reductions are gained by using importance sampling. The thesis presents several importance sampling based methods, of which the inverse convolution approach provides variance reductions surpassing all previously reported results in the literature.
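For orientation, the single-link instance of the well-known analytical expression is the Erlang B formula, which the recursion below evaluates stably; it is for multi-service, multi-link networks that the state space becomes prohibitive and simulation takes over. The traffic values are illustrative.

```python
# Erlang B recursion for an M/M/c/c loss system:
# B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)), where a is the offered load.
def erlang_b(offered_load, servers):
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

print(erlang_b(offered_load=80.0, servers=100))  # small blocking probability
```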
In the second part of the thesis, the problem of congestion control in the Internet is studied. Specifically, the focus is on modeling the interaction between the TCP rate control algorithm and the RED buffer management mechanism. By using various analytical approximations, a novel dynamic model is derived for describing the interaction between an idealized TCP source population and a RED-controlled buffer. Ultimately, the model consists of a set of coupled retarded functional differential equations (RFDEs) governing the time-dependent expectations of the stochastic system state variables. This model is used to explore the dependence of the system's equilibrium on the parameters of the physical system. Additionally, methods are derived that allow the stability of the system to be analyzed. In particular, sufficient and necessary conditions are obtained for the RFDE system to be asymptotically stable.
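A minimal sketch of how such retarded dynamics can be integrated numerically, using a fixed-step Euler scheme with a constant initial history; the delayed-logistic right-hand side is an illustrative stand-in, not the thesis's TCP/RED equations.

```python
# Fixed-step Euler integration of a scalar retarded functional differential
# equation x'(t) = f(x(t), x(t - tau)), the kind of delayed dynamics the
# TCP/RED model reduces to.
import numpy as np

def integrate_rfde(f, x0, tau, t_end, dt=0.01):
    lag = int(round(tau / dt))                 # delay expressed in steps
    n = int(round(t_end / dt))
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        delayed = x[i - lag] if i >= lag else x0   # constant history
        x[i + 1] = x[i] + dt * f(x[i], delayed)
    return x

# Illustrative delayed logistic equation x' = r*x*(1 - x(t - tau)).
traj = integrate_rfde(lambda x, xd: 1.5 * x * (1 - xd), x0=0.5,
                      tau=1.0, t_end=40.0)
print(traj[-1])   # settles or oscillates depending on the delay
```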
Extended Abstracts: PMCCS3: Third International Workshop on Performability Modeling of Computer and Communication Systems
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. The pages of the front matter that are missing from the PDF were blank.
Automated decision making and problem solving. Volume 2: Conference presentations
Related topics in artificial intelligence, operations research, and control theory are explored. Existing techniques are assessed and development trends are identified.
Tools and Algorithms for the Construction and Analysis of Systems
This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The set also contains 7 tool papers, 6 tool demo papers, and 9 SV-Comp competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-Comp Tool Competition Papers.
Event-triggered Learning
Machine learning has seen many recent breakthroughs. Inspired by these, learning-control systems emerged. In essence, the goal is to learn models and control policies for dynamical systems. Dealing with learning-control systems is hard, and there are several key challenges that differ from classical machine learning tasks. Conceptually, excitation and exploration play a major role in learning-control systems. On the one hand, we usually aim for controllers that stabilize a system with the goal of avoiding deviations from a setpoint or reference. On the other hand, we need informative data for learning, which is often lacking when controllers work well. There is thus a tension between the objectives of many control-theoretic tasks and the requirements for successful learning.
Additionally, control systems in practice often encounter changes in dynamics or other conditions. For example, new tasks, changing load conditions, or different external conditions have a substantial influence on the underlying distribution. Learning can provide the flexibility to adapt the behavior of learning-control systems to these events.
Since learning has to be applied with sufficient excitation, many practical situations hinge on the following problem:
"When to trigger learning updates in learning-control systems?"
This is the core question of this thesis and, despite its relevance, there is no general method that provides an answer. We propose and develop a new paradigm for principled decision making on when to learn, which we call event-triggered learning (ETL).
The first triggers that we discuss are designed for networked control systems. All agents use model-based predictions to anticipate the other agents' behavior, which makes communication necessary only when the predictions deviate too much. Essentially, an accurate model saves communication, while a poor model leads to poor predictions and thus frequent updates. The learning triggers are based on the inter-communication times (the time between two communication instances). These are independent and identically distributed random variables, which directly leads to sound guarantees. The framework is validated in experiments and yields 70% communication savings for wireless sensor networks that monitor human walking.
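A minimal sketch of the send-on-delta mechanism underlying these triggers: the sender transmits only when the receiver's model-based prediction drifts beyond a threshold. The scalar linear model and the threshold are illustrative assumptions, and this shows the communication trigger, not the learning trigger built on inter-communication times.

```python
# Event-triggered communication: transmit the state only when the
# receiver's model-based prediction deviates by more than a threshold.
import numpy as np

rng = np.random.default_rng(1)
a, threshold, n_steps = 0.95, 0.5, 200
x = x_hat = 0.0
sent = 0
for _ in range(n_steps):
    x = a * x + rng.normal(scale=0.1)   # true (noisy) state
    x_hat = a * x_hat                   # receiver's prediction
    if abs(x - x_hat) > threshold:      # event: prediction too far off
        x_hat = x                       # transmit and resynchronize
        sent += 1
print(f"communication rate: {sent / n_steps:.2%}")
```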
In the second part, we consider optimal control algorithms, starting with linear quadratic regulators. A perfect model yields the best possible controller, while poor models result in poor controllers. Thus, by analyzing the control performance, we can infer the model's accuracy. From a technical point of view, we have to deal with correlated data and work with more sophisticated tools to provide the desired theoretical guarantees. While we obtain a powerful test that is tightly tailored to the problem at hand, it does not generalize to different control architectures. Therefore, we also consider a more general point of view, where we recast the learning of linear systems as a filtering problem. We leverage Kalman filter-based techniques to derive a sound test and utilize the point estimate of the parameters for targeted learning experiments. The algorithm is independent of the underlying control architecture, but is demonstrated for model predictive control.
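A hedged sketch of the performance-based idea for LQR: compare the running empirical cost against the average stage cost the nominal model predicts (trace(PW) for process noise covariance W), and flag a learning event when they diverge. The system matrices, noise level, and margin are placeholders, not the thesis's actual test.

```python
# Performance-based trigger for LQR: the nominal model predicts an average
# stage cost of trace(P @ W); a markedly higher observed cost suggests the
# model is inaccurate and learning should be triggered.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
W = 0.01 * np.eye(2)                      # process noise covariance

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # LQR gain
expected = np.trace(P @ W)                # predicted average stage cost

rng = np.random.default_rng(0)
x, costs = np.zeros(2), []
for _ in range(5000):
    u = -K @ x
    costs.append(float(x @ Q @ x + u @ R @ u))
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)

print(np.mean(costs), expected)
if np.mean(costs) > 1.2 * expected:       # empirical cost too high:
    print("trigger a learning experiment")
```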
Most of the results in the first two parts depend critically on linearity assumptions on the dynamics and further problem-specific properties. In the third part, we take a step back and ask the fundamental question of how to compare (nonlinear) dynamical systems directly from state data. We propose a kernel two-sample test that compares stationary distributions of dynamical systems. Additionally, we introduce a new type of mixing that can be estimated directly from data to deal with autocorrelations.
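A minimal sketch of a kernel two-sample (MMD) statistic with a Gaussian kernel, comparing state samples from two systems; the thesis's treatment of autocorrelation via its new mixing notion is not reproduced here, and the toy data are illustrative.

```python
# Biased kernel maximum mean discrepancy (MMD) statistic with a Gaussian
# kernel: near zero iff the two sample distributions match.
import numpy as np

def mmd2(X, Y, sigma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))   # states from system A
Y = rng.normal(0.3, 1.0, size=(500, 2))   # states from system B
print(mmd2(X, Y))
```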
In summary, this thesis introduces a new paradigm for deciding when to trigger updates in learning-control systems. Additionally, we develop three instantiations of this paradigm for different learning-control problems. Further, we present applications of the algorithms that yield substantial communication savings, effective controller updates, and the detection of anomalies in human walking data.