Echo State Queueing Network: a new reservoir computing learning tool
In the last decade, a new computational paradigm was introduced in the field
of Machine Learning, under the name of Reservoir Computing (RC). RC models are
neural networks with a recurrent part (the reservoir) that does not
participate in the learning process, while the rest of the system contains no
recurrence (no neural circuit). This approach has grown rapidly due to
its success in solving learning tasks and other computational applications.
Some success was also observed with another recently proposed neural network
designed using Queueing Theory, the Random Neural Network (RandNN). Both
approaches have good properties and identified drawbacks. In this paper, we
propose a new RC model called Echo State Queueing Network (ESQN), where we use
ideas coming from RandNNs for the design of the reservoir. ESQNs consist of
ESNs whose reservoir has new dynamics inspired by recurrent RandNNs. The
paper positions ESQNs in the global Machine Learning area, and provides
examples of their use and performance. We show on widely used benchmarks that
ESQNs are very accurate tools, and we illustrate how they compare with standard ESNs.
Comment: Proceedings of the 10th IEEE Consumer Communications and Networking
Conference (CCNC), Las Vegas, USA, 201
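The reservoir dynamics that ESQNs replace are the classic ESN update x(t+1) = tanh(W_in u(t+1) + W x(t)), with only a linear readout trained. A minimal sketch of a standard ESN (all sizes, weight scales, and the toy task below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and scales (not from the paper).
n_in, n_res = 1, 50

# Fixed random weights: in Reservoir Computing, neither W_in nor W is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 heuristic

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # classic ESN update
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here by ridge regression on the states.
u_seq = np.sin(np.linspace(0, 4 * np.pi, 200))
X = run_reservoir(u_seq)
y = u_seq  # toy target: reconstruct the input from the reservoir state
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

An ESQN keeps this overall architecture but substitutes queueing-inspired reservoir dynamics for the tanh update.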
Estimating the Probability of a Rare Event Over a Finite Time Horizon
We study an approximation of the zero-variance change of measure to estimate the probability of a rare event in a continuous-time Markov chain. The rare event occurs when the chain reaches a given set of states before some fixed time limit. The jump rates of the chain are expressed as functions of a rarity parameter in such a way that the probability of the rare event goes to zero when the rarity parameter goes to zero, and the behavior of our estimators is studied in this asymptotic regime. After giving a general expression for the zero-variance change of measure in this situation, we develop an approximation of it via a power series and show that this approximation provides a bounded relative error when the rarity parameter goes to zero. We illustrate the performance of our approximation on small numerical examples of highly reliable Markovian systems. We compare it to a previously proposed heuristic that combines forcing with balanced failure biasing. We also exhibit the exact zero-variance change of measure for these examples and compare it with these two approximations.
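To see why such changes of measure are needed, one can run standard (crude) Monte Carlo on a toy highly reliable system: as the rarity parameter shrinks, the event probability drops and the relative error of the crude estimator degrades. A small sketch under assumed parameters (a hypothetical two-component repairable system, not one of the paper's examples):

```python
import math
import random

random.seed(1)

def hits_before_horizon(eps, horizon=1.0):
    """One trajectory of a hypothetical 2-component repairable system.
    Each component fails at rate eps and is repaired at rate 1; the rare
    event is both components down before `horizon`."""
    t, down = 0.0, 0
    while True:
        fail_rate = (2 - down) * eps
        repair_rate = float(down)
        total = fail_rate + repair_rate
        t += random.expovariate(total)  # next jump of the CTMC
        if t >= horizon:
            return False
        if random.random() < fail_rate / total:
            down += 1
            if down == 2:
                return True
        else:
            down -= 1

def crude_mc(eps, n_rep=20000):
    hits = sum(hits_before_horizon(eps) for _ in range(n_rep))
    p_hat = hits / n_rep
    # relative error of crude MC grows as the event gets rarer
    rel_err = (math.sqrt(p_hat * (1 - p_hat) / n_rep) / p_hat
               if p_hat > 0 else float("inf"))
    return p_hat, rel_err

probs = [crude_mc(eps)[0] for eps in (0.5, 0.1, 0.02)]
```

Importance sampling with a (near) zero-variance change of measure is precisely what keeps the relative error bounded in this eps-to-zero regime.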
Random Neural Networks and applications
Context of the tutorial: the IEEE CIS Summer School on Computational Intelligence and Applications (IEEE CIS SSoCIA 2022), associated with the 8th IEEE Latin American Conference on Computational Intelligence (IEEE LA-CCI 2022). Random Neural Networks are a class of Neural Networks coming from Stochastic Processes and, in particular, from Queueing Models. They have some nice properties and have reached good performance in several application areas. They are, in fact, queueing systems seen as neural machines, and the two uses (probabilistic models for the performance evaluation of systems, or learning machines similar to other, more standard families of Neural Networks) refer to the same mathematical objects. They have the appealing property that, like other special models unknown to most experts in Machine Learning, their testing in and/or adaptation to the many areas where standard Machine Learning techniques have obtained great successes is totally open. In the tutorial, we will introduce Random Neurons and the networks we can build with them, plus some details about the numerical techniques needed to learn with them. We will also underline the reasons that make them at least extremely interesting, and describe some of their successful applications, including our own examples. We will focus on learning, but we will also mention other uses of these models in performance evaluation, in the analysis of biological systems, and in optimization.
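The queueing connection can be made concrete: the steady state of a recurrent Random Neural Network (a G-network) solves the nonlinear traffic equations q_i = T_i^+ / (r_i + T_i^-), which can be computed by fixed-point iteration. A minimal sketch with made-up weights (the small network below is purely illustrative):

```python
import numpy as np

# Hypothetical 3-neuron recurrent RandNN; all rates invented for illustration.
Wp = np.array([[0.0, 0.2, 0.1],   # excitatory weights w+_{ij} = r_i p+_{ij}
               [0.1, 0.0, 0.2],
               [0.2, 0.1, 0.0]])
Wm = np.array([[0.0, 0.1, 0.2],   # inhibitory weights w-_{ij} = r_i p-_{ij}
               [0.2, 0.0, 0.1],
               [0.1, 0.2, 0.0]])
Lam = np.array([0.3, 0.2, 0.1])   # exogenous excitatory arrival rates
lam = np.array([0.1, 0.1, 0.1])   # exogenous inhibitory arrival rates
r = Wp.sum(axis=1) + Wm.sum(axis=1)  # firing rate of each neuron

# Fixed-point iteration on the traffic equations:
#   q_i = T+_i / (r_i + T-_i),  T+ = Lam + q @ Wp,  T- = lam + q @ Wm
q = np.zeros(3)
for _ in range(200):
    q_new = (Lam + q @ Wp) / (r + lam + q @ Wm)
    if np.max(np.abs(q_new - q)) < 1e-12:
        q = q_new
        break
    q = q_new
# q_i is the stationary probability that neuron i is excited; stability
# requires q_i < 1 for every neuron.
```

Learning in a RandNN then amounts to adjusting the weights w+ and w- so that these stationary probabilities match desired outputs.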
Network reliability, performability metrics, rare events and standard Monte Carlo
In this chapter we consider static models in network reliability, which cover a huge family of applications going well beyond networks in the strict sense. The analysis of these models is in general #P-complete, and Monte Carlo remains the only effective approach. We underline the interest in moving from the typical binary world, where components and systems are either up or down, to a multi-variate one, where the up state is decomposed into several performance levels. This is also called a performability view of the system. The chapter then proposes a different view of Monte Carlo procedures where, instead of trying to reduce the variance of the estimators, we focus on their time complexities. This view allows a first straightforward way of exploring these metrics. The chapter focuses on the resilience, which is the expected number of pairs of nodes that are connected by at least one path in the model. We discuss the ability of the mentioned approach to quickly estimate this metric, together with variations of it. We also discuss another side effect of the sampling technique proposed in the text: the possibility of easily computing the sensitivities of these metrics with respect to the individual reliabilities of the components. We show that this can be done without significant overhead compared to the procedure that estimates the resilience metric alone.
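The resilience metric described above, the expected number of node pairs connected by at least one operating path, can be estimated by standard Monte Carlo: sample component states, then count connected pairs. A small sketch on a hypothetical 5-node network (topology and reliabilities invented for illustration):

```python
import itertools
import random

random.seed(0)

# Hypothetical 5-node network; each edge operates independently with the
# given probability.
NODES = range(5)
EDGES = {(0, 1): 0.9, (1, 2): 0.9, (2, 3): 0.8,
         (3, 4): 0.8, (0, 4): 0.7, (1, 3): 0.6}

def connected_pairs(up_edges):
    """Count node pairs joined by a path of operating edges (union-find)."""
    parent = list(NODES)
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in up_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return sum(1 for a, b in itertools.combinations(NODES, 2)
               if find(a) == find(b))

def resilience_mc(n_rep=5000):
    """Standard Monte Carlo estimate of the expected number of connected pairs."""
    total = 0
    for _ in range(n_rep):
        up = [e for e, p in EDGES.items() if random.random() < p]
        total += connected_pairs(up)
    return total / n_rep

est = resilience_mc()
```

Each replication costs roughly one union-find pass, which is why analyzing the time complexity of the sampler, rather than only the estimator's variance, is a natural angle.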
Stochastic flow networks, rare events, dependent components and Splitting Monte Carlo techniques
Using Machine Learning in communication network research
Nowadays, Machine Learning (ML) tools are commonly used in every area of science and technology. Networking is no exception, and ML appears throughout the research activities in most fields composing the domain. In this talk, we briefly describe a set of research activities we have developed over several years around several quite different families of problems, using ML methods. They concern (i) the automatic and accurate real-time measurement of the Quality of Experience of an application or service built on top of the Internet around the transport of video or audio content (e.g. video streaming, IP telephony, video-conferencing, etc.), (ii) network tomography (measuring at the edges to infer values inside the network), (iii) time series forecasting in several contexts, in particular concept drift detection and anomaly detection, and (iv) service placement in Software Defined Networks, a central problem in 5G and B5G technologies. The corresponding ML tools are mainly Supervised Learning and Reinforcement Learning, even if we are currently using Unsupervised Learning in recent activities related to point (i). After this global presentation, we will zoom in on some specific results obtained with these powerful tools and on some of the projects we are currently developing.
La méthode de la matrice exponentielle-duale. Applications à l'analyse de chaßnes de Markov.
Classic performance evaluation using queueing theory is usually done assuming a stable model in equilibrium. However, there are situations where we are interested in the transient phase. In this case, the main metrics are built around the model's state distribution at an arbitrary point in time. In dependability, a significant part of the analysis is done in the transient phase. In previous work, we developed an approach to derive distributions of some continuous-time Markovian models, built around uniformization (also called Jensen's method), which transforms the problem into a discrete-time one, and around the concept of stochastic duality. This combination of tools provides significant simplifications in many cases. However, stochastic duality does not always exist. Recently, we discovered that an idea of algebraic duality, formally similar to stochastic duality, can be defined and applied to any linear differential system (or, equivalently, to any matrix). In this case, there is no limitation: the transformation is always possible. We call it the exponential-dual matrix method. In the article, we describe the limitations of stochastic duality and how the exponential-dual matrix method operates for any system, stochastic or not. These concepts are illustrated throughout the article with specific examples, including the case of infinite matrices.
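The uniformization step mentioned above can be sketched concretely: pick a rate Lambda at least as large as every |Q_ii|, set P = I + Q/Lambda, and write pi(t) as a Poisson mixture of the powers of P. A minimal Python illustration on a made-up 3-state generator (not an example from the article):

```python
import math
import numpy as np

# Hypothetical 3-state CTMC generator Q (rows sum to 0) and initial distribution.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])
pi0 = np.array([1.0, 0.0, 0.0])

# Uniformization: choose Lambda >= max_i |Q_ii|, set P = I + Q/Lambda; then
#   pi(t) = sum_{n>=0} e^{-Lambda t} (Lambda t)^n / n!  *  pi0 P^n
Lam = max(-Q.diagonal())
P = np.eye(len(Q)) + Q / Lam

def transient_dist(pi0, t, tol=1e-12):
    """Transient distribution pi(t) by truncating the Poisson mixture."""
    q = Lam * t
    weight = math.exp(-q)          # Poisson weight for n = 0
    vec = pi0.copy()               # holds pi0 P^n
    pi, acc, n = weight * vec, weight, 0
    while acc < 1.0 - tol:         # stop once enough Poisson mass is covered
        n += 1
        vec = vec @ P
        weight *= q / n
        acc += weight
        pi = pi + weight * vec
    return pi

pi_t = transient_dist(pi0, 0.5)
```

The truncation error is controlled by the uncovered Poisson mass, which is what makes the discrete-time reformulation numerically attractive.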