    Echo State Queueing Network: a new reservoir computing learning tool

    In the last decade, a new computational paradigm was introduced in the field of Machine Learning under the name of Reservoir Computing (RC). RC models are neural networks with a recurrent part (the reservoir) that does not participate in the learning process, while the rest of the system, where no recurrence (no neural circuit) occurs, is the only part that is trained. This approach has grown rapidly due to its success in solving learning tasks and other computational applications. Some success was also observed with another recently proposed neural network designed using Queueing Theory, the Random Neural Network (RandNN). Both approaches have good properties and identified drawbacks. In this paper, we propose a new RC model called the Echo State Queueing Network (ESQN), where we use ideas coming from RandNNs for the design of the reservoir. ESQNs are ESNs in which the reservoir is given a new dynamics inspired by recurrent RandNNs. The paper positions ESQNs in the global Machine Learning area and provides examples of their use and of their performance. We show on widely used benchmarks that ESQNs are very accurate tools, and we illustrate how they compare with standard ESNs. Comment: Proceedings of the 10th IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, USA, 201
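
    The abstract does not give the queueing-inspired reservoir equations, so the following is only a minimal sketch of a standard leaky Echo State Network with a ridge-regression readout, illustrating the Reservoir Computing principle the paper builds on (a fixed random reservoir, a trained linear readout). All parameter values and the toy task are illustrative assumptions, not the ESQN dynamics.

```python
import numpy as np

# Minimal sketch of a standard Echo State Network (ESN): a fixed random
# reservoir plus a trained linear readout. This is NOT the ESQN reservoir
# dynamics of the paper; it only illustrates the Reservoir Computing idea
# that the recurrent part is left untrained.

rng = np.random.default_rng(0)
n_in, n_res = 1, 200                        # input and reservoir sizes (illustrative)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(u_seq, leak=0.3):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 60, 0.1)
u_seq = np.sin(t)[:-1, None]
y_seq = np.sin(t)[1:, None]

X = run_reservoir(u_seq)
washout = 100                               # discard the initial transient
X, y = X[washout:], y_seq[washout:]

# Ridge-regression readout: the only trained part of the model.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```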

    Estimating the Probability of a Rare Event Over a Finite Time Horizon

    We study an approximation for the zero-variance change of measure to estimate the probability of a rare event in a continuous-time Markov chain. The rare event occurs when the chain reaches a given set of states before some fixed time limit. The jump rates of the chain are expressed as functions of a rarity parameter in such a way that the probability of the rare event goes to zero when the rarity parameter goes to zero, and the behavior of our estimators is studied in this asymptotic regime. After giving a general expression for the zero-variance change of measure in this situation, we develop an approximation of it via a power series and show that this approximation provides a bounded relative error when the rarity parameter goes to zero. We illustrate the performance of our approximation on small numerical examples of highly reliable Markovian systems. We compare it to a previously proposed heuristic that combines forcing with balanced failure biasing. We also exhibit the exact zero-variance change of measure for these examples and compare it with these two approximations.
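
    The paper works with continuous-time chains and a finite time horizon; as a much simpler illustration of the zero-variance idea it approximates, here is a sketch on a toy discrete-time gambler's-ruin walk, where the hitting probability gamma(x) is known in closed form. Using the exact gamma in the change of measure gives a zero-variance estimator, while plugging in an approximation of gamma still gives a low-variance one. The model, parameters and helper names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy illustration of a zero-variance change of measure, NOT the continuous-time,
# finite-horizon setting of the paper. We estimate the probability that a random
# walk with up-probability p < 1/2 hits N before 0, starting from state 1.
# The zero-variance importance sampling measure makes the transition probability
# proportional to p(x, y) * gamma(y), where gamma is the exact hitting probability.

p, N, start = 0.2, 15, 1
q = 1 - p

def gamma(x):
    """Exact probability of hitting N before 0 from state x (gambler's ruin)."""
    r = q / p
    return (1 - r**x) / (1 - r**N)

def is_estimate(g, n_runs=10_000, seed=1):
    """Importance sampling with transition probabilities tilted by g."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_runs)
    for k in range(n_runs):
        x, L = start, 1.0                 # L accumulates the likelihood ratio
        while 0 < x < N:
            up = p * g(x + 1)
            down = q * g(x - 1)
            prob_up = up / (up + down)
            if rng.random() < prob_up:
                L *= p / prob_up
                x += 1
            else:
                L *= q / (1 - prob_up)
                x -= 1
        estimates[k] = L if x == N else 0.0
    return estimates.mean(), estimates.std()

print("exact probability :", gamma(start))
print("zero-variance IS  :", is_estimate(gamma))                       # std is ~0
print("approximate-g IS  :", is_estimate(lambda x: (p / q) ** (N - x)))
```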

    Random Neural Networks and applications

    Context of the tutorial: the IEEE CIS Summer School on Computational Intelligence and Applications (IEEE CIS SSoCIA 2022), associated with the 8th IEEE Latin American Conference on Computational Intelligence (IEEE LA-CCI 2022). Random Neural Networks are a class of Neural Networks coming from Stochastic Processes and, in particular, from Queueing Models. They have some nice properties and have achieved good performance in several application areas. They are, in fact, queueing systems seen as neural machines, and the two uses (probabilistic models for the performance evaluation of systems, or learning machines similar to the other, more standard families of Neural Networks) refer to the same mathematical objects. They have the appealing property that, like other special models unknown to most experts in Machine Learning, their testing in and/or adaptation to the many areas where standard Machine Learning techniques have obtained great success remains totally open. In the tutorial, we will introduce Random Neurons and the networks we can build with them, plus some details about the numerical techniques needed to learn with them. We will also underline the reasons that make them, at the very least, extremely interesting. We will also describe some of their successful applications, including our own examples. We will focus on learning, but we will mention other uses of these models in performance evaluation, in the analysis of biological systems, and in optimization.
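
    As a concrete complement, the sketch below iterates the standard stationary equations of a Gelenbe-type Random Neural Network, computing each neuron's excitation (firing) probability as a fixed point of the signal-flow equations. The network size, rates and weights are illustrative assumptions; the tutorial's own examples are not reproduced here.

```python
import numpy as np

# Sketch of the stationary excitation probabilities of a Gelenbe-style Random
# Neural Network, solved by fixed-point iteration. The standard equations are
#   q_i = lambda_plus_i / (r_i + lambda_minus_i),
#   lambda_plus_i  = Lambda_i + sum_j q_j * Wp[j, i],
#   lambda_minus_i = lam_i    + sum_j q_j * Wm[j, i],
# with Wp[j, i] = r_j * p_plus(j -> i) and Wm[j, i] = r_j * p_minus(j -> i).
# All numerical values below are illustrative.

rng = np.random.default_rng(0)
n = 5
Wp = rng.uniform(0.0, 0.3, (n, n))          # excitatory weights r_j * p+_{ji}
Wm = rng.uniform(0.0, 0.3, (n, n))          # inhibitory weights r_j * p-_{ji}
np.fill_diagonal(Wp, 0.0)
np.fill_diagonal(Wm, 0.0)
r = Wp.sum(axis=1) + Wm.sum(axis=1) + 0.5   # firing rates (>= total outgoing weight)
Lambda = rng.uniform(0.1, 0.5, n)           # external positive-signal rates
lam = rng.uniform(0.0, 0.2, n)              # external negative-signal rates

def stationary_q(max_iter=1000, tol=1e-12):
    q = np.zeros(n)
    for _ in range(max_iter):
        lp = Lambda + q @ Wp                # total positive arrival rates
        lm = lam + q @ Wm                   # total negative arrival rates
        q_new = np.clip(lp / (r + lm), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

print("stationary excitation probabilities:", np.round(stationary_q(), 4))
```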

    Network reliability, performability metrics, rare events and standard Monte Carlo

    In this chapter we consider static models in network reliability, which cover a huge family of applications going well beyond networks of any particular kind. The analysis of these models is in general #P-complete, and Monte Carlo remains the only effective approach. We underline the interest in moving from the typical binary world, where components and systems are either up or down, to a multivariate one, where the up state is decomposed into several performance levels. This is also called a performability view of the system. The chapter then proposes a different view of Monte Carlo procedures where, instead of trying to reduce the variance of the estimators, we focus on their time complexities. This view allows a first straightforward way of exploring these metrics. The chapter focuses on the resilience, which is the expected number of pairs of nodes that are connected by at least one path in the model. We discuss the ability of the mentioned approach to quickly estimate this metric, together with variations of it. We also discuss another side effect of the sampling technique proposed in the text: the possibility of easily computing the sensitivities of these metrics with respect to the individual reliabilities of the components. We show that this can be done without a significant overhead with respect to the procedure that estimates the resilience metric alone.
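
    Since the resilience metric is defined above as the expected number of node pairs connected by at least one path, a plain standard Monte Carlo estimator for it can be sketched as follows. This is crude sampling for illustration only, not the time-complexity-oriented procedure discussed in the chapter, and the small example network and reliabilities are assumptions.

```python
import random

# Crude Monte Carlo sketch of the resilience metric: the expected number of
# node pairs connected by at least one working path, when each link works
# independently with its own reliability.

def connected_pairs(n_nodes, up_edges):
    """Number of node pairs joined by a path, via union-find."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for u, v in up_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for x in range(n_nodes):
        root = find(x)
        sizes[root] = sizes.get(root, 0) + 1
    return sum(s * (s - 1) // 2 for s in sizes.values())

def estimate_resilience(n_nodes, edges, rel, n_samples=100_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        up = [e for e in edges if rng.random() < rel[e]]   # sample link states
        total += connected_pairs(n_nodes, up)
    return total / n_samples

# Small example: a 4-node cycle with per-link reliability 0.9.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
rel = {e: 0.9 for e in edges}
print("estimated resilience:", estimate_resilience(4, edges, rel))
```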

    Using Machine Learning in communication network research

    Nowadays, Machine Learning (ML) tools are commonly used in every area of science or technology. Networking is not an exception, and we find ML all over the research activities in most fields composing the domain. In this talk, we will briefly describe a set of research activities we have developed over several years around several quite different families of problems, using ML methods. They concern (i) the automatic and accurate real-time measurement of the Quality of Experience of an application or service built on top of the Internet around the transport of video or audio content (e.g. video streaming, IP telephony, video-conferencing, etc.), (ii) network tomography (measuring on the edges of the network to infer values inside it), (iii) time series forecasting in several contexts, in particular concept drift detection and anomaly detection, and (iv) service placement in Software Defined Networks, a central problem in 5G and B5G technologies. The corresponding ML tools are mainly Supervised Learning and Reinforcement Learning, even if we are currently using Unsupervised Learning in recent activities related to point (i). After this global presentation, we will zoom in on one or two specific results obtained with these powerful tools, and on some of the projects we are currently developing.

    On the robustness of Fishman's bound-based method for the network reliability problem

    Static network unreliability computation is an NP-hard problem, leading to the use of Monte Carlo techniques to estimate it. The latter, in turn, suffer from the rare event problem in the frequent situation where the system's unreliability is a very small value. As a consequence, specific rare event simulation techniques are relevant tools to provide this estimation. We focus here on a method proposed by Fishman making use of bounds on the structure function of the model. The bounds are based on the computation of (disjoint) mincuts disconnecting the set of nodes and (disjoint) minpaths ensuring that they are connected. We analyze the robustness of the method when the unreliability of the links goes to zero. We show that the conditions provided by Fishman, based on a bound, are only sufficient, and we provide further insight and examples concerning the behavior of the method.
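
    For illustration, the sketch below computes the elementary reliability bounds obtainable from edge-disjoint minpaths and edge-disjoint mincuts with independent links, which is the kind of bound the method above relies on. Fishman's estimator itself, which samples conditionally on such bounds, is not reproduced, and the small example network is an assumption.

```python
from math import prod

# Simple bounds on two-terminal reliability from edge-disjoint minpaths and
# edge-disjoint mincuts: if some minpath is fully up the system works, and if
# some mincut is fully down the system fails. Disjointness makes these events
# independent, so their probabilities are easy to compute.

def reliability_bounds(p, disjoint_minpaths, disjoint_mincuts):
    """Return (lower, upper) bounds on the reliability.

    p: dict edge -> probability that the edge works.
    disjoint_minpaths: edge-disjoint paths (tuples of edges).
    disjoint_mincuts: edge-disjoint cuts (tuples of edges).
    """
    # P(at least one disjoint minpath fully working) <= reliability
    lower = 1.0 - prod(1.0 - prod(p[e] for e in path) for path in disjoint_minpaths)
    # P(at least one disjoint mincut fully failed) <= unreliability
    prob_some_cut_down = 1.0 - prod(
        1.0 - prod(1.0 - p[e] for e in cut) for cut in disjoint_mincuts
    )
    upper = 1.0 - prob_some_cut_down
    return lower, upper

# Example: two parallel s-t paths ('a','b') and ('c','d'); edge-disjoint cuts
# ('a','c') and ('b','d'); every link fails with probability 1e-3.
p = {e: 1 - 1e-3 for e in "abcd"}
lo, up = reliability_bounds(p, [("a", "b"), ("c", "d")], [("a", "c"), ("b", "d")])
print("unreliability bounds: [{:.3e}, {:.3e}]".format(1 - up, 1 - lo))
```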