4 research outputs found

    Using Reinforcement Learning to Improve Network Reliability through Optimal Resource Allocation

    Networks provide a variety of critical services to society (e.g., power grids, telecommunications, water, transportation) but are prone to disruption. With this motivation, we study a sequential decision problem in which an initial network is improved over time (e.g., by adding edges or increasing the reliability of existing edges) and rewards are earned over time as a function of the network’s all-terminal reliability. The actions available in each time period are limited by the availability of resources such as time, money, or labor. To solve this problem, we utilized a Deep Reinforcement Learning (DRL) approach implemented within OpenAI Gym using Stable Baselines. A Proximal Policy Optimization (PPO) agent was used to identify the edge to be improved, or a new edge to be added, based on the current state of the network and the available budget. The network’s all-terminal reliability was computed from its reliability polynomial. To understand how the model behaves under a variety of conditions, we explored numerous network configurations with different initial link reliabilities, added link reliabilities, numbers of nodes, and budget structures. We conclude with a discussion of insights gained from our set of designed experiments.
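    A minimal sketch of the setup this abstract describes, assuming Stable-Baselines3 and Gymnasium as stand-ins for the Stable Baselines / OpenAI Gym stack it names: the state is the vector of edge reliabilities plus the remaining budget, each action spends one budget unit improving one edge, and the reward is the network’s all-terminal reliability. The 4-node example graph, the reliability increment, and the brute-force edge-state enumeration (used here instead of the reliability polynomial) are illustrative assumptions, and the edge-addition action is omitted for brevity.

```python
import itertools
import numpy as np
import gymnasium as gym
from gymnasium import spaces


def all_terminal_reliability(nodes, edges, probs):
    """Exact all-terminal reliability by enumerating edge states (small graphs only)."""
    total = 0.0
    for states in itertools.product([0, 1], repeat=len(edges)):
        # probability of this particular up/down configuration of edges
        p = 1.0
        for s, q in zip(states, probs):
            p *= q if s else (1.0 - q)
        # check whether the surviving edges connect all nodes (union-find)
        parent = list(range(len(nodes)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for s, (u, v) in zip(states, edges):
            if s:
                parent[find(u)] = find(v)
        if len({find(i) for i in range(len(nodes))}) == 1:
            total += p
    return total


class NetworkReliabilityEnv(gym.Env):
    """Toy environment: each step spends one budget unit improving one edge."""

    def __init__(self, n_nodes=4, edges=((0, 1), (1, 2), (2, 3), (3, 0)),
                 init_rel=0.8, delta=0.05, budget=5):
        self.nodes = list(range(n_nodes))
        self.edges = list(edges)
        self.init_rel, self.delta, self.max_budget = init_rel, delta, budget
        self.action_space = spaces.Discrete(len(self.edges))
        self.observation_space = spaces.Box(
            0.0, float(budget), shape=(len(self.edges) + 1,), dtype=np.float32)

    def _obs(self):
        return np.array(self.rel + [self.budget], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.rel = [self.init_rel] * len(self.edges)
        self.budget = self.max_budget
        return self._obs(), {}

    def step(self, action):
        self.rel[action] = min(1.0, self.rel[action] + self.delta)
        self.budget -= 1
        reward = all_terminal_reliability(self.nodes, self.edges, self.rel)
        done = self.budget <= 0
        return self._obs(), reward, done, False, {}


if __name__ == "__main__":
    # Stable-Baselines3 is the maintained successor of the Stable Baselines library named above.
    from stable_baselines3 import PPO
    env = NetworkReliabilityEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)
```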

    Debugging-workflow-aware software reliability growth analysis

    Software reliability growth models support the prediction and assessment of product quality, release time, and testing/debugging cost. Several software reliability growth model extensions take into account the bug correction process. However, their estimates may be significantly inaccurate when debugging fails to fully fit modelling assumptions. This paper proposes the debugging-workflow-aware software reliability growth method (DWA-SRGM), a method for reliability growth analysis leveraging the debugging data usually managed by companies in bug tracking systems. On the basis of a characterization of the debugging workflow within the software project under consideration (in terms of bug features and treatment phases), DWA-SRGM pinpoints the factors impacting the estimates and spots bottlenecks, thus supporting process improvement decisions. Two industrial case studies are presented, a customer relationship management system and an enterprise resource planning system, whose defects span periods of about 17 and 13 months, respectively. DWA-SRGM proved effective in obtaining more realistic estimates and in capitalizing on the awareness of critical factors for improving debugging.
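    The abstract does not include code, but the kind of baseline fit that DWA-SRGM builds on can be illustrated with a basic Goel-Okumoto software reliability growth model, m(t) = a(1 - e^(-bt)), fitted to cumulative defect counts exported from a bug tracking system. The monthly defect counts below are made-up placeholders, not data from the case studies.

```python
import numpy as np
from scipy.optimize import curve_fit


def goel_okumoto(t, a, b):
    """Expected cumulative number of defects detected by time t."""
    return a * (1.0 - np.exp(-b * t))


# months since testing started and cumulative defects observed (illustrative data)
t = np.arange(1, 14)
defects = np.array([12, 25, 41, 52, 60, 68, 73, 77, 80, 83, 85, 86, 87])

# least-squares fit of the two model parameters
(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, defects, p0=[100.0, 0.1])
remaining = a_hat - defects[-1]
print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"estimated residual defects after month {t[-1]}: {remaining:.1f}")
```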