31 research outputs found

    Slicenet: a Simple and Scalable Flow-Level Simulator for Network Slice Provisioning and Management

    Network slicing plays a crucial role in the progression of 5G and beyond, facilitating dedicated logical networks that meet diverse and specific service requirements. An End-to-End (E2E) slice comprises not only a service chain of physical or virtual functions for the radio and core of 5G/6G networks but also the full path to the application servers, which may run at an edge-computing site or in a central cloud. Nonetheless, developing and optimizing E2E network slice management systems requires a reliable simulation tool for evaluating aspects such as resource allocation and function placement over large-scale network topologies. This paper introduces Slicenet, a Mininet-like simulator crafted for E2E network slicing experimentation at the flow level. Slicenet aims at facilitating the investigation of a wide range of slice optimization techniques, delivering measurable, reproducible results without the need for physical resources or complex integration tools. It provides a well-defined process for conducting experiments, including the creation and implementation of policies for components such as edge and central cloud resources and the network functions of multiple slices with different characteristics. Furthermore, Slicenet effortlessly produces meaningful visualizations from simulation results, aiding comprehensive understanding. Using Slicenet, service providers can derive invaluable insights into resource optimization, capacity planning, Quality of Service (QoS) assessment, cost optimization, performance comparison, risk mitigation, and Service Level Agreement (SLA) compliance, thereby strengthening network resource management and slice orchestration.
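As a rough illustration of flow-level slice provisioning (a hypothetical sketch, not Slicenet's actual API: the function and hop names below are invented), an E2E slice request can be admitted only if every hop along its path, from the radio access through the edge to the central cloud, still has capacity for it:

```python
# Hypothetical flow-level admission check for an E2E slice: a slice reserves
# capacity on every hop of its path and is admitted only if all hops have room.

def admit_slice(path_capacity, demands, request):
    """Admit a `request` (Mbps) if every hop in `path_capacity` can carry it."""
    for hop, cap in path_capacity.items():
        if demands.get(hop, 0) + request > cap:
            return False                      # some hop would be overloaded
    for hop in path_capacity:                 # reserve on every hop
        demands[hop] = demands.get(hop, 0) + request
    return True

capacity = {"radio": 100, "edge": 200, "core": 400}   # assumed hop capacities
load = {}
print(admit_slice(capacity, load, 80))   # True: all hops fit 80 Mbps
print(admit_slice(capacity, load, 30))   # False: radio hop would exceed 100
```

At flow level, this kind of bookkeeping over aggregate capacities replaces per-packet events, which is what makes such simulators scale to large topologies.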

    Efficient Low Cost Range-Based Localization Algorithm for Ad-hoc Wireless Sensors Networks

    Revised version submitted to Ad Hoc Networks. Building an efficient node localization system in wireless sensor networks faces several challenges. For example, calculating square roots consumes computational resources, and using flooding techniques to broadcast node locations wastes bandwidth and energy. Reducing computational complexity and communication overhead is essential to reduce power consumption, extend the lifetime of battery-operated nodes, and improve the performance of the limited computational resources of these sensor nodes. In this paper, we revise the mathematical model, the analysis, and the simulation experiments of the Trigonometric-based Ad-hoc Localization System (TALS), a range-based localization system presented previously. Furthermore, the study is extended, and a new technique to optimize the system is proposed. An analysis and an extensive simulation of the optimized TALS (OTALS) are presented, showing its cost, accuracy, and efficiency, and thus deducing the impact of its parameters on performance. The contributions of this work can be summarized as follows: 1) proposing and employing a novel modified Manhattan distance norm in the TALS localization process; 2) analyzing and simulating OTALS, showing its computational cost and accuracy, and comparing them with related work; 3) studying the impact of different parameters such as anchor density, node density, noisy measurements, transmission range, and non-convex network areas; 4) extending our previous joint work, TALS, to allow base anchors to be located at positions other than the origin, and analyzing the possibility of selecting a wrong quadrant in the first iteration and how this problem is overcome. Through mathematical analysis and intensive simulation, OTALS proves to be iterative, distributed, and computationally simple, presenting superior performance compared to other localization techniques.
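The appeal of a Manhattan-style norm on constrained sensor nodes can be illustrated generically (this is not the paper's modified norm, whose exact definition is given in the paper; the scaling constant below is an assumption):

```python
# Why a Manhattan-style norm helps on sensor nodes: it avoids the square root
# required by the Euclidean norm, trading some accuracy for cheaper computation.

def euclidean(dx, dy):
    return (dx * dx + dy * dy) ** 0.5   # needs a square root

def manhattan(dx, dy):
    return abs(dx) + abs(dy)            # no square root: cheap on constrained CPUs

# A common refinement scales the Manhattan norm to reduce its bias, since
# |dx| + |dy| overestimates the Euclidean distance by up to a factor sqrt(2).
def scaled_manhattan(dx, dy, k=0.7071):  # k = 1/sqrt(2), an assumed choice
    return k * manhattan(dx, dy)

print(euclidean(3, 4))          # 5.0
print(manhattan(3, 4))          # 7
print(scaled_manhattan(3, 4))   # about 4.95
```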

    Download Process in Distributed Systems, Flow-level vs. Packet-level Simulation Analysis

    Parallelism in the download process of large files is an efficient mechanism for distributed systems. In such systems, some peers (clients) exploit the power of parallelism to download blocks of data stored in a distributed way over some other peers (servers). Determining response times in parallel downloading with capacity constraints on both client downloads and server uploads requires understanding the instantaneous share of each client's and server's bandwidth that is devoted to each data transfer flow. In this report, we explore the practical relevance of the hypothesis that flows share the network bandwidth according to the max-min fairness paradigm. We have implemented into a flow-level simulator a version of the algorithm that calculates such a bandwidth allocation, which we call the "progressive-filling flow-level algorithm" (PFFLA). We have programmed a similar model over NS2 and compared the empirical distributions resulting from both simulations. Our results indicate that flow-level predictions are very accurate in symmetric networks and good in asymmetric networks. Therefore, PFFLA would be extremely useful for building flow-level simulators and, possibly, for performing probabilistic performance calculations in general P2P networks.
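The max-min fairness hypothesis can be sketched with the textbook progressive-filling procedure (a generic sketch, not the paper's PFFLA, which additionally handles client download and server upload constraints): raise every unfrozen flow's rate at the same pace until some link saturates, freeze the flows crossing that link, and repeat.

```python
# Progressive filling for max-min fair bandwidth allocation (textbook version).
# links: {link: capacity}; flows: {flow: set of links the flow crosses}.

def max_min_fair(links, flows):
    rate = {f: 0.0 for f in flows}
    active = set(flows)          # flows whose rate can still grow
    cap = dict(links)            # residual capacity per link
    while active:
        # smallest equal increment that saturates some link carrying active flows
        inc = min(cap[l] / sum(1 for f in active if l in flows[f])
                  for l in cap if any(l in flows[f] for f in active))
        for f in active:
            rate[f] += inc
        for l in list(cap):
            users = [f for f in active if l in flows[f]]
            cap[l] -= inc * len(users)
            if users and cap[l] < 1e-9:      # link saturated: freeze its flows
                active -= set(users)
    return rate

# three flows over two links of capacity 10 and 6
r = max_min_fair({"L1": 10, "L2": 6},
                 {"f1": {"L1"}, "f2": {"L1", "L2"}, "f3": {"L2"}})
print(r)   # f2 and f3 share L2 at 3 each; f1 then fills L1 up to 7
```

Here L2 saturates first at rate 3, freezing f2 and f3; f1 alone then absorbs the remaining 4 units of L1, reaching 7, which is exactly the max-min allocation.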

    Flow-Level Modeling of Parallel Download in Distributed Systems

    Response time is the primary Quality of Service metric for parallel download systems, where pieces of large files can be downloaded simultaneously from several servers. Determining response times in such systems is still a difficult issue, because the way the network bandwidth is shared between flows is not yet well understood. We address the issue by exploring the practical relevance of the hypothesis that flows share the network bandwidth according to the max-min fairness paradigm. We have implemented into a flow-level simulator a version of the algorithm that calculates such a bandwidth allocation, which we call the "progressive-filling flow-level algorithm" (PFFLA). We have programmed a similar model over NS2 and compared the empirical distributions resulting from both simulations. Our results indicate that flow-level predictions are very accurate in symmetric networks and good in asymmetric networks. Therefore, PFFLA would be extremely useful for building flow-level simulators and, possibly, for performing probabilistic QoS calculations in general P2P networks.

    Simulation analysis of download and recovery processes in P2P storage systems

    Peer-to-peer storage systems rely on data fragmentation and distributed storage. Unreachable fragments are continuously recovered, requiring multiple fragments of data (constituting a "block") to be downloaded in parallel. Recent modeling efforts have assumed the recovery process to follow an exponential distribution, an assumption made mainly in the absence of studies characterizing the "real" distribution of the recovery process. This work aims at filling this gap through a simulation study. To that end, we implement the distributed storage protocol in the NS-2 network simulator and run a total of seven experiments covering a large variety of scenarios. We show that the fragment download time follows approximately an exponential distribution. We also show that the block download time and the recovery time essentially follow a hypo-exponential distribution with many distinct phases (at most as many exponential phases). We use expectation-maximization and least-squares estimation algorithms to fit the empirical distributions. We also provide a good approximation of the number of phases of the hypo-exponential distribution that applies in all scenarios considered. Last, we test the goodness of our fits using statistical (Kolmogorov-Smirnov test) and graphical methods.
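The fitting methodology can be sketched generically (this is not the paper's code; the sample data and parameters are synthetic): fit an exponential to observed download times by maximum likelihood (rate = 1/mean) and assess the fit with the Kolmogorov-Smirnov statistic, the maximum gap between the empirical and fitted CDFs.

```python
# Fit an exponential distribution to synthetic "download times" and compute
# the Kolmogorov-Smirnov statistic of the fit.
import math
import random

random.seed(1)
samples = sorted(random.expovariate(2.0) for _ in range(1000))  # synthetic data

rate = 1.0 / (sum(samples) / len(samples))   # MLE of the exponential rate

def ks_stat(sorted_x, cdf):
    """Max distance between the empirical CDF of sorted_x and a fitted CDF."""
    n = len(sorted_x)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(sorted_x))

d = ks_stat(samples, lambda x: 1 - math.exp(-rate * x))
print(f"fitted rate ~ {rate:.2f}, K-S statistic = {d:.3f}")  # small D: good fit
```

In the paper's setting, the same machinery is applied per phase, since the block download and recovery times are sums of exponential stages rather than a single exponential.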

    Lifetime and availability of data stored on a P2P system: Evaluation of redundancy and recovery schemes

    This paper studies the performance of Peer-to-Peer storage and backup systems (P2PSS). These systems are based on three pillars: data fragmentation and dissemination among the peers, redundancy mechanisms to cope with peer churn, and repair mechanisms to recover lost or temporarily unavailable data. Usually, redundancy is achieved either by replication or by erasure codes; a new class of network coding (regenerating codes) has been proposed recently, so we adapt our work to these three redundancy schemes. We introduce two mechanisms for recovering lost data and evaluate their performance by modeling them through absorbing Markov chains. Specifically, we evaluate the quality of service provided to users in terms of durability and availability of stored data for each recovery mechanism and deduce the impact of its parameters on system performance. The first mechanism is centralized and based on the use of a single server that can recover multiple losses at once. The second mechanism is distributed: the reconstruction of lost fragments is iterated sequentially over many peers until the required level of redundancy is attained. The key assumptions made in this work, in particular those on the recovery process and on the peer on-time distribution, are in agreement with the analyses in [1] and [2], respectively. The models are thereby general enough to be applicable to many distributed environments, as shown through numerical computations. We find that, in stable environments such as local area or research institute networks, where machines are usually highly available, the distributed-repair scheme in erasure-coded systems offers a reliable, scalable, and cheap storage/backup solution. In highly dynamic environments, the distributed-repair scheme is in general inefficient, in particular for maintaining high data availability, unless the data redundancy is high. Using regenerating codes overcomes this limitation of the distributed-repair scheme. P2PSS with a centralized-repair scheme are efficient in any environment but have the disadvantage of relying on a centralized authority. The analysis of the overhead costs (e.g. computation, bandwidth, and complexity) of the different redundancy schemes with respect to their advantages (e.g. simplicity) is left for future work.
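The absorbing-Markov-chain approach to durability can be illustrated on a toy chain (the states and probabilities below are assumptions for illustration, not the paper's model): transient states track the remaining redundancy, the absorbing state is data loss, and the expected time to absorption follows from t = (I - Q)^(-1) 1, solved here directly for the 2x2 case.

```python
# Toy absorbing Markov chain for data durability. Transient states: 2 or 1
# redundant fragments left; absorbing state: data lost. Expected steps to
# absorption solve (I - Q) t = 1, done here with Cramer's rule.

p = 0.01   # per-step probability of losing a fragment (assumed)
r = 0.20   # per-step probability a repair completes from state 1 (assumed)

# One-step transition probabilities among transient states [2, 1]
Q = [[1 - p, p],
     [r, 1 - r - p]]   # from state 1: repair to 2, lose to absorption, or stay

a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
t2 = (d - b) / det   # expected lifetime starting with full redundancy
t1 = (a - c) / det   # expected lifetime starting one loss away from danger
print(f"expected lifetime: {t2:.0f} steps from state 2, {t1:.0f} from state 1")
# prints: expected lifetime: 2200 steps from state 2, 2100 from state 1
```

The durability metric in the abstract is exactly this kind of absorption-time quantity; availability comes from occupancy probabilities of the same chain before absorption.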

    Simulation Analysis of Download and Recovery Processes in P2P Storage Systems

    Peer-to-peer storage systems rely on data fragmentation and distributed storage. Unreachable fragments are continuously recovered, requiring multiple fragments of data (constituting a "block") to be downloaded in parallel. Recent modeling efforts have assumed the recovery process to follow an exponential distribution, an assumption made mainly in the absence of studies characterizing the "real" distribution of the recovery process. This report aims at filling this gap through an empirical study. To that end, we implement the distributed storage protocol in the NS-2 network simulator and run a total of six experiments covering a large variety of scenarios. We show that the fragment download time follows approximately an exponential distribution. We also show that the block download time and the recovery time essentially follow a hypo-exponential distribution with many distinct phases (at most as many exponential phases). We use expectation-maximization and least-squares estimation algorithms to fit the empirical distributions. We also provide a good approximation of the number of phases of the hypo-exponential distribution that applies in all scenarios considered. Last, we test the goodness of our fits using statistical (Kolmogorov-Smirnov test) and graphical methods.

    Download Process in Distributed Systems, Flow-level Algorithm vs. Packet-level Simulation Model

    Parallelism in the download process of large files is an efficient mechanism for distributed systems. In such systems, some peers (clients) exploit the power of parallelism to download blocks of data stored in a distributed way over some other peers (servers). Determining response times in parallel downloading with capacity constraints on both client downloads and server uploads requires understanding the instantaneous share of each client's and server's bandwidth that is devoted to each data transfer flow. In this report, we explore the practical relevance of the hypothesis that flows share the network bandwidth according to the max-min fairness paradigm. We have implemented into a flow-level simulator a version of the algorithm that calculates such a bandwidth allocation, which we call the "progressive-filling flow-level algorithm" (PFFLA). We have programmed a similar model over NS2 and compared the empirical distributions resulting from both simulations. Our results indicate that flow-level predictions are very accurate in symmetric networks and good in asymmetric networks. Therefore, PFFLA would be extremely useful for building flow-level simulators and, possibly, for performing probabilistic performance calculations in general P2P networks.

    Lifetime and availability of data stored on a P2P system: Evaluation of recovery schemes

    This report studies the performance of Peer-to-Peer storage and backup systems (P2PSS). These systems are based on three pillars: data fragmentation and dissemination among the peers, redundancy mechanisms to cope with peer churn, and repair mechanisms to recover lost or temporarily unavailable data. Usually, redundancy is achieved either by replication or by erasure codes; a new class of network coding (regenerating codes) has been proposed recently, so we adapt our work to these three redundancy schemes. We introduce two mechanisms for recovering lost data and evaluate their performance by modeling them through absorbing Markov chains. Specifically, we evaluate the quality of service provided to users in terms of durability and availability of stored data for each recovery mechanism and deduce the impact of its parameters on system performance. The first mechanism is centralized and based on the use of a single server that can recover multiple losses at once. The second mechanism is distributed: the reconstruction of lost fragments is iterated sequentially over many peers until the required level of redundancy is attained. The key assumptions made in this work, in particular those on the recovery process and on the peer on-time distribution, are in agreement with the analyses in [11] and [20], respectively. The models are thereby general enough to be applicable to many distributed environments, as shown through numerical computations. We find that, in stable environments such as local area or research institute networks, where machines are usually highly available, the distributed-repair scheme in erasure-coded systems offers a reliable, scalable, and cheap storage/backup solution. In highly dynamic environments, the distributed-repair scheme is in general inefficient, in particular for maintaining high data availability, unless the data redundancy is high. Using regenerating codes overcomes this limitation of the distributed-repair scheme. P2PSS with a centralized-repair scheme are efficient in any environment but have the disadvantage of relying on a centralized authority. The analysis of the overhead costs (e.g. computation, bandwidth, and complexity) of the different redundancy schemes with respect to their advantages (e.g. simplicity) is left for future work.

    ns-3 Based Framework for Simulating Communication Based Train Control (CBTC) Systems

    In a Communication Based Train Control (CBTC) system, a central zone controller server (ZC) exchanges signaling messages with on-board carborne controllers (CC) inside the trains through a wireless technology. The ZC periodically calculates and sends to each train its Limit of Movement Authority (LMA), i.e. how far the train can proceed. A CC triggers an emergency brake (EB) if no message is received within a certain time interval, to avoid collision. Clearly, EBs caused by signaling message losses rather than by real risks to the trains (called spurious EBs) are undesirable. Quantifying the rate of spurious EBs and correctly predicting CBTC system performance are hard tasks of important industrial relevance. This work aims at filling this gap by using simulation to better predict CBTC system performance and avoid extra provisioning before deployment. A typical CBTC system implementation for metro by Alstom Transport is considered. New ns-3 modules (CBTC protocol, video traffic generator, multi-channel scanning mechanism, 3D antenna patterns) are developed, and a piece of existing code is enhanced. The simulation is also used to investigate the dimensioning of the radio access network in a realistic environment (specific modems and access point antennas, radio frequencies, train and track models), an aspect also ignored in the previous literature. Last, our approach can be useful for validating some analytical works.
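The spurious-EB mechanism lends itself to a back-of-the-envelope estimate (a toy Monte Carlo sketch with assumed parameters, not the Alstom/ns-3 model): if the ZC sends a message every period and the CC brakes after a timeout of silence, a spurious EB requires roughly k = timeout / period consecutive message losses.

```python
# Monte Carlo estimate of the spurious emergency-brake (EB) rate when an EB is
# triggered by k consecutive lost signaling messages (i.i.d. losses assumed).
import random

def spurious_eb_rate(loss_prob, k, n_messages, seed=0):
    rng = random.Random(seed)
    consecutive = ebs = 0
    for _ in range(n_messages):
        if rng.random() < loss_prob:
            consecutive += 1
            if consecutive == k:     # silence has reached the timeout
                ebs += 1
                consecutive = 0      # brake, then resynchronize
        else:
            consecutive = 0
    return ebs / n_messages

# 5% message loss, EB after 3 consecutive losses
print(spurious_eb_rate(0.05, 3, 1_000_000))   # close to 0.05**3 = 1.25e-4
```

Real CBTC deployments violate the i.i.d.-loss assumption (losses are bursty during handovers and fading), which is precisely why the packet-level ns-3 study described above is needed.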