96 research outputs found

    A resequencing model for high speed networks

    In this paper, we propose a framework to study the resequencing mechanism in high-speed networks. This framework allows us to estimate the packet resequencing delay, the total packet delay, and the resequencing buffer occupancy distributions when data traffic is dispersed over multiple disjoint paths. In contrast to most existing work, the estimation of the end-to-end path delay distribution is decoupled from the queueing model for resequencing. This leads to a simple yet general model, which can be combined with measurement-based tools for estimating the end-to-end path delay distribution to find an optimal split of traffic. We consider a multiple-node M/M/1 tandem network as a path model. When end-to-end path delays are Gaussian distributed, our results show that the packet resequencing delay, the total packet delay, and the resequencing buffer occupancy drop when the traffic is spread over a larger number of homogeneous paths, although the network performance improvement quickly saturates as the number of paths increases. We find that the number of paths used in multipath routing should be small, say up to three. Moreover, the optimal split of traffic occurs when the paths carry equal loads.
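
    As a minimal Monte Carlo sketch of this behaviour (not the paper's analytical model), the snippet below disperses a Poisson packet stream over k homogeneous paths, approximates each path's end-to-end delay as an independent Gaussian with the mean and variance of a tandem of M/M/1 queues at the reduced per-path load, and measures the mean resequencing delay. The arrival rate, service rate, hop count and packet count are illustrative assumptions, and correlations between successive delays on the same path are ignored.

    import numpy as np

    def mean_resequencing_delay(k_paths, lam=0.8, mu=1.0, hops=5,
                                n_pkts=200_000, seed=0):
        rng = np.random.default_rng(seed)
        path_lam = lam / k_paths                      # load carried by each path
        assert path_lam < mu, "each path must remain stable"
        m = hops / (mu - path_lam)                    # mean end-to-end path delay
        s = np.sqrt(hops) / (mu - path_lam)           # std of end-to-end path delay
        send = np.cumsum(rng.exponential(1.0 / lam, n_pkts))   # Poisson departures
        delay = np.clip(rng.normal(m, s, n_pkts), 0.0, None)   # Gaussian path delays
        arrive = send + delay
        release = np.maximum.accumulate(arrive)       # in-order (resequenced) delivery
        return float(np.mean(release - arrive))       # mean resequencing delay

    for k in (1, 2, 3, 4, 6, 8):
        print(f"{k} path(s): mean resequencing delay {mean_resequencing_delay(k):.2f}")

    Under these illustrative parameters the improvement flattens after roughly three paths, consistent with the saturation behaviour reported above.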

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, which are nevertheless susceptible to random failures, security threats and malicious behaviours that compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, variable rate of advancement towards the destination over the route, as well as of defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet’s travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
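
    As a toy illustration of the multi-path coding trade-off summarised above (not the thesis's jump-diffusion model), the sketch below sends n coded packets over independent lossy paths and declares delivery once any m of them arrive. The exponential per-path delays, Bernoulli losses and all parameter values are illustrative assumptions.

    import numpy as np

    def coded_delivery(n_copies, m_needed, loss_p=0.2, mean_delay=1.0,
                       trials=100_000, seed=0):
        rng = np.random.default_rng(seed)
        delays = rng.exponential(mean_delay, (trials, n_copies))
        lost = rng.random((trials, n_copies)) < loss_p
        delays[lost] = np.inf                              # a lost copy never arrives
        kth = np.sort(delays, axis=1)[:, m_needed - 1]     # time of the m-th arrival
        delivered = np.isfinite(kth)
        return delivered.mean(), kth[delivered].mean()     # success prob., mean delivery time

    p, t = coded_delivery(1, 1)
    print(f"single packet     : delivered {p:.3f}, mean delay {t:.3f}")
    for n in (2, 3, 4, 5):
        p, t = coded_delivery(n, 2)
        print(f"{n} coded, any 2 ok: delivered {p:.3f}, mean delay {t:.3f}")

    Sending more coded copies raises the delivery probability and shortens the delivery time at the cost of roughly proportional additional transmission energy, which is the kind of trade-off the thesis quantifies.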

    On Call Migration

    In an environment where network resources are reserved, e.g., telephone networks, the path with the smallest number of hops is preferred, and alternate paths are used only when the shortest path is full. However, if the alternate path is longer, more network resources are devoted to the circuit, and this in turn could worsen the situation. Circuit migration is a solution that reduces the amount of resources used inefficiently due to alternate routing in connection-oriented networks. By rerouting a circuit when its shortest path becomes available, one can smooth out congestion and increase the utilization of the network. The overhead of circuit migration is comparable to call setup, and its tradeoff is improved performance versus some additional call-processing capacity. In this report we focus on this tradeoff, evaluating it analytically and by simulation on a completely connected topology. Our initial results indicate that migration can improve the performance of the network at high load, but it has to be done very often. Such a large amount of overhead could be expensive enough to offset the gain in performance. On further investigation, we discover that thrashing can also occur in circuit migration. We propose two solutions to the problem. The first is to migrate only when the shortest path is no longer highly utilized. The second migrates a circuit only if its current path is congested. A hybrid of the two is also examined. We also address the reordering problem that can occur when a circuit is transferred to a new path.
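
    The sketch below captures, in Python, the threshold rules proposed above for deciding when to migrate a circuit back to its shortest path. The utilisation inputs, threshold values and the exact hybrid combination are illustrative assumptions rather than the report's precise policies.

    def should_migrate(shortest_path_util, current_path_util,
                       shortest_threshold=0.7, congestion_threshold=0.9,
                       rule="hybrid"):
        """Decide whether to move a circuit from its alternate (current) path
        back to its shortest path, trading migration overhead against the
        resources tied up by the longer route."""
        # Solution 1: migrate only when the shortest path is no longer highly utilised.
        shortest_ok = shortest_path_util < shortest_threshold
        # Solution 2: migrate only when the circuit's current path is congested.
        current_congested = current_path_util > congestion_threshold
        if rule == "shortest":
            return shortest_ok
        if rule == "congested":
            return current_congested and shortest_path_util < 1.0  # shortest path must have spare capacity
        return shortest_ok and current_congested                   # hybrid: both conditions must hold

    # Example: circuit on a congested alternate path, shortest path lightly loaded.
    print(should_migrate(0.55, 0.95, rule="hybrid"))   # True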

    Performance analysis of queueing systems with resequencing

    The service sector lies at the heart of industrialized nations and continues to serve as a major contributor to the world economy. Over the years, the service industry has given rise to an enormous amount of technological, scientific, and managerial challenges. Among all challenges, operational service quality, service efficiency, and the tradeoffs between the two have always been at the center of service managers' attention and are likely to be even more so in the future. Queueing theory attempts to address these challenges from a mathematical perspective. Every service station of a queueing network is characterized by two major components: the external arrival process and the service process. The external arrival process governs the timing of service request arrivals to that station from outside, and the service process concerns the duration of service transactions in that station... [edited by author]
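
    As a minimal concrete example of these two components, the sketch below simulates a single station with a Poisson external arrival process and exponential service times (an M/M/1 queue) and compares the simulated mean sojourn time with the textbook value 1/(mu - lambda). The rates are arbitrary illustrative values.

    import numpy as np

    def mm1_mean_sojourn(lam=0.8, mu=1.0, n_customers=100_000, seed=0):
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1.0 / lam, n_customers))  # external arrival process
        services = rng.exponential(1.0 / mu, n_customers)              # service process
        depart = np.empty(n_customers)
        depart[0] = arrivals[0] + services[0]
        for i in range(1, n_customers):
            depart[i] = max(arrivals[i], depart[i - 1]) + services[i]  # FIFO single server
        return float(np.mean(depart - arrivals))

    print(f"simulated mean sojourn time : {mm1_mean_sojourn():.2f}")
    print(f"theoretical 1/(mu - lambda) : {1.0 / (1.0 - 0.8):.2f}")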

    Estimation of Network Disordering Effects by In-depth Analysis of the Resequencing Buffer Contents in Steady-state, Journal of Telecommunications and Information Technology, 2016, nr 1

    The paper is devoted to the analytic analysis of the resequencing issue, which is common in packet networks, using a queueing-theoretic approach. The authors propose a mathematical model that describes the simplest setting of packet resequencing but allows one to make a first step in the in-depth analysis of the queue dynamics in the resequencing buffer. Specifically, consideration is given to an N-server queueing system (N > 3) with a single infinite-capacity buffer and resequencing, which may serve as a model of packet reordering in packet networks. Customers arrive at the system according to a Poisson flow, occupy one place in the buffer and receive service from one of the servers; service times are exponentially distributed with the same parameter at each server. The order of customers upon arrival has to be preserved upon departure. Customers that violate the order are kept in the resequencing buffer, which also has infinite capacity. It is shown that the resequencing buffer can be considered as consisting of n, 1 ≤ n ≤ N − 1, interconnected queues, depending on the number of busy servers, with the i-th queue containing customers that have to wait for i service completions before they can leave the system. A recursive algorithm is obtained for computing the joint stationary distribution of the number of customers in the buffer and at the servers, and in each queue of the resequencing buffer. Numerical examples showing the dynamics of the characteristics of the queues in the resequencing buffer are given.
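
    As a simulation cross-check on the model described above (rather than the paper's recursive algorithm), the sketch below simulates Poisson arrivals to N homogeneous exponential servers fed in FIFO order from a single infinite buffer, forces departures back into arrival order, and estimates the mean resequencing wait and, via Little's law, the mean resequencing-buffer occupancy. The parameter values are illustrative assumptions.

    import numpy as np

    def simulate_resequencing(N=4, lam=3.0, mu=1.0, n_cust=100_000, seed=0):
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1.0 / lam, n_cust))   # Poisson flow
        services = rng.exponential(1.0 / mu, n_cust)               # exponential service times
        free_at = np.zeros(N)                 # next instant at which each server becomes free
        complete = np.empty(n_cust)
        for i in range(n_cust):               # customers enter service in arrival order
            s = int(np.argmin(free_at))
            start = max(arrivals[i], free_at[s])
            complete[i] = start + services[i]
            free_at[s] = complete[i]
        depart = np.maximum.accumulate(complete)   # departures restored to arrival order
        reseq_wait = depart - complete             # time spent in the resequencing buffer
        return reseq_wait.mean(), lam * reseq_wait.mean()   # Little's law: L = lambda * W

    mean_wait, mean_occupancy = simulate_resequencing()
    print(f"mean resequencing wait     : {mean_wait:.3f}")
    print(f"mean resequencing occupancy: {mean_occupancy:.3f}")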

    Manufacturing System and Supply Chain Analyses Related to Product Complexity and Sequenced Parts Delivery

    Mixed-model assembly has been widely used in many industries. It is applied in order to deal effectively with increasing product complexity. Sequencing and resequencing on a mixed-model assembly line are also complicated by high product complexity. To improve the performance of a mixed-model assembly system and the supply chain, one can develop efficient sequencing rules to address sequencing problems, and manage product complexity to reduce its negative impact on the production system. This research addresses aspects of sequence alteration and restoration on a mixed-model assembly line for the purpose of improving the performance of a manufacturing system and its supply chain, as well as product complexity analysis. This dissertation is organized into Parts 1, 2, and 3 based on three submitted journal papers. Part 1. On a mixed-model assembly line, sequence alteration is generally used to intentionally change the sequence to the one desired by the downstream department, and sequence restoration is generally applied to achieve sequence compliance by restoring the original sequence that has been unintentionally changed for unexpected reasons such as rework. Rules and methods for sequence alteration using shuffling lines or sorting lines were developed to accommodate the sequence considerations of the downstream department. A spare units system based on queuing analysis was proposed to restore the unintentionally altered sequence in order to facilitate sequenced parts delivery. A queuing model for the repairs of defective units in the spare units system was developed to estimate the number of spare units needed in this system. Part 2. Research was conducted on product complexity analysis. Data envelopment analysis (DEA) was first applied to compare product complexity related to product variety among similar products in the same market; two DEA models, with respective illustrative models, were developed to consider various product complexity factors and different comparison objectives. One of these models compared the product complexity factors in conjunction with sales volume. A third DEA model was developed to identify product complexity reduction opportunities by ranking various product attributes. A further incremental economic analysis, considering the changes in costs and market impact caused by an intended complexity change, was presented in order to justify a product complexity reduction opportunity identified by the DEA model. Part 3. Two extended DEA models were developed to compare the relative complexity levels of similar products specifically in automobile manufacturing companies. Some automobile product attributes that have a significant cost impact on manufacturing and the supply chain were considered as inputs in the two extended DEA models. An incremental cost estimation approach was developed to estimate the specific cost change in various categories of production activities associated with a product complexity change. A computational tool was developed to accomplish the cost estimation. In each of the above parts, a case study was included to demonstrate how the developed rules, models, or methods could be applied at an automobile assembly plant. These case studies showed that the methodologies developed in this research are useful for better managing mixed-model assembly and product complexity in an automobile manufacturing system and supply chain.
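
    As a minimal sketch of the kind of DEA comparison described above, the snippet below computes input-oriented CCR efficiency scores with complexity factors (e.g. number of variants, number of unique parts) as inputs and sales volume as the output. The plain CCR formulation, the factor choice and the data are illustrative assumptions, not the dissertation's exact models.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(inputs, outputs, dmu):
        """Input-oriented CCR efficiency of decision-making unit `dmu`;
        inputs is (m, n), outputs is (s, n); returns theta in (0, 1]."""
        m, n = inputs.shape
        s = outputs.shape[0]
        c = np.r_[1.0, np.zeros(n)]                      # minimise theta
        A_in = np.hstack([-inputs[:, [dmu]], inputs])    # sum_j lam_j x_ij <= theta * x_i,dmu
        A_out = np.hstack([np.zeros((s, 1)), -outputs])  # sum_j lam_j y_rj >= y_r,dmu
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -outputs[:, dmu]],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        return res.x[0]

    # Illustrative data: 4 products, inputs = (variants, unique parts), output = sales volume.
    X = np.array([[12, 20, 35, 18],
                  [300, 450, 700, 380]], dtype=float)
    Y = np.array([[90, 120, 150, 100]], dtype=float)
    for j in range(X.shape[1]):
        print(f"product {j}: complexity efficiency {ccr_efficiency(X, Y, j):.3f}")

    A product scoring 1 lies on the efficient frontier (no peer combination delivers at least its sales with proportionally less complexity); lower scores point to complexity-reduction opportunities of the kind ranked in Part 2.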

    A paracasting model for concurrent access to replicated content

    We propose a framework to study how to effectively download a copy of the same document from a set of replicated servers. A generalized application-layer anycasting scheme, known as paracasting, has been proposed to advocate concurrent access to a subset of replicated servers to satisfy a client's request cooperatively. Each participating server satisfies the request in part by transmitting a subset of the requested file to the client. The client can recover the complete file when the different parts of the file sent from the participating servers are received. This framework allows us to estimate the average time to download a file from the set of homogeneous replicated servers, and the request blocking probability when each server can accept and serve a finite number of concurrent requests. Our results show that the file download time drops when a request is served concurrently by a larger number of homogeneous replicated servers, although the performance improvement quickly saturates as the number of servers increases. If the total number of requests that a server can handle simultaneously is finite, the request blocking probability increases with the number of replicated servers used to serve a request concurrently. Therefore, paracasting is effective when using a small number of servers, say up to four, to serve a request concurrently.
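
    As a minimal Monte Carlo sketch of the download-time behaviour described above (not the paper's analytical model), the snippet below splits a file equally across k homogeneous replicated servers and completes the download when the slowest part arrives. The lognormal per-server throughput model and all parameter values are illustrative assumptions.

    import numpy as np

    def mean_download_time(k_servers, file_size=100.0, trials=100_000, seed=0):
        rng = np.random.default_rng(seed)
        # Random effective throughput of each participating server (units per second).
        rate = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=(trials, k_servers))
        part_time = (file_size / k_servers) / rate     # time to receive each part
        return float(part_time.max(axis=1).mean())     # done when the last part arrives

    for k in (1, 2, 3, 4, 6, 8):
        print(f"{k} server(s): mean download time {mean_download_time(k):.2f}")

    Under these assumptions the mean download time falls sharply for the first few servers and then flattens, illustrating why a small number of servers, as reported above, captures most of the benefit.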