    Point queue models: a unified approach

    In transportation and other types of facilities, various queues arise when the demand for service exceeds the supply, and many point and fluid queue models have been proposed to study such queueing systems. However, there has been no unified approach to deriving such models, analyzing their relationships and properties, and extending them to networks. In this paper, we derive point queue models as limits of two link-based queueing models: the link transmission model and a link queue model. With two definitions of demand and supply for a point queue, we present four point queue models, four approximate models, and their discrete versions. We discuss the properties of these models, including equivalence, well-definedness, smoothness, and queue spillback, both analytically and with numerical examples. We then analytically solve Vickrey's point queue model and the stationary states of various models. We demonstrate that all existing point and fluid queue models in the literature are special cases of those derived from the link-based queueing models. Such a unified approach leads to systematic methods for studying the queueing process at a point facility and will also be helpful for studies of stochastic queues as well as networks of queues. (Comment: 25 pages, 6 figures)
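    As an illustration of the kind of dynamics the paper unifies, below is a minimal discrete-time sketch of a Vickrey-style point queue, not the paper's exact formulation: outflow is capped by the downstream supply, and unserved demand accumulates in a point queue. The function name, the `demand`/`supply` sequences, and the step size `dt` are illustrative assumptions.

```python
# Minimal discrete-time sketch of a Vickrey-style point queue (illustrative,
# not the exact formulation derived in the paper).

def simulate_point_queue(demand, supply, dt=1.0):
    """Evolve a point queue given arrival demand and service supply rates.

    demand, supply: sequences of flow rates (vehicles per unit time), one per step.
    Returns per-step queue lengths and outflows.
    """
    queue = 0.0
    queues, outflows = [], []
    for d, s in zip(demand, supply):
        # Outflow is limited by what wants to leave (current demand plus the
        # queued vehicles that could clear this step) and by the supply.
        out = min(d + queue / dt, s)
        # The queue absorbs whatever inflow could not be served this step.
        queue = max(0.0, queue + (d - out) * dt)
        queues.append(queue)
        outflows.append(out)
    return queues, outflows

if __name__ == "__main__":
    # Demand spikes above a constant supply of 1.0, then drops: a queue forms
    # and then dissipates.
    demand = [0.5] * 5 + [2.0] * 5 + [0.5] * 10
    supply = [1.0] * len(demand)
    q, g = simulate_point_queue(demand, supply)
    for t, (qi, gi) in enumerate(zip(q, g)):
        print(f"t={t:2d}  queue={qi:5.2f}  outflow={gi:4.2f}")
```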

    Performance Modelling and Optimisation of Multi-hop Networks

    A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks, which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation. Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, of a variable rate of advancement towards the destination over the route, and of defending against malicious packets within a certain distance of the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet’s travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
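    To make the jump-and-diffusion idea concrete, here is a rough Monte Carlo sketch, my own simplification rather than the thesis's analytical-numerical method: a packet's remaining distance to the destination drifts downward with Gaussian noise, and an occasional loss event triggers a retransmission jump back to the source. The drift, noise level, loss probability, and per-hop energy cost are all assumed parameters.

```python
import random

# Rough Monte Carlo sketch of packet travel toward a destination, modelled as
# drift-plus-noise advancement with occasional loss jumps (an illustrative
# simplification of a jump-diffusion travel model; all parameters are assumed).

def travel_time_and_energy(distance=10.0, drift=1.0, sigma=0.5,
                           loss_prob=0.02, energy_per_hop=1.0,
                           max_steps=10_000):
    """Simulate one packet; return (hops taken, energy used), or None if undelivered."""
    remaining = distance
    energy = 0.0
    for step in range(1, max_steps + 1):
        # Diffusive advancement: mean progress `drift` per hop, noise `sigma`.
        remaining -= drift + sigma * random.gauss(0.0, 1.0)
        energy += energy_per_hop
        if remaining <= 0.0:
            return step, energy
        if random.random() < loss_prob:
            # Loss jump: the packet is retransmitted from the source.
            remaining = distance
    return None  # did not reach the destination within max_steps

if __name__ == "__main__":
    random.seed(0)
    results = [travel_time_and_energy() for _ in range(5_000)]
    delivered = [r for r in results if r is not None]
    mean_hops = sum(h for h, _ in delivered) / len(delivered)
    mean_energy = sum(e for _, e in delivered) / len(delivered)
    print(f"delivered: {len(delivered)}/5000")
    print(f"mean hops: {mean_hops:.2f}, mean energy: {mean_energy:.2f}")
```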

    On generalized processor sharing and objective functions: analytical framework

    Today, telecommunication networks host a wide range of heterogeneous services. Some have strict delay requirements, while others only need a best-effort kind of service. To achieve service differentiation, network traffic is partitioned into several classes, which are then transmitted according to a flexible and fair scheduling mechanism. Telecommunication networks can, for instance, use an implementation of Generalized Processor Sharing (GPS) in their internal nodes to supply an adequate Quality of Service to each class. GPS is flexible and fair, but also notoriously hard to study analytically. As a result, one has to resort to simulation or approximation techniques to optimize GPS for a given objective function. In this paper, we set up an analytical framework for two-class discrete-time probabilistic GPS that allows the scheduling to be optimized for a generic objective function in terms of the mean unfinished work of both classes, without the need for exact results or estimations/approximations of these performance characteristics. This framework is based on results for strict priority scheduling, which can be regarded as a special case of GPS, and on some specific unfinished-work properties of two-class GPS. We also apply our framework to a popular type of objective function, namely convex combinations of functions of the mean unfinished work. Lastly, we incorporate the framework into an algorithm that yields the optimum of an objective function faster and with less computation.
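    As a concrete illustration of the discipline being studied, here is a small discrete-time simulation sketch of two-class probabilistic GPS, an assumed operational model rather than the paper's analytical framework: in each slot the server serves class 1 with probability beta when both queues hold work, and a convex combination of the two mean unfinished-work values serves as the objective. The arrival probabilities, beta values, and weight are illustrative assumptions.

```python
import random

# Discrete-time sketch of two-class probabilistic GPS (an assumed operational
# model, not the paper's analytical framework): in each slot the server picks
# class 1 with probability beta when both queues hold work, otherwise it
# serves whichever queue is non-empty.

def simulate_gps(beta, arrival_probs=(0.3, 0.4), slots=200_000, seed=1):
    """Return the simulated mean unfinished work (queue length) of each class."""
    random.seed(seed)
    q = [0, 0]
    totals = [0, 0]
    for _ in range(slots):
        # Bernoulli arrivals: at most one packet per class per slot.
        for c in (0, 1):
            if random.random() < arrival_probs[c]:
                q[c] += 1
        # Serve one packet per slot according to the probabilistic GPS rule.
        if q[0] > 0 and q[1] > 0:
            served = 0 if random.random() < beta else 1
        elif q[0] > 0:
            served = 0
        elif q[1] > 0:
            served = 1
        else:
            served = None
        if served is not None:
            q[served] -= 1
        totals[0] += q[0]
        totals[1] += q[1]
    return totals[0] / slots, totals[1] / slots

if __name__ == "__main__":
    # Objective: convex combination w*E[U1] + (1-w)*E[U2], scanned over beta.
    # The paper's framework locates the optimum analytically, without this
    # kind of simulation or approximation.
    w = 0.7
    for beta in (0.1, 0.3, 0.5, 0.7, 0.9):
        u1, u2 = simulate_gps(beta)
        print(f"beta={beta:.1f}  E[U1]={u1:.2f}  E[U2]={u2:.2f}  "
              f"objective={w * u1 + (1 - w) * u2:.2f}")
```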