
    Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?

    This paper investigates two strategies to reduce the communication delay in future wireless networks: traffic dispersion and network densification. A hybrid scheme that combines these two strategies is also considered. The probabilistic delay and effective capacity are used to evaluate performance. For probabilistic delay, the delay violation probability, i.e., the probability that the delay exceeds a given tolerance level, is characterized in terms of upper bounds derived by applying stochastic network calculus theory. In addition, to characterize the maximum affordable arrival traffic for mmWave systems, the effective capacity, i.e., the service capability under a given quality-of-service (QoS) requirement, is studied. The derived bounds on the probabilistic delay and effective capacity are validated through simulations. The numerical results show that, for a given average system gain, traffic dispersion, network densification, and the hybrid scheme exhibit different potentials to reduce the end-to-end communication delay. For instance, traffic dispersion outperforms network densification when the average system gain and arrival rate are high, but it can be the worst option otherwise. Furthermore, it is revealed that increasing the number of independent paths and/or the relay density is always beneficial, while the resulting performance gain depends jointly on the arrival rate and the average system gain. Therefore, a proper transmission scheme should be selected to optimize the delay performance according to the given conditions on arrival traffic and system service capability.
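
    The two metrics used above can be made concrete with a small simulation. The sketch below is a minimal, illustrative stand-in, not the paper's mmWave multi-hop model: it assumes i.i.d. exponential per-slot service, estimates the effective capacity EC(theta) = -(1/(theta*t)) log E[exp(-theta S(t))] from sampled service paths, and estimates a delay-violation-style probability for a single FIFO queue via a Lindley recursion. The service model, parameter values, and function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_capacity(theta, cumulative_service, t):
    """EC(theta) = -(1/(theta*t)) * log E[exp(-theta * S(t))], estimated from
    Monte Carlo samples of the cumulative service S(t) (stable log-mean-exp).
    Large theta needs many sample paths for a reliable estimate."""
    x = -theta * np.asarray(cumulative_service)
    m = x.max()
    log_mgf = m + np.log(np.mean(np.exp(x - m)))
    return -log_mgf / (theta * t)

# Assumed toy service model: i.i.d. exponential per-slot service over t slots
# (the paper's mmWave channel, relaying and multi-path models are not reproduced).
t, n_paths = 200, 20_000
per_slot = rng.exponential(scale=1.0, size=(n_paths, t))
S_t = per_slot.sum(axis=1)

for theta in (0.01, 0.1, 0.5):
    print(f"theta={theta:>4}:  EC ~ {effective_capacity(theta, S_t, t):.3f}")

# Delay-violation-style probability for a single FIFO queue fed at a constant
# rate 'a' per slot, via the Lindley recursion on the backlog W.
a, threshold = 0.8, 5.0
W = np.zeros(n_paths)
for k in range(t):
    W = np.maximum(0.0, W + a - per_slot[:, k])
print(f"P(backlog > {threshold}) ~ {np.mean(W > threshold):.4f}")
```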

    To boldly go: an occam-π mission to engineer emergence

    Future systems will be too complex to design and implement explicitly. Instead, we will have to learn to engineer complex behaviours indirectly: through the discovery and application of local rules of behaviour, applied to simple process components, from which desired behaviours predictably emerge through dynamic interactions between massive numbers of instances. This paper describes a process-oriented architecture for fine-grained concurrent systems that enables experiments with such indirect engineering. Examples are presented showing the differing complex behaviours that can arise from minor (non-linear) adjustments to low-level parameters, the difficulties in suppressing the emergence of unwanted (bad) behaviour, the unexpected relationships between apparently unrelated physical phenomena (revealed by their separate emergence from the same primordial process swamp), and the ability to explore and engineer completely new physics (such as force fields) through their emergence from low-level process interactions whose mechanisms can only be imagined, but not built, at the current time.
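
    The "local rules applied to simple components" idea can be illustrated with a tiny sequential sketch. It is not the paper's occam-π architecture (whose processes run concurrently and communicate over channels); it only shows how a purely local rule, applied to many identical cells, produces a global pattern. The rule, cell count and seed are assumptions for illustration.

```python
import random

# A tiny stand-in for a swarm of simple process components: N cells on a ring,
# each repeatedly applying the same local rule to itself and its two neighbours.
N, STEPS = 64, 24
random.seed(1)

def local_rule(left, me, right):
    # Hypothetical rule (parity of the neighbourhood); the paper's experiments
    # instead tune low-level parameters of concurrent processes.
    return (left + me + right) % 2

cells = [random.randint(0, 1) for _ in range(N)]
for _ in range(STEPS):
    cells = [local_rule(cells[i - 1], cells[i], cells[(i + 1) % N])
             for i in range(N)]
    print("".join("#" if c else "." for c in cells))
```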

    On Time Synchronization Issues in Time-Sensitive Networks with Regulators and Nonideal Clocks

    Flow reshaping is used in time-sensitive networks (as in the context of IEEE TSN and IETF DetNet) in order to reduce burstiness inside the network and to support the computation of guaranteed latency bounds. This is performed using per-flow regulators (such as the Token Bucket Filter) or interleaved regulators (as with IEEE TSN Asynchronous Traffic Shaping). Both types of regulators are beneficial, as they cancel the increase of burstiness due to multiplexing inside the network. It was demonstrated, using network calculus, that they do not increase the worst-case latency. However, the properties of regulators were established assuming that time is perfect in all network nodes. In reality, nodes use local, imperfect clocks. Time-sensitive networks exist in two flavours: (1) in non-synchronized networks, local clocks run independently at every node and their deviations are not controlled; (2) in synchronized networks, the deviations of local clocks are kept within very small bounds using, for example, a synchronization protocol (such as PTP) or a satellite-based geo-positioning system (such as GPS). We revisit the properties of regulators in both cases. In non-synchronized networks, we show that ignoring the timing inaccuracies can lead to network instability due to unbounded delay in per-flow or interleaved regulators. We propose and analyze two methods (rate and burst cascade, and asynchronous dual arrival-curve method) for avoiding this problem. In synchronized networks, we show that there is no instability with per-flow regulators but, surprisingly, interleaved regulators can lead to instability. To establish these results, we develop a new framework that captures industrial requirements on clocks in both non-synchronized and synchronized networks, and we develop a toolbox that extends network calculus to account for clock imperfections. Comment: ACM SIGMETRICS 2020, Boston, Massachusetts, USA, June 8-12, 2020.
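
    To make the regulator mechanism concrete, below is a minimal sketch of a per-flow token-bucket regulator that makes its eligibility decisions according to its own local clock. The class, parameter names and the drift value rho are assumptions for illustration, not the paper's framework or toolbox; the point is only that a clock running fast by a factor (1 + rho) lets slightly more traffic through per unit of true time, which is the kind of deviation the paper analyzes.

```python
from collections import deque

class TokenBucketRegulator:
    """Per-flow regulator sketch (hypothetical class): a packet becomes
    eligible only when the bucket holds enough tokens, so the output conforms
    to the arrival curve gamma_{r,b}(t) = b + r*t as measured by the
    regulator's *local* clock."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst          # start with a full bucket
        self.last = 0.0              # local time of the last token update
        self.backlog = deque()       # FIFO of (arrival_time, size)

    def arrive(self, local_now, size):
        self.backlog.append((local_now, size))

    def release(self, local_now):
        """Return the packets that become eligible by local time 'local_now'."""
        self.tokens = min(self.burst,
                          self.tokens + self.rate * (local_now - self.last))
        self.last = local_now
        out = []
        while self.backlog and self.backlog[0][1] <= self.tokens:
            t_in, size = self.backlog.popleft()
            self.tokens -= size
            out.append((t_in, size))
        return out

# Usage sketch: the regulator's clock runs fast by a factor (1 + rho), so per
# unit of true time it can release up to roughly r*(1 + rho) plus the burst b.
rho = 1e-4                                   # assumed clock-drift bound
reg = TokenBucketRegulator(rate=1.0, burst=2.0)
for true_t in range(10):
    reg.arrive(true_t * (1 + rho), size=1.5)
    print(true_t, reg.release(true_t * (1 + rho)))
```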

    A method to assess demand growth vulnerability of travel times on road network links

    Many national governments around the world have recently turned their focus to monitoring the actual reliability of their road networks. In parallel, there have been major research efforts aimed at developing modelling approaches for predicting the potential vulnerability of such networks and at forecasting the future impact of any mitigating actions. In practice, whether monitoring the past or planning for the future, a confounding factor may arise, namely the potential for systematic growth in demand over a period of years. As this growth occurs, the networks will operate in a regime closer to capacity, in which they are more sensitive to any variation in flow or capacity. Such growth will partially explain trends observed in historic data, and it will have an impact on forecasting too, where we can interpret it as implying that the networks are vulnerable to demand growth. This fact is not reflected in current vulnerability methods, which focus almost exclusively on vulnerability to loss of capacity. In this paper, a simple, moment-based method is developed to separate out the effect of demand growth on the distribution of travel times on a network link, the aim being a tractable, analytic method for medium-term planning applications. The impact of demand growth on the mean, variance and skewness of travel times can thus be isolated. For given critical changes in these summary measures, we are then able to identify what (location-specific) level of demand growth would cause the critical values to be exceeded; this level is referred to as the Demand Growth Reliability Vulnerability (DGRV). Computing the DGRV index for each link of a network also allows the planner to identify the most vulnerable locations in terms of their ability to accommodate growth in demand. Numerical examples are used to illustrate the principles and computation of the DGRV measure.
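
    A rough illustration of the idea, not the paper's analytic moment formulae: assume a standard BPR link travel-time function and day-to-day variation in demand, estimate the mean, variance and skewness of travel time at each candidate growth level by Monte Carlo, and report the smallest growth level at which a chosen critical value (here, a mean travel-time threshold) is exceeded. All functions, parameter values and the single-moment criterion are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def bpr_time(flow, t0=10.0, cap=1800.0, alpha=0.15, beta=4.0):
    """Assumed BPR link travel-time function (minutes); illustrative only."""
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def travel_time_moments(growth, base_demand=1500.0, cv=0.10, n=50_000):
    """Mean, variance and skewness of link travel time when day-to-day demand
    varies around base_demand * (1 + growth); a Monte Carlo stand-in for the
    paper's analytic moment-based method."""
    mean_flow = base_demand * (1.0 + growth)
    flows = np.maximum(rng.normal(mean_flow, cv * mean_flow, n), 0.0)
    t = bpr_time(flows)
    mu, var = t.mean(), t.var()
    skew = np.mean(((t - mu) / np.sqrt(var)) ** 3)
    return mu, var, skew

def dgrv(critical_mean, growth_grid=np.linspace(0.0, 1.0, 101)):
    """Smallest demand growth at which the mean travel time exceeds its
    critical value: one possible single-moment reading of the DGRV index."""
    for g in growth_grid:
        if travel_time_moments(g)[0] > critical_mean:
            return g
    return None

print("DGRV (mean criterion) ~", dgrv(critical_mean=14.0))
```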

    A new approach to service provisioning in ATM networks

    The authors formulate and solve a problem of allocating resources among competing services differentiated by user traffic characteristics and maximum end-to-end delay. The solution leads to an alternative approach to service provisioning in an ATM network, in which the network offers its bandwidth and buffers directly for rent and users freely purchase resources to meet their desired quality. Users make their decisions based on their own traffic parameters and delay requirements, and the network sets prices for those resources. The procedure is iterative, in that the network periodically adjusts prices based on monitored user demand, and decentralized, in that only local information is needed for individual users to determine their resource requests. The authors derive the network's adjustment scheme and the users' decision rule and establish their optimality. Since the approach does not require the network to know user traffic and delay parameters, it does not require traffic policing on the part of the network.
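
    The iterative, decentralized flavour of such a scheme can be sketched with a toy price-adjustment loop. The demand function, step size and capacity below are illustrative assumptions, not the paper's derived adjustment scheme or decision rule: the network nudges the price up when aggregate requests exceed capacity and down otherwise, while each user computes its own request from private information only.

```python
# Toy tatonnement-style sketch (illustrative; not the paper's derived scheme).
CAPACITY = 100.0                      # assumed link bandwidth units
user_scales = [40.0, 60.0, 80.0]      # hypothetical per-user willingness-to-pay

def user_request(scale, price):
    # Stand-in for the user's optimal purchase given its traffic and delay target.
    return scale / (1.0 + price)

price, step = 1.0, 0.01
for _ in range(2000):
    demand = sum(user_request(s, price) for s in user_scales)
    price = max(0.0, price + step * (demand - CAPACITY))   # decentralized price update
print(f"price ~ {price:.3f}, total demand ~ {demand:.1f}, capacity {CAPACITY}")
```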

    Queue Dynamics With Window Flow Control

    This paper develops a new model that describes the queueing process of a communication network when data sources use window flow control. The model takes into account burstiness at sub-round-trip-time (RTT) timescales and the instantaneous rate differences of a flow at different links. It is generic and independent of the actual source flow-control algorithms. Basic properties of the model and its relation to existing work are discussed. In particular, for a general network with multiple links, it is demonstrated that spatial interaction of oscillations allows queue instability to occur even when all flows have the same RTTs and maintain constant windows. The model is used to study the dynamics of delay-based congestion control algorithms. It is found that the ratios of RTTs are critical to the stability of such systems, and previously unknown modes of instability are identified. Packet-level simulations and testbed measurements are provided to verify the model and its predictions.
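
    As context for the window/queue coupling that the paper models in much finer (sub-RTT) detail, here is a coarse single-bottleneck fluid sketch: each flow with a fixed window W_i sends at roughly W_i / RTT_i(t), where RTT_i(t) = d_i + q(t)/C, and the queue integrates the excess arrival rate. This crude model converges for a single link; it is the multi-link spatial interaction, which this sketch does not capture, that the paper shows can destabilize the queues. Parameter names and values are assumptions for illustration.

```python
# Coarse fluid sketch of one bottleneck shared by window-controlled flows
# (illustrative only; it ignores the sub-RTT burstiness the paper models).
C = 100.0                              # bottleneck capacity, pkts/ms (assumed)
flows = [(80.0, 1.0), (120.0, 5.0)]    # (window W_i in pkts, prop. delay d_i in ms)
dt, T = 0.01, 200.0                    # Euler step and horizon, ms

q = 0.0
for _ in range(int(T / dt)):
    arrival = sum(W / (d + q / C) for W, d in flows)   # self-clocked rates W_i / RTT_i
    q = max(0.0, q + dt * (arrival - C))               # queue integrates excess rate

# At equilibrium, sum_i W_i / (d_i + q*/C) = C; here q* is roughly 5 packets.
print(f"final queue ~ {q:.2f} pkts")
```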