    Uncertainty in Multi-Commodity Routing Networks: When does it help?

    We study the equilibrium behavior in a multi-commodity selfish routing game with many types of uncertain users, where each user over- or under-estimates their congestion costs by a multiplicative factor. Surprisingly, we find that uncertainties in different directions have qualitatively distinct impacts on equilibria. Namely, contrary to the usual notion that uncertainty increases inefficiency, network congestion actually decreases when users over-estimate their costs. On the other hand, under-estimation of costs leads to increased congestion. We apply these results to urban transportation networks, where drivers have different estimates of the cost of congestion. In light of dynamic pricing policies aimed at tackling congestion, our results indicate that users' perception of these prices can significantly impact the policy's efficacy, and that "caution in the face of uncertainty" leads to favorable network conditions. (Comment: currently under review.)
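    A minimal Pigou-style sketch (an assumed two-link toy, not the paper's multi-commodity model) illustrates the claimed asymmetry: if users perceive the congestion-dependent cost scaled by a factor alpha, over-estimation (alpha > 1) pushes traffic off the congestible link and lowers the true total cost.

```python
# Pigou-style two-link toy (an illustrative assumption, not the paper's model).
# Link A costs x, the fraction of traffic using it; link B costs a constant 1.
# Users route until *perceived* costs equalise, perceiving the congestion cost
# on A scaled by a factor alpha (alpha > 1 means over-estimation).

def equilibrium_flow(alpha: float) -> float:
    """Fraction on link A where perceived costs equalise: alpha * x = 1."""
    return min(1.0, 1.0 / alpha)   # capped: flow is a fraction of unit demand

def total_true_cost(x: float) -> float:
    """Social cost actually incurred: x users pay x on A, the rest pay 1 on B."""
    return x * x + (1.0 - x)

for alpha in (0.5, 1.0, 1.5, 2.0):
    x = equilibrium_flow(alpha)
    print(f"alpha={alpha:3.1f}  flow on A={x:.2f}  true cost={total_true_cost(x):.3f}")

# alpha=1.0 gives the classic inefficient equilibrium (everyone on A, cost 1.0);
# alpha=2.0 recovers the social optimum (cost 0.75).  In this toy the alpha=1
# equilibrium is already maximally congested, so under-estimation (alpha < 1)
# cannot make it worse here, though it can in richer networks.
```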

    A Parameterisation of Algorithms for Distributed Constraint Optimisation via Potential Games

    This paper introduces a parameterisation of learning algorithms for distributed constraint optimisation problems (DCOPs). This parameterisation encompasses many algorithms developed in both the computer science and game theory literatures. It is built on our insight that, when formulated as noncooperative games, DCOPs form a subset of the class of potential games. This result allows us to prove convergence properties of algorithms developed in the computer science literature using game-theoretic methods. Furthermore, our parameterisation can assist system designers by making clear the pros and cons of, and the synergies between, the various DCOP algorithm components.
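    As a minimal sketch of the potential-game insight, consider distributed graph colouring (an assumed toy DCOP instance, not from the paper): when each agent's utility is its number of satisfied incident constraints, the total number of satisfied constraints is an exact potential, so sequential best response, one of the simplest points in such a parameterisation, must converge.

```python
import random

# Toy DCOP: distributed graph colouring (an assumed instance).  A constraint
# is satisfied when its two endpoint colours differ.  With each agent's
# utility = its number of satisfied incident constraints, the total number of
# satisfied constraints is an exact potential function, so every strictly
# improving move raises the potential and best-response dynamics terminate.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # constraint graph
domain = [0, 1, 2]                                  # available colours
n_agents = 4

def utility(agent, assignment):
    """Satisfied constraints incident to `agent` under `assignment`."""
    return sum(assignment[a] != assignment[b]
               for a, b in edges if agent in (a, b))

def potential(assignment):
    return sum(assignment[a] != assignment[b] for a, b in edges)

random.seed(0)
assignment = [random.choice(domain) for _ in range(n_agents)]

changed = True
while changed:                                      # best-response dynamics
    changed = False
    for agent in range(n_agents):
        for colour in domain:
            trial = list(assignment)
            trial[agent] = colour
            if utility(agent, trial) > utility(agent, assignment):
                assignment = trial                  # strictly improving move
                changed = True
    print(assignment, "potential =", potential(assignment))
```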

    ATP: a Datacenter Approximate Transmission Protocol

    Many datacenter applications, such as machine learning and streaming systems, do not need the complete set of data to perform their computation. Current approximate applications in datacenters run on a reliable network layer like TCP. To improve performance, they either let the sender select a subset of the data and transmit it to the receiver, or transmit all the data and let the receiver drop some of it. These approaches are network-oblivious and transmit more data than necessary, affecting both application runtime and network bandwidth usage. On the other hand, running approximate applications on a lossy network with UDP cannot guarantee the accuracy of the application's computation. We propose to run approximate applications on a lossy network and to allow packet loss in a controlled manner. Specifically, we designed a new network protocol called the Approximate Transmission Protocol, or ATP, for datacenter approximate applications. ATP opportunistically exploits as much available network bandwidth as possible, while performing a loss-based rate control algorithm to avoid bandwidth waste and retransmission. It also ensures fair bandwidth sharing across flows and improves accurate applications' performance by leaving more switch buffer space to accurate flows. We evaluated ATP with both simulation and a real implementation, using two macro-benchmarks and two real applications, Apache Kafka and Flink. Our evaluation results show that ATP reduces application runtime by 13.9% to 74.6% compared to a TCP-based solution that drops packets at the sender, and improves accuracy by up to 94.0% compared to UDP.
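    The sketch below shows a generic loss-based rate controller in the spirit the abstract describes; it is an illustrative assumption, not ATP's actual algorithm (the loss budget, probe step, and backoff factor are all made up for the example).

```python
# Generic loss-based rate controller (an illustrative sketch; ATP's actual
# algorithm is specified in the paper).  The sender probes upward for spare
# bandwidth while measured loss stays within the application's tolerable
# approximation budget, and backs off multiplicatively once loss exceeds it,
# avoiding both bandwidth waste and retransmission.

def update_rate(rate_mbps: float, loss_rate: float,
                loss_budget: float = 0.05,    # assumed tolerable loss fraction
                probe_step: float = 1.0,      # assumed additive increase, Mbps
                backoff: float = 0.8) -> float:
    """One control interval of additive-increase / multiplicative-decrease."""
    if loss_rate <= loss_budget:
        return rate_mbps + probe_step         # opportunistically grab bandwidth
    return max(1.0, rate_mbps * backoff)      # saturated: yield to other flows

rate = 10.0
for loss in (0.00, 0.01, 0.02, 0.08, 0.12, 0.03):  # sample per-interval losses
    rate = update_rate(rate, loss)
    print(f"observed loss {loss:.2f} -> next rate {rate:.1f} Mbps")
```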

    Re-feedback: freedom with accountability for causing congestion in a connectionless internetwork

    This dissertation concerns adding resource accountability to a simplex internetwork such as the Internet, with only necessary but sufficient constraints on freedom. That is, both freedom for applications to evolve new innovative behaviours while still responding responsibly to congestion, and freedom for network providers to structure their pricing in any way, including flat pricing. The big idea on which the research is built is a novel feedback arrangement termed ‘re-feedback’. A general form is defined, as well as a specific proposal (re-ECN) to alter the Internet protocol so that self-contained datagrams carry a metric of expected downstream congestion. Congestion is chosen because of its central economic role as the marginal cost of network usage. The aim is to ensure that Internet resource allocation can be controlled either by local policies or by market selection (or indeed by a local lack of any control). The current Internet architecture is designed to reveal path congestion only to end-points, not to networks. The collective actions of self-interested consumers and providers should drive Internet resource allocation towards maximisation of total social welfare, but without visibility of a cost metric, network operators are violating the architecture to improve their customers' experience. The resulting fight against the architecture is destroying the Internet's simplicity and its ability to evolve. Although accountability with freedom is the goal, the focus is the congestion metric, and whether an incentive system is possible that assures its integrity as it is passed between parties around the system, despite proposed attacks motivated by self-interest and malice. This dissertation defines the protocol and canonical examples of accountability mechanisms. Designs are all derived from carefully motivated principles. The resulting system is evaluated by analysis and simulation against the constraints and principles originally set. The mechanisms are proven to be agnostic to specific transport behaviours, but they could not be made flow-ID-oblivious.
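    A toy model of the core re-feedback invariant follows (illustrative assumptions throughout; the actual re-ECN proposal encodes this in ECN codepoints, not a numeric field): the sender declares expected whole-path congestion in each datagram, each hop subtracts its own contribution, and the residual visible at any point approximates expected downstream congestion, which is the metric operators need for accountability.

```python
# Toy model of the re-feedback invariant (hop model and values are assumed;
# real re-ECN uses IP/ECN codepoints, not a float field).  The sender
# declares its estimate of whole-path congestion in each datagram; every hop
# subtracts the congestion it contributes.  The residual visible at any hop
# approximates expected *downstream* congestion, so networks, not just
# end-points, can see and police the cost a flow is about to cause.

hop_congestion = [0.01, 0.03, 0.02]   # assumed per-hop marking probabilities

def forward(declared: float) -> None:
    field = declared                   # sender's whole-path declaration
    for i, c in enumerate(hop_congestion):
        print(f"  hop {i}: declared downstream congestion = {field:.3f}")
        field -= c                     # hop consumes its share of the declaration
    # Residual ~ 0 at the receiver iff the declaration was honest; a
    # persistently negative residual exposes an under-declaring sender.
    print(f"  residual at receiver = {field:+.3f}")

print("honest sender:")
forward(declared=sum(hop_congestion))
print("under-declaring sender:")
forward(declared=0.02)
```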

    Development of dynamic recursive models for freeway travel time predictions

    Traffic congestion has been a major problem in metropolitan areas, caused by either insufficient roadway capacity or unforeseeable incidents. In order to improve the efficiency of existing roadway networks and mitigate the impact of traffic congestion, a sound prediction model for travel times is desirable. A comprehensive literature review of existing prediction models was conducted, investigating the advantages, disadvantages, and limitations of each model. Based on the features and properties of previous models, three base models, the exponential smoothing model (ESM), the moving average model (MAM), and the Kalman filtering model (KFM), are developed to capture the stochastic properties of traffic behavior for travel time prediction. By incorporating the KFM into the ESM and MAM, three dynamic recursive prediction models are developed: the dynamic exponential smoothing model (DESM), the improved dynamic exponential smoothing model (IDESM), and the dynamic moving average model (DMAM), in which the time-varying weight parameters are optimized based on the most recent observation. Model evaluation was conducted to analyze prediction accuracy under various traffic conditions (e.g., free-flow conditions and recurrent and non-recurrent congested conditions). Results show that the IDESM in general outperforms the other models developed in this study in prediction accuracy and stability. In addition, the features and logic of the IDESM give it high transferability and adaptability, enabling the prediction model to perform well at multiple locations and to handle complicated traffic conditions. Besides its predictive capability, the IDESM is easy to implement in a real-world transportation network. Thus, the IDESM proves to be an appealing approach for short-term travel time prediction under various traffic conditions. The application scope of the IDESM is identified, and optimal prediction intervals are also suggested in this study.
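    A minimal sketch of two of the base models follows, under assumed parameters (the IDESM's dynamically optimised time-varying weights are not reproduced here): simple exponential smoothing and a scalar random-walk Kalman filter, each producing one-step-ahead travel-time forecasts.

```python
# Minimal one-step-ahead travel-time predictors (parameters and data are
# assumed; the thesis's IDESM with observation-driven weights is not shown).

def exponential_smoothing(obs, alpha=0.4):
    """ESM: pred[t+1] = alpha*obs[t] + (1 - alpha)*pred[t], seeded with obs[0]."""
    preds = [obs[0]]
    for y in obs[:-1]:
        preds.append(alpha * y + (1 - alpha) * preds[-1])
    return preds

def kalman_1d(obs, q=1.0, r=4.0):
    """Scalar Kalman filter with a random-walk state model: the prior state
    estimate is the forecast; each observation then corrects it."""
    x, p, preds = obs[0], 1.0, []
    for z in obs:
        preds.append(x)                # forecast issued before seeing z
        p += q                         # predict: process noise inflates variance
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update: blend forecast and observation
        p *= 1.0 - k
    return preds

travel_times = [12.0, 12.5, 13.2, 15.0, 18.5, 17.9, 16.0, 14.2]  # minutes (made up)
for name, preds in (("ESM", exponential_smoothing(travel_times)),
                    ("KFM", kalman_1d(travel_times))):
    mae = sum(abs(p - y) for p, y in zip(preds, travel_times)) / len(travel_times)
    print(f"{name}: mean absolute error = {mae:.2f} min")
```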

    Reducing Internet Latency : A Survey of Techniques and their Merit

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.