
    Scalable Traffic Engineering for Higher Throughput in Heavily-loaded Software Defined Networks

    Existing traffic engineering (TE) solutions perform well for software-defined networks (SDNs) in the average case. During peak hours, however, bursty traffic spikes are difficult to handle: with a limited number of flow entries, it is hard to react in time and to guarantee high performance even after failures. Instead of leaving capacity idle to guarantee that no congestion occurs when traffic is rerouted after failures or paths are updated after demand or topology changes, we choose to make full use of the network capacity to satisfy demand during heavily loaded peak hours. The TE system must also react to failures quickly and use priority queues to protect loss- and delay-sensitive traffic. We propose TED, a scalable TE system that guarantees high throughput during peak hours. TED quickly computes a maximum-size group of edge-disjoint paths for each ingress-egress switch pair. We design two methods to select paths under the flow-entry limit, then feed the selected paths to our TE optimization, which minimizes the maximum link utilization. If the traffic matrix is so large that the maximum link utilization exceeds 1, we feed the utilization and the traffic matrix into a second optimization that maximizes overall throughput under a new constraint. This yields a realistic traffic matrix that attains the maximum overall throughput while guaranteeing no traffic starvation for any switch pair. Experiments show that TED performs much better on heavily loaded SDNs and, under the same flow-entry limit, has a 10% higher probability of satisfying nearly all (> 99.99%) of the traffic after a single link failure on the G-Scale topology than SMORE. Comment: an 8-page, double-column version of the paper was submitted to the NOMS 2020 technical session.
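    The TE step described above (pick paths under a flow-entry limit, then minimize the maximum link utilization) can be illustrated with a greedy sketch. Everything below is illustrative, not TED itself: the topology, capacities, demand, and function names are made up, and TED solves this as a proper optimization rather than greedily.

```python
# Greedy sketch: for each demand, pick the candidate path that keeps the
# maximum link utilization lowest. Topology, capacities, and demands are
# illustrative; TED itself solves a min-max optimization, not a greedy.

def link_pairs(path):
    """Return the directed links along a node path."""
    return list(zip(path, path[1:]))

def greedy_min_max_utilization(capacity, demands, candidate_paths):
    """capacity: {(u, v): cap}; demands: {(src, dst): rate};
    candidate_paths: {(src, dst): [path, ...]}, e.g. edge-disjoint paths."""
    load = {link: 0.0 for link in capacity}
    chosen = {}
    for pair, rate in demands.items():
        best_path, best_util = None, float("inf")
        for path in candidate_paths[pair]:
            # Max utilization if this demand were placed on this path.
            trial = max(
                (load[l] + (rate if l in link_pairs(path) else 0)) / capacity[l]
                for l in capacity
            )
            if trial < best_util:
                best_path, best_util = path, trial
        for l in link_pairs(best_path):
            load[l] += rate
        chosen[pair] = best_path
    max_util = max(load[l] / capacity[l] for l in capacity)
    return chosen, max_util

capacity = {("A", "B"): 10, ("A", "C"): 10, ("C", "B"): 10}
demands = {("A", "B"): 12}  # exceeds any single link's capacity
candidate_paths = {("A", "B"): [["A", "B"], ["A", "C", "B"]]}
chosen, util = greedy_min_max_utilization(capacity, demands, candidate_paths)
print(chosen, util)  # utilization > 1: the regime TED's second optimization handles
```

    Note that the resulting utilization exceeds 1, which is exactly the regime where the abstract's second optimization (maximize throughput under a new constraint) takes over.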

    Lexicographically Fair Learning: Algorithms and Generalization

    We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short). Informally, given a collection of demographic groups of interest, minimax fairness asks that the error of the group with the highest error be minimized. Lexifairness goes further and asks that among all minimax fair solutions, the error of the group with the second highest error should be minimized; among all of those solutions, the error of the group with the third highest error should be minimized; and so on. Despite its naturalness, correctly defining lexifairness is considerably more subtle than minimax fairness, because of its inherent sensitivity to approximation error. We give a notion of approximate lexifairness that avoids this issue, and then derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting. When the underlying empirical risk minimization problem absent fairness constraints is convex (as it is, for example, with linear and logistic regression), our algorithms are provably efficient even in the worst case. Finally, we show generalization bounds: approximate lexifairness on the training sample implies approximate lexifairness on the true distribution with high probability. Our ability to prove generalization bounds depends on our choosing definitions that avoid the instability of naive definitions.
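    The lexicographic ordering described above has a compact illustration when the candidate set is finite: sort each model's group errors in descending order and compare the resulting vectors lexicographically. The models and error numbers below are made up, and the paper's oracle-efficient algorithms handle infinite (e.g. convex) model classes rather than enumeration, so this is only a toy.

```python
# Toy illustration of lexifair selection over a finite candidate set:
# comparing descending-sorted group-error vectors lexicographically encodes
# "minimize the worst group, then the second worst, ...". Candidate models
# and errors are made up for illustration.

def lexifair_key(group_errors):
    """Sort errors high-to-low; lexicographic order on this vector is
    exactly the lexifairness preference order."""
    return sorted(group_errors, reverse=True)

candidates = {
    "model_a": [0.30, 0.10, 0.10],  # worst group error: 0.30
    "model_b": [0.20, 0.12, 0.05],  # worst: 0.20, second worst: 0.12
    "model_c": [0.20, 0.15, 0.10],  # same worst, worse second worst
}
best = min(candidates, key=lambda m: lexifair_key(candidates[m]))
print(best)  # model_b: ties model_c on the worst group, wins on the second
```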

    FFMRA: A Fully Fair Multi-Resource Allocation Algorithm in Cloud Environments

    The need for effective and fair resource allocation in cloud computing has been recognized in the literature and in industrial contexts for some time now. Cloud computing, as a promising technology, offers usage-based payment and on-demand computing resources. However, in the past decade the growing complexity of the IT world has made Quality of Service (QoS) in the cloud a challenging subject and an NP-hard problem. In particular, fair allocation of resources in the cloud is one of the most important aspects of QoS, and it becomes especially interesting when many users submit tasks and requests involving multiple resources. Research in this area dates to 2012, when the Dominant Resource Fairness (DRF) algorithm was introduced as an initial attempt to solve the fair resource allocation problem in the cloud. Although DRF has some good fairness properties, it has been shown to be inefficient in some conditions. Notably, DRF and the works extending it have not proven intuitively fair after all: these implementations fail to utilize all the resources in the system and, more specifically, leave the system imbalanced with respect to each individual resource. To tackle those problems, in this paper we propose a novel algorithm, FFMRA, inspired by DRF, which allocates resources in a fully fair way considering both dominant and non-dominant shares. The results from our experiments show that the proposed method achieves approximately 100% resource utilization, distributes resources fairly among users, and satisfies desirable fairness properties.
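    The DRF baseline that FFMRA builds on is easy to sketch: progressive filling repeatedly grants one task to the user whose dominant share (their largest fraction of any one resource) is currently smallest. The sketch below uses the classic two-user, two-resource example from the DRF literature; it is a minimal illustration, not FFMRA itself.

```python
# Sketch of DRF progressive filling: repeatedly allocate one task to the
# user with the smallest dominant share, until no allocation fits.
# Capacities and per-task demands follow the classic DRF example.

def drf(capacity, demands, steps=100):
    """capacity: [cpu, mem]; demands: {user: [cpu, mem] per task}.
    Returns the number of tasks granted to each user."""
    used = [0.0] * len(capacity)
    tasks = {u: 0 for u in demands}
    dom_share = {u: 0.0 for u in demands}
    for _ in range(steps):
        # The user with the smallest dominant share goes next.
        u = min(demands, key=lambda user: dom_share[user])
        need = demands[u]
        if any(used[i] + need[i] > capacity[i] for i in range(len(capacity))):
            break  # that user's next task no longer fits: stop (sketch only)
        for i in range(len(capacity)):
            used[i] += need[i]
        tasks[u] += 1
        dom_share[u] = max(tasks[u] * need[i] / capacity[i]
                           for i in range(len(capacity)))
    return tasks

# 9 CPUs and 18 GB memory; user A needs <1 CPU, 4 GB> per task,
# user B needs <3 CPUs, 1 GB> per task.
print(drf([9, 18], {"A": [1, 4], "B": [3, 1]}))
```

    The run leaves some capacity unused, which is precisely the kind of imbalance the abstract criticizes DRF for and that FFMRA aims to remove.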

    Resource Sharing and Fairness in Network Control

    As required communication performance and the number of concurrent users grow without bound, efficient sharing of network resources among multiple users and applications becomes essential. This paper surveys notions of "reasonable" resource sharing that have been proposed from the viewpoints of system efficiency and inter-user fairness. As an example, we introduce our own work on time-and-space scheduling for one-to-many file transfer, in which different parts of a file are transferred simultaneously over multiple paths while multicast reduces wasteful duplicate transmission, and we discuss what makes a schedule (solution) reasonable and how to search for reasonable solutions.

    Fairness in dynamic networks

    The main focus of this research is fairness in dynamic networks. Two specific applications are mathematically formulated: heating, ventilation, and air conditioning (HVAC), and code division multiple access (CDMA). In the first problem, fair power allocation for temperature regulation of a multi-unit building, the temperature of each unit is described by a discrete-time dynamic equation. The formulation accounts for the outside temperature and for heat transfer between adjacent rooms. Temperature regulation is then described as a constrained optimization problem, where the objective is to maintain the temperature of each unit within a prescribed thermal comfort zone using a limited amount of power. An optimal control strategy is presented that minimizes the maximum mutual temperature difference between units (long-term fairness) while keeping the temperature of each unit in, or as close as possible to, the comfort zone at all times (short-term fairness). Simulations demonstrate the effectiveness of the proposed control strategy in regulating the temperature of every unit in a building. In the second application, an optimization-based fair reverse-link rate assignment strategy is proposed for fair resource allocation in a CDMA network. The network is modeled as a star topology, where the nodes represent either the base station (BS) or access terminals (ATs). At every instant, the BS computes the fair rate for each AT by minimizing the maximum disparity in users' rates, and then sends a single bit to all ATs. It is shown that if each AT can compute a specific quantity, called the coordinating variable, it can find its own fair rate, which makes the decision-making strategy distributed. The proposed method is computationally efficient, and simulations confirm its efficacy in different scenarios.
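    Both formulations above are min-max fairness problems: minimize the worst-off party's gap. A standard illustration of this family is max-min fair allocation by progressive filling, sketched below. The capacity number and demands are made up, and this is the generic textbook scheme, not the thesis's HVAC or CDMA model.

```python
# Sketch of max-min fair allocation by progressive filling: split the
# capacity equally among active users, return any share a capped user
# cannot absorb to the pool, and repeat. Capacity and demands are
# illustrative, not the thesis's CDMA formulation.

def max_min_fair(capacity, demands):
    """Allocate `capacity` across users; no user exceeds its demand."""
    alloc = {u: 0.0 for u in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        remaining = 0.0
        for u in list(active):
            give = min(share, demands[u] - alloc[u])
            alloc[u] += give
            remaining += share - give  # unused share returns to the pool
            if demands[u] - alloc[u] <= 1e-12:
                active.remove(u)  # user is satisfied
    return alloc

# User "a" wants little; "b" and "c" split what "a" leaves, equally.
alloc = max_min_fair(10, {"a": 2, "b": 8, "c": 8})
print(alloc)
```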

    Coded-MPMC: One-to-Many Transfer Using Multipath Multicast With Sender Coding

    Fast and efficient one-to-many transfers are essential to meet the growing need for duplicating, migrating, or sharing bulk data among servers within a datacenter and across geographically distributed datacenters. Some existing works use multiple multicast trees for a one-to-many transfer request to increase network link utilization and transfer throughput. However, because those schemes do not fully utilize the max-flow value from the single sender to each recipient, there is room for each recipient to retrieve data more quickly. Therefore, assuming fully controlled networks with full-duplex links, we pose the problem of finding a set of multicast flows, with an allocation of block-wise transmissions, by which each of multiple recipients with diverse max-flow values from the sender can utilize its own max-flow value. Building on that, and assuming a sender-side coding capability over file blocks, we design a schedule of block transmissions over multiple phases by which each recipient achieves a lower bound on its file retrieval completion time, namely the file size divided by its own max-flow value. This paper presents coded Multipath Multicast (Coded-MPMC) for one-to-many transfers, with heuristic procedures to find a desired set of multicast flows on which block transmissions are scheduled. Through extensive simulations on large-scale real-world network topologies and different types of randomly generated synthetic topologies, the proposed method is shown to design a desired schedule efficiently. A preliminary OpenFlow implementation is also reported, demonstrating the fundamental feasibility of Coded-MPMC.
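    The lower bound Coded-MPMC targets is file_size / max_flow(sender, r) for each recipient r, so computing per-recipient max-flow values is the first step. A minimal Edmonds-Karp (BFS augmenting paths) sketch is below; the topology, link capacities, and file size are made-up examples, not from the paper.

```python
# Compute each recipient's completion-time lower bound,
# file_size / max_flow(sender, recipient), with a minimal Edmonds-Karp.
# The topology and file size are illustrative.
from collections import deque

def max_flow(cap, s, t):
    """cap: {u: {v: capacity}} directed graph; returns max s->t flow value."""
    # Build residual capacities, adding reverse edges at 0.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:  # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

links = {"s": {"a": 4, "b": 6}, "a": {"r1": 4}, "b": {"r1": 2, "r2": 3}}
file_size = 12
for r in ("r1", "r2"):
    mf = max_flow(links, "s", r)
    print(r, mf, file_size / mf)  # recipient, max-flow, lower bound
```

    Recipients with different max-flow values get different lower bounds, which is exactly the diversity the paper's multicast-flow selection is designed to exploit.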

    Ethical Machine Learning: Fairness, Privacy, And The Right To Be Forgotten

    Large-scale algorithmic decision making has increasingly run afoul of various social norms, laws, and regulations. A prominent concern is when a learned model exhibits discrimination against some demographic group, perhaps based on race or gender. Concerns over such algorithmic discrimination have led to a recent flurry of research on fairness in machine learning, which includes new tools for designing fair models and studies of the tradeoffs between predictive accuracy and fairness. We address algorithmic challenges in this domain. Preserving the privacy of data during analysis is not only a basic right for users but is also required by laws and regulations. How should one preserve privacy? After about two decades of fruitful research in this domain, differential privacy (DP) is considered by many the gold-standard notion of data privacy. We focus on how differential privacy can be useful beyond preserving data privacy; in particular, we study the connection between differential privacy and adaptive data analysis. Users voluntarily provide huge amounts of personal data to businesses such as Facebook, Google, and Amazon in exchange for useful services. But a basic principle of data autonomy asserts that users should be able to revoke access to their data if they no longer find the exchange of data for services worthwhile. The right of users to request the erasure of personal data appears in regulations such as the Right to be Forgotten of the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). We provide algorithmic solutions to the problem of removing the influence of data points from machine learning models.

    Energy-Aware Traffic Engineering for Wired IP Networks

    Although the Internet is commonly considered a formidable means to reduce the impact of human activities on the environment, its energy consumption is rapidly becoming an issue due to exponential traffic growth and the rapid expansion of communication infrastructures worldwide. The estimated consumption of network equipment, excluding servers in data centers, was 22 GW in 2007, and in 2010 the yearly consumption of the largest Internet service providers, e.g., AT&T, exceeded 10 TWh. This alarming trend has motivated new strategies to reduce the consumption of telecommunication networks, with a particular focus on IP networks. Besides the development of a new generation of green network equipment, a second strategy for optimizing IP network consumption is sleep-based energy-aware network management (SEANM), which adapts the network's power consumption to the traffic levels by optimizing the network configuration and putting redundant network elements to sleep. Device sleeping is the main potential source of savings because the consumption of current network devices is not proportional to their utilization level, so overall network consumption remains constantly close to its maximum. In current IP networks, quality of service (QoS) and resilience to failures are typically guaranteed by substantially over-dimensioning the whole network infrastructure; therefore, even during peak hours, a non-negligible subset of redundant network devices could be put to sleep. Given the heterogeneity of current network technologies, in this thesis we develop centralized SEANM approaches for IP networks operated with different configurations and protocols.
    More precisely, we consider networks operated with different routing schemes, namely shortest-path (OSPF) and flow-based (MPLS) routing, and take into account different types of traffic, i.e., elastic or inelastic. The centralized approach, with a single management platform responsible for configuring and monitoring the whole network, is motivated by network operators' need to remain constantly in control of network dynamics. To fully guarantee network stability, we investigate the impact of SEANM on network resilience to failures and robustness to traffic variations, and we integrate ad hoc modeling techniques into the proposed SEANM frameworks to treat resilience and robustness explicitly as network constraints. Finally, to implement the proposed procedures in a realistic network environment, we propose a novel, fully configurable network management framework called JNetMan, which we use to develop and test a dynamic version of the SEANM procedure for IP networks operated with shortest-path routing protocols.
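    The core SEANM idea, putting redundant elements to sleep while the network keeps serving traffic, can be sketched greedily: try switching each link off, and keep it awake only if removing it would disconnect some traffic-carrying pair. The topology, demand pairs, and function names below are made up, and real SEANM formulations also enforce utilization, QoS, resilience, and robustness constraints.

```python
# Greedy sketch of sleep-based management: put a link to sleep if all
# demand pairs remain connected without it. Real SEANM variants also
# check link utilization, QoS, and resilience constraints.

def reachable(adj, src):
    """Nodes reachable from src via depth-first search."""
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def greedy_sleep(links, demand_pairs):
    """links: set of undirected (u, v) edges; demand_pairs: pairs that
    must stay connected. Returns the subset of links left awake."""
    awake = set(links)
    for link in sorted(links):  # deterministic order; real SEANM ranks by power draw
        trial = awake - {link}
        adj = {}
        for u, v in trial:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        if all(d in reachable(adj, s) for s, d in demand_pairs):
            awake = trial  # link is redundant: put it to sleep
    return awake

# A 4-node ring plus a chord: two of the five links can sleep.
ring = {("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("a", "c")}
print(sorted(greedy_sleep(ring, [("a", "c"), ("b", "d")])))
```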