
    Cluster-Aided Mobility Predictions

    Predicting the future location of users in wireless networks has numerous applications and can help service providers improve the quality of service perceived by their clients. The location predictors proposed so far estimate the next location of a specific user by inspecting only the past individual trajectories of that user. As a consequence, when the training data collected for a given user is limited, the resulting prediction is inaccurate. In this paper, we develop cluster-aided predictors that exploit past trajectories collected from all users to predict the next location of a given user. These predictors rely on clustering techniques and extract from the training data similarities among the mobility patterns of the various users to improve the prediction accuracy. Specifically, we present CAMP (Cluster-Aided Mobility Predictor), a cluster-aided predictor whose design is based on recent non-parametric Bayesian statistical tools. CAMP is robust and adaptive in the sense that it exploits similarities in users' mobility only if such similarities are actually present in the training data. We analytically prove the consistency of the predictions provided by CAMP and investigate its performance using two large-scale datasets. CAMP significantly outperforms existing predictors, in particular those that only exploit individual past trajectories.
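    The sketch below illustrates the cluster-aided idea in its simplest form: users are grouped by their empirical transition frequencies, and a user's next location is predicted from transition counts pooled over the whole cluster. This is only a toy illustration under strong assumptions (discrete locations, k-means clustering, order-1 transitions), not the paper's non-parametric Bayesian CAMP; all names and data are hypothetical.

```python
# Toy cluster-aided next-location prediction: cluster users by transition
# frequencies, then predict from counts pooled over the user's cluster.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

N_LOCATIONS = 5  # hypothetical number of discrete locations

def transition_matrix(trajectory, n=N_LOCATIONS):
    """Row-normalized empirical transition frequencies of one user."""
    counts = np.zeros((n, n))
    for a, b in zip(trajectory, trajectory[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n), where=rows > 0)

def cluster_aided_predict(trajectories, user_id, n_clusters=2):
    """Predict the next location of `user_id` using trajectories of similar users."""
    users = list(trajectories)
    features = np.array([transition_matrix(trajectories[u]).ravel() for u in users])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    cluster = labels[users.index(user_id)]
    # Pool transition counts over all users assigned to the same cluster.
    pooled = Counter()
    for u, lab in zip(users, labels):
        if lab == cluster:
            traj = trajectories[u]
            pooled.update(zip(traj, traj[1:]))
    current = trajectories[user_id][-1]
    candidates = {b: c for (a, b), c in pooled.items() if a == current}
    return max(candidates, key=candidates.get) if candidates else current

trajs = {
    "u1": [0, 1, 2, 0, 1, 2, 0, 1],
    "u2": [0, 1, 2, 0, 1, 2, 0],
    "u3": [3, 4, 3, 4, 3, 4, 3],
}
print(cluster_aided_predict(trajs, "u1"))  # likely 2, learned also from u2's data
```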

    Adaptive Replication in Distributed Content Delivery Networks

    We address the problem of content replication in large distributed content delivery networks, composed of a data center assisted by many small servers with limited capabilities located at the edge of the network. The objective is to optimize the placement of contents on the servers so as to offload the data center as much as possible. We model the system constituted by the small servers as a loss network, each loss corresponding to a request redirected to the data center. Based on large-system and large-storage asymptotics, we obtain an asymptotic formula for the optimal replication of contents and propose adaptive schemes related to those encountered in cache networks, but reacting here to loss events, as well as faster algorithms that generate virtual events at a higher rate while keeping the same target replication. We show through simulations that our adaptive schemes significantly outperform standard replication strategies, both in terms of loss rates and adaptation speed.
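    As a rough illustration of a loss-driven adaptive scheme, the sketch below increases the replication of a content each time a request for it overflows to the data center, stealing an edge slot from another content once storage is full. This is a hypothetical simplification for intuition only, not the adaptive algorithm analysed in the paper; all names and parameters are invented.

```python
# Toy loss-event-driven replication adjustment for an edge server pool.
import random
from collections import defaultdict

class EdgeReplication:
    def __init__(self, n_servers, slots_per_server):
        self.capacity = n_servers * slots_per_server
        self.replicas = defaultdict(int)   # content -> number of edge copies
        self.used = 0

    def on_loss(self, content):
        """Called when a request for `content` overflows to the data center."""
        if self.used < self.capacity:
            self.replicas[content] += 1
            self.used += 1
        else:
            # Steal a slot from a copy chosen proportionally to current replication.
            victim = random.choices(list(self.replicas), weights=self.replicas.values())[0]
            if victim != content and self.replicas[victim] > 0:
                self.replicas[victim] -= 1
                self.replicas[content] += 1

repl = EdgeReplication(n_servers=10, slots_per_server=2)
for c in ["a", "a", "b", "a", "c"]:
    repl.on_loss(c)
print(dict(repl.replicas))
```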

    Fast Mixing of Parallel Glauber Dynamics and Low-Delay CSMA Scheduling

    Glauber dynamics is a powerful tool for generating randomized, approximate solutions to combinatorially difficult problems. It has been used to analyze and design distributed CSMA (Carrier Sense Multiple Access) scheduling algorithms for multi-hop wireless networks. In this paper we derive bounds on the mixing time of a generalization of Glauber dynamics in which multiple links are allowed to update their states in parallel and the fugacity of each link can be different. The results can be used to prove that the average queue length (and hence the delay) under CSMA based on parallel Glauber dynamics grows polynomially in the number of links for wireless networks with bounded-degree interference graphs, when the arrival rate lies in a fraction of the capacity region. We also show that in specific network topologies, the low-delay capacity region can be further improved.
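    For intuition, here is a minimal single-site (serialized) Glauber update for the hard-core model on an interference graph with per-link fugacities; the paper's parallel, multi-link generalization is not reproduced here, and the graph and fugacity values are purely illustrative.

```python
# Single-site Glauber dynamics for the hard-core (independent set) model,
# the basic CSMA-style update: activate only if no interfering link is active.
import random

def glauber_step(active, neighbors, fugacity):
    """One update: a uniformly chosen link tries to toggle its state."""
    link = random.choice(list(neighbors))
    if any(active[v] for v in neighbors[link]):
        active[link] = False              # blocked by an interfering active link
    else:
        lam = fugacity[link]
        active[link] = random.random() < lam / (1.0 + lam)
    return active

# Toy 4-link interference graph (a path): 0 - 1 - 2 - 3.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
fugacity = {l: 2.0 for l in neighbors}
state = {l: False for l in neighbors}
for _ in range(1000):
    state = glauber_step(state, neighbors, fugacity)
print(state)  # a random independent set of the interference graph
```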

    Double Hashing Thresholds via Local Weak Convergence

    A lot of interest has recently arisen in the analysis of multiple-choice "cuckoo hashing" schemes. In this context, a main performance criterion is the load threshold under which the hashing scheme is able to build a valid hash table with high probability in the limit of large systems; various techniques have successfully been used to answer this question (differential equations, combinatorics, the cavity method) for increasing levels of generality of the model. However, the hashing scheme analysed so far is quite idealized in that it requires generating many independent, fully random choices. Schemes with reduced randomness exist, such as "double hashing", which is expected to provide the same asymptotic results as the ideal scheme, yet such schemes have so far been more resistant to analysis. In this paper, we point out that the approach via the cavity method extends quite naturally to the analysis of double hashing and allows us to compute the corresponding threshold. The path followed is to show that the graph induced by the double hashing scheme has the same local weak limit as the one obtained with full randomness.
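    The reduced-randomness scheme can be summarized in a few lines: the k candidate buckets of a key form an arithmetic progression h1(key) + i*h2(key) mod m rather than k independent draws. The sketch below uses a SHA-256-based stand-in for the two hash functions, which is only illustrative and not the construction analysed in the paper.

```python
# Double hashing: derive k candidate buckets from just two hash values.
import hashlib

def double_hash_choices(key, m, k):
    digest = hashlib.sha256(key.encode()).digest()
    h1 = int.from_bytes(digest[:8], "big") % m
    h2 = int.from_bytes(digest[8:16], "big") % (m - 1) + 1  # nonzero stride
    return [(h1 + i * h2) % m for i in range(k)]

print(double_hash_choices("some-key", m=101, k=3))
```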

    Resilient Routing in SDN Networks

    Software-defined networking (SDN) allows a centralized controller to decide on routing. To establish reliable routes, it is often necessary to find several paths in the network that do not share the same risk resources, commonly called SRLGs (Shared Risk Link Groups). While ensuring this reliability, the objective is also to minimize a cost that incorporates congestion or latency indicators. This problem can be modeled as an integer linear program (ILP). We propose here an efficient method for solving it that uses a well-chosen fractional relaxation, which we show in fact leads, most of the time, to an integer solution. Solving this relaxed problem relies on a column generation (CG) method, where each column represents a path in the network with a modified cost that accounts for the SRLGs; new columns can be obtained by an efficient dynamic programming algorithm that extends classical shortest-path algorithms. To limit the potential combinatorial explosion, we also present a heuristic that speeds up the computation of a resilient solution while preserving very good performance. Numerical results show that our approach yields a very good solution in reasonable computation time on realistic network instances.
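    As a very rough illustration of the pricing step in such a column-generation scheme, the sketch below computes a cheapest path under modified costs in which each link also pays a penalty for every SRLG it belongs to. Folding the SRLG penalties additively into link costs is a simplification of the paper's dynamic program, and the graph, SRLGs, and penalty values are toy data.

```python
# Dijkstra on modified link costs: base cost plus dual penalties of the link's SRLGs.
import heapq

def cheapest_path(graph, srlg_of, penalty, src, dst):
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, u, path = heapq.heappop(heap)
        if u == dst:
            return d, path
        if d > dist.get(u, float("inf")):
            continue
        for v, base_cost in graph[u].items():
            extra = sum(penalty.get(g, 0.0) for g in srlg_of.get((u, v), ()))
            nd = d + base_cost + extra
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v, path + [v]))
    return float("inf"), []

graph = {"s": {"a": 1, "b": 1}, "a": {"t": 1}, "b": {"t": 1}, "t": {}}
srlg_of = {("s", "a"): ["g1"], ("a", "t"): ["g1"], ("s", "b"): ["g2"]}
penalty = {"g1": 5.0}  # duals discouraging reuse of already-used risk group g1
print(cheapest_path(graph, srlg_of, penalty, "s", "t"))  # prefers the path via b
```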

    Convergence of multivariate belief propagation, with applications to cuckoo hashing and load balancing.

    This paper is motivated by two applications, namely i) generalizations of cuckoo hashing, a computationally simple approach to assigning keys to objects, and ii) load balancing in content distribution networks, where one is interested in determining the impact of content replication on performance. These two problems admit a common abstraction: in both scenarios, performance is characterized by the maximum weight of a generalization of a matching in a bipartite graph featuring node and edge capacities. Our main result is a law of large numbers characterizing the asymptotic maximum weight matching in the limit of large bipartite random graphs, when the graphs admit a local weak limit that is a tree. This result specializes to the two application scenarios, yielding new results in both contexts. In contrast with previous results, the key novelty is the ability to handle edge capacities with arbitrary integer values. An analysis of belief propagation (BP) algorithms with multivariate belief vectors underlies the proof. In particular, we show convergence of the corresponding BP by exploiting monotonicity of the belief vectors with respect to the so-called upshifted likelihood ratio stochastic order. This auxiliary result can be of independent interest, providing a new set of structural conditions which ensure convergence of BP.
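    To fix ideas about the kind of message passing involved, here is a toy max-product (min-sum) BP sketch for max-weight bipartite matching in the classical unit-capacity case, with messages reparametrized as "best alternative" values. The paper's multivariate belief vectors for arbitrary integer edge capacities are not implemented here; the weight matrix is illustrative.

```python
# Max-product BP for max-weight bipartite matching (unit capacities).
import numpy as np

def bp_matching(weights, n_iter=50):
    """weights[i, j]: weight of edge (left i, right j). Returns a left -> right decode."""
    n, m = weights.shape
    msg_lr = np.zeros((n, m))  # msg_lr[i, j]: best value left i can get outside j
    msg_rl = np.zeros((n, m))  # msg_rl[i, j]: best offer right j sees excluding left i
    for _ in range(n_iter):
        new_lr, new_rl = np.zeros((n, m)), np.zeros((n, m))
        for i in range(n):
            for j in range(m):
                alt_r = [weights[i, k] - msg_rl[i, k] for k in range(m) if k != j]
                new_lr[i, j] = max(alt_r) if alt_r else 0.0
                alt_l = [weights[k, j] - msg_lr[k, j] for k in range(n) if k != i]
                new_rl[i, j] = max(alt_l) if alt_l else 0.0
        msg_lr, msg_rl = new_lr, new_rl
    # Each left node picks the right node where its net advantage is largest.
    return {i: int(np.argmax(weights[i] - msg_rl[i])) for i in range(n)}

w = np.array([[3.0, 1.0], [2.0, 4.0]])
print(bp_matching(w))  # expected {0: 0, 1: 1}
```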

    A resource allocation framework for network slicing

    Telecommunication networks are converging to a massively distributed cloud infrastructure interconnected with software-defined networks. In the envisioned architecture, services will be deployed flexibly and quickly as network slices. Our paper addresses a major bottleneck in this context, namely the challenge of computing the best resource provisioning for network slices in a robust and efficient manner. With tractability in mind, we propose a novel optimization framework which allows fine-grained resource allocation for slices both in terms of network bandwidth and cloud processing. The slices can further be provisioned and auto-scaled optimally in real time, based on a large class of utility functions. Furthermore, by tuning a slice-specific parameter, system designers can trade off traffic fairness with computing fairness to obtain a mixed fairness strategy. We also propose an iterative algorithm based on the alternating direction method of multipliers (ADMM) that provably converges to the optimal resource allocation, and we demonstrate the method's fast convergence in a wide range of quasi-stationary and dynamic settings.
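    As a tiny single-resource illustration of utility-driven slice provisioning (not the paper's ADMM framework), the sketch below computes an alpha-fair bandwidth split across slices sharing one capacity, finding the dual price by bisection. The weights, alpha, and capacity are invented values; the fairness parameter plays a role loosely analogous to the slice-specific tuning described above.

```python
# Alpha-fair allocation on one shared capacity: at the optimum each slice gets
# x_i = (w_i / mu)^(1/alpha), where the price mu makes demand equal capacity.
def alpha_fair_allocation(weights, capacity, alpha=2.0, tol=1e-9):
    def total(mu):
        return sum((w / mu) ** (1.0 / alpha) for w in weights)
    lo, hi = 1e-12, 1e12
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if total(mid) > capacity:
            lo = mid          # price too low: demand exceeds capacity
        else:
            hi = mid
    mu = (lo + hi) / 2.0
    return [(w / mu) ** (1.0 / alpha) for w in weights]

print(alpha_fair_allocation(weights=[1.0, 2.0, 4.0], capacity=10.0))
```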

    Bipartite graph structures for efficient balancing of heterogeneous loads

    This paper considers large-scale distributed content service platforms, such as peer-to-peer video-on-demand systems. Such systems feature two basic resources, namely storage and bandwidth. Their efficiency critically depends on two factors: (i) content replication within servers, and (ii) how incoming service requests are matched to servers holding the requested content. To inform the corresponding design choices, we make the following contributions. We first show that, for underloaded systems, so-called proportional content placement with a simple greedy strategy for matching requests to servers ensures full system efficiency provided storage size grows logarithmically with the system size. However, for constant storage size, this strategy undergoes a phase transition with severe loss of efficiency as the system load approaches criticality. To better understand the role of the matching strategy in this performance degradation, we characterize the asymptotic system efficiency under an optimal matching policy. Our analysis shows that, in contrast to greedy matching, optimal matching incurs an inefficiency that is exponentially small in the server storage size, even at critical system loads. It further allows a characterization of content replication policies that minimize the inefficiency. These optimal policies, which differ markedly from proportional placement, have a simple structure which makes them implementable in practice. On the methodological side, our analysis of matching performance uses the theory of local weak limits of random graphs, and highlights a novel characterization of matching numbers in bipartite graphs, which may both be of independent interest.
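    The greedy matching strategy mentioned above is easy to picture: each incoming request goes to any server that stores the requested content and still has a free service slot, and is otherwise lost. The sketch below is a bare-bones version of that rule, with a hypothetical placement, slot budget, and request sequence rather than the paper's stochastic model.

```python
# Greedy matching of requests to edge servers holding the content.
def greedy_serve(placement, slots, requests):
    """placement: server -> set of contents; slots: server -> free service slots."""
    served, lost = 0, 0
    for content in requests:
        for server, stored in placement.items():
            if content in stored and slots[server] > 0:
                slots[server] -= 1
                served += 1
                break
        else:
            lost += 1          # no server with the content has spare capacity
    return served, lost

placement = {"s1": {"a", "b"}, "s2": {"b", "c"}}
slots = {"s1": 2, "s2": 1}
print(greedy_serve(placement, slots, ["a", "b", "c", "c"]))  # (3, 1)
```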

    Experimental Comparative Study between Conventional and Green Parking Lots: Analysis of Subsurface Thermal Behavior under Warm and Dry Summer Conditions

    Green infrastructure has a role to play in climate change adaptation strategies in cities. Alternative urban spaces should be designed considering new requirements in terms of urban microclimate and thermal comfort. Pervious pavements such as green parking lots can contribute to this goal through solar evaporative cooling. However, the cooling benefits of such systems remain under debate during dry and warm periods. The aim of this study was to compare experimentally the thermal behavior of different parking lot types (PLTs) with vegetated urban soil. Four parking lots were instrumented with temperature probes buried at different depths. Underground temperatures were measured during summer 2019, and the hottest days of the period were analyzed. Results show that the less mineral material is used in the surface coating, the less the surface warms up. The temperature difference at the upper layer can reach 10 °C between mineral and non-mineral PLTs. PLTs can be grouped into three types: (i) high surface temperature during daytime and nighttime, important heat transfer toward the sublayers, and low time shift (asphalt system); (ii) high (resp. low) surface temperature during daytime (resp. nighttime), weak heat transfer toward the sublayers, and important time shift (paved stone system); and (iii) low surface temperature during daytime and nighttime, weak heat transfer toward the sublayers, and important time shift (vegetation and substrate system, wood chips system, vegetated urban soil). The results of this study underline that pervious pavements demonstrate thermal benefits under warm and dry summer conditions compared to conventional parking lot solutions. The results also indicate that the hygrothermal properties of urban materials are crucial for urban heat island mitigation.
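    As a small illustration of how a time shift between surface and sublayer temperatures could be quantified from such probe measurements, the sketch below estimates the lag that maximizes the cross-correlation between two detrended temperature series. The synthetic diurnal data is invented and is not the study's dataset.

```python
# Estimate the thermal time shift as the lag maximizing cross-correlation.
import numpy as np

def time_shift_hours(surface, sublayer, sampling_hours=1.0):
    s = surface - surface.mean()
    d = sublayer - sublayer.mean()
    corr = np.correlate(d, s, mode="full")
    lag = np.argmax(corr) - (len(s) - 1)
    return lag * sampling_hours

t = np.arange(0, 72, 1.0)                              # three days, hourly samples
surface = 30 + 10 * np.sin(2 * np.pi * t / 24)         # strong diurnal cycle at the surface
sublayer = 22 + 3 * np.sin(2 * np.pi * (t - 5) / 24)   # damped, delayed cycle at depth
print(time_shift_hours(surface, sublayer))             # about 5 hours
```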