Server Placement with Shared Backups for Disaster-Resilient Clouds
A key strategy for building disaster-resilient clouds is to maintain backups of
virtual machines in a geo-distributed infrastructure. Today, continuous and
acknowledged replication of virtual machines across different servers is a
service provided by several hypervisors. This strategy guarantees that the
virtual machines lose no disk or memory content if a disaster occurs, at the
cost of strict bandwidth and latency requirements. Considering this kind of
service, in this work we propose an optimization problem to place servers in a
wide area network. The goal is to guarantee that backup machines do not fail at
the same time as their primary counterparts. In addition, by using
virtualization, we also aim to reduce the number of backup servers required.
The optimal results, achieved on real topologies, reduce the number of backup
servers by at least 40%. Moreover, this work highlights several characteristics
of the backup service according to the employed network, such as the
fulfillment of latency requirements. Comment: Computer Networks 201
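The saving that virtualization enables can be illustrated with a toy counting model. This is not the paper's actual optimization formulation; the zones, the single-failure assumption, and the variable names below are illustrative only.

```python
from collections import Counter

# Illustrative model (not the paper's exact formulation): primaries are
# spread across disaster zones, and we assume at most one zone fails at a
# time. A virtualized backup server placed outside the failed zone can
# then be shared by primaries that live in different zones.
primary_zones = ["A", "A", "B", "B", "B", "C", "C"]  # zone of each primary VM

dedicated = len(primary_zones)                 # one backup server per primary
shared = max(Counter(primary_zones).values())  # worst-case single-zone failure

print(dedicated, shared)  # 7 backup servers vs. 3 under the single-failure assumption
```

Under this simple model, shared backups only need to absorb the largest single zone, which is where reductions of the order reported in the abstract can come from.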
A study of the LoRa signal propagation in forest, urban, and suburban environments
Sensing is an activity of paramount importance for smart cities. Covering large areas with reduced infrastructure and low energy consumption is desirable. In this context, Low Power Wide Area Networks (LPWANs) play an important role. In this paper, we investigate LoRa, a low-power technology offering large coverage but low transmission rates. Radio range and data rate are tunable through different spreading factors and coding rates, which are configuration parameters of the LoRa physical layer. LoRa can cover large areas, but variations in the environment affect link quality. This work studies the propagation of LoRa signals in forest, urban, and suburban vehicular environments. Besides being environments with variable propagation conditions, these scenarios include node mobility. To characterize the communication link, we mainly use the Received Signal Strength Indicator (RSSI), Signal-to-Noise Ratio (SNR), and Packet Delivery Ratio (PDR). As for node mobility, speeds are chosen according to prospective applications. Our results show that the link reaches up to 250 m in the forest scenario, while in the vehicular scenario it reaches up to 2 km. In contrast, with high-density buildings and human activity, the maximum link range reaches only 200 m in the urban scenario.
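The range-versus-rate trade-off mentioned above follows directly from the standard LoRa nominal bit-rate formula, Rb = SF * (BW / 2^SF) * CR. A quick sketch of how the spreading factor and coding rate change the achievable rate:

```python
def lora_bitrate(sf, bw_hz, cr_denominator):
    """Nominal LoRa bit rate: Rb = SF * BW / 2^SF * CR,
    with coding rate CR = 4 / cr_denominator (cr_denominator in 5..8)."""
    return sf * (bw_hz / 2 ** sf) * (4 / cr_denominator)

# SF7 @ 125 kHz, CR 4/5 -> about 5469 bit/s; SF12 drops to about 293 bit/s,
# trading data rate for sensitivity (and therefore range).
print(round(lora_bitrate(7, 125_000, 5)))   # 5469
print(round(lora_bitrate(12, 125_000, 5)))  # 293
```

Each increment of the spreading factor roughly halves the bit rate while improving receiver sensitivity, which is why the measured maximum range varies so strongly with the chosen parameters.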
Routing in the Internet (quality of service and group communication)
Reducing latency and overhead of route repair with controlled flooding
Ad hoc routing protocols that use broadcast for route discovery may be inefficient if the path between a source-destination pair is frequently broken. We propose and evaluate a simple mechanism that allows fast route repair in on-demand ad hoc routing protocols, and we apply it to the Ad hoc On-demand Distance Vector (AODV) routing protocol. The proposed scheme is based on the Controlled Flooding (CF) framework, in which alternative routes are established around the main path between source-destination pairs. With alternative routing, data packets are forwarded through a secondary path without requiring the source to re-flood the whole network, as may be the case in AODV. We focus on one-level alternative routing. We show that our proposal reduces both the connection disruption probability and the frequency of broadcasts.
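The one-level fallback idea can be sketched as follows. The topology, node names, and data structures are hypothetical, used only to show how a precomputed secondary next hop avoids re-flooding a route request.

```python
# Illustrative sketch of one-level alternative routing: if the next hop on
# the primary path is broken, fall back to a precomputed alternate next hop
# instead of re-flooding a route request (names are hypothetical).
primary   = {"s": "a", "a": "b", "b": "d"}   # primary path s -> a -> b -> d
alternate = {"a": "c", "c": "b"}             # detour around the a->b link
broken_links = {("a", "b")}

def next_hop(node):
    nh = primary.get(node)
    if nh is not None and (node, nh) not in broken_links:
        return nh
    return alternate.get(node)  # one-level fallback; None means repair fails

path, node = ["s"], "s"
while node != "d":
    node = next_hop(node)
    path.append(node)
print(path)  # ['s', 'a', 'c', 'b', 'd']
```

Data packets take the detour and rejoin the primary path at the next downstream node, so only the local neighborhood of the break is involved in the repair.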
Enabling the Progressive Multicast Service Deployment
The IP multicast architecture has not been widely deployed because multicast address allocation is difficult and there is no scalable solution to inter-domain multicast routing. Hence, there is an interest in developing protocols that allow the progressive deployment of the multicast service by supporting unicast clouds. This paper proposes HBH (Hop-By-Hop multicast routing protocol). HBH adopts the source-specific channel abstraction to simplify address allocation and implements multicast distribution using recursive unicast trees. In this model, data packets carry unicast destination addresses; therefore, HBH supports pure unicast routers transparently. The branching nodes recursively create packet copies to implement the distribution. HBH constructs a shortest-path tree even in the presence of asymmetric unicast routing. Consequently, HBH provides the best routes in asymmetric networks and is suitable for an eventual implementation of QoS-based routing. Additionally, HBH reduces tree bandwidth consumption in asymmetric networks when compared to other approaches. The results obtained from simulation support our statements.
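The recursive-unicast distribution that HBH relies on can be sketched with a toy tree. The node names and tree shape are illustrative, not taken from the protocol specification: the point is that each packet carries a plain unicast destination, and only branching nodes duplicate it.

```python
# Toy sketch of recursive-unicast distribution: each branching node knows
# only the unicast addresses of the next branching nodes (or receivers)
# below it; plain unicast routers in between forward packets unchanged.
tree = {
    "source": ["b1"],        # the source sends one unicast packet to b1
    "b1":     ["r1", "b2"],  # b1 duplicates: one copy per branch
    "b2":     ["r2", "r3"],
}

def deliver(node, payload, delivered):
    for next_hop in tree.get(node, []):
        if next_hop in tree:              # another branching node: recurse
            deliver(next_hop, payload, delivered)
        else:                             # a leaf receiver
            delivered.append(next_hop)

delivered = []
deliver("source", "data", delivered)
print(delivered)  # ['r1', 'r2', 'r3']
```

Because every hop is an ordinary unicast transmission, routers with no multicast support sit transparently on the branches, which is what enables the progressive deployment described above.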
A Scalable Algorithm for Routing with Three QoS Metrics
EPICS: Fair Opportunistic Multi-Content Dissemination
Achieving efficient content dissemination in mobile opportunistic networks becomes a major challenge when content sizes are large and require more capacity than contact opportunities between nodes can offer. Content fragmentation solves only part of the problem, as nodes still need to decide which fragment to send when a contact happens. To address this problem, we propose EPICS, a protocol designed to quickly exchange large contents in opportunistic networks. Using grey relational analysis, EPICS balances the distribution of contents that have different sizes and creation times, providing a fairer delay distribution and faster dissemination. We implemented and evaluated EPICS through real experimentation using Android devices. Results show that EPICS significantly reduces content dissemination delays when compared to classic approaches.
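Grey relational analysis, the multi-criteria technique the abstract mentions, grades each candidate by its closeness to an ideal reference across several attributes. A minimal sketch follows; the attributes, weights, and input values are illustrative, not the ones EPICS actually uses.

```python
# Minimal grey relational analysis (GRA) sketch: rank contents by how close
# each one is to an ideal reference vector across several attributes.
def gra_rank(rows, zeta=0.5):
    """rows: attribute vectors normalized to [0, 1], with 1 = best.
    The reference is the ideal all-ones vector; zeta is the usual
    distinguishing coefficient. Returns one grade per row."""
    deltas = [[abs(1.0 - v) for v in row] for row in rows]
    dmin = min(min(d) for d in deltas)
    dmax = max(max(d) for d in deltas)
    coeff = lambda d: (dmin + zeta * dmax) / (d + zeta * dmax)
    return [sum(coeff(d) for d in row) / len(row) for row in deltas]

# Two hypothetical normalized attributes per content, e.g. how much of it
# the peer still misses and how recent it is.
scores = gra_rank([[0.9, 0.2], [0.5, 0.8], [0.1, 0.1]])
print(scores.index(max(scores)))  # index of the content with the best grade
```

Averaging the grey relational coefficients over attributes is what lets a single grade trade off heterogeneous criteria such as content size and creation time.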
Network Design Requirements for Disaster Resilience in IaaS Clouds
Many corporations rely on disaster recovery schemes to keep their computing and network services running after unexpected situations, such as natural disasters and attacks. As corporations migrate their infrastructure to the cloud using the Infrastructure as a Service (IaaS) model, cloud providers need to offer disaster-resilient services. This article provides guidelines for designing a data center network infrastructure to support a disaster-resilient IaaS cloud. These guidelines describe design requirements, such as the time to recover from disasters, and identify important domains that deserve further research efforts, such as the choice of data center site locations and disaster-resilient virtual machine placement.
Communications
Multicast communication is by definition greedier in bandwidth than unicast communication with the same number of receivers. The design of a multicast congestion control algorithm is thus an important and useful task. There are two potential approaches to congestion control: within the network (involving routers and distribution trees rather than the simple paths of the unicast case), and end-to-end.