Efficient and adaptive congestion control for heterogeneous delay-tolerant networks
Detecting and dealing with congestion in delay-tolerant networks (DTNs) is an important and challenging problem. Current DTN forwarding algorithms typically direct traffic towards more central nodes in order to maximise delivery ratios and minimise delays, but as traffic demands increase these nodes may become saturated and unusable. We propose CafRep, an adaptive congestion-aware protocol that detects and reacts to congested nodes and congested parts of the network by using implicit hybrid contact and resource congestion heuristics. CafRep exploits a localised, relative-utility-based approach to offload traffic from more to less congested parts of the network, and to replicate at an adaptively lower rate in parts of the network with non-uniform congestion levels. We extensively evaluate our work against benchmark and competitive protocols across a range of metrics over three real connectivity and GPS traces: Sassy [44], San Francisco Cabs [45] and Infocom 2006 [33]. We show that CafRep performs well independently of network connectivity and mobility patterns, and consistently outperforms state-of-the-art DTN forwarding algorithms in the face of increasing rates of congestion. CafRep maintains higher availability and success ratios while keeping delays, packet loss rates and delivery cost low. We test CafRep in two application scenarios, with fixed-rate traffic and with real-world Facebook application traffic demands, showing that regardless of the type of traffic CafRep delivers, it reduces congestion and improves forwarding performance.
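The congestion-aware relay choice the abstract describes can be sketched roughly as follows. This is an illustrative toy, not CafRep's actual heuristic: the `Node` fields, the 0.9 saturation threshold and the `congestion_weight` discount are all invented for the example.

```python
# Illustrative sketch (not CafRep's actual heuristic) of congestion-aware
# relay selection: prefer neighbours whose utility (e.g. centrality) is high
# relative to their congestion level. All fields and weights are assumptions.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    centrality: float        # contact/social utility, in [0, 1]
    buffer_occupancy: float  # fraction of buffer in use, in [0, 1]

def relay_score(node: Node, congestion_weight: float = 1.0) -> float:
    """Higher is better: discount a node's utility by its congestion."""
    return node.centrality * (1.0 - congestion_weight * node.buffer_occupancy)

def pick_relay(neighbours: list[Node]) -> Node | None:
    """Choose the best non-saturated neighbour, or None to hold the message."""
    candidates = [n for n in neighbours if n.buffer_occupancy < 0.9]
    return max(candidates, key=relay_score, default=None)

neighbours = [Node("hub", 0.9, 0.95),   # very central, but saturated
              Node("mid", 0.6, 0.30),   # moderately central, lightly loaded
              Node("leaf", 0.2, 0.05)]  # peripheral
best = pick_relay(neighbours)
print(best.name)  # prints "mid": the saturated hub is skipped
```

The point of the sketch is the offloading behaviour: a purely centrality-driven forwarder would pick the saturated hub, while a congestion-discounted utility steers traffic towards less congested relays.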
Infrastructure-less D2D Communications through Opportunistic Networks
In recent years, we have experienced several social media blackouts, which have
shown how much our daily experiences depend on high-quality communication services.
Blackouts have occurred because of technical problems, natural disasters, hacker attacks
or even due to deliberate censorship actions undertaken by governments. In all cases,
the spontaneous reaction of people consisted in finding alternative channels and media so
as to reach out to their contacts and share their experiences. It has thus become clear
that, although infrastructure-based networks—and cellular networks in particular—are well
engineered and have been extremely successful so far, other paradigms should
be explored to connect people. The most promising of today’s alternative paradigms
is Device-to-Device (D2D) because it allows for building networks almost freely, and
because 5G standards are (for the first time) seriously addressing the possibility of using
D2D communications.
In this dissertation I look at opportunistic D2D networking, possibly operating in an
infrastructure-less environment, and I investigate several schemes through modeling and
simulation, deriving metrics that characterize their performance. In particular, I consider
variations of the Floating Content (FC) paradigm, which was previously proposed in the
technical literature.
Using FC, it is possible to probabilistically store information over a given restricted
local area of interest, by opportunistically spreading it to mobile users while in the area.
In more detail, a piece of information injected into the area by delivering it to one
or more of the mobile users is opportunistically exchanged among mobile users whenever
they come into proximity of one another, progressively reaching most (ideally all) users in
the area and thus making the information dwell in the area of interest, like in a sort of
distributed storage.
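The probabilistic-storage behaviour described above can be illustrated with a minimal discrete-time toy model; the population size, contact probability and churn rate below are invented for the example and are not taken from the dissertation.

```python
# Minimal sketch of Floating Content-style epidemic spreading inside one
# area of interest: at each step, random pairwise contacts copy the item,
# and nodes leaving the area drop it. All rates are illustrative assumptions.
import random

random.seed(1)

N = 50            # users currently inside the area of interest
CONTACT_P = 0.05  # per-step probability that any given pair meets
CHURN_P = 0.02    # per-step probability a user leaves (replaced by a newcomer)

has_item = [False] * N
has_item[0] = True  # the item is seeded at a single user

for step in range(200):
    # opportunistic pairwise exchanges: a contact copies the item both ways
    for i in range(N):
        for j in range(i + 1, N):
            if random.random() < CONTACT_P and (has_item[i] or has_item[j]):
                has_item[i] = has_item[j] = True
    # churn: a departing user is replaced by a newcomer without the item
    for i in range(N):
        if random.random() < CHURN_P:
            has_item[i] = False

print(sum(has_item), "of", N, "users hold the item")
```

The item "floats" in the area as long as opportunistic spreading outpaces the churn of departing users, which is the qualitative behaviour the FC paradigm relies on.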
While previous works on FC almost exclusively concentrated on the communication
component, in this dissertation I look at the storage and computing components of FC,
as well as its capability of transferring information from one area of interest to another.
I first present background work, including a brief review of my Master Thesis activity,
devoted to the design, implementation and validation of a smartphone opportunistic
information sharing application. The goal of the app was to collect experimental data that permitted a detailed analysis of the observed events, and a careful assessment of
the performance of opportunistic information sharing services. Through experiments, I
showed that many key assumptions commonly adopted in analytical and simulation works
do not hold with current technologies. I also showed that the high density of devices and
the enforcement of long transmission ranges for links at the edge might counter-intuitively
impair performance.
The insight obtained during my Master Thesis work was extremely useful to devise
smart operating procedures for the opportunistic D2D communications considered in this
dissertation. In the core of this dissertation, I initially propose and study a set of schemes
that explore and combine different information dissemination paradigms with real
user mobility and predictions, focused on the smart diffusion of content over disjoint
areas of interest. To analyze the viability of such schemes, I have implemented a Python
simulator to evaluate the average availability and lifetime of a piece of information, as
well as storage usage and network utilization metrics. Comparing the performance of
these predictive schemes with state-of-the-art approaches, results demonstrate the need
for smart usage of communication opportunities and storage. The proposed algorithms
allow for a substantial reduction in network activity, decreasing the number of data
exchanges by up to 92% and requiring up to 50% less on-device storage,
while guaranteeing the dissemination of information with performance similar to legacy
epidemic dissemination protocols.
In a second step, I have worked on the analysis of the storage capacity of probabilistic
distributed storage systems, developing a simple yet powerful information theoretical
analysis based on a mean field model of opportunistic information exchange. I have
also extended the previous simulator to compare the numerical results generated by the
analytical model to the predictions of realistic simulations under different setups, showing
in this way the accuracy of the analytical approach, and characterizing the properties of
the system storage capacity.
I conclude from analytical and simulation results that when the density of contents seeded
in a floating system is larger than the maximum amount which can be sustained by the
system in steady state, the mean content availability decreases, and the stored information
saturates due to the effects of resource contention. With the presence of static nodes, in
a system with infinite host memory and at the mean field limit, there is no upper bound
to the amount of injected contents which a floating system can sustain. Without static
nodes, however, as the injected information increases, the amount of stored information
eventually reaches a saturation value, corresponding to the injected load at
which the mean time spent exchanging content during a contact equals
the mean duration of a contact.
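The saturation condition stated above lends itself to a back-of-the-envelope calculation: storage saturates at the load for which the time needed to exchange the queued content during a contact equals the mean contact duration. The bandwidth, item size and contact duration below are illustrative assumptions, not values from the dissertation.

```python
# Back-of-the-envelope version of the saturation condition: the system
# saturates at the injected load where the mean exchange time per contact
# equals the mean contact duration. All numbers are illustrative assumptions.
bandwidth = 2.0e6      # link throughput during a contact, bytes/s
item_size = 50_000.0   # size of one content item, bytes
mean_contact = 10.0    # mean contact duration, s

# Items transferable within one average contact:
items_per_contact = bandwidth * mean_contact / item_size

def exchange_time(n_items: int) -> float:
    """Mean time needed to exchange n_items during a single contact."""
    return n_items * item_size / bandwidth

print(f"saturation at ~{items_per_contact:.0f} items per contact")
print(exchange_time(400), "s needed vs", mean_contact, "s available")
```

Beyond this point, injecting more content cannot increase the amount actually stored, since contacts end before the extra items can be exchanged.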
As a final step of my dissertation, I have also explored by simulation the computing
and learning capabilities of an infrastructure-less opportunistic communication, storage and computing system, considering an environment that hosts a distributed Machine
Learning (ML) paradigm that uses observations collected in the area over which the FC
system operates to infer properties of the area. Results show that the ML system can
operate in two regimes, depending on the load of the FC scheme. At low FC load, the ML
system in each node operates on observations collected by all users and opportunistically
shared among nodes. At high FC load, especially when the data to be opportunistically
exchanged becomes too large to be transmitted during the average contact time between
nodes, the ML system can only exploit the observations endogenous to each user, which
are much less numerous. As a result, I conclude that such setups are adequate to support
general instances of distributed ML algorithms with continuous learning, only under the
condition of low to medium loads of the FC system. While the load of the FC system
induces a sort of phase transition on the ML system performance, the effect of computing
load is more progressive. When the computing capacity is not sufficient to train on all
observations, some are skipped, and performance progressively declines.
In summary, with respect to traditional studies of the FC opportunistic information
diffusion paradigm, which only look at the communication component over one area of
interest, I have considered three types of extensions by looking at the performance of FC:
- over several disjoint areas of interest;
- in terms of information storage capacity;
- in terms of computing capacity that supports distributed learning.
The three topics are treated respectively in Chapters 3 to 5. This work has been supported by IMDEA Networks Institute. Programa de Doctorado en Ingeniería Telemática por la Universidad Carlos III de Madrid. Chair: Claudio Ettori Casetti. Secretary: Antonio de la Oliva Delgado. Member: Christoph Somme
Security of 5G-V2X: Technologies, Standardization and Research Directions
Cellular-Vehicle to Everything (C-V2X) aims at resolving issues pertaining to
the traditional usability of Vehicle to Infrastructure (V2I) and Vehicle to
Vehicle (V2V) networking. Specifically, C-V2X lowers the number of entities
involved in vehicular communications and allows
cellular security solutions to be applied to V2X. To this end, the evolution of
LTE-V2X has been revolutionary, but it fails to handle the demands of high
throughput, ultra-high reliability, and ultra-low latency alongside its
security mechanisms. To counter this, 5G-V2X is considered an integral
solution, which not only resolves the issues related to LTE-V2X but also
provides a function-based network setup. Several reports have been given for
the security of 5G, but none of them primarily focuses on the security of
5G-V2X. This article provides a detailed overview of 5G-V2X with a
security-based comparison to LTE-V2X. A novel Security Reflex Function
(SRF)-based architecture is proposed and several research challenges are
presented related to the security of 5G-V2X. Furthermore, the article lays out
requirements of Ultra-Dense and Ultra-Secure (UD-US) transmissions necessary
for 5G-V2X. Comment: 9 pages, 6 figures, Preprint
Content Replication in Mobile Networks
Performance and reliability of content access in mobile networks are conditioned by the number and location of content replicas deployed at the network nodes. In this work, we design a practical, distributed solution to content replication that is suitable for dynamic environments and achieves load balancing. Simulation results show that our mechanism, which uses local measurements only, approximates an optimal solution well while being robust against network and demand dynamics. Our scheme also outperforms alternative approaches in terms of both content access delay and access congestion.
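A distributed replication mechanism driven purely by local measurements can be sketched as below. This is not the paper's actual algorithm: the load metric, the thresholds and the three-way decision are hypothetical, intended only to illustrate how a replica node can act on local observations alone.

```python
# Illustrative sketch (not the paper's algorithm) of a local-measurement
# replication rule: each replica node observes only its own request load
# and decides to clone, keep, or retire the replica. Thresholds are assumed.
def replication_decision(local_load: float,
                         high: float = 0.8, low: float = 0.2) -> str:
    """Return 'clone' when overloaded, 'retire' when idle, else 'stay'."""
    if local_load > high:
        return "clone"   # spawn a replica on a nearby node to share load
    if local_load < low:
        return "retire"  # too few local requests: drop this replica
    return "stay"

print(replication_decision(0.9))  # prints "clone"
print(replication_decision(0.5))  # prints "stay"
print(replication_decision(0.1))  # prints "retire"
```

Because each decision uses only locally measurable load, no global coordination or demand map is needed, which is what makes such schemes robust to network and demand dynamics.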
Airborne Network Data Availability Using Peer to Peer Database Replication on a Distributed Hash Table
The concept of distributing one complex task to several smaller, simpler Unmanned Aerial Vehicles (UAVs), as opposed to one complex UAV, is the way of the future for a vast number of surveillance and data collection tasks. One objective for this type of application is to be able to maintain an operational picture of the overall environment. Due to high bandwidth costs, centralizing all data may not be possible, necessitating a distributed storage system such as a mobile Distributed Hash Table (DHT). A difficulty with this maintenance is that in an Airborne Network (AN), nodes are vehicles travelling at high speed. Since nodes travel at high speed, they may be out of contact with other nodes, making their data unavailable. To address this, the DHT must include a data replication strategy to ensure data availability. This research investigates the percentage of data available throughout the network by balancing data replication against network bandwidth. The DHT used is Pastry, with data replication provided by Beehive, running over an 802.11 wireless environment simulated in Network Simulator 3. Results show that high levels of replication perform well until nodes are too tightly packed inside a given area, which results in too much contention for the limited bandwidth.
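The availability side of the replication/bandwidth trade-off studied above has a simple first-order form: with more replicas, an item stays reachable even when some holders are out of contact, while each extra replica costs bandwidth to maintain. The reachability probability below is an invented example value, and the independence assumption is a simplification.

```python
# First-order sketch of the replication/availability trade-off: with k
# replicas, each independently reachable with probability p, the item is
# unavailable only if every replica is out of contact. Values are assumed.
def availability(p_reachable: float, k_replicas: int) -> float:
    """P(at least one replica reachable), assuming independent replicas."""
    return 1.0 - (1.0 - p_reachable) ** k_replicas

for k in (1, 2, 4, 8):
    print(k, "replicas ->", round(availability(0.6, k), 4))
```

The diminishing returns visible here, combined with the linear growth in maintenance traffic, match the experimental finding that aggressive replication helps only until bandwidth contention dominates.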