Performance evaluation of caching techniques for video on demand workload in named data network
Contemporary Internet use is increasingly dominated by content distribution, a shift that has driven the emergence of Information-Centric Networking (ICN) across academia and industry. Named Data Networking (NDN) is one of the ICN architectures, and it has been emphasized as a video traffic architecture that ensures smooth communication between the requester and receiver of online video. The research problem of the current study is congestion in the Video on Demand (VoD) workload caused by frequent storing of signed content objects in local repositories, which leads to buffering problems and data packet loss. The study assesses NDN caching techniques in order to select the cache replacement technique best suited to dealing with congestion, and evaluates its performance. To do so, the study adopts a research process based on the Design Research Methodology (DRM) and a VoD approach, structuring the main activities that lead to the expected findings. The datasets, the Internet2 network topology, and the video-view statistics were gathered from the PPTV platform; a total of 221 servers is connected to the network from the same access points as in the real PPTV deployment. An NS-3 evaluation of the cache replacement techniques (LRU, LFU, and FIFO) for VoD in NDN, in terms of cache hit ratio, throughput, and server load, yields reasonable outcomes on the current implementation of the Internet2 topology, where nodes are distributed randomly. Based on the results, the LFU technique handles congestion best among the presented techniques. The research finds that the cache hit ratio, throughput, and server load achieved by LFU yield the lowest, and an acceptable, congestion rate. The researcher therefore concludes that the efficiency of the different replacement techniques needs to be investigated thoroughly in order to provide the insights necessary for implementing them in a given context. This result enriches the current understanding of how replacement techniques handle different cache sizes. Beyond examining the performance of the different replacement techniques, their performance characteristics were also found to suggest a cache model providing relatively fast running times across a broad range of embedded applications.
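To illustrate the three replacement techniques compared in the abstract, the sketch below replays a skewed, Zipf-like VoD request stream against small LRU, LFU, and FIFO caches and reports each cache hit ratio. This is a minimal Python illustration, not the study's NS-3 setup; the catalog size, cache capacity, and popularity skew are invented for the example.

```python
import random
from collections import OrderedDict, Counter, deque

def simulate(policy, requests, capacity):
    """Replay a request trace against one cache and return the hit ratio."""
    hits = 0
    if policy == "LRU":
        cache = OrderedDict()
        for r in requests:
            if r in cache:
                hits += 1
                cache.move_to_end(r)                # refresh recency on a hit
            else:
                if len(cache) >= capacity:
                    cache.popitem(last=False)       # evict the least recently used
                cache[r] = True
    elif policy == "FIFO":
        cache, order = set(), deque()
        for r in requests:
            if r in cache:
                hits += 1
            else:
                if len(cache) >= capacity:
                    cache.discard(order.popleft())  # evict the oldest insertion
                cache.add(r)
                order.append(r)
    elif policy == "LFU":
        cache, freq = set(), Counter()
        for r in requests:
            freq[r] += 1
            if r in cache:
                hits += 1
            else:
                if len(cache) >= capacity:
                    # evict the least frequently requested cached item
                    cache.discard(min(cache, key=freq.__getitem__))
                cache.add(r)
    return hits / len(requests)

random.seed(1)
catalog = list(range(500))
# Zipf-like skew: a few popular videos attract most of the requests
weights = [1.0 / (i + 1) for i in range(len(catalog))]
requests = random.choices(catalog, weights=weights, k=20000)

for policy in ("LRU", "LFU", "FIFO"):
    print(policy, round(simulate(policy, requests, capacity=50), 3))
```

Under a static, heavily skewed popularity distribution such as this one, LFU tends to lock the most popular items into the cache, which is consistent with the abstract's finding that LFU copes best with the VoD workload.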
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in understanding the
motivation and methodology of the various ML algorithms, so that they can be invoked
for hitherto unexplored services as well as scenarios of future wireless
networks.
Novel applications and contexts for the cognitive packet network
Autonomic communication, which is the development of self-configuring, self-adapting, self-optimising and self-healing communication systems, has gained much attention in the network research community. This can be explained by the increasing demand for more sophisticated networking technologies with physical realities that possess computation capabilities and can operate successfully with minimum human intervention. Such systems are driving innovative applications and services that improve the quality of life of citizens both socially and economically. Furthermore, autonomic communication, because of its decentralised approach to communication, is also being explored by the research community as an alternative to centralised control infrastructures for efficient management of large networks. This thesis studies one of the successful contributions in the autonomic communication research, the Cognitive Packet Network (CPN). CPN is a highly scalable adaptive routing protocol that
allows for decentralised control in communication. CPN has achieved significant successes, and given the direction of ongoing research, we expect it to continue to find relevance. To investigate this hypothesis, we research new applications and contexts for CPN. This thesis first studies Information-Centric Networking (ICN), a future Internet architecture
proposal. ICN adopts a data-centric approach such that contents are directly addressable at the network level and in-network caching is easily supported. An optimal caching strategy for an information-centric network is first analysed, and approximate solutions are developed and evaluated. Furthermore, a CPN-inspired forwarding strategy is proposed for directing requests in a way that exploits the in-network caching capability of ICN. The proposed strategy is evaluated via discrete-event simulations and shown to be more effective in its search for local cache hits than conventional methods. Finally, CPN is proposed to implement the routing system of an Emergency Cyber-Physical System for guiding evacuees in confined spaces in emergency situations. By exploiting CPN's QoS capabilities, different paths are assigned to evacuees based on their ongoing health conditions using well-defined path metrics. The proposed system is evaluated via discrete-event simulations and shown to improve survival chances compared to a static system that treats all evacuees in the same way.
On the Performance of Pedestrian Content Distribution
Mobile communication devices may be used for spreading multimedia data without
support of an infrastructure. Such a scheme, where the data is carried by people walking around and
relayed from device to device by means of short range radio, could potentially form a public content
distribution system that spans vast urban areas. There are basically only three system parameters that can
be determined in the design: the transmission range of the nodes, the setup time when nodes make a
contact, and their storage capacity. The transport mechanism is the flow of people and it can be studied
but not engineered. The question addressed in this paper is how well pedestrian content distribution may
work. We answer this question by modeling the mobility of people moving around in a city, constrained
by a given topology. The model is supplemented by simulation of similar or related scenarios for
validation and extension. Our conclusion is that content spreads well at pedestrian speeds even at
low arrival rates into the studied region. Our contributions are both the results on the feasibility of
pedestrian content distribution and the queueing-analytic model that captures the flow of people.
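As a toy counterpart to the paper's model, the sketch below moves random walkers at pedestrian speed through a square region and relays a content item whenever a carrier comes within short radio range of another device. All numbers (population, area, transmission range) are illustrative assumptions, not the paper's parameters.

```python
import math
import random

def spread(n_people=80, area=200.0, tx_range=10.0, speed=1.4, steps=600, seed=0):
    """Fraction of pedestrians holding the content after `steps` seconds."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, area), rng.uniform(0, area)) for _ in range(n_people)]
    heading = [rng.uniform(0, 2 * math.pi) for _ in range(n_people)]
    has = [False] * n_people
    has[0] = True                                   # a single initial carrier
    for _ in range(steps):
        # walkers move at pedestrian speed, wrapping around the region (torus)
        for i, (x, y) in enumerate(pos):
            x = (x + speed * math.cos(heading[i])) % area
            y = (y + speed * math.sin(heading[i])) % area
            if rng.random() < 0.05:                 # occasional change of direction
                heading[i] = rng.uniform(0, 2 * math.pi)
            pos[i] = (x, y)
        # short-range radio relays the content from device to device
        for i in [k for k in range(n_people) if has[k]]:
            for j in range(n_people):
                if not has[j] and math.dist(pos[i], pos[j]) <= tx_range:
                    has[j] = True
    return sum(has) / n_people

print(round(spread(), 2))
```

Even this crude mobility model shows the qualitative effect the paper quantifies: with enough walkers in the region, epidemic device-to-device relaying reaches most of the population despite the short radio range.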
EUROPEAN CONFERENCE ON QUEUEING THEORY 2016
This booklet contains the proceedings of the second European Conference on Queueing Theory (ECQT), held from the 18th to the 20th of July 2016 at the engineering school ENSEEIHT, Toulouse, France. ECQT is a biennial event where scientists and technicians in queueing theory and related areas get together to promote research, encourage interaction and exchange ideas. The spirit of the conference is to be a queueing event organized from within Europe, but open to participants from all over the world. The technical program of the 2016 edition consisted of 112 presentations organized in 29 sessions covering all trends in queueing theory, including the development of the theory, methodology advances, computational aspects and applications. Another exciting feature of ECQT 2016 was the institution of the Takács Award for outstanding PhD theses on "Queueing Theory and its Applications".
Infrastructure-less D2D Communications through Opportunistic Networks
In recent years, we have experienced several social media blackouts, which have
shown how much our daily experiences depend on high-quality communication services.
Blackouts have occurred because of technical problems, natural disasters, hacker attacks
or even due to deliberate censorship actions undertaken by governments. In all cases,
the spontaneous reaction of people consisted in finding alternative channels and media so
as to reach out to their contacts and share their experiences. Thus, it has clearly
emerged that infrastructure-based networks (and cellular networks in particular) are well
engineered and have been extremely successful so far, but that other paradigms should
be explored to connect people. The most promising of today's alternative paradigms
is Device-to-Device (D2D) because it allows for building networks almost freely, and
because 5G standards are (for the first time) seriously addressing the possibility of using
D2D communications.
In this dissertation I look at opportunistic D2D networking, possibly operating in an
infrastructure-less environment, and I investigate several schemes through modeling and
simulation, deriving metrics that characterize their performance. In particular, I consider
variations of the Floating Content (FC) paradigm, which was previously proposed in the
technical literature.
Using FC, it is possible to probabilistically store information over a given restricted
local area of interest, by opportunistically spreading it to mobile users while in the area.
In more detail, a piece of information is injected in the area by delivering it to one
or more of the mobile users; it is then opportunistically exchanged among mobile users whenever
they come in proximity of one another, progressively reaching most (ideally all) users in
the area and thus making the information dwell in the area of interest, like in a sort of
distributed storage.
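The floating mechanism described above can be mimicked in a few lines: pedestrians arrive in the anchor zone, dwell for a random time, replicate the content opportunistically while in radio range, and the information "floats" as long as at least one node inside the zone carries it. This is a rough sketch with invented parameters, not the simulator developed in the dissertation.

```python
import math
import random

def floating_content(arrival_rate=0.5, mean_dwell=120.0, zone=100.0,
                     tx_range=15.0, horizon=1000, seed=3):
    """Fraction of time at least one node in the anchor zone carries the content."""
    rng = random.Random(seed)
    # each node: [x, y, heading, remaining dwell time, carries content?]
    nodes = [[zone / 2, zone / 2, rng.uniform(0, 2 * math.pi), mean_dwell, True]]
    alive_steps = 0
    for _ in range(horizon):
        if rng.random() < arrival_rate:             # ~Poisson arrivals into the zone
            nodes.append([rng.uniform(0, zone), rng.uniform(0, zone),
                          rng.uniform(0, 2 * math.pi),
                          rng.expovariate(1.0 / mean_dwell), False])
        for n in nodes:                             # walking-speed mobility
            n[0] = (n[0] + 1.4 * math.cos(n[2])) % zone
            n[1] = (n[1] + 1.4 * math.sin(n[2])) % zone
            n[3] -= 1.0
        for c in [n for n in nodes if n[4]]:        # opportunistic replication
            for n in nodes:
                if not n[4] and math.dist((c[0], c[1]), (n[0], n[1])) <= tx_range:
                    n[4] = True
        nodes = [n for n in nodes if n[3] > 0]      # departures leave the zone
        if any(n[4] for n in nodes):
            alive_steps += 1
    return alive_steps / horizon

print(round(floating_content(), 2))
```

The returned availability is the metric the dissertation studies: whether opportunistic replication outpaces node departures well enough for the content to keep floating in the area of interest.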
While previous works on FC almost exclusively concentrated on the communication
component, in this dissertation I look at the storage and computing components of FC,
as well as its capability of transferring information from one area of interest to another.
I first present background work, including a brief review of my Master Thesis activity,
devoted to the design, implementation and validation of a smartphone opportunistic
information sharing application. The goal of the app was to collect experimental data that permitted a detailed analysis of the occurring events, and a careful assessment of
the performance of opportunistic information sharing services. Through experiments, I
showed that many key assumptions commonly adopted in analytical and simulation works
do not hold with current technologies. I also showed that the high density of devices and
the enforcement of long transmission ranges for links at the edge might counter-intuitively
impair performance.
The insight obtained during my Master Thesis work was extremely useful to devise
smart operating procedures for the opportunistic D2D communications considered in this
dissertation. In the core of this dissertation, I first propose and study a set of schemes
that explore and combine different information dissemination paradigms with real user
mobility and predictions, focusing on the smart diffusion of content over disjoint
areas of interest. To analyze the viability of such schemes, I have implemented a Python
simulator to evaluate the average availability and lifetime of a piece of information, as
well as storage usage and network utilization metrics. Comparing the performance of
these predictive schemes with state-of-the-art approaches, results demonstrate the need
for smart usage of communication opportunities and storage. The proposed algorithms
allow for an important reduction in network activity by decreasing the number of data
exchanges by up to 92% and requiring up to 50% less on-device storage,
while guaranteeing the dissemination of information with performance similar to legacy
epidemic dissemination protocols.
In a second step, I have worked on the analysis of the storage capacity of probabilistic
distributed storage systems, developing a simple yet powerful information theoretical
analysis based on a mean field model of opportunistic information exchange. I have
also extended the previous simulator to compare the numerical results generated by the
analytical model to the predictions of realistic simulations under different setups, showing
in this way the accuracy of the analytical approach, and characterizing the properties of
the system storage capacity.
I conclude from analysis and simulated results that when the density of contents seeded
in a floating system is larger than the maximum amount which can be sustained by the
system in steady state, the mean content availability decreases, and the stored information
saturates due to the effects of resource contention. In the presence of static nodes, in
a system with infinite host memory and in the mean field limit, there is no upper bound
on the amount of injected content that a floating system can sustain. However, as in the
case without static nodes, as the injected information increases, the amount of stored
information eventually reaches a saturation value, corresponding to the injected
information at which the mean amount of time spent exchanging content during a contact
equals the mean duration of a contact.
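The saturation condition in the last sentence lends itself to a back-of-the-envelope check: if transferring one content item takes a fixed time, stored information stops growing at the injection level where exchanging every floating content would consume an entire contact. The numbers below are purely illustrative assumptions, not results from the dissertation.

```python
def sustainable_contents(mean_contact_s, tx_time_per_item_s):
    """Injection level at which content exchange fills a whole contact."""
    return mean_contact_s / tx_time_per_item_s

def stored_contents(injected, mean_contact_s, tx_time_per_item_s):
    """Stored information grows with injection, then saturates."""
    return min(injected, sustainable_contents(mean_contact_s, tx_time_per_item_s))

# e.g. 20 s mean contacts and 0.5 s to transfer one item: saturation at 40 items
print(sustainable_contents(20.0, 0.5))   # → 40.0
print(stored_contents(100, 20.0, 0.5))   # → 40.0
```

Below the threshold, everything injected keeps floating; above it, contact time becomes the binding resource and additional injections no longer increase the stored information.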
As a final step of my dissertation, I have also explored by simulation the computing
and learning capabilities of an infrastructure-less opportunistic communication, storage and computing system, considering an environment that hosts a distributed Machine
Learning (ML) paradigm that uses observations collected in the area over which the FC
system operates to infer properties of the area. Results show that the ML system can
operate in two regimes, depending on the load of the FC scheme. At low FC load, the ML
system in each node operates on observations collected by all users and opportunistically
shared among nodes. At high FC load, especially when the data to be opportunistically
exchanged becomes too large to be transmitted during the average contact time between
nodes, the ML system can only exploit the observations endogenous to each user, which
are much less numerous. As a result, I conclude that such setups are adequate to support
general instances of distributed ML algorithms with continuous learning, only under the
condition of low to medium loads of the FC system. While the load of the FC system
induces a sort of phase transition on the ML system performance, the effect of computing
load is more progressive. When the computing capacity is not sufficient to train all
observations, some will be skipped, and performance progressively declines.
In summary, with respect to traditional studies of the FC opportunistic information
diffusion paradigm, which only look at the communication component over one area of
interest, I have considered three types of extensions by looking at the performance of FC:
over several disjoint areas of interest;
in terms of information storage capacity;
in terms of computing capacity that supports distributed learning.
The three topics are treated respectively in Chapters 3 to 5. This work has been supported by IMDEA Networks Institute.
Radio Communications
In the last decades the restless evolution of information and communication technologies (ICT) has brought about a deep transformation of our habits. The growth of the Internet and the advances in hardware and software implementations have modified the way we communicate and share information. In this book, an overview of the major issues faced today by researchers in the field of radio communications is given through 35 high-quality chapters written by specialists working in universities and research centers all over the world. Various aspects are discussed in depth: channel modeling, beamforming, multiple antennas, cooperative networks, opportunistic scheduling, advanced admission control, handover management, systems performance assessment, routing issues in mobility conditions, localization, and web security. Advanced techniques for radio resource management are discussed for both single and multiple radio technologies, in infrastructure, mesh, and ad hoc networks.