Distributed and Load-Adaptive Self Configuration in Sensor Networks
Proactive self-configuration is crucial for MANETs such as sensor networks, as these are often deployed in hostile environments and are ad hoc in nature. The dynamic architecture of the network is monitored by exchanging so-called Network State Beacons (NSBs) between key network nodes. The Beacon Exchange rate and the network state define both the time and nature of a proactive action to combat network performance degradation at a time of crisis. It is thus essential to optimize these parameters for the dynamic load profile of the network. This paper presents a novel distributed adaptive optimization Beacon Exchange selection model which considers distributed network load for energy-efficient monitoring and proactive reconfiguration of the network. The results show an improvement of 70% in throughput, while maintaining a guaranteed quality of service for a small control-traffic overhead.
Integrating information and knowledge for enterprise innovation
It has widely been accepted that enterprise integration can be a source of socio-technical and cultural problems within organisations wishing to provide a focussed end-to-end business service. This can cause “straitjacketing” of business process architectures, thus suppressing responsive business re-engineering and competitive advantage for some companies. Accordingly, the current typology and emergent forms of Enterprise Resource Planning (ERP) and Enterprise Application Integration (EAI) technologies are set in the context of understanding information and knowledge integration philosophies. As such, key influences and trends in emerging IS integration choices, for end-to-end, cost-effective and flexible knowledge integration, are examined. As touch points across and outside organisations proliferate, via work-flow and relationship management-driven value innovation, aspects of knowledge refinement and knowledge integration pose challenges to maximising the potential of innovation and sustainable success within enterprises. This is in terms of the increasing propensity for data fragmentation and the lack of effective information management, in the light of information overload. Furthermore, the nature of IS mediation, which is inherent within decision making and workflow-based business processes, provides the basis for evaluating the effects of information and knowledge integration. Hence, the authors propose a conceptual, holistic evaluation framework which encompasses these ideas. It is thus argued that such trends, and their implications regarding enterprise IS integration to engender sustainable competitive advantage, require fundamental re-thinking.
CERN openlab Whitepaper on Future IT Challenges in Scientific Research
This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.
Infrastructure-less D2D Communications through Opportunistic Networks
International Mention in the doctoral degree.
In recent years, we have experienced several social media blackouts, which have
shown how much our daily experiences depend on high-quality communication services.
Blackouts have occurred because of technical problems, natural disasters, hacker attacks
or even due to deliberate censorship actions undertaken by governments. In all cases,
people’s spontaneous reaction was to find alternative channels and media through
which to reach their contacts and share their experiences. It has thus clearly
emerged that, although infrastructured networks (and cellular networks in
particular) are well engineered and have been extremely successful so far, other
paradigms for connecting people should also be explored. The most promising of today’s alternative paradigms
is Device-to-Device (D2D) because it allows for building networks almost freely, and
because 5G standards are (for the first time) seriously addressing the possibility of using
D2D communications.
In this dissertation I look at opportunistic D2D networking, possibly operating in an
infrastructure-less environment, and I investigate several schemes through modeling and
simulation, deriving metrics that characterize their performance. In particular, I consider
variations of the Floating Content (FC) paradigm, which was previously proposed in the
technical literature.
Using FC, it is possible to probabilistically store information over a given restricted
local area of interest, by opportunistically spreading it to mobile users while in the area.
In more detail, a piece of information is injected into the area by delivering it to one
or more of the mobile users; it is then opportunistically exchanged among mobile users
whenever they come into proximity of one another, progressively reaching most (ideally all) users in
the area and thus making the information dwell in the area of interest, like in a sort of
distributed storage.
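The floating-storage behaviour described above can be sketched in a few lines of Python. This is a deliberately simplified toy model, not the dissertation's simulator: the random-walk mobility, the anchor-zone radius, the contact range, and the drop-on-exit rule are all illustrative assumptions.

```python
import math
import random

# Toy Floating Content sketch (illustrative assumptions only): nodes move
# randomly in a square area; a piece of content "floats" inside a circular
# anchor zone, replicated on contact and dropped when a carrier leaves.
AREA, ZONE_R, CONTACT_R, N, STEPS = 100.0, 30.0, 5.0, 50, 200

random.seed(1)
pos = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N)]
pos[0] = (AREA / 2, AREA / 2)      # seed node starts inside the zone
has = [False] * N
has[0] = True                      # content injected at the seed node

def in_zone(p):                    # anchor zone centred in the area
    return math.dist(p, (AREA / 2, AREA / 2)) <= ZONE_R

for _ in range(STEPS):
    # simple random-walk mobility (assumed model, wrap-around area)
    pos = [((x + random.uniform(-2, 2)) % AREA,
            (y + random.uniform(-2, 2)) % AREA) for x, y in pos]
    # carriers leaving the anchor zone drop their copy
    has = [h and in_zone(p) for h, p in zip(has, pos)]
    # opportunistic exchange: carriers replicate to in-zone neighbours in range
    for i in [k for k in range(N) if has[k]]:
        for j in range(N):
            if (not has[j] and in_zone(pos[j])
                    and math.dist(pos[i], pos[j]) <= CONTACT_R):
                has[j] = True

print("copies alive in the zone:", sum(has))
```

Depending on node density and zone size, the content either "floats" (most in-zone nodes hold a copy) or dies out when all carriers leave the zone, which is exactly the probabilistic-storage behaviour the abstract describes.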
While previous works on FC almost exclusively concentrated on the communication
component, in this dissertation I look at the storage and computing components of FC,
as well as its capability of transferring information from one area of interest to another.
I first present background work, including a brief review of my Master Thesis activity,
devoted to the design, implementation and validation of a smartphone opportunistic
information sharing application. The goal of the app was to collect experimental data permitting a detailed analysis of the events that occurred, and a careful assessment of
the performance of opportunistic information sharing services. Through experiments, I
showed that many key assumptions commonly adopted in analytical and simulation works
do not hold with current technologies. I also showed that the high density of devices and
the enforcement of long transmission ranges for links at the edge might counter-intuitively
impair performance.
The insight obtained during my Master Thesis work was extremely useful to devise
smart operating procedures for the opportunistic D2D communications considered in this
dissertation. In the core of this dissertation, I first propose and study a set of schemes
that explore and combine different information dissemination paradigms with real
user mobility and predictions, focused on the smart diffusion of content over disjoint
areas of interest. To analyze the viability of such schemes, I have implemented a Python
simulator to evaluate the average availability and lifetime of a piece of information, as
well as storage usage and network utilization metrics. Comparing the performance of
these predictive schemes with state-of-the-art approaches, results demonstrate the need
for smart usage of communication opportunities and storage. The proposed algorithms
allow for a substantial reduction in network activity, decreasing the number of data
exchanges by up to 92% and using up to 50% less on-device storage,
while guaranteeing the dissemination of information with performance similar to legacy
epidemic dissemination protocols.
In a second step, I have worked on the analysis of the storage capacity of probabilistic
distributed storage systems, developing a simple yet powerful information theoretical
analysis based on a mean field model of opportunistic information exchange. I have
also extended the previous simulator to compare the numerical results generated by the
analytical model to the predictions of realistic simulations under different setups, showing
in this way the accuracy of the analytical approach, and characterizing the properties of
the system storage capacity.
I conclude from analysis and simulation results that when the density of contents seeded
in a floating system is larger than the maximum amount the system can sustain in
steady state, the mean content availability decreases and the stored information
saturates due to resource contention. In the presence of static nodes, in
a system with infinite host memory and at the mean field limit, there is no upper bound
on the amount of injected content that a floating system can sustain. Without
static nodes, however, as the injected information increases, the amount of stored
information eventually reaches a saturation value, corresponding to the injected
information at which the mean time spent exchanging content during a contact equals
the mean duration of a contact.
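The saturation condition above has a simple per-contact reading: a pair of nodes can exchange at most (link rate × mean contact duration) worth of content, so once the injected content volume exceeds that budget, the exchange truncates and stored information stops growing. The sketch below illustrates this with assumed numbers (the bandwidth and contact duration are not taken from the dissertation).

```python
# Toy illustration of the saturation condition (assumed parameters):
# per contact, nodes can exchange at most BANDWIDTH * T_CONTACT bytes.
BANDWIDTH = 2.0e6    # link rate in bytes/s (assumption)
T_CONTACT = 10.0     # mean contact duration in seconds (assumption)
budget = BANDWIDTH * T_CONTACT   # max bytes exchangeable in one contact

def exchanged_per_contact(injected_bytes):
    # Below the budget, the transfer time injected/BANDWIDTH is shorter
    # than T_CONTACT and everything floats; above it, the contact ends
    # before the exchange completes, so the exchanged amount saturates.
    return min(injected_bytes, budget)

for injected in [5e6, 15e6, 20e6, 40e6, 80e6]:
    print(f"injected {injected / 1e6:5.0f} MB -> "
          f"floats {exchanged_per_contact(injected) / 1e6:5.0f} MB")
```

With these numbers the budget is 20 MB per contact: injecting 5 or 15 MB floats in full, while 40 or 80 MB still floats only 20 MB, i.e. the stored information plateaus exactly where exchange time equals contact time.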
As a final step of my dissertation, I have also explored by simulation the computing
and learning capabilities of an infrastructure-less opportunistic communication, storage and computing system, considering an environment that hosts a distributed Machine
Learning (ML) paradigm that uses observations collected in the area over which the FC
system operates to infer properties of the area. Results show that the ML system can
operate in two regimes, depending on the load of the FC scheme. At low FC load, the ML
system in each node operates on observations collected by all users and opportunistically
shared among nodes. At high FC load, especially when the data to be opportunistically
exchanged becomes too large to be transmitted during the average contact time between
nodes, the ML system can only exploit the observations endogenous to each user, which
are much less numerous. As a result, I conclude that such setups are adequate to support
general instances of distributed ML algorithms with continuous learning only under
low to medium loads of the FC system. While the load of the FC system
induces a sort of phase transition on the ML system performance, the effect of computing
load is more progressive. When the computing capacity is not sufficient to train on all
observations, some are skipped, and performance progressively declines.
In summary, with respect to traditional studies of the FC opportunistic information
diffusion paradigm, which only look at the communication component over one area of
interest, I have considered three types of extensions by looking at the performance of FC:
- over several disjoint areas of interest;
- in terms of information storage capacity;
- in terms of computing capacity that supports distributed learning.
The three topics are treated respectively in Chapters 3 to 5.
This work has been supported by IMDEA Networks Institute.
Doctoral Programme in Telematics Engineering, Universidad Carlos III de Madrid.
President: Claudio Ettori Casetti.- Secretary: Antonio de la Oliva Delgado.- Panel member: Christoph Somme
AMUSE: autonomic management of ubiquitous e-Health systems
Future e-Health systems will consist of low-power on-body wireless sensors attached to mobile users that interact with a ubiquitous computing environment to monitor the health and well-being of patients in hospitals or at home. Patients and health practitioners have very little technical computing expertise, so these systems need to be self-configuring and self-managing with little or no user input. More importantly, they should adapt autonomously to changes resulting from user activity, device failure, and the addition or loss of services. We propose the Self-Managed Cell (SMC) as an architectural pattern for all such types of ubiquitous computing applications and, as an exemplar, use an e-Health application in which on-body sensors monitor a patient living at home. We describe the services comprising the SMC and discuss cross-SMC interactions as well as the composition of SMCs into larger structures.
Optimising Network Control Traffic in Resource Constrained MANETS
The exchange of Network State Beacons (NSBs) is crucial to monitoring the dynamic state of MANETs such as sensor networks. The rate of beacon exchange (FX) and the network state define both the time and nature of a proactive action to reconfigure the network in order to combat performance degradation at a time of crisis. It is thus essential to select the FX within optimized bounds, so that minimal control traffic is incurred by state maintenance and reconfiguration activities. This paper presents a novel distributed model that selects optimized bounds for FX selection and adapts dynamically to the load profile of the network for energy-efficient monitoring and proactive reconfiguration.
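One way to read the bounded beacon-rate idea in this abstract is as a beacon interval clamped between two optimized bounds and adapted to the current load. The sketch below is an illustrative interpretation only: the bound values and the linear load-to-interval mapping are assumptions for exposition, not the model proposed in the paper.

```python
# Illustrative sketch of bounded, load-adaptive beacon-interval selection
# (T_MIN, T_MAX and the linear mapping are assumed, not the paper's model).
T_MIN, T_MAX = 1.0, 30.0   # optimized bounds on the beacon interval (seconds)

def beacon_interval(load):
    """Map a normalized load in [0, 1] to a beacon interval in seconds.

    Under heavy load we beacon more often, tracking network state closely
    so proactive reconfiguration can trigger in time; under light load we
    back off toward T_MAX to save energy and control traffic. The result
    is always clamped to the [T_MIN, T_MAX] bounds.
    """
    load = min(max(load, 0.0), 1.0)
    return T_MAX - load * (T_MAX - T_MIN)

print(beacon_interval(0.0))   # idle network -> slow beaconing (30.0 s)
print(beacon_interval(1.0))   # crisis load  -> fast beaconing (1.0 s)
```

Keeping the interval inside fixed bounds is what caps the control-traffic overhead: even at zero load some state is still exchanged, and even in a crisis the beacon rate cannot grow without limit.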
From MANET to people-centric networking: Milestones and open research challenges
In this paper, we discuss the state of the art of (mobile) multi-hop ad hoc networking with the aim of presenting the current status of the research activities and identifying the consolidated research areas, with limited research opportunities, and the hot and emerging research areas for which further research is required. We start by briefly discussing the MANET paradigm, and why the research on MANET protocols is now a cold research topic. Then we analyze the active research areas. Specifically, after discussing the wireless-network technologies, we analyze four successful ad hoc networking paradigms that emerged from the MANET world: mesh networks, opportunistic networks, vehicular networks, and sensor networks. We also present an emerging research direction in the multi-hop ad hoc networking field: people-centric networking, triggered by the increasing penetration of smartphones in everyday life, which is generating a people-centric revolution in computing and communications.