Social-aware Forwarding in Opportunistic Wireless Networks: Content Awareness or Obliviousness?
With the current host-based Internet architecture, networking faces limitations in dynamic scenarios, due mostly to host mobility. The ICN paradigm mitigates such problems by removing the need to keep an end-to-end transport session established during the lifetime of a data transfer. Moreover, the ICN concept solves the mismatch between the Internet architecture and the way users would like to use it: currently, a user needs to know the topological location of the hosts involved in the communication, when he/she just wants to get the data, independently of its location. Most research efforts aim to come up with a stable ICN architecture in fixed networks, with few examples in ad-hoc and vehicular networks. However, the Internet is becoming more pervasive, with powerful personal mobile devices that allow users to form dynamic networks in which content may be exchanged at all times and at low cost. Such pervasive wireless networks suffer from different levels of disruption due to user mobility, physical obstacles, lack of cooperation, and intermittent connectivity, among other factors. This paper discusses the combination of content knowledge (e.g., type and interested parties) and social awareness within opportunistic networking so as to drive the deployment of ICN solutions in disruptive networking scenarios. With this goal in mind, we go over a few examples of social-aware, content-based opportunistic networking proposals that use social awareness to allow content dissemination independently of the level of network disruption. To show how much content knowledge can improve social-based solutions, we illustrate by means of simulation some content-oblivious/content-oriented proposals in scenarios based on synthetic mobility patterns and real human traces. (Comment: 7 pages, 6 figures)
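The forwarding rule the abstract contrasts (content awareness versus obliviousness) can be illustrated with a minimal sketch. All names, fields, and the utility rule below are hypothetical, not taken from any of the surveyed proposals: a carrier hands a message to an encountered node only if that node is both more socially central and in a community interested in the content type, whereas a content-oblivious scheme would use centrality alone.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A device in an opportunistic network (all fields hypothetical)."""
    name: str
    centrality: float                             # social importance, 0..1
    interests: set = field(default_factory=set)   # content types this node's community wants

def should_forward(carrier: Node, encountered: Node, content_type: str) -> bool:
    """Content-aware social forwarding: relay only if the encountered node
    is socially better placed AND interested in this content type."""
    interested = content_type in encountered.interests
    better_placed = encountered.centrality > carrier.centrality
    return interested and better_placed

a = Node("A", centrality=0.3, interests={"news"})
b = Node("B", centrality=0.8, interests={"news", "video"})
print(should_forward(a, b, "news"))   # True: B is more central and interested
print(should_forward(a, b, "music"))  # False: content knowledge blocks the copy
```

The second call shows the point of content awareness: a purely social scheme would still spend a transmission on node B.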
Towards Data Mining in Large and Fully Distributed Peer-To-Peer Overlay Networks
The Internet, which is becoming a more and more dynamic and extremely heterogeneous network, has recently become a platform for huge, fully distributed peer-to-peer overlay networks containing millions of nodes, typically for the purpose of information dissemination and file sharing. This paper targets the problem of analyzing data that are scattered over such a huge and dynamic set of nodes, where each node may store very little data but where the total amount of data is immense due to the large number of nodes. We present distributed algorithms for effectively calculating basic statistics of the data using the recently introduced newscast model of computation, and we demonstrate how to implement basic data mining algorithms based on these techniques. We argue that the suggested techniques are efficient, robust, and scalable, and that they preserve the privacy of the data.
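The kind of basic statistic the abstract mentions can be computed without a coordinator by gossip-style pairwise averaging. The sketch below is a simplification in the spirit of the newscast model, not the paper's algorithm: real newscast also gossips partial peer views, while here every node can contact every other directly.

```python
import random

def gossip_mean(values, rounds=50, seed=0):
    """Gossip-based mean estimation: in each round every node exchanges
    its current estimate with a random peer and both adopt the average.
    Pairwise averaging preserves the global sum, so all estimates
    converge to the true mean with no central server."""
    rng = random.Random(seed)
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            avg = (est[i] + est[j]) / 2.0
            est[i] = est[j] = avg
    return est

data = [1.0, 2.0, 3.0, 4.0, 10.0]
estimates = gossip_mean(data)
print(estimates[0])  # every node's estimate is close to the true mean, 4.0
```

Because each exchange only reveals a running average to one peer, nodes never ship their raw data anywhere, which is the privacy property the abstract alludes to.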
Towards web site user's profile: log file analysis.
The Internet is a remote, innovative, extremely dynamic, and widely accessible communication medium. As in all other human communication formats, we observe the development and adoption of its own language, inherent to its multimedia aspects. Embrapa Satellite Monitoring has been using the Internet for more than a decade as a medium for disseminating its research results and for interacting with clients, partners, and web site users. In order to evaluate web site usage and the performance of the e-communication system, the Webalizer software has been used to track and calculate statistics based on web server log file analysis. The objective of this study is to analyze the data and evaluate indicators related to the origin of requests (search string, country, time), the actions performed by users (entry pages, agents), and system performance (error messages). This will help to remodel the web site design to improve the interaction dynamics, and also to develop a customized log file analyser that would retrieve coherent and real information.
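The statistics described above come from parsing server access logs. A minimal sketch of that kind of analysis is shown below, assuming Apache "common log format" input; the sample lines and paths are invented for illustration, and a real Webalizer-style tool would additionally resolve countries, sessions, and entry pages.

```python
import re
from collections import Counter

# Hypothetical sample lines in Apache common log format:
# host, identity, user, timestamp, request, status, bytes.
LOG = """\
203.0.113.5 - - [10/Oct/2023:13:55:36 -0300] "GET /index.html HTTP/1.1" 200 2326
203.0.113.5 - - [10/Oct/2023:13:55:40 -0300] "GET /research/soils.html HTTP/1.1" 200 5120
198.51.100.7 - - [10/Oct/2023:14:01:02 -0300] "GET /missing.html HTTP/1.1" 404 512
"""

LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\d+|-)')

pages = Counter()    # requested pages
errors = Counter()   # error messages, one of the indicators in the study
for line in LOG.splitlines():
    m = LINE_RE.match(line)
    if not m:
        continue
    host, timestamp, method, path, status, size = m.groups()
    pages[path] += 1
    if status.startswith(("4", "5")):
        errors[status] += 1

print(pages.most_common(1))  # most requested page
print(errors)                # tally of error status codes
```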
Over-the-air software updates in the internet of things : an overview of key principles
Due to the fast pace at which the IoT is evolving, there is an increasing need to support over-the-air software updates for security patches, bug fixes, and software extensions. To this end, multiple over-the-air techniques have been proposed, each covering a specific aspect of the update process, such as (partial) code updates, data dissemination, and security. However, each technique introduces overhead, especially in terms of energy consumption, thereby impacting the operational lifetime of battery-constrained devices. Until now, a comprehensive overview describing the different update steps and quantifying the impact of each step has been missing from the scientific literature, making it hard to assess the overall feasibility of an over-the-air update. To remedy this, our article analyzes which parts of an IoT operating system are most often updated after device deployment, proposes a step-by-step approach to integrate software updates into IoT solutions, and quantifies the energy cost of each of the involved steps. The results show that besides the obvious dissemination cost, other phases such as security also introduce significant overhead. For instance, a typical firmware update requires 135.026 mJ, of which the main portions are data dissemination (63.11 percent) and encryption (5.29 percent). When modular updates are used instead, the energy cost (e.g., for a MAC update) is reduced to 26.743 mJ (48.69 percent for data dissemination and 26.47 percent for encryption).
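The quoted percentages can be turned into absolute per-phase figures with simple arithmetic. The calculation below uses only the totals and percentages stated in the abstract; the rounding to one decimal is ours.

```python
# Per-phase energy breakdown, from the totals and percentages quoted above.
full_total = 135.026     # mJ, typical full firmware update
modular_total = 26.743   # mJ, modular (e.g., MAC) update

full_dissemination = full_total * 0.6311       # ~85.2 mJ
full_encryption = full_total * 0.0529          # ~7.1 mJ
modular_dissemination = modular_total * 0.4869 # ~13.0 mJ
modular_encryption = modular_total * 0.2647    # ~7.1 mJ

print(f"full update:    dissemination {full_dissemination:.1f} mJ, "
      f"encryption {full_encryption:.1f} mJ")
print(f"modular update: dissemination {modular_dissemination:.1f} mJ, "
      f"encryption {modular_encryption:.1f} mJ")
print(f"modular saving: {full_total - modular_total:.1f} mJ "
      f"({(1 - modular_total / full_total) * 100:.0f}% less energy)")
```

Note that while encryption grows from about 5 percent to about 26 percent of the modular total, its absolute cost stays roughly constant (~7 mJ): the security overhead is largely fixed, so it dominates as the dissemination payload shrinks.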
Simulation of Hybrid Edge Computing Architectures
Dealing with a growing amount of data is a crucial challenge for the future of information and communication technologies. More and more devices are expected to transfer data through the Internet; therefore, new solutions have to be designed in order to guarantee low latency and efficient traffic management. In this paper, we propose a solution that combines the edge computing paradigm with a decentralized communication approach based on Peer-to-Peer (P2P). According to the proposed scheme, participants in the system are employed to relay messages of other devices, so as to reach a destination (usually a server at the edge of the network) even in the absence of an Internet connection. This approach can be useful in dynamic and crowded environments, allowing the system to outsource part of the traffic management from the Cloud servers to end-devices. To evaluate our proposal, we carry out experiments with the help of LUNES, an open-source discrete-event simulator specifically designed for distributed environments. In our simulations, we tested several system configurations in order to understand the impact of the algorithms involved in the data dissemination and of some possible network arrangements.
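The relaying idea (reaching the edge server through nearby peers when no direct Internet link exists) can be sketched as a reachability search over the device contact graph. The graph, node names, and breadth-first formulation below are illustrative assumptions, not the paper's simulation model.

```python
from collections import deque

def hops_to_edge(contacts, source, edge_node):
    """BFS over a device contact graph: a message is relayed hop by hop
    by nearby participants until it reaches the edge server. Returns the
    number of hops, or None if no relay path exists."""
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == edge_node:
            return hops
        for peer in contacts.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, hops + 1))
    return None  # unreachable: no path of relays to the edge

# Device d1 has no direct link to the edge server but can relay via d2, d3.
contacts = {
    "d1": ["d2"],
    "d2": ["d1", "d3"],
    "d3": ["d2", "edge"],
    "edge": ["d3"],
}
print(hops_to_edge(contacts, "d1", "edge"))  # 3
```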
SPAD: a distributed middleware architecture for QoS enhanced alternate path discovery
In the next-generation Internet, the network will evolve from a plain communication medium into one that provides endless services to the users. These services will be composed of multiple cooperative distributed application elements; we name these services overlay applications. The cooperative application elements within an overlay application will build a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS experienced on the communication paths of the corresponding overlay association. In this paper, we present SPAD (Super-Peer Alternate path Discovery), a distributed middleware architecture that aims at providing enhanced QoS between end-points within an overlay association. To achieve this goal, SPAD provides a complete scheme to discover and utilize composite alternate end-to-end paths with better QoS than the path given by the default IP routing mechanisms.
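The core selection step (keep a composite alternate path only when it beats the default IP path) can be illustrated with a toy model. The node names, the single latency metric, and the two-segment composition below are hypothetical simplifications; SPAD's actual discovery protocol and QoS model are more elaborate.

```python
def best_alternate_path(default_latency, segment_latency, relays):
    """Compose two-segment paths src -> relay -> dst and keep the best
    one only if its end-to-end latency beats the default IP path."""
    best = ("default", default_latency)
    for relay in relays:
        composite = (segment_latency[("src", relay)]
                     + segment_latency[(relay, "dst")])
        if composite < best[1]:
            best = (relay, composite)
    return best

# Hypothetical measured latencies (ms) for path segments via two super-peers.
latencies = {
    ("src", "p1"): 20, ("p1", "dst"): 25,   # composite: 45 ms
    ("src", "p2"): 40, ("p2", "dst"): 50,   # composite: 90 ms
}
print(best_alternate_path(80, latencies, ["p1", "p2"]))  # ('p1', 45)
```

If the default path were already fast (say 30 ms), the function falls back to it, which mirrors the idea that alternate paths are only deployed when they actually improve QoS.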
Distributed simulation and the grid: Position statements
The Grid provides a new and unrivaled technology for large-scale distributed simulation, as it enables collaboration and the use of distributed computing resources. This panel paper presents the views of four researchers in the area of Distributed Simulation and the Grid. Together, we try to identify the main research issues involved in applying Grid technology to distributed simulation and the key future challenges that need to be solved to achieve this goal. Such challenges include not only technical challenges, but also political ones, such as management methodology for the Grid and the development of standards. The benefits of the Grid to end-user simulation modelers are also discussed.