Social-aware Forwarding in Opportunistic Wireless Networks: Content Awareness or Obliviousness?
With the current host-based Internet architecture, networking faces
limitations in dynamic scenarios, due mostly to host mobility. The ICN
paradigm mitigates such problems by removing the need to keep an end-to-end
transport session established for the lifetime of the data transfer. Moreover,
the ICN concept solves the mismatch between the Internet architecture and the
way users would like to use it: currently a user needs to know the topological
location of the hosts involved in the communication, when all the user wants
is to get the data, independently of its location. Most research efforts aim
to come up with a stable ICN architecture in fixed networks, with few examples
in ad-hoc and vehicular networks. However, the Internet is becoming more
pervasive, with powerful personal mobile devices that allow users to form
dynamic networks in which content may be exchanged at all times and at low
cost. Such pervasive wireless networks suffer from different levels of
disruption due to user mobility, physical obstacles, lack of cooperation, and
intermittent connectivity, among other factors. This paper discusses the
combination of content knowledge (e.g., type and interested parties) and
social awareness within opportunistic networking as a way to drive the
deployment of ICN solutions in disruptive networking scenarios. With this goal
in mind, we go over a few examples of social-aware content-based opportunistic
networking proposals that use social awareness to allow content dissemination
independently of the level of network disruption. To show how much content
knowledge can improve social-based solutions, we illustrate, by means of
simulation, content-oblivious and content-oriented proposals in scenarios
based on synthetic mobility patterns and real human traces.
Comment: 7 pages, 6 figures
Social-aware Opportunistic Routing Protocol based on User's Interactions and Interests
Nowadays, routing proposals must deal with a panoply of heterogeneous
devices, intermittent connectivity, and the users' constant need for
communication, even in rather challenging networking scenarios. Thus, we
propose SCORP, a Social-aware Content-based Opportunistic Routing Protocol
that considers the users' social interactions and their interests to improve
data delivery in dense urban scenarios. Through simulations, using scenarios
based on synthetic mobility and real human traces, we compare the performance
of our solution against two other social-aware solutions, dLife and Bubble
Rap, and the social-oblivious Spray and Wait, to show that the combination of
social awareness and content knowledge can be beneficial when disseminating
data in challenging networks.
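To make the idea concrete, here is a minimal, hypothetical Python sketch of a
SCORP-style forwarding rule; the data structures, social-weight metric, and
decision predicate are our assumptions for illustration, not the protocol's
actual specification. A carrier hands a message to an encountered neighbour if
the neighbour is itself interested in the content type, or if it is socially
closer to interested nodes than the carrier is.

    from dataclasses import dataclass

    @dataclass
    class Message:
        content_type: str          # e.g. "news", "sports"

    def social_weight(encounters, a, b):
        """Fraction of a's recorded encounter time spent with b (assumed metric)."""
        history = encounters.get(a, {})
        total = sum(history.values())
        return history.get(b, 0.0) / total if total else 0.0

    def should_forward(msg, carrier, neighbour, interests, encounters):
        """Forward if the neighbour consumes this content type itself, or is
        socially closer to some node interested in it than the carrier is."""
        if msg.content_type in interests.get(neighbour, set()):
            return True  # direct delivery to an interested party
        interested = [n for n, topics in interests.items()
                      if msg.content_type in topics]
        carrier_u = max((social_weight(encounters, carrier, n)
                         for n in interested), default=0.0)
        neighbour_u = max((social_weight(encounters, neighbour, n)
                           for n in interested), default=0.0)
        return neighbour_u > carrier_u

    # Toy example: "carol" spends 80% of her contact time with "alice", who is
    # interested in news, while carrier "dave" spends only 10%, so dave forwards.
    interests = {"alice": {"news"}, "bob": {"sports"}}
    encounters = {"carol": {"alice": 8.0, "bob": 2.0},
                  "dave": {"alice": 1.0, "eve": 9.0}}
    print(should_forward(Message("news"), "dave", "carol", interests, encounters))

The point of combining both signals is visible in the predicate: content
knowledge short-circuits to direct delivery, while social awareness ranks
relays when no interested party is in range.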
Exploring heterogeneity of unreliable machines for p2p backup
P2P architecture is a viable option for enterprise backup. In contrast to
dedicated backup servers, nowadays the standard solution, making backups
directly on an organization's workstations should be cheaper (as existing
hardware is used), more efficient (as there is no single bottleneck server),
and more reliable (as the machines are geographically dispersed).
We present the architecture of a p2p backup system that uses pairwise
replication contracts between a data owner and a replicator. In contrast to
standard p2p storage systems that use a DHT directly, the contracts allow our
system to optimize replica placement according to a specific optimization
strategy, and thus to take advantage of the heterogeneity of the machines and
the network. Such optimization is particularly appealing in the context of
backup: replicas can be geographically dispersed, the load sent over the
network can be minimized, or the optimization goal can be to minimize the
backup/restore time. However, managing the contracts, keeping them consistent,
and adjusting them in response to a dynamically changing environment is
challenging.
We built a scientific prototype and ran experiments on 150 workstations in
the university's computer laboratories and, separately, on 50 PlanetLab nodes.
We found that the main factor affecting the quality of the system is the
availability of the machines. Yet, our main conclusion is that it is possible
to build an efficient and reliable backup system on highly unreliable machines
(our computers had just 13% average availability).
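As a rough illustration of pairwise replication contracts, the hypothetical
Python sketch below models a per-machine contract list and one possible
placement strategy that favours highly available machines at other sites; the
names and the scoring heuristic are assumptions standing in for the paper's
pluggable optimization strategies, not its actual design.

    from dataclasses import dataclass, field

    @dataclass
    class Machine:
        name: str
        availability: float                 # observed fraction of time online
        location: str                       # site / laboratory identifier
        contracts: list = field(default_factory=list)

    @dataclass
    class Contract:
        owner: Machine
        replicator: Machine

    def place_replicas(owner, candidates, k=3):
        """Pick k replicators for the owner's data, preferring highly
        available machines and penalising those at the owner's own site
        (an assumed heuristic approximating geographic dispersion)."""
        def score(m):
            same_site_penalty = 0.5 if m.location == owner.location else 1.0
            return m.availability * same_site_penalty
        chosen = sorted((m for m in candidates if m is not owner),
                        key=score, reverse=True)[:k]
        contracts = [Contract(owner, r) for r in chosen]
        for c in contracts:                  # both parties record the contract
            c.owner.contracts.append(c)
            c.replicator.contracts.append(c)
        return contracts

    # Toy example: two labs; the slightly less available off-site machines win.
    lab_a = [Machine(f"ws{i}", 0.13, "labA") for i in range(5)]
    lab_b = [Machine(f"ws{i + 5}", 0.10, "labB") for i in range(5)]
    owner = lab_a[0]
    for c in place_replicas(owner, lab_a + lab_b):
        print(c.replicator.name, c.replicator.location)

Because contracts are explicit objects rather than DHT key assignments, the
placement function can be swapped for any strategy, which is the flexibility
the paper contrasts against DHT-based systems.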
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy onto various Data Grid systems, not only
to validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems so as to better
understand their goals and their methodology, which helps evaluate their
applicability to similar problems. The taxonomy also provides a "gap analysis"
of the area, through which researchers can identify new issues for
investigation. We also hope that the proposed taxonomy and mapping give new
practitioners an easy way into this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
BitTorrent Sync: Network Investigation Methodology
The volume of personal information and data most Internet users find
themselves amassing is ever increasing, and the fast pace of the modern world
means that most users require instant access to their files. Millions of these
users turn to cloud-based file synchronisation services, such as Dropbox,
Microsoft Skydrive, Apple iCloud and Google Drive, to enable "always-on"
access to their most up-to-date data from any computer or mobile device with
an Internet connection. The prevalence of media coverage of privacy invasions
and data protection breaches has caused many to review the security practices
around their personal information online. To provide an alternative to
cloud-based file backup and synchronisation, BitTorrent Inc. released a
cloudless backup and synchronisation service, named BitTorrent Sync, to alpha
testers in April 2013. BitTorrent Sync's popularity rose dramatically
throughout 2013, reaching over two million active users by the end of the
year. This paper outlines a number of scenarios where network investigation of
the service may prove invaluable as part of a digital forensic investigation.
An investigation methodology is proposed, outlining the steps required to
retrieve digital evidence from the network, and the results of a
proof-of-concept investigation are presented.
Comment: 9th International Conference on Availability, Reliability and
Security (ARES 2014)
DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling
The dynamic provisioning of virtualized resources offered by cloud computing
infrastructures allows applications deployed in a cloud environment to
automatically increase and decrease the amount of resources they use. This
capability is called auto-scaling, and its main purpose is to automatically
adjust the scale of the system running the application so as to satisfy the
varying workload with minimum resource utilization. The need for auto-scaling
is particularly important during workload peaks, during which applications may
need to scale up to extremely large-scale systems.
Both the research community and the main cloud providers have already
developed auto-scaling solutions. However, most research solutions are
centralized and not suitable for managing large-scale systems; moreover, cloud
providers' solutions are bound to the limitations of a specific provider in
terms of resource prices, availability, reliability, and connectivity.
In this paper we propose DEPAS, a decentralized probabilistic auto-scaling
algorithm integrated into a P2P architecture that is cloud-provider
independent, thus allowing services to be auto-scaled over multiple cloud
infrastructures at the same time. Our simulations, based on real service
traces, show that our approach is capable of (i) keeping the overall
utilization of all instantiated cloud resources within a target range, and
(ii) maintaining service response times close to those obtained with optimal
centralized auto-scaling approaches.
Comment: Submitted to Springer Computing
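To illustrate the flavour of decentralized probabilistic auto-scaling, the
hypothetical Python sketch below shows a decision rule that each instance
could run independently; the probability formula and the target range are our
assumptions for exposition, not DEPAS's exact rule. Because every instance
draws independently with probability deficit/n, the expected number of
additions matches what a central controller would order, with no coordination
required.

    import random

    def autoscale_step(local_utilization, n_instances,
                       target_low=0.5, target_high=0.8):
        """Run independently by each service instance.
        Returns 'add', 'remove', or 'keep'."""
        target = (target_low + target_high) / 2.0
        if local_utilization > target_high:
            # Instances the whole system is short of (illustrative estimate):
            deficit = n_instances * (local_utilization / target - 1.0)
            p_add = min(1.0, deficit / n_instances)
            return "add" if random.random() < p_add else "keep"
        if local_utilization < target_low and n_instances > 1:
            surplus = n_instances * (1.0 - local_utilization / target)
            p_remove = min(1.0, surplus / n_instances)
            return "remove" if random.random() < p_remove else "keep"
        return "keep"

    # Example: at 90% utilization against the 65% midpoint target, each of 100
    # instances adds capacity with probability ~0.385, so ~38 new instances are
    # spawned in expectation, bringing utilization back to ~0.65 (90/138).
    print(autoscale_step(0.90, 100))

The appeal of such a rule is that it needs only locally observable quantities
(utilization and an estimate of the pool size), which is what makes a
provider-independent P2P deployment possible.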