5,397 research outputs found
CASPR: Judiciously Using the Cloud for Wide-Area Packet Recovery
We revisit a classic networking problem -- how to recover from lost packets
in the best-effort Internet. We propose CASPR, a system that judiciously
leverages the cloud to recover from lost or delayed packets. CASPR supplements
and protects best-effort connections by sending a small number of coded packets
along the highly reliable but expensive cloud paths. When receivers detect
packet loss, they recover packets with the help of the nearby data center, not
the sender, thus providing quick and reliable packet recovery for
latency-sensitive applications. Using a prototype implementation and its
deployment on the public cloud and the PlanetLab testbed, we quantify the
benefits of CASPR in providing fast, cost-effective packet recovery. Using
controlled experiments, we also explore how these benefits translate into
improvements up and down the network stack.
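The core recovery idea can be illustrated with a minimal sketch (the function names and window size are our own, not CASPR's): the sender emits one XOR parity packet per window of k data packets along the reliable path, so a receiver that loses any single packet in the window can rebuild it locally rather than waiting for a sender retransmission.

```python
# Hedged sketch of single-loss XOR-parity recovery, the simplest form of
# the coded-packet idea; CASPR's actual coding scheme may differ.
from functools import reduce

def xor_parity(packets: list[bytes]) -> bytes:
    """XOR all packets together (assumes equal lengths)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Rebuild at most one missing packet in a window of k using the parity."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = xor_parity(list(received.values()) + [parity])
    return received

window = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(window)
lost = {i: p for i, p in enumerate(window) if i != 2}   # packet 2 was lost
print(recover(lost, parity, k=4)[2])                    # b'pkt2'
```

Because the parity is tiny relative to the window, the expensive cloud path carries only a small fraction of the traffic, which is the cost argument the abstract makes.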
A Holistic Redundancy- and Incentive-Based Framework to Improve Content Availability in Peer-to-Peer Networks
Peer-to-Peer (P2P) technology has emerged as an important alternative to the traditional client-server communication paradigm for building large-scale distributed systems. P2P enables the creation, dissemination and access to information at low cost and without the need for dedicated coordinating entities. However, existing P2P systems fail to provide high levels of content availability, which limits their applicability and adoption. This dissertation takes a holistic approach to devise mechanisms that improve content availability in large-scale P2P systems.
Content availability in P2P can be impacted by hardware failures and churn. Hardware failures, in the form of disk or node failures, render information inaccessible. Churn, an inherent property of P2P, is the collective effect of the users’ uncoordinated behavior, which occurs when a large percentage of nodes join and leave frequently. Such behavior reduces content availability significantly. Mitigating the combined effect of hardware failures and churn on content availability in P2P requires new and innovative solutions that go beyond those applied in existing distributed systems. To address this challenge, the thesis proposes two complementary, low-cost mechanisms, whereby nodes self-organize to overcome failures and improve content availability. The first mechanism is a low-complexity and highly flexible hybrid redundancy scheme, referred to as Proactive Repair (PR). The second mechanism is an incentive-based scheme that promotes cooperation and enforces fair exchange of resources among peers. These mechanisms provide the basis for the development of distributed self-organizing algorithms to automate PR and, through incentives, maximize their effectiveness in realistic P2P environments.
Our proposed solution is evaluated using a combination of analytical and experimental methods. The analytical models are developed to determine the availability and repair-cost properties of PR. The results indicate that PR’s repair cost outperforms that of other redundancy schemes. The experimental analysis was carried out using simulation and the development of a testbed. The simulation results confirm that PR improves content availability in P2P. The proposed mechanisms are implemented and tested using a DHT-based P2P application environment. The experimental results indicate that the incentive-based mechanism can promote fair exchange of resources and limit the impact of uncooperative behaviors such as “free-riding”.
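The availability analysis behind redundancy schemes like PR can be sketched with the standard binomial-tail model (this formulation is ours, not necessarily the dissertation's exact one): with an (n, k) erasure code, content stays available while at least k of the n fragment holders are online, and if each node is independently online with probability p, availability is a binomial tail sum.

```python
# Hedged sketch of (n, k) erasure-code availability under independent
# node uptime p; the parameters below are illustrative, not the thesis's.
from math import comb

def availability(n: int, k: int, p: float) -> float:
    """P(at least k of n independent nodes are online)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Plain replication (k = 1) vs. an erasure code at the same 3x overhead:
print(round(availability(3, 1, 0.5), 4))   # 3 replicas, p = 0.5 -> 0.875
print(round(availability(12, 4, 0.5), 4))  # (12, 4) code, 3x overhead -> 0.927
```

The comparison shows why hybrid redundancy is attractive under churn: at equal storage overhead, spreading fragments wider tolerates more simultaneous departures.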
Towards a Novel Cooperative Logistics Information System Framework
Supply chains and logistics have a growing importance in the global economy.
Supply Chain Information Systems around the world are heterogeneous, and each
one can both produce and receive massive amounts of structured and unstructured
data in real time, usually generated by information systems, connected objects,
or manually by humans. This heterogeneity arises because Logistics Information
Systems components and processes are developed with different modelling methods
and run on many platforms; hence, the decision-making process is difficult in
such a multi-actor environment. In this paper we identify some
current challenges and integration issues between separately designed Logistics
Information Systems (LIS), and we propose a Distributed Cooperative Logistics
Platform (DCLP) framework based on NoSQL, which facilitates real-time
cooperation between stakeholders and improves decision making process in a
multi-actor environment. We also include a case study of a Hospital Supply
Chain (HSC) and a brief discussion of perspectives and future scope of work.
Shortcuts through Colocation Facilities
Network overlays, running on top of the existing Internet substrate, are of
perennial value to Internet end-users in the context of, e.g., real-time
applications. Such overlays can employ traffic relays to yield path latencies
lower than the direct paths, a phenomenon known as Triangle Inequality
Violation (TIV). Past studies identify the opportunities of reducing latency
using TIVs. However, they do not investigate the gains of strategically
selecting relays in Colocation Facilities (Colos). In this work, we answer the
following questions: (i) how Colo-hosted relays compare with other relays as
well as with the direct Internet, in terms of latency (RTT) reductions; (ii)
what are the best locations for placing the relays to yield these reductions.
To this end, we conduct a large-scale one-month measurement of inter-domain
paths between RIPE Atlas (RA) nodes as endpoints, located at eyeball networks.
We employ as relays PlanetLab nodes, other RA nodes, and machines in Colos. We
examine the RTTs of the overlay paths obtained via the selected relays, as well
as the direct paths. We find that Colo-based relays perform the best and can
achieve latency reductions against direct paths, ranging from a few to 100s of
milliseconds, in 76% of the total cases; 75% (58% of total cases) of these
reductions require only 10 relays in 6 large Colos.
Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 2017.
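The TIV check at the heart of such measurements reduces to a simple comparison (the RTT values and relay names below are made up for illustration): an overlay path src -> relay -> dst beats the direct path whenever rtt(src, relay) + rtt(relay, dst) < rtt(src, dst).

```python
# Hedged sketch of relay selection under Triangle Inequality Violations;
# real studies use large-scale RTT measurements, not hard-coded values.
def best_relay(direct_rtt_ms, relay_rtts):
    """Return (relay, overlay RTT) if some relay beats the direct path."""
    best = min(relay_rtts, key=lambda r: relay_rtts[r][0] + relay_rtts[r][1])
    overlay = sum(relay_rtts[best])
    return (best, overlay) if overlay < direct_rtt_ms else (None, direct_rtt_ms)

# Hypothetical measurements: direct path 180 ms, (src->relay, relay->dst) pairs.
relays = {"colo-frankfurt": (40, 95), "colo-london": (30, 160), "ra-node": (90, 100)}
print(best_relay(180, relays))  # ('colo-frankfurt', 135): a 45 ms TIV gain
```

The study's finding that a handful of well-placed Colo relays capture most of the gains corresponds to this minimization concentrating on a few low-latency hubs.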
Internet-scale storage systems under churn - a study of the steady state using Markov models
Markov model
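The kind of steady-state analysis the title suggests can be sketched with the simplest churn model (the rates below are illustrative, not from the paper): each node alternates between online and offline as a two-state Markov chain, joining at rate lam and leaving at rate mu, so the stationary probability of being online is lam / (lam + mu).

```python
# Hedged sketch of a two-state Markov churn model's steady state; real
# Internet-scale studies fit such rates from measured session traces.
def steady_state_online(lam: float, mu: float) -> float:
    """Stationary P(online) for offline->online rate lam, online->offline rate mu."""
    return lam / (lam + mu)

# A node that rejoins on average every 4 hours and stays online about 1 hour:
print(steady_state_online(lam=1 / 4, mu=1 / 1))  # 0.2
```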
Command & Control: Understanding, Denying and Detecting - A review of malware C2 techniques, detection and defences
In this survey, we first briefly review the current state of cyber attacks,
highlighting significant recent changes in how and why such attacks are
performed. We then investigate the mechanics of malware command and control
(C2) establishment: we provide a comprehensive review of the techniques used by
attackers to set up such a channel and to hide its presence from the attacked
parties and the security tools they use. We then switch to the defensive side
of the problem, and review approaches that have been proposed for the detection
and disruption of C2 channels. We also map such techniques to widely-adopted
security controls, emphasizing gaps or limitations (and success stories) in
current best practices.
Comment: Work commissioned by CPNI, available at c2report.org. 38 pages. Listing abstract compressed from version appearing in report.
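One widely used family of C2 detection heuristics in this space is beaconing detection (the threshold and field names below are our own, not taken from the survey): malware tends to contact its C2 server at near-constant intervals, so an unusually low coefficient of variation in a host's inter-connection times to one destination is suspicious.

```python
# Hedged sketch of interval-regularity beaconing detection; production
# detectors also account for jitter injection, sleep schedules, and DNS.
from statistics import mean, stdev

def looks_like_beaconing(timestamps, cv_threshold=0.1, min_events=5):
    """Flag a connection series whose inter-arrival times are too regular."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return stdev(gaps) / mean(gaps) < cv_threshold

beacon = [0, 60, 121, 180, 240, 299]   # ~60 s heartbeat with small jitter
browsing = [0, 5, 90, 100, 400, 410]   # bursty, human-driven traffic
print(looks_like_beaconing(beacon), looks_like_beaconing(browsing))  # True False
```

Attackers counter exactly this signal by randomizing beacon intervals, which is one reason the survey maps detection techniques against evasion techniques.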