I Know Where You are and What You are Sharing: Exploiting P2P Communications to Invade Users' Privacy
In this paper, we show how to exploit real-time communication applications to
determine the IP address of a targeted user. We focus our study on Skype,
although other real-time communication applications may have similar privacy
issues. We first design a scheme that calls an identified targeted user
inconspicuously to find his IP address, which can be done even if he is behind
a NAT. By calling the user periodically, we can then observe the mobility of
the user. We show how to scale the scheme to observe the mobility patterns of
tens of thousands of users. We also consider the linkability threat, in which
the identified user is linked to his Internet usage. We illustrate this threat
by combining Skype and BitTorrent to show that it is possible to determine the
file-sharing usage of identified users. We devise a scheme based on the
identification field of the IP datagrams to verify with high accuracy whether
the identified user is participating in specific torrents. We conclude that any
Internet user can leverage Skype, and potentially other real-time communication
systems, to observe the mobility and file-sharing usage of tens of millions of
identified users.
Comment: This is the authors' version of the ACM/USENIX Internet Measurement Conference (IMC) 2011 paper.
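The torrent-participation check described above rests on the IPv4 identification field: many hosts increment a single global IP ID counter for every packet they send, so probing a host periodically and watching the counter gaps reveals traffic sent to third parties. A minimal sketch of that side channel, under the assumption of a global sequential counter (the function name and probing model here are illustrative, not the paper's exact scheme):

```python
def estimate_background_packets(probe_ids, wrap=65536):
    """Given the IPv4 Identification values seen in replies to our own
    evenly spaced probes, estimate how many packets the host sent to
    *other* destinations in between.  Assumes a single global,
    sequential IP ID counter: any gap larger than 1 between consecutive
    replies was consumed by traffic we did not generate."""
    extra = 0
    for prev, cur in zip(probe_ids, probe_ids[1:]):
        gap = (cur - prev) % wrap   # handle 16-bit counter wrap-around
        extra += max(gap - 1, 0)    # one increment belongs to our reply
    return extra
```

A host idling between probes yields gaps of 1; a host actively exchanging pieces with torrent peers yields large gaps, which is the signal the verification scheme thresholds on.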
A Multi-perspective Analysis of Carrier-Grade NAT Deployment
As ISPs face IPv4 address scarcity they increasingly turn to network address
translation (NAT) to accommodate the address needs of their customers.
Recently, ISPs have moved beyond employing NATs only directly at individual
customers and instead begun deploying Carrier-Grade NATs (CGNs) to apply
address translation to many independent and disparate endpoints spanning
physical locations, a phenomenon that so far has received little in the way of
empirical assessment. In this work we present a broad and systematic study of
the deployment and behavior of these middleboxes. We develop a methodology to
detect the existence of hosts behind CGNs by extracting non-routable IP
addresses from peer lists we obtain by crawling the BitTorrent DHT. We
complement this approach with improvements to our Netalyzr troubleshooting
service, enabling us to determine a range of indicators of CGN presence as well
as detailed insights into key properties of CGNs. Combining the two data
sources we illustrate the scope of CGN deployment on today's Internet, and
report on characteristics of commonly deployed CGNs and their effect on end
users.
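The non-routable ranges the DHT crawl looks for can be checked with the standard ipaddress module. A sketch under the assumption that a peer announcing an internal address has leaked it through a NAT or CGN (the helper name is ours):

```python
import ipaddress

# Address blocks that should never appear as public peer addresses:
# RFC 1918 private space plus the RFC 6598 "shared address space"
# (100.64.0.0/10) reserved specifically for Carrier-Grade NAT.
NON_ROUTABLE = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "100.64.0.0/10")]

def looks_cgn_internal(addr):
    """True if an address extracted from a BitTorrent DHT peer list
    falls in a non-routable range, i.e. the announcing host likely sits
    behind a NAT or CGN and leaked its internal address."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in NON_ROUTABLE)
```

Addresses in 100.64.0.0/10 are a particularly strong CGN indicator, since that block exists only for carrier-side translation.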
Application acceleration for wireless and mobile data networks
This work studies application acceleration for wireless and mobile data networks. The problem of accelerating applications can be addressed along multiple dimensions. The first dimension is advanced network protocol design, i.e., optimizing the underlying network protocols, particularly the transport-layer and link-layer protocols.
Even with advanced network protocol design, we observe that certain application behaviors can fundamentally limit the performance achievable over wireless and mobile data networks; the performance gap is caused by the complex behaviors of these non-FTP applications. Explicitly dealing with application behavior can improve application performance in these new environments. Along this second dimension, overcoming application behavior, we accelerate specific types of applications, including client-server, peer-to-peer, and location-based applications. In exploring this dimension, we identify a set of application behaviors that significantly affect application performance. To accommodate these behaviors, we first extract general design principles that can apply to any application whenever possible; these principles can also be integrated into new application designs. We then apply the principles to specific applications and build prototypes to demonstrate the effectiveness of the solutions.
In the context of application acceleration, even when all the challenges belonging to the two aforementioned dimensions, advanced network protocol design and overcoming application behavior, are addressed, application performance can still be limited by the underlying network capability, particularly the physical bandwidth. In this work, we study the possibility of speeding up data delivery by eliminating the traffic redundancy present in application traffic. Specifically, we first characterize traffic redundancy along multiple dimensions using traces obtained from several real wireless network deployments. Based on the insights from this analysis, we propose Wireless Memory (WM), a two-ended AP-client solution that effectively exploits traffic redundancy in wireless and mobile environments.
Application acceleration could also be pursued along two other dimensions: network provisioning and quality of service (QoS). Network provisioning allocates network resources such as physical bandwidth or wireless spectrum, while QoS gives different priority to different applications, users, or data flows. Both dimensions have their respective limitations in the context of application acceleration.
In this work, we focus on the two dimensions of overcoming application behavior and eliminating traffic redundancy to improve application performance. The contributions of this work are as follows. First, we study the problem of application acceleration for wireless and mobile data networks and characterize the dimensions along which to address it. Second, we identify that application behaviors can significantly affect application performance, and we propose a set of design principles to deal with these behaviors; we also build prototypes to conduct systems research. Third, we consider traffic redundancy elimination and propose the wireless memory approach.
Ph.D. Committee Chair: Sivakumar, Raghupathy; Committee Member: Ammar, Mostafa; Committee Member: Fekri, Faramarz; Committee Member: Ji, Chuanyi; Committee Member: Ramachandran, Umakishor
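Protocol-independent redundancy elimination of the kind Wireless Memory exploits can be illustrated with a chunk-fingerprint cache kept on both ends of the link: chunks the receiver has already seen are replaced by short fingerprints on the wire. A toy sketch with fixed-size chunks (real systems typically use content-defined chunking; this is not WM's actual design):

```python
import hashlib

CHUNK = 64  # bytes per chunk; illustrative fixed-size chunking

def encode(data, cache):
    """Sender side: emit ('ref', fingerprint) for chunks already in the
    shared cache, ('lit', bytes) otherwise, populating the cache."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha1(chunk).digest()[:8]
        if fp in cache:
            out.append(("ref", fp))       # redundancy eliminated
        else:
            cache[fp] = chunk
            out.append(("lit", chunk))
    return out

def decode(stream, cache):
    """Receiver side: expand references from its own copy of the cache,
    learning new literal chunks as they arrive."""
    parts = []
    for kind, val in stream:
        if kind == "ref":
            parts.append(cache[val])
        else:
            cache[hashlib.sha1(val).digest()[:8]] = val
            parts.append(val)
    return b"".join(parts)
```

Any chunk repeated within or across flows then costs 8 bytes instead of 64 on the wireless hop, which is the saving a two-ended AP-client design targets.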
Survey of End-to-End Mobile Network Measurement Testbeds, Tools, and Services
Mobile (cellular) networks enable innovation, but can also stifle it and lead
to user frustration when network performance falls below expectations. As
mobile networks become the predominant method of Internet access, developer,
research, network operator, and regulatory communities have taken an increased
interest in measuring end-to-end mobile network performance to, among other
goals, minimize negative impact on application responsiveness. In this survey
we examine current approaches to end-to-end mobile network performance
measurement, diagnosis, and application prototyping. We compare available tools
and their shortcomings with respect to the needs of researchers, developers,
regulators, and the public. We intend for this survey to provide a
comprehensive view of currently active efforts and some auspicious directions
for future work in mobile network measurement and mobile application
performance evaluation.
Comment: Submitted to IEEE Communications Surveys and Tutorials. arXiv does not format the URL references correctly. For a correctly formatted version of this paper go to http://www.cs.montana.edu/mwittie/publications/Goel14Survey.pd
Doctor of Philosophy dissertation
We propose a collective approach for harnessing the idle resources (CPU, storage, and bandwidth) of nodes (e.g., home desktops) distributed across the Internet. Instead of a purely peer-to-peer (P2P) approach, we organize participating nodes to act collectively using collective managers (CMs). Participating nodes provide idle resources to CMs, which unify these resources to run meaningful distributed services for external clients. We do not assume altruistic users or employ a barter-based incentive model; instead, participating nodes provide resources to CMs for long durations and are compensated in proportion to their contribution. In this dissertation we discuss the challenges faced by collective systems, present a design that addresses these challenges, and study the effect of selfish nodes. We believe that the collective service model is a useful alternative to the dominant pure-P2P and centralized work-queue models. It provides more effective utilization of idle resources, has a more meaningful economic model, and is better suited for building legal and commercial distributed services. We demonstrate the value of our work by building two distributed services using the collective approach: a collective content distribution service and a collective data backup service.
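The dissertation does not spell out the compensation formula, but the simplest reading of "compensated in proportion to their contribution" is a pro-rata split of a service's revenue across contributed resource-hours. A hypothetical sketch (names and data layout are ours):

```python
def payouts(contributions, pool):
    """Split a reward pool among participating nodes in proportion to
    the resource-hours each contributed to the collective manager.
    `contributions` maps node id -> contributed resource-hours."""
    total = sum(contributions.values())
    if total == 0:
        return {node: 0.0 for node in contributions}
    return {node: pool * c / total for node, c in contributions.items()}
```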
Peer-to-peer collaboration in content delivery networks
A low-cost collaboration architecture for web content distribution, which aims to serve all stakeholders' interests, is presented. Peer-to-peer (P2P) collaboration among end users is suggested in order to increase download rates and reduce server traffic and resource usage. In addition, Internet Service Provider (ISP) concerns are considered through an ISP-aware connection strategy in the P2P protocol. Collaboration among the publisher's web server resources is also proposed, in order to improve the performance of the CDN architecture. All the elements of this architecture have been developed and successfully tested in 5 different scenarios within the PlanetLab large-scale overlay network testbed. Results show that download speed increases after implementing P2P collaboration in a content delivery scenario, with a strong reduction of the data transferred via HTTP servers. The ISP-aware approach reduces inter-ISP traffic while increasing download speeds. The implementation becomes fairer as content popularity grows, because end users' extreme download rates tend to approach the average.
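An ISP-aware connection strategy of this kind can be sketched as a peer-selection rule that fills the connection budget with same-ISP peers first, falling back to external peers only as needed (the function and data shapes are illustrative, not the paper's implementation):

```python
def pick_peers(candidates, my_isp, k):
    """ISP-aware selection: prefer peers inside our own ISP, keeping
    inter-ISP traffic down, then fill the remaining of the k connection
    slots with external peers.  `candidates` is a list of
    (peer_id, isp) pairs."""
    local = [p for p, isp in candidates if isp == my_isp]
    remote = [p for p, isp in candidates if isp != my_isp]
    return (local + remote)[:k]
```

Because intra-ISP paths are usually shorter and cheaper, this bias tends to raise download rates and cut inter-ISP transit at the same time, matching the reported results.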
A Survey of Green Networking Research
Reduction of unnecessary energy consumption is becoming a major concern in
wired networking, because of the potential economic benefits and the
expected environmental impact. These issues, usually referred to as "green
networking", relate to embedding energy-awareness in the design, in the devices
and in the protocols of networks. In this work, we first formulate a more
precise definition of the "green" attribute. We furthermore identify a few
paradigms that are the key enablers of energy-aware networking research. We
then overview the current state of the art and provide a taxonomy of the
relevant work, with a special focus on wired networking. At a high level, we
identify four branches of green networking research that stem from different
observations on the root causes of energy waste, namely (i) Adaptive Link Rate,
(ii) Interface proxying, (iii) Energy-aware infrastructures and (iv)
Energy-aware applications. In this work, we not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for future research.
Comment: Index Terms: Green Networking; Wired Networks; Adaptive Link Rate; Interface Proxying; Energy-aware Infrastructures; Energy-aware Applications. 18 pages, 6 figures, 2 tables.
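Of the four branches, Adaptive Link Rate is the most mechanical to illustrate: run the link at the lowest rate that still carries the offered load, since lower rates draw less power. A minimal sketch (the rate set and headroom factor are illustrative assumptions, not values from the survey):

```python
# Candidate Ethernet link rates in Mbit/s, lowest (cheapest in energy) first.
RATES = [10, 100, 1000]

def pick_rate(offered_load, headroom=0.7):
    """Adaptive Link Rate sketch: choose the lowest rate whose capacity,
    scaled by a utilization headroom factor, still carries the offered
    load (in Mbit/s); fall back to the top rate when nothing fits."""
    for rate in RATES:
        if offered_load <= rate * headroom:
            return rate
    return RATES[-1]
```

Real proposals add hysteresis so the link does not oscillate between rates when the load hovers near a threshold.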
Independent comparison of popular DPI tools for traffic classification
Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevents the comparison and reproducibility of results. This paper presents a comprehensive comparison of 6 well-known DPI tools that are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application, and web service). We carefully built a labeled dataset with more than 750K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We have released this dataset, including full packet payloads, to the research community, and we believe it could become a common benchmark for the comparison and validation of network traffic classifiers. Our results show PACE, a commercial tool, to be the most accurate solution. Surprisingly, some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.
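The headline comparison reduces to per-tool accuracy over the labeled flows. A sketch of that computation (the data layout is an assumption for illustration, not taken from the paper):

```python
def per_tool_accuracy(ground_truth, predictions):
    """Fraction of labeled flows each classifier tagged correctly.
    ground_truth: {flow_id: true_label}
    predictions:  {tool_name: {flow_id: predicted_label}}"""
    return {
        tool: sum(labels.get(flow) == truth
                  for flow, truth in ground_truth.items()) / len(ground_truth)
        for tool, labels in predictions.items()
    }
```

Per-class extensions (precision and recall per application) follow the same shape and matter more when classes are imbalanced, as they are in real traffic mixes.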