No Place to Hide that Bytes won't Reveal: Sniffing Location-Based Encrypted Traffic to Track a User's Position
News reports of the last few years have indicated that several intelligence agencies are able to monitor large networks or entire portions of the Internet backbone. Such a powerful adversary has only recently been considered by the academic literature. In this paper, we propose a new adversary model for Location-Based Services (LBSs). The model takes into account an unauthorized third party, different from the LBS provider itself, that wants to infer the location and monitor the movements of an LBS user. We show that such an adversary can extrapolate the position of a target user just by analyzing the size and the timing of the encrypted traffic exchanged between that user and the LBS provider. We performed a thorough analysis of a widely deployed location-based app that comes pre-installed on many Android devices: Google Now. The results are encouraging and highlight the importance of devising more effective countermeasures against powerful adversaries to preserve the privacy of LBS users.
Comment: 14 pages, 9th International Conference on Network and System Security (NSS 2015).
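To illustrate the kind of side channel the paper exploits, here is a minimal sketch of a classifier that maps the sizes and timing of encrypted packets to a coarse location label. The feature layout, the synthetic training data, and the k-NN model are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of a size-and-timing side-channel classifier for encrypted LBS
# traffic. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def flow_features(sizes, times, n=8):
    """Fixed-length vector: first n packet sizes plus inter-arrival gaps."""
    v_sizes = np.zeros(n)
    v_gaps = np.zeros(n)
    v_sizes[:min(len(sizes), n)] = sizes[:n]
    gaps = np.diff(times)[:n]
    v_gaps[:len(gaps)] = gaps
    return np.concatenate([v_sizes, v_gaps])

rng = np.random.default_rng(0)
X, y = [], []
for loc_id, loc in enumerate(["cell_A", "cell_B", "cell_C"]):
    for _ in range(30):
        # Each location yields a characteristic response-size pattern.
        sizes = 400 + 150 * loc_id + rng.integers(-20, 20, size=8)
        times = np.cumsum(rng.exponential(0.05, size=9))
        X.append(flow_features(sizes, times))
        y.append(loc)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# The eavesdropper sees only ciphertext sizes and timing of the target flow.
sniffed = flow_features(550 + rng.integers(-20, 20, size=8),
                        np.cumsum(rng.exponential(0.05, size=9)))
print(clf.predict([sniffed])[0])  # -> likely "cell_B"
```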
When satellite is all you have: watching the internet from 550 ms
Satellite communication (SatCom) offers Internet connectivity where traditional infrastructures are too expensive to deploy. When using satellites in geostationary orbit, the distance from Earth forces a round-trip time higher than 550 ms. Coupled with the limited and shared capacity of the physical link, this poses a challenge to the Internet access quality we are used to. In this paper, we present the first passive characterization of the traffic carried by an operational SatCom network. With this unique vantage point, we observe the performance of SatCom technology, as well as the usage habits of subscribers in different countries in Europe and Africa. We highlight the implications of this technology for Internet usage and functioning, and we pinpoint technical challenges due to CDN and DNS resolution issues, while discussing possible optimizations that the ISP could implement to improve the service offered to SatCom subscribers.
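The 550 ms floor follows directly from orbital geometry: both the request and the response must travel up to the satellite and back down, four traversals of roughly 36,000 km each. A back-of-the-envelope check, in Python for illustration:

```python
# Back-of-the-envelope check of the geostationary latency floor.
C = 299_792          # speed of light, km/s
GEO_ALTITUDE = 35_786  # km above the equator

# A request crosses the up/down link twice (user -> satellite -> gateway),
# and so does the response: four traversals in total.
propagation_rtt = 4 * GEO_ALTITUDE / C * 1000  # ms
print(f"{propagation_rtt:.0f} ms")  # ~477 ms; longer slant paths for users
                                    # off the sub-satellite point, queuing,
                                    # and processing push the RTT past 550 ms
```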
Recent Trends on Privacy-Preserving Technologies under Standardization at the IETF
End-users are concerned about protecting the privacy of the sensitive personal data generated while they work on information systems. This extends both to the data they actively provide, including personal identification in exchange for products and services, and to related metadata, such as unnecessary access to their location. This is where certain privacy-preserving technologies come into play, and the Internet Engineering Task Force (IETF) plays a major role in incorporating such technologies at the fundamental level. This paper therefore offers an overview of the privacy-preserving mechanisms for layer 3 (i.e., IP) and above that are currently under standardization at the IETF. These include encrypted DNS at layer 5, classified as DNS-over-TLS (DoT), DNS-over-HTTPS (DoH), and DNS-over-QUIC (DoQ), where underlying technologies like QUIC belong to layer 4. Following that, we discuss the Privacy Pass protocol and its application in generating Private Access Tokens, as well as Passkeys as a replacement for passwords in authentication at the application layer (i.e., on end-user devices). Lastly, to protect user privacy at the IP level, Private Relays and MASQUE are discussed. The aim is to make designers, implementers, and users of the Internet aware of privacy-related design choices.
Comment: 9 pages, 5 figures, 1 table.
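To make the encrypted-DNS discussion concrete, here is a minimal DoH lookup against Cloudflare's public resolver using its JSON interface. RFC 8484 itself specifies a binary wire format; the JSON form shown here is a convenience offered by several public resolvers. Either way, an on-path observer sees only a TLS connection to the resolver rather than plaintext DNS queries.

```python
# Minimal DNS-over-HTTPS (DoH) lookup via Cloudflare's JSON interface.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```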
Characterization of ISP Traffic: Trends, User Habits, and Access Technology Impact
In recent years, the research community has increased its focus on network monitoring, which is seen as a key tool to understand the Internet and Internet users. Several studies have presented a deep characterization of a particular application or a particular network, considering the point of view of either the ISP or the Internet user. In this paper, we take a different perspective. We focus on three European countries where we have been collecting traffic for more than a year and a half through 5 vantage points with different access technologies. This humongous amount of information allows us not only to provide precise, multiple, and quantitative measurements of "what users do with the Internet" in each country, but also to identify common and uncommon patterns and habits across countries. Considering different time scales, we start by presenting the trend in application popularity; we then focus on a one-month-long period, and further drill down into a typical daily characterization of user activity. Results depict an evolving scenario driven by the consolidation of new services such as video streaming and file hosting and by the adoption of new P2P technologies. Despite the heterogeneity of the users, some common tendencies emerge that can be leveraged by ISPs to improve their service.
Five Years at the Edge: Watching Internet from the ISP Network
The Internet and the way people use it are constantly changing. Knowing the traffic is crucial for operating the network, understanding users' needs, and ultimately improving applications. Here, we provide an in-depth longitudinal view of Internet traffic over the last 5 years (from 2013 to 2017). We take the point of view of a nationwide ISP and analyze rich flow-level measurements to pinpoint and quantify trends. We evaluate the provider's costs in terms of traffic consumed by users and services. We show that an ordinary broadband subscriber nowadays downloads more than twice as much as they did 5 years ago. Bandwidth-hungry video services drive this change, while social messaging applications boom (and vanish) at an incredible pace. We study how protocols and service infrastructures evolve over time, highlighting unpredictable events that may hamper traffic management policies. In the rush to bring servers closer and closer to users, we witness the birth of the sub-millisecond Internet, with caches located directly at the ISP edge. The picture we take shows a lively Internet that always evolves and suddenly changes.
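A sketch of the sort of longitudinal aggregation behind a finding like "subscribers download twice as much as five years ago". The flow-log schema (timestamp, subscriber_id, bytes_down) is a hypothetical stand-in for the ISP's flow-level measurements.

```python
# Per-subscriber daily download volume, tracked year over year.
# The input file and its columns are hypothetical illustrations.
import pandas as pd

flows = pd.read_csv("flow_records.csv", parse_dates=["timestamp"])

# Total bytes downloaded per subscriber per day.
daily = (flows
         .assign(day=flows["timestamp"].dt.date)
         .groupby(["day", "subscriber_id"])["bytes_down"].sum()
         .reset_index())

# Mean daily volume per subscriber, aggregated by year.
per_sub = daily.groupby(pd.to_datetime(daily["day"]).dt.year)["bytes_down"].mean()
growth = per_sub.iloc[-1] / per_sub.iloc[0]
print(per_sub)
print(f"growth over the period: {growth:.1f}x")
```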
Reducing Third Parties in the Network through Client-Side Intelligence
The end-to-end argument describes the communication between a client and server using functionality that is located at the end points of a distributed system. From a security and privacy perspective, clients only need to trust the server they are trying to reach instead of intermediate system nodes and other third-party entities. Clients accessing the Internet today, and more specifically the World Wide Web, have to interact with a plethora of network entities for name resolution, traffic routing, and content delivery. While individual communications with those entities may sometimes be end-to-end, from the user's perspective they are intermediaries the user has to trust in order to access the website behind a domain name. This complex interaction lacks transparency and control and expands the attack surface beyond the server clients are trying to reach directly. In this dissertation, we develop a set of novel design principles and architectures to reduce the number of third-party services and networks a client's traffic is exposed to when browsing the web. Our proposals bring additional intelligence to the client and can be adopted without changes to the third parties.
Websites can include content, such as images and iframes, located on third-party servers. Browsers loading an HTML page will contact these additional servers to satisfy external content dependencies. Such interaction has privacy implications because it includes context related to the user's browsing history. For example, the widespread adoption of "social plugins" enables the respective social networking services to track a growing part of their members' online activity. These plugins are commonly implemented as HTML iframes originating from the domain of the respective social network. They are embedded in sites users might visit, for instance to read the news or to shop. Facebook's Like button is an example of a social plugin. While one could prevent the browser from connecting to third-party servers, doing so would break existing functionality and would thus be unlikely to see wide adoption. We propose a novel design for privacy-preserving social plugins that decouples the retrieval of user-specific content from the loading of third-party content. Our approach can be adopted by web browsers without the need for server-side changes. Our design has the benefit of avoiding the transmission of user-identifying information to the third-party server while preserving the original functionality of the plugins.
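A minimal sketch of the decoupling principle described above: public plugin data is fetched without any identifying state, while user-specific state never leaves the device. The endpoint, response format, and local store below are hypothetical stand-ins, not the dissertation's exact design.

```python
# Decoupled social plugin: identity-free public fetch + local private state.
LOCAL_STATE = {"liked": {"https://news.example/story"}}  # never leaves device

def fetch_public_count(page_url):
    # Stand-in for an anonymous, cookie-less HTTPS request to the social
    # network's public plugin endpoint (hypothetical; stubbed out here).
    return {"likes": 1234}

def render_like_button(page_url):
    public = fetch_public_count(page_url)        # identity-free fetch
    liked = page_url in LOCAL_STATE["liked"]     # private, local-only lookup
    label = "Liked" if liked else "Like"
    return f"[{label}] {public['likes']} likes"

print(render_like_button("https://news.example/story"))
```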
In addition, we propose an architecture which reduces the networks involved when routing traffic to a website. Users then have to trust fewer organizations with their traffic. Such trust is necessary today because, for example, we observe that only 30% of popular web servers offer HTTPS. At the same time, there is evidence that network adversaries carry out active and passive attacks against users. We argue that if end-to-end security with a server is not available, the next best thing is a secure link to a network that is close to the server and will act as a gateway. Our approach identifies network vantage points in the cloud, enables a client to establish secure tunnels to them, and intelligently routes traffic based on its destination. The proliferation of infrastructure-as-a-service platforms makes it practical for users to benefit from the cloud. We determine that our architecture is practical because our proposed use of the cloud aligns with existing ways end-user devices leverage it today. Users control both endpoints of the tunnel and do not depend on the cooperation of individual websites. We are thus able to eliminate third-party networks for 20% of popular web servers, reduce network paths to 1 hop for an additional 20%, and shorten the rest.
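A small sketch of the gateway-selection step: each destination is routed through the cloud vantage point closest to it, so the unprotected tail of the path is as short as possible. The vantage points and RTT figures are hypothetical placeholders for live measurements.

```python
# Pick the cloud vantage point nearest the destination network.
# In a deployment, RTTs come from active probes, not a static table.
RTT_TO_DEST = {
    ("vp-us-east", "203.0.113.0/24"): 4,    # ms, hypothetical measurements
    ("vp-eu-west", "203.0.113.0/24"): 88,
    ("vp-ap-south", "203.0.113.0/24"): 160,
}

def pick_gateway(dest_prefix):
    candidates = [(rtt, vp) for (vp, prefix), rtt in RTT_TO_DEST.items()
                  if prefix == dest_prefix]
    rtt, vp = min(candidates)  # minimize the untunneled tail of the path
    return vp, rtt

vp, rtt = pick_gateway("203.0.113.0/24")
print(f"tunnel via {vp} ({rtt} ms from destination)")
# Traffic then flows through an encrypted tunnel (e.g. WireGuard or SSH)
# from the client to the chosen vantage point, which acts as the gateway.
```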
We hypothesize that user privacy on the web can be improved in terms of transparency and control by reducing the systems and services that are indirectly and automatically involved. We also hypothesize that such reduction can be achieved unilaterally through client-side initiatives and without affecting the operation of individual websites.