Padding Ain't Enough: Assessing the Privacy Guarantees of Encrypted DNS
DNS over TLS (DoT) and DNS over HTTPS (DoH) encrypt DNS to guard user privacy
by hiding DNS resolutions from passive adversaries. Yet, past attacks have
shown that encrypted DNS is still sensitive to traffic analysis. As a
consequence, RFC 8467 proposes to pad messages prior to encryption, which
substantially reduces the size information leaked by encrypted traffic. In this
paper, we
show that padding alone is insufficient to counter DNS traffic analysis. We
propose a novel traffic analysis method that combines size and timing
information to infer the websites a user visits purely based on encrypted and
padded DNS traces. To this end, we model DNS sequences that capture the
complexity of websites that usually trigger dozens of DNS resolutions instead
of just a single DNS transaction. A closed world evaluation based on the Alexa
top-10k websites reveals that attackers can deanonymize at least half of the
test traces in 80.2% of all websites, and even correctly label all traces for
32.0% of the websites. Our findings undermine the privacy goals of
state-of-the-art message padding strategies in DoT/DoH. We conclude by showing
that successful mitigations to such attacks have to remove the entropy of
inter-arrival timings between query responses.
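The padding the abstract refers to works by rounding each DNS message up to a fixed block size, so that most queries and responses collapse into a few indistinguishable size buckets. A minimal sketch of the size calculation, assuming the block sizes recommended by RFC 8467 (128 bytes for queries, 468 bytes for responses):

```python
def padded_len(msg_len: int, block: int) -> int:
    """Round a DNS message length up to the next multiple of `block`,
    per the RFC 8467 recommendation (128 B for queries, 468 B for
    responses). The padded length is what a passive observer sees."""
    return ((msg_len + block - 1) // block) * block

# A 33-byte query and a 97-byte query both appear as 128 bytes on the
# wire, while a 500-byte response occupies two 468-byte blocks.
print(padded_len(33, 128), padded_len(97, 128), padded_len(500, 468))
```

This is exactly why the attack above still works: padding hides individual message sizes, but the sequence lengths and inter-arrival timings of the dozens of resolutions a website triggers remain observable.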
Measuring DoH with web ads
In this paper we present a large measurement study of the performance impact of adopting HTTPS as a transport for the DNS protocol (DoH) with public resolvers, compared to the existing approach of using non-encrypted transport of DNS queries to the resolver services locally provided by ISPs. Using web ads as the means to execute our tests, we perform over 42 million measurements from more than 4 million vantage points distributed across 32 countries and served by over 2,500 ISPs. We find that, for a non-cached name, the median resolution time increased by 17 ms when using DoH with Cloudflare, 41 ms with Quad9, 68 ms with Google, and 170 ms with DNS.SB, compared to using Do53 with the local resolver. We find similar increases even when caching is used. The results presented in the paper contribute to the ongoing discussion of the tradeoffs involved in the combined adoption of public resolvers and DoH.
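The DoH requests such measurements time are ordinary HTTPS requests carrying a DNS message. A hedged sketch of how an RFC 8484 GET request is formed, with the resolver URL and hostname purely illustrative:

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format.
    ID is 0, as recommended for cache-friendly DoH GET requests."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # RD=1, 1 question
    qname = b"".join(bytes([len(l)]) + l.encode("ascii")
                     for l in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, name: str) -> str:
    """DoH GET URL per RFC 8484: base64url-encoded query, padding stripped.
    The `resolver` endpoint here is an illustrative example."""
    q = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=")
    return f"{resolver}?dns={q.decode('ascii')}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
```

Timing such a request end to end, and subtracting a baseline Do53 lookup against the local resolver, yields the kind of per-protocol resolution-time deltas the study reports.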
Measuring QoE Impact of DoE-based Filtering
In this paper, we analyse the impact of DNS-based filtering on Quality of Experience (QoE) using three standard DNS protocols: regular DNS (Do53), DNS over HTTPS (DoH), and DNS over TLS (DoT). We conduct measurements against four open public DNS service providers (Cloudflare, CleanBrowsing, Adguard, and Quad9) under three network conditions: a wired campus network, Eduroam, and 4G. We aim to establish whether the filters from the same provider show statistically significant differences. This information could be used by Internet users and Internet Service Providers to make sound decisions when choosing DNS privacy services. The results show significant differences in DNS response time and page load time between non-filtered and filtered DNS recursive resolvers from Cloudflare, Adguard, and Quad9. We do not observe significant differences in page load times when CleanBrowsing resolvers are used, despite observing significant differences in DNS response times. The results further show that some filters can provide better QoE than their non-filtered counterparts.
Analytics over Encrypted Traffic and Defenses
Encrypted traffic flows have been known to leak information about their underlying content through statistical properties such as packet lengths and timing. While traffic fingerprinting attacks exploit such information leaks and threaten user privacy by disclosing website visits, videos streamed, and user activity on messaging platforms, they can also be helpful in network management and intelligence services.
Most recent and best-performing such attacks are based on deep learning models. In this thesis, we identify multiple limitations in the currently available attacks and defenses against them. First, these deep learning models do not provide any insights into their decision-making process. Second, most attacks that have achieved very high accuracies are still limited by unrealistic assumptions that affect their practicality. For example, most attacks assume a closed world setting and focus on traffic classification after event completion. Finally, current state-of-the-art defenses still incur high overheads to provide reasonable privacy, which limits their applicability in real-world applications.
In order to address these limitations, we first propose an inline traffic fingerprinting attack based on variable-length sequence modeling to facilitate real-time analytics. Next, we attempt to understand the inner workings of deep learning-based attacks, with the dual goals of further improving attacks and designing efficient defenses against them. Then, based on the observations from this analysis, we propose two novel defenses against traffic fingerprinting attacks that provide privacy under more realistic constraints and at lower bandwidth overheads. Finally, we propose a robust framework for open set classification of network traffic, with the added advantage of being more suitable for deployment in resource-constrained in-network devices.
Attacking DoH and ECH: Does Server Name Encryption Protect Users’ Privacy?
Privacy on the Internet has become a priority, and several efforts have been devoted to limiting the leakage of personal information. Domain names, both in the TLS Client Hello and in DNS traffic, are among the last pieces of information still visible to an observer in the network. The Encrypted Client Hello (ECH) extension for TLS and the DNS over HTTPS and DNS over QUIC protocols aim to further increase network confidentiality by encrypting the domain names of the visited servers. In this article, we check whether an attacker able to passively observe the traffic of users could still recover the domain names of the websites they visit even if those names are encrypted. By relying on large-scale network traces, we show that simplistic features and off-the-shelf machine learning models are sufficient to achieve surprisingly high precision and recall when recovering encrypted domain names. We consider three attack scenarios: recovering the per-flow name, rebuilding the set of websites visited by a user, and checking which users visit a given target website. We next evaluate the efficacy of padding-based mitigation, finding that all three attacks remain effective despite the resources wasted on padding. We conclude that current proposals for domain encryption may produce a false sense of privacy, and more robust techniques should be envisioned to offer protection to end users.
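The "simplistic features and off-the-shelf models" claim can be illustrated with a toy nearest-neighbor classifier over per-flow traffic features. Everything here is invented for illustration: the feature triple (bytes up, bytes down, packet count), the training flows, and the domain labels are hypothetical, not the paper's dataset or model.

```python
import math

def nearest_label(train, query):
    """1-nearest-neighbor on flow features: return the label of the
    training flow closest to `query` by Euclidean distance.
    `train` is a list of (feature_tuple, label) pairs."""
    return min(train, key=lambda t: math.dist(t[0], query))[1]

# Hypothetical per-flow features: (bytes up, bytes down, packets).
train = [
    ((1200, 45000, 60), "news-site.example"),
    ((800, 9000, 15), "blog.example"),
    ((3000, 120000, 140), "video.example"),
]

# A new encrypted flow is matched to its nearest profile, recovering
# the likely domain despite name encryption.
print(nearest_label(train, (850, 9500, 16)))
```

Even a classifier this crude separates sites whose flows differ in volume, which is why the paper finds that encrypting the name alone, without reshaping the traffic, leaves users identifiable.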
Measuring DNS over TCP in the Era of Increasing DNS Response Sizes: A View from the Edge
The Domain Name System (DNS) is one of the most crucial parts of the
Internet. Although the original standard defined the usage of DNS over UDP
(DoUDP) as well as DNS over TCP (DoTCP), UDP has become the predominant
protocol used in the DNS. With the introduction of new Resource Records (RRs),
the sizes of DNS responses have increased considerably. Since this can lead to
truncation or IP fragmentation, the fallback to DoTCP as required by the
standard ensures successful DNS responses by overcoming the size limitations of
DoUDP. However, the effects of the usage of DoTCP by stub resolvers are not
extensively studied to this date. We close this gap by presenting a view at
DoTCP from the Edge, issuing 12.1M DNS requests from 2,500 probes toward Public
as well as Probe DNS recursive resolvers. In our measurement study, we observe
that DoTCP is generally slower than DoUDP, where the relative increase in
Response Time is less than 37% for most resolvers. While optimizations to DoTCP
can be leveraged to further reduce the response times, we show that support on
Public resolvers is still missing, hence leaving room for optimizations in the
future. Moreover, we also find that Public resolvers generally have comparable
reliability for DoTCP and DoUDP. However, Probe resolvers show a significantly
different behavior: DoTCP queries targeting Probe resolvers fail in 3 out of 4
cases, and, therefore, do not comply with the standard. This problem will only
aggravate in the future: As DNS response sizes will continue to grow, the need
for DoTCP will solidify.Comment: Published in ACM SIGCOMM Computer Communication Review Volume 52
Issue 2, April 202
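The fallback the standard requires is signalled by the TC (truncation) bit in the DNS response header: a truncated UDP response tells the stub resolver to retry the query over TCP. A minimal sketch of the check, operating on raw response bytes:

```python
def is_truncated(response: bytes) -> bool:
    """Return True if the TC (truncated) bit is set in a DNS response
    header (RFC 1035), signalling that the client should retry the
    query over TCP (RFC 7766). TC is bit 1 of the third header byte."""
    if len(response) < 12:
        raise ValueError("shorter than a DNS header")
    return bool(response[2] & 0x02)

# Flags 0x8600: QR=1 (response), AA=1, TC=1 -> the stub must fall
# back to DoTCP to obtain the full answer.
hdr = b"\x00\x00\x86\x00" + b"\x00" * 8
print(is_truncated(hdr))
```

A probe resolver that drops the subsequent TCP connection, as 3 out of 4 do in the study, leaves the client with only the truncated answer, which is what makes such behavior non-compliant.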
How India Censors the Web
One of the primary ways in which India engages in online censorship is by
ordering Internet Service Providers (ISPs) operating in its jurisdiction to
block access to certain websites for its users. This paper reports the
different techniques Indian ISPs are using to censor websites, and investigates
whether website blocklists are consistent across ISPs. We propose a suite of
tests that prove more robust than previous work in detecting DNS and HTTP based
censorship. Our tests also discern the use of SNI inspection for blocking
websites, a technique previously undocumented in the Indian context. Using
information from court orders, user reports, and public and leaked government
orders, we compile the largest known list of potentially blocked websites in
India. We pass this list to our tests and run them from connections of six
different ISPs, which together serve more than 98% of Internet users in India.
Our findings not only confirm that ISPs are using different techniques to block
websites, but also demonstrate that different ISPs are not blocking the same
websites.
PowerQoPE: A Personal Quality of Internet Protection and Experience Configurator
Security configuration remains obscure for many Internet users, especially those with limited computing skills. This obscurity exposes such users to various Internet attacks.
Recently, there has been an increase in cyberattacks targeting individuals due to the remote work imposed by the COVID-19 pandemic. These attacks have exposed the inefficiencies of non-human-centric implementations of Internet security mechanisms and protocols. Security research usually positions users as the weakest link in the security ecosystem, leading system and protocol developers to exclude users from the development process. This stereotypical approach has negatively affected users' security uptake. Most security systems are not comprehensible to an average user, which degrades performance and Quality of Experience and causes users to shun security mechanisms altogether. Building on human-centric cybersecurity research, we present a tool that aids in configuring Internet Quality of Protection and Experience (referred to as PowerQoPE in this paper). We describe its architecture and design methodology and finally present evaluation results. Preliminary evaluation results show that user-centric and data-driven approaches in the design of Internet security systems improve users' Quality of Experience. The controlled experiment results show that users are far from clueless: they know what they want, and, given security configuration platforms with proper framing of components and information, they can make sound security decisions.