Understanding the Role of Registrars in DNSSEC Deployment
The Domain Name System (DNS) provides a scalable, flexible name resolution service. Unfortunately, its unauthenticated architecture has become the basis for many security attacks. To address this, DNS Security Extensions (DNSSEC) were introduced in 1997. DNSSEC’s deployment requires support from the top-level domain (TLD) registries and registrars, as well as participation by the organization that serves as the DNS operator. However, DNSSEC has seen poor deployment thus far: despite being proposed nearly two decades ago, only 1% of .com, .net, and .org domains are properly signed. In this paper, we investigate the underlying reasons why DNSSEC adoption has been remarkably slow. We focus on registrars, as most TLD registries already support DNSSEC and registrars often serve as DNS operators for their customers. Our study uses large-scale, longitudinal DNS measurements to study DNSSEC adoption, coupled with experiences collected by trying to deploy DNSSEC on domains we purchased from leading domain name registrars and resellers. Overall, we find that a select few registrars are responsible for the (small) DNSSEC deployment today, and that many leading registrars do not support DNSSEC at all, or require customers to take cumbersome steps to deploy it. Further frustrating deployment, many of the mechanisms for conveying DNSSEC information to registrars are error-prone or present security vulnerabilities. Finally, we find that using DNSSEC with third-party DNS operators such as Cloudflare requires the domain owner to take a number of steps that 40% of domain owners do not complete. Having identified several operational challenges for full DNSSEC deployment, we make recommendations to improve adoption.
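Whether a domain counts as "properly signed" in the sense above comes down to three pieces being in place: DNSKEY and RRSIG records in the domain's own zone, plus a DS record at the parent (the registry) linking it into the chain of trust. A minimal classification sketch, assuming boolean inputs derived from DNS measurements (this is illustrative, not the paper's measurement code):

```python
def classify_dnssec(has_ds, has_dnskey, has_rrsig):
    """Classify a domain's DNSSEC state from the presence of its records.

    has_ds:     DS record published at the parent (e.g. the TLD registry)
    has_dnskey: DNSKEY record in the domain's own zone
    has_rrsig:  RRSIG signatures over the zone's records
    """
    if has_ds and has_dnskey and has_rrsig:
        return "signed"    # full chain of trust, validatable by resolvers
    if has_dnskey or has_rrsig:
        return "island"    # zone is signed, but no DS at the parent links it in
    return "unsigned"
```

The "island" case is the one that registrar support most directly affects: a DNS operator can sign the zone on its own, but only the registrar can convey the DS record to the registry.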
Under and over the surface: a comparison of the use of leaked account credentials in the Dark and Surface Web
The world has seen a dramatic increase in cybercrime, both in the Surface Web, the portion of content on the World Wide Web that may be indexed by popular search engines, and lately in the Dark Web, a portion that is not indexed by conventional search engines and is accessed through network overlays such as the Tor network. For instance, theft of online service credentials is an emerging problem, especially in the Dark Web, where the average price for someone's online identity is £820. Previous research studied the modus operandi of criminals that obtain stolen account credentials through Surface Web outlets. As part of an effort to understand how the same crime unfolds in the Surface Web and the Dark Web, this study seeks to compare the modus operandi of criminals acting on both by leaking Gmail honey accounts in Dark Web outlets. The results are compared to a previous similar experiment performed in the Surface Web. Simulating the operating activity of criminals, we posted 100 Gmail account credentials on hidden services on the Dark Web and monitored the activity that they attracted using a honeypot infrastructure. More specifically, we analysed the data generated by the two experiments to find differences in the activity observed, with the aim of understanding how leaked credentials are used in both Web environments. We observed that different types of malicious activity happen on honey accounts depending on the Web environment they are released in. Our results can provide the research community with insights into how stolen accounts are being manipulated in the wild in different Web environments.
Climbing China's Great Firewall
Many countries censor the internet within their borders, and citizens frequently wish to evade that censorship. Many methods have been developed to achieve this, and governments and citizens are in a constant arms race, each side developing opposing technologies. China has the largest population on the planet, and the Chinese government attempts to censor the internet. This paper investigates three methods of navigating around state censorship: Cachebrowser, INTANG, and Tor. Cachebrowser and INTANG were developed specifically to circumvent state censorship, while Tor was originally developed for anonymous browsing. This paper analyzes their effectiveness and viability for avoiding censorship.
Methods for revealing and reshaping the African Internet Ecosystem as a case study for developing regions: from isolated networks to a connected continent
International Mention in the doctoral degree.
While connecting end-users worldwide, the Internet increasingly promotes local development
by making challenges much simpler to overcome, regardless of the field in which it is
used: governance, economy, education, health, etc. However, African Network Information Centre
(AfriNIC), the Regional Internet Registry (RIR) of Africa, is characterized by the lowest Internet
penetration: 28.6% as of March 2017 compared to an average of 49.7% worldwide according
to the International Telecommunication Union (ITU) estimates [139]. Moreover, end-users experience
a poor Quality of Service (QoS) provided at high costs. It is thus of interest to enlarge the
Internet footprint in such under-connected regions and determine where the situation can be improved.
Along these lines, this doctoral thesis thoroughly inspects, using both active and passive
data analysis, the critical aspects of the African Internet ecosystem and outlines the milestones of
a methodology that could be adopted for achieving similar purposes in other developing regions.
The thesis first presents our efforts to help build measurement infrastructures for alleviating
the shortage of a diversified range of Vantage Points (VPs) in the region, as we cannot improve
what we cannot measure. It then unveils our timely and longitudinal inspection of
African interdomain routing using the enhanced RIPE Atlas measurement infrastructure, filling
a gap in knowledge of both the IPv4 and IPv6 topologies interconnecting local Internet Service
Providers (ISPs). It notably proposes reproducible data analysis techniques suitable for the treatment
of any set of similar measurements to infer the behavior of ISPs in the region. The results
show a large variety of transit habits, which depend on socio-economic factors such as the language,
the currency area, or the geographic location of the country in which the ISP operates.
They indicate the prevailing dominance of ISPs based outside Africa for the provision of intracontinental
paths, but also shed light on the efforts of stakeholders for traffic localization.
Next, the thesis investigates the causes and impacts of congestion in the African IXP substrate,
as the prevalence of this endemic phenomenon in local Internet markets may hinder their
growth. Towards this end, Ark monitors were deployed at six strategically selected local Internet
eXchange Points (IXPs) and used for collecting Time-Sequence Latency Probes (TSLP) measurements
during a whole year. The analysis of these datasets reveals no evidence of widespread
congestion: only 2.2% of the monitored links showed noticeable indications of congestion,
a finding that argues in favor of peering. The causes of these events were identified through IXP operator interviews,
showing how essential collaboration with stakeholders is to understanding the causes of performance degradations.
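TSLP-style inference looks for recurring elevation of the minimum RTT across a link, since a rising RTT floor during peak hours is a classic signature of a filling buffer. A toy heuristic over per-hour RTT samples (the threshold and input shape are illustrative assumptions, not the study's actual parameters):

```python
def congestion_evident(rtt_by_hour, threshold_ms=5.0):
    """Flag a link as congested if its per-hour minimum RTT rises above
    the daily baseline by more than threshold_ms.

    rtt_by_hour: dict mapping hour-of-day (0-23) to a list of RTT samples (ms).
    """
    # Minimum per hour filters out queueing noise within each hour.
    mins = {h: min(samples) for h, samples in rtt_by_hour.items() if samples}
    baseline = min(mins.values())  # best-case, presumably uncongested, RTT
    return max(mins.values()) - baseline >= threshold_ms
```

A link whose RTT floor is flat around the clock passes the check; one whose floor climbs during evening peak hours is flagged, which is the kind of diurnal pattern the interviews would then try to explain.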
As part of the Internet Society (ISOC) strategy to allow the Internet community to profile
the IXPs of a particular region and monitor their evolution, a route-collector data analyzer was
then developed and afterward, it was deployed and tested in AfriNIC. This open source web
platform titled the “African” Route-collectors Data Analyzer (ARDA) provides metrics, which
picture in real-time the status of interconnection at different levels, using public routing information
available at local route-collectors with a peering viewpoint of the Internet. The results
highlight that a small proportion of Autonomous System Numbers (ASNs) assigned by AfriNIC
(17%) are peering in the region, a fraction that remained static from April to September 2017
despite the significant growth of IXPs in some countries. They show how ARDA can help detect
the impact of a policy on the IXP substrate and help ISPs worldwide identify new interconnection
opportunities in Africa, the targeted region.
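The headline ARDA figure can be sketched as the share of registry-assigned ASNs that show up in any AS path observed at the local route-collectors (function name and inputs are illustrative, not ARDA's actual code):

```python
def peering_fraction(assigned_asns, as_paths):
    """Fraction of ASNs assigned by an RIR that appear in at least one
    AS path collected at local route-collectors.

    assigned_asns: iterable of ASNs allocated by the registry (e.g. AfriNIC)
    as_paths:      iterable of AS paths, each a sequence of ASNs
    """
    assigned = set(assigned_asns)
    seen = {asn for path in as_paths for asn in path}  # every ASN observed
    return len(seen & assigned) / len(assigned)
```

Tracking this ratio over time, as ARDA does in real time, is what makes a static 17% visible despite IXP growth in individual countries.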
Since broadening the underlying network is not useful without appropriately provisioned services
to exploit it, the thesis then delves into the availability and utilization of the web infrastructure
serving the continent. Towards this end, a comprehensive measurement methodology
is applied to collect data from various sources. A focus on Google reveals that its content infrastructure
in Africa is, indeed, expanding; nevertheless, much of its web content is still served
from the United States (US) and Europe, even though Google is the most popular content source in many
African countries. Further, the same analysis is repeated across top global and regional websites,
showing that even top African websites prefer to host their content abroad. Following that, the
primary bottlenecks faced by Content Providers (CPs) in the region such as the lack of peering
between the networks hosting our probes and poorly configured DNS resolvers are explored to
outline proposals for further ISP and CP deployments.
Considering the above, an option to enrich connectivity and incentivize CPs to establish a
presence in the region is to interconnect ISPs present at isolated IXPs by creating a distributed
IXP layout spanning the continent. In this respect, the thesis finally provides a four-step interconnection
scheme, which parameterizes socio-economic, geographical, and political factors using
public datasets. It demonstrates that this constrained solution doubles the percentage of continental
intra-African paths, reduces their length, and drastically decreases the median of their Round
Trip Times (RTTs) as well as RTTs to ASes hosting the top 10 global and top 10 regional Alexa
websites. We hope that quantitatively demonstrating the benefits of this framework will incentivize
ISPs to intensify peering and CPs to increase their presence, for enabling fast, affordable,
and available access at the Internet frontier.
Official Doctoral Programme in Telematics Engineering (Programa Oficial de Doctorado en Ingeniería Telemática). Committee: President, David Fernández Cambronero; Secretary, Alberto García Martínez; Member, Cristel Pelsse
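The claimed path-length reduction from interconnecting isolated IXPs can be illustrated on a toy AS-level graph: two African ASes that today reach each other only through a transit provider abroad get a shorter path once a direct inter-IXP edge exists. A minimal BFS hop-count sketch (the graph and AS names are invented for illustration):

```python
from collections import deque

def hops(graph, src, dst):
    """Shortest AS-hop count between src and dst by BFS; None if unreachable.

    graph: dict mapping each AS to a list of neighboring ASes.
    """
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None
```

Before adding the edge, AS1 reaches AS2 in two hops via the foreign transit AS; after adding it, one hop, which is the kind of shortening that also pulls down the median RTT.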
Deep Dive into NTP Pool's Popularity and Mapping
Time synchronization is of paramount importance on the Internet, with the Network Time Protocol (NTP) serving as the primary synchronization protocol. The NTP Pool, a volunteer-driven initiative launched two decades ago, facilitates connections between clients and NTP servers. Our analysis of root DNS queries reveals that the NTP Pool has consistently been the most popular time service. We further investigate the DNS component (GeoDNS) of the NTP Pool, which is responsible for mapping clients to servers. Our findings indicate that the current algorithm is heavily skewed, leading to the emergence of time monopolies for entire countries. For instance, clients in the US are served by 551 NTP servers, while clients in Cameroon and Nigeria are served by only one and two servers, respectively, out of the 4k+ servers available in the NTP Pool. We examine the underlying assumption behind GeoDNS for these mappings and discover that time servers located far away can still provide accurate clock time information to clients. We have shared our findings with the NTP Pool operators, who acknowledge them and plan to revise their algorithm to enhance security.
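The observation that distant servers can still deliver accurate time follows from NTP's on-wire arithmetic: the round-trip delay grows with distance, but symmetric path delay cancels out of the clock-offset estimate. The standard computation over NTP's four timestamps (per RFC 5905):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay from NTP's four timestamps.

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time (all in seconds).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # symmetric path delay cancels here
    delay = (t4 - t1) - (t3 - t2)           # total RTT minus server processing
    return offset, delay
```

A far-away server inflates `delay`, but as long as the forward and return paths are roughly symmetric, `offset` stays accurate, which is why geographic proximity matters less than GeoDNS assumes.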