
    Don't Forget to Lock the Front Door! Inferring the Deployment of Source Address Validation of Inbound Traffic

    This paper concerns the absence of ingress filtering at the network edge, one of the main causes of serious network security issues. Many network operators do not deploy the best current practice, Source Address Validation (SAV), which aims at mitigating these issues. We perform the first Internet-wide active measurement study to enumerate networks that do not filter incoming packets by their source address. The measurement method consists of identifying closed and open DNS resolvers that handle requests coming from outside the network with a source address from the range assigned inside the network under test. The proposed method provides the most complete picture of the inbound SAV deployment state at network providers. We reveal that 32 673 Autonomous Systems (ASes) and 197 641 Border Gateway Protocol (BGP) prefixes are vulnerable to spoofing of inbound traffic. Finally, using data from the Spoofer project and performing an open resolver scan, we compare the filtering policies in both directions.
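    To make the measurement idea concrete, here is a minimal sketch (not the authors' tooling) of how such a spoofed probe could be crafted with Scapy: a DNS query whose source address is taken from the tested prefix is sent to a resolver inside that network, and acceptance is inferred later, e.g., at an authoritative name server the tester controls. The addresses, port, and domain names below are illustrative placeholders.

        # Hedged sketch, assuming the tester controls probe.example.net and can
        # observe queries arriving at its authoritative name server.
        from scapy.all import IP, UDP, DNS, DNSQR, send  # requires root privileges

        def send_spoofed_probe(spoofed_src, resolver_ip, probe_domain):
            """Send one DNS A query whose source IP lies inside the tested prefix."""
            pkt = (IP(src=spoofed_src, dst=resolver_ip)
                   / UDP(sport=53053, dport=53)
                   / DNS(rd=1, qd=DNSQR(qname=probe_domain, qtype="A")))
            send(pkt, verbose=False)

        # Example: probe a resolver at 192.0.2.53 while pretending to be 192.0.2.10,
        # encoding the target in a unique subdomain so a later hit can be attributed.
        send_spoofed_probe("192.0.2.10", "192.0.2.53", "192-0-2-53.probe.example.net")

    If the uniquely labelled query later shows up at the controlled name server, the packet crossed the network edge despite carrying an internal source address, i.e., inbound SAV is not enforced for that prefix.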

    Detection of Sparse Anomalies in High-Dimensional Network Telescope Signals

    Network operators and system administrators are increasingly overwhelmed with incessant cyber-security threats ranging from malicious network reconnaissance to attacks such as distributed denial of service and data breaches. A large number of these attacks could be prevented if network operators were better equipped with threat intelligence that would allow them to block or throttle nefarious scanning activities. Network telescopes or "darknets" offer a unique window into observing Internet-wide scanners and other malicious entities, and they could offer early warning signals to operators that would be critical for infrastructure protection and/or attack mitigation. A network telescope consists of unused or "dark" IP space that serves no users and solely passively observes any Internet traffic destined to the "telescope sensor", in an attempt to record ubiquitous network scanners, malware that forages for vulnerable devices, and other dubious activities. Hence, monitoring network telescopes for timely detection of coordinated and heavy scanning activities is an important, albeit challenging, task. The challenges mainly arise due to the non-stationarity and dynamic nature of Internet traffic and, more importantly, the fact that one needs to monitor high-dimensional signals (e.g., all TCP/UDP ports) to search for "sparse" anomalies. We propose statistical methods to address both challenges in an efficient and "online" manner; our work is validated with both synthetic data and real-world data from a large network telescope.
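    As a rough illustration of the "sparse anomaly in a high-dimensional signal" setting (a simple baseline, not the paper's estimator), one can keep exponentially weighted per-port statistics and flag the few ports whose current counts deviate strongly; the decay factor and threshold below are arbitrary choices.

        # Minimal online baseline: per-port EWMA mean/variance over a 65,536-dimensional
        # count vector, flagging the sparse set of ports with large standardized deviations.
        import numpy as np

        class OnlinePortDetector:
            def __init__(self, dim=65536, decay=0.99, threshold=6.0):
                self.mean = np.zeros(dim)
                self.var = np.ones(dim)
                self.decay = decay
                self.threshold = threshold

            def update(self, counts):
                """counts: per-port packet counts observed in the current time bin."""
                resid = counts - self.mean
                score = resid / np.sqrt(self.var + 1e-9)
                # Update running moments after scoring (single pass, constant memory).
                self.mean = self.decay * self.mean + (1 - self.decay) * counts
                self.var = self.decay * self.var + (1 - self.decay) * resid ** 2
                return np.flatnonzero(score > self.threshold)  # indices of flagged ports

    Feeding one count vector per time bin (detector.update(bin_counts)) keeps the computation online; the statistical methods in the paper address the same setting with explicit care for non-stationarity.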

    Herding Vulnerable Cats: A Statistical Approach to Disentangle Joint Responsibility for Web Security in Shared Hosting

    Hosting providers play a key role in fighting web compromise, but their ability to prevent abuse is constrained by the security practices of their own customers. Shared hosting offers a unique perspective since customers operate under restricted privileges and providers retain more control over configurations. We present the first empirical analysis of the distribution of web security features and software patching practices in shared hosting providers, the influence of providers on these security practices, and their impact on web compromise rates. We construct provider-level features on the global market for shared hosting, comprising 1,259 providers, by gathering indicators from 442,684 domains. Exploratory factor analysis of 15 indicators identifies four main latent factors that capture security efforts: content security, webmaster security, web infrastructure security and web application security. We confirm, via a fixed-effect regression model, that providers exert significant influence over the latter two factors, which are both related to the software stack in their hosting environment. Finally, by means of GLM regression analysis of these factors on phishing and malware abuse, we show that the four security and software patching factors explain between 10% and 19% of the variance in abuse at providers, after controlling for size. For web application security, for instance, we find that a provider moving from the bottom 10% to the best-performing 10% would experience four times fewer phishing incidents. We show that providers have influence over patch levels, even higher in the stack where CMSes can run as client-side software, and that this influence is tied to a substantial reduction in abuse levels.
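    The analysis pipeline can be pictured with a short, illustrative sketch (not the authors' exact models or software): latent factors are extracted from the provider-level indicators, and abuse counts are then regressed on the factor scores with a size control. The column layout, the four-factor choice mirroring the abstract, and the Poisson family are assumptions.

        # Illustrative factor-analysis + GLM pipeline on provider-level data.
        import numpy as np
        import statsmodels.api as sm
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        def abuse_model(indicators, abuse_counts, log_size):
            """indicators: (n_providers, 15) matrix; abuse_counts, log_size: (n_providers,)."""
            z = StandardScaler().fit_transform(indicators)
            factors = FactorAnalysis(n_components=4).fit_transform(z)  # four latent factors
            X = sm.add_constant(np.column_stack([factors, log_size]))  # control for size
            return sm.GLM(abuse_counts, X, family=sm.families.Poisson()).fit()

        # e.g. result = abuse_model(ind_matrix, phishing_counts, np.log(domain_counts))
        #      print(result.summary())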

    Best Practices for Notification Studies for Security and Privacy Issues on the Internet

    Researchers help operators of vulnerable and non-compliant internet services by individually notifying them about security and privacy issues uncovered in their research. To improve the efficiency and effectiveness of such efforts, dedicated notification studies are imperative. As of today, there is no comprehensive documentation of pitfalls and best practices for conducting such notification studies, which limits the validity of results and impedes reproducibility. Drawing on our experience with such studies and guidance from related work, we present a set of guidelines and practical recommendations covering initial data collection, sending of notifications, interacting with the recipients, and publishing the results. We note that future studies can especially benefit from extensive planning and automation of crucial processes, i.e., activities that take place well before the first notifications are sent.
    Comment: Accepted to the 3rd International Workshop on Information Security Methodology and Replication Studies (IWSMR '21), co-located with ARES '21
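    One of the processes that benefits most from automation is the actual sending of notifications. A minimal sketch of such automation (purely illustrative; the addresses, message template, and SMTP host are placeholders) might template messages from a vetted contact list and support a dry run for review before anything is sent.

        # Hedged sketch of an automated, templated notification step with a dry-run switch.
        import smtplib
        from email.message import EmailMessage

        TEMPLATE = """Dear operator of {asset},

        during a research study we observed {issue} affecting {asset}.
        Remediation guidance: {advice_url}
        """

        def notify(contacts, advice_url, smtp_host="smtp.example.org", dry_run=True):
            """contacts: iterable of (email, asset, issue) tuples from prior data collection."""
            for email_addr, asset, issue in contacts:
                msg = EmailMessage()
                msg["From"] = "research-notifications@example.org"
                msg["To"] = email_addr
                msg["Subject"] = f"Security notification regarding {asset}"
                msg.set_content(TEMPLATE.format(asset=asset, issue=issue, advice_url=advice_url))
                if dry_run:
                    print(msg)  # review messages before a real send
                else:
                    with smtplib.SMTP(smtp_host) as s:
                        s.send_message(msg)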

    Security research and learning environment based on scalable network emulation

    Security attacks are becoming a standard part of the Internet and their frequency is constantly increasing. Therefore, an efficient way to research and investigate attacks is needed. Studying attacks needs to be coupled with security evaluation of the currently deployed systems that are affected by them. The security evaluation and research process needs to be completed quickly to counter incoming attacks, but this is currently a complex and time-consuming procedure involving a wide variety of systems and tools. Furthermore, as the attack frequency increases, new security specialists need to be trained in a comprehensible and standardized way. We propose a new approach to security evaluation and research that uses scalable network emulation based on lightweight virtualization implemented in IMUNES. This approach provides a unified testing environment that is efficient and straightforward to use. The emulated environment also doubles as a portable and intuitive training tool. Through a series of implemented and evaluated scenarios, we demonstrate several concepts that can be used for a novel approach to security evaluation and research.

    The Closed Resolver Project: Measuring the Deployment of Source Address Validation of Inbound Traffic

    Source Address Validation (SAV) is a standard aimed at discarding packets with spoofed source IP addresses. The absence of SAV for outgoing traffic has been known as a root cause of Distributed Denial-of-Service (DDoS) attacks and has received widespread attention. While less obvious, the absence of inbound filtering enables an attacker to appear as an internal host of a network and may reveal valuable information about the network infrastructure. Inbound IP spoofing may amplify other attack vectors such as DNS cache poisoning or the recently discovered NXNSAttack. In this paper, we present the preliminary results of the Closed Resolver Project, which aims at mitigating the problem of inbound IP spoofing. We perform the first Internet-wide active measurement study to enumerate networks that filter or do not filter incoming packets by their source address, for both the IPv4 and IPv6 address spaces. To achieve this, we identify closed and open DNS resolvers that accept spoofed requests coming from outside their network. The proposed method provides the most complete picture of inbound SAV deployment by network providers. Our measurements cover over 55 % of IPv4 and 27 % of IPv6 Autonomous Systems (ASes) and reveal that the great majority of them are fully or partially vulnerable to inbound spoofing. By identifying dual-stacked DNS resolvers, we additionally show that inbound filtering is less often deployed for IPv6 than for IPv4. Overall, we discover 13.9 K IPv6 open resolvers that can be exploited for amplification DDoS attacks - 13 times more than previous work. Furthermore, we uncover 4.25 M IPv4 and 103 K IPv6 vulnerable closed resolvers that could only be detected thanks to our spoofing technique and that pose a significant threat when combined with the NXNSAttack.
    Comment: arXiv admin note: substantial text overlap with arXiv:2002.0044
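    After the probing step (the same spoofed-query technique sketched earlier in this list), per-resolver outcomes have to be aggregated to prefixes and Autonomous Systems to produce figures such as "fully or partially vulnerable". A hedged sketch of that aggregation, with an assumed record format and labels, could look as follows.

        # Illustrative aggregation step, not the authors' pipeline: label each AS by
        # how many of its tested resolvers accepted a spoofed (internal-source) query.
        from collections import defaultdict

        def classify_ases(records):
            """records: iterable of (resolver_ip, asn, accepted_spoofed: bool)."""
            per_as = defaultdict(lambda: [0, 0])  # asn -> [spoofable resolvers, total tested]
            for _ip, asn, accepted in records:
                per_as[asn][1] += 1
                if accepted:
                    per_as[asn][0] += 1
            labels = {}
            for asn, (spoofable, total) in per_as.items():
                if spoofable == 0:
                    labels[asn] = "filtering"
                elif spoofable == total:
                    labels[asn] = "fully vulnerable"
                else:
                    labels[asn] = "partially vulnerable"
            return labels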

    Per-host DDoS mitigation by direct-control reinforcement learning

    DDoS attacks plague the availability of online services today, yet, like many cybersecurity problems, they are evolving and non-stationary. Normal and attack patterns shift as new protocols and applications are introduced, further compounded by burstiness and seasonal variation. Accordingly, it is difficult to apply machine learning-based techniques and defences in practice. Reinforcement learning (RL) may overcome this detection problem for DDoS attacks by managing and monitoring consequences; an agent's role is to learn to optimise performance criteria (which are always available) in an online manner. We advance the state of the art in RL-based DDoS mitigation by introducing two agent classes designed to act on a per-flow basis, in a protocol-agnostic manner, for any network topology. This is supported by an in-depth investigation of feature suitability and empirical evaluation. Our results show the existence of flow features with high predictive power for different traffic classes when used as a basis for feedback-loop-like control. We show that the new RL agent models can offer a significant increase in goodput of legitimate TCP traffic for many choices of host density.
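    To illustrate the flavour of per-flow, protocol-agnostic control (this toy sketch is not the paper's agent design), a tabular agent can map discretized flow features to a throttle level and learn from a goodput-derived reward; the state encoding, action set, and reward below are all illustrative assumptions.

        # Toy per-flow throttling agent with epsilon-greedy tabular Q-learning.
        import random
        from collections import defaultdict

        ACTIONS = [1.0, 0.5, 0.1, 0.0]   # fraction of the flow's traffic to admit

        class PerFlowAgent:
            def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
                self.q = defaultdict(float)  # (state, action_index) -> estimated value
                self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

            def act(self, state):
                if random.random() < self.epsilon:
                    return random.randrange(len(ACTIONS))
                return max(range(len(ACTIONS)), key=lambda a: self.q[(state, a)])

            def learn(self, state, action, reward, next_state):
                best_next = max(self.q[(next_state, a)] for a in range(len(ACTIONS)))
                td = reward + self.gamma * best_next - self.q[(state, action)]
                self.q[(state, action)] += self.alpha * td

        # Per time step and per flow: state = discretized flow features (e.g., a packet-rate
        # bucket), action = ACTIONS[agent.act(state)], reward = legitimate goodput observed.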