7 research outputs found

    Anomaly detection through information sharing under different topologies

    No full text
    Early detection of traffic anomalies in networks increases the probability of effective intervention/mitigation actions, thereby improving the stability of system function. Centralized methods of anomaly detection are subject to inherent constraints: (1) they create a communication burden on the system, (2) they impose a delay in detection while information is being gathered, and (3) they require some trust and/or sharing of traffic information patterns. On the other hand, truly parallel, distributed methods are fast and private but can observe only local information. These methods can easily fail to see the “big picture” as they focus on only one thread in a tapestry. A recently proposed algorithm, Distributed Intrusion/Anomaly Monitoring for Nonparametric Detection (DIAMoND), addressed these problems by using parallel surveillance that included dynamic detection thresholds. These thresholds were functions of nonparametric information shared among network neighbors. Here, we explore the influence of network topology and patterns in normal traffic flow on the performance of the DIAMoND algorithm. We contrast performance to a truly parallel, independent surveillance system. We show that incorporation of nonparametric data improves anomaly detection capabilities in most cases, without incurring the practical problems of fully parallel network surveillance.
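    The core mechanism this abstract describes, local detectors that tighten or relax their alarm thresholds based on a nonparametric "concern" level shared by network neighbors, can be illustrated compactly. The linear update rule, class, and parameter names below are assumptions made for demonstration, not the published DIAMoND algorithm:

```python
# Illustrative sketch of a DIAMoND-style dynamic detection threshold.
# The linear update rule and all parameter values are assumptions,
# not the authors' exact algorithm.

class Node:
    def __init__(self, base_threshold=0.9, sensitivity=0.1):
        self.base_threshold = base_threshold  # threshold used when neighbors report no concern
        self.sensitivity = sensitivity        # how strongly neighbor concern lowers the threshold
        self.concern = 0.0                    # nonparametric concern level in [0, 1]

    def effective_threshold(self, neighbor_concerns):
        # Lower the local threshold as neighbors grow concerned, so weak
        # local signals become suspicious in a network-wide context.
        avg = sum(neighbor_concerns) / len(neighbor_concerns) if neighbor_concerns else 0.0
        return self.base_threshold - self.sensitivity * avg

    def observe(self, local_score, neighbor_concerns):
        alarm = local_score > self.effective_threshold(neighbor_concerns)
        # Only this dimensionless level is shared with neighbors, never raw traffic data.
        self.concern = min(1.0, local_score / self.base_threshold)
        return alarm

node = Node()
print(node.observe(0.85, [0.0, 0.0]))  # False: below the base threshold
print(node.observe(0.85, [0.8, 0.9]))  # True: neighbor concern lowered the threshold
```

    Note how the same borderline local score is classified differently depending on neighbor state; this is the "big picture" effect the abstract attributes to sharing nonparametric information.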

    Using loops observed in traceroute to infer the ability to spoof

    No full text
    Despite source IP address spoofing being a known vulnerability for at least 25 years, and despite many efforts to shed light on the problem, spoofing remains a popular attack method for redirection, amplification, and anonymity. To defeat these attacks requires operators to ensure their networks filter packets with spoofed source IP addresses, known as source address validation (SAV), best deployed at the edge of the network where traffic originates. In this paper, we present a new method using routing loops appearing in traceroute data to infer inadequate SAV at the transit provider edge, where a provider does not filter traffic that should not have come from the customer. Our method does not require a vantage point within the customer network. We present and validate an algorithm that identifies at Internet scale which loops imply a lack of ingress filtering by providers. We found 703 provider ASes that do not implement ingress filtering on at least one of their links for 1,780 customer ASes. Most of these observations are unique compared to the existing methods of the Spoofer and Open Resolver projects. By increasing the visibility of the networks that allow spoofing, we aim to strengthen the incentives for the adoption of SAV.
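    The observable at the heart of this method, a forwarding loop in a traceroute path, is easy to sketch. The helper below only finds the loop; the AS-level mapping and the filtering rules that the paper validates at Internet scale are omitted, and all names and addresses are illustrative:

```python
# Minimal sketch: flag a routing loop in a traceroute path. A packet sent
# toward unrouted customer space that ping-pongs between provider and
# customer routers shows up as a repeated hop. (Illustrative only; the
# paper's AS mapping and validation steps are not reproduced here.)

def find_loop(hops):
    """Return (i, j), the indices of the first repeated responsive hop, else None."""
    seen = {}
    for i, hop in enumerate(hops):
        if hop is None:        # unresponsive hop, shown as '*' by traceroute
            continue
        if hop in seen:
            return seen[hop], i
        seen[hop] = i
    return None

# A path that enters a provider/customer ping-pong loop:
path = ["198.51.100.1", "203.0.113.9", "203.0.113.10", "203.0.113.9"]
print(find_loop(path))  # (1, 3): hop 203.0.113.9 appears twice
```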

    Make notifications great again: learning how to notify in the age of large-scale vulnerability scanning

    No full text
    As large-scale vulnerability detection becomes more feasible, it also increases the urgency to find effective large-scale notification mechanisms to inform the affected parties. Researchers, CERTs, security companies and other organizations with vulnerability data have a variety of options to identify, contact and communicate with the actors responsible for the affected system or service. A lot of things can – and do – go wrong: it might be impossible to identify the appropriate recipient of the notification, the message might not be trusted by the recipient, or it might be overlooked, ignored or misunderstood. Such problems multiply as the volume of notifications increases. In this paper, we undertake several large-scale notification campaigns for a vulnerable configuration of authoritative nameservers. We investigate three issues: What is the most effective way to reach the affected parties? What communication path mobilizes the strongest incentive for remediation? And finally, what is the impact of giving recipients a mechanism to actively demonstrate the vulnerability on their own system, rather than sending them the standard static notification message? We find that retrieving contact information at scale is highly problematic, though there are different degrees of failure for different mechanisms. For those parties who are reached, notification significantly increases remediation rates. Reaching out to nameserver operators directly had better results than going via their customers, the domain owners. While the latter, in principle, have a stronger incentive to care, and their request for remediation would trigger the operator's commercial incentive to keep its customers happy, this communication path turned out to have slightly worse remediation rates. Finally, we find no evidence that vulnerability demonstrations did better than static messages. In fact, few recipients engaged with the demonstration website.

    Evaluating the Impact of AbuseHUB on Botnet Mitigation

    No full text
    This document presents the final report of a two-year project to evaluate the impact of AbuseHUB, a Dutch clearinghouse for acquiring and processing abuse data on infected machines. The report was commissioned by the Netherlands Ministry of Economic Affairs, a co-funder of the development of AbuseHUB. AbuseHUB is the initiative of 9 Internet Service Providers, SIDN (the registry for the .nl top-level domain) and Surfnet (the national research and education network operator). The key objective of AbuseHUB is to improve the mitigation of botnets by its members. We set out to assess whether this objective is being reached by analyzing malware infection levels in the networks of AbuseHUB members and comparing them to those of other Internet Service Providers (ISPs). Since AbuseHUB members together comprise over 90 percent of the broadband market in the Netherlands, it also makes sense to compare how the country as a whole has performed relative to other countries. This report complements the baseline measurement report produced in December 2013 and the interim report from March 2015. We use the same data sources as in the interim report, an expanded set compared to the earlier baseline report and to our 2011 study into botnet mitigation in the Netherlands.

    No domain left behind: Is Let's Encrypt democratizing encryption?

    No full text
    The 2013 National Security Agency revelations of pervasive monitoring have led to an "encryption rush" across the computer and Internet industry. To push back against massive surveillance and protect users' privacy, vendors, hosting and cloud providers have widely deployed encryption on their hardware, communication links, and applications. As a consequence, most web connections nowadays are encrypted. However, a significant share of Internet traffic is still not encrypted. It has been argued that the costs and complexity associated with obtaining and deploying X.509 certificates are major barriers to widespread encryption, since these certificates are required to establish encrypted connections. To address these issues, the Electronic Frontier Foundation, the Mozilla Foundation, the University of Michigan and a number of partners have set up Let's Encrypt (LE), a certificate authority that provides both free X.509 certificates and software that automates the deployment of these certificates. In this paper, we investigate whether LE has been successful in democratizing encryption: we analyze certificate issuance in the first year of LE and show from various perspectives that LE adoption is trending upward and that LE is indeed covering the lower-cost end of the hosting market.
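    One measurement primitive behind a study like this is classifying a live certificate by its issuer. The standard-library sketch below checks whether a host currently serves a Let's Encrypt certificate; it is an assumed, simplified single-host probe, not the paper's large-scale issuance analysis:

```python
# Hedged sketch: does this host serve a certificate issued by Let's Encrypt?
# Uses only the Python standard library; requires network access.

import socket
import ssl

def issued_by_lets_encrypt(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()   # validated peer certificate as a dict
    issuer = dict(pair[0] for pair in cert["issuer"])
    return issuer.get("organizationName") == "Let's Encrypt"

print(issued_by_lets_encrypt("letsencrypt.org"))  # True, as of writing
```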

    Herding Vulnerable Cats: A Statistical Approach to Disentangle Joint Responsibility for Web Security in Shared Hosting

    No full text
    Hosting providers play a key role in fighting web compromise, but their ability to prevent abuse is constrained by the security practices of their own customers. Shared hosting offers a unique perspective, since customers operate under restricted privileges and providers retain more control over configurations. We present the first empirical analysis of the distribution of web security features and software patching practices in shared hosting providers, the influence of providers on these security practices, and their impact on web compromise rates. We construct provider-level features on the global market for shared hosting -- containing 1,259 providers -- by gathering indicators from 442,684 domains. Exploratory factor analysis of 15 indicators identifies four main latent factors that capture security efforts: content security, webmaster security, web infrastructure security and web application security. We confirm, via a fixed-effects regression model, that providers exert significant influence over the latter two factors, which are both related to the software stack in their hosting environment. Finally, by means of GLM regression analysis of these factors on phishing and malware abuse, we show that the four security and software patching factors explain between 10% and 19% of the variance in abuse at providers, after controlling for size. For web application security, for instance, we find that when a provider moves from the bottom 10% to the best-performing 10%, it experiences four times fewer phishing incidents. We show that providers have influence over patch levels -- even higher in the stack, where CMSes can run as client-side software -- and that this influence is tied to a substantial reduction in abuse levels.
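    The first stage of the analysis described here, exploratory factor analysis over provider-level indicators, can be sketched at the shape level. The data below are random placeholders rather than the paper's 15 indicators, and scikit-learn's FactorAnalysis is unrotated maximum-likelihood FA, so this illustrates the workflow rather than reproducing the study:

```python
# Shape-level sketch of exploratory factor analysis on provider-level
# security indicators. All data are random placeholders; the paper's
# actual indicators, rotation choices, and regressions are not reproduced.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indicators = rng.random((1259, 15))   # 1,259 providers x 15 indicators (placeholder)

X = StandardScaler().fit_transform(indicators)       # standardize indicators
fa = FactorAnalysis(n_components=4, random_state=0)  # four latent security factors
scores = fa.fit_transform(X)                         # per-provider factor scores

print(scores.shape)          # (1259, 4): inputs to the downstream regressions
print(fa.components_.shape)  # (4, 15): loading of each indicator on each factor
```

    In the pipeline the abstract describes, the per-provider factor scores would then feed the fixed-effects and GLM regressions on phishing and malware abuse counts.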