
    An Empirical Study of the Use of Integrity Verification Mechanisms for Web Subresources

    Web developers can (and do) include subresources such as scripts, stylesheets, and images in their webpages. Such subresources might be stored on remote servers such as content delivery networks (CDNs). This practice creates security and privacy risks, should a subresource be corrupted, as was recently the case for the British Airways websites. The subresource integrity (SRI) recommendation, released in mid-2016 by the W3C, enables developers to include digests in their webpages so that web browsers can verify the integrity of subresources before loading them. In this paper, we conduct the first large-scale longitudinal study of the use of SRI on the Web by analyzing massive crawls (3B unique URLs) of the Web over the last 3.5 years. Our results show that the adoption of SRI is modest (3.40%) but grows at an increasing rate, and is highly influenced by the practices of popular library developers (e.g., Bootstrap) and CDN operators (e.g., jsDelivr). We complement our analysis of SRI with a survey of web developers (N = 227): it shows that a substantial proportion of developers know SRI and understand its basic functioning, but most of them ignore important aspects of the specification, such as the case of malformed digests. The results of the survey also show that the integration of SRI by developers is mostly manual, hence neither scalable nor error-free. This calls for a better integration of SRI in build tools.
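    As an illustration of the mechanism the paper studies, the sketch below computes an SRI digest and the corresponding script tag. It follows the W3C SRI format (a base64-encoded SHA-384 hash prefixed with "sha384-"); the file name and CDN URL are hypothetical placeholders, not taken from the paper.

        import base64
        import hashlib

        # Hash the local copy of the subresource exactly as it will be served.
        # "jquery.min.js" and the CDN URL below are placeholders.
        with open("jquery.min.js", "rb") as f:
            digest = hashlib.sha384(f.read()).digest()

        token = "sha384-" + base64.b64encode(digest).decode("ascii")

        # A browser that supports SRI refuses to execute the script if the fetched
        # bytes do not hash to this value; crossorigin is required for CDN loads.
        print(f'<script src="https://cdn.example.com/jquery.min.js" '
              f'integrity="{token}" crossorigin="anonymous"></script>')

    Build tools could emit such tags automatically at bundle time, which is the kind of integration the authors call for.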

    Pareto-Optimal Defenses for the Web Infrastructure: Theory and Practice

    The integrity of the content a user is exposed to when browsing the web relies on a plethora of non-web technologies and an infrastructure of interdependent hosts, communication technologies, and trust relations. Incidents like the Chinese Great Cannon or the MyEtherWallet attack make it painfully clear: the security of end users hinges on the security of the surrounding infrastructure: routing, DNS, content delivery, and the PKI. There are many competing but isolated proposals to increase security, from the network up to the application layer. So far, researchers have focused on analyzing attacks and defenses on specific layers. We still lack an evaluation of how, given the status quo of the web, these proposals can be combined, how effective they are, and at what cost this increase in security comes. In this work, we propose a graph-based analysis based on Stackelberg planning that considers a rich attacker model and a multitude of proposals, from IPsec to DNSSEC and SRI. Our threat model considers the security of billions of users against attackers ranging from small hacker groups to nation-state actors. Analyzing the infrastructure of the Top 5k Alexa domains, we discover that the security mechanisms currently deployed are ineffective and that some infrastructure providers have a threat potential comparable to that of nations. We find that a considerable increase in security (up to 13% protected web visits) is possible at relatively modest cost, due to the effectiveness of mitigations at the application and transport layer, which dominate expensive infrastructure enhancements such as DNSSEC and IPsec.
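    To illustrate the Pareto-optimality notion in the title, here is a minimal, hypothetical sketch: given candidate defenses with deployment costs and protected-visit gains, it enumerates combinations and keeps those not dominated by an alternative that is at least as cheap and at least as protective. The numbers and the additive-gain simplification are invented for illustration; the paper's actual analysis uses Stackelberg planning over an infrastructure graph with a rich attacker model.

        from itertools import combinations

        # Illustrative (invented) deployment costs and protected-visit gains.
        defenses = {"SRI": (1.0, 0.04), "HSTS": (0.5, 0.03), "DNSSEC": (8.0, 0.02)}

        candidates = []
        for r in range(len(defenses) + 1):
            for combo in combinations(defenses, r):
                cost = sum(defenses[d][0] for d in combo)
                gain = sum(defenses[d][1] for d in combo)  # simplification: gains add up
                candidates.append((cost, gain, combo))

        # A combination is Pareto-optimal if no other is at least as cheap and
        # at least as protective, with a strict improvement in one dimension.
        pareto = [c for c in candidates
                  if not any(o[0] <= c[0] and o[1] >= c[1]
                             and (o[0] < c[0] or o[1] > c[1])
                             for o in candidates)]

        for cost, gain, combo in sorted(pareto):
            print(f"cost={cost:.1f}  protected visits +{gain:.0%}  {combo or ('baseline',)}")

    The output traces a cost/benefit frontier; in the paper, cheap application- and transport-layer mitigations occupy that frontier ahead of expensive infrastructure enhancements.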