
    PerfWeb: How to Violate Web Privacy with Hardware Performance Events

    The browser history reveals highly sensitive information about users, such as financial status, health conditions, or political views. Private browsing modes and anonymity networks are consequently important tools to preserve the privacy not only of regular users, but in particular of whistleblowers and dissidents. Yet, in this work we show how a malicious application can infer opened websites from Google Chrome in Incognito mode and from Tor Browser by exploiting hardware performance events (HPEs). In particular, we analyze the browsers' microarchitectural footprint with the help of advanced Machine Learning techniques: k-Nearest Neighbors, Decision Trees, Support Vector Machines, and, in contrast to previous literature, also Convolutional Neural Networks. We profile 40 different websites, 30 of the top Alexa sites and 10 whistleblowing portals, on two machines featuring an Intel and an ARM processor. By monitoring retired instructions, cache accesses, and bus cycles for at most 5 seconds, we manage to classify the selected websites with a success rate of up to 86.3%. The results show that hardware performance events can clearly undermine the privacy of web users. We therefore propose mitigation strategies that impede our attacks while still allowing legitimate use of HPEs.
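    As a rough illustration of the classification stage, the following sketch trains two of the mentioned classifiers on per-website counter traces with scikit-learn. It assumes the traces have already been collected; the load_hpe_traces() helper and the synthetic data it returns are hypothetical stand-ins for the paper's actual measurements and feature extraction.

```python
# Minimal sketch: classifying per-website hardware-performance-event traces
# with scikit-learn. The loader below is a hypothetical placeholder; the
# paper's feature extraction and CNN models are not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def load_hpe_traces():
    """Hypothetical loader: each row of X is a fixed-length vector of sampled
    counter values (retired instructions, cache accesses, bus cycles); y holds
    the visited-website labels. Synthetic random data is used here."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 300))      # 400 synthetic traces, 300 samples each
    y = rng.integers(0, 40, size=400)    # 40 website classes, as in the paper
    return X, y

X, y = load_hpe_traces()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

    Since the data above is random noise, the printed accuracies are meaningless; the point is only to show the shape of the pipeline, not to reproduce the reported 86.3% result.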

    Cloud Storage File Recoverability

    Data loss is perceived as one of the major threats for cloud storage. Consequently, the security community has developed several challenge-response protocols that allow a user to remotely verify whether an outsourced file is still intact. However, two important practical problems have not yet been considered. First, clients commonly outsource multiple files of different sizes, raising the question of how to formalize such a scheme and, in particular, how to ensure that all files can be audited simultaneously. Second, if auditing of the files fails, existing schemes do not provide a client with any method to prove whether the original files are still recoverable. We address both problems and describe appropriate solutions. The first problem is tackled by providing a new type of Proofs of Retrievability scheme, enabling a client to check all files simultaneously in a compact way. The second problem is solved by defining a novel procedure called Proofs of Recoverability, enabling a client to obtain an assurance of whether a file is recoverable or irreparably damaged. Finally, we present a combination of both schemes, allowing the client to check the recoverability of all her original files, thus ensuring cloud storage file recoverability.
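    The flavor of such challenge-response auditing can be conveyed with a toy spot-check: the client tags each file block with an index-bound HMAC before upload and later challenges the storage provider on randomly chosen blocks. This is only an illustrative sketch under simplifying assumptions; it omits the compact multi-file proofs, erasure coding, and the Proofs of Recoverability procedure the paper actually constructs.

```python
# Toy spot-check audit in the spirit of challenge-response storage verification.
# Illustrative only: not the paper's scheme, no erasure coding, no compact proofs.
import hmac, hashlib, os, secrets

BLOCK_SIZE = 4096

def setup(key: bytes, data: bytes):
    """Split the file into blocks and tag each block with an index-bound HMAC.
    The client keeps only the key; blocks and tags are outsourced."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    tags = [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags

def audit(key: bytes, blocks, tags, num_challenges: int = 10) -> bool:
    """Challenge the server on random block indices and verify each response."""
    for _ in range(num_challenges):
        i = secrets.randbelow(len(blocks))
        block, tag = blocks[i], tags[i]           # the server's response
        expected = hmac.new(key, i.to_bytes(8, "big") + block, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
    return True

key = os.urandom(32)
blocks, tags = setup(key, os.urandom(1 << 20))    # 1 MiB of example data
print("audit passed:", audit(key, blocks, tags))
```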

    SoK: Privacy-Preserving Signatures

    Modern security systems depend fundamentally on the ability of users to authenticate their communications to other parties in a network. Unfortunately, cryptographic authentication can substantially undermine the privacy of users. One possible solution to this problem is to use privacy-preserving cryptographic authentication. These protocols allow users to authenticate their communications without revealing their identity to the verifier. In the non-interactive setting, the most common protocols include blind, ring, and group signatures, each of which has been the subject of extensive research in the security and cryptography literature. These primitives are now being deployed at scale in major applications, including Intel's SGX software attestation framework. The depth of the research literature and the prospect of large-scale deployment motivate us to systematize our understanding of the research in this area. This work provides an overview of these techniques, focusing on applications and efficiency.
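    Among the surveyed primitives, blind signatures are perhaps the simplest to sketch. The toy example below shows Chaum-style RSA blinding, in which the signer never sees the message it signs. The parameters are deliberately tiny and the scheme lacks padding, so this is illustrative only and not one of the constructions or deployments discussed in the paper.

```python
# Minimal textbook sketch of Chaum-style RSA blind signatures.
# Toy parameters only; real use requires full-size keys, proper padding,
# and a vetted cryptographic library.
import hashlib, secrets
from math import gcd

# Tiny illustrative RSA key (far too small for real use).
p, q = 1_000_003, 1_000_033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

m = h(b"authenticate me without revealing who I am")

# User blinds the message so the signer never learns it.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Signer signs the blinded value with its private key.
blinded_sig = pow(blinded, d, n)

# User unblinds; the result is a valid signature on the original message.
sig = (blinded_sig * pow(r, -1, n)) % n
print("signature verifies:", pow(sig, e, n) == m)
```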

    Flexible Information-Flow Control

    As more and more sensitive data is handled by software, its trustworthiness becomes an increasingly important concern. This thesis presents work on ensuring that information processed by computing systems is not disclosed to third parties without the user's permission, i.e. on preventing unwanted flows of information. While this problem is widely studied, proposed rigorous information-flow control approaches that enforce strong security properties like noninterference have yet to see widespread practical use. Conversely, lightweight techniques such as taint tracking are more prevalent in practice, but lack formal underpinnings, making it unclear what guarantees they provide. This thesis aims to shrink the gap between heavyweight information-flow control approaches that have been proven sound and lightweight practical techniques without formal guarantees, such as taint tracking. It attempts to reconcile these areas by (a) providing formal foundations to taint tracking approaches, (b) extending information-flow control techniques to more realistic languages and settings, and (c) exploring security policies and mechanisms that fall in between information-flow control and taint tracking, and investigating what trade-offs they incur.
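    To make the contrast concrete, the sketch below shows the kind of lightweight dynamic taint tracking the thesis refers to: values from a sensitive source carry a taint mark that propagates through operations and is checked at sinks. The source and sink names are hypothetical, and such tracking only catches explicit flows, which is precisely the gap to noninterference-style guarantees the thesis addresses.

```python
# Minimal sketch of dynamic taint tracking: a taint mark propagates from a
# sensitive source through string operations and is checked at a sink.
# Source/sink names are hypothetical; real trackers hook far more operations.
class Tainted:
    """A string-like value that remembers it was derived from sensitive input."""
    def __init__(self, value: str):
        self.value = value
    def __add__(self, other):
        other_val = other.value if isinstance(other, Tainted) else other
        return Tainted(self.value + other_val)
    def __radd__(self, other):
        return Tainted(other + self.value)
    def __str__(self):
        return self.value

def read_secret() -> Tainted:             # source: returns tainted data
    return Tainted("s3cr3t-token")

def send_to_third_party(value):           # sink: must not receive tainted data
    if isinstance(value, Tainted):
        raise RuntimeError("information-flow violation: tainted data reached a sink")
    print("sent:", value)

secret = read_secret()
message = "token=" + secret               # taint propagates through concatenation
send_to_third_party("hello, world")       # untainted data flows freely
try:
    send_to_third_party(message)          # explicit flow of the secret is blocked
except RuntimeError as err:
    print(err)
```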

    Undermining User Privacy on Mobile Devices Using AI

    Over the past years, literature has shown that attacks exploiting the microarchitecture of modern processors pose a serious threat to the privacy of mobile phone users. This is because applications leave distinct footprints in the processor, which can be used by malware to infer user activities. In this work, we show that these inference attacks are considerably more practical when combined with advanced AI techniques. In particular, we focus on profiling the activity in the last-level cache (LLC) of ARM processors. We employ a simple Prime+Probe based monitoring technique to obtain cache traces, which we classify with Deep Learning methods including Convolutional Neural Networks. We demonstrate our approach on an off-the-shelf Android phone by launching a successful attack from an unprivileged, zero-permission app in well under a minute. The app detects running applications with an accuracy of 98% and reveals opened websites and streaming videos by monitoring the LLC for at most 6 seconds. This is possible because Deep Learning compensates for measurement disturbances stemming from the inherently noisy LLC monitoring and unfavorable cache characteristics such as random line-replacement policies. In summary, our results show that, thanks to advanced AI techniques, inference attacks are becoming alarmingly easy to implement and execute in practice. This once more calls for countermeasures that confine microarchitectural leakage and protect mobile phone applications, especially those valuing the privacy of their users.
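    The Deep Learning stage can be sketched as a small 1-D convolutional network over cache-occupancy traces. The trace length, class count, and synthetic data below are illustrative assumptions, and the on-device Prime+Probe measurement itself is not shown; this is a minimal sketch in the spirit of the described pipeline, not the paper's actual model.

```python
# Minimal sketch: a small 1-D CNN classifying cache-occupancy traces with PyTorch.
# Shapes, class count, and data are illustrative stand-ins, not the paper's setup.
import torch
import torch.nn as nn

NUM_CLASSES, TRACE_LEN = 10, 512       # e.g. 10 candidate apps, 512 probe samples

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * (TRACE_LEN // 16), NUM_CLASSES),
)

# Synthetic stand-in for labelled Prime+Probe traces: (batch, channel, time).
x = torch.randn(64, 1, TRACE_LEN)
y = torch.randint(0, NUM_CLASSES, (64,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(5):                  # a few steps just to show the training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```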