4 research outputs found

    Cyber Threats to the Private Academic Cloud

    The potential breach of access to confidential content hosted in a university's Private Academic Cloud (PAC) underscores the need for new protection methods. This paper introduces a Threat Analyzer Software (TAS) and a predictive algorithm rooted in both an operational model and discrete threat recognition procedures (DTRPs). These tools help identify the functional layers that attackers could exploit to embed malware in guest operating systems (OS) and the PAC hypervisor. The proposed solutions play a crucial role in countering the introduction of malware into the PAC. Various hypervisor components are viewed as potential threat sources to the PAC's information security (IS); such threats may manifest through the distribution of malware or the initiation of processes that compromise the PAC's security. The demonstrated counter-threat method, founded on the operational model and discrete threat recognition procedures, enables mechanisms within the HIPV to quickly identify cyber attacks on the PAC, especially those employing "rootkit" technologies. This prompt identification empowers defenders to take swift, appropriate action to safeguard the PAC.
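    The abstract does not specify how its discrete threat recognition procedures work, but a common way to spot rootkit activity from the hypervisor is cross-view detection: a rootkit can hide a process from listings inside the guest OS, yet the process remains visible when the hypervisor introspects guest memory directly. The following minimal sketch illustrates that idea only; the function names and data sources are assumptions for illustration, not the paper's TAS or HIPV interfaces.

    # Hypothetical sketch of one discrete threat recognition check:
    # cross-view rootkit detection. A PID visible to the hypervisor's
    # introspected view but absent from the guest's own process listing
    # suggests a hidden (rootkit-concealed) process.

    def detect_hidden_processes(guest_view: set[int], hypervisor_view: set[int]) -> set[int]:
        """Return PIDs visible to the hypervisor but hidden inside the guest."""
        return hypervisor_view - guest_view

    if __name__ == "__main__":
        guest_pids = {1, 120, 733}               # e.g. parsed from `ps` inside the guest
        introspected_pids = {1, 120, 733, 901}   # e.g. walked from guest memory via introspection
        hidden = detect_hidden_processes(guest_pids, introspected_pids)
        if hidden:
            print(f"Possible rootkit: hidden PIDs {sorted(hidden)}")

    The design rests on the asymmetry the abstract exploits: a guest-resident rootkit controls what the guest OS reports, but not what the hypervisor observes from outside the guest.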

    When PETs misbehave: A Contextual Integrity analysis

    Privacy enhancing technologies, or PETs, have been hailed as a promising means to protect privacy without compromising the functionality of digital services. At the same time, and partly because they may encode a narrow conceptualization of privacy as confidentiality that is popular among policymakers, engineers, and the public, PETs risk being co-opted to promote privacy-invasive practices. In this paper, we use the theory of Contextual Integrity to explain how privacy technologies may be misused to erode privacy. To illustrate, we consider three PETs and scenarios: anonymous credentials for age verification, client-side scanning for illegal content detection, and homomorphic encryption for machine learning model training. Using the theory of Contextual Integrity, we reason about the notion of privacy that these PETs encode, and show that CI enables us to identify and reason about the limitations of PETs, their misuse, and how such misuse may ultimately lead to privacy violations.
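    Contextual Integrity models an information flow with five parameters (sender, recipient, data subject, information type, and transmission principle) and judges the flow against the norms of its context. The sketch below shows, under assumptions, how such a check could be mechanized; the norms and the example flow are invented for illustration, and the paper's actual CI analysis is qualitative.

    # Minimal sketch of a Contextual Integrity (CI) check using the standard
    # five-parameter flow model. Flows that match no contextual norm are
    # flagged as potential CI violations. All concrete values are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        sender: str
        recipient: str
        subject: str
        info_type: str
        principle: str  # transmission principle, e.g. "with consent"

    # Contextual norms: flows deemed appropriate in an age-verification context.
    AGE_VERIFICATION_NORMS = {
        Flow("user", "retailer", "user", "is_over_18", "with consent"),
    }

    def violates_ci(flow: Flow, norms: set[Flow]) -> bool:
        """A flow that matches no contextual norm is a potential CI violation."""
        return flow not in norms

    # A client-side scanner forwarding content identifiers to a provider does not
    # match the age-verification norm above, so CI flags it even though the
    # content itself may remain "confidential".
    flow = Flow("user_device", "provider", "user", "content_hash", "without consent")
    print(violates_ci(flow, AGE_VERIFICATION_NORMS))  # True

    This captures the paper's core point: a PET can preserve confidentiality (only a hash leaves the device) while still violating the norms of the context in which the information flows.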