    Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

    Decentralized systems are a subset of distributed systems in which multiple authorities control different components and no single authority is fully trusted by all. This implies that any component of a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

    Undetectable Communication: The Online Social Networks Case

    Online Social Networks (OSNs) provide users with an easy way to share content, communicate, and update others about their activities. They also play an increasingly fundamental role in coordinating and amplifying grassroots movements, as demonstrated by recent uprisings in, e.g., Egypt, Tunisia, and Turkey. At the same time, OSNs have become primary targets of tracking and profiling, as well as censorship and surveillance. In this paper, we explore the notion of undetectable communication in OSNs and introduce formal definitions, alongside system and adversarial models, that complement the better-understood notions of anonymity and confidentiality. We present a novel scheme for secure covert information sharing that, to the best of our knowledge, is the first to achieve undetectable communication in OSNs. We demonstrate, via an open-source prototype, that its additional costs are tolerably low.

    The Threat of Offensive AI to Organizations

    AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, positive tools can also be used for negative purposes. In particular, cyber adversaries can use AI to enhance their attacks and expand their campaigns. Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today, and what will be their impact in the future? In this study, we explore the threat of offensive AI to organizations. First, we present the background and discuss how AI changes the adversary's methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 32 offensive AI capabilities that adversaries can use to enhance their attacks. Finally, through a panel survey spanning industry, government, and academia, we rank the AI threats and provide insights on the adversaries.