Theory and practice of firewall outsourcing
A firewall system is a packet filter placed at the entry point of an enterprise network in the Internet. Packets that attempt to enter the enterprise network through this entry point are examined, one by one, against the rules of some underlying firewall F of the firewall system. Each rule in F has a decision which is either "accept" or "reject". For any incoming packet p, the firewall system identifies the first rule (in the sequence of rules in F) that matches p. If the decision of this rule is "accept", then the firewall system forwards p to the enterprise network. Otherwise, the decision of this rule is "reject", and packet p is discarded and prevented from entering the network. Each firewall system consists of two units: a rule matching unit and a decision unit. Both units are usually executed within the firewall system itself. To simplify the task of managing the firewall system, we identify a special class of firewall systems, called outsourced systems, where the rule matching unit is executed in a public cloud. Unfortunately, public clouds are usually unreliable, and execution of the rule matching unit in a public cloud can be vulnerable to two types of attacks: verifiability attacks and privacy attacks. The main objective of this dissertation is to discuss how to execute the rule matching unit of an outsourced system in a public cloud such that verifiability and privacy attacks are prevented from occurring. The main contribution of this dissertation is three-fold. First, we discuss how to design an outsourced firewall system such that execution of the designed system in the public clouds prevents the occurrence of verifiability and privacy attacks. The resulting system, called the private system, makes use of two public clouds. We show that this private system prevents verifiability and privacy attacks under the assumption that the two public clouds used in this system are both "sensible" and "non-colluding".
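The first-match semantics described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's system: rules are modeled as hypothetical (predicate, decision) pairs and a packet as a dict of header fields.

```python
# First-match firewall evaluation: return the decision of the first
# rule whose predicate matches the packet.
def evaluate_firewall(rules, packet):
    for predicate, decision in rules:
        if predicate(packet):
            return decision
    return "reject"  # a common default when no rule matches

# Example rule list: accept TCP traffic to port 443, reject everything else.
rules = [
    (lambda p: p["proto"] == "tcp" and p["dport"] == 443, "accept"),
    (lambda p: True, "reject"),  # catch-all final rule
]

print(evaluate_firewall(rules, {"proto": "tcp", "dport": 443}))  # accept
print(evaluate_firewall(rules, {"proto": "udp", "dport": 53}))   # reject
```

Because evaluation stops at the first matching rule, the order of rules matters: placing the catch-all rule first would reject every packet.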
Second, we identify a special class of firewalls, called partially specified firewalls; a firewall is partially specified when the decisions of some of its rules are not specified as "accept" or "reject". We show that for every partially specified firewall PF, there is a (fully specified) firewall F such that PF and F are equivalent. We discuss how to design an outsourced system whose underlying firewall is a partially specified firewall PF such that the designed system prevents both verifiability and privacy attacks. We achieve this outsourced system by obtaining an equivalent firewall F from PF and designing a private system for F. Third, we present a generalization of firewalls called firewall expressions. A firewall expression is specified using one or more component firewalls and three firewall operators: "not", "and", and "or". For example, the firewall expression (G and H) consists of two component firewalls G and H and one firewall operator "and". This firewall expression accepts a packet p iff both firewalls G and H accept p. For any underlying firewall expression FE, we define an expression system as a generalization of firewall systems that takes as input any packet p and determines whether the underlying firewall expression FE accepts or rejects p. We design an outsourced expression system for any underlying firewall expression FE. We achieve this outsourced expression system by using a private system for each component firewall of FE and combining these private systems through an overall decision unit to determine whether any packet is accepted or rejected according to the firewall expression FE.
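The firewall-expression semantics can be sketched as simple combinators, assuming each component firewall is modeled as a function from packet to bool (True = accept). The combinator names f_not/f_and/f_or are illustrative, not the dissertation's notation.

```python
# Firewall operators as higher-order functions over packet -> bool.
def f_not(f):
    return lambda p: not f(p)

def f_and(f, g):
    return lambda p: f(p) and g(p)  # (G and H) accepts p iff both accept p

def f_or(f, g):
    return lambda p: f(p) or g(p)

# Example component firewalls G and H.
G = lambda p: p["dport"] == 443     # accept traffic to port 443
H = lambda p: p["proto"] == "tcp"   # accept TCP traffic

expr = f_and(G, H)                  # the firewall expression (G and H)
print(expr({"proto": "tcp", "dport": 443}))  # True: both G and H accept
print(expr({"proto": "udp", "dport": 443}))  # False: H rejects
```

Nesting the combinators builds arbitrary expressions, e.g. f_or(G, f_not(H)) for (G or (not H)).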
Zombie: Middleboxes that Don't Snoop
Zero-knowledge middleboxes (ZKMBs) are a recent paradigm in which clients get privacy while middleboxes enforce policy: clients prove in zero knowledge that the plaintext underlying their encrypted traffic complies with network policies, such as DNS filtering. However, prior work had impractically poor performance and was limited in functionality.
This work presents Zombie, the first system built using the ZKMB paradigm. Zombie introduces techniques that push ZKMBs to the verge of practicality: preprocessing (to move the bulk of proof generation to idle times between requests), asynchrony (to remove proving and verifying costs from the critical path), and batching (to amortize some of the verification work). Zombie's choices, together with these techniques, provide a factor of 3.5 speedup in total computation done by client and middlebox, lowering the critical path overhead for a DNS filtering application to less than 300ms (on commodity hardware) or (in the asynchronous configuration) to 0.
As an additional contribution that is likely of independent interest, Zombie introduces a portfolio of techniques to efficiently encode regular expressions in probabilistic (and zero knowledge) proofs; these techniques offer significant asymptotic and constant factor improvements in performance over a standard baseline. Zombie builds on this portfolio to support policies based on regular expressions, such as data loss prevention.
Outsmarting Network Security with SDN Teleportation
Software-defined networking is considered a promising new paradigm, enabling more reliable and formally verifiable communication networks. However, this paper shows that the separation of the control plane from the data plane, which lies at the heart of Software-Defined Networks (SDNs), introduces a new vulnerability which we call "teleportation". An attacker (e.g., a malicious switch in the data plane or a host connected to the network) can use teleportation to transmit information via the control plane, bypassing critical network functions in the data plane (e.g., a firewall), and to violate security policies as well as logical and even physical separations. This paper characterizes the design space for teleportation attacks theoretically, and then identifies four different teleportation techniques. We demonstrate and discuss how these techniques can be exploited for different attacks (e.g., exfiltrating confidential data at high rates), and also initiate the discussion of possible countermeasures. Generally, and given today's trend toward more intent-based networking, we believe that our findings are relevant beyond the use cases considered in this paper.
Comment: Accepted in EuroSP'1
Enhancing Automated Network Management
Network management benefits from automated tools. With the recent advent of software-defined principles, automated tools have been proposed by both industry and academia to fulfill function components in the network management control loop. While automation aims to accommodate the ever-increasing network diversity and dynamics with improved reliability and management efficiency, it also brings new concerns: it is becoming more difficult to understand the control of the network, and operators cannot rely on traditional troubleshooting tools. Meanwhile, how to effectively integrate new automation tools with existing legacy networks remains an open question. This dissertation presents efficient methods to address key functionalities within the control loop in the adoption of automated network management.
Identifying the network-wide forwarding behaviors of a packet is essential for many network management tasks, including policy enforcement, rule verification, and fault localization. We start by presenting AP Classifier, developed based on the concept of atomic predicates, which can be used to characterize the forwarding behaviors of packets. There is an increasing trend for enterprises to outsource their Network Function (NF) processing to a cloud to lower cost and ease management. To avoid threats to the enterprise's private information, we propose SICS, a secure and dynamic NF outsourcing framework based on AP Classifier. Stateful NFs have become essential parts of modern networks, increasing the complexity of network management. A major step in network automation is to automatically translate high-level network intents into low-level configurations. To ensure those configurations and the states generated by automation match intents, we present Epinoia, a network intent checker for stateful networks. While the concept of auto-translation sounds promising, operators may not know what the intents should be.
To close the control loop, we present AutoInfer to automatically infer intents of running networks, which helps operators understand the network runtime states.
Do You Need a Zero Knowledge Proof?
Zero-Knowledge Proofs (ZKPs), a cryptographic tool known for decades, have gained significant attention in recent years due to advancements that have made them practically applicable in real-world scenarios. ZKPs can provide unique attributes, such as succinctness, non-interactivity, and the ability to prove knowledge without revealing the information itself, making them an attractive solution for a range of applications.
This paper critically analyzes the applicability of ZKPs in various scenarios. We categorize ZKPs into distinct types: SNARKs (Succinct Non-Interactive Arguments of Knowledge), Commit-then-Prove ZKPs, MPC-in-the-Head, and Sigma Protocols, each offering different trade-offs and benefits. We introduce a flowchart methodology to assist in determining the most suitable ZKP system, given a set of technical application requirements. Next, we conduct an in-depth investigation of three major use cases: Outsourcing Computation, Digital Self-Sovereign Identity, and ZKPs in networking. Additionally, we provide a high-level overview of other applications of ZKPs, exploring their broader implications and opportunities. This paper aims to demystify the decision-making process involved in choosing the right ZKP system, providing clarity on when and how these cryptographic tools can be effectively utilized in various domains, and when they are better avoided.
Design and Verification of Specialised Security Goals for Protocol Families
Communication Protocols form a fundamental backbone of our modern information networks. These protocols provide a framework to describe how agents - Computers, Smartphones, RFID Tags and more - should structure their communication. As a result, the security of these protocols is implicitly trusted to protect our personal data. In 1997, Lowe presented "A Hierarchy of Authentication Specifications", formalising a set of security requirements that might be expected of communication protocols. The value of these requirements is that they can be formally tested and verified against a protocol specification. This allows a user to have confidence that their communications are protected in ways that are uniformly defined and universally agreed upon.
Since that time, the range of objectives and applications of real-world protocols has grown. Novel requirements - such as checking the physical distance between participants, or evolving trust assumptions of intermediate nodes on the network - mean that new attack vectors are found on a frequent basis. The challenge, then, is to define security goals which will guarantee security, even when the nature of these attacks is not known.
In this thesis, a methodology for the design of security goals is created. It is used to define a collection of specialised security goals for protocols in multiple different families, by considering tailor-made models for these specific scenarios. For complex requirements, theorems are proved that simplify analysis, allowing the verification of security goals to be efficiently modelled in automated prover tools.
Bounded Verification for Finite-Field-Blasting (In a Compiler for Zero Knowledge Proofs)
Zero Knowledge Proofs (ZKPs) are cryptographic protocols by which a prover convinces a verifier of the truth of a statement without revealing any other information. Typically, statements are expressed in a high-level language and then compiled to a low-level representation on which the ZKP operates. Thus, a bug in a ZKP compiler can compromise the statement that the ZK proof is supposed to establish. This paper takes a step towards ZKP compiler correctness by partially verifying a field-blasting compiler pass, a pass that translates Boolean and bit-vector logic into equivalent operations in a finite field. First, we define correctness for field-blasters and ZKP compilers more generally. Next, we describe the specific field-blaster using a set of encoding rules and define verification conditions for individual rules. Finally, we connect the rules and the correctness definition by showing that if our verification conditions hold, the field-blaster is correct. We have implemented our approach in the CirC ZKP compiler and have proved bounded versions of the corresponding verification conditions. We show that our partially verified field-blaster does not hurt the performance of the compiler or its output; we also report on four bugs uncovered during verification.
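To illustrate the kind of translation a field-blasting pass performs, the standard polynomial encodings of Boolean operations over {0,1}-valued field elements can be checked exhaustively. This is a generic sketch of the well-known encodings, not CirC's actual implementation; the prime modulus and helper names are illustrative.

```python
# Boolean operations on bits x, y in {0, 1}, expressed as polynomial
# identities over a prime field (the core idea behind field-blasting).
p = 2**61 - 1  # an illustrative prime modulus

def f_not(x):    return (1 - x) % p          # NOT x  = 1 - x
def f_and(x, y): return (x * y) % p          # x AND y = x * y
def f_xor(x, y): return (x + y - 2*x*y) % p  # x XOR y = x + y - 2xy
def f_or(x, y):  return (x + y - x*y) % p    # x OR y  = x + y - xy

# Verify each encoding agrees with the Boolean operation on all inputs.
for a in (0, 1):
    assert f_not(a) == 1 - a
    for b in (0, 1):
        assert f_and(a, b) == (a & b)
        assert f_xor(a, b) == (a ^ b)
        assert f_or(a, b) == (a | b)

# A bit-vector is encoded as the weighted sum of its bits.
def pack_bits(bits):  # bits[0] is the least significant bit
    return sum(b << i for i, b in enumerate(bits)) % p

print(pack_bits([1, 0, 1]))  # 5
```

These identities hold only when the inputs are constrained to {0,1}, which is why field-blasters also emit "booleanity" constraints such as x*(x-1) = 0; a bug in any such rule is exactly the kind of error the paper's verification conditions are designed to catch.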