
    An empirical evaluation of adversarial robustness under transfer learning

    In this work, we evaluate adversarial robustness in the context of transfer learning from a source network trained on CIFAR-100 to a target network trained on CIFAR-10. Specifically, we study the effects of using robust optimisation in the source and target networks. This allows us to identify transfer learning strategies under which adversarial defences are successfully retained, as well as to reveal potential vulnerabilities. We study the extent to which features learnt with the fast gradient sign method (FGSM) and its iterative alternative (PGD) preserve their defence properties against black-box and white-box attacks under three different transfer learning strategies. We find that using PGD examples during training on the source task leads to more general robust features that are easier to transfer. Furthermore, under successful transfer, it achieves 5.2% higher accuracy against white-box PGD attacks than suitable baselines. Overall, our empirical evaluations give insights into how well adversarial robustness under transfer learning can generalise.
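
    The following is a minimal sketch of the PGD attack referenced above (FGSM is its single-step special case), assuming a PyTorch image classifier; the model handle, epsilon, step size, and step count are illustrative choices, not the paper's experimental settings.

```python
# Hedged sketch: L-infinity PGD adversarial-example generation for a
# PyTorch classifier. Hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return a PGD perturbation of x within an eps-ball (pixel range [0, 1])."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Adversarial training on the source task then amounts to replacing each
# clean batch (x, y) with (pgd_attack(model, x, y), y) before the update.
```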

    DrNLA: Extending Verification to Non-linear Programs through Dual Re-writing

    For many decades, advances in static verification have focused on linear integer arithmetic (LIA) programs. Many real-world programs are, however, written with non-linear integer arithmetic (NLA) expressions, such as programs that model physical events, control systems, or nonlinear activation functions in neural networks. While there are some approaches to reasoning about such NLA programs, many verification tools still fall short when trying to analyze them. To expand the scope of existing tools, we introduce a new method of converting programs with NLA expressions into semantically equivalent LIA programs via a technique we call dual rewriting. Dual rewriting discovers a linear replacement for an NLA Boolean expression (e.g., as found in conditional branching), simultaneously exploring both the positive and negative sides of the condition and using a combination of static validation and dynamic generalization of counterexamples. While perhaps surprising at first, this is often possible because the truth value of a Boolean NLA expression can be characterized as a Boolean combination of linearly described regions/intervals where the expression is true and those where it is false. The upshot is that rewriting NLA expressions to LIA expressions beforehand enables off-the-shelf LIA tools to be applied to the wider class of NLA programs. We built a new tool, DrNLA, and show that it can discover LIA replacements for a variety of NLA programs. We then applied our work to branching-time verification of NLA programs, creating the first set of such benchmarks (92 in total) and showing that DrNLA's rewriting enables tools such as FuncTion and T2 to verify CTL properties of 42 programs that previously could not be verified. We also show a potential use of DrNLA in assisting Frama-C with program slicing, and report that execution speed is not impacted much by rewriting.
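
    As a toy illustration of the kind of replacement dual rewriting aims to discover (the discovery procedure itself is not shown), consider a non-linear branch condition and a hand-written linear characterization of the same region, checked dynamically over a small box of integer inputs:

```python
# Toy illustration only: the linear replacement below is hand-written,
# not produced by DrNLA's validate/generalize loop.
def nla_cond(x, y):
    return x * y > 0                      # non-linear branch condition

def lia_cond(x, y):
    # Linear description of the same region: both positive or both negative.
    return (x > 0 and y > 0) or (x < 0 and y < 0)

# Dynamic check over a small box of integer inputs.
assert all(nla_cond(x, y) == lia_cond(x, y)
           for x in range(-50, 51) for y in range(-50, 51))
```

    Once every NLA condition in a program has such a linear description, the rewritten program can be handed to LIA-only verifiers, which is the effect the abstract describes.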

    ppSAT: Towards Two-Party Private SAT Solving

    We design and implement a privacy-preserving Boolean satisfiability (ppSAT) solver, which allows mutually distrustful parties to evaluate the conjunction of their input formulas while maintaining privacy. We first define a family of security guarantees reconcilable with the (known) exponential complexity of SAT solving, and then construct an oblivious variant of the classic DPLL algorithm that can be integrated with existing secure two-party computation (2PC) techniques. We further observe that most known SAT solving heuristics are unsuitable for 2PC, as they are highly data-dependent in order to minimize the number of exploration steps. Faced with the question of how best to trade off the number of steps against the cost of obliviously executing each one, we design three efficient oblivious heuristics, one deterministic and two randomized. As a result of this effort we are able to evaluate our ppSAT solver on small but practical instances arising from the haplotype inference problem in bioinformatics. We conclude by looking towards future directions for making ppSAT solving more practical, especially the integration of conflict-driven clause learning (CDCL).
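
    For orientation, a compact sketch of the classic (non-oblivious) DPLL procedure that the ppSAT solver builds on is shown below; the oblivious 2PC variant and the three heuristics from the paper are not reproduced.

```python
# Hedged sketch of plain DPLL over DIMACS-style clauses (lists of non-zero ints).
def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})

    def simplify(cls):
        out = []
        for clause in cls:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            rest = [l for l in clause if abs(l) not in assignment]
            if not rest:
                return None                   # conflict: empty clause
            out.append(rest)
        return out

    clauses = simplify(clauses)
    if clauses is None:
        return None                           # UNSAT under this assignment
    # Unit propagation.
    units = [c[0] for c in clauses if len(c) == 1]
    if units:
        assignment[abs(units[0])] = units[0] > 0
        return dpll(clauses, assignment)
    if not clauses:
        return assignment                     # all clauses satisfied: SAT
    # Branch on the first unassigned variable (data-dependent heuristics
    # like VSIDS are exactly what the oblivious solver must avoid).
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# Example: dpll([[1, 2], [-1, 2], [-2]]) returns None (unsatisfiable).
```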

    Proving UNSAT in Zero Knowledge

    Zero-knowledge (ZK) protocols enable one party to prove to others that it knows a fact without revealing any information about the evidence for such knowledge. There exist ZK protocols for all problems in NP, and recent works have developed highly efficient protocols for proving knowledge of satisfying assignments to Boolean formulas, circuits, and other NP formalisms. This work shows an efficient protocol for the converse: proving formula *unsatisfiability* in ZK (when the prover possesses a non-ZK proof). An immediate practical application is efficiently proving the safety of secret programs. The key insight is to prove, in ZK, the validity of *resolution proofs* of unsatisfiability. This is efficiently realized using an algebraic representation that exploits the structure of resolution proofs to represent formula clauses as low-degree polynomials, combined with ZK random-access arguments. Only the proof's dimensions are revealed. We implemented our protocol and used it to prove the unsatisfiability of formulas that encode combinatorial problems and program correctness conditions in standard verification benchmarks, including Linux kernel drivers and Intel cryptography modules. The results demonstrate both that our protocol has practical utility and that its aggressive optimizations, based on non-trivial encodings, significantly improve practical performance.
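
    The statement proven in zero knowledge is that a resolution refutation is valid. A minimal plain (non-ZK) checker for that statement, with an illustrative step format (indices into the list of input and derived clauses plus the pivot variable), might look as follows; the paper's polynomial clause encoding and random-access arguments are not shown.

```python
# Hedged sketch of a resolution-refutation checker; the step format is an
# illustrative assumption, not the paper's encoding.
def check_resolution_refutation(clauses, steps):
    """clauses: list of iterables of int literals (the CNF).
    steps: list of (i, j, pivot), meaning resolve clauses i and j on `pivot`.
    Returns True iff the empty clause is derived by valid steps."""
    derived = [frozenset(c) for c in clauses]
    for i, j, pivot in steps:
        a, b = derived[i], derived[j]
        if pivot not in a or -pivot not in b:
            return False                      # not a valid resolution step
        derived.append((a - {pivot}) | (b - {-pivot}))
    return frozenset() in derived

# Example: {x} and {-x} resolve on x to the empty clause.
assert check_resolution_refutation([[1], [-1]], [(0, 1, 1)])
```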

    Ou: Automating the Parallelization of Zero-Knowledge Protocols

    A zero-knowledge proof (ZKP) is a powerful cryptographic primitive used in many decentralized or privacy-focused applications. However, the high overhead of ZKPs can restrict their practical applicability. We design a programming language, Ou, aimed at easing the programmer's burden when writing efficient ZKPs, and a compiler framework, Lian, that automates the analysis and distribution of statements to a computing cluster. Lian uses programming language semantics, formal methods, and combinatorial optimization to automatically partition an Ou program into efficiently sized chunks for parallel ZK proving and/or verification. We contribute:
    • A front-end language where users can write proof statements as imperative programs in a familiar syntax;
    • A compiler architecture and implementation that automatically analyzes the program and compiles it into an optimized IR that can be lifted to a variety of ZKP constructions; and
    • A cutting algorithm, based on Pseudo-Boolean optimization and Integer Linear Programming, that reorders instructions and then partitions the program into efficiently sized chunks for parallel evaluation and efficient state reconciliation.
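
    As a rough stand-in for the partitioning step (not Lian's Pseudo-Boolean/ILP formulation, and without instruction reordering), the sketch below splits a straight-line instruction list into bounded-size chunks while minimizing the variables that must be reconciled across chunk boundaries:

```python
# Toy dynamic program, not Lian's algorithm: choose cut points so that few
# variables are live across chunk boundaries and no chunk exceeds max_size.
def partition(instrs, max_size):
    """instrs: list of (defined_var, used_vars). Returns internal cut positions."""
    n = len(instrs)

    def crossing(k):
        # Variables defined in instrs[:k] and still used in instrs[k:].
        defined = {d for d, _ in instrs[:k]}
        used = {u for _, us in instrs[k:] for u in us}
        return len(defined & used)

    INF = float("inf")
    best = [(INF, None)] * (n + 1)          # best[k] = (cost, previous cut)
    best[0] = (0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_size), end):
            cost = best[start][0] + (crossing(end) if end < n else 0)
            if cost < best[end][0]:
                best[end] = (cost, start)
    cuts, k = [], n
    while best[k][1] not in (None, 0):      # walk back through chosen cuts
        k = best[k][1]
        cuts.append(k)
    return sorted(cuts)

# Example: four chained instructions, chunks of at most two -> one cut at 2.
prog = [("a", []), ("b", ["a"]), ("c", ["b"]), ("d", ["c"])]
print(partition(prog, 2))                   # [2]
```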

    ZKSMT: A VM for Proving SMT Theorems in Zero Knowledge

    Verification of program safety is often reducible to proving the unsatisfiability of a formula (equivalently, the validity of its negation) in Satisfiability Modulo Theories (SMT): Boolean logic combined with theories that formalize arbitrary first-order fragments. Zero-knowledge (ZK) proofs allow SMT formulas to be validated without revealing the underlying formulas or their proofs to other parties, which is a crucial building block for proving the safety of proprietary programs. Recently, Luo et al. (CCS 2022) studied the simpler problem of proving the unsatisfiability of pure Boolean formulas, but their approach does not support safety proofs generated by SMT solvers. This work presents ZKSMT, a novel framework for proving the validity of SMT formulas in ZK. We design a virtual machine (VM) tailored to efficiently represent the verification process of SMT validity proofs in ZK. Our VM supports the vast majority of the theories commonly used when proving program safety, while being complete and sound. To demonstrate this, we instantiate the commonly used theories of equality and linear integer arithmetic in our VM, with theory-specific optimizations for proving them in ZK. ZKSMT achieves high practicality even when running on realistic SMT formulas generated by Boogie, a common tool for software verification. It achieves a three-order-of-magnitude improvement compared to a baseline that executes the proof verification code in a general ZK system.
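
    To make the checking task concrete, below is a minimal example of one standard certificate shape for linear integer arithmetic unsatisfiability (a Farkas-style nonnegative combination of inequalities); ZKSMT's actual proof format, VM design, and ZK encoding are not reproduced here.

```python
# Hedged sketch: check a Farkas-style certificate that a set of linear
# integer inequalities sum_i a[i]*x_i + b >= 0 is jointly unsatisfiable.
def check_farkas(ineqs, coeffs):
    """ineqs: list of (a, b) with a a coefficient vector and b a constant.
    coeffs: nonnegative integer multipliers. The combination refutes the
    system if all variable coefficients cancel and the constant is negative."""
    assert len(ineqs) == len(coeffs) and all(c >= 0 for c in coeffs)
    nvars = len(ineqs[0][0])
    combined = [sum(c * a[i] for (a, _), c in zip(ineqs, coeffs))
                for i in range(nvars)]
    const = sum(c * b for (_, b), c in zip(ineqs, coeffs))
    return all(v == 0 for v in combined) and const < 0

# x - y >= 0 and y - x - 1 >= 0 are contradictory: adding them gives -1 >= 0.
assert check_farkas([([1, -1], 0), ([-1, 1], -1)], [1, 1])
```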

    Deciding Twig-definability of Node Selecting Tree Automata

    Node selecting tree automata (NSTAs) constitute a general formalism for defining unary queries over trees. Basically, a node is selected by an NSTA when it is visited in a selecting state during an accepting run. We consider twig patterns as an abstraction of XPath. Since the queries definable by NSTAs form a strict superset of the twig-definable queries, we study the complexity of deciding whether the query defined by a given NSTA is twig-definable. In particular, we obtain that this problem is EXPTIME-complete. In addition, we show that it is also EXPTIME-complete to decide whether the query defined by a given NSTA is definable by a node selecting string automaton. The final publication is available at Springer via http://dx.doi.org/10.1007/s00224-015-9623-
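
    The selection semantics can be illustrated with a toy deterministic bottom-up automaton over binary trees (the paper's NSTAs are nondeterministic and the trees unranked); nodes reached in a selecting state are reported only when the overall run is accepting:

```python
# Toy deterministic node-selecting tree automaton; illustrative only.
def run_nsta(tree, delta, accepting, selecting):
    """tree: (label, left, right) with None for a missing child.
    delta: maps (label, left_state, right_state) -> state, where a missing
    child contributes the pseudo-state '-'. Returns selected node paths."""
    selected = []

    def go(node, path):
        if node is None:
            return "-"
        label, left, right = node
        state = delta[(label, go(left, path + "0"), go(right, path + "1"))]
        if state in selecting:
            selected.append(path)
        return state

    root_state = go(tree, "")
    return selected if root_state in accepting else []

# Example: select every 'a'-labelled node, accepting runs whose root is 'a'.
tree = ("a", ("b", None, None), ("a", None, None))
delta = {(l, s0, s1): ("qa" if l == "a" else "qb")
         for l in "ab" for s0 in ("qa", "qb", "-") for s1 in ("qa", "qb", "-")}
print(run_nsta(tree, delta, accepting={"qa"}, selecting={"qa"}))  # ['1', '']
```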

    Definability problems for graph query languages

    Given a graph, a relation on its nodes, and a query language Q of interest, we study the Q-definability problem, which amounts to deciding whether there exists a query in Q defining precisely the given relation over the given graph. Previous research has identified the complexity of FO- and CQ-definability. In this paper, we consider the definability problem for regular path queries (RPQs) and conjunctive regular path queries (CRPQs) over labelled graphs.
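
    For context, the sketch below evaluates a regular path query (the atoms from which CRPQs are built) on an edge-labelled graph by pairing graph nodes with NFA states; the definability question then asks whether some query of this form returns exactly a given relation. The decision procedure studied in the paper is not sketched.

```python
# Hedged sketch: evaluate an RPQ, given as an NFA, over an edge-labelled graph.
from collections import deque

def eval_rpq(edges, nfa, start, final):
    """edges: set of (u, label, v); nfa: set of (q, label, q').
    Returns all node pairs (u, v) connected by a path whose label word
    is accepted by the NFA."""
    answers = set()
    nodes = {u for u, _, _ in edges} | {v for _, _, v in edges}
    for src in nodes:
        seen, queue = {(src, start)}, deque([(src, start)])
        while queue:
            node, state = queue.popleft()
            if state in final:
                answers.add((src, node))
            for (u, a, v) in edges:
                if u != node:
                    continue
                for (q, b, q2) in nfa:
                    if q == state and b == a and (v, q2) not in seen:
                        seen.add((v, q2))
                        queue.append((v, q2))
    return answers

# Query a*b on a tiny graph: x -a-> y -b-> z.
edges = {("x", "a", "y"), ("y", "b", "z")}
nfa = {(0, "a", 0), (0, "b", 1)}            # accepts a*b with final state 1
print(eval_rpq(edges, nfa, start=0, final={1}))  # {('x', 'z'), ('y', 'z')}
```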