Robustness Verification for Classifier Ensembles
We give a formal verification procedure that decides whether a classifier
ensemble is robust against arbitrary randomized attacks. Such attacks consist
of a set of deterministic attacks and a distribution over this set. The
robustness-checking problem consists of assessing, given a set of classifiers
and a labelled data set, whether there exists a randomized attack that induces
a certain expected loss against all classifiers. We show the NP-hardness of the
problem and provide an upper bound on the number of attacks that is sufficient
to form an optimal randomized attack. These results provide an effective way to
reason about the robustness of a classifier ensemble. We provide SMT and MILP
encodings to compute optimal randomized attacks or prove that there is no
attack inducing a certain expected loss. In the latter case, the classifier
ensemble is provably robust. Our prototype implementation verifies multiple
neural-network ensembles trained for image-classification tasks. The
experimental results using the MILP encoding are promising in terms of both scalability and the general applicability of our verification procedure.
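The core of the robustness-checking problem has a clean max-min structure that is worth making explicit. The sketch below is not the paper's SMT or MILP encoding; it assumes the set of candidate deterministic attacks is already fixed and finite, with a precomputed loss matrix (the function name, the use of scipy, and the loss-matrix interface are our illustrative assumptions). Under those assumptions, finding an optimal randomized attack, i.e. a distribution over the attacks that maximizes the worst-case expected loss across all classifiers, reduces to a linear program:

```python
# Illustrative sketch only (not the paper's encoding): given a fixed,
# finite set of deterministic attacks and a precomputed loss matrix
# loss[a, c] = expected loss that attack a induces on classifier c,
# an optimal randomized attack is a distribution p over the attacks
# maximizing the worst-case (minimum over classifiers) expected loss.
import numpy as np
from scipy.optimize import linprog

def optimal_randomized_attack(loss: np.ndarray):
    """loss: (n_attacks, n_classifiers) matrix; returns (p, t)."""
    n_attacks, n_classifiers = loss.shape
    # Decision variables: x = [p_1, ..., p_n, t]; maximize t <=> minimize -t.
    c = np.zeros(n_attacks + 1)
    c[-1] = -1.0
    # For each classifier j: t <= sum_a p_a * loss[a, j]
    #   <=>  -loss[:, j] . p + t <= 0
    A_ub = np.hstack([-loss.T, np.ones((n_classifiers, 1))])
    b_ub = np.zeros(n_classifiers)
    # p must be a probability distribution.
    A_eq = np.hstack([np.ones((1, n_attacks)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_attacks + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success
    return res.x[:-1], res.x[-1]  # attack distribution, guaranteed loss
```

If the worst-case loss t stays below the given threshold, no randomized attack over these candidates induces the target expected loss, which matches the abstract's notion of provable robustness; the paper's MILP and SMT encodings additionally search over the attacks themselves rather than taking them as given.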
Survey: Leakage and Privacy at Inference Time
Leakage of data from publicly available Machine Learning (ML) models is an
area of growing significance as commercial and government applications of ML
can draw on multiple sources of data, potentially including users' and clients'
sensitive data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage, which is natural to ML models; potential malevolent leakage, which is caused by privacy attacks; and currently available defence mechanisms. We focus on inference-time leakage, as it is
the most likely scenario for publicly available models. We first discuss what
leakage is in the context of different data, tasks, and model architectures. We
then propose a taxonomy spanning involuntary and malevolent leakage and the available defences, followed by the currently available assessment metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
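To make the notion of malevolent inference-time leakage concrete, the sketch below shows the simplest form of a membership inference attack, a canonical privacy attack in this literature: it exploits the tendency of models to be more confident on their training data. The black-box predict_proba interface, helper name, and threshold value are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch of a confidence-thresholding membership inference attack,
# a canonical example of malevolent inference-time leakage.
# Assumption: black-box access to the model's predicted class probabilities.
import numpy as np

def infer_membership(predict_proba, x, threshold=0.9):
    """Guess whether sample x was part of the training set.

    predict_proba: callable mapping a batch of inputs to class probabilities.
    threshold: illustrative cut-off; in practice it is calibrated, e.g. on
    shadow models trained on data from the same distribution.
    """
    confidence = float(np.max(predict_proba(np.asarray([x])), axis=1)[0])
    return confidence >= threshold  # True => predicted "member"
```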
Computer Aided Verification
This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented, together with 9 tool papers and 2 case studies, were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.