Abstract Learning Frameworks for Synthesis
We develop abstract learning frameworks (ALFs) for synthesis that embody the
principles of CEGIS (counterexample-guided inductive synthesis) strategies,
which have become widely applicable in recent years. Our framework defines a
general model of iterative learning, based on a hypothesis space that captures
the synthesized objects, a sample space on which induction is performed, and a
concept space that abstractly defines the semantics of the learning process. We
show that a variety of synthesis algorithms in the current literature can be
embedded in this general framework. While studying these embeddings, we also
generalize some of the synthesis problems underlying these instances, resulting
in new ways of looking at synthesis problems through the lens of learning. We
also investigate convergence for the general framework, and exhibit three
recipes for convergence in finite time. The first two recipes generalize
convergence techniques used by existing synthesis engines. The third is a more
involved technique of which we know no existing instantiation, and we
instantiate it to concrete synthesis problems.
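The learner/verifier loop that such frameworks abstract over can be sketched in a few lines. The following toy instance is hypothetical (the specification, domain, and function names are not from the paper): it synthesizes an integer constant from counterexamples, with the collected counterexamples playing the role of the sample space and the candidate constant that of the hypothesis.

```python
# Illustrative CEGIS loop (hypothetical toy problem): synthesize an integer
# constant c such that x + c >= 2*x holds for all x in 0..10, i.e. c >= x
# for every x in the domain.

DOMAIN = range(11)

def verify(c):
    """Return a counterexample x violating the spec, or None if c is correct."""
    for x in DOMAIN:
        if not (x + c >= 2 * x):
            return x
    return None

def cegis(max_iters=100):
    samples = []       # counterexamples seen so far (the sample space)
    candidate = 0      # initial hypothesis
    for _ in range(max_iters):
        cex = verify(candidate)
        if cex is None:
            return candidate   # hypothesis holds on the whole domain
        samples.append(cex)
        # Learner: propose the smallest c consistent with all samples.
        candidate = max(samples)
    return None

print(cegis())   # converges to 10, the smallest c with c >= x for all x in 0..10
```

Here convergence in finite time is immediate because the domain is finite; the paper's recipes address the general case.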
Efficient Image Evidence Analysis of CNN Classification Results
Convolutional neural networks (CNNs) define the current state-of-the-art for
image recognition. With their emerging popularity, especially for critical
applications like medical image analysis or self-driving cars, confirmability
is becoming an issue. The black-box nature of trained predictors makes it
difficult to trace failure cases or to understand the internal reasoning
processes leading to results. In this paper we introduce a novel, efficient
method to visualise the evidence that leads to decisions in CNNs. In contrast to
network fixation or saliency map methods, our method is able to illustrate the
evidence for or against a classifier's decision in input pixel space
approximately 10 times faster than previous methods. We also show that our
approach is less prone to noise and can focus on the most relevant input
regions, thus making it more accurate and interpretable. Moreover, by making
simplifications we link our method with other visualisation methods, providing
a general explanation for gradient-based visualisation techniques. We believe
that our work makes network introspection more feasible for debugging and
understanding deep convolutional networks. This will increase trust between
humans and deep learning models.
Comment: 14 pages, 19 figures
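The gradient-based visualisation family that the paper relates its method to scores each input dimension by the sensitivity of the class score to that dimension. A minimal, hypothetical sketch (not the paper's method) with a linear toy model, where the gradient can be checked against the known answer:

```python
# Minimal sketch of gradient-based input attribution (toy model, illustrative
# only): score each input feature by d(score)/d(input) via finite differences.

def score(x, w):
    """Toy class score: a linear model w . x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def saliency(x, w, eps=1e-6):
    """Finite-difference gradient of the score w.r.t. each input feature."""
    base = score(x, w)
    grads = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grads.append((score(bumped, w) - base) / eps)
    return grads

w = [0.5, -2.0, 1.0]      # for a linear model the gradient is exactly w
x = [1.0, 2.0, 3.0]
print(saliency(x, w))     # approximately [0.5, -2.0, 1.0]
```

For a real CNN the gradient is computed by backpropagation rather than finite differences, but the attribution map is the same object: sensitivity of the decision in input pixel space.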
A Formalization of Robustness for Deep Neural Networks
Deep neural networks have been shown to lack robustness to small input
perturbations. The process of generating the perturbations that expose the lack
of robustness of neural networks is known as adversarial input generation. This
process depends on the goals and capabilities of the adversary. In this paper,
we propose a unifying formalization of the adversarial input generation process
from a formal methods perspective. We provide a definition of robustness that
is general enough to capture different formulations. The expressiveness of our
formalization is shown by modeling and comparing a variety of adversarial
attack techniques.
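A common concrete instance of the robustness properties such a formalization must capture is local l-infinity robustness: for a classifier f, a point x, and a radius eps, require that f(x') = f(x) for every x' with ||x' - x|| <= eps. The following sketch checks this by brute force for a hypothetical 1-D toy classifier (illustrative only, not the paper's definition):

```python
# Local robustness check (illustrative): sample the eps-ball around x and
# verify the predicted label never changes. A point where it does change is
# exactly an adversarial input in the sense of adversarial input generation.

def classify(x):
    """Toy threshold classifier on the reals."""
    return 1 if x >= 0.0 else 0

def is_locally_robust(x, eps, steps=1000):
    label = classify(x)
    for i in range(steps + 1):
        xp = x - eps + (2 * eps) * i / steps
        if classify(xp) != label:
            return False   # xp witnesses the lack of robustness at x
    return True

print(is_locally_robust(1.0, 0.5))   # True: the ball [0.5, 1.5] stays class 1
print(is_locally_robust(0.2, 0.5))   # False: the ball crosses the boundary at 0
```

Real attack techniques replace the brute-force search with optimization under the adversary's goals and capabilities, which is the dimension the paper's formalization varies.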
dtControl: Decision Tree Learning Algorithms for Controller Representation
Decision tree learning is a popular classification technique most commonly
used in machine learning applications. Recent work has shown that decision
trees can be used to represent provably-correct controllers concisely. Compared
to representations using lookup tables or binary decision diagrams, decision
trees are smaller and more explainable. We present dtControl, an easily
extensible tool for representing memoryless controllers as decision trees. We
give a comprehensive evaluation of various decision tree learning algorithms
applied to 10 case studies arising out of correct-by-construction controller
synthesis. These algorithms include two new techniques, one for using arbitrary
linear binary classifiers in the decision tree learning, and one novel approach
for determinizing controllers during the decision tree construction. In
particular, the latter turns out to be extremely efficient, yielding decision
trees with a single-digit number of decision nodes on 5 of the case studies.
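The abstract does not spell out the determinization technique, but the basic reason determinization can shrink a tree is easy to illustrate: a permissive controller maps each state to a set of safe actions, and choosing the actions so that large regions of the state space share one label means fewer splits are needed. A naive sketch (hypothetical, not dtControl's actual algorithm) that greedily prefers globally frequent actions:

```python
from collections import Counter

# Naive determinization sketch: from a permissive memoryless controller
# (state -> set of permitted actions), pick for each state the globally most
# frequent permitted action, so many states end up with the same label.

permissive = {                       # toy controller, all names hypothetical
    (0, 0): {"brake", "coast"},
    (0, 1): {"coast"},
    (1, 0): {"brake", "coast", "accel"},
    (1, 1): {"coast", "accel"},
}

freq = Counter(a for actions in permissive.values() for a in actions)

deterministic = {
    state: max(actions, key=lambda a: freq[a])   # most common permitted action
    for state, actions in permissive.items()
}

print(deterministic)   # every state maps to "coast": a single leaf suffices
```

In this toy example every state admits "coast", so the resulting decision tree degenerates to a single leaf, whereas an arbitrary choice of actions could force splits between states.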
Efficient Neural Network Robustness Certification with General Activation Functions
Finding minimum distortion of adversarial examples and thus certifying
robustness in neural network classifiers for given data points is known to be a
challenging problem. Nevertheless, recently it has been shown to be possible to
give a non-trivial certified lower bound of minimum adversarial distortion, and
some recent progress has been made towards this direction by exploiting the
piecewise-linear nature of ReLU activations. However, generic robustness
certification for general activation functions remains largely unexplored. To
address this issue, in this paper we introduce CROWN, a general
framework to certify robustness of neural networks with general activation
functions for given input data points. The novelty in our algorithm consists of
bounding a given activation function with linear and quadratic functions, hence
allowing it to tackle general activation functions including but not limited to
four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we
facilitate the search for a tighter certified lower bound by adaptively
selecting appropriate surrogates for each neuron activation. Experimental
results show that CROWN on ReLU networks can notably improve the certified
lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while
having comparable computational efficiency. Furthermore, CROWN also
demonstrates its effectiveness and flexibility on networks with general
activation functions, including tanh, sigmoid and arctan.
Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed
equally.
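The core bounding idea can be illustrated for a single neuron: on an interval where the activation is concave (e.g. tanh on [0, u]), the chord through the endpoints is a sound linear lower bound and any tangent line is a sound linear upper bound. A sketch for tanh, with the soundness of the bounds checked by sampling (illustrative only; CROWN's actual per-neuron bound selection is more refined):

```python
import math

# Linear bounds on tanh over [l, u] with l >= 0, where tanh is concave.

l, u = 0.0, 2.0

# Lower bound: chord through (l, tanh(l)) and (u, tanh(u)).
slope_lo = (math.tanh(u) - math.tanh(l)) / (u - l)
def lower(x):
    return math.tanh(l) + slope_lo * (x - l)

# Upper bound: tangent at the midpoint m; tanh'(m) = 1 - tanh(m)**2.
m = (l + u) / 2
slope_up = 1 - math.tanh(m) ** 2
def upper(x):
    return math.tanh(m) + slope_up * (x - m)

# Sanity check: lower(x) <= tanh(x) <= upper(x) across the interval.
for i in range(101):
    x = l + (u - l) * i / 100
    assert lower(x) <= math.tanh(x) + 1e-9 <= upper(x) + 2e-9

print("linear bounds verified on [%.1f, %.1f]" % (l, u))
```

Propagating such per-neuron linear bounds backward through the layers is what turns a nonlinear network into a pair of linear functions of the input, from which a certified lower bound on the distortion follows.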
Non-Imaging Medical Data Synthesis for Trustworthy AI: A Comprehensive Survey
Data quality is the key factor for the development of trustworthy AI in
healthcare. A large volume of curated datasets with controlled confounding
factors can help improve the accuracy, robustness and privacy of downstream AI
algorithms. However, access to good-quality datasets is limited by the
technical difficulty of data acquisition, and large-scale sharing of healthcare
data is hindered by strict ethical restrictions. Data synthesis algorithms,
which generate data with a similar distribution as real clinical data, can
serve as a potential solution to address the scarcity of good quality data
during the development of trustworthy AI. However, state-of-the-art data
synthesis algorithms, especially deep learning algorithms, focus more on
imaging data while neglecting the synthesis of non-imaging healthcare data,
including clinical measurements, medical signals and waveforms, and electronic
healthcare records (EHRs). Thus, in this paper, we review synthesis
algorithms, particularly for non-imaging medical data, with the aim of
supporting the development of trustworthy AI in this domain. This
tutorial-style review paper will
provide comprehensive descriptions of non-imaging medical data synthesis on
aspects including algorithms, evaluations, limitations and future research
directions.
Comment: 35 pages. Submitted to ACM Computing Surveys.
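As a point of contrast for the surveyed algorithms, the simplest conceivable tabular synthesizer resamples each column independently from its empirical distribution. This naive baseline (hypothetical, not an algorithm from the survey) preserves per-column marginals but destroys cross-column correlations, which is precisely the gap that learned synthesizers for clinical measurements and EHRs aim to close:

```python
import random

# Naive non-imaging (tabular) data synthesis baseline, for illustration only:
# sample each column independently from its empirical distribution.

real = [  # toy EHR-like records (hypothetical): (age, systolic_bp, diagnosis)
    (34, 118, "healthy"), (61, 145, "hypertension"),
    (47, 130, "healthy"), (72, 160, "hypertension"),
]

def synthesize(records, n, seed=0):
    rng = random.Random(seed)
    columns = list(zip(*records))   # transpose to per-column value lists
    # Each synthetic record draws every field independently, so the joint
    # distribution (e.g. age vs. diagnosis) is not preserved.
    return [tuple(rng.choice(col) for col in columns) for _ in range(n)]

fake = synthesize(real, 3)
print(fake)   # three synthetic records drawn column-wise from the real data
```

Evaluating how far a synthesizer improves on this baseline, in fidelity, utility, and privacy, is exactly the kind of evaluation question such a survey covers.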