On fundamental computational barriers in the mathematics of information
This thesis is about computational theory in the setting of the mathematics of information. The first goal is to demonstrate that many commonly considered problems
in optimisation theory cannot be solved with an algorithm if the input data is only
known up to an arbitrarily small error (modelling the fact that most real numbers are
not expressible to infinite precision with a floating point based computational device).
This includes computing the minimisers to basis pursuit, linear programming, lasso
and image deblurring as well as finding an optimal neural network given training data.
These results are somewhat paradoxical given the success that existing algorithms exhibit when tackling these problems with real-world datasets, and a substantial portion of this thesis is dedicated to explaining this apparent disparity, particularly in the context of compressed sensing. Doing so requires the introduction of a variety of new
concepts, including that of a breakdown epsilon, which may have broader applicability
to computational problems outside of the ones central to this thesis. We conclude with
a discussion on future research directions opened up by this work. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
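The computational barrier described above can be made concrete with a toy example (illustrative only, not taken from the thesis; it assumes NumPy and SciPy). In two dimensions, the basis pursuit minimiser can flip its support under an arbitrarily small perturbation of one matrix entry, so no algorithm that only sees the input to finite precision can always return the correct minimiser:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_2d(a, b):
    """Minimise |x1| + |x2| subject to a*x1 + b*x2 = 1,
    recast as a linear program over [x1, x2, u1, u2] with |xi| <= ui."""
    c = np.array([0.0, 0.0, 1.0, 1.0])           # objective: u1 + u2
    A_ub = np.array([[ 1.0,  0.0, -1.0,  0.0],   #  x1 - u1 <= 0
                     [-1.0,  0.0, -1.0,  0.0],   # -x1 - u1 <= 0
                     [ 0.0,  1.0,  0.0, -1.0],   #  x2 - u2 <= 0
                     [ 0.0, -1.0,  0.0, -1.0]])  # -x2 - u2 <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(4),
                  A_eq=np.array([[a, b, 0.0, 0.0]]), b_eq=np.array([1.0]),
                  bounds=[(None, None)] * 2 + [(0.0, None)] * 2)
    return res.x[:2]

# Perturbing the second coefficient by 1e-3 flips the minimiser's support:
x_plus = basis_pursuit_2d(1.0, 1.001)   # minimiser concentrates on x2
x_minus = basis_pursuit_2d(1.0, 0.999)  # minimiser concentrates on x1
```

The two inputs agree to within 10^-3 (and the perturbation can be made as small as the solver tolerates), yet the minimisers are far apart; this discontinuity in the solution map is the kind of obstruction the thesis formalises.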
The Feasibility and Inevitability of Stealth Attacks
We develop and study new adversarial perturbations that enable an attacker to
gain control over decisions in generic Artificial Intelligence (AI) systems
including deep learning neural networks. In contrast to adversarial data
modification, the attack mechanism we consider here involves alterations to the
AI system itself. Such a stealth attack could be conducted by a mischievous,
corrupt or disgruntled member of a software development team. It could also be
made by those wishing to exploit a "democratization of AI" agenda, where
network architectures and trained parameter sets are shared publicly. Building
on work by [Tyukin et al., International Joint Conference on Neural Networks,
2020], we develop a range of new implementable attack strategies with
accompanying analysis, showing that with high probability a stealth attack can
be made transparent, in the sense that system performance is unchanged on a
fixed validation set which is unknown to the attacker, while evoking any
desired output on a trigger input of interest. The attacker only needs to have
estimates of the size of the validation set and the spread of the AI's relevant
latent space. In the case of deep learning neural networks, we show that a
one-neuron attack is possible - a modification to the weights and bias associated
with a single neuron - revealing a vulnerability arising from
over-parameterization. We illustrate these concepts in a realistic setting.
Guided by the theory and computational results, we also propose strategies to
guard against stealth attacks.
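A minimal sketch of the one-neuron idea (an illustration assuming a toy random ReLU network, not the construction or parameters from the paper; the threshold `theta` and `gain` are invented for the example): a neuron that is silent on the validation set is repurposed to fire only on inputs highly correlated with the trigger, routing its output to the target class.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, k = 20, 50, 3                      # input dim, hidden width, classes

# Toy one-hidden-layer ReLU classifier with random weights (a stand-in
# for a trained, over-parameterised network).
W1 = 0.1 * rng.normal(size=(h, d)); b1 = rng.normal(size=h)
W2 = 0.1 * rng.normal(size=(k, h)); b2 = 0.1 * rng.normal(size=k)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Unit-normalised validation inputs; unknown to the attacker in the paper,
# used here only to check transparency after the fact.
val = rng.normal(size=(200, d))
val /= np.linalg.norm(val, axis=1, keepdims=True)
before = np.array([forward(x) for x in val])

# Find a neuron that happens to be silent on the validation set
# (over-parameterised networks typically contain many such neurons).
acts = np.maximum(W1 @ val.T + b1[:, None], 0.0)
j = int(np.where(acts.max(axis=1) == 0.0)[0][0])

# One-neuron attack: repurpose neuron j to fire only near the trigger.
trigger = rng.normal(size=d); trigger /= np.linalg.norm(trigger)
target, theta, gain = 2, 0.9, 100.0
W1[j] = gain * trigger                   # align the neuron with the trigger
b1[j] = -gain * theta                    # active only when <trigger, x> > theta
W2[:, j] = 0.0
W2[target, j] = 10.0                     # route its output to the target class

after = np.array([forward(x) for x in val])
# Validation outputs are unchanged, yet the trigger evokes the target class.
```

Since validation inputs are nearly orthogonal to any fixed direction in moderately high dimension, their correlation with the trigger stays below `theta` and the repurposed neuron never fires on them; this mirrors the abstract's point that the attacker only needs an estimate of the spread of the relevant latent space.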
The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
In this work, we assess the theoretical limitations of determining guaranteed
stability and accuracy of neural networks in classification tasks. We consider
the classical distribution-agnostic framework and algorithms that minimise empirical
risk, potentially subject to some weight regularisation. We show that
there is a large family of tasks for which computing and verifying ideal stable
and accurate neural networks in the above settings is extremely challenging, if
at all possible, even when such ideal solutions exist within the given class of
neural architectures.
Cooperative cell motility during tandem locomotion of amoeboid cells
Streams of migratory cells are initiated by the formation of tandem pairs of cells connected head to tail, to which other cells subsequently adhere. The mechanisms regulating the transition from single to streaming cell migration remain elusive, although several molecules have been suggested to be involved. In this work, we investigate the mechanics of the locomotion of Dictyostelium tandem pairs by analyzing the spatiotemporal evolution of their traction adhesions (TAs). We find that in migrating wild-type tandem pairs, each cell exerts traction forces on stationary sites (∼80% of the time), and the trailing cell reuses the location of the TAs of the leading cell. Both leading and trailing cells form contractile dipoles and synchronize the formation of new frontal TAs with a ∼54-s time delay. Cells not expressing the lectin discoidin I, or moving on discoidin I-coated substrata, form fewer tandems, but the trailing cell still reuses the locations of the TAs of the leading cell, suggesting that discoidin I is not responsible for a possible chemically driven synchronization process. The migration dynamics of the tandems indicate that the reuse of their TAs results from the mechanical synchronization of the leading and trailing cells' protrusions and retractions (motility cycles), aided by the cell-cell adhesions.