
    Towards Deep Learning with Competing Generalisation Objectives

    The unreasonable effectiveness of Deep Learning continues to deliver unprecedented Artificial Intelligence capabilities to billions of people. Growing datasets and technological advances keep extending the reach of expressive model architectures trained through efficient optimisations. Thus, deep learning approaches continue to provide increasingly proficient subroutines for computer vision and natural interaction through speech and text, among other domains. Owing to their scalable learning and inference priors, higher performance is often gained cost-effectively through largely automatic training. As a result, new and improved capabilities empower more people while the costs of access drop. The arising opportunities and challenges have profoundly influenced research: quality attributes of scalable software, including reusability, efficiency, robustness and safety, have become central desiderata of deep learning paradigms. Ongoing research into continual, meta- and robust learning aims to maximise such scalability metrics alongside multiple generalisation criteria, despite possible conflicts between them. A significant challenge is to satisfy competing criteria automatically and cost-effectively. In this thesis, we introduce a unifying perspective on learning with competing generalisation objectives and make three additional contributions. When autonomous learning through multi-criteria optimisation is impractical, it is reasonable to ask whether knowledge of appropriate trade-offs could make it simultaneously effective and efficient. Informed by explicit trade-offs of interest to particular applications, we developed and evaluated bespoke model architecture priors. We introduced a novel architecture for sim-to-real transfer of robotic control policies by learning progressively to generalise anew; the competing desiderata of continual learning were balanced through disjoint capacity and hierarchical reuse of previously learnt representations.
    We then propose a new state-of-the-art meta-learning approach, showing that meta-trained hypernetworks efficiently store and flexibly reuse knowledge for new generalisation criteria through few-shot gradient-based optimisation. Finally, we characterise empirical trade-offs between the many desiderata of adversarial robustness and demonstrate a novel defensive capability of implicit neural networks to hinder many attacks simultaneously.
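The hypernetwork mechanism sketched in the abstract — a meta-trained network generating the weights of a target model, adapted to a new task by a few gradient steps on a compact task embedding — can be illustrated with a toy example. This is a minimal sketch, not the thesis's actual architecture: the linear hypernetwork `H`, the embedding `z`, and all dimensions are invented for illustration, and `H` simply stands in for what meta-training would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a fixed "meta-trained" hypernetwork H maps a small task
# embedding z to the weights w of a linear target model y = X @ w.
k, d, n = 3, 5, 20                       # embedding dim, weight dim, few-shot examples
H, _ = np.linalg.qr(rng.normal(size=(d, k)))   # stand-in hypernetwork (orthonormal columns)

# A new few-shot task whose solution lies in the hypernetwork's span.
w_true = H @ np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true

def adapt(z, steps=300, lr=0.1):
    """Few-shot adaptation: gradient descent on the embedding z only,
    while the hypernetwork H stays frozen."""
    for _ in range(steps):
        w = H @ z                              # generate target-model weights
        grad_w = 2.0 / n * X.T @ (X @ w - y)   # dL/dw for the MSE loss
        z = z - lr * (H.T @ grad_w)            # chain rule: dL/dz = H^T dL/dw
    return z

z = adapt(np.zeros(k))
mse = float(np.mean((X @ (H @ z) - y) ** 2))
```

The point of the sketch is the division of labour: the (frozen) hypernetwork stores reusable knowledge, while only the low-dimensional embedding is optimised per task, so adaptation is cheap and data-efficient.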

    A Study of Convolutional Neural Networks Robust to Image Variations (画像変化に頑健な畳込みニューラルネットワークの研究)

    Tohoku University, Takayuki Okatani (岡谷貴之)

    Fortifying robustness: unveiling the intricacies of training and inference vulnerabilities in centralized and federated neural networks

    Neural network (NN) classifiers have gained significant traction in diverse domains such as natural language processing, computer vision, and cybersecurity, owing to their remarkable ability to approximate complex latent distributions from data. Nevertheless, the conventional assumption of an attack-free operating environment has been challenged by the emergence of adversarial examples. These perturbed samples, which are typically imperceptible to human observers, can lead to misclassifications by NN classifiers. Moreover, recent studies have uncovered the ability of poisoned training data to produce Trojan-backdoored classifiers that exhibit misclassification behaviour triggered by predefined patterns. In recent years, significant research efforts have been dedicated to uncovering the vulnerabilities of NN classifiers and developing defenses or mitigations against them. However, the existing approaches still fall short of providing mature solutions to this ever-evolving problem. The widely adopted defense mechanisms against adversarial examples are computationally expensive and impractical for certain real-world applications. Likewise, practical black-box defenses against Trojan backdoors have failed to achieve state-of-the-art performance. More concerning is the limited exploration of these vulnerabilities in the context of cooperative attacks or federated learning, leaving NN classifiers exposed to unknown risks. This dissertation aims to address these critical gaps and refine our understanding of these vulnerabilities. The research conducted within this dissertation encompasses both the attack and defense perspectives, aiming to shed light on future research directions for vulnerabilities in NN classifiers.
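The adversarial examples described above — small input perturbations, imperceptible to humans, that flip a classifier's prediction — can be demonstrated on a toy model with the classic fast gradient sign method (FGSM). This is an illustrative sketch only, not the dissertation's attack or defence: the logistic-regression model, the synthetic data, and the perturbation budget `eps` are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class data: well-separated Gaussian clusters.
d, n = 10, 200
mu = 0.5 * np.ones(d)
X = np.vstack([rng.normal(-mu, 0.5, size=(n // 2, d)),    # class 0
               rng.normal(+mu, 0.5, size=(n // 2, d))])   # class 1
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Train a tiny logistic-regression "victim" with gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * np.mean(p - y)

def accuracy(Xs):
    return float(np.mean((sigmoid(Xs @ w + b) > 0.5) == (y == 1)))

# FGSM: move each input a small step in the sign of the loss gradient
# with respect to that input, which maximally increases the loss per
# unit of L-infinity perturbation.
eps = 0.6
grad_x = (sigmoid(X @ w + b) - y)[:, None] * w   # dLoss/dx per example
X_adv = X + eps * np.sign(grad_x)

acc_clean, acc_adv = accuracy(X), accuracy(X_adv)
```

A perturbation of bounded per-pixel magnitude `eps` suffices to collapse the accuracy of an otherwise near-perfect classifier, which is exactly the vulnerability the abstract refers to.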