Autogenerative Networks
Artificial intelligence powered by deep neural networks has seen tremendous improvements in the last decade, achieving superhuman performance on a diverse range of tasks. Many worry that it may one day develop the ability to recursively self-improve, leading to an intelligence explosion known as the Singularity. Autogenerative networks, or neural networks generating neural networks, are one plausible pathway towards realizing this possibility. The objective of this thesis is to study various challenges and applications of small-scale autogenerative networks in domains such as artificial life, reinforcement learning, neural network initialization and optimization, gradient-based meta-learning, and logical networks. Chapters 2 and 3 describe novel mechanisms for generating neural network weights and embeddings. Chapters 4 and 5 identify problems and propose solutions to optimization difficulties in differentiable mechanisms of neural network generation known as Hypernetworks. Chapters 6 and 7 study implicit models of network generation, such as backpropagating through gradient descent itself and integrating discrete solvers into continuous functions. Together, the chapters in this thesis contribute novel proposals for non-differentiable neural network generation mechanisms, significant improvements to existing differentiable network generation mechanisms, and an assimilation of different learning paradigms in autogenerative networks.
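The core mechanism the abstract describes can be illustrated with a minimal, hypothetical sketch of a hypernetwork: a small generator maps a learned layer embedding to the weight matrix of a target layer, so the target network's parameters are produced rather than stored directly. All names and dimensions below are illustrative placeholders, not the thesis's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

IN_DIM, OUT_DIM, EMB_DIM = 8, 4, 3  # target layer shape and embedding size

# Hypernetwork parameters: a single linear map from a layer embedding
# to the flattened weights of the target layer.
G = rng.normal(scale=0.1, size=(EMB_DIM, IN_DIM * OUT_DIM))

def generate_weights(z):
    """Map a layer embedding z to an (IN_DIM, OUT_DIM) weight matrix."""
    return (z @ G).reshape(IN_DIM, OUT_DIM)

def target_forward(x, z):
    """Run the target layer with weights generated from embedding z."""
    W = generate_weights(z)
    return np.tanh(x @ W)

z = rng.normal(size=EMB_DIM)       # a (learned) per-layer embedding
x = rng.normal(size=(2, IN_DIM))   # a batch of two inputs
y = target_forward(x, z)
print(y.shape)
```

Because the weights are a function of `z` and `G`, gradients can flow from the target network's loss back into the generator, which is what makes this class of mechanisms differentiable.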
End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes
Meta-Bayesian optimisation (meta-BO) aims to improve the sample efficiency of
Bayesian optimisation by leveraging data from related tasks. While previous
methods successfully meta-learn either a surrogate model or an acquisition
function independently, joint training of both components remains an open
challenge. This paper proposes the first end-to-end differentiable meta-BO
framework that generalises neural processes to learn acquisition functions via
transformer architectures. We enable this end-to-end framework with
reinforcement learning (RL) to tackle the lack of labelled acquisition data.
Early on, we notice that training transformer-based neural processes from
scratch with RL is challenging due to insufficient supervision, especially when
rewards are sparse. We formalise this claim with a combinatorial analysis
showing that the widely used notion of regret as a reward signal exhibits a
logarithmic sparsity pattern in trajectory lengths. To tackle this problem, we
augment the RL objective with an auxiliary task that guides part of the
architecture to learn a valid probabilistic model as an inductive bias. We
demonstrate that our method achieves state-of-the-art regret results against
various baselines in experiments on standard hyperparameter optimisation tasks
and also outperforms others in the real-world problems of mixed-integer
programming tuning, antibody design, and logic synthesis for electronic design
automation.
Meta-Learning in Neural Networks: A Survey
The field of meta-learning, or learning-to-learn, has seen a dramatic rise in
interest in recent years. Contrary to conventional approaches to AI where tasks
are solved from scratch using a fixed learning algorithm, meta-learning aims to
improve the learning algorithm itself, given the experience of multiple
learning episodes. This paradigm provides an opportunity to tackle many
conventional challenges of deep learning, including data and computation
bottlenecks, as well as generalization. This survey describes the contemporary
meta-learning landscape. We first discuss definitions of meta-learning and
position it with respect to related fields, such as transfer learning and
hyperparameter optimization. We then propose a new taxonomy that provides a
more comprehensive breakdown of the space of meta-learning methods today. We
survey promising applications and successes of meta-learning such as few-shot
learning and reinforcement learning. Finally, we discuss outstanding challenges
and promising areas for future research.
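One widely used family covered by such surveys is gradient-based meta-learning in the MAML style: an inner loop adapts parameters to a sampled task, and an outer loop updates the initialisation so that one step of adaptation works well across tasks. The following toy sketch (synthetic scalar tasks and illustrative hyperparameters, not any particular paper's setup) shows the two loops.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0              # meta-learned initialisation
alpha, beta = 0.4, 0.5   # inner (adaptation) and outer (meta) step sizes

def loss_grad(t, a):
    """Gradient of the per-task loss (t - a)^2 with respect to t."""
    return 2.0 * (t - a)

for _ in range(1000):
    a = rng.normal(loc=3.0, scale=0.5)              # sample a task target
    adapted = theta - alpha * loss_grad(theta, a)   # inner adaptation step
    # Outer update differentiates the post-adaptation loss with respect
    # to theta; for this quadratic loss the chain-rule factor
    # d(adapted)/d(theta) equals 1 - 2 * alpha.
    theta -= beta * loss_grad(adapted, a) * (1.0 - 2.0 * alpha)

print(theta)  # settles close to the mean task target, 3.0
```

The key point is that the outer gradient is taken through the inner gradient step, which is why this family is often described as "backpropagating through gradient descent".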
Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning
Prompt-based learning has been an effective paradigm for large pretrained
language models (LLMs), enabling few-shot or even zero-shot learning. Black-box
prompt search has received growing interest recently for its distinctive
properties of gradient-free optimization, proven particularly useful and
powerful for model-as-a-service usage. However, the discrete nature and the
complexity of combinatorial optimization hinder the efficiency of modern
black-box approaches. Despite extensive research on search algorithms, the
crucial aspect of search space design and optimization has been largely
overlooked. In this paper, we first conduct a sensitivity analysis by prompting
LLMs, revealing that only a small number of tokens exert a disproportionate
amount of influence on LLM predictions. Leveraging this insight, we propose the
Clustering and Pruning for Efficient Black-box Prompt Search (ClaPS), a simple
black-box search method that first clusters and prunes the search space to
focus exclusively on influential prompt tokens. By employing even simple search
methods within the pruned search space, ClaPS achieves state-of-the-art
performance across various tasks and LLMs, surpassing the performance of
complex approaches while significantly reducing search costs. Our findings
underscore the critical role of search space design and optimization in
enhancing both the usefulness and the efficiency of black-box prompt-based
learning.
Comment: Findings of EMNLP 2023. 10 pages, 5 figures, 4 tables (14 pages, 5 figures, 8 tables including references and appendices).
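The cluster-then-prune recipe can be sketched as follows; the token embeddings, sensitivity scores, and objective here are synthetic stand-ins rather than the paper's LLM pipeline, so treat this as an assumption-laden illustration of the search-space reduction only.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 60                                   # toy vocabulary size
emb = rng.normal(size=(V, 2))            # stand-in token embeddings
influence = rng.random(V)                # stand-in per-token sensitivity scores

def kmeans(x, k, iters=20):
    """Plain k-means used for the clustering step."""
    centers = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

labels = kmeans(emb, k=4)
# Prune: keep tokens only from clusters whose mean influence
# exceeds the vocabulary-wide average.
means = [influence[labels == j].mean() if (labels == j).any() else -1.0
         for j in range(4)]
keep = np.flatnonzero(np.isin(labels, [j for j in range(4)
                                       if means[j] > influence.mean()]))

# Search: even simple exhaustive selection is cheap in the pruned space.
best = max(keep, key=lambda i: influence[i])
print(len(keep) < V, best in keep)
```

The point is that once low-influence regions of the vocabulary are pruned away, even naive search strategies become competitive, which matches the abstract's claim about search-space design.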
Integration of multi-scale protein interactions for biomedical data analysis
With the advancement of modern technologies, we observe an increasing accumulation of biomedical data about diseases. There is a need for computational methods to sift through and extract knowledge from the diverse data available, in order to improve our mechanistic understanding of diseases and improve patient care. Biomedical data come in various forms, as exemplified by the various omics data. Existing studies have shown that each form of omics data gives only partial information on a cell's state, and have motivated jointly mining multi-omics, multi-modal data to extract integrated system knowledge. The interactome is of particular importance as it enables the modelling of dependencies arising from molecular interactions. This thesis takes a special interest in the multi-scale protein interactome and its integration with computational models to extract relevant information from biomedical data. We define multi-scale interactions at different omics scales that involve proteins: pairwise protein-protein interactions, multi-protein complexes, and biological pathways. Using hypergraph representations, we motivate considering higher-order protein interactions, highlighting the complementary biological information contained in the multi-scale interactome. Based on those results, we further investigate how these multi-scale protein interactions can be used either as prior knowledge or as auxiliary data to develop machine learning algorithms. First, we design a neural network that uses the multi-scale organization of proteins in a cell into biological pathways as prior knowledge, and train it to predict a patient's diagnosis from transcriptomics data. From the trained models, we develop a strategy to extract biomedical knowledge pertaining to the diseases investigated. Second, we propose a general framework based on non-negative matrix factorization to integrate the multi-scale protein interactome with multi-omics data. We show that our approach outperforms existing methods, provides biomedical insights, and generates relevant hypotheses for specific cancer types.
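The factorisation machinery underlying such a framework can be sketched with plain non-negative matrix factorization via multiplicative updates (the classical Lee-Seung rules). This is illustrative only: the thesis's framework additionally couples multiple omics matrices with the multi-scale interactome, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 20))           # stand-in non-negative omics matrix
k = 5                              # number of latent factors
W = rng.random((30, k)) + 0.1      # sample factors
H = rng.random((k, 20)) + 0.1      # feature factors

err0 = np.linalg.norm(X - W @ H)   # initial reconstruction error

# Multiplicative updates keep W and H non-negative by construction
# and monotonically decrease the Frobenius reconstruction error.
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(X - W @ H)
print(err < err0, (W >= 0).all() and (H >= 0).all())
```

In an integrative setting, additional regularisation terms (for example, graph penalties built from protein interactions) would be added to the objective, changing the update rules but not the overall alternating structure.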