HOFA: Twitter Bot Detection with Homophily-Oriented Augmentation and Frequency Adaptive Attention
Twitter bot detection has become an increasingly important and challenging
task to combat online misinformation, facilitate social content moderation, and
safeguard the integrity of social platforms. Though existing graph-based
Twitter bot detection methods have achieved state-of-the-art performance, they
all rest on the homophily assumption, i.e., that users with the same label are
more likely to be connected, which makes it easy for Twitter bots to disguise
themselves by following a large number of genuine users. To address this issue,
we propose HOFA, a novel graph-based Twitter bot detection framework that
combats the heterophilous disguise challenge with a homophily-oriented graph
augmentation module (Homo-Aug) and a frequency adaptive attention module
(FaAt). Specifically, Homo-Aug extracts user representations with an MLP,
computes a k-NN graph over them, and injects the k-NN edges to improve the
homophily of the Twitter graph. For FaAt, we propose an attention mechanism
that adaptively serves as a low-pass filter along homophilic edges and a
high-pass filter along heterophilic edges, preventing user features from being
over-smoothed by their
neighborhood. We also introduce a weight guidance loss to guide the frequency
adaptive attention module. Our experiments demonstrate that HOFA achieves
state-of-the-art performance on three widely acknowledged Twitter bot detection
benchmarks, significantly outperforming vanilla graph-based bot detection
techniques and strong heterophilic baselines. Furthermore, extensive studies
confirm the effectiveness of our Homo-Aug and FaAt modules, and HOFA's ability
to demystify the heterophilous disguise challenge.
Comment: 11 pages, 7 figures
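A minimal sketch of the Homo-Aug step described above, assuming a PyTorch-style setup: an MLP produces user embeddings, a k-NN graph is computed over them, and its edges are injected into the original graph. All names (homophily_augment, user_feats, edge_index) are illustrative, not HOFA's actual API.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def homophily_augment(user_feats, edge_index, mlp, k=5):
    """Return edge_index augmented with k-NN edges in embedding space."""
    z = F.normalize(mlp(user_feats), dim=-1)   # user representations
    sim = z @ z.t()                            # cosine similarity
    sim.fill_diagonal_(float("-inf"))          # exclude self-loops
    knn = sim.topk(k, dim=-1).indices          # k most similar users
    src = torch.arange(z.size(0)).repeat_interleave(k)
    knn_edges = torch.stack([src, knn.reshape(-1)])
    # Inject the k-NN edges; duplicates can be removed downstream.
    return torch.cat([edge_index, knn_edges], dim=1)
```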
On information captured by neural networks: connections with memorization and generalization
Despite the popularity and success of deep learning, there is limited
understanding of when, how, and why neural networks generalize to unseen
examples. Since learning can be seen as extracting information from data, we
formally study information captured by neural networks during training.
Specifically, we start by viewing learning in the presence of noisy labels from
an information-theoretic perspective and derive a learning algorithm that
limits label noise information in weights. We then define a notion of unique
information that an individual sample provides to the training of a deep
network, shedding some light on the behavior of neural networks on examples
that are atypical, ambiguous, or belong to underrepresented subpopulations. We
relate example informativeness to generalization by deriving nonvacuous
generalization gap bounds. Finally, by studying knowledge distillation, we
highlight the important role of data and label complexity in generalization.
Overall, our findings contribute to a deeper understanding of the mechanisms
underlying neural network generalization.
Comment: PhD thesis
Machine learning in solar physics
The application of machine learning in solar physics has the potential to
greatly enhance our understanding of the complex processes that take place in
the atmosphere of the Sun. By using techniques such as deep learning, we are
now in the position to analyze large amounts of data from solar observations
and identify patterns and trends that may not have been apparent using
traditional methods. This can help us improve our understanding of explosive
events like solar flares, which can have a strong effect on Earth's
environment; predicting such hazardous events is crucial for our technological
society. Machine learning can also improve our understanding of the inner
workings of the Sun itself by allowing us to go deeper into the data and to
propose more complex models to explain them. Additionally, the use of
machine learning can help to automate the analysis of solar data, reducing the
need for manual labor and increasing the efficiency of research in this field.
Comment: 100 pages, 13 figures, 286 references, accepted for publication as a
Living Review in Solar Physics (LRSP)
SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
Many deep learning applications benefit from using large models with billions
of parameters. Training these models is notoriously expensive due to the need
for specialized HPC clusters. In this work, we consider alternative setups for
training large models: using cheap "preemptible" instances or pooling existing
resources from multiple regions. We analyze the performance of existing
model-parallel algorithms in these conditions and find configurations where
training larger models becomes less communication-intensive. Based on these
findings, we propose SWARM parallelism, a model-parallel training algorithm
designed for poorly connected, heterogeneous and unreliable devices. SWARM
creates temporary randomized pipelines between nodes that are rebalanced in
case of failure. We empirically validate our findings and compare SWARM
parallelism with existing large-scale training approaches. Finally, we combine
our insights with compression strategies to train a large Transformer language
model with 1B shared parameters (approximately 13B before sharing) on
preemptible T4 GPUs with less than 200 Mb/s of network bandwidth.
Comment: Accepted to International Conference on Machine Learning (ICML) 2023.
25 pages, 8 figures
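A minimal sketch of the temporary randomized pipelines idea, under the assumption that each pipeline stage is served by a pool of interchangeable worker replicas; a sender routes each microbatch to a random healthy peer, so a preempted worker is simply skipped and the pipeline rebalances. Names (Worker, Stage, run_microbatch) are illustrative, not the paper's implementation.

```python
import random

class Worker:
    """Stub for a remote peer; a real system would issue an RPC here."""
    def __init__(self, name):
        self.name = name

    def forward(self, activations):
        return activations  # placeholder for the stage's computation

class Stage:
    def __init__(self, workers):
        self.workers = list(workers)          # interchangeable replicas

    def pick(self):
        if not self.workers:
            raise RuntimeError("stage has no live workers")
        return random.choice(self.workers)    # randomized routing

    def mark_failed(self, worker):
        self.workers.remove(worker)           # preempted instance drops out

def run_microbatch(stages, activations):
    """Route one microbatch through a temporary, per-batch pipeline."""
    for stage in stages:
        while True:
            worker = stage.pick()
            try:
                activations = worker.forward(activations)
                break
            except ConnectionError:
                stage.mark_failed(worker)     # rebalance onto live replicas
    return activations
```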
vONTSS: vMF based semi-supervised neural topic modeling with optimal transport
Recently, Neural Topic Models (NTM), inspired by variational autoencoders,
have attracted a lot of research interest; however, these methods have limited
applications in the real world due to the challenge of incorporating human
knowledge. This work presents a semi-supervised neural topic modeling method,
vONTSS, which uses von Mises-Fisher (vMF) based variational autoencoders and
optimal transport. When a few keywords per topic are provided, vONTSS in the
semi-supervised setting generates potential topics and optimizes topic-keyword
quality and topic classification. Experiments show that vONTSS outperforms
existing semi-supervised topic modeling methods in classification accuracy and
diversity. vONTSS also supports unsupervised topic modeling. Quantitative and
qualitative experiments show that vONTSS in the unsupervised setting
outperforms recent NTMs on multiple aspects: vONTSS discovers highly clustered
and coherent topics on benchmark datasets. It is also much faster than the
state-of-the-art weakly supervised text classification method while achieving
similar classification performance. We further prove the equivalence of optimal
transport loss and cross-entropy loss at the global minimum.
Comment: 24 pages, 12 figures, ACL findings 202
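A hedged sketch of the optimal-transport component: align a document's topic distribution with a keyword-induced topic prior via entropic OT, here using the POT library's Sinkhorn solver. The cost matrix and all names are illustrative assumptions; vONTSS' exact formulation may differ.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def ot_topic_loss(doc_topic, keyword_prior, topic_emb, reg=0.1):
    """doc_topic, keyword_prior: (K,) distributions; topic_emb: (K, d)."""
    # Cost of moving mass between topics: cosine distance of embeddings.
    z = topic_emb / np.linalg.norm(topic_emb, axis=1, keepdims=True)
    M = 1.0 - z @ z.T
    # Entropic-regularized OT cost between the two distributions.
    return ot.sinkhorn2(doc_topic, keyword_prior, M, reg)
```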
Layer-wise Adaptive Step-Sizes for Stochastic First-Order Methods for Deep Learning
We propose a new per-layer adaptive step-size procedure for stochastic
first-order optimization methods for minimizing empirical loss functions in
deep learning, eliminating the need for the user to tune the learning rate
(LR). The proposed approach exploits the layer-wise stochastic curvature
information contained in the diagonal blocks of the Hessian in deep neural
networks (DNNs) to compute adaptive step-sizes (i.e., LRs) for each layer. The
method has memory requirements that are comparable to those of first-order
methods, while its per-iteration time complexity is only increased by an amount
that is roughly equivalent to an additional gradient computation. Numerical
experiments show that SGD with momentum and AdamW combined with the proposed
per-layer step-sizes are able to choose effective LR schedules and outperform
fine-tuned LR versions of these methods as well as popular first-order and
second-order algorithms for training DNNs on Autoencoder, Convolutional Neural
Network (CNN) and Graph Convolutional Network (GCN) models. Finally, it is
proved that an idealized version of SGD with the layer-wise step sizes
converges linearly when using full-batch gradients.
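A minimal sketch of the idea in PyTorch, assuming a Hutchinson-style Hessian-vector probe to estimate per-layer diagonal-block curvature and setting each layer's LR inversely to it. This illustrates the concept, not the paper's exact procedure; all names are ours.

```python
import torch

def layerwise_lrs(loss, params, base_lr=1.0, eps=1e-8):
    """Per-layer LRs ~ 1 / (estimated diagonal-block curvature)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    probes = [torch.randn_like(p).sign() for p in params]   # Rademacher
    hvps = torch.autograd.grad(grads, params, grad_outputs=probes)
    lrs = []
    for v, hv in zip(probes, hvps):
        curvature = (v * hv).abs().mean()   # Hutchinson diagonal estimate
        lrs.append(float(base_lr / (curvature + eps)))
    return lrs
```

The returned values could then be assigned to per-layer parameter groups of a base optimizer such as SGD with momentum or AdamW, which is consistent with how the abstract describes combining the step sizes with those methods.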
Implicit Loss of Surjectivity and Facial Reduction: Theory and Applications
Facial reduction, pioneered by Borwein and Wolkowicz, is a preprocessing method that is commonly used to obtain strict feasibility in the reformulated, reduced constraint system.
The importance of strict feasibility is often addressed in the context of the convergence results for interior point methods.
Beyond the theoretical properties that facial reduction conveys, we show that facial reduction, which is not limited to interior point methods, leads to strong numerical performance in different classes of algorithms.
In this thesis we study various consequences and the broad applicability of facial reduction.
The thesis is organized in two parts.
In the first part, we show the instabilities that accompany the absence of
strict feasibility through the lens of facially reduced systems.
In particular, we exploit the implicit redundancies, revealed by each nontrivial facial reduction step, resulting in the implicit loss of surjectivity.
This leads to the two-step facial reduction and two novel related notions of singularity.
For the area of semidefinite programming, we use these singularities to strengthen a known bound on the solution rank, the Barvinok-Pataki bound.
For the area of linear programming, we reveal degeneracies caused by the implicit redundancies.
Furthermore, we propose a preprocessing tool that uses the simplex method.
In the second part of this thesis, we continue with the semidefinite programs that do not have strictly feasible points.
We focus on the doubly-nonnegative relaxation of the binary quadratic program and a semidefinite program with a nonlinear objective function.
We closely work with two classes of algorithms, the splitting method and the Gauss-Newton interior point method.
We elaborate on the advantages in building models from facial reduction. Moreover, we develop algorithms for real-world problems including the quadratic assignment problem, the protein side-chain positioning problem, and the key rate computation for quantum key distribution.
Facial reduction continues to play an important role in providing robust
reformulated models, in both theoretical and practical aspects, resulting in
successful numerical performance.
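For concreteness, a single facial reduction step for a spectrahedron can be summarized as follows; this is the standard construction in our own notation, not a verbatim statement from the thesis.

```latex
\[
  F=\{X\in\mathbb{S}^n_+ : \langle A_i,X\rangle=b_i,\ i=1,\dots,m\},\qquad
  0\neq Z=\sum_i y_i A_i\succeq 0,\quad b^\top y=0
  \;\Longrightarrow\;
  \langle Z,X\rangle=0\ \ \forall X\in F
  \;\Longrightarrow\;
  F\subseteq V\,\mathbb{S}^r_+\,V^\top,
\]
% where the columns of V span null(Z) and r = dim null(Z);
% substituting X = V R V^T, R in S^r_+, gives the smaller reduced system,
% which is closer to (and after finitely many steps attains) strict feasibility.
```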
Modular lifelong machine learning
Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge.
Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to only reuse the subset of modules which are useful for the task at hand.
This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.
Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.
Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvement in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods.
Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
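A minimal sketch of the modular reuse idea common to these methods, assuming a PyTorch setup: a library of pre-trained modules is accumulated over past problems, and candidate networks for a new problem compose a frozen library backbone with a freshly initialized module. The library contents, names, and two-stage composition are illustrative assumptions, not HOUDINI's or PICLE's actual interfaces.

```python
import torch.nn as nn

# Library of modules accumulated over previously solved problems.
library = {
    "conv_feats": nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(4), nn.Flatten()),
    "mlp_feats": nn.Sequential(nn.Flatten(), nn.LazyLinear(256), nn.ReLU()),
}

def candidate_programs(n_classes=10):
    """Enumerate simple compositions: frozen library backbone + new head."""
    for name, backbone in library.items():
        for p in backbone.parameters():
            p.requires_grad = False          # protect old knowledge
        head = nn.LazyLinear(n_classes)      # freshly initialized module
        yield name, nn.Sequential(backbone, head)

# A search procedure (program synthesis in HOUDINI, probabilistic search in
# PICLE) would score these candidates and keep the best one for the new task.
```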
How Different Is Stereotypical Bias Across Languages?
Recent studies have demonstrated how to assess the stereotypical bias in
pre-trained English language models. In this work, we extend this branch of
research in multiple different dimensions by systematically investigating (a)
mono- and multilingual models of (b) different underlying architectures with
respect to their bias in (c) multiple different languages. To that end, we make
use of the English StereoSet data set (Nadeem et al., 2021), which we
semi-automatically translate into German, French, Spanish, and Turkish. We find
that it is of major importance to conduct this type of analysis in a
multilingual setting, as our experiments show a much more nuanced picture as
well as notable differences from the English-only analysis. The main takeaways
from our analysis are that mGPT-2 (partly) shows surprising anti-stereotypical
behavior across languages, English (monolingual) models exhibit the strongest
bias, and the stereotypes reflected in the data set are least present in
Turkish models. Finally, we release our codebase alongside the translated data
sets and practical guidelines for the semi-automatic translation to encourage a
further extension of our work to other languages.
Comment: Accepted @ "3rd Workshop on Bias and Fairness in AI" (co-located with
ECML PKDD 2023). This is the author's version of the work. The definitive
version of record will be published in the proceedings
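A hedged sketch of a StereoSet-style probe: score the stereotypical and anti-stereotypical variant of each sentence with a causal LM and report how often the stereotype is preferred (50% would indicate no bias). The model name, data fields, and scoring are illustrative assumptions, not the paper's exact evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def logprob(sentence):
    """Total log-likelihood of a sentence under the LM."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)            # .loss = mean token NLL
    return -out.loss.item() * (ids.size(1) - 1)

def stereotype_score(pairs):
    """pairs: list of (stereotypical, anti_stereotypical) sentences."""
    prefer = sum(logprob(s) > logprob(a) for s, a in pairs)
    return prefer / len(pairs)
```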
Continual Learning, Fast and Slow
According to the Complementary Learning Systems (CLS)
theory~\cite{mcclelland1995there} in neuroscience, humans do effective
\emph{continual learning} through two complementary systems: a fast learning
system centered on the hippocampus for rapid learning of specific, individual
experiences; and a slow learning system located in the neocortex for
the gradual acquisition of structured knowledge about the environment.
Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a
general continual learning framework comprising a fast learning system for
supervised learning of pattern-separated representations from specific tasks
and a slow learning system for learning task-agnostic, general representations
via Self-Supervised Learning (SSL). DualNets can seamlessly
incorporate both representation types into a holistic framework to facilitate
better continual learning in deep neural networks. Via extensive experiments,
we demonstrate the promising results of DualNets on a wide range of continual
learning protocols, ranging from the standard offline, task-aware setting to
the challenging online, task-free scenario. Notably, on the
CTrL~\cite{veniat2020efficient} benchmark that has unrelated tasks with vastly
different visual images, DualNets can achieve competitive performance with
existing state-of-the-art dynamic architecture
strategies~\cite{ostapenko2021continual}. Furthermore, we conduct comprehensive
ablation studies to validate DualNets' efficacy, robustness, and scalability.
Code will be made available at \url{https://github.com/phquang/DualNet}.
Comment: arXiv admin note: substantial text overlap with arXiv:2110.0017
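A minimal sketch of the fast/slow architecture described above, under the assumption of a residual adaptation scheme: a slow learner provides general features (trained with a self-supervised loss), and a fast learner adapts them for the current supervised task. Module shapes and names are illustrative, not DualNets' actual design.

```python
import torch.nn as nn

class DualNet(nn.Module):
    def __init__(self, feat_dim=512, n_classes=10):
        super().__init__()
        # Slow learner: general representation, also trained with SSL.
        self.slow = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim),
                                  nn.ReLU())
        # Fast learner: rapid, task-specific adaptation of slow features.
        self.fast = nn.Linear(feat_dim, feat_dim)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        general = self.slow(x)                  # task-agnostic features
        adapted = general + self.fast(general)  # fast residual adaptation
        return self.head(adapted)

# Training loop (not shown): the slow branch additionally minimizes a
# self-supervised objective on unlabeled views, while the fast branch and
# head minimize the supervised loss of the current continual-learning task.
```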