631 research outputs found
Adaptation and learning over networks for nonlinear system modeling
In this chapter, we analyze nonlinear filtering problems in distributed
environments, e.g., sensor networks or peer-to-peer protocols. In these
scenarios, the agents in the environment receive measurements in a streaming
fashion, and they are required to estimate a common (nonlinear) model by
alternating local computations and communications with their neighbors. We
focus on the important distinction between single-task problems, where the
underlying model is common to all agents, and multitask problems, where each
agent might converge to a different model due to, e.g., spatial dependencies or
other factors. Currently, most of the literature on distributed learning in the
nonlinear case has focused on the single-task case, which may be a strong
limitation in real-world scenarios. After introducing the problem and reviewing
the existing approaches, we describe a simple kernel-based algorithm tailored
for the multitask case. We evaluate the proposal on a simulated benchmark task,
and we conclude by detailing currently open problems and lines of research.
Comment: To be published as a chapter in `Adaptive Learning Methods for
Nonlinear System Modeling', Elsevier Publishing, Eds. D. Comminiello and J.C.
Principe (2018)
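The adapt-then-combine structure described above can be sketched in a few lines. The following is a minimal toy example, not the chapter's algorithm: each agent runs a kernel LMS update on its own streaming data (the kernel is approximated with random Fourier features, an assumption made here for compactness) and then averages intermediate estimates with its ring neighbors. All network sizes, step sizes, and the target function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 agents on a ring, streaming nonlinear regression data
N, D, T = 4, 50, 3000          # agents, random-feature dimension, time steps
neighbors = {k: [(k - 1) % N, k, (k + 1) % N] for k in range(N)}
A = {k: np.full(len(neighbors[k]), 1.0 / len(neighbors[k])) for k in range(N)}

# Random Fourier features approximating a Gaussian kernel on 2-D inputs
W_rff = rng.normal(size=(D, 2))
b_rff = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(W_rff @ x + b_rff)

# Common nonlinear target (single-task case): f(x) = sin(x0) + 0.5*x1
f = lambda x: np.sin(x[0]) + 0.5 * x[1]

w = np.zeros((N, D))           # each agent's weight vector in feature space
mu = 0.1
for t in range(T):
    psi = np.empty_like(w)
    for k in range(N):         # adapt: local LMS step on one fresh sample
        x = rng.uniform(-2, 2, size=2)
        d = f(x) + 0.01 * rng.normal()
        z = phi(x)
        psi[k] = w[k] + mu * (d - w[k] @ z) * z
    for k in range(N):         # combine: average over the neighborhood
        w[k] = A[k] @ psi[neighbors[k]]

# Report the steady-state test error at agent 0
xs = rng.uniform(-2, 2, size=(200, 2))
err = np.mean([(f(x) - w[0] @ phi(x)) ** 2 for x in xs])
print(f"test MSE at agent 0: {err:.4f}")
```

In the multitask case discussed in the chapter, the naive averaging step above would bias agents toward a single model; multitask variants instead weight or gate the combination by task similarity.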
Meta-Learning via Classifier(-free) Guidance
State-of-the-art meta-learning techniques do not optimize for zero-shot adaptation to unseen tasks, a setting in which humans excel. On the contrary, meta-learning algorithms learn hyperparameters and weight initializations that explicitly optimize for few-shot learning performance. In this work, we take inspiration from recent advances in generative modeling and language-conditioned image synthesis to propose meta-learning techniques that use natural language guidance to achieve higher zero-shot performance compared to the state-of-the-art. We do so by recasting the meta-learning problem as a multi-modal generative modeling problem: given a task, we consider its adapted neural network weights and its natural language description as equivalent multi-modal task representations. We first train an unconditional generative hypernetwork model to produce neural network weights; then we train a second "guidance" model that, given a natural language task description, traverses the hypernetwork latent space to find high-performance task-adapted weights in a zero-shot manner. We explore two alternative approaches for latent space guidance: "HyperCLIP"-based classifier guidance and a conditional Hypernetwork Latent Diffusion Model ("HyperLDM"), which we show to benefit from the classifier-free guidance technique common in image generation. Finally, we demonstrate that our approaches outperform existing meta-learning methods with zero-shot learning experiments on our Meta-VQA dataset, which we specifically constructed to reflect the multi-modal meta-learning setting.
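The classifier-free guidance step that the abstract refers to is a simple extrapolation between conditional and unconditional score estimates. The sketch below shows only that combination rule with stand-in arrays; in HyperLDM the two predictions would come from a latent diffusion model over hypernetwork latents, conditioned (or not) on a language embedding, which is far beyond a few lines.

```python
import numpy as np

# Classifier-free guidance combines conditional and unconditional noise
# (score) estimates with a guidance weight w:
#   eps_guided = (1 + w) * eps(z, c) - w * eps(z, null)
def cfg_combine(eps_cond, eps_uncond, w):
    """Extrapolate past the conditional estimate toward the condition."""
    return (1 + w) * eps_cond - w * eps_uncond

# Hypothetical stand-in denoiser outputs for an 8-D latent
eps_uncond = np.full(8, 0.5)   # prediction without the task description
eps_cond = np.full(8, 0.8)     # prediction given the task description

guided = cfg_combine(eps_cond, eps_uncond, w=2.0)
print(guided)                  # each entry: (1+2)*0.8 - 2*0.5 = 1.4
```

Larger w pushes samples further toward the conditioning signal at the cost of diversity, the same trade-off observed in image generation.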
Robust Distributed Clustering Algorithm Over Multitask Networks
We propose a new adaptive clustering algorithm that is robust to various multitask environments. Positional relationships among optimal vectors and a reference signal are determined by using the mean-square deviation relation derived from a one-step least-mean-square update. Clustering is performed by combining determinations on the positional relationships at several iterations. From this geometrical basis, unlike conventional clustering algorithms that use a simple thresholding method, the proposed algorithm can perform clustering accurately in various multitask environments. Simulation results show that the proposed algorithm achieves higher estimation accuracy than the conventional algorithms and is insensitive to parameter selection.
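The key idea of combining per-iteration determinations, rather than relying on a single threshold test, can be illustrated with a toy example. The rule below is an assumption-laden stand-in, not the paper's exact decision statistic: a node labels a neighbor as same-task when the distance between their estimates falls below a tolerance, and the boolean decisions from many iterations are merged by majority vote, which is much less sensitive to the tolerance than any single-shot comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

def cluster_decision(dists_over_time, tol):
    """Majority vote over per-iteration same-cluster determinations."""
    votes = dists_over_time < tol
    return bool(votes.mean() > 0.5)

# Hypothetical distance traces between a node's estimate and two neighbors:
# neighbor A shares the task (distances are pure estimation noise), while
# neighbor B optimizes a different model (a true gap of about 0.3 remains).
same = np.abs(rng.normal(0.0, 0.05, size=100))
diff = np.abs(rng.normal(0.3, 0.05, size=100))

tol = 0.15
print(cluster_decision(same, tol))   # True: neighbor A is grouped in
print(cluster_decision(diff, tol))   # False: neighbor B is excluded
```

A single noisy distance sample near 0.15 would flip a one-shot threshold test either way; the vote over 100 iterations is stable for any tolerance between the noise level and the task gap.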
Efficient Large-Scale Visual Representation Learning
In this article, we present our approach to single-modality visual
representation learning. Understanding visual representations of product
content is vital for recommendations, search, and advertising applications in
e-commerce. We detail and contrast techniques used to fine-tune large-scale
visual representation learning models in an efficient manner under low-resource
settings, including several pretrained backbone architectures, both in the
convolutional neural network as well as the vision transformer family. We
highlight the challenges for e-commerce applications at-scale and highlight the
efforts to more efficiently train, evaluate, and serve visual representations.
We present ablation studies evaluating the representation offline performance
for several downstream tasks, including our visually similar ad
recommendations. To this end, we present a novel text-to-image generative
offline evaluation method for visually similar recommendation systems. Finally,
we include online results from deployed machine learning systems in production
at Etsy.
An Event-based Diffusion LMS Strategy
We consider a wireless sensor network consisting of cooperative nodes, each
of which adapts to streaming data to perform a least-mean-squares estimation
while exchanging information with neighboring nodes in order to improve
performance. To reduce communication overhead and prolong battery life while
preserving the benefits of diffusion cooperation, we propose an
energy-efficient diffusion strategy that adopts an event-based communication
mechanism, which allows nodes to cooperate with neighbors only when
necessary. We also study the performance of the proposed algorithm, and show
that its network mean error and MSD are bounded in steady state. Numerical
results demonstrate that the proposed method can effectively reduce the
network energy consumption without significantly sacrificing steady-state
network MSD performance.
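The event-based mechanism can be sketched as follows. This is a toy illustration under assumptions, not the paper's exact protocol: each node runs a standard LMS adapt step, but re-broadcasts its intermediate estimate only when it has drifted from the last transmitted copy by more than a threshold delta; neighbors combine the (possibly stale) copies they hold, trading communication for a bounded increase in steady-state MSD. Network size, step size, and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

N, M, T = 5, 4, 3000                 # nodes, filter length, time steps
w_true = rng.normal(size=M)          # common parameter vector to estimate
neighbors = [list(range(N)) for _ in range(N)]   # fully connected toy graph
mu, delta = 0.02, 0.05               # step size, event threshold

w = np.zeros((N, M))                 # local estimates
w_sent = np.zeros((N, M))            # last broadcast copy held by neighbors
tx = 0                               # transmission counter

for t in range(T):
    psi = np.empty_like(w)
    for k in range(N):               # adapt: LMS step on streaming data
        x = rng.normal(size=M)
        d = w_true @ x + 0.01 * rng.normal()
        psi[k] = w[k] + mu * (d - w[k] @ x) * x
    for k in range(N):               # event: broadcast only on large drift
        if np.linalg.norm(psi[k] - w_sent[k]) > delta:
            w_sent[k] = psi[k].copy()
            tx += 1
    for k in range(N):               # combine own fresh psi with stale copies
        w[k] = np.mean(
            [psi[k] if l == k else w_sent[l] for l in neighbors[k]], axis=0)

msd = np.mean([np.linalg.norm(w[k] - w_true) ** 2 for k in range(N)])
print(f"transmissions: {tx} / {N * T}, steady-state MSD: {msd:.4f}")
```

After convergence the per-iteration drift is far smaller than delta, so broadcasts become rare, which is the source of the energy savings; the staleness of the held copies is what bounds, rather than eliminates, the MSD penalty.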