Displaced dynamics of binary mixtures in linear and nonlinear optical lattices
The dynamical behavior of matter wave solitons of two-component Bose-Einstein
condensates (BEC) in combined linear and nonlinear optical lattices (OLs) is
investigated. In particular, the frequency of the oscillatory dynamics that
results from initially slightly displaced components is studied
both analytically, by means of a variational effective potential approach for
the reduced collective coordinate dynamics of the soliton, and numerically, by
direct integrations of the mean field equations of the BEC mixture. We show
that for small initial displacements binary solitons can be viewed as point
masses connected by elastic springs of strengths related to the amplitude of
the OL and to the intra- and inter-species interactions. Analytical expressions
for the symmetric and anti-symmetric mode frequencies are derived, and the
occurrence of beating phenomena in the displaced dynamics is predicted. These
expressions are shown to give a very good estimate of the oscillation
frequencies for
different values of the intra-species interatomic scattering length, as
confirmed by direct numerical integrations of the mean field Gross-Pitaevskii
equations (GPE) of the mixture. The possibility of using displaced dynamics for
indirect measurements of BEC mixture characteristics, such as the number of
atoms and the interatomic interactions, is also suggested.
Comment: 8 pages, 21 figures
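The point-mass picture above maps onto the textbook problem of two masses coupled by springs. The sketch below, with a hypothetical effective mass m, lattice spring constant k, and inter-species coupling κ (illustrative placeholders, not the paper's derived expressions), shows where the two mode frequencies and the beating come from.

```latex
% Two point masses m, each held in a lattice well by an effective
% spring k and coupled to each other by an inter-species spring \kappa:
%   m\ddot{x}_1 = -k x_1 - \kappa (x_1 - x_2),
%   m\ddot{x}_2 = -k x_2 - \kappa (x_2 - x_1).
% The symmetric (x_1 + x_2) and anti-symmetric (x_1 - x_2) normal
% modes then oscillate at
\[
  \omega_s = \sqrt{\frac{k}{m}}, \qquad
  \omega_a = \sqrt{\frac{k + 2\kappa}{m}},
\]
% and displacing a single component excites both modes, producing
% beats at the difference frequency |\omega_a - \omega_s|.
```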
Band of cacophony - abdominal catastrophe caused by the fibrous band of Meckel’s diverticulum: a case report
CoDeC: Communication-Efficient Decentralized Continual Learning
Training at the edge utilizes continuously evolving data generated at
different locations. Privacy concerns prohibit the co-location of this
spatially and temporally distributed data, making it crucial to design
training algorithms that enable efficient continual learning over decentralized
private data. Decentralized learning allows serverless training with spatially
distributed data. A fundamental barrier in such distributed learning is the
high bandwidth cost of communicating model updates between agents. Moreover,
existing works under this training paradigm are not inherently suitable for
learning a temporal sequence of tasks while retaining the previously acquired
knowledge. In this work, we propose CoDeC, a novel communication-efficient
decentralized continual learning algorithm which addresses these challenges. We
mitigate catastrophic forgetting while learning a task sequence in a
decentralized learning setup by combining orthogonal gradient projection with
gossip averaging across decentralized agents. Further, CoDeC includes a novel
lossless communication compression scheme based on the gradient subspaces. We
express layer-wise gradients as a linear combination of the basis vectors of
these gradient subspaces and communicate the associated coefficients. We
theoretically analyze the convergence rate for our algorithm and demonstrate
through an extensive set of experiments that CoDeC successfully learns
distributed continual tasks with minimal forgetting. The proposed compression
scheme results in up to a 4.8x reduction in communication costs while
maintaining iso-performance with the full-communication baseline.
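A minimal sketch of the compression idea the abstract describes: if a layer's gradients are known to lie in a low-dimensional subspace with orthonormal basis B, an agent can gossip only the coefficients B^T g instead of the full gradient g. The basis construction, sizes, and function names below are illustrative assumptions, not CoDeC's actual implementation.

```python
import numpy as np

def compress_gradient(g: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project a flattened layer gradient onto an orthonormal subspace basis.

    basis has shape (d, r) with r << d and orthonormal columns, so the
    r coefficients fully determine the component of g inside the subspace.
    """
    return basis.T @ g  # shape (r,)

def decompress_gradient(coeffs: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Reconstruct the gradient from its subspace coefficients."""
    return basis @ coeffs  # shape (d,)

# --- illustrative usage with random data (hypothetical sizes) ---
rng = np.random.default_rng(0)
d, r = 1024, 32                       # layer size vs. subspace rank
basis, _ = np.linalg.qr(rng.standard_normal((d, r)))  # orthonormal columns
g = basis @ rng.standard_normal(r)    # a gradient lying in the subspace

coeffs = compress_gradient(g, basis)  # send r numbers instead of d
g_hat = decompress_gradient(coeffs, basis)
assert np.allclose(g, g_hat)          # lossless when g is in the subspace
print(f"per-layer communication reduced by {d / r:.1f}x")
```

In CoDeC these coefficients would be what neighboring agents gossip-average; the ratio printed here depends entirely on the assumed rank r and is not the paper's reported 4.8x overall figure.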
Homogenizing Non-IID datasets via In-Distribution Knowledge Distillation for Decentralized Learning
Decentralized learning enables serverless training of deep neural networks
(DNNs) in a distributed manner on multiple nodes. This allows for the use of
large datasets, as well as the ability to train with a wide variety of data
sources. However, one of the key challenges with decentralized learning is
heterogeneity in the data distribution across the nodes. In this paper, we
propose In-Distribution Knowledge Distillation (IDKD) to address the challenge
of heterogeneous data distribution. The goal of IDKD is to homogenize the data
distribution across the nodes. While such data homogenization could be achieved
by exchanging raw data among the nodes at the cost of privacy, IDKD achieves the
same objective using a common public dataset shared across nodes without breaking the
privacy constraint. This public dataset is different from the training dataset
and is used to distill the knowledge from each node and communicate it to its
neighbors through the generated labels. With traditional knowledge
distillation, the generalization of the distilled model is reduced because all
the public dataset samples are used irrespective of their similarity to the
local dataset. Thus, we introduce an Out-of-Distribution (OoD) detector at each
node to label only the subset of the public dataset that lies close to the local
training data distribution. Finally, only the labels corresponding to these
subsets are exchanged among the nodes, and after appropriate label averaging
each node is fine-tuned on these data subsets along with its local data. Our
experiments on
multiple image classification datasets and graph topologies show that the
proposed IDKD scheme is more effective than traditional knowledge distillation
and achieves state-of-the-art generalization performance on heterogeneously
distributed data with minimal communication overhead.
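A schematic of the IDKD exchange step as described above: each node labels only the public samples its OoD detector flags as in-distribution, soft labels are averaged with neighbors on shared samples, and the node fine-tunes on that subset plus its local data. The max-softmax detector, thresholds, and all names here are stand-in assumptions, not the paper's actual components.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def in_distribution_mask(logits: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Toy OoD detector: keep public samples the local model is confident on.

    Max-softmax confidence is one common OoD score; the paper's detector
    may differ -- this is only a stand-in.
    """
    return softmax(logits).max(axis=1) >= threshold

def exchange_labels(local_probs: np.ndarray,
                    neighbor_probs: list[np.ndarray]) -> np.ndarray:
    """Average soft labels from this node and its neighbors on shared samples."""
    stacked = np.stack([local_probs, *neighbor_probs])
    return stacked.mean(axis=0)

# --- illustrative usage (hypothetical sizes, random stand-in 'models') ---
rng = np.random.default_rng(0)
n_public, n_classes = 1000, 10
logits = rng.standard_normal((n_public, n_classes)) * 3  # local model outputs

mask = in_distribution_mask(logits)            # subset close to local data
probs = softmax(logits)[mask]                  # distilled soft labels
neighbor = [softmax(rng.standard_normal(probs.shape) * 3)]
avg_labels = exchange_labels(probs, neighbor)  # consensus labels for finetuning
print(f"kept {mask.sum()} / {n_public} public samples")
```

Only labels (not raw data) cross node boundaries in this sketch, which is the mechanism by which the abstract's privacy constraint is preserved.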
- …