Neural Discovery of Permutation Subgroups
We consider the problem of discovering a subgroup H of a permutation group
S_n. Unlike traditional H-invariant networks, wherein H is assumed to be
known, we present a method to discover the underlying subgroup, given that it
satisfies certain conditions. Our results show that one can discover any
subgroup of type S_k by learning an S_n-invariant function and a linear
transformation. We also prove similar results for cyclic and dihedral
subgroups. Finally, we provide a general theorem that can be extended to
discover other subgroups of S_n. We also demonstrate the applicability of our
results through numerical experiments on image-digit sum and symmetric
polynomial regression tasks.
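The "linear transformation composed with an invariant function" construction can be loosely illustrated with a minimal numpy sketch. Here a DeepSets-style sum-pooled network stands in for the S_n-invariant function, and W is a fixed random matrix standing in for the learned linear map; both are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def invariant_net(z):
    # Toy S_n-invariant function (DeepSets-style): a pointwise feature
    # map followed by sum pooling, unchanged under any reordering of z.
    return np.tanh(z).sum(axis=-1)

# Hypothetical linear transformation W composed with the invariant
# function.  In the construction above W would be learned; here it is
# a fixed random matrix for illustration.
n = 5
W = rng.standard_normal((n, n))

def model(x):
    return invariant_net(W @ x)

x = rng.standard_normal(n)
perm = rng.permutation(n)

# The pooled function is invariant to every permutation of its input;
# composing with a learned W restricts which permutations of the raw
# input x remain symmetries of the full model.
same = np.isclose(invariant_net(x), invariant_net(x[perm]))
```

The point of the composition is that while `invariant_net` is symmetric under all of S_n, the subgroup of input permutations left invariant by `model` is controlled by W, which is what makes the subgroup discoverable from data.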
A Unified Framework for Discovering Discrete Symmetries
We consider the problem of learning a function respecting a symmetry from
among a class of symmetries. We develop a unified framework that enables
symmetry discovery across a broad range of subgroups including locally
symmetric, dihedral and cyclic subgroups. At the core of the framework is a
novel architecture composed of linear and tensor-valued functions that
expresses functions invariant to these subgroups in a principled manner. The
structure of the architecture enables us to leverage multi-armed bandit
algorithms and gradient descent to efficiently optimize over the linear and the
tensor-valued functions, respectively, and to infer the symmetry that is
ultimately learnt. We also discuss the necessity of the tensor-valued functions
in the architecture. Experiments on image-digit sum and polynomial regression
tasks demonstrate the effectiveness of our approach.
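The split between bandit optimization of discrete choices and gradient descent over continuous parameters can be sketched with a toy epsilon-greedy bandit. The arms, rewards, and constants below are synthetic stand-ins (arm 2 is made the correct choice by construction); the paper's actual bandit algorithm and reward signal may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reward: arm 2 plays the role of the correct discrete choice
# (e.g. the symmetry candidate that best explains the data).
TRUE_BEST = 2
def pull(arm):
    return float(arm == TRUE_BEST) + 0.1 * rng.standard_normal()

n_arms = 4
counts = np.zeros(n_arms)
values = np.zeros(n_arms)
for t in range(500):
    if rng.random() < 0.1:                 # explore a random arm
        arm = int(rng.integers(n_arms))
    else:                                  # exploit the current estimate
        arm = int(values.argmax())
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]   # incremental mean

inferred = int(values.argmax())            # the symmetry ultimately inferred
```

In the framework described above, each pull would correspond to evaluating one candidate linear component (with the tensor-valued part tuned by gradient descent), and the arm with the highest estimated value identifies the learnt symmetry.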
Adapt then Unlearn: Exploiting Parameter Space Semantics for Unlearning in Generative Adversarial Networks
The increased attention to regulating the outputs of deep generative models,
driven by growing concerns about privacy and regulatory compliance, has
highlighted the need for effective control over these models. This necessity
arises from instances where generative models produce outputs containing
undesirable, offensive, or potentially harmful content. To tackle this
challenge, the concept of machine unlearning has emerged, aiming to forget
specific learned information or to erase the influence of undesired data
subsets from a trained model. The objective of this work is to prevent the
generation of outputs containing undesired features from a pre-trained GAN
where the underlying training data set is inaccessible. Our approach is
inspired by a crucial observation: the parameter space of GANs exhibits
meaningful directions that can be leveraged to suppress specific undesired
features. However, such directions usually result in the degradation of the
quality of generated samples. Our proposed method, known as
'Adapt-then-Unlearn,' excels at unlearning such undesirable features while also
maintaining the quality of generated samples. This method unfolds in two
stages: in the initial stage, we adapt the pre-trained GAN using negative
samples provided by the user, while in the subsequent stage, we focus on
unlearning the undesired feature. During the latter phase, we train the
pre-trained GAN using positive samples, incorporating a repulsion regularizer.
This regularizer encourages the model's parameters to move away from the
parameters of the adapted model obtained in the first stage while preserving
the quality of generated samples. To the best of our knowledge, our approach
is the first method addressing unlearning in GANs. We validate the
effectiveness of our method through comprehensive experiments.
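The parameter-space repulsion idea can be sketched in a few lines of numpy. Here `theta_neg` stands in for the stage-1 adapted parameters, `theta_pos` for parameters generating good-quality samples, and the quadratic task loss plus Gaussian-shaped repulsion term are illustrative choices, not the paper's exact objective.

```python
import numpy as np

# theta_neg: stand-in for the adapted (undesired-feature) parameters
# from stage 1; theta_pos: stand-in for good-quality parameters.
theta_pos = np.array([1.0, 1.0])
theta_neg = np.array([0.0, 0.0])
lam, lr = 0.5, 0.1

theta = np.array([0.4, 0.4])              # start near theta_neg
start_dist = np.linalg.norm(theta - theta_neg)
for _ in range(200):
    d = theta - theta_neg
    # gradient of the task loss ||theta - theta_pos||^2 (pull toward quality)
    task_grad = 2.0 * (theta - theta_pos)
    # gradient of the repulsion term lam * exp(-||d||^2) (push away
    # from the stage-1 parameters)
    rep_grad = -2.0 * lam * np.exp(-d @ d) * d
    theta = theta - lr * (task_grad + rep_grad)
```

Because the repulsion term decays with distance, it dominates only near `theta_neg`; far from it, the task loss takes over, which is one way to read the abstract's claim that the method forgets the feature without sacrificing sample quality.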
Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images
Cell segmentation in histopathological images plays a crucial role in
understanding, diagnosing, and treating many diseases. However, data annotation
for this is expensive since there can be a large number of cells per image, and
expert pathologists are needed for labelling images. Instead, our paper focuses
on using weak supervision -- annotation from related tasks -- to induce a
segmenter. Recent foundation models, such as Segment Anything (SAM), can use
prompts to leverage additional supervision during inference. SAM has performed
remarkably well in natural image segmentation tasks; however, its applicability
to cell segmentation has not been explored.
In response, we investigate guiding the prompting procedure in SAM for weakly
supervised cell segmentation when only bounding box supervision is available.
We develop two workflows: (1) an object detector's output as a test-time prompt
to SAM (D-SAM), and (2) SAM as pseudo mask generator over training data to
train a standalone segmentation model (SAM-S). Finding that the two workflows
have complementary strengths, we develop an integer-programming-based approach
to reconcile the two sets of segmentation masks, achieving still higher
performance. We experiment on three publicly available cell segmentation
datasets, namely CoNSeP, MoNuSeg, and TNBC, and find that all SAM-based
solutions substantially outperform existing weakly supervised image
segmentation models, obtaining 9-15 point Dice gains.
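The mask-reconciliation step can be sketched as a tiny combinatorial problem: one binary variable per cell selects which workflow's mask to keep, trading per-mask confidence against overlap between neighbouring choices. All numbers below are made up, and brute-force enumeration replaces a real integer-programming solver; the paper's actual objective may differ.

```python
import itertools
import numpy as np

# conf[i, k]: confidence of workflow k's mask for cell i
# (k = 0: the detector-prompted D-SAM mask, k = 1: the SAM-S mask).
conf = np.array([[0.9, 0.6],
                 [0.4, 0.8],
                 [0.7, 0.7]])

# overlap[(i, j)][ki, kj]: penalty when neighbouring cells i and j keep
# masks ki and kj that overlap each other.
overlap = {(0, 1): np.array([[0.0, 0.4], [0.1, 0.0]]),
           (1, 2): np.array([[0.3, 0.0], [0.0, 0.2]])}

def objective(z):
    score = sum(conf[i, z[i]] for i in range(len(z)))
    score -= sum(m[z[i], z[j]] for (i, j), m in overlap.items())
    return score

# Three binary variables -> brute force; a real instance would hand this
# objective to an integer-programming solver instead.
best = max(itertools.product((0, 1), repeat=3), key=objective)
```

Note how the chosen assignment keeps the higher-confidence mask for each cell except where overlap penalties make the globally consistent choice differ from the per-cell greedy one, which is where reconciliation can beat either workflow alone.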
SCLAiR : Supervised Contrastive Learning for User and Device Independent Airwriting Recognition
Airwriting Recognition is the problem of identifying letters written in free
space with finger movement. It is essentially a specialized case of gesture
recognition, wherein the vocabulary of gestures corresponds to letters as in a
particular language. With the wide adoption of smart wearables in the general
population, airwriting recognition using motion sensors from a smart-band can
be used as a medium of user input for applications in Human-Computer
Interaction. There has been limited work in the recognition of in-air
trajectories using motion sensors, and the performance of the techniques in the
case when the device used to record signals is changed has not been explored
hitherto. Motivated by these observations, a new paradigm for device- and
user-independent airwriting recognition based on supervised contrastive
learning is proposed. A two-stage classification strategy is employed: the
first stage involves training an encoder network with a supervised contrastive
loss. In the subsequent stage, a classification head is trained with the
encoder weights kept frozen.
The efficacy of the proposed method is demonstrated through experiments on a
publicly available dataset and also with a dataset recorded in our lab using a
different device. Experiments have been performed in both supervised and
unsupervised settings and compared against several state-of-the-art domain
adaptation techniques. Data and the code for our implementation will be made
available at https://github.com/ayushayt/SCLAiR.
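The stage-1 objective can be illustrated with a minimal numpy version of a supervised contrastive loss: for each anchor embedding, other samples with the same label act as positives. The embeddings and labels below are toy values, and this generic SupCon-style formulation is an assumption about the loss, not the paper's exact implementation.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    # L2-normalize embeddings so similarities are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(labels)
    loss = 0.0
    for i in range(n):
        mask = np.arange(n) != i                 # exclude the anchor itself
        pos = mask & (labels == labels[i])       # same-label positives
        if not pos.any():
            continue
        # log of the softmax denominator over all non-anchor samples
        log_denom = np.log(np.exp(sim[i, mask]).sum())
        loss += -(sim[i, pos] - log_denom).mean()
    return loss / n

labels = np.array([0, 0, 1, 1])
# Classes well separated in embedding space vs. intermingled:
tight = np.array([[1.0, 0.0], [0.99, 0.1], [-1.0, 0.0], [-0.99, 0.1]])
mixed = np.array([[1.0, 0.0], [-1.0, 0.1], [-1.0, 0.0], [1.0, 0.1]])
```

An encoder trained to minimize this loss pulls same-letter trajectories together regardless of which device recorded them, which is the intuition behind the device-independence claim; the frozen-encoder classification head is then trained in stage 2.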