106 research outputs found
Local control of multiple module converters with ratings-based load sharing
Multiple module dc-dc converters show promise in meeting the increasing demands on ef-
ficiency and performance of energy conversion systems. In order to increase reliability,
maintainability, and expandability, a modular approach in converter design is often desired.
This thesis proposes local control of multiple module converters as an alternative to using
a central controller or master controller. A power ratings-based load sharing scheme that
allows for uniform and non-uniform sharing is introduced. Focus is given to an input series,
output parallel (ISOP) configuration and modules with a push-pull topology. Sensorless
current mode (SCM) control is digitally implemented on separate controllers for each of the
modules. The benefits of interleaving the switching signals of the distributed
modules are presented. Simulation and experimental results demonstrate stable, ratings-based sharing
in an ISOP converter with a high conversion ratio for both uniform and non-uniform load
sharing cases.
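As a rough illustration of the ratings-based sharing rule described above (the function, the example ratings, and the proportional rule as written are assumptions for illustration, not the thesis implementation), each module's output-current reference can be made proportional to its power rating, giving uniform sharing for equal ratings and non-uniform sharing otherwise:

    # Illustrative sketch of ratings-based load sharing for an ISOP stack:
    # each module's output-current reference is proportional to its power
    # rating, so equal ratings share uniformly and unequal ratings do not.

    def current_references(ratings_w, total_output_current_a):
        """Split the total output current among modules by power rating."""
        total_rating = sum(ratings_w)
        return [total_output_current_a * r / total_rating for r in ratings_w]

    # Example: a three-module stack rated 100 W / 100 W / 50 W sharing 10 A.
    print(current_references([100.0, 100.0, 50.0], 10.0))
    # -> [4.0, 4.0, 2.0]

Because the series-connected inputs carry a common input current, sharing the parallel output current in proportion to ratings also divides the input voltage across the modules in roughly the same proportion, neglecting losses.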
Selective classification using a robust meta-learning approach
Selective classification involves identifying the subset of test samples that
a model can classify with high accuracy, and is important for applications such
as automated medical diagnosis. We argue that this capability of identifying
uncertain samples is valuable for training classifiers as well, with the aim of
building more accurate classifiers. We unify these dual roles by training a
single auxiliary meta-network to output an importance weight as a function of
the instance. This measure is used at train time to reweight training data, and
at test time to rank test instances for selective classification. A second, key
component of our proposal is the meta-objective of minimizing dropout variance
(the variance of classifier output when subjected to random weight dropout) for
training the meta-network. We train the classifier together with its meta-network
using a nested objective of minimizing classifier loss on training data and
meta-loss on a separate meta-training dataset. We outperform the current
state of the art on selective classification by substantial margins: for
instance, up to 1.9% AUC and 2% accuracy on a real-world diabetic retinopathy
dataset. Finally, our meta-learning framework extends naturally to unsupervised
domain adaptation, given our unsupervised variance minimization meta-objective.
We show cumulative absolute gains of 3.4% / 3.3% in accuracy and AUC over
other baselines in domain-shift settings on the Retinopathy dataset using
unsupervised domain adaptation.
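The dropout-variance meta-objective can be pictured with a short sketch (a reading of the abstract rather than the authors' code; the toy model, the number of passes, and the scoring function are assumptions): run the classifier several times with dropout active and score each instance by the variance of its softmax outputs, so that low-variance instances rank as more confident at test time.

    import torch

    def dropout_variance(model, x, n_passes=8):
        """Variance of softmax outputs across stochastic dropout passes."""
        model.train()  # keep dropout layers active
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=-1)
                                 for _ in range(n_passes)])
        return probs.var(dim=0).sum(dim=-1)  # one score per instance

    # Usage with a hypothetical classifier that contains dropout:
    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                                torch.nn.Dropout(0.5), torch.nn.Linear(32, 3))
    scores = dropout_variance(model, torch.randn(5, 16))
    print(scores)  # lower variance -> higher confidence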
Learning on non-stationary data with re-weighting
Many real-world learning scenarios face the challenge of slow concept drift,
where data distributions change gradually over time. In this setting, we pose
the problem of learning temporally sensitive importance weights for training
data, in order to optimize predictive accuracy. We propose a class of temporal
reweighting functions that can capture multiple timescales of change in the
data, as well as instance-specific characteristics. We formulate a bi-level
optimization criterion, and an associated meta-learning algorithm, by which
these weights can be learned. In particular, our formulation trains an
auxiliary network to output weights as a function of training instances,
thereby compactly representing the instance weights. We validate our temporal
reweighting scheme on a large real-world dataset of 39M images spread over a
9-year period. Our extensive experiments demonstrate the necessity of
instance-based temporal reweighting in the dataset, and achieve significant
improvements over classical batch-learning approaches. Further, our proposal
easily generalizes to a streaming setting and shows significant gains compared
to recent continual learning methods.
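For intuition, a temporal reweighting function spanning multiple timescales might look like the sketch below (the mixture-of-exponential-decays form, parameter values, and function name are illustrative assumptions; in the paper the instance-conditioned weights come from a learned auxiliary network rather than a fixed formula):

    import numpy as np

    def temporal_weight(age_days, mix, timescales_days):
        """Weight an example by its age using a mixture of decay timescales."""
        mix = np.asarray(mix) / np.sum(mix)           # mixture coefficients
        decays = np.exp(-np.asarray(age_days)[..., None] /
                        np.asarray(timescales_days))  # one decay per timescale
        return decays @ mix                           # blended weight in (0, 1]

    # Example: a fast (30-day) and a slow (3-year) component, equally mixed.
    print(temporal_weight([0, 365, 3650], [0.5, 0.5], [30.0, 1095.0]))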
Model-agnostic Fits for Understanding Information Seeking Patterns in Humans
In decision making tasks under uncertainty, humans display characteristic
biases in seeking, integrating, and acting upon information relevant to the
task. Here, we reexamine data from previous carefully designed experiments,
collected at scale, that measured and catalogued these biases in aggregate
form. We design deep learning models that replicate these biases in aggregate,
while also capturing individual variation in behavior. A key finding of our
work is that the paucity of data collected from each individual subject can be
overcome by sampling large numbers of subjects from the population, while still
capturing individual differences. In addition, we can predict human behavior
with high accuracy without making any assumptions about task goals, reward
structure, or individual biases, thus providing a model-agnostic fit to human
behavior in the task. Such an approach can sidestep potential limitations in
modeler-specified inductive biases, and has implications for computational
modeling of human cognitive function in general, and of human-AI interfaces in
particular.
Rational Decision-Making in Inhibitory Control
An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various rewards and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/abilities.
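For reference, the classical independent race model mentioned above can be simulated in a few lines (the distribution choice and the parameter values are illustrative assumptions, not fitted quantities): a go finishing time races a stop finishing time that begins at the stop-signal delay, and the response escapes inhibition whenever the go process finishes first.

    import random

    def p_stop_failure(ssd_ms, n_trials=100_000,
                       go_mu=450.0, go_sigma=80.0, ssrt_ms=200.0):
        """Monte Carlo estimate of the probability that a go response escapes."""
        failures = 0
        for _ in range(n_trials):
            go_finish = random.gauss(go_mu, go_sigma)   # variable go latency
            stop_finish = ssd_ms + ssrt_ms              # fixed stop latency
            if go_finish < stop_finish:
                failures += 1
        return failures / n_trials

    # Stopping gets harder as the stop-signal delay grows, as the race predicts.
    for ssd in (100, 200, 300):
        print(ssd, round(p_stop_failure(ssd), 3))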
Evaluation of the efficacy of Shiva Modaka on Hematological, Biochemical and Immunological Parameters in the management of malnutrition among school-going children
Malnutrition is an issue of global dimensions affecting all ages. Malnutrition in children is common at an early age, especially during infancy and weaning. However, it also prevails during early schooling. In adults and the elderly it is studied under Protein Energy Malnutrition. It not only has short-term adverse effects but also exhibits long-term, sustained and progressive effects. Kuposhana/Bala Shosha is explained in the Ayurvedic literature, and elaborate therapeutic interventions are also described. The disease Karshya also applies to this condition. Shiva Modaka, a drug described under Bala Roga, seems to act on wide dimensions of pediatric health, with indications in common pediatric ailments as well. The present clinical study is an effort to evaluate the efficacy of the said drug on hematological, biochemical and immunological parameters in malnutrition among school-going children.
Improving Generalization via Meta-Learning on Hard Samples
Learned reweighting (LRW) approaches to supervised learning use an
optimization criterion to assign weights for training instances, in order to
maximize performance on a representative validation dataset. We pose and
formalize the problem of optimized selection of the validation set used in LRW
training, to improve classifier generalization. In particular, we show that
using hard-to-classify instances in the validation set has both a theoretical
connection to, and strong empirical evidence of, improved generalization. We provide an
efficient algorithm for training this meta-optimized model, as well as a simple
train-twice heuristic for careful comparative study. We demonstrate that LRW
with easy validation data performs consistently worse than LRW with hard
validation data, establishing the validity of our meta-optimization problem.
Our proposed algorithm outperforms a wide range of baselines on a range of
datasets and domain shift challenges (Imagenet-1K, CIFAR-100, Clothing-1M,
CAMELYON, WILDS, etc.), with ~1% gains using VIT-B on Imagenet. We also show
that using naturally hard examples for validation (Imagenet-R / Imagenet-A) in
LRW training for Imagenet improves performance on both clean and naturally hard
test instances by 1-2%. Secondary analyses show that using hard validation data
in an LRW framework improves margins on test data, hinting at the mechanism
underlying our empirical gains. We believe this work opens up new research
directions for the meta-optimization of meta-learning in a supervised learning
context.
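The learned-reweighting (LRW) loop described above pairs an inner weighted-training step with an outer step on (hard) validation data. The sketch below follows that description only at a high level; the single-layer models, the one-step inner update, the learning rates, and all variable names are assumptions rather than the paper's implementation.

    import torch

    torch.manual_seed(0)
    clf = torch.nn.Linear(8, 2)                                    # classifier
    meta = torch.nn.Sequential(torch.nn.Linear(8, 1), torch.nn.Sigmoid())
    meta_opt = torch.optim.SGD(meta.parameters(), lr=0.1)

    x_tr, y_tr = torch.randn(32, 8), torch.randint(0, 2, (32,))
    x_val, y_val = torch.randn(16, 8), torch.randint(0, 2, (16,))  # stand-in for hard validation data
    ce = torch.nn.functional.cross_entropy

    # Inner step: per-instance weights from the meta-network scale the training loss.
    w = meta(x_tr).squeeze(-1)
    tr_loss = (w * ce(clf(x_tr), y_tr, reduction="none")).mean()
    grads = torch.autograd.grad(tr_loss, list(clf.parameters()), create_graph=True)
    new_w, new_b = [p - 0.1 * g for p, g in zip(clf.parameters(), grads)]

    # Outer step: validation loss of the updated classifier trains the meta-network.
    val_loss = ce(torch.nn.functional.linear(x_val, new_w, new_b), y_val)
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()
    print(float(val_loss))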