Using Case Work as a Pretest to Measure Crisis Leadership Preparedness
Today’s leaders must thrive in a world of turbulence and constant change. Unstable conditions frequently generate crises, emphasizing the need for crisis leadership preparedness, which is missing from many business curricula. Thus, the purpose of this work was to develop a learning module in crisis leadership preparedness. As a baseline measure, or pretest, 217 graduate students were asked to analyze two crisis leadership cases during the first week of an entry-level leadership class. Content analysis provided the method to identify where student analyses fell short. These gaps in learning then informed the creation of student learning objectives. Applying inquiry-based learning, I then suggest instructional methods that I incorporated into an active learning module to better prepare today’s leaders for crisis leadership.
Optimizing Multi-Domain Performance with Active Learning-based Improvement Strategies
Improving performance in multiple domains is a challenging task, and often
requires significant amounts of data to train and test models. Active learning
techniques provide a promising solution by enabling models to select the most
informative samples for labeling, thus reducing the amount of labeled data
required to achieve high performance. In this paper, we present an active
learning-based framework for improving performance across multiple domains. Our
approach consists of two stages: first, we use an initial set of labeled data
to train a base model, and then we iteratively select the most informative
samples for labeling to refine the model. We evaluate our approach on several
multi-domain datasets, including image classification, sentiment analysis, and
object recognition. Our experiments demonstrate that our approach consistently
outperforms baseline methods and achieves state-of-the-art performance on
several datasets. We also show that our method is highly efficient, requiring
significantly fewer labeled samples than other active learning-based methods.
Overall, our approach provides a practical and effective solution for improving
performance across multiple domains using active learning techniques.

Comment: 13 pages, 20 figures, draft work previously published as a Medium story
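The two-stage loop this abstract describes (train a base model on a labelled seed set, then iteratively query the most informative unlabelled samples) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dataset, the entropy-based informativeness score, and all sizes are assumptions.

```python
# Minimal sketch of a two-stage active learning loop: fit on a labelled seed
# set, then repeatedly label the highest-entropy unlabelled samples.
# Dataset, scoring rule, and budget sizes are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

labelled = list(range(20))            # stage 1: initial labelled seed set
unlabelled = list(range(20, 500))     # pool to query from

model = LogisticRegression(max_iter=1000)
for _ in range(5):                    # stage 2: five acquisition rounds
    model.fit(X[labelled], y[labelled])
    probs = model.predict_proba(X[unlabelled])
    # predictive entropy as the informativeness score
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    picked = np.argsort(entropy)[-10:]        # query the 10 most uncertain
    for i in sorted(picked, reverse=True):    # pop high indices first
        labelled.append(unlabelled.pop(i))

print(len(labelled))  # 70 samples labelled after 5 rounds of 10
```

The claimed efficiency gain would show up here as reaching a target accuracy with fewer acquisition rounds than random sampling.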
Active learning for medical image segmentation with stochastic batches
The performance of learning-based algorithms improves with the amount of
labelled data used for training. Yet, manually annotating data is particularly
difficult for medical image segmentation tasks because of the limited expert
availability and intensive manual effort required. To reduce manual labelling,
active learning (AL) targets the most informative samples from the unlabelled
set to annotate and add to the labelled training set. On the one hand, most
active learning works have focused on the classification or limited
segmentation of natural images, despite active learning being highly desirable
in the difficult task of medical image segmentation. On the other hand,
uncertainty-based AL approaches notoriously offer sub-optimal batch-query
strategies, while diversity-based methods tend to be computationally expensive.
Over and above methodological hurdles, random sampling has proven an extremely
difficult baseline to outperform when varying learning and sampling conditions.
This work aims to take advantage of the diversity and speed offered by random
sampling to improve the selection of uncertainty-based AL methods for
segmenting medical images. More specifically, we propose to compute uncertainty
at the level of batches instead of samples through an original use of
stochastic batches (SB) during sampling in AL. Stochastic batch querying is a
simple and effective add-on that can be used on top of any uncertainty-based
metric. Extensive experiments on two medical image segmentation datasets show
that our strategy consistently improves conventional uncertainty-based sampling
methods. Our method can hence act as a strong baseline for medical image
segmentation. The code is available at:
https://github.com/Minimel/StochasticBatchAL.git

Comment: Accepted to Medical Image Analysis, 17 pages
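The core idea of computing uncertainty at the level of batches rather than individual samples can be sketched as below: draw several random candidate batches from the unlabelled pool and query the one with the highest aggregate uncertainty. The function name, the mean-uncertainty aggregation, and the batch sizes are assumptions for illustration; the authors' repository contains the actual method.

```python
# Hedged sketch of stochastic batch (SB) querying: score random candidate
# batches by their mean per-sample uncertainty and keep the best batch.
# This add-on works on top of any per-sample uncertainty metric.
import numpy as np

rng = np.random.default_rng(42)
uncertainty = rng.random(1000)   # stand-in per-sample uncertainty scores

def select_stochastic_batch(scores, pool, batch_size=16, n_candidates=50):
    """Return the random candidate batch with the highest mean uncertainty."""
    best_batch, best_score = None, -np.inf
    for _ in range(n_candidates):
        batch = rng.choice(pool, size=batch_size, replace=False)
        score = scores[batch].mean()
        if score > best_score:
            best_batch, best_score = batch, score
    return best_batch

pool = np.arange(1000)
batch = select_stochastic_batch(uncertainty, pool)
print(len(batch))
```

Because candidates are drawn uniformly at random, the query inherits the diversity of random sampling while still preferring uncertain regions, which is the trade-off the abstract highlights.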
Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees
Deep Reinforcement Learning (DRL) has achieved impressive success in many
applications. A key component of many DRL models is a neural network
representing a Q function, to estimate the expected cumulative reward following
a state-action pair. The Q function neural network contains a lot of implicit
knowledge about the RL problems, but often remains unexamined and
uninterpreted. To our knowledge, this work develops the first mimic learning
framework for Q functions in DRL. We introduce Linear Model U-trees (LMUTs) to
approximate neural network predictions. An LMUT is learned using a novel
on-line algorithm that is well-suited for an active play setting, where the
mimic learner observes an ongoing interaction between the neural net and the
environment. Empirical evaluation shows that an LMUT mimics a Q function
substantially better than five baseline methods. The transparent tree structure
of an LMUT facilitates understanding the network's learned knowledge by
analyzing feature influence, extracting rules, and highlighting the
super-pixels in image inputs.

Comment: This paper is accepted by ECML-PKDD 2018
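The mimic-learning setup described here can be sketched with an ordinary regression tree standing in for an LMUT: fit a transparent tree model on (state-action features, Q-value) pairs produced by the black-box network, then read feature influence off the fitted tree. Note the simplifications: scikit-learn has no LMUT (the real model fits a linear model in each leaf and is trained on-line during active play), and the "Q function" below is a synthetic stand-in.

```python
# Sketch of mimic learning for a Q function: a regression tree approximates
# black-box Q-value predictions, and its structure exposes feature influence.
# DecisionTreeRegressor is a simplified stand-in for the paper's LMUT.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
features = rng.random((2000, 4))                        # state-action features
q_values = features @ np.array([1.0, -2.0, 0.5, 0.0])   # stand-in Q outputs

mimic = DecisionTreeRegressor(max_depth=5, random_state=0)
mimic.fit(features, q_values)

# Transparent structure: per-feature influence from the fitted tree
influence = mimic.feature_importances_
print(influence.round(3))
```

In this toy setup the feature with the largest-magnitude weight dominates the importances, mirroring the feature-influence analysis the abstract describes.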