240 research outputs found
Personalized Course Sequence Recommendations
Given the variability in student learning, it is becoming increasingly
important to tailor courses, as well as course sequences, to student needs. This
paper presents a systematic methodology for offering personalized course
sequence recommendations to students. First, a forward-search
backward-induction algorithm is developed that can optimally select course
sequences to decrease the time required for a student to graduate. The
algorithm accounts for prerequisite requirements (typically present in higher
level education) and course availability. Second, using the tools of
multi-armed bandits, an algorithm is developed that can optimally recommend a
course sequence that both reduces the time to graduate and increases
the overall GPA of the student. The algorithm dynamically learns how students
with different contextual backgrounds perform for given course sequences and
then recommends an optimal course sequence for new students. Using real-world
student data from the UCLA Mechanical and Aerospace Engineering department, we
illustrate how the proposed algorithms outperform other methods that do not
include student contextual information when making course sequence
recommendations.
Mean almost periodicity and moment exponential stability of discrete-time stochastic shunting inhibitory cellular neural networks with time delays
By using the semi-discrete method for differential equations, a new discrete analogue of stochastic shunting inhibitory cellular neural networks (SICNNs) is formulated, which characterizes continuous-time stochastic SICNNs more accurately than the Euler scheme does. Firstly, the existence of a 2nd-mean almost periodic sequence solution of the discrete-time stochastic SICNNs is investigated with the help of the Minkowski inequality, the Hölder inequality, and Krasnoselskii's fixed point theorem. Secondly, the moment global exponential stability of the discrete-time stochastic SICNNs is studied using some analytical techniques and proof by contradiction. Finally, two examples are given to demonstrate that our results are feasible. Through numerical simulations, we discuss the effect of stochastic perturbation on the almost periodicity and global exponential stability of the discrete-time stochastic SICNNs.
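Why the semi-discrete analogue is more faithful than the Euler scheme can be illustrated on a scalar linear test equation (this is a hedged sketch of the general technique, not the paper's derivation; the SICNN coupling and stochastic terms are omitted). Each cell shares the decay structure $x'(t) = -a\,x(t) + g(t)$ with $a > 0$. Integrating exactly over one unit step $[n, n+1]$ with $g$ frozen at $g(n)$ gives the semi-discrete update, versus the Euler update:

```latex
% Semi-discrete scheme: exact integration of the linear part over [n, n+1]
\[
  x(n+1) \;=\; e^{-a}\, x(n) \;+\; \frac{1 - e^{-a}}{a}\, g(n)
\]
% Euler scheme: first-order approximation of the same step
\[
  x(n+1) \;=\; (1 - a)\, x(n) \;+\; g(n)
\]
```

The semi-discrete update keeps the exact decay factor $e^{-a}$, so it inherits the stability of the continuous model for every $a > 0$, whereas the Euler update is stable only for $a < 2$. This is the sense in which the discrete analogue gives a more accurate characterization.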
A Benchmark of Long-tailed Instance Segmentation with Noisy Labels (Short Version)
In this paper, we consider the instance segmentation task on a long-tailed
dataset, which contains label noise, i.e., some of the annotations are
incorrect. Two main reasons make this setting realistic. First,
datasets collected from the real world usually follow a long-tailed distribution.
Second, instance segmentation datasets contain many instances per image, some
of them tiny, which makes it easy to introduce noise into the
annotations. Specifically, we propose a new dataset, which is a large
vocabulary long-tailed dataset containing label noise for instance
segmentation. Furthermore, we evaluate previously proposed instance segmentation
algorithms on this dataset. The results indicate that noise in the training
dataset hampers the model's ability to learn rare categories and decreases the
overall performance, motivating us to explore more effective approaches to
address this practical challenge. The code and dataset are available at
https://github.com/GuanlinLee/Noisy-LVIS
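The interplay of long tails and label noise described above can be sketched with a toy noise-injection routine. This is an illustrative assumption, not the paper's dataset construction: it flips each label symmetrically with some probability, showing why rare classes, with very few clean samples to begin with, suffer disproportionately.

```python
import random

# Illustrative sketch: inject symmetric label noise into a long-tailed
# label list. With, say, 90/9/1 samples per class, even a modest noise
# rate can corrupt most of a rare class's clean examples.
def inject_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            # flip to a uniformly chosen *other* class
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy
```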
Adversarial Training Over Long-Tailed Distribution
In this paper, we study adversarial training on datasets that obey the
long-tailed distribution, which is practical but rarely explored in previous
works. Compared with conventional adversarial training on balanced datasets,
this process falls into the dilemma of generating uneven adversarial examples
(AEs) and an unbalanced feature embedding space, causing the resulting model to
exhibit low robustness and accuracy on tail data. To address this, we propose a
new adversarial training framework -- Re-balancing Adversarial Training (REAT).
This framework consists of two components: (1) a new training strategy inspired
by the term effective number to guide the model to generate more balanced and
informative AEs; (2) a carefully constructed penalty function to force a
satisfactory feature space. Evaluation results on different datasets and model
structures prove that REAT can effectively enhance the model's robustness and
preserve the model's clean accuracy. The code can be found at
https://github.com/GuanlinLee/REAT
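The "effective number" that REAT's training strategy builds on comes from class-balanced loss weighting (Cui et al., CVPR 2019): the effective number of samples in a class with n examples is (1 − β^n)/(1 − β). A hedged sketch of the resulting per-class weights follows; how REAT uses them to balance adversarial example generation may differ.

```python
# Class-balanced weights via the "effective number" of samples.
# Tail classes (small n) get larger weights than head classes.
def class_balanced_weights(counts, beta=0.999):
    effective = [(1 - beta ** n) / (1 - beta) for n in counts]
    weights = [1.0 / e for e in effective]
    s = sum(weights)
    # normalize so the weights sum to the number of classes
    return [w * len(counts) / s for w in weights]
```

As β approaches 1 the effective number approaches the raw count n, recovering inverse-frequency weighting; as β approaches 0 every class weight approaches 1.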
A Stealthy and Robust Fingerprinting Scheme for Generative Models
This paper presents a novel fingerprinting methodology for the Intellectual
Property protection of generative models. Prior solutions for discriminative
models usually adopt adversarial examples as fingerprints, which induce
anomalous inference behaviors and prediction results. Hence, these methods are
not stealthy and can be easily detected by the adversary. Our approach
leverages the invisible backdoor technique to overcome the above limitation.
Specifically, we design verification samples, whose model outputs look normal
but can trigger a backdoor classifier to make abnormal predictions. We propose
a new backdoor embedding approach with Unique-Triplet Loss and fine-grained
categorization to enhance the effectiveness of our fingerprints. Extensive
evaluations show that this solution can outperform other strategies with higher
robustness, uniqueness, and stealthiness for various GAN models.
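The paper's Unique-Triplet Loss is not specified in the abstract; below is a hedged sketch of the standard triplet margin loss it presumably extends, which pulls an anchor embedding toward a positive sample and pushes it away from a negative one by at least a margin.

```python
import math

# Euclidean distance between two embedding vectors.
def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Standard triplet margin loss: zero once the negative is farther from
# the anchor than the positive by at least `margin`.
def triplet_loss(anchor, positive, negative, margin=1.0):
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)
```

A "unique" variant would additionally need to separate the verification samples' embeddings from those of all other models, but that design is an assumption beyond what the abstract states.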
A novel facial expression recognition approach based on curvelet features
The curvelet transform has recently proved to be a powerful tool for multi-resolution image analysis. In this paper we propose a new approach for facial expression recognition based on features extracted via the curvelet transform. First, the curvelet transform is presented and its advantages in image analysis are described. Then the curvelet coefficients at selected scales and angles are used as features for image analysis. Next, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are used to reduce and optimize the curvelet features. Finally, we use the nearest neighbor classifier to recognize facial expressions based on these features. The experimental results on two benchmark databases, JAFFE and Cohn-Kanade, show that the proposed approach outperforms the PCA and LDA techniques applied to the original image pixel values, as well as their counterparts using wavelet features.
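The back half of the pipeline above (dimensionality reduction followed by nearest-neighbor classification) can be sketched as follows. This is a minimal illustration under stated assumptions: the curvelet feature extraction and the LDA step are omitted, and the function names are our own.

```python
import numpy as np

# Fit PCA: mean-center the data and take the top-k principal axes
# from the SVD of the centered matrix.
def pca_fit(X, k):
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

# 1-nearest-neighbor rule: each test point takes the label of its
# closest training point in the reduced feature space.
def nearest_neighbor_predict(train_X, train_y, test_X):
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[np.argmin(dists, axis=1)]
```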
Omnipotent Adversarial Training in the Wild
Adversarial training is an important topic in robust deep learning, but its
practical usage has received little attention from the community. In this paper, we aim to
resolve a real-world challenge, i.e., training a model on an imbalanced and
noisy dataset to achieve high clean accuracy and adversarial robustness, with
our proposed Omnipotent Adversarial Training (OAT) strategy. OAT consists of
two innovative methodologies to address the imperfection in the training set.
We first introduce an oracle into the adversarial training process to help the
model learn a correct data-label conditional distribution. This
carefully-designed oracle can provide correct label annotations for adversarial
training. We further propose logits adjustment adversarial training to overcome
the data imbalance issue, which can help the model learn a Bayes-optimal
distribution. Our comprehensive evaluation results show that OAT outperforms
other baselines by more than 20% in clean accuracy and 10% in robust
accuracy under complex combinations of data imbalance and label
noise. The code can be found at https://github.com/GuanlinLee/OAT
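The logits-adjustment component can be sketched, in a hedged way, in the spirit of logit adjustment for long-tailed recognition (Menon et al., 2021): shift each class logit by the (scaled) log of that class's prior so that training targets a balanced, Bayes-optimal classifier. OAT's exact formulation may differ from this illustration.

```python
import math

# Adjust raw logits by tau * log(class prior). Head classes (large
# counts) receive a smaller downward shift relative to tail classes,
# which counteracts the head bias learned from imbalanced data.
def adjust_logits(logits, class_counts, tau=1.0):
    total = sum(class_counts)
    priors = [c / total for c in class_counts]
    return [z + tau * math.log(p) for z, p in zip(logits, priors)]
```

With tau = 0 the adjustment vanishes and the raw logits are recovered; larger tau pushes the decision boundary further toward the head classes during training.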