Deep Transductive Transfer Learning for Automatic Target Recognition
One of the major obstacles in designing an automatic target recognition (ATR)
algorithm is that labeled images are often available in one domain (e.g., an
infrared source domain) but no annotated images exist in the other target
domains (e.g., visible, SAR, LiDAR). Therefore, automatically annotating these images is
essential to build a robust classifier in the target domain based on the
labeled images of the source domain. Transductive transfer learning is an
effective way to adapt a network to a new target domain by utilizing a
pretrained ATR network in the source domain. We propose an unpaired
transductive transfer learning framework where a CycleGAN model and a
well-trained ATR classifier in the source domain are used to construct an ATR
classifier in the target domain without having any labeled data in the target
domain. We employ a CycleGAN model to translate mid-wave infrared (MWIR)
images into the visible (VIS) domain (or visible images into the MWIR domain). To train the
transductive CycleGAN, we optimize a cost function consisting of the
adversarial, identity, and cycle-consistency losses, together with the
categorical cross-entropy loss for both the source and target classifiers. In this paper, we perform a
detailed experimental analysis on the challenging DSIAC ATR dataset. The
dataset consists of ten classes of vehicles at different poses and distances
ranging from 1 to 5 kilometers in both the MWIR and VIS domains. In our
experiment, we assume that the images in the VIS domain are the unlabeled
target dataset. We first detect and crop the vehicles from the raw images and
then project them into a common distance of 2 kilometers. Our proposed
transductive CycleGAN achieves 71.56% accuracy in classifying the visible
domain vehicles in the DSIAC ATR dataset.

Comment: 10 pages, 5 figures
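The combined objective described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: the least-squares form of the adversarial term, the L1 form of the cycle and identity terms, and the weights lam_cyc, lam_id, and lam_cls are assumptions (lam_cyc=10 is a common CycleGAN default), and the inputs are stand-in arrays rather than network outputs.

```python
import numpy as np

# Sketch of the four loss terms named in the abstract and their weighted sum.
# Weights and the least-squares adversarial form are assumptions.

def adversarial_loss(d_fake):
    # Least-squares GAN generator loss: push discriminator scores toward 1.
    return np.mean((d_fake - 1.0) ** 2)

def cycle_consistency_loss(x, x_reconstructed):
    # L1 distance between an image and its round-trip reconstruction
    # (source -> target -> source).
    return np.mean(np.abs(x - x_reconstructed))

def identity_loss(y, g_of_y):
    # A generator fed an image already in its output domain should
    # behave like the identity map.
    return np.mean(np.abs(y - g_of_y))

def categorical_cross_entropy(probs, labels_onehot, eps=1e-12):
    # Classification loss from the source/target ATR classifiers.
    return -np.mean(np.sum(labels_onehot * np.log(probs + eps), axis=1))

def total_loss(d_fake, x, x_rec, y, g_y, probs, labels,
               lam_cyc=10.0, lam_id=5.0, lam_cls=1.0):
    # Weighted sum of the four terms; the lambdas are illustrative.
    return (adversarial_loss(d_fake)
            + lam_cyc * cycle_consistency_loss(x, x_rec)
            + lam_id * identity_loss(y, g_y)
            + lam_cls * categorical_cross_entropy(probs, labels))
```

With perfect reconstructions, identity mappings, fooled discriminator scores, and correct one-hot predictions, every term vanishes and the total loss is zero, which is a quick sanity check on the formulation.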
Survey on Emotion Recognition Using Facial Expression
Automatic recognition of human affect has become an increasingly interesting and challenging problem in the fields of artificial intelligence, human-computer interaction, and computer vision. Facial Expression (FE) is one of the most significant cues for recognizing human emotion in daily interaction. FE Recognition (FER) has received considerable interest from psychologists and computer scientists for applications in health-care assessment, human affect analysis, and human-computer interaction. Humans express their emotions in a number of ways, including body gestures, words, vocal cues, and facial expressions. Expression is an important channel for conveying emotional information because the face is the primary means by which humans express emotion. This paper surveys current research on facial expression recognition. The study explores the facial datasets, feature extraction methods, comparative results, and future directions of facial emotion systems.