Transfer Learning from Audio-Visual Grounding to Speech Recognition
Transfer learning aims to reduce the amount of data required to excel at a
new task by re-using the knowledge acquired from learning other related tasks.
This paper proposes a novel transfer learning scenario that distills robust phonetic features from grounding models trained to tell whether an image and a speech segment are semantically correlated, without using any textual transcripts. Because the semantics of speech are largely determined by its lexical content, grounding models learn to preserve phonetic information while discarding uncorrelated factors such as speaker and channel. To study the properties of features distilled from different layers, we use them separately as input to train multiple speech recognition models. Empirical results demonstrate that layers closer to the input retain more phonetic information, while later layers exhibit greater invariance to domain shift. Moreover, while most previous studies include the speech recognition training data when training the feature extractor, our grounding models are not trained on any of those data, indicating broader applicability to new domains.
Comment: Accepted to Interspeech 2019. 4 pages, 2 figures
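The pattern the abstract describes, tapping the activations of different layers of a frozen model and using each layer's output as input features for a separate recognizer, can be sketched minimally in numpy. Everything here is a hypothetical toy stand-in (layer sizes, random weights, fake filterbank frames), not the paper's actual grounding architecture:

```python
import numpy as np

# Hypothetical frozen "grounding model" speech branch: a stack of
# affine + ReLU layers. We tap every layer's activations so each can
# feed its own downstream speech recognition model.
rng = np.random.default_rng(1)
layer_dims = [40, 64, 64, 32]   # e.g. 40-dim filterbank input (toy sizes)
weights = [rng.normal(scale=0.1, size=(layer_dims[i + 1], layer_dims[i]))
           for i in range(len(layer_dims) - 1)]

def distill_features(frames):
    """Return the activations of every layer for a (T, 40) utterance."""
    feats, h = [], frames
    for W in weights:
        h = np.maximum(h @ W.T, 0.0)  # affine + ReLU; weights stay frozen
        feats.append(h)
    return feats

utterance = rng.normal(size=(100, 40))   # 100 frames of fake filterbanks
per_layer = distill_features(utterance)
shapes = [f.shape for f in per_layer]    # one feature matrix per layer
```

Each entry of `per_layer` would then serve as the input representation for one recognizer, which is how the paper compares how much phonetic information each depth retains.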
Exploiting the Vulnerability of Deep Learning-Based Artificial Intelligence Models in Medical Imaging: Adversarial Attacks
Due to rapid developments in deep learning, artificial intelligence (AI) models are expected to enhance clinical diagnostic ability and work efficiency by assisting physicians. Accordingly, many hospitals and private companies are competing to develop AI-based automatic diagnostic systems using medical images, and in the near future many deep learning-based automatic diagnostic systems will be used clinically. However, the possibility of adversarial attacks exploiting vulnerabilities of deep learning algorithms is a major obstacle to deploying such systems in clinical practice. In this paper, we examine in detail the principles and methods of adversarial attacks that can be mounted against deep learning models dealing with medical images, the problems that can arise, and the preventive measures that can be taken against them.
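A canonical instance of the attacks the abstract refers to is the fast gradient sign method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. A minimal numpy sketch on a toy linear classifier (weights, shapes, and the 4-feature "image" are all hypothetical, not a medical imaging model):

```python
import numpy as np

# Toy "classifier": logits = W @ x. All values are random stand-ins.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))   # 2 classes, 4 input features
x = rng.normal(size=4)        # a clean input, e.g. a flattened image patch
y = 0                         # its true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad(W, x, y):
    """Cross-entropy loss and its gradient with respect to the input x."""
    p = softmax(W @ x)
    loss = -np.log(p[y])
    grad_logits = p.copy()
    grad_logits[y] -= 1.0          # softmax-cross-entropy gradient
    grad_x = W.T @ grad_logits     # backpropagate to the input
    return loss, grad_x

eps = 0.5                           # perturbation budget (L-infinity)
loss_clean, grad_x = loss_and_grad(W, x, y)
x_adv = x + eps * np.sign(grad_x)   # FGSM step: increase the loss
loss_adv, _ = loss_and_grad(W, x_adv, y)
```

Because each pixel moves by at most `eps`, the perturbation can be visually imperceptible while still raising the model's loss, which is what makes such attacks worrying for diagnostic systems.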