Understanding deep learning
Deep neural networks have reached impressive performance in many tasks in computer vision and its applications. However, research into understanding deep neural networks is hampered by the difficulty of evaluation: since it is unknown which features deep neural networks use, it is hard to empirically evaluate whether a claim about which feature a deep neural network uses is correct. The state-of-the-art for understanding which features a deep neural network uses to reach its prediction is saliency maps. However, all methods built on saliency maps share shortcomings that open a gap between the current state-of-the-art and the requirements for understanding deep neural networks. This work describes a method that does not suffer from these shortcomings. To this end, we employ the framework of causal modeling to determine whether a feature is used by the neural network. We present theoretical evidence that our method is able to correctly identify if a feature is used. Furthermore, we demonstrate two studies as empirical evidence. First, we show that our method can further the understanding of automatic skin lesion classifiers. There, we find that some of the features in the ABCD rule are used by the classifiers to identify melanoma but not to identify seborrheic keratosis. In contrast, all classifiers rely heavily on the bias variables, particularly the age of the patient and the existence of colorful patches in the input image. Second, we apply our method to adversarial debiasing. In adversarial debiasing, we want to stop a neural network from using a known bias variable. We demonstrate in a toy example and an example on real-world images that our approach outperforms the state-of-the-art in adversarial debiasing.
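The interventional test this abstract describes can be sketched in a deliberately simplified form. The `model`, the `intervene` function, and the decision threshold below are hypothetical stand-ins for illustration only, not the paper's actual causal-modeling procedure: a feature is treated as "used" if performing a do-style intervention on it shifts the model's predictions.

```python
import numpy as np

def feature_used(model, inputs, intervene, n_trials=100, threshold=0.01):
    """Toy interventional test (hypothetical sketch, not the paper's method):
    a feature counts as 'used' if intervening on it (a do-operation that
    alters only that feature) measurably shifts the model's outputs."""
    baseline = model(inputs)
    shifts = []
    for _ in range(n_trials):
        # apply the intervention and measure the mean absolute output shift
        shifts.append(np.abs(model(intervene(inputs)) - baseline).mean())
    return bool(np.mean(shifts) > threshold)

# Demo: a toy 'model' that reads only feature 0 of each input row.
rng = np.random.default_rng(0)
inputs = rng.random((10, 4))
model = lambda x: x[:, 0]

def randomize_feature(j):
    def intervene(x):
        y = x.copy()
        y[:, j] = rng.random(len(x))  # do-operation on feature j only
        return y
    return intervene

print(feature_used(model, inputs, randomize_feature(0)))  # feature 0 is used
print(feature_used(model, inputs, randomize_feature(1)))  # feature 1 is not
```

Randomizing feature 0 shifts the output, while randomizing feature 1 leaves it untouched, so the test separates the two cases on this toy model.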
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
Despite the improved accuracy of deep neural networks, the discovery of
adversarial examples has raised serious safety concerns. Most existing
approaches for crafting adversarial examples necessitate some knowledge
(architecture, parameters, etc.) of the network at hand. In this paper, we
focus on image classifiers and propose a feature-guided black-box approach to
test the safety of deep neural networks that requires no such knowledge. Our
algorithm employs object detection techniques such as SIFT (Scale Invariant
Feature Transform) to extract features from an image. These features are
converted into a mutable saliency distribution, where high probability is
assigned to pixels that affect the composition of the image with respect to the
human visual system. We formulate the crafting of adversarial examples as a
two-player turn-based stochastic game, where the first player's objective is to
minimise the distance to an adversarial example by manipulating the features,
and the second player can be cooperative, adversarial, or random. We show that,
theoretically, the two-player game can converge to the optimal strategy, and
that the optimal strategy represents a globally minimal adversarial image. For
Lipschitz networks, we also identify conditions that provide safety guarantees
that no adversarial examples exist. Using Monte Carlo tree search we gradually
explore the game state space to search for adversarial examples. Our
experiments show that, despite the black-box setting, manipulations guided by a
perception-based saliency distribution are competitive with state-of-the-art
methods that rely on white-box saliency matrices or sophisticated optimization
procedures. Finally, we show how our method can be used to evaluate robustness
of neural networks in safety-critical applications such as traffic sign
recognition in self-driving cars.
Comment: 35 pages, 5 tables, 23 figures
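The first step of the pipeline, turning feature responses into a pixel-level saliency distribution, can be sketched as follows. A real implementation would use SIFT keypoint responses (e.g. via OpenCV's `cv2.SIFT_create`); here a gradient-magnitude map stands in as an assumed, simplified feature response, so this is an illustrative sketch rather than the paper's algorithm.

```python
import numpy as np

def saliency_distribution(img):
    """Convert a feature-response map into a probability distribution over
    pixels (high probability = pixels likely to matter perceptually).
    Gradient magnitude is used as a stand-in for SIFT responses."""
    gy, gx = np.gradient(img.astype(float))
    response = np.hypot(gx, gy)
    total = response.sum()
    if total == 0:  # featureless image: fall back to a uniform distribution
        return np.full(img.shape, 1.0 / img.size)
    return response / total

def sample_pixels(p, k, rng):
    """Draw k distinct pixel coordinates to manipulate, weighted by saliency,
    as a player's move in the adversarial search."""
    flat = rng.choice(p.size, size=k, replace=False, p=p.ravel())
    return np.unravel_index(flat, p.shape)

# Demo on a tiny image with a single bright pixel.
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[4, 4] = 1.0
p = saliency_distribution(img)
ys, xs = sample_pixels(p, 4, rng)
```

Sampling moves from this distribution concentrates the game-tree search on perceptually influential pixels, which is the intuition behind the feature-guided manipulation described above.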
An Integrated Architecture and Feature Selection Algorithm for Radial Basis Neural Networks
There are two basic ways to control an Unmanned Combat Aerial Vehicle (UCAV) as it searches for targets: allow the UCAV to act autonomously or employ man-in-the-loop control. There are also two target sets of interest: fixed or mobile targets. This research focuses on UCAV-based targeting of mobile targets using man-in-the-loop control. In particular, the interest is in how levels of satellite signal latency or signal degradation affect the ability to accurately track, target, and attack mobile targets. This research establishes a weapon effectiveness model assessing targeting inaccuracies as a function of signal latency and/or signal degradation. The research involved three phases. The first phase was to identify the levels of signal latency associated with satellite communications. A literature review, supplemented by interviews with UAV operators, provided insight into the expected range of latency values. The second phase identified those factors whose value, in the presence of satellite signal latency, could influence targeting errors during UCAV employment. The final phase involved developing and testing a weapon effectiveness model explicitly modeling satellite signal latency in UCAV targeting against mobile targets. This phase included an effectiveness analysis study.
Operator State Estimation for Adaptive Aiding in Uninhabited Combat Air Vehicles
This research demonstrated the first closed-loop implementation of adaptive automation using operator functional state in an operationally relevant environment. In the Uninhabited Combat Air Vehicle (UCAV) environment, operators can become cognitively overloaded and their performance may decrease during mission-critical events. This research demonstrates an unprecedented closed-loop system, one that adaptively aids UCAV operators based on their cognitive functional state. A series of experiments was conducted to 1) determine the best classifiers for estimating operator functional state, 2) determine if physiological measures can be used to develop multiple cognitive models based on information processing demands and task type, 3) determine the salient psychophysiological measures in operator functional state, and 4) demonstrate the benefits of intelligent adaptive aiding using operator functional state. Aiding the operator improved performance and increased mission effectiveness by 67%.
Fast and accurate classification of echocardiograms using deep learning
Echocardiography is essential to modern cardiology. However, human
interpretation limits high throughput analysis, limiting echocardiography from
reaching its full clinical and research potential for precision medicine. Deep
learning is a cutting-edge machine-learning technique that has been useful in
analyzing medical images but has not yet been widely applied to
echocardiography, partly due to the complexity of echocardiograms' multi-view,
multi-modality format. The essential first step toward comprehensive computer
assisted echocardiographic interpretation is determining whether computers can
learn to recognize standard views. To this end, we anonymized 834,267
transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51
percent female, 26 percent obese) seen between 2000 and 2017 and labeled them
according to standard views. Images covered a range of real world clinical
variation. We built a multilayer convolutional neural network and used
supervised learning to simultaneously classify 15 standard views. Eighty
percent of data used was randomly chosen for training and 20 percent reserved
for validation and testing on never seen echocardiograms. Using multiple images
from each clip, the model classified among 12 video views with 97.8 percent
overall test accuracy without overfitting. Even on single low resolution
images, test accuracy among 15 views was 91.7 percent versus 70.2 to 83.5
percent for board-certified echocardiographers. Confusion matrices, occlusion
experiments, and saliency mapping showed that the model finds recognizable
similarities among related views and classifies using clinically relevant image
features. In conclusion, deep neural networks can classify essential
echocardiographic views simultaneously and with high accuracy. Our results
provide a foundation for more complex deep learning assisted echocardiographic
interpretation.
Comment: 31 pages, 8 figures
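The clip-level classification step ("using multiple images from each clip") can be sketched as follows. The abstract does not specify the aggregation rule, so simple averaging of per-frame class probabilities is an assumption here, and `classify_clip` is a hypothetical helper name.

```python
import numpy as np

def classify_clip(frame_probs):
    """Aggregate per-frame class probabilities (n_frames x n_views) into a
    single clip-level view label by averaging, then taking the argmax.
    Averaging is an assumed aggregation rule, not the paper's stated one."""
    mean_probs = np.mean(frame_probs, axis=0)
    return int(np.argmax(mean_probs))

# Demo: three frames of a two-view problem; frames mostly favor view 1.
frame_probs = [[0.6, 0.4],
               [0.2, 0.8],
               [0.1, 0.9]]
print(classify_clip(frame_probs))
```

Pooling across frames smooths out single-frame ambiguity (here the first frame alone would have voted for view 0), which is one plausible reason multi-image classification outperforms the single-image setting reported above.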