Convolutional Color Constancy
Color constancy is the problem of inferring the color of the light that
illuminated a scene, usually so that the illumination color can be removed.
Because this problem is underconstrained, it is often solved by modeling the
statistical regularities of the colors of natural objects and illumination. In
contrast, in this paper we reformulate the problem of color constancy as a 2D
spatial localization task in a log-chrominance space, thereby allowing us to
apply techniques from object detection and structured prediction to the color
constancy problem. By directly learning how to discriminate between correctly
white-balanced images and poorly white-balanced images, our model is able to
improve performance on standard benchmarks by nearly 40%.
Generating Counterfactual Explanations with Natural Language
Natural language explanations of deep neural network decisions provide an
intuitive way for an AI agent to articulate its reasoning process. Current textual
explanations learn to discuss class-discriminative features in an image.
However, it is also helpful to understand which attributes might change a
classification decision if present in an image (e.g., "This is not a Scarlet
Tanager because it does not have black wings."). We call such textual
explanations counterfactual explanations, and propose an intuitive method to
generate counterfactual explanations by inspecting which evidence in an input
is missing, but might contribute to a different classification decision if
present in the image. To demonstrate our method, we consider a fine-grained
image classification task in which we take as input an image and a
counterfactual class, and output text that explains why the image does not
belong to the counterfactual class. We then analyze our generated counterfactual
explanations both qualitatively and quantitatively using proposed automatic
metrics.
Comment: presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking
This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then verify these models with the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues in cognitive neuroscience, including the debate over whether symbolic processing or connectionism is the more suitable representation of cognitive systems, and the problem of integrating symbolic techniques, such as formal methods, with complex neural networks. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.