Interpretable Convolutional Neural Networks
This paper proposes a method to modify traditional convolutional neural
networks (CNNs) into interpretable CNNs, in order to clarify knowledge
representations in high conv-layers of CNNs. In an interpretable CNN, each
filter in a high conv-layer represents a certain object part. We do not need
any annotations of object parts or textures to supervise the learning process.
Instead, during the learning process, the interpretable CNN automatically
assigns an object part to each filter in a high conv-layer. Our method can be
applied to different types of CNNs with different structures. The clear
knowledge representation in an interpretable CNN can help people understand the
logic inside a CNN, i.e., which patterns the CNN bases its decisions on.
Experiments showed that filters in an interpretable CNN were more semantically
meaningful than those in traditional CNNs.

Comment: In this version, we release the website of the code. Compared to the
previous version, we have corrected all values of location instability in
Tables 3--6 by dividing them by sqrt(2), i.e., a = a/sqrt(2). Such revisions
do NOT decrease the significance of our method's superior performance, because
we applied the same correction to the location-instability values of all
baselines.
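
The abstract does not give implementation details, but the core idea it describes, pushing each high conv-layer filter to respond to a single localized object part, can be illustrated with a minimal PyTorch sketch. The module name PartTemplateMask, the decay parameter tau, and the L1-shaped single-peak template below are illustrative assumptions; the paper's actual part templates and its learning loss are not reproduced here.

# Minimal sketch (assumptions noted above): mask each channel's feature map
# around its strongest activation so the filter fires on one localized region.
import torch
import torch.nn as nn

class PartTemplateMask(nn.Module):
    """Suppress activations far from each filter's spatial peak (illustrative)."""

    def __init__(self, tau: float = 0.5):
        super().__init__()
        self.tau = tau  # decay rate of the template; an assumed hyperparameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) activations from a high conv-layer
        b, c, h, w = x.shape
        flat = x.flatten(2)                                  # (b, c, H*W)
        peak = flat.argmax(dim=2)                            # strongest unit per filter
        mu_y = torch.div(peak, w, rounding_mode="floor").float()  # peak row
        mu_x = (peak % w).float()                                 # peak column

        ys = torch.arange(h, device=x.device).view(1, 1, h, 1).float()
        xs = torch.arange(w, device=x.device).view(1, 1, 1, w).float()
        # L1 distance of every unit to the peak, normalized by the map size
        dist = (ys - mu_y.view(b, c, 1, 1)).abs() + (xs - mu_x.view(b, c, 1, 1)).abs()
        template = torch.clamp(1.0 - self.tau * dist / max(h, w), min=0.0)

        # Each filter now responds to (at most) one contiguous region,
        # i.e., one candidate object part.
        return x * template

# Usage on dummy top conv-layer features:
feats = torch.randn(2, 512, 14, 14)
masked = PartTemplateMask()(feats)

Because the module only rescales activations, it can be inserted after the top conv-layer of an existing backbone without changing the rest of the network, consistent with the abstract's claim that the method applies to CNNs with different structures.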