Negative Results in Computer Vision: A Perspective
A negative result is when the outcome of an experiment or a model is not what
is expected or when a hypothesis does not hold. Although often overlooked
by the scientific community, negative results are results, and they carry value.
While this topic has been extensively discussed in other fields such as social
sciences and biosciences, less attention has been paid to it in the computer
vision community. The unique characteristics of computer vision, particularly
its experimental aspect, call for a special treatment of this matter. In this
paper, I will address what makes negative results important, how they should be
disseminated and incentivized, and what lessons can be learned from cognitive
vision research in this regard. Further, I will discuss issues such as computer
vision and human vision interaction, experimental design and statistical
hypothesis testing, explanatory versus predictive modeling, performance
evaluation, model comparison, and computer vision research culture.
Exploring Robustness of Neural Networks through Graph Measures
Motivated by graph theory, artificial neural networks (ANNs) are
traditionally structured as layers of neurons (nodes), which learn useful
information by the passage of data through interconnections (edges). In the
machine learning realm, graph structures (i.e., neurons and connections) of
ANNs have recently been explored using various graph-theoretic measures linked
to their predictive performance. On the other hand, in network science
(NetSci), certain graph measures including entropy and curvature are known to
provide insight into the robustness and fragility of real-world networks. In
this work, we use these graph measures to explore the robustness of various
ANNs to adversarial attacks. To this end, we (1) explore the design space of
inter-layer and intra-layers connectivity regimes of ANNs in the graph domain
and record their predictive performance after training under different types of
adversarial attacks, (2) use graph representations for both inter-layer and
intra-layers connectivity regimes to calculate various graph-theoretic
measures, including curvature and entropy, and (3) analyze the relationship
between these graph measures and the adversarial performance of ANNs. We show
that curvature and entropy, while operating in the graph domain, can quantify
the robustness of ANNs without having to train these ANNs. Our results suggest
that real-world networks, including brain, financial, and social networks,
may provide important clues for neural architecture search for robust ANNs.
We propose a search strategy that efficiently finds robust ANNs among a set
of well-performing ANNs without needing to train all of them.
Comment: 18 pages, 15 figures
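To make the graph measures concrete, here is a minimal sketch (not the paper's implementation) of two such quantities on a toy undirected graph: the Shannon entropy of the degree distribution, and a simplified Forman-Ricci edge curvature, F(u, v) = 4 - deg(u) - deg(v) for an unweighted edge, ignoring higher-order triangle terms. The adjacency-dict representation and function names are illustrative assumptions.

```python
import math
from collections import Counter

def degree_entropy(adj):
    """Shannon entropy (in bits) of the degree distribution of a graph
    given as an adjacency dict {node: set_of_neighbors}."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    n = len(degrees)
    counts = Counter(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def forman_curvature(adj):
    """Simplified Forman-Ricci curvature per undirected edge:
    F(u, v) = 4 - deg(u) - deg(v)  (unweighted, triangle terms omitted)."""
    curv = {}
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:  # count each undirected edge once
                curv[(u, v)] = 4 - len(adj[u]) - len(adj[v])
    return curv

# Toy graph: a 4-node path 0-1-2-3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(degree_entropy(adj))    # degrees [1, 2, 2, 1] -> 1.0 bit
print(forman_curvature(adj))  # {(0, 1): 1, (1, 2): 0, (2, 3): 1}
```

On the path graph, the two endpoint edges get positive curvature and the middle edge zero; in the spirit of the abstract, such per-edge and distributional statistics can be computed directly from an ANN's connectivity graph, before any training.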