To which extent is the "neural code" a metric?
Here we propose a review of the different choices for structuring spike trains using deterministic metrics. Temporal constraints observed in biological or computational spike trains are first taken into account. The relation to existing neural codes (rate coding, rank coding, phase coding, ...) is then discussed. To what extent the "neural code" contained in spike trains is related to a metric appears to be a key point; a generalization of the Victor-Purpura metric family is proposed for temporally constrained causal spike trains.
Comment: 5 pages, 5 figures. Proceedings of the conference NeuroComp200. ISBN: 978-2-9532965-0-1
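As background for the metric family referenced above: the original Victor-Purpura distance is an edit distance on spike trains, where deleting or inserting a spike costs 1 and shifting a spike by a time dt costs q * |dt|. A minimal Python sketch of this classic metric follows (illustrative function names; this is not the temporally constrained causal generalization the paper proposes):

```python
import numpy as np

def victor_purpura_distance(s1, s2, q):
    """Victor-Purpura distance between two sorted spike-time arrays.

    Edit costs: 1 to insert or delete a spike, q * |dt| to shift one.
    q = 0 recovers a pure spike-count (rate) comparison; large q
    approaches a strict coincidence-based comparison.
    """
    n, m = len(s1), len(s2)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1)  # delete all i leading spikes of s1
    dp[0, :] = np.arange(m + 1)  # insert all j leading spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = min(
                dp[i - 1, j] + 1,                                   # delete
                dp[i, j - 1] + 1,                                   # insert
                dp[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]),  # shift
            )
    return dp[n, m]
```

The parameter q interpolates between pure rate coding and strict spike-timing comparison, which is the sense in which the choice of metric encodes assumptions about the neural code.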
The Effect of Intrinsic Dataset Properties on Generalization: Unraveling Learning Differences Between Natural and Medical Images
This paper investigates discrepancies in how neural networks learn from
different imaging domains, which are commonly overlooked when adopting computer
vision techniques from the domain of natural images to other specialized
domains such as medical images. Recent works have found that the generalization
error of a trained network typically increases with the intrinsic dimension
($d_{\mathrm{data}}$) of its training set. Yet, the steepness of this relationship
varies significantly between medical (radiological) and natural imaging
domains, with no existing theoretical explanation. We address this gap in
knowledge by establishing and empirically validating a generalization scaling
law with respect to $d_{\mathrm{data}}$, and propose that the substantial scaling
discrepancy between the two considered domains may be at least partially
attributed to the higher intrinsic "label sharpness" ($K_F$) of
medical imaging datasets, a metric which we propose. Next, we demonstrate an
additional benefit of measuring the label sharpness of a training set: it is
negatively correlated with the trained model's adversarial robustness, which
notably leads to models for medical images having a substantially higher
vulnerability to adversarial attack. Finally, we extend our $d_{\mathrm{data}}$ formalism to the related metric of learned representation intrinsic dimension ($d_{\mathrm{repr}}$), derive a generalization scaling law with respect to $d_{\mathrm{repr}}$, and show that $d_{\mathrm{data}}$ serves as an upper bound for $d_{\mathrm{repr}}$. Our
theoretical results are supported by thorough experiments with six models and
eleven natural and medical imaging datasets over a range of training set sizes.
Our findings offer insights into the influence of intrinsic dataset properties
on generalization, representation learning, and robustness in deep neural
networks.
Comment: ICLR 2024. Code: https://github.com/mazurowski-lab/intrinsic-properties
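The abstract does not restate how the intrinsic dimension $d_{\mathrm{data}}$ is estimated. One standard estimator for image datasets is TwoNN (Facco et al., 2017), which uses only the ratio of each point's two nearest-neighbor distances; a minimal sketch under that assumption (not the authors' released code):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_intrinsic_dimension(X):
    """Maximum-likelihood TwoNN estimate of intrinsic dimension.

    X: (N, D) array, e.g. flattened images; assumes no duplicate rows.
    The ratios mu_i = r2_i / r1_i of second- to first-nearest-neighbor
    distances follow a Pareto law with exponent d, giving the MLE below.
    """
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dists[:, 2] / dists[:, 1]  # column 0 is the point itself (r = 0)
    return len(X) / np.sum(np.log(mu))
```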
CodNN -- Robust Neural Networks From Coded Classification
Deep Neural Networks (DNNs) are a driving force of the ongoing information revolution, and yet their intrinsic properties remain a mystery. In particular, it is widely known that DNNs are highly sensitive to noise, whether adversarial or random. This poses a fundamental challenge for hardware implementations of DNNs and for their deployment in safety-critical applications such as autonomous driving. In this paper we construct robust DNNs via error-correcting codes: either the data or the internal layers of the DNN are encoded with an error-correcting code, and successful computation under
noise is guaranteed. Since DNNs can be seen as a layered concatenation of
classification tasks, our research begins with the core task of classifying
noisy coded inputs, and progresses towards robust DNNs. We focus on binary data
and linear codes. Our main result is that the prevalent parity code can
guarantee robustness for a large family of DNNs, which includes the recently
popularized binarized neural networks. Further, we show that the coded
classification problem has a deep connection to Fourier analysis of Boolean
functions. In contrast to existing solutions in the literature, our results do
not rely on altering the training process of the DNN, and provide
mathematically rigorous guarantees rather than experimental evidence.
Comment: To appear in ISIT '2
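As a toy illustration of the parity code underlying this construction (single-bit-error detection on binary data; not the paper's full coded-classification scheme), consider the following sketch with illustrative function names:

```python
import numpy as np

def parity_encode(x):
    """Append an even-parity bit to the binary vector x."""
    return np.append(x, x.sum() % 2)

def parity_check(c):
    """True iff the codeword c has even parity, i.e. no single bit flip."""
    return c.sum() % 2 == 0

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)   # raw binary input
c = parity_encode(x)
assert parity_check(c)           # clean codeword passes the check

c[3] ^= 1                        # a single noisy bit flip...
assert not parity_check(c)       # ...is detected by the parity check
```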