Supervised Classification: Quite a Brief Overview
The original problem of supervised classification considers the task of
automatically assigning objects to their respective classes on the basis of
numerical measurements derived from these objects. Classifiers are the tools
that implement the actual functional mapping from these measurements---also
called features or inputs---to the so-called class label---or output. The
fields of pattern recognition and machine learning study ways of constructing
such classifiers. The main idea behind supervised methods is that of learning
from examples: given a number of example input-output relations, to what extent
can the general mapping be learned that takes any new and unseen feature vector
to its correct class? This chapter provides a basic introduction to the
underlying ideas of how one arrives at a supervised classification problem. In
addition, it provides an overview of some specific classification techniques,
delves into the issues of object representation and classifier evaluation, and
(very) briefly covers some variations on the basic supervised classification
task that may also be of interest to the practitioner.
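The learning-from-examples idea described above can be made concrete with a minimal sketch. The classifier below is a simple nearest-mean (nearest-centroid) rule, chosen here only for illustration; the toy two-class data and all parameter values are assumptions, not from the chapter.

```python
import numpy as np

# Hypothetical toy data: two classes of 2-D feature vectors,
# class 0 centered at (0, 0) and class 1 centered at (3, 3).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 2)),   # examples of class 0
                     rng.normal(3, 1, (20, 2))])  # examples of class 1
y_train = np.array([0] * 20 + [1] * 20)

# "Learning": estimate one centroid per class from the labeled examples.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Map a new, unseen feature vector to the class with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(predict(np.array([-0.5, 0.2])))  # near the class-0 cluster -> 0
print(predict(np.array([3.2, 2.8])))   # near the class-1 cluster -> 1
```

The learned mapping generalizes to feature vectors that never appeared in the training set, which is exactly the question the abstract poses.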
A Spectral Theory of Neural Prediction and Alignment
The representations of neural networks are often compared to those of
biological systems by performing regression between the neural network
responses and those measured from biological systems. Many different
state-of-the-art deep neural networks yield similar neural predictions, but it
remains unclear how to differentiate among models that perform equally well at
predicting neural responses. To gain insight into this, we use a recent
theoretical framework that relates the generalization error from regression to
the spectral bias of the model activations and the alignment of the neural
responses onto the learnable subspace of the model. We extend this theory to
the case of regression between model activations and neural responses, and
define geometrical properties describing the error embedding geometry. We test
a large number of deep neural networks that predict visual cortical activity
and show that there are multiple types of geometries that result in low neural
prediction error as measured via regression. The work demonstrates that
carefully decomposing representational metrics can provide interpretability of
how models are capturing neural activity and points the way towards improved
models of neural activity.
Comment: First two authors contributed equally. To appear at NeurIPS 202
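The regression-plus-spectral-decomposition setup the abstract refers to can be sketched as follows. This is a minimal illustration assuming synthetic data, a ridge regression from model activations to neural responses, and an eigendecomposition of the activation covariance; it is not the authors' actual pipeline, and all sizes and the penalty value are invented for the example.

```python
import numpy as np

# Hypothetical data: activations A of a model and neural responses Y,
# both recorded for the same set of stimuli.
rng = np.random.default_rng(1)
n_stim, n_feat, n_neurons = 100, 50, 10
A = rng.normal(size=(n_stim, n_feat))                       # model activations
Y = (A @ rng.normal(size=(n_feat, n_neurons))) * 0.1 \
    + rng.normal(size=(n_stim, n_neurons)) * 0.05           # noisy responses

# Ridge regression from activations to responses (assumed penalty).
lam = 1.0
W = np.linalg.solve(A.T @ A + lam * np.eye(n_feat), A.T @ Y)
pred_err = np.mean((A @ W - Y) ** 2)

# Spectral view: eigenvalues of the activation covariance give the
# spectrum, and projecting the responses onto the eigenvectors shows
# how the neural data align with the model's learnable subspace.
eigvals, eigvecs = np.linalg.eigh(A.T @ A / n_stim)
alignment = (eigvecs.T @ (A.T @ Y)) ** 2  # response energy per eigenmode

print(pred_err)
print(eigvals[-1])  # largest eigenvalue of the activation covariance
```

Two models can reach a similar `pred_err` while distributing `alignment` very differently across the spectrum, which is the kind of geometric distinction the paper uses to differentiate equally predictive models.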