Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks
Generative adversarial networks (GANs) are attracting growing attention in
computer vision, natural language processing, speech synthesis, and
similar domains. Arguably the most striking results have been in the area of
image synthesis. However, evaluating the performance of GANs is still an open
and challenging problem. Existing evaluation metrics primarily measure the
dissimilarity between real and generated images using automated statistical
methods. They often require large sample sizes for evaluation and do not
directly reflect human perception of image quality. In this work, we describe
Neuroscore, an evaluation metric for GANs that more directly reflects
psychoperceptual image quality through the use of brain signals. Our results
show that Neuroscore outperforms current evaluation metrics in that: (1) it is
more consistent with human judgment; (2) it requires far fewer samples for
evaluation; and (3) it can rank image quality on a per-GAN basis. A
convolutional neural network (CNN) based neuro-AI interface is
proposed to predict Neuroscore from GAN-generated images directly without the
need for neural responses. Importantly, we show that including neural responses
during the training phase of the network can significantly improve the
prediction capability of the proposed model. Materials related to this work are
provided at https://github.com/villawang/Neuro-AI-Interface
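One of the claimed advantages above is consistency with human judgment, which is typically checked by rank-correlating a metric's per-GAN scores against human ratings. The following is a minimal illustrative sketch of that check using a hand-rolled Spearman correlation; the scores and ratings are hypothetical, not the paper's data or evaluation code.

```python
import numpy as np

def rankdata(a):
    """Assign ranks 1..n (no tie handling; fine for this illustration)."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical metric scores for three GANs and matching human ratings.
metric_scores = np.array([0.82, 0.61, 0.74])
human_ratings = np.array([4.5, 2.9, 3.8])
rho = spearman(metric_scores, human_ratings)  # 1.0: identical ordering
```

A metric whose per-GAN ranking matches the human ranking yields rho near 1; a metric that inverts the ordering yields rho near -1.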
Improving classification accuracy of feedforward neural networks for spiking neuromorphic chips
Deep Neural Networks (DNNs) achieve human-level performance in many image
analytics tasks, but they are mostly deployed to GPU platforms that consume a
considerable amount of power. New hardware platforms using lower precision
arithmetic achieve drastic reductions in power consumption. More recently,
brain-inspired spiking neuromorphic chips have achieved even lower power
consumption, on the order of milliwatts, while still offering real-time
processing.
However, to deploy DNNs to energy-efficient neuromorphic chips, the
incompatibility between the continuous neurons and synaptic weights of
traditional DNNs and the discrete spiking neurons and synapses of neuromorphic
chips needs to be overcome. Previous work achieved this by training a network
to learn continuous probabilities and then deploying it to a neuromorphic
architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly
sampling these probabilities.
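The prior deployment scheme described above (train continuous connection probabilities, then sample a binary crossbar from them) can be sketched as follows. This is a minimal illustration of the sampling step only; the probability matrix and crossbar shape are hypothetical, and no TrueNorth-specific constraints are modeled.

```python
import numpy as np

def sample_binary_weights(probs, rng):
    """Draw a binary (0/1) crossbar: each connection is present with its
    learned probability. This realizes one hardware-deployable network."""
    return (rng.random(probs.shape) < probs).astype(np.int8)

rng = np.random.default_rng(0)

# Hypothetical learned connection probabilities for a 4x3 crossbar.
probs = np.array([[0.90, 0.10, 0.50],
                  [0.20, 0.80, 0.70],
                  [0.60, 0.30, 0.95],
                  [0.05, 0.50, 0.40]])

binary_w = sample_binary_weights(probs, rng)
```

The paper's contribution, by contrast, is to train the binary crossbar directly so that no post-training sampling step is needed.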
The main contribution of this paper is a new learning algorithm that learns a
TrueNorth configuration ready for deployment. We achieve this by directly
training a binary hardware crossbar that accommodates the TrueNorth axon
configuration constraints, and we propose a different neuron model.
Results of our approach trained on electroencephalogram (EEG) data show a
significant improvement over previous work (86% vs. 76% accuracy) while
maintaining state-of-the-art performance on the MNIST handwritten digit data
set.
Comment: IJCAI-2017. arXiv admin note: text overlap with arXiv:1605.0774
One-shot Learning for iEEG Seizure Detection Using End-to-end Binary Operations: Local Binary Patterns with Hyperdimensional Computing
This paper presents an efficient binarized algorithm for both learning and
classification of human epileptic seizures from intracranial
electroencephalography (iEEG). The algorithm combines local binary patterns
with brain-inspired hyperdimensional computing to enable end-to-end learning
and inference with binary operations. The algorithm first transforms iEEG time
series from each electrode into local binary pattern codes. Then atomic
high-dimensional binary vectors are used to construct composite representations
of seizures across all electrodes. For the majority of our patients (10 out of
16), the algorithm quickly learns from one or two seizures (i.e., one-/few-shot
learning) and generalizes perfectly to 27 further seizures. For the remaining
patients, the algorithm requires three to six seizures for learning. Overall,
our algorithm surpasses state-of-the-art methods in detecting 65 novel
seizures, with higher specificity and sensitivity and a lower memory footprint.
Comment: Published as a conference paper at the IEEE BioCAS 201