    Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks

    Generative adversarial networks (GANs) are attracting increasing attention in computer vision, natural language processing, speech synthesis, and related domains, with arguably the most striking results in image synthesis. However, evaluating the performance of GANs remains an open and challenging problem. Existing evaluation metrics primarily measure the dissimilarity between real and generated images using automated statistical methods; they often require large sample sizes and do not directly reflect human perception of image quality. In this work, we describe an evaluation metric for GAN performance, which we call Neuroscore, that more directly reflects psychoperceptual image quality through the use of brain signals. Our results show that Neuroscore outperforms current evaluation metrics in that: (1) it is more consistent with human judgment; (2) the evaluation process requires far fewer samples; and (3) it is able to rank image quality on a per-GAN basis. We also propose a convolutional neural network (CNN) based neuro-AI interface to predict Neuroscore directly from GAN-generated images, without the need for neural responses. Importantly, we show that including neural responses during the training phase of the network can significantly improve the prediction capability of the proposed model. Materials related to this work are provided at https://github.com/villawang/Neuro-AI-Interface
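
    As a rough illustration of the kind of neuro-AI interface described above, the sketch below (in PyTorch) trains a small CNN to predict a Neuroscore-like scalar from an image while an auxiliary head regresses an EEG-derived target, which is one simple way of including neural responses during training. The layer sizes, the 64x64 input resolution, the auxiliary-loss weight, and all variable names are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class NeuroAIInterface(nn.Module):
        def __init__(self):
            super().__init__()
            # Small convolutional feature extractor over RGB images.
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.score_head = nn.Linear(32, 1)   # predicts the image-level Neuroscore
            self.neural_head = nn.Linear(32, 1)  # predicts an EEG-derived auxiliary target

        def forward(self, x):
            h = self.features(x)
            return self.score_head(h), self.neural_head(h)

    model = NeuroAIInterface()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    # Placeholder batch: GAN-generated stimuli with a measured Neuroscore and a P300 summary.
    images = torch.randn(8, 3, 64, 64)
    neuroscore = torch.randn(8, 1)
    p300 = torch.randn(8, 1)

    pred_score, pred_p300 = model(images)
    # The auxiliary term is what injects neural-response information during training;
    # at test time only the image and the score head are needed.
    loss = mse(pred_score, neuroscore) + 0.5 * mse(pred_p300, p300)
    opt.zero_grad()
    loss.backward()
    opt.step()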

    Use of neural signals to evaluate the quality of generative adversarial network performance in facial image generation

    There is growing interest in using generative adversarial networks (GANs) to produce image content that is indistinguishable from real images as judged by a typical person. A number of GAN variants have been proposed for this purpose; however, evaluating GAN performance is inherently difficult because current methods for measuring the quality of their output are not always consistent with what a human perceives. We propose a novel approach that combines a brain-computer interface (BCI) with GANs to generate a measure we call Neuroscore, which closely mirrors the behavioral ground truth measured from participants tasked with discerning real from synthetic images. We call this technique a neuro-AI interface, as it provides an interface between a human's neural systems and an AI process. In this paper, we first compare the three metrics most widely used in the literature for evaluating the visual quality of GAN output against human judgments. Secondly, we propose and demonstrate a novel approach using neural signals and rapid serial visual presentation (RSVP) that directly measures a human perceptual response to facial image generation quality, independent of a behavioral response measurement. The correlation between our proposed Neuroscore and human perceptual judgments has Pearson correlation statistics r(48) = −0.767, p = 2.089e−10; a bootstrap analysis of the correlation gives p ≤ 0.0001. Results show that Neuroscore is more consistent with human judgment than the conventional metrics we evaluated. We conclude that neural signals have potential applications for high-quality, rapid evaluation of GANs in the context of visual image synthesis.
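
    The correlation and bootstrap statistics quoted above can be reproduced in outline as follows. This is a minimal sketch with synthetic placeholder data (the real analysis pairs per-condition Neuroscore values with participants' behavioral judgments); it uses scipy's pearsonr and a simple paired-resampling bootstrap, and none of the numbers it prints correspond to the paper's results.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    # Placeholder data: one Neuroscore value and one behavioral judgment per condition.
    neuroscore = rng.normal(size=50)
    behaviour = -0.8 * neuroscore + rng.normal(scale=0.5, size=50)

    r, p = pearsonr(neuroscore, behaviour)
    print(f"Pearson r = {r:.3f}, p = {p:.3g}")

    # Bootstrap: resample paired observations and recompute r to get an empirical distribution.
    boot = []
    for _ in range(10_000):
        idx = rng.integers(0, len(neuroscore), len(neuroscore))
        boot.append(pearsonr(neuroscore[idx], behaviour[idx])[0])
    print("95% bootstrap CI for r:", np.percentile(boot, [2.5, 97.5]))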

    Cortically coupled image computing

    In the 1970s, researchers at the University of California started to investigate communication between humans and computers using neural signals, which led to the emergence of brain-computer interfaces (BCIs). In the past 40 years, significant progress has been achieved in application areas such as neuroprosthetics and rehabilitation. BCIs have recently been applied to media analytics (e.g., image search and information retrieval), as we are surrounded by tremendous amounts of media information today. A cortically coupled computer vision (CCCV) system is a type of BCI that exposes users to high-throughput image streams via the rapid serial visual presentation (RSVP) protocol. Media analytics has also been transformed by the enormous recent advances in artificial intelligence (AI), and understanding and presenting the nature of the human-AI relationship will play an important role in our society in the future. This thesis explores two lines of research in the context of traditional BCIs and AI. Firstly, we study and investigate fundamental processing methods, such as feature extraction and classification, for CCCV systems. Secondly, we discuss the feasibility of interfacing neural systems with AI technology through CCCV, an area we identify as neuro-AI interfacing. We have made two electroencephalography (EEG) datasets available to the community that support our investigation of these two research directions: the neurally augmented image labelling strategies (NAILS) dataset and the neural indices for face perception analysis (NIFPA) dataset, which are introduced in Chapter 2.

    The first line of research focuses on studying and investigating fundamental processing methods for CCCV. In Chapter 3, we present a review of recent developments in processing methods for CCCV. This review introduces CCCV-related components, specifically the RSVP experimental setup, RSVP-EEG phenomena such as the P300 and N170, evaluation metrics, feature extraction, and classification. We then provide a detailed study and analysis of spatial filtering pipelines in Chapter 4, as these are the most widely used feature extraction and reduction methods in a CCCV system. In this context, we propose a spatial filtering technique named multiple time window LDA beamformers (MTWLB) and compare it to two other well-known techniques in the literature, namely xDAWN and common spatial patterns (CSP). Importantly, we demonstrate the efficacy of MTWLB for time-course source signal reconstruction compared to existing methods, and we then use it as a source signal information extraction method to support a neuro-AI interface, as discussed further in Chapters 6 and 7.

    The latter part of this thesis investigates the feasibility of neuro-AI interfaces. We present two research studies that contribute to this direction. Firstly, we explore the idea of neuro-AI interfaces based on stimuli and neural systems, i.e., observing the effects of stimuli produced by different AI systems on neural signals. We use generative adversarial networks (GANs) to produce the image stimuli in this case, as GANs are able to produce higher-quality images than other deep generative models. Chapter 5 provides a review of GAN variants in terms of loss functions and architectures. In Chapter 6, we design a comprehensive experiment to verify the effects of images produced by different GANs on participants' EEG responses. In this experiment we propose a biologically produced metric called Neuroscore for evaluating GAN performance, and we highlight the consistency between Neuroscore and human perceptual judgment, which is superior to the conventional metrics discussed in this thesis (i.e., Inception Score (IS), Fréchet Inception Distance (FID), and Kernel Maximum Mean Discrepancy (MMD)). Secondly, in order to generalize Neuroscore, we explore the use of a neuro-AI interface to help convolutional neural networks (CNNs) predict a Neuroscore with only an image as input. In this scenario, we feed the reconstructed P300 source signals to an intermediate layer as supervisory information. We demonstrate that including biological neural information can improve the prediction performance of our proposed CNN models and that the predicted Neuroscore is highly correlated with the real Neuroscore (as directly calculated from human neural signals).
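
    To make the spatial-filtering idea above more concrete, the sketch below reconstructs a P300-like source time course from multi-channel RSVP-EEG epochs using a single LDA-derived spatial filter. This is a simplified stand-in for the MTWLB method described in the thesis (which fits one filter per time window); the window boundaries, epoch shapes, and random placeholder data are assumptions for illustration only.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n_epochs, n_channels, n_times = 200, 32, 128
    epochs = rng.normal(size=(n_epochs, n_channels, n_times))  # placeholder EEG epochs
    labels = rng.integers(0, 2, n_epochs)                       # 1 = target (P300 expected)

    # Average each epoch over an assumed post-stimulus window to get one spatial
    # pattern per epoch, then fit LDA on target vs. non-target patterns.
    window = slice(60, 100)
    patterns = epochs[:, :, window].mean(axis=-1)               # (n_epochs, n_channels)
    lda = LinearDiscriminantAnalysis().fit(patterns, labels)

    # Use the LDA weights as a spatial filter to project every epoch onto a single
    # "virtual channel", i.e. a reconstructed source time course.
    w = lda.coef_.ravel()                                       # (n_channels,)
    sources = np.tensordot(w, epochs, axes=([0], [1]))          # (n_epochs, n_times)
    p300_estimate = sources[labels == 1].mean(axis=0)           # average target response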

    Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

    Building a humanlike integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals of artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development would be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture that uses probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM); it is both brain-inspired and PGM-based. The paper describes the process of building the WB-PGM and of learning from the human brain in order to build cognitive architectures.