Comparing brain-like representations learned by vanilla, residual, and recurrent CNN architectures
Though it has been hypothesized that state-of-the-art residual networks approximate the recurrent visual system, it remains to be seen whether the representations learned by these biologically inspired CNNs are actually closer to neural data. The CNNs that are most functionally similar to the brain are likely to contain mechanisms most like those used by the brain. In this thesis, we investigate how different CNN architectures approximate the representations learned along the ventral (object recognition and processing) stream of the brain. We specifically evaluate how recent approximations of biological neural recurrence (such as residual connections, dense residual connections, and a biologically inspired implementation of recurrence) affect the representations learned by each CNN. We first investigate the representations learned by layers throughout three state-of-the-art CNNs: VGG-19 (a vanilla CNN), ResNet-152 (a CNN with residual connections), and DenseNet-161 (a CNN with dense connections). To control for differences in model depth, we then extend this analysis to the CORnet family of biologically inspired CNN models with matching high-level architectures. The CORnet family has three models: a vanilla CNN (CORnet-Z), a CNN with biologically valid recurrent dynamics (CORnet-R), and a CNN with both recurrent and residual connections (CORnet-S). We compare the representations of these six models to functionally aligned (with hyperalignment) fMRI brain data acquired during a naturalistic visual task. We take two approaches to comparing these CNN and brain representations. We first use forward encoding, a predictive approach that uses CNN features to predict neural responses across the whole brain. We next use representational similarity analysis (RSA) and centered kernel alignment (CKA) to measure the similarity between representations within CNN layers and specific brain ROIs.
We show that, compared to vanilla CNNs, CNNs with residual and recurrent connections exhibit representations more similar to those learned by the human ventral visual stream. We also achieve state-of-the-art forward encoding and RSA performance with the residual and recurrent CNN models.
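To make the two comparison measures named above concrete, here is a minimal numpy sketch of linear CKA and an RSA-style RDM correlation. The function names and the toy data are illustrative assumptions, not code from the thesis, and RSA in practice involves choices (distance metric, rank correlation) not shown here.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two feature matrices.

    X: (n_stimuli, d1) features, e.g. one CNN layer's activations.
    Y: (n_stimuli, d2) features, e.g. voxel responses to the same stimuli.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

def rsa_score(X, Y):
    """RSA: correlate the upper triangles of two correlation-distance RDMs."""
    rdm_x = 1.0 - np.corrcoef(X)
    rdm_y = 1.0 - np.corrcoef(Y)
    iu = np.triu_indices_from(rdm_x, k=1)
    return np.corrcoef(rdm_x[iu], rdm_y[iu])[0, 1]

# toy check: a representation compared with a rotated copy of itself
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))                   # 50 stimuli, 20 features
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # random orthogonal matrix
cka_same = linear_cka(X, X @ Q)                 # CKA is rotation-invariant
rsa_same = rsa_score(X, X)
```

The rotation check illustrates why CKA is attractive for cross-system comparison: it scores representations by their pairwise geometry rather than by any particular axis alignment between CNN units and voxels.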
Classification-based prediction of effective connectivity between timeseries with a realistic cortical network model
Effective connectivity measures the pattern of causal interactions between brain regions. Traditionally, these patterns of causality are inferred from brain recordings using either non-parametric, i.e., model-free, or parametric, i.e., model-based, approaches. The latter approaches, when based on biophysically plausible models, have the advantage that they may facilitate the interpretation of causality in terms of underlying neural mechanisms. Recent biophysically plausible neural network models of recurrent microcircuits have been shown to closely reproduce the characteristics of real neural activity and can be applied to model interacting cortical circuits. However, it is challenging to invert these models in order to estimate effective connectivity from observed data. Here, we propose to use a classification-based method to approximate the result of such complex model inversion. The classifier predicts the pattern of causal interactions given a multivariate timeseries as input. The classifier is trained on a large number of pairs of multivariate timeseries and the respective pattern of causal interactions, which are generated by simulation from the neural network model. In simulated experiments, we show that the proposed method is much more accurate in detecting the causal structure of timeseries than current best-practice methods. Additionally, we present further results to characterize the validity of the neural network model and the ability of the classifier to adapt to the generative model of the data.
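A toy version of this train-a-classifier-on-simulations idea can be sketched as follows. The coupled AR(1) generator and the logistic-regression readout below are stand-ins of our own choosing for the biophysical network model and the actual classifier; the coupling strengths, features, and sample counts are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(direction, T=400, a=0.5, c=0.6):
    # coupled AR(1) pair: direction 0 means x drives y, 1 means y drives x
    x = np.zeros(T)
    y = np.zeros(T)
    for t in range(1, T):
        ex, ey = rng.normal(size=2)
        if direction == 0:
            x[t] = a * x[t - 1] + ex
            y[t] = a * y[t - 1] + c * x[t - 1] + ey
        else:
            y[t] = a * y[t - 1] + ey
            x[t] = a * x[t - 1] + c * y[t - 1] + ex
    return x, y

def features(x, y):
    # lagged cross-correlations: how strongly each series leads the other
    f1 = np.corrcoef(x[:-1], y[1:])[0, 1]   # x leading y
    f2 = np.corrcoef(y[:-1], x[1:])[0, 1]   # y leading x
    return np.array([f1, f2])

# training set: simulated timeseries pairs labeled with their true direction
feats, labels = [], []
for _ in range(200):
    d = int(rng.integers(0, 2))
    feats.append(features(*simulate(d)))
    labels.append(d)
feats = np.array(feats)
labels = np.array(labels)

# minimal logistic-regression classifier trained by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    g = p - labels
    w -= 0.1 * (feats.T @ g) / len(labels)
    b -= 0.1 * g.mean()

# evaluate on fresh simulations the classifier has never seen
correct = 0
for _ in range(100):
    d = int(rng.integers(0, 2))
    p = 1.0 / (1.0 + np.exp(-(features(*simulate(d)) @ w + b)))
    correct += int((p > 0.5) == d)
accuracy = correct / 100
```

The key property this sketch shares with the proposed method is that the classifier never sees the generative model's equations at test time; it only learns the mapping from observable timeseries statistics to causal structure from labeled simulations.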
Receptive Field Block Net for Accurate and Fast Object Detection
Current top-performing object detectors depend on deep CNN backbones, such as
ResNet-101 and Inception, benefiting from their powerful feature
representations but suffering from high computational costs. Conversely, some
detectors built on lightweight models achieve real-time processing, but their
accuracy is often criticized. In this paper, we explore an alternative: we
build a fast and accurate detector by strengthening lightweight features using
a hand-crafted mechanism. Inspired by the structure of Receptive Fields (RFs)
in the human visual system, we propose a novel RF Block (RFB) module, which takes
the relationship between the size and eccentricity of RFs into account to
enhance feature discriminability and robustness. We further assemble the RFB
module on top of SSD, constructing the RFB Net detector. To evaluate its
effectiveness, we conduct experiments on two major benchmarks, and the
results show that RFB Net reaches the performance of advanced very deep
detectors while keeping real-time speed. Code is available at
https://github.com/ruinmessi/RFBNet
Comment: Accepted by ECCV 201
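The size-eccentricity idea can be illustrated with a rough single-channel numpy sketch: parallel branches whose dilation rates grow, so branches with larger effective receptive fields are combined alongside small-RF ones. This is our own simplification; the real RFB module uses learned multi-channel convolutions, a 1x1 fusion conv, and a shortcut connection, none of which appear here.

```python
import numpy as np

def dilated_conv2d_same(x, kernel, dilation):
    """'Same'-padded single-channel 2-D convolution with a dilated kernel."""
    kh, kw = kernel.shape
    ph = (kh - 1) * dilation // 2
    pw = (kw - 1) * dilation // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # sample the padded input at dilated offsets around (i, j)
            patch = xp[i : i + (kh - 1) * dilation + 1 : dilation,
                       j : j + (kw - 1) * dilation + 1 : dilation]
            out[i, j] = (patch * kernel).sum()
    return out

def rfb_branch(x, dilation):
    """One RFB-style branch: a small conv followed by a dilated conv,
    so the branch's receptive field grows with its dilation rate."""
    k = np.ones((3, 3)) / 9.0  # stand-in for a learned 3x3 kernel
    return dilated_conv2d_same(dilated_conv2d_same(x, k, 1), k, dilation)

# three parallel branches with increasing dilation, stacked like channels;
# the real module would fuse them with a 1x1 conv and add a shortcut
x = np.random.default_rng(0).normal(size=(32, 32))
rfb_out = np.stack([rfb_branch(x, d) for d in (1, 3, 5)])
```

Increasing dilation enlarges a branch's receptive field without adding parameters, which is what lets a lightweight backbone mimic the large, eccentricity-dependent RFs the abstract refers to.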
Feature representations useful for predicting image memorability
Predicting image memorability has attracted interest in various fields.
Consequently, prediction accuracy with convolutional neural network (CNN)
models has been approaching the empirical upper bound estimated based on human
consistency. However, identifying which feature representations embedded in CNN
models are responsible for such high prediction accuracy of memorability
remains an open question. To tackle this problem, this study sought to identify
memorability-related feature representations in CNN models using brain
similarity. Specifically, memorability prediction accuracy and brain similarity
were examined and assessed by Brain-Score across 16,860 layers in 64 CNN models
pretrained for object recognition. This comprehensive analysis revealed a clear
tendency: layers with high memorability prediction accuracy
had higher brain similarity with the inferior temporal (IT) cortex, which is
the highest stage in the ventral visual pathway. Furthermore, fine-tuning the
64 CNN models revealed that brain similarity with the IT cortex at the
penultimate layer was positively correlated with memorability prediction
accuracy. This analysis also showed that the best fine-tuned model provided
accuracy comparable to the state-of-the-art CNN models developed specifically
for memorability prediction. Overall, this study's results indicated that the
CNN models' success in predicting memorability relies on acquiring feature
representations similar to those in the IT cortex. These findings advance our
understanding of feature representations and their use for predicting image
memorability.
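The layer-screening analysis described here ultimately reduces to correlating two per-layer scores (brain similarity and memorability prediction accuracy). A minimal rank-correlation sketch in numpy, with toy scores that are purely illustrative and not the paper's data:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation (no tie handling): Pearson on the ranks."""
    ranks_a = np.argsort(np.argsort(a))
    ranks_b = np.argsort(np.argsort(b))
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

# hypothetical per-layer scores: similarity with IT vs. memorability accuracy
it_similarity = np.array([0.10, 0.18, 0.25, 0.31, 0.42, 0.47])
mem_accuracy = np.array([0.55, 0.58, 0.60, 0.64, 0.66, 0.70])
rho = spearman(it_similarity, mem_accuracy)  # 1.0: toy scores are monotone
```

A rank correlation is a natural choice for this kind of screening because similarity and accuracy live on different scales, and only their ordering across layers matters for the claimed positive relationship.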