Topographic Deep Artificial Neural Networks (TDANNs) predict face selectivity topography in primate inferior temporal (IT) cortex
Deep convolutional neural networks are biologically driven models that
resemble the hierarchical structure of primate visual cortex and are the
current best predictors of the neural responses measured along the ventral
stream. However, the networks lack topographic properties that are present in
the visual cortex, such as orientation maps in primary visual cortex and
category-selective maps in inferior temporal (IT) cortex. In this work, the
minimum wiring cost constraint was approximated as an additional learning rule
in order to generate topographic maps in the networks. We found that our
topographic deep artificial neural networks (TDANNs) can reproduce the category
selectivity maps of the primate IT cortex.
Comment: 2018 Conference on Cognitive Computational Neuroscience.
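As a rough illustration of the idea, the sketch below (Python/PyTorch) adds a wiring-cost-style spatial penalty to the task loss: each unit in a layer is assigned a fixed 2-D position on a simulated cortical sheet, and strong response correlations between distant units are penalized. The function name, grid assignment, and loss weighting are illustrative assumptions, not the authors' implementation.

import torch

def spatial_correlation_loss(acts, positions):
    # acts:      (batch, n_units) activations of one layer
    # positions: (n_units, 2) fixed 2-D coordinates on a simulated cortical sheet
    # Pairwise response correlations, estimated across the batch.
    z = (acts - acts.mean(0)) / (acts.std(0) + 1e-8)
    corr = (z.T @ z) / acts.shape[0]          # (n_units, n_units)
    # Pairwise distances between unit positions on the sheet.
    dist = torch.cdist(positions, positions)  # (n_units, n_units)
    # Correlated units that sit far apart incur a cost, approximating wiring cost.
    return (corr.abs() * dist).mean()

# Hypothetical usage inside a training step:
#   total_loss = task_loss + lam * spatial_correlation_loss(layer_acts, unit_positions)
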
Robustified ANNs Reveal Wormholes Between Human Category Percepts
The visual object category reports of artificial neural networks (ANNs) are
notoriously sensitive to tiny, adversarial image perturbations. Because human
category reports (aka human percepts) are thought to be insensitive to those
same small-norm perturbations -- and locally stable in general -- this argues
that ANNs are incomplete scientific models of human visual perception.
Consistent with this, we show that when small-norm image perturbations are
generated by standard ANN models, human object category percepts are indeed
highly stable. However, in this very same "human-presumed-stable" regime, we
find that robustified ANNs reliably discover low-norm image perturbations that
strongly disrupt human percepts. These previously undetectable human perceptual
disruptions are massive in amplitude, approaching the same level of sensitivity
seen in robustified ANNs. Further, we show that robustified ANNs support
precise perceptual state interventions: they guide the construction of low-norm
image perturbations that strongly alter human category percepts toward specific
prescribed percepts. These observations suggest that for arbitrary starting
points in image space, there exists a set of nearby "wormholes", each leading
the subject from their current category perceptual state into a semantically
very different state. Moreover, contemporary ANN models of biological visual
processing are now accurate enough to consistently guide us to those portals.
Comment: *Equal contribution.
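To make the procedure concrete, here is a minimal Python/PyTorch sketch of a targeted, norm-bounded perturbation search of the kind described above: gradient steps toward a prescribed class, projected back onto a small L2 ball. The model, step sizes, and budget eps are assumptions for illustration; the paper's actual perturbation pipeline may differ.

import torch
import torch.nn.functional as F

def targeted_l2_perturbation(model, image, target_class, eps=3.0, steps=40, step_size=0.3):
    # image: (1, 3, H, W) tensor in the model's expected input range
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(image + delta)
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        with torch.no_grad():
            # Normalized gradient *descent* on the targeted loss.
            grad = delta.grad
            delta -= step_size * grad / (grad.flatten().norm() + 1e-12)
            # Project back onto the L2 ball of radius eps (the "low-norm" budget).
            norm = delta.flatten().norm()
            if norm > eps:
                delta *= eps / norm
        delta.grad.zero_()
    return (image + delta).detach(), delta.detach()
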
The Neural Representation Benchmark and its Evaluation on Brain and Machine
A key requirement for the development of effective learning representations
is their evaluation and comparison to representations we know to be effective.
In natural sensory domains, the community has viewed the brain as a source of
inspiration and as an implicit benchmark for success. However, it has not been
possible to test representational learning algorithms directly against
the representations contained in neural systems. Here, we propose a new
benchmark for visual representations on which we have directly tested the
neural representation in multiple visual cortical areas in macaque (utilizing
data from [Majaj et al., 2012]), and on which any computer vision algorithm
that produces a feature space can be tested. The benchmark measures the
effectiveness of the neural or machine representation by computing the
classification loss on the ordered eigendecomposition of a kernel matrix
[Montavon et al., 2011]. In our analysis we find that the neural representation
in visual area IT is superior to that in visual area V4. In our analysis of
representational learning algorithms, we find that three-layer models approach
the representational performance of V4 and the algorithm in [Le et al., 2012]
surpasses the performance of V4. Impressively, we find that a recent supervised
algorithm [Krizhevsky et al., 2012] achieves performance comparable to that of
IT for an intermediate level of image variation difficulty, and surpasses IT at
a higher difficulty level. We believe this result represents a major milestone:
it is the first learning algorithm we have found that exceeds our current
estimate of IT representation performance. We hope that this benchmark will
assist the community in matching the representational performance of visual
cortex and will serve as an initial rallying point for further correspondence
between representations derived in brains and machines.
Comment: The v1 version contained incorrectly computed kernel analysis curves and KA-AUC values for V4, IT, and the HT-L3 models. They have been corrected in this version.
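For intuition about the benchmark's metric, the sketch below (Python/NumPy) fits the category labels using only the d leading eigenvectors of a kernel matrix and records the residual error as d grows, in the spirit of the kernel analysis of [Montavon et al., 2011]. The RBF kernel, the median-distance bandwidth, and the squared-error loss are illustrative choices, not necessarily those of the benchmark.

import numpy as np

def kernel_analysis_curve(features, labels_onehot, dims):
    # features:      (n, p) representation of n stimuli
    # labels_onehot: (n, c) one-hot category labels
    # dims:          numbers of leading eigenvectors to use
    sq = np.sum(features ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * features @ features.T, 0)
    sigma2 = np.median(d2)                     # illustrative bandwidth choice
    K = np.exp(-d2 / (2 * sigma2 + 1e-12))
    evals, evecs = np.linalg.eigh(K)           # eigenvalues in ascending order
    evecs = evecs[:, ::-1]                     # reorder so largest come first
    errs = []
    for d in dims:
        U = evecs[:, :d]                       # d leading kernel components
        Y_hat = U @ (U.T @ labels_onehot)      # least-squares fit in their span
        errs.append(np.mean((labels_onehot - Y_hat) ** 2))
    return errs                                # area under this curve summarizes the representation
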
Why is Real-World Visual Object Recognition Hard?
Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain's anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, “natural” images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled “natural” images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist's “null” model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a “simpler” recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.
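For concreteness, here is a crude Python/NumPy stand-in for the kind of V1-like baseline at issue: a bank of Gabor filters whose rectified responses are pooled into a feature vector. The kernel sizes, wavelengths, orientations, and pooling are illustrative assumptions, not the authors' exact model.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    # A single odd-phase Gabor filter (one orientation, one scale).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.sin(2 * np.pi * xr / wavelength)

def v1_like_features(image, n_orientations=8, wavelengths=(4, 8, 16)):
    # image: 2-D grayscale array; returns pooled Gabor energies.
    feats = []
    for lam in wavelengths:
        for i in range(n_orientations):
            k = gabor_kernel(size=4 * lam + 1, wavelength=lam,
                             theta=np.pi * i / n_orientations, sigma=lam / 2)
            resp = convolve2d(image, k, mode="same")
            feats.append(np.abs(resp).mean())  # global pooling of rectified responses
    return np.array(feats)
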
Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition
The primate visual system achieves remarkable visual object recognition
performance even in brief presentations and under changes to object exemplar,
geometric transformations, and background variation (a.k.a. core visual object
recognition). This remarkable performance is mediated by the representation
formed in inferior temporal (IT) cortex. In parallel, recent advances in
machine learning have led to ever higher performing models of object
recognition using artificial deep neural networks (DNNs). It remains unclear,
however, whether the representational performance of DNNs rivals that of the
brain. To accurately produce such a comparison, a major difficulty has been the
lack of a unifying metric that accounts for experimental limitations such as the
amount of noise, the number of neural recording sites, and the number of trials, and
computational limitations such as the complexity of the decoding classifier and
the number of classifier training examples. In this work we perform a direct
comparison that corrects for these experimental limitations and computational
considerations. As part of our methodology, we propose an extension of "kernel
analysis" that measures the generalization accuracy as a function of
representational complexity. Our evaluations show that, unlike previous
bio-inspired models, the latest DNNs rival the representational performance of
IT cortex on this visual object recognition task. Furthermore, we show that
models that perform well on measures of representational performance also
perform well on measures of representational similarity to IT and on measures
of predicting individual IT multi-unit responses. Whether these DNNs rely on
computational mechanisms similar to the primate visual system is yet to be
determined, but, unlike all previous bio-inspired models, that possibility
cannot be ruled out merely on representational performance grounds.
Comment: 35 pages, 12 figures, extends and expands upon arXiv:1301.353
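One common way to quantify the "representational similarity to IT" mentioned above is to correlate representational dissimilarity matrices (RDMs). The Python/SciPy sketch below uses a correlation-distance RDM and a Spearman comparison; these are illustrative choices and not necessarily the exact measures used in the paper.

from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    # responses: (n_images, n_features_or_sites)
    # Condensed RDM: 1 - Pearson correlation for each pair of images.
    return pdist(responses, metric="correlation")

def representational_similarity(model_responses, neural_responses):
    # Spearman rank correlation between the model RDM and the neural RDM.
    rho, _ = spearmanr(rdm(model_responses), rdm(neural_responses))
    return rho
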
Fine-Scale Spatial Organization of Face and Object Selectivity in the Temporal Lobe: Do Functional Magnetic Resonance Imaging, Optical Imaging, and Electrophysiology Agree?
The spatial organization of the brain's object and face representations in the temporal lobe is critical for understanding high-level vision and cognition but is poorly understood. Recently, exciting progress has been made using advanced imaging and physiology methods in humans and nonhuman primates, and the combination of such methods may be particularly powerful. Studies applying these methods help us to understand how neuronal activity, optical imaging, and functional magnetic resonance imaging signals are related within the temporal lobe, and to uncover the fine-grained and large-scale spatial organization of object and face representations in the primate brain.
Materials for engine applications above 3000 deg F: An overview
Materials for future generations of aeropropulsion systems will be required to perform at ever-increasing temperatures and have properties superior to the current state of the art. Improved engine efficiency can reduce specific fuel consumption and thus increase range and reduce operating costs. The ultimate payoff is expected to come when materials are developed which can perform without cooling at gas temperatures up to 2200 C (4000 F). An overview is presented of materials for applications above 1650 C (3000 F), some pertinent physical property data, and the rationale used: (1) to arrive at recommendations of material systems that qualify for further investigation, and (2) to develop a proposed plan of research. From an analysis of available thermochemical data, it was concluded that such material systems must be composed of oxide ceramics. The required structural integrity will be achieved by developing these materials into fiber-reinforced ceramic composites.
Ventricular Tachycardia Detection Using Bipolar Electrogram Analysis is Site Specific
Peer Reviewed.
http://deepblue.lib.umich.edu/bitstream/2027.42/75656/1/j.1540-8159.1992.tb03039.x.pd