Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to accomplish this feat efficiently allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 1980s connectionism with more brute-force computation, they realize a qualitatively distinct form of processing that is ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
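The combination of linear and non-linear processing that the abstract credits with producing tolerance to nuisance variation can be illustrated with a toy numpy sketch (my own illustration, not taken from the paper): a convolution (linear step), ReLU (non-linear step), and max-pooling stage maps two exemplars that differ only by a small translation to the same downstream representation.

```python
import numpy as np

def conv1d(x, w):
    # valid cross-correlation: the linear step
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def relu(x):
    # the non-linear step
    return np.maximum(x, 0.0)

def maxpool(x, k):
    # non-overlapping max-pooling: the tolerance-inducing step
    return np.array([x[i:i + k].max() for i in range(0, len(x) - len(x) % k, k)])

w = np.array([1.0, -1.0])          # a toy edge-detecting filter

x1 = np.zeros(16); x1[4] = 1.0     # an exemplar: a spike at position 4
x2 = np.zeros(16); x2[5] = 1.0     # the same exemplar shifted by one pixel

f1 = maxpool(relu(conv1d(x1, w)), 4)
f2 = maxpool(relu(conv1d(x2, w)), 4)
print(np.array_equal(f1, f2))      # the two exemplars now share a representation
```

Stacking many such stages, each widening the range of translations, deformations, and lighting changes the representation ignores, is one concrete reading of "iteratively converting exemplars into increasingly nuisance-tolerant formats".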
Efficient On-the-fly Category Retrieval using ConvNets and GPUs
We investigate the gains in precision and speed that can be obtained by
using Convolutional Networks (ConvNets) for on-the-fly retrieval, where
classifiers are learnt at run time for a textual query from downloaded images
and used to rank large image or video datasets.
We make three contributions: (i) we present an evaluation of state-of-the-art image representations for object category retrieval over standard benchmark datasets containing 1M+ images; (ii) we show that ConvNets can be used to obtain features that are highly performant yet much lower-dimensional than previous state-of-the-art image representations, and that their dimensionality can be reduced further, without loss in performance, by compression using product quantization or binarization. Consequently, features with state-of-the-art performance on large-scale datasets of millions of images can fit in the memory of even a commodity GPU card; (iii) we show that an SVM classifier can be learnt within a ConvNet framework on a GPU in parallel with downloading the new training images, allowing for continuous refinement of the model as more images become available, and simultaneous training and ranking. The outcome is an on-the-fly system that significantly outperforms its predecessors in terms of precision of retrieval, memory requirements, and speed, facilitating accurate on-the-fly learning and ranking in under a second on a single GPU.

Comment: Published in proceedings of ACCV 201
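The compression step in contribution (ii) can be illustrated with a minimal product-quantization sketch in numpy. The feature dimensionality (64), number of sub-quantizers (8), and codebook size (16) below are toy assumptions for illustration, not the paper's settings: each feature vector is split into sub-vectors, each sub-vector is replaced by the index of its nearest learned centroid, and the whole vector compresses to a few bytes.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # tiny k-means returning the centroids for one subspace
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        a = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(axis=0)
    return C

def pq_train(X, m, k):
    # split D dims into m subspaces, learn k centroids per subspace
    return [kmeans(S, k) for S in np.split(X, m, axis=1)]

def pq_encode(X, codebooks):
    # store only the nearest-centroid index per subspace: m bytes/vector
    subs = np.split(X, len(codebooks), axis=1)
    codes = [np.argmin(((S[:, None] - C[None]) ** 2).sum(-1), axis=1)
             for S, C in zip(subs, codebooks)]
    return np.stack(codes, axis=1).astype(np.uint8)

def pq_decode(codes, codebooks):
    # approximate reconstruction from centroid indices
    return np.hstack([C[codes[:, i]] for i, C in enumerate(codebooks)])

X = rng.standard_normal((500, 64)).astype(np.float32)  # stand-in for ConvNet features
books = pq_train(X, m=8, k=16)   # 8 sub-quantizers, 16 centroids each
codes = pq_encode(X, books)      # 500 x 8 uint8: 4 bits of payload per code
Xhat = pq_decode(codes, books)
print(codes.shape, codes.dtype)
```

At this compression ratio (64 floats down to 8 small indices per vector), millions of such features fit comfortably in commodity GPU memory, which is the point the abstract makes.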
Model-based Cognitive Neuroscience: Multifield Mechanistic Integration in Practice
Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration
Using Topological Data Analysis for diagnosing pulmonary embolism
Pulmonary Embolism (PE) is a common and potentially lethal condition. Most patients die within the first few hours of the event. Despite diagnostic advances, delays and underdiagnosis in PE remain common. To increase diagnostic performance in PE, the current diagnostic work-up of patients with suspected acute pulmonary embolism usually starts with an assessment of clinical pretest probability, using plasma d-Dimer measurement and clinical prediction rules. The most validated and widely used clinical decision rules are the Wells and revised Geneva scores. We aimed to develop a new clinical prediction rule (CPR) for PE based on topological data analysis and an artificial neural network. Filter and wrapper methods for feature reduction could not be applied to our dataset, since these algorithms can only be run on datasets without missing data. Instead, we applied topological data analysis (TDA) to overcome the hurdle of processing datasets with missing values. A topological network was developed using the Iris software (Ayasdi, Inc., Palo Alto). The PE patient topology identified two areas in the pathological group and hence two distinct clusters of PE patient populations. Additionally, the topological network detected several sub-groups among healthy patients that are likely affected by non-PE diseases. TDA was further used to identify the key features most strongly associated with a diagnosis of PE, and this information defined the input space for a back-propagation artificial neural network (BP-ANN). We show that the area under the curve (AUC) of the BP-ANN is greater than the AUCs of the scores (Wells and revised Geneva) used by physicians. The results demonstrate that topological data analysis and the BP-ANN, when used in combination, can produce better predictive models than the Wells or revised Geneva score systems for the analyzed cohort.

Comment: 18 pages, 5 figures, 6 tables. arXiv admin note: text overlap with arXiv:cs/0308031 by other authors without attribution
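As a rough illustration of the final modelling step only (synthetic data and an arbitrary tiny architecture, not the paper's clinical cohort, feature set, or network), a minimal back-propagation ANN can be trained and scored by AUC as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def auc(y, s):
    # AUC as the probability that a positive case outranks a negative one
    pos, neg = s[y == 1], s[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

# synthetic stand-in for clinical predictors (one informative feature plus noise)
n, d = 400, 5
X = rng.standard_normal((n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(float)

# one-hidden-layer BP-ANN trained by plain full-batch gradient descent
W1 = 0.5 * rng.standard_normal((d, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);      b2 = 0.0
lr = 0.5
for _ in range(300):
    H = np.tanh(X @ W1 + b1)            # forward pass
    p = sigmoid(H @ W2 + b2)
    g = (p - y) / n                     # backward pass: cross-entropy gradient
    gW2 = H.T @ g; gb2 = g.sum()
    gH = np.outer(g, W2) * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

scores = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print(round(auc(y, scores), 2))
```

In the paper's setting, the input space would instead be the TDA-selected clinical features, and the AUC would be compared against the Wells and revised Geneva scores on held-out patients.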
Neural Nets via Forward State Transformation and Backward Loss Transformation
This article studies (multilayer perceptron) neural networks with an emphasis on the transformations involved --- both forward and backward --- in order to develop a semantical/logical perspective that is in line with standard program semantics. The common two-pass neural network training algorithms make this viewpoint particularly fitting. In the forward direction, neural networks act as state transformers. In the reverse direction, however, neural networks transform losses of outputs into losses of inputs, thereby acting like a (real-valued) predicate transformer. In this way, backpropagation is functorial by construction, as shown in other recent work. We illustrate this perspective by training a simple instance of a neural network.
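The forward/backward duality can be sketched in a few lines of numpy (a toy rendering of the idea, with arbitrary layer shapes, not the article's formalism): each layer carries a forward state transformer together with a backward map that pulls a loss gradient on outputs back to a loss gradient on inputs, and composition of layers wires the backward maps in reverse order.

```python
import numpy as np

# A layer is a pair of maps:
#   forward:  x -> y = f(x)                      (state transformer)
#   backward: (x, dL/dy) -> dL/dx               (loss transformer)

def linear(W):
    fwd = lambda x: W @ x
    bwd = lambda x, dy: W.T @ dy
    return fwd, bwd

def relu_layer():
    fwd = lambda x: np.maximum(x, 0.0)
    bwd = lambda x, dy: dy * (x > 0)
    return fwd, bwd

def compose(layers):
    # forward composes left-to-right; backward composes in reverse,
    # which is exactly the two-pass structure of backpropagation
    def fwd(x):
        for f, _ in layers:
            x = f(x)
        return x
    def bwd(x, dy):
        states = [x]
        for f, _ in layers:
            states.append(f(states[-1]))
        for (_, b), s in zip(reversed(layers), reversed(states[:-1])):
            dy = b(s, dy)
        return dy
    return fwd, bwd

rng = np.random.default_rng(2)
net = compose([linear(rng.standard_normal((4, 3))), relu_layer(),
               linear(rng.standard_normal((2, 4)))])
x = rng.standard_normal(3)
dy = np.ones(2)        # loss gradient on the outputs
dx = net[1](x, dy)     # transformed into a loss gradient on the inputs
print(dx.shape)
```

That the backward map of a composite is the reversed composite of the backward maps is the compositionality property the abstract refers to as backpropagation being "functorial by construction".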