Models of Cognition: Neurological possibility does not indicate neurological plausibility
Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence, and the study of artificial neural nets in particular, is seen as a major contributor in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Cognitive functions, such as learning to speak or discovering syntactic structures in language, have been modeled, and these models are the basis for many claims about human cognitive capacities. Artificial neural nets (ANNs) have had some successes in the field of Artificial Intelligence, but results from experiments with simple ANNs may have little value in explaining cognitive functions. The problem seems to lie in relating cognitive concepts that belong to the 'top-down' approach to models grounded in the 'bottom-up' connectionist methodology. Merging the two fundamentally different paradigms within a single model can obfuscate what is really modeled. When the tools (simple artificial neural networks) are mismatched to the problems (explaining aspects of higher cognitive functions), the resulting models have little value in explaining functions of the human mind. The ability to learn functions from data points makes ANNs very attractive analytical tools. These tools can be developed into valuable models if the data are adequate and a meaningful interpretation of the data is possible. The problem is that, with appropriate data and labels that fit the desired level of description, almost any function can be modeled. I argue that small networks offer a universal framework for modeling any conceivable cognitive theory, so that neurological possibility can be demonstrated easily with relatively simple models.
However, a model demonstrating that a cognitive function can be implemented with a distributed methodology does not necessarily support any claim or assumption that the cognitive function in question is neurologically plausible.
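The abstract's central claim, that with suitable data and labels almost any function can be modeled by a small network, can be sketched concretely. The following toy is illustrative only (the function, seed, and layer sizes are my own choices, not the paper's): a single hidden layer of fixed random tanh features with a least-squares readout fits an essentially arbitrary labelled mapping, which is why "neurological possibility" is cheap to demonstrate.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_small_ann(x, y, hidden=50):
    """Fit y ~ f(x) with random tanh hidden units and a linear readout."""
    W = rng.normal(size=(hidden,))      # fixed random input weights
    b = rng.normal(size=(hidden,))      # fixed random hidden biases
    H = np.tanh(np.outer(x, W) + b)     # hidden-layer activations
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
    return lambda xs: np.tanh(np.outer(xs, W) + b) @ w_out

# Any labelled data set will do -- here an arbitrary made-up mapping.
x = np.linspace(-1, 1, 100)
y = np.sin(3 * x) + 0.5 * np.sign(x)
model = fit_small_ann(x, y)
fit_quality = float(np.mean((model(x) - y) ** 2))  # mean squared error
```

The point of the sketch is that nothing about the target mapping was "cognitive": given data points at the desired level of description, the same small network fits them regardless.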
A biologically inspired spiking model of visual processing for image feature detection
To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful; hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has shown that biological vision systems use neuronal circuits to extract particular features, such as edges or corners, from visual scenes. Inspired by this biological behaviour, this paper proposes a biologically inspired spiking neural network for image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields are used to extract progressively more complex image features. The performance of the network is assessed by examining the repeatability of extracted features, with visual results presented for both synthetic and real images.
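The retina-like conversion step described above is commonly realised as a time-to-first-spike code, in which brighter pixels fire earlier. The sketch below illustrates that idea only; the function name, the linear mapping, and the 100 ms window are my assumptions, not details from the paper.

```python
import numpy as np

def intensity_to_latency(image, t_max=100.0):
    """Map pixel intensities in [0, 255] to first-spike latencies.

    Maximum intensity fires immediately (latency 0); a black pixel
    fires last (latency t_max). Units are arbitrary (e.g. ms).
    """
    norm = np.asarray(image, dtype=float) / 255.0
    return t_max * (1.0 - norm)

# A 2x2 toy "image": spike order follows brightness.
image = np.array([[0, 128],
                  [255, 64]])
latencies = intensity_to_latency(image)
```

Downstream spiking layers can then detect features such as edges by responding to the relative timing of these spikes rather than to raw intensities.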
DeepLOB: Deep Convolutional Neural Networks for Limit Order Books
We develop a large-scale deep learning model to predict price movements from limit order book (LOB) data of cash equities. The architecture utilises convolutional filters to capture the spatial structure of the limit order books as well as LSTM modules to capture longer time dependencies. The proposed network outperforms all existing state-of-the-art algorithms on the benchmark LOB dataset [1]. In a more realistic setting, we test our model on one year of market quotes from the London Stock Exchange, and the model delivers remarkably stable out-of-sample prediction accuracy for a variety of instruments. Importantly, our model translates well to instruments which were not part of the training set, indicating its ability to extract universal features. In order to better understand these features and to go beyond a "black box" model, we perform a sensitivity analysis to understand the rationale behind the model predictions and reveal the components of LOBs that are most relevant. The ability to extract robust features which translate well to other instruments is an important property of our model with many other applications.
Comment: 12 pages, 9 figures
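The two-stage design described in the abstract, convolution over each LOB snapshot followed by an LSTM across time, can be sketched schematically. This is not the authors' architecture or code: all shapes, parameter values, and the random stand-in data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels):
    """Valid 1-D convolution of a feature vector with each kernel row."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ kernels.T            # (n_positions, n_kernels)

def lstm_step(x, h, c, Wx, Wh, b):
    """One step of a standard LSTM cell (input/forget/cell/output gates)."""
    z = Wx @ x + Wh @ h + b
    i, f, g, o = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)  # updated cell state
    h = sig(o) * np.tanh(c)               # updated hidden state
    return h, c

n_feat, n_kern, k, hidden = 40, 4, 5, 8   # illustrative sizes
kernels = rng.normal(size=(n_kern, k))
flat = (n_feat - k + 1) * n_kern
Wx = rng.normal(size=(4 * hidden, flat)) * 0.1
Wh = rng.normal(size=(4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(10):                       # ten successive LOB snapshots
    snapshot = rng.normal(size=n_feat)    # stand-in for price/volume levels
    features = conv1d(snapshot, kernels).ravel()
    h, c = lstm_step(features, h, c, Wx, Wh, b)
```

The convolution summarises the spatial structure within one snapshot; the recurrent state `h` carries information across snapshots, which is where the longer time dependencies live.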
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by a human (in the case of a musical score), or by a machine (in the case of an audio file).
Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot.
Architecture - What type(s) of deep neural network is (are) to be used? Examples are: feedforward network, recurrent network, autoencoder or generative adversarial networks.
Challenge - What are the limitations and open challenges? Examples are: variability, interactivity and creativity.
Strategy - How do we model and control the process of generation? Examples are: single-step feedforward, iterative feedforward, sampling or input manipulation.
For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.
Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
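The one-hot versus many-hot distinction in the Representation dimension is easy to make concrete: a melody emits one symbol at a time (one-hot), while a chord activates several pitches simultaneously (many-hot). The 12-pitch-class vocabulary below is my illustrative choice, not the survey's.

```python
import numpy as np

PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F',
                 'F#', 'G', 'G#', 'A', 'A#', 'B']

def one_hot(note):
    """Encode a single note: exactly one active entry."""
    v = np.zeros(len(PITCH_CLASSES), dtype=int)
    v[PITCH_CLASSES.index(note)] = 1
    return v

def many_hot(chord):
    """Encode simultaneous notes: one active entry per chord tone."""
    v = np.zeros(len(PITCH_CLASSES), dtype=int)
    for note in chord:
        v[PITCH_CLASSES.index(note)] = 1
    return v

melody_step = one_hot('E')            # a melody step
c_major = many_hot(['C', 'E', 'G'])   # a C major triad
```

The choice matters downstream: a one-hot target pairs naturally with a softmax output layer, whereas a many-hot target needs independent per-pitch sigmoid outputs.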
Layerwise symbolic knowledge extraction from deep neural networks
We examine the feasibility of rule extraction as a method of explanation for neural networks, with an emphasis on deep neural networks. This is done by establishing a framework for neural-symbolic computing which gives precise meaning to notions such as fidelity, neural encoding, and rule extraction. Using this framework, we establish semantic and syntactic relationships between different classes of neural networks and different logical systems. This shows that there is nothing inherently different about the computations done by deep neural networks and logical systems. We use this to argue that complexity is the primary difference between neural and symbolic approaches. We develop a measure of complexity and two different rule extraction algorithms using M-of-N rules. The first extraction algorithm is a fast decompositional algorithm for Deep Belief Networks that builds on the optimal confidence extraction algorithm. The second algorithm is a parallel search for optimal M-of-N rules that implements a hyperparameter controlling the complexity of the extracted rules. We apply this algorithm to a variety of deep networks and find that, although differences in architecture, dataset, and learning algorithm influence the complexity of extracted rules, generally only the final softmax layer can be represented simply and accurately with M-of-N rules. We conclude by experimenting with the combination of rule extraction from the final layer and importance methods to visualize the inputs to the final layer.
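An M-of-N rule, the form the abstract's extraction algorithms target, fires when at least M of its N literals are true. The sketch below shows why such rules suit decompositional extraction: a thresholded neuron over binary inputs can behave exactly like one. The functions and the toy neuron are my own illustration, not the paper's algorithms.

```python
def m_of_n(m, literals, inputs):
    """True iff at least m of the named literals are true in `inputs`."""
    return sum(inputs[name] for name in literals) >= m

def neuron(inputs):
    """A toy unit that fires when its summed binary input exceeds 1.5."""
    return inputs['a'] + inputs['b'] + inputs['c'] > 1.5

# Fidelity check: the rule 2-of-{a, b, c} matches the neuron on every
# one of the 2^3 binary input patterns.
fidelity = all(
    m_of_n(2, ['a', 'b', 'c'], {'a': a, 'b': b, 'c': c})
    == neuron({'a': a, 'b': b, 'c': c})
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
```

Here the rule is perfectly faithful because the neuron's threshold happens to align with a count; real hidden units with graded weights generally need more complex rules, which is the complexity trade-off the abstract studies.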