When and where do feed-forward neural networks learn localist representations?
According to parallel distributed processing (PDP) theory in psychology,
neural networks (NNs) learn distributed rather than interpretable localist
representations. This view has been held so strongly that few researchers have
analysed single units to test whether the assumption is correct. However,
recent results from psychology, neuroscience and computer science have shown
that local codes occasionally emerge in artificial and biological
neural networks. In this paper, we undertake the first systematic survey of
when local codes emerge in a feed-forward neural network, using generated input
and output data with known properties. We find that the number of local codes
that emerge from an NN follows a well-defined distribution across the number of
hidden-layer neurons, with a peak determined by the size of the input data, the
number of examples presented and the sparsity of the input data. Using a one-hot
output code drastically decreases the number of local codes in the hidden layer.
The number of emergent local codes increases with the percentage of dropout
applied to the hidden layer, suggesting that localist encoding may offer
resilience in noisy networks. These results suggest that localist coding can
emerge from feed-forward PDP networks and indicate some of the conditions that
may lead to interpretable localist representations in the cortex. The findings
highlight that local codes should not be dismissed out of hand.
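The counting of local codes described above can be illustrated with a simple selectivity check: treat a hidden unit as a local code for a class when its activation for every member of that class exceeds its activation for every non-member. The function name and this threshold-free criterion are our own illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def local_code_units(activations, labels):
    """Return {unit: class} for units whose activation perfectly
    separates one class from all others (a 'local code' sketch).

    activations: (n_items, n_units) hidden-layer activations
    labels:      (n_items,) class label per item
    """
    codes = {}
    for unit in range(activations.shape[1]):
        act = activations[:, unit]
        for cls in np.unique(labels):
            in_cls = act[labels == cls]
            out_cls = act[labels != cls]
            # perfect separation: the weakest in-class response still
            # beats the strongest out-of-class response
            if in_cls.min() > out_cls.max():
                codes[unit] = cls
                break
    return codes

# toy activations: unit 0 is selective for class 1, unit 1 is not
acts = np.array([[0.9, 0.4],
                 [0.8, 0.1],
                 [0.1, 0.5],
                 [0.2, 0.3]])
labs = np.array([1, 1, 0, 0])
codes = local_code_units(acts, labs)  # unit 0 codes class 1
```

A real survey would apply such a count over many trained networks while varying hidden-layer size, input sparsity and dropout, as the abstract describes.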
A network model of referent identification by toddlers in a visual world task
We present a neural network model of referent identification in a visual world task. The inputs pair visual representations of two items with an unfolding sequence of phonemes identifying the target item. The model is trained to output the semantic representation of the target and to suppress the distractor. The training set uses a 200-word lexicon typically known by toddlers; the phonological, visual, and semantic representations are derived from real corpora. Successful performance requires correctly associating labels with visual and semantic representations, as well as correct location identification. The model reproduces experimental evidence that phonological, perceptual, and categorical relationships modulate item preferences, and it provides an account of how language can drive visual attention in the inter-modal preferential looking task.
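The looking behaviour such a model accounts for can be read out by comparing the model's semantic output with the representations of the two displayed items. The softmax readout below is a minimal sketch of one plausible linking hypothesis; the function name, temperature parameter and toy vectors are our own assumptions, not the published model's decision rule.

```python
import numpy as np

def looking_proportions(output_vec, target_sem, distractor_sem, temp=1.0):
    """Map a model's semantic output to proportions of looks at the
    target vs. the distractor via cosine similarity and a softmax.
    (Illustrative linking hypothesis only.)
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    sims = np.array([cos(output_vec, target_sem),
                     cos(output_vec, distractor_sem)]) / temp
    exp = np.exp(sims - sims.max())      # numerically stable softmax
    return exp / exp.sum()               # [p_target, p_distractor]

target = np.array([1.0, 0.0, 1.0])       # toy semantic vectors
distractor = np.array([0.0, 1.0, 0.0])
out = np.array([0.9, 0.2, 0.8])          # output close to the target
p_t, p_d = looking_proportions(out, target, distractor)
```

A correct trial is then one where the proportion of target looks dominates, mirroring the target/distractor preference measured in the preferential looking task.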
3D terrain generation using neural networks
With the increase in computational power, coupled with advancements in the field in the form of
GANs and cGANs, neural networks have become an attractive proposition for content generation. This
opened opportunities for procedural content generation (PCG) algorithms to tap neural networks'
generative power and create tools that relieve developers of part of the creative and developmental
burden imposed throughout the gaming industry, whether by investors looking for a return on their
investment or by consumers who want more and better content, fast. This dissertation sets out to
develop a mixed-initiative PCG tool, leveraging cGANs, to create authored 3D terrains, allowing users
to directly influence the resulting generated content without formal training in terrain
generation or complex interactions with the tool, as opposed to
state-of-the-art generative algorithms that only allow random content generation or are needlessly
complex. Testing conducted online with 113 people, as well as in-person testing with 30 people, revealed
that it is indeed possible to develop a tool that allows users at any level of terrain-creation
knowledge, with minimal tool training, to easily create a 3D terrain that is more realistic looking than
those generated by state-of-the-art solutions such as Perlin Noise.
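The Perlin-noise baseline mentioned above can be sketched in a few lines. The implementation below is a standard 2D gradient-noise heightmap; the function name, grid resolution and output size are our own choices, not the dissertation's setup.

```python
import numpy as np

def perlin2d(shape, res, seed=0):
    """Classic 2D Perlin (gradient) noise heightmap.

    shape: output (height, width); res: gradient-grid cells per axis.
    Values fall roughly in [-1, 1], zero at grid corners.
    """
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, (res[0] + 1, res[1] + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    ys = np.linspace(0, res[0], shape[0], endpoint=False)
    xs = np.linspace(0, res[1], shape[1], endpoint=False)
    yi, xi = ys.astype(int), xs.astype(int)
    yf, xf = ys - yi, xs - xi

    def fade(t):  # Perlin's quintic smoothstep keeps cells seamless
        return t * t * t * (t * (t * 6 - 15) + 10)

    def corner(dy_off, dx_off):
        # dot product of the corner gradient with the offset to it
        g = grads[yi + dy_off][:, xi + dx_off]          # (H, W, 2)
        dy, dx = yf - dy_off, xf - dx_off
        return g[..., 0] * dy[:, None] + g[..., 1] * dx[None, :]

    u, v = fade(yf)[:, None], fade(xf)[None, :]
    top = corner(0, 0) * (1 - v) + corner(0, 1) * v
    bot = corner(1, 0) * (1 - v) + corner(1, 1) * v
    return top * (1 - u) + bot * u

terrain = perlin2d((128, 128), (8, 8))  # smooth 128x128 heightmap
```

The contrast the dissertation draws is that such noise is unauthored: the user controls only the seed and frequency, whereas the cGAN tool lets users shape where terrain features appear.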
Grounding semantic cognition using computational modelling and network analysis
The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years, there have been many algorithmic advances in the
modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered "toy models". Despite incorporating newer
techniques (e.g. long short-term memory), the model inputs have remained unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experience. In this dissertation, we ground our neural network models by training them with real-world visual scenes using naturalistic photographs. Our approach is an alternative to both hand-coded
features and embodied raw sensorimotor signals.
We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g. freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and
that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec shows
promising human-like context-sensitive stereotypes (e.g. gender-role bias), and we explore how such stereotypes are reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational
viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human level through virtual grounding has the potential to unearth new insights into the human mind and
concurrently lead to advancements in artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
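The object co-occurrence starting point for a representation like scene2vec can be illustrated with a minimal count-based embedding: represent each object by how often it shares a scene with every other object, then compare concepts by cosine similarity. The scene lists and function names below are invented for illustration; the thesis's actual pipeline is richer.

```python
import numpy as np

def cooccurrence_vectors(scenes):
    """Build count-based concept vectors from lists of scene objects:
    vecs[a, b] = number of scenes containing both object a and b.
    """
    vocab = sorted({obj for scene in scenes for obj in scene})
    idx = {obj: i for i, obj in enumerate(vocab)}
    vecs = np.zeros((len(vocab), len(vocab)))
    for scene in scenes:
        for a in scene:
            for b in scene:
                if a != b:
                    vecs[idx[a], idx[b]] += 1
    return vocab, vecs

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# toy 'scenes': kitchens vs. beaches (invented data)
scenes = [["stove", "pan", "sink"], ["stove", "pan", "fridge"],
          ["sand", "towel", "sea"], ["sand", "towel", "umbrella"]]
vocab, vecs = cooccurrence_vectors(scenes)
i = {obj: n for n, obj in enumerate(vocab)}
# objects from the same scene type end up with similar vectors
sim_kitchen = cosine(vecs[i["stove"]], vecs[i["pan"]])
sim_cross = cosine(vecs[i["stove"]], vecs[i["sand"]])
```

Even this crude sketch shows why purely scene-based representations struggle with abstract concepts such as freedom: they rarely co-occur with stable sets of visible objects.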
CLASSIFICATION OF COMPLEX TWO-DIMENSIONAL IMAGES IN A PARALLEL DISTRIBUTED PROCESSING ARCHITECTURE
Neural network analysis is proposed and evaluated as a method of analysis of
marine biological data, specifically images of plankton specimens. The
quantification of the various plankton species is of great scientific importance, from
modelling global climatic change to predicting the economic effects of toxic red
tides. A preliminary evaluation of the neural network technique is made by the
development of a back-propagation system that successfully learns to distinguish
between two co-occurring morphologically similar species from the North Atlantic
Ocean, namely Ceratium arcticum and C. longipes. Various techniques are
developed to handle the indeterminately labelled source data, pre-process the
images and successfully train the networks. An analysis of the network solutions
is made, and some consideration is given to how the system might be extended.
Plymouth Marine Laboratory
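The back-propagation approach used to separate the two species can be sketched with a minimal single-hidden-layer network trained by gradient descent. The synthetic two-cluster "morphology" features below stand in for the preprocessed plankton images; none of the sizes or learning rates come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins for preprocessed image features of two
# morphologically similar species: two overlapping 2D clusters
X = np.vstack([rng.normal([0.0, 0.0], 0.3, (50, 2)),
               rng.normal([1.0, 1.0], 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer, trained by plain back-propagation
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    p = sigmoid(h @ W2 + b2).ravel()
    d2 = (p - y)[:, None] / len(y)           # output error (cross-entropy)
    d1 = (d2 @ W2.T) * h * (1 - h)           # error back-propagated to hidden
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

accuracy = ((p > 0.5) == y).mean()
```

With real plankton images the hard part is the one this sketch omits: turning indeterminately labelled micrographs into feature vectors the network can learn from.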