3 research outputs found

    Perspectives of the high-dimensional dynamics of neural microcircuits from the point of view of low-dimensional readouts

    We investigate generic models for cortical microcircuits, i.e., recurrent circuits of integrate-and-fire neurons with dynamic synapses. These complex dynamical systems subserve the remarkable information processing capabilities of the cortex, but remain poorly understood at present. We analyze the transient dynamics of models for neural microcircuits from the point of view of one or two readout neurons that collapse the high-dimensional transient dynamics of a neural circuit into a one- or two-dimensional output stream. This stream may, for example, represent the information that is projected from such a circuit to some other brain area or to actuators. It is shown that simple local learning rules enable a readout neuron to extract from the high-dimensional transient dynamics of a recurrent neural circuit quite different low-dimensional projections, which may even contain virtual attractors that are not apparent in the high-dimensional dynamics of the circuit itself. Furthermore, it is demonstrated that the information extraction capabilities of linear readout neurons are boosted by the computational operations of a sufficiently large preceding neural microcircuit. Hence, a generic neural microcircuit may play a similar role for information processing as a kernel for support vector machines in machine learning. We demonstrate that the projection of time-varying inputs into a large recurrent neural circuit enables a linear readout neuron to classify the time-varying circuit inputs with the same power as complex nonlinear classifiers, such as a pool of perceptrons trained by the p-delta rule or a feedforward sigmoidal neural net trained by backprop, provided that the size of the recurrent circuit is sufficiently large. At the same time, such readout neurons can exploit the stability and speed of learning rules for linear classifiers, thereby overcoming the problems caused by local minima in the error function of nonlinear classifiers.
In addition, it is demonstrated that pairs of readout neurons can transform the complex trajectory of transient states of a large neural circuit into a simple and clearly structured two-dimensional trajectory. This two-dimensional projection of the high-dimensional trajectory can even exhibit convergence to virtual attractors that are not apparent in the high-dimensional trajectory.
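The kernel analogy above can be sketched with a rate-based stand-in for the spiking microcircuit: a random recurrent network expands each time-varying input into a high-dimensional state, and a single linear readout, fitted by plain least squares, then separates the input classes. Everything concrete here (network size, tanh units, the two sine-wave input classes) is an illustrative assumption, not the paper's actual integrate-and-fire model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: a rate-based reservoir stands in for the
# spiking integrate-and-fire microcircuit of the abstract.
N = 200          # circuit (reservoir) size
T = 100          # time steps per input stream
n_samples = 400  # number of labelled input streams

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights
w_in = rng.normal(0, 1.0, N)                 # input weights

def circuit_state(u):
    """Run the recurrent circuit on input stream u; return the final state."""
    x = np.zeros(N)
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
    return x

# Two classes of time-varying inputs: noisy sine waves of different frequency.
def make_input(label):
    freq = 0.1 if label == 0 else 0.2
    return np.sin(freq * np.arange(T)) + 0.1 * rng.normal(size=T)

labels = rng.integers(0, 2, n_samples)
X = np.stack([circuit_state(make_input(y)) for y in labels])

# Linear readout: a single least-squares fit, no nonlinear training,
# hence no local-minima problems.
w, *_ = np.linalg.lstsq(X, 2.0 * labels - 1.0, rcond=None)
pred = (X @ w > 0).astype(int)
print("training accuracy:", (pred == labels).mean())
```

The linear readout never sees the raw input streams, only the circuit's high-dimensional final state, which is the sense in which the circuit acts like a kernel.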

    Máquina de estado líquido para previsão de séries temporais contínuas: aplicação na demanda de energia elétrica (Liquid state machine for continuous time series forecasting: application to electric energy demand)

    Among the several facets of natural intelligence is its ability to process temporal information. A major open challenge is how to efficiently develop intelligent systems that capture this aspect of human behavior. In this context, Liquid State Machines (LSMs) emerge: a spiking neural architecture (the liquid) that projects the input data into a high-dimensional dynamical space, so that the input data can then be analyzed by a classical neural network (the readout). This thesis presents a novel approach to forecasting continuous time series with LSMs that use a reset mechanism and analog inputs, applied to electric energy demand. The methodology was applied to short-term and long-term forecasting of electrical energy demand. Results are promising, considering the high error threshold used to stop training the readout, the low number of readout training iterations, and the fact that no seasonal-adjustment or preprocessing strategy was applied to the input data. So far, LSMs stand out as a new and promising approach within the artificial neural network paradigm, emerging from cognitive science.
    Funding: CAPES
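A minimal sketch of this forecasting setup, again with a rate-based reservoir standing in for the spiking liquid: the state is reset at the start of each window (mirroring the thesis's reset mechanism), the analog series drives the reservoir directly, and a linear readout is fitted for one-step-ahead prediction. The synthetic periodic "demand" series, window length, and network size are all illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small rate-based reservoir standing in for the spiking liquid.
N = 100
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
w_in = rng.normal(0, 1.0, N)

# Synthetic "demand" series: daily-like periodicity plus noise.
t = np.arange(600)
series = np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=t.size)

window = 24  # reset the liquid state at the start of each window

states, targets = [], []
x = np.zeros(N)
for i in range(len(series) - 1):
    if i % window == 0:
        x = np.zeros(N)              # reset mechanism
    x = np.tanh(W @ x + w_in * series[i])  # analog input drives the liquid
    states.append(x.copy())
    targets.append(series[i + 1])    # one-step-ahead target

X, y = np.array(states), np.array(targets)
split = 400  # train on the first part, evaluate on the rest
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
rmse = np.sqrt(np.mean((X[split:] @ w - y[split:]) ** 2))
print(f"one-step-ahead test RMSE: {rmse:.3f}")
```

Note that, as in the thesis, only the linear readout is trained; the liquid's weights stay fixed, which is what keeps readout training cheap (few iterations, simple stopping criterion).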