
    Modeling user navigation

    This paper proposes the use of neural networks as a tool for studying navigation within virtual worlds. Results indicate that the network learned to predict the next step of a given trajectory. Analysis of the hidden layer shows that the network was able to differentiate between two groups of users, identified on the basis of their performance on a spatial task. Time series analysis of hidden node activation values and input vectors suggested that certain hidden units become specialised for place and for heading, respectively. The benefits of this approach and the possibility of extending the methodology to the study of navigation in Human-Computer Interaction applications are discussed.
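    The core idea of next-step prediction from a trajectory can be sketched with a small feed-forward network. The sketch below is an assumption-laden toy, not the paper's model: the "trajectory" is a synthetic circular path, the network is a single tanh hidden layer trained by plain batch gradient descent, and the hidden activations play the role of the units the paper analyses for place/heading specialisation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical trajectory: 2-D positions sampled along a circle.
    # The task: predict position at step i+1 from position at step i.
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    traj = np.stack([np.cos(t), np.sin(t)], axis=1)
    X, Y = traj[:-1], traj[1:]              # input: step i, target: step i+1

    # One tanh hidden layer; trained with batch gradient descent on MSE.
    H = 8
    W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

    lr = 0.1
    for _ in range(2000):
        h = np.tanh(X @ W1 + b1)            # hidden activations (cf. the paper's analysis)
        err = (h @ W2 + b2) - Y             # prediction error
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)    # backpropagated through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
    ```

    After training, inspecting the rows of `h` over time (as the paper does with time-series analysis of hidden activations) is what would reveal whether individual units track position or direction of travel.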

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the net to remember significant events far back in the input sequence, and thus to solve long-time-lag tasks where other RNN approaches fail. In this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training a growing LSTM have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behaviour of five controllers of the Central Nervous System has to be modelled. We have compared the growing LSTM results against other neural network approaches and against our earlier work applying conventional LSTM to the task at hand.
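    The incremental-growth-with-freezing scheme the abstract compares can be illustrated without the full LSTM machinery. The sketch below is not the paper's GLSTM; it is a minimal assumed stand-in using a one-hidden-layer tanh network, where `grow()` appends new hidden units and optionally freezes the weights of previously trained units (the "frozen cascade" variant), so only the new units learn in the next phase.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    class GrowingNet:
        """Toy incrementally growing network (illustrative, not GLSTM)."""

        def __init__(self, n_in, n_out):
            self.n_in, self.n_out = n_in, n_out
            self.W1 = np.zeros((n_in, 0))
            self.W2 = np.zeros((0, n_out))
            self.frozen = np.zeros(0, dtype=bool)   # per-unit freeze flags

        def grow(self, n_new, freeze_old=True):
            if freeze_old:
                self.frozen[:] = True               # lock previously trained units
            self.W1 = np.hstack([self.W1, rng.normal(0, 0.5, (self.n_in, n_new))])
            self.W2 = np.vstack([self.W2, rng.normal(0, 0.5, (n_new, self.n_out))])
            self.frozen = np.concatenate([self.frozen, np.zeros(n_new, dtype=bool)])

        def train(self, X, Y, lr=0.1, steps=1000):
            for _ in range(steps):
                h = np.tanh(X @ self.W1)
                err = (h @ self.W2) - Y
                gW2 = h.T @ err / len(X)
                gW1 = X.T @ ((err @ self.W2.T) * (1 - h ** 2)) / len(X)
                gW1[:, self.frozen] = 0.0           # frozen units do not move
                gW2[self.frozen, :] = 0.0
                self.W1 -= lr * gW1
                self.W2 -= lr * gW2
            return float((((np.tanh(X @ self.W1) @ self.W2) - Y) ** 2).mean())

    # Assumed toy regression target standing in for the forecasting task.
    X = rng.normal(size=(128, 3))
    Y = np.sin(X.sum(axis=1, keepdims=True))
    net = GrowingNet(3, 1)
    net.grow(4, freeze_old=False); e1 = net.train(X, Y)   # first growth phase
    net.grow(4, freeze_old=True);  e2 = net.train(X, Y)   # only new units adapt
    ```

    The "fully connected" versus "cascade" variants in the paper differ in how the new units are wired to existing ones; here only the freezing level is shown.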

    MLP and Elman recurrent neural network modelling for the TRMS

    This paper presents a detailed investigation of system identification using artificial neural networks (ANNs). The main goal of this work is to emphasise the potential benefits of this architecture for real system identification. Among the most prevalent networks are multi-layered perceptron NNs trained with the Levenberg-Marquardt (LM) algorithm and Elman recurrent NNs. These methods are used for the identification of a twin rotor multi-input multi-output system (TRMS). The TRMS can be perceived as a static test rig for an air vehicle with formidable control challenges. Therefore, an analysis in modelling the nonlinear aerodynamic function is needed, and it is carried out in both the time and frequency domains based on observed input and output data. Experimental results obtained using a laboratory set-up confirm the viability and effectiveness of the proposed methodology.
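    The distinguishing feature of the Elman architecture is its context layer: the hidden state is fed back as an extra input at the next time step, which is what makes it suitable for identifying dynamic systems. A minimal sketch of that recurrence follows, under stated assumptions: a hypothetical first-order nonlinear plant stands in for the TRMS, and for brevity only the linear readout is fitted by least squares rather than training the whole network with Levenberg-Marquardt as the paper does.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def plant(u):
        """Assumed toy dynamics standing in for the TRMS."""
        y = np.zeros_like(u)
        for k in range(1, len(u)):
            y[k] = 0.8 * y[k - 1] + np.tanh(u[k - 1])
        return y

    N, H = 500, 20
    u = rng.uniform(-1, 1, N)       # excitation input
    y = plant(u)                    # observed output to identify

    W_in = rng.normal(0, 0.5, (1, H))
    W_ctx = rng.normal(0, 0.1, (H, H))          # context (recurrent) weights
    h = np.zeros(H)
    states = np.zeros((N, H))
    for k in range(N):
        h = np.tanh(u[k] * W_in[0] + h @ W_ctx)  # Elman context feedback
        states[k] = h

    # One-step-ahead identification: predict y[k+1] from the hidden state
    # at step k, via a ridge-regularised linear readout.
    A, b = states[:-1], y[1:]
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(H), A.T @ b)
    mse = float(((A @ w - b) ** 2).mean())
    ```

    In the paper's setting the recurrent and input weights are also adapted, and the identified model is validated in both the time and frequency domains.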

    Models of Cognition: Neurological possibility does not indicate neurological plausibility

    Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence, and the study of artificial neural nets in particular, are seen as major contributors in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Cognitive functions, like learning to speak or discovering syntactical structures in language, have been modeled, and these models are the basis for many claims about human cognitive capacities. Artificial neural nets (ANNs) have had some successes in the field of Artificial Intelligence, but the results from experiments with simple ANNs may have little value in explaining cognitive functions. The problem seems to lie in relating cognitive concepts that belong to the `top-down' approach to models grounded in the `bottom-up' connectionist methodology. Merging the two fundamentally different paradigms within a single model can obfuscate what is really modeled. When the tools (simple artificial neural networks) are mismatched to the problems (explaining aspects of higher cognitive functions), models with little value in terms of explaining functions of the human mind are produced. The ability to learn functions from data points makes ANNs very attractive analytical tools. These tools can be developed into valuable models if the data are adequate and a meaningful interpretation of the data is possible. The problem is that, with appropriate data and labels that fit the desired level of description, almost any function can be modeled. It is my argument that small networks offer a universal framework for modeling any conceivable cognitive theory, so that neurological possibility can be demonstrated easily with relatively simple models. However, a model demonstrating the possibility of implementing a cognitive function using a distributed methodology does not necessarily support any claim that the cognitive function in question is neurologically plausible.

    A space-time neural network

    Introduced here is a novel technique that adds the dimension of time to the well-known backpropagation neural network algorithm. Several reasons are cited why the inclusion of automated spatial and temporal associations is crucial to effective systems modeling. An overview of other work that also models spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.
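    One common way to add the dimension of time to a feed-forward connection is a tapped delay line: a single scalar weight becomes a short temporal filter over the input's recent history. The sketch below is an assumed illustration of that general idea, not the paper's specific space-time algorithm: a single filter-connection is trained with per-sample (LMS-style) gradient steps to identify a hidden FIR system.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Assumed toy task: recover an unknown 5-tap FIR filter from input/output data.
    T, D = 400, 5                                    # sequence length, number of taps
    x = rng.normal(size=T)
    true_w = np.array([0.5, -0.3, 0.2, 0.1, -0.05])  # hidden target filter
    y = np.convolve(x, true_w)[:T]                   # y[k] = sum_j true_w[j] * x[k-j]

    w = np.zeros(D)                                  # learned per-tap weights
    lr = 0.05
    for _ in range(50):                              # epochs of per-sample updates
        for k in range(D, T):
            taps = x[k - np.arange(D)]               # delayed copies of the input
            err = taps @ w - y[k]                    # instantaneous prediction error
            w -= lr * err * taps                     # LMS-style gradient step
    ```

    In a full space-time network, every connection in the backpropagation net carries such a temporal filter, and the tap weights are adapted along with the spatial weights.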