106 research outputs found

    Research on time-series forecasting using statistical models and neural networks

    Get PDF
    University of Toyama, doctoral thesis 富理工博甲第119号, 虞瑩, awarded 2017/03/23

    The narrative of dream reports

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Two questions are addressed: 1) whether a dream is meaningful as a whole, or whether the scenes are separate and unconnected, and 2) whether dream images are an epiphenomenon of a functional physiological process of REM sleep, or whether they are akin to waking thought. Theories of REM sleep as a period of information-processing are reviewed. This is linked with work on the relationship between dreaming and creativity, and between memory and imagery. Because of the persuasive evidence that REM sleep is implicated in the consolidation of memories, there is a review of recent work on neural associative network models of memory. Two theories of dreams based on these models are described, and predictions with regard to the above two questions are made. Psychological evidence of relevance to the neural network theories is extensively reviewed. These predictions are compared with those of the recent application of structuralism to the study of dreams, an extension from its usual field of mythology and anthropology. The different theories are tested against four nights of dreams recorded in a sleep lab. The analysis shows not only that dreams concretise waking concerns as metaphors but that these concerns are depicted in oppositional terms, such as, for example, inside/outside or revolving/static. These oppositions are then permuted from one dream to the next until a resolution of the initial concern is achieved at the end of the night. An account of the use of the single case-study methodology in psychology is given, in addition to a replication of the analysis of one night's dreams by five independent judges. There is an examination of objections to the structuralist methodology, and of objections to the paradigm of multiple dream awakenings.
The conclusion is drawn that dreams involve the unconscious, dialectical, step-by-step resolution of conflicts which are to a great extent consciously known to the subject. The similarity of dreams to day-dreams is explored, with the conclusion that the content of dreams is better explained by an account of the metaphors we use when awake and by our daily concerns than by reference to the physiology of REM sleep. It is emphasised that dreams can be meaningful even if they do not have a function.
Ann Murray Award Fun

    Cerebellar models of associative memory: Three papers from IEEE COMPCON spring 1989

    Get PDF
    Three papers are presented on the following topics: (1) a cerebellar-model associative memory as a generalized random-access memory; (2) theories of the cerebellum - two early models of associative memory; and (3) intelligent network management and functional cerebellum synthesis.

    Artificial neural networks for problems in computational cognition

    Get PDF
    Computationally modelling human-level cognitive abilities is one of the principal goals of artificial intelligence research, one that draws together work from the human neurosciences, psychology, cognitive science, computer science, and mathematics. In the past 30 years, work towards this goal has been substantially accelerated by the development of neural network approaches, at least in part due to advances in algorithms that can train these networks efficiently [Rumelhart et al., 1986b] and computer hardware that is optimised for matrix computations [Krizhevsky et al., 2012]. Parallel to this body of work, research in social robotics has developed to the extent that embodied and socially intelligent artificial agents are becoming parts of our everyday lives. Where robots were traditionally placed as tools to be used to improve the efficiency of a number of industrial tasks, now they are increasingly expected to emulate humans in complex, dynamic, and unpredictable social environments. In such cases, endowing these robotic platforms with (approaching) human–like cognitive capabilities will significantly improve the efficacy of these systems, and likely see their uptake quicken as they come to be seen as safe, effective, and flexible partners in socially oriented situations such as physical healthcare, education, mental well–being, and commerce. Taken together, it would seem that neural network approaches are well placed to allow us to bestow these agents with the kinds of cognitive abilities that they require to meet this goal. However, the nascent nature of the interaction of these two fields, and the risk that comes with integrating social robots too quickly into high-risk social areas, mean that there is significant work still to be done before we can convince ourselves that neural networks are the right approach to this problem.
In this thesis I contribute theoretical and empirical work that lends weight to the argument that neural network approaches are well suited to modelling human cognition for use in social robots. In Chapter 1 I provide a general introduction to human cognition and neural networks and motivate the use of these approaches to problems in social robotics and human–robot interaction. This chapter is written in such a way that readers with no technical background can get a good understanding of the concepts that are at the center of the thesis’ aims. In Chapter 2, I provide a more in-depth and technical overview of the mathematical concepts that are at the heart of modern neural networks, specifically detailing the logic behind the deep learning approaches that are used in the empirical chapters of the thesis. While a full understanding of this chapter requires a stronger mathematical background than the previous chapter, the concepts are explained in such a way that a non-technical reader should come out of it with a solid high-level understanding of these ideas. Chapters 3 through 5 contain the empirical work that was carried out in order to attempt to answer the above questions. Specifically, Chapter 3 explores the viability of using deep learning as an approach to modelling human social-cognitive abilities by looking at the problems of subjective psychological stress and self-disclosure. I test a number of “off-the-shelf” deep learning architectures on a novel dataset and find that in all cases these models are able to score significantly above average on the task of classifying audio segments in relation to how stressed the person producing the utterance believed themselves to be, and how much they believed they were performing an act of self-disclosure.
In Chapter 4, I develop the work on subjective self-disclosure modelling in human–robot social interaction by collecting a much larger multimodal dataset that contains video-recorded interactions between participants and a Pepper robot. I provide a novel multimodal deep learning attention architecture and a custom loss function, and compare the performance of our model to a number of non-neural-network baselines. I find that all versions of our model significantly outperform the baseline approaches, and that our novel loss improves performance compared to other standard loss functions for regression and classification in subjective self-disclosure modelling. In Chapter 5, I move away from deep learning and consider how neural network models based more concretely on contemporary computational neuroscience might be used to endow artificial agents with human-like cognitive abilities. Here, I detail a novel biological neural network algorithm that solves cognitive planning problems by producing short path solutions on graphs. I show how a number of such planning problems can be framed as graph traversal problems, and how our algorithm forms solutions to these problems in a number of experimental settings. Finally, in Chapter 6 I provide a final overview of this empirical work and explain its impact both within and outside academia, before outlining a number of limitations of the approaches used and discussing some potentially fruitful avenues for future research in these areas.
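The planning-as-graph-traversal framing described for Chapter 5 can be illustrated independently of the biological network itself: once states are nodes and actions are edges, a plan is just a short path between two nodes. The toy "rooms" problem and the breadth-first search below are a generic sketch of that reduction, not the thesis's algorithm.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns one shortest path (list of nodes)
    from start to goal in an unweighted graph, or None if unreachable."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk parents back to the start to recover the path.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# A hypothetical planning problem: rooms as states, doors as actions.
rooms = {
    "hall": ["kitchen", "study"],
    "kitchen": ["hall", "garden"],
    "study": ["hall", "garden"],
    "garden": ["kitchen", "study"],
}
```

Here `shortest_path(rooms, "hall", "garden")` returns a three-room plan; any planning task whose state space fits this node-and-edge shape can be solved the same way.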

    History and Philosophy of Neural Networks

    Get PDF
    This chapter conceives the history of neural networks as emerging from two millennia of attempts to rationalise and formalise the operation of mind. It begins with a brief review of early classical conceptions of the soul, seating the mind in the heart; then discusses the subsequent Cartesian split of mind and body, before moving to analyse in more depth the twentieth-century hegemony identifying mind with brain; the identity that gave birth to the formal abstractions of brain and intelligence we know as ‘neural networks’. The chapter concludes by analysing this identity - of intelligence and mind with mere abstractions of neural behaviour - by reviewing various philosophical critiques of formal connectionist explanations of ‘human understanding’, ‘mathematical insight’ and ‘consciousness’; critiques which, if correct, in an echo of Aristotelian insight, suggest that cognition may be more profitably understood not just as a result of [mere abstractions of] neural firings, but as a consequence of real, embodied neural behaviour, emerging in a brain, seated in a body, embedded in a culture and rooted in our world; the so-called 4Es approach to cognitive science: the Embodied, Embedded, Enactive, and Ecological conceptions of mind.

    Recurrent neural network for optimization with application to computer vision.

    Get PDF
    by Cheung Kwok-wai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves [146-154]).
    Chapter 1 Introduction
      1.1 Programmed computing vs. neurocomputing
      1.2 Development of neural networks - feedforward and feedback models
      1.3 State of the art of applying recurrent neural networks to computer vision problems
      1.4 Objective of the Research
      1.5 Plan of the thesis
    Chapter 2 Background
      2.1 Short history of the development of Hopfield-like neural networks
      2.2 Hopfield network model
        2.2.1 Neuron's transfer function
        2.2.2 Updating sequence
      2.3 Hopfield energy function and network convergence properties
      2.4 Generalized Hopfield network
        2.4.1 Network order and generalized Hopfield network
        2.4.2 Associated energy function and network convergence property
        2.4.3 Hardware implementation considerations
    Chapter 3 Recurrent neural network for optimization
      3.1 Mapping to Neural Network formulation
      3.2 Network stability versus Self-reinforcement
        3.2.1 Quadratic problem and Hopfield network
        3.2.2 Higher-order case and reshaping strategy
        3.2.3 Numerical Example
      3.3 Local minimum limitation and existing solutions in the literature
        3.3.1 Simulated Annealing
        3.3.2 Mean Field Annealing
        3.3.3 Adaptively changing neural network
        3.3.4 Correcting Current Method
      3.4 Conclusions
    Chapter 4 A Novel Neural Network for Global Optimization - Tunneling Network
      4.1 Tunneling Algorithm
        4.1.1 Description of Tunneling Algorithm
        4.1.2 Tunneling Phase
      4.2 A Neural Network with tunneling capability - Tunneling network
        4.2.1 Network Specifications
        4.2.2 Tunneling function for Hopfield network and the corresponding updating rule
      4.3 Tunneling network stability and global convergence property
        4.3.1 Tunneling network stability
        4.3.2 Global convergence property
          4.3.2.1 Markov chain model for Hopfield network
          4.3.2.2 Classification of the Hopfield Markov chain
          4.3.2.3 Markov chain model for tunneling network and its convergence towards the global minimum
        4.3.3 Variation of pole strength and its effect
          4.3.3.1 Energy Profile analysis
          4.3.3.2 Size of attractive basin and pole strength required
          4.3.3.3 A new type of pole eases the implementation problem
      4.4 Simulation Results and Performance comparison
        4.4.1 Simulation Experiments
        4.4.2 Simulation Results and Discussions
          4.4.2.1 Comparisons on optimal path obtained and the convergence rate
          4.4.2.2 On decomposition of Tunneling network
      4.5 Suggested hardware implementation of Tunneling network
        4.5.1 Tunneling network hardware implementation
        4.5.2 Alternative implementation theory
      4.6 Conclusions
    Chapter 5 Recurrent Neural Network for Gaussian Filtering
      5.1 Introduction
        5.1.1 Silicon Retina
        5.1.2 An Active Resistor Network for Gaussian Filtering of Images
        5.1.3 Motivations for using a recurrent neural network
        5.1.4 Difference between the active resistor network model and the recurrent neural network model for Gaussian filtering
      5.2 From Problem formulation to Neural Network formulation
        5.2.1 One-Dimensional Case
        5.2.2 Two-Dimensional Case
      5.3 Simulation Results and Discussions
        5.3.1 Spatial impulse response of the 1-D network
        5.3.2 Filtering property of the 1-D network
        5.3.3 Spatial impulse response of the 2-D network and some filtering results
      5.4 Conclusions
    Chapter 6 Recurrent Neural Network for Boundary Detection
      6.1 Introduction
      6.2 From Problem formulation to Neural Network formulation
        6.2.1 Problem Formulation
        6.2.2 Recurrent Neural Network Model used
        6.2.3 Neural Network formulation
      6.3 Simulation Results and Discussions
        6.3.1 Feasibility study and Performance comparison
        6.3.2 Smoothing and Boundary Detection
        6.3.3 Convergence improvement by network decomposition
        6.3.4 Hardware implementation considerations
      6.4 Conclusions
    Chapter 7 Conclusions and Future Research
      7.1 Contributions and Conclusions
      7.2 Limitations and Suggested Future Research
    References
    Appendix I: The assignment of the boundary connections of the 2-D recurrent neural network for Gaussian filtering
    Appendix II: Formula for connection weight assignment of the 2-D recurrent neural network for Gaussian filtering, with proof of the symmetric property
    Appendix III: Details of the reshaping strategy
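The Hopfield machinery this thesis builds on (an energy function E = -1/2 Σᵢⱼ wᵢⱼ sᵢ sⱼ, asynchronous updates, convergence to local minima) can be sketched compactly. The pattern, Hebbian weights, and update loop below are a generic textbook illustration, not the thesis's optimization formulation.

```python
import numpy as np

def energy(W, s):
    # Hopfield energy E = -1/2 * s^T W s (symmetric W, zero diagonal).
    return -0.5 * s @ W @ s

def run(W, s, steps=100, seed=0):
    """Asynchronous dynamics: repeatedly set one neuron to the sign of
    its local field. With symmetric W and zero diagonal, each flip can
    only lower (or keep) the energy, so the state settles in a minimum."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Store one pattern with the Hebbian outer-product rule.
p = np.array([1.0, -1.0, 1.0, -1.0])
W = np.outer(p, p)
np.fill_diagonal(W, 0)
```

Starting `run` from a corrupted copy of `p` drives the state back to the stored pattern, which is the associative-memory reading; the optimization reading in Chapter 3 instead engineers W so that the energy minima encode solutions of the problem being solved.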