10 research outputs found

    Impact of Architecture on Forming Our Personal Memories

    Get PDF
    Time underlies many interesting human behaviors, so the question of how to represent time in connectionist models is critical. One approach is to represent time implicitly, by its effects on processing, rather than explicitly (as in a spatial representation). Memory is the principal process by which humans work through time, and many studies have focused on the memory of architecture. The classic is "The Art of Memory" by Frances Yates (1966), a study of how people learned to retain vast stores of knowledge before the invention of the printed page. Yates traced the art of memory from its treatment by Greek orators, through its Gothic transformations in the Middle Ages, to the occult forms it took in the Renaissance, and finally to its uses in the seventeenth century. The first study to relate the art of memory to the history of architecture and to culture as a whole, it was revolutionary when it first appeared and continues to mesmerize readers with its clear insights. In other words, Yates described how people used architecture to aid their memories: architecture can bring back many memories. The aim of this study is to examine the relationship between our individual memories and architecture, and whether a continuing familiarity with place persists within our minds, the two being inextricably bound. Keywords: Architecture, Spatial representation, Memory, Encoding Images, Retrieval Images

    Locally connected recurrent neural networks.

    Get PDF
    by Evan, Fung-yu Young. Thesis (M.Phil.), Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves 161-166). Contents: List of Figures; List of Tables; List of Graphs; Abstract.
    Part I --- Learning Algorithms
    Chapter 1 --- Representing Time in Connectionist Models: Introduction; Temporal Sequences (Recognition, Reproduction, and Generation Tasks); Discrete Time vs. Continuous Time; Time Delay Neural Network (TDNN): Delay Elements in the Connections, NETtalk: An Application of TDNN, Drawbacks of TDNN; Networks with Context Units: Jordan's Network, Elman's Network, Other Architectures, Drawbacks of Using Context Units; Recurrent Neural Networks: Hopfield Models, Fully Recurrent Neural Networks, Examples of Using Recurrent Networks; Our Objective
    Chapter 2 --- Learning Algorithms for Recurrent Neural Networks: Introduction; Gradient Descent Methods: Backpropagation Through Time (BPTT), Real Time Recurrent Learning Rule (RTRL) (with Teacher Forcing, Terminal Teacher Forcing, and Continuous Time RTRL), Variants of RTRL (Subgrouped RTRL; A Fixed-Size Storage, O(n^3) Time Complexity Learning Rule); Non-Gradient Descent Methods: Neural Bucket Brigade (NBB), Temporal Driven Method (TD); Comparison between Different Approaches; Conclusion
    Chapter 3 --- Locally Connected Recurrent Networks: Introduction; Locally Connected Recurrent Networks: Network Topology, Subgrouping, Learning Algorithm, Continuous Time Learning Algorithm; Analysis: Time Complexity, Space Complexity, Local Computations in Time and Space; Running on Parallel Architectures: Mapping the Algorithm to Parallel Architectures, Parallel Learning Algorithm, Analysis; Ring-Structured Recurrent Network (RRN); Comparison between RRN and RTRL in Sequence Recognition: Training Sets and Testing Sequences, Training Speed, Recalling Power; Comparison between RRN and RTRL in Time Series Prediction: Training Speed, Predictive Power; Conclusion
    Part II --- Applications
    Chapter 4 --- Sequence Recognition by Ring-Structured Recurrent Networks: Introduction; Related Works: Feedback Multilayer Perceptron (FMLP), Back Propagation Unfolded Recurrent Rule (BURR); Experimental Details: Network Architecture, Input/Output Representations, Training Phase, Recalling Phase; Experimental Results: Temporal Memorizing Power, Time Warping Performance, Fault Tolerance, Learning Rate; Time Delay; Conclusion
    Chapter 5 --- Time Series Prediction: Introduction; Modelling in Feedforward Networks; Methodology with Recurrent Networks: Network Structure, Model Building (Training), Model Diagnosis (Testing); Training Paradigms: A Quasiperiodic Series with White Noise, A Chaotic Series, Sunspots Numbers, Hang Seng Index; Experimental Results and Discussions: A Quasiperiodic Series with White Noise, Logistic Map, Sunspots Numbers, Hang Seng Index; Conclusion
    Chapter 6 --- Chaos in Recurrent Networks: Introduction; Important Features of Chaos: First Return Map, Long Term Unpredictability, Sensitivity to Initial Conditions (SIC), Strange Attractor; Chaotic Behaviour in Recurrent Networks: Network Structure, Dynamics in Training, Dynamics in Testing; Experiments and Discussions: Henon Model, Lorenz Model; Conclusion
    Chapter 7 --- Conclusion
    Appendices: A. Series 1: Sine Function with White Noise; B. Series 2: Logistic Map; C. Series 3: Sunspots Numbers from 1700 to 1979; D. A Quasiperiodic Series with White Noise; E. Hang Seng Daily Closing Index in 1991; F. Network Model for the Quasiperiodic Series with White Noise; G. Network Model for the Logistic Map; H. Network Model for the Sunspots Numbers; I. Network Model for the Hang Seng Index; J. Henon Model; K. Network Model for the Henon Map; L. Lorenz Model; M. Network Model for the Lorenz Map
    Bibliography

    Hierarchical Temporal Memory Cortical Learning Algorithm for Pattern Recognition on Multi-core Architectures

    Get PDF
    Strongly inspired by an understanding of mammalian cortical structure and function, the Hierarchical Temporal Memory Cortical Learning Algorithm (HTM CLA) is a promising new approach to problems of recognition and inference in space and time. Only a subset of the theoretical framework of this algorithm has been studied, but it is already clear that more information is needed about the performance of HTM CLA with real data and about the associated computational costs. For the work presented here, a complete implementation of Numenta's current algorithm was done in C++. In validating the implementation, first- and higher-order sequence learning was briefly examined, as was algorithm behavior with noisy data in simple pattern recognition. A pattern recognition task was created using sequences of handwritten digits, and a performance analysis of the sequential implementation was carried out. The analysis indicates that the rapid increase in computing load may limit the algorithm's scalability, which may, in turn, be an obstacle to its widespread adoption. Two critical hotspots in the sequential code were identified, and a parallelized version was developed using OpenMP multi-threading. A scalability analysis of the parallel implementation was performed on a state-of-the-art multi-core computing platform. Modest speedup was readily achieved with straightforward parallelization. Parallelization on multi-core systems is an attractive choice for moderately sized applications, but significantly larger ones are likely to remain infeasible without more specialized hardware acceleration accompanied by optimizations to the algorithm.
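    The abstract does not name the two hotspots, so the following is only a hedged sketch of the kind of loop that parallelizes this readily (the function names and data layout are assumptions, not Numenta's code): an HTM-style "overlap" pass, where each column independently counts active bits in its receptive field, is a data-parallel loop of exactly the shape OpenMP's parallel-for targets, shown here with a Python thread pool.

    ```python
    # Illustrative sketch only (not Numenta's implementation): each column's
    # overlap score depends only on its own synapse list, so the per-column
    # loop can be farmed out to worker threads with no shared mutable state.
    from concurrent.futures import ThreadPoolExecutor

    def overlap(column_synapses, active):
        # Count how many of this column's connected inputs are currently active.
        return sum(1 for s in column_synapses if s in active)

    def compute_overlaps(columns, active_input, workers=4):
        active = set(active_input)  # O(1) membership tests inside the hot loop
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda syns: overlap(syns, active), columns))

    columns = [[0, 1, 2], [2, 3], [4, 5, 6], [1, 4]]
    print(compute_overlaps(columns, active_input=[1, 2, 4]))  # [2, 1, 1, 2]
    ```

    In C++ the same loop body under `#pragma omp parallel for` gives the "straightforward parallelization" the abstract describes; the modest speedup reflects memory-bandwidth limits rather than the loop structure.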

    ON META-NETWORKS, DEEP LEARNING, TIME AND JIHADISM

    Get PDF
    Jihadist terrorism represents a global threat to societies and a challenge for scientists interested in understanding its complexity. This complexity continuously calls for developments in terrorism research. Enhancing the empirical knowledge of the phenomenon can potentially contribute to developing concrete real-world applications and, ultimately, to the prevention of societal damage. In light of these aspects, this work presents a novel methodological framework that integrates network science, mathematical modeling, and deep learning to shed light on jihadism at both the explanatory and predictive levels. Specifically, this dissertation compares and analyzes the world's most active jihadist terrorist organizations (i.e. the Islamic State, the Taliban, Al Qaeda, Boko Haram, and Al Shabaab) to investigate their behavioral patterns and forecast their future actions.
    Building upon a theoretical framework that relies on the spatial concentration of terrorist violence and the strategic perspective of terrorist behavior, the dissertation pursues three linked tasks, employing three hybrid techniques. Firstly, it explores the operational complexity of jihadist organizations using stochastic transition matrices and presents Normalized Transition Similarity, a novel coefficient of pairwise similarity in terms of strategic behavior. Secondly, it investigates the presence of time-dependent dynamics in attack sequences using Hawkes point processes. Thirdly, it integrates complex meta-networks and deep learning to rank and forecast the targets most likely to be attacked by jihadist groups in the future. Concerning the results, the stochastic transition matrices show that terrorist groups possess a complex repertoire of combinations in their use of weapons and targets. Furthermore, the Hawkes models indicate the diffuse presence of self-excitability in attack sequences. Finally, forecasting models that exploit the flexibility of graph-derived time series and Long Short-Term Memory networks provide promising results in terms of correctly predicting the most likely terrorist targets. Overall, this research seeks to reveal how hidden abstract connections between events can be exploited to unveil jihadist mechanics, and how memory-like processes (i.e. multiple non-random, parallel, and interconnected recurrent behaviors) might illuminate the way in which these groups act.
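    The self-excitability the Hawkes models test for can be stated compactly. In a univariate Hawkes process, each past event at time t_i temporarily raises the current event rate above a baseline mu; a common kernel is exponential decay. The sketch below illustrates only this standard intensity function, with illustrative parameter values (the dissertation's actual specification and estimates are not given in the abstract).

    ```python
    # Conditional intensity of a self-exciting (Hawkes) point process with an
    # exponential kernel: lambda(t) = mu + sum over past events t_i < t of
    # alpha * exp(-beta * (t - t_i)). Parameters here are illustrative only.
    import math

    def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
        # Baseline rate plus the decaying excitation left by each past event.
        return mu + sum(alpha * math.exp(-beta * (t - ti))
                        for ti in event_times if ti < t)

    events = [1.0, 1.5, 4.0]          # hypothetical attack times
    print(hawkes_intensity(2.0, events))   # elevated, shortly after a burst
    print(hawkes_intensity(20.0, events))  # decayed back toward the baseline
    ```

    Self-excitability then means the fitted alpha is significantly positive: attacks cluster in time because each attack raises the short-run probability of another.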

    Conceptual-associative system in Aboriginal English : a study of Aboriginal children attending primary schools in metropolitan Perth

    Get PDF
    National measures of achievement among Australian school children suggest that Aboriginal students, considered as a group, are those most likely to end their schooling without achieving minimal acceptable levels of literacy and numeracy. In view of the fact that many Aboriginal students dwell in metropolitan areas and speak English as a first language, many educators have been unconvinced that linguistic and cultural differences have been significant factors in this underachievement. This study explores the possibility that, despite intensive exposure to non-Aboriginal society, Aboriginal students in metropolitan Perth may maintain, through a distinctive variety of English, distinctive conceptualisation which may help to account for their lack of success in education. The study first develops a model of conceptualisations that emerge at the group level of cognition. The model draws on the notion of distributed representation to depict what are here termed cultural conceptualisations. Cultural conceptualisations are conceptual structures such as schemas and categories that members of a cultural group draw on in approaching experience. The study employs this model with regard to Aboriginal and non-Aboriginal students attending schools in the Perth metropolitan area. A group of 30 Aboriginal primary school students and a matching group of non-Aboriginal students participated in this study. A research technique called Association-Interpretation was developed to tap into cultural conceptualisations across the two groups of participants. The technique was composed of two phases: a) the 'association' phase, in which the participants gave associative responses to a list of 30 everyday words such as 'home' and 'family', and b) the 'interpretation' phase, in which the responses were interpreted from an ethnic viewpoint and compared within and between the two groups. The informants participated in the task individually.
    The analysis of the data provided evidence for the operation of two distinct, but overlapping, conceptual systems among the two cultural groups studied. The two systems are integrally related to the dialects spoken by Aboriginal and non-Aboriginal Australians, that is, Aboriginal English and Australian English. The discrepancies between the two systems largely appear to be rooted in the cultural systems which give rise to the two dialects, while the overlap between the two conceptual systems appears to arise from several phenomena, such as experience in similar physical environments and access to a 'modern' lifestyle. A number of responses from non-Aboriginal informants suggest a case of what may be termed conceptual seepage, or a permeation of conceptualisation from one group to another due to contact. It is argued, in the light of the data from this study, that the notions of dialect and 'code-switching' need to be revisited, in that their characterisation has traditionally ignored the level of conceptualisation. It is also suggested that the results of this study have implications for the professional preparation of educators dealing with Aboriginal students.

    Time in Connectionist Models

    No full text

    Time in connectionist models

    No full text

    Finding structure in time

    No full text
    Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly, by its effects on processing, rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines, first described by Jordan (1986), which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (a temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
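    The feedback loop the abstract describes can be sketched in a few lines. This is a minimal illustration of the architecture only (sizes, random weights, and the untrained forward pass are assumptions; Elman's simulations also trained the weights): the hidden pattern is copied back as a "context" input, so each step is computed against the previous internal state. The input stream is a temporal XOR, where every third bit is the XOR of the previous two and is therefore predictable only from memory.

    ```python
    # Sketch of an Elman-style simple recurrent network (untrained, for
    # illustration): hidden units feed back to themselves via a context layer.
    import math, random

    random.seed(0)

    def rand_matrix(rows, cols):
        return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    class ElmanNet:
        def __init__(self, n_in, n_hidden, n_out):
            self.W_ih = rand_matrix(n_hidden, n_in)      # input -> hidden
            self.W_hh = rand_matrix(n_hidden, n_hidden)  # context -> hidden
            self.W_ho = rand_matrix(n_out, n_hidden)     # hidden -> output
            self.context = [0.0] * n_hidden              # prior internal state

        def step(self, x):
            # Hidden activation depends on the input AND the previous hidden
            # pattern, giving the network a dynamic memory.
            pre = [a + b for a, b in zip(matvec(self.W_ih, x),
                                         matvec(self.W_hh, self.context))]
            h = [math.tanh(p) for p in pre]
            self.context = h  # feed the hidden pattern back to itself
            return [1 / (1 + math.exp(-o)) for o in matvec(self.W_ho, h)]

    # Temporal XOR stream: bits[2] = bits[0]^bits[1], bits[5] = bits[3]^bits[4], ...
    bits = [1, 0, 1, 0, 0, 0, 1, 1, 0]
    net = ElmanNet(n_in=1, n_hidden=4, n_out=1)
    preds = [net.step([b])[0] for b in bits]  # one prediction per time step
    ```

    Training (e.g. by backpropagation treating the context as a fixed extra input at each step, as in Elman's report) would shape these outputs so that the predictable every-third bit is anticipated while the random bits are not.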