364 research outputs found

    Pattern memory analysis based on stability theory of cellular neural networks

    Abstract: In this paper, several sufficient conditions are obtained to guarantee that an n-dimensional cellular neural network can have up to 2^n stable memory patterns. In addition, estimates of the domains of attraction of these stable memory patterns are obtained. These conditions are derived directly from the parameters of the neural network and are easily verified. A new design procedure for cellular neural networks is developed based on stability theory (rather than the well-known perceptron training algorithm), and the convergence of the new design procedure is guaranteed by the obtained local stability theorems. Finally, the validity and performance of the obtained results are illustrated by two examples
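    As a minimal illustration of the cell dynamics behind such memory patterns (this is the standard Chua-Yang state equation with a hypothetical self-feedback template, not the paper's design procedure), the sketch below integrates dx/dt = -x + A·y(x) with the piecewise-linear output and shows the cells settling into a binary pattern:

```python
import numpy as np

def cnn_simulate(x0, A, steps=2000, dt=0.01):
    """Euler integration of the Chua-Yang CNN state equation
    dx/dt = -x + A @ y(x), with the standard piecewise-linear output."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # output nonlinearity
        x += dt * (-x + A @ y)
    return np.sign(x)

# Self-feedback > 1 makes each cell bistable, so an n-cell network
# can store binary patterns as locally stable equilibria.
A = 2.0 * np.eye(4)          # hypothetical template: pure self-feedback
print(cnn_simulate([0.3, -0.2, 0.8, -0.9], A))  # → [ 1. -1.  1. -1.]
```

    With self-feedback greater than 1, each cell is bistable, which is what allows up to 2^n binary equilibria in an n-cell network.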

    Split and Shift Methodology: Overcoming Hardware Limitations on Cellular Processor Arrays for Image Processing

    In the multimedia era, image processing has become a singularly important element of electronic devices. From communications (e.g. telemedicine), security (e.g. retinal recognition), and quality control of industrial processes (e.g. orientation of articulated arms, detection of product defects), through research (e.g. tracking of elementary particles) and medical diagnosis (e.g. detection of abnormal cells, identification of retinal veins), there are countless applications where automatic image processing and interpretation are fundamental. The ultimate goal is the design of vision systems with decision-making capability. Current trends additionally require combining these capabilities in small, portable devices with real-time response. This poses new challenges in both hardware and software design for image processing, in search of new structures or architectures with the smallest possible area and power consumption without compromising functionality or performance

    Non-Uniform Cellular Neural Network and its Applications

    Cellular neural networks (CNNs) come in continuous-time and discrete-time forms; this work is mainly concerned with the latter. The CNN was proposed in 1988 by Prof. L. O. Chua and colleagues at the University of California, Berkeley, and is now actively studied, particularly in the United States and Europe. Unlike conventional neural networks, each cell of a CNN is coupled only to its neighbours, which makes integrated-circuit implementation easy, so CNNs have attracted attention for image processing. Chapter 1 briefly reviews research trends in neural networks, work on continuous-time CNNs with processing functions similar to those of the human eye, and the background of the discrete-time CNNs discussed in this thesis. Chapter 2 proposes a two-phase clocked circuit model as a discrete-time non-uniform CNN and discusses its stability; since each cell can be realized with a single two-phase clock signal, the model is well suited to VLSI implementation. The operating ranges of the state and output voltages are first derived from the model's operating principle, which is important for designing physically realizable CNNs; a Lyapunov function is then defined from an energy function, and the condition that it decrease monotonically in time yields a design method for globally stable discrete-time CNNs. Chapter 3 discusses methods for finding the equilibrium points of nonlinear systems. A CNN used for associative memory has many equilibrium points, and the input signal determines which one is reached, so examining these equilibria is necessary for designing robust associative-memory CNNs. An algorithm for finding multiple solutions based on solution-curve tracing is proposed; to track sharp changes in the solution curve efficiently it is built on a Hermite predictor and BDF integration formulas, and it adopts Brown's iterative method in place of the Newton-Raphson method so that it can be applied to large-scale systems. Adopting this algorithm makes robust CNN design possible. Chapter 4 describes associative memory with discrete-time CNNs. Associative memory is a basic function of the human brain and has long been studied as an application of neural networks. Two storage schemes are proposed and analysed: an outer-product learning algorithm and a midpoint-mapping algorithm. The former, based on Hebb's theory, sets the weight matrix representing the connections between neurons so that the energy function is minimized for the input patterns, and conditions under which patterns learned in this way can be recalled are discussed. The midpoint-mapping algorithm defines a neighbourhood around each centre cell, represents the states of the cells in that neighbourhood as a vector for every pattern, and sets the weight matrix so that the cell pattern mapped by the resulting matrix coincides with that of the original centre cell; mathematically it rests on the theory of generalized inverse matrices. A distinctive feature of this learning method is that every input image can be recalled, which the chapter demonstrates with application examples. Chapter 5 describes discrete-time CNNs for image-processing applications: contour extraction, noise removal, and recognition of visual patterns. Many results show that processing time is drastically shorter than with conventional methods, and that a non-uniform discrete-time CNN can recognize many different visual patterns in a single frame simultaneously. Chapter 6 summarizes the features of non-uniform discrete-time CNNs and remaining open problems
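    The outer-product learning rule described for Chapter 4 is essentially Hebbian. A minimal sketch (a Hopfield-style simplification with synchronous updates, not the thesis's exact discrete-time CNN) of storing bipolar patterns by the outer-product rule and recalling one from a corrupted probe:

```python
import numpy as np

def outer_product_weights(patterns):
    """Hebbian outer-product rule: W = sum_k p_k p_k^T with zero diagonal,
    the classical way to store bipolar patterns in a recurrent net."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, iters=10):
    """Synchronous sign-threshold updates; ties keep the previous state."""
    s = np.array(probe, dtype=float)
    for _ in range(iters):
        h = W @ s
        s = np.where(h > 0, 1.0, np.where(h < 0, -1.0, s))
    return s

stored = [[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]]
W = outer_product_weights(stored)
noisy = [1, -1, 1, -1, -1, -1]   # stored[0] with one flipped bit
print(recall(W, noisy))          # recovers the first stored pattern
```

    The energy-minimization view in the thesis corresponds to the fact that each such update never increases the network's energy function.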

    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which memory is stored as attractive fixed points at discrete locations in state space. A nonlinear line of attraction encapsulates attractive fixed points scattered in state space as a single attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. Guaranteeing the convergence of the recurrent network's dynamics is usually of prime importance for associative learning and recall. We propose to alter this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. The conception of the nonlinear line attractor network's dynamics operating between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior, which can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space using the memory matrices learned by the nonlinear line attractor network. Development of a system for affective state computation is also presented. This system is capable of extracting the user's mental state in real time using a low-cost computer, and it has been successfully interfaced with an advanced learning environment for human-computer interaction
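    As a rough geometric intuition for the line-attractor idea (a deliberate linear simplification; the dissertation's line of attraction is nonlinear and learned through memory matrices), one can fit a straight line through a family of similar patterns and use the residual distance from that line as the familiarity/divergence signal:

```python
import numpy as np

def fit_line(patterns):
    """Fit a 1-D affine subspace (a straight line) through a family of
    patterns: a linear stand-in for a nonlinear line of attraction."""
    X = np.array(patterns, dtype=float)
    mean = X.mean(axis=0)
    # The leading right-singular vector spans the line's direction.
    _, _, vt = np.linalg.svd(X - mean)
    return mean, vt[0]

def distance_to_line(x, mean, direction):
    """Residual after projecting onto the line; a large value flags an
    unfamiliar pattern (the 'divergence' indicator in the text)."""
    d = np.asarray(x, dtype=float) - mean
    return np.linalg.norm(d - (d @ direction) * direction)

family = [[0, 0], [1, 1.1], [2, 1.9], [3, 3.05]]   # roughly collinear
mean, direction = fit_line(family)
print(distance_to_line([2.0, 2.0], mean, direction) < 0.5)   # near the line: familiar
print(distance_to_line([5.0, -4.0], mean, direction) > 2.0)  # far off: unfamiliar
```

    In the dissertation's scheme, such an "unfamiliar" signal would trigger the creation of a new line for the novel pattern family.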

    Spatial Representations in the Entorhino-Hippocampal Circuit

    After a general introduction and a brief review of the available experimental data on spatial representations (chapter 2), this thesis is divided into two main parts. The first part, comprising chapters 3 to 6, is dedicated to grid cells. In chapter 3 we present and discuss the various models proposed to explain grid cell formation. In chapters 4 and 5 we study our adaptation-based model of grid cell generation in non-planar environments, namely a spherical environment and three-dimensional space. In chapter 6 we propose a variant of the model in which the alignment of the grid axes is induced through reciprocal inhibition, and we suggest that the inhibitory connections obtained during this learning process can be used to implement a continuous attractor in mEC. The second part, comprising chapters 7 to 10, is instead focused on place cell representations. In chapter 7 we analyze the differences between place cells and grid cells in terms of information content; in chapter 8 we describe the properties of attractor dynamics in our model of the CA3 network; and in the following chapter we study the effects of theta oscillations on network dynamics. Finally, in chapter 10 we analyze to what extent the learning of a new representation can preserve the topology and the exact metric of physical space

    A new class of neural architectures to model episodic memory : computational studies of distal reward learning

    A computational cognitive neuroscience model of episodic memory, based on the mammalian brain, is proposed. A computational neural architecture instantiates the proposed model and is tested on a particular task of distal reward learning, with the architecture design informed by Categorical Neural Semantic Theory. To experiment on the computational brain model, an embodiment and an environment in which the embodiment exists are simulated. The simulated environment realizes the Morris Water Maze task, a well-established biological experimental test of distal reward learning; the embodied neural architecture is treated as a virtual rat and the environment it acts in as a virtual water tank. Performance of the neural architectures is evaluated through analysis of embodied behavior in the distal reward learning task, with comparison to biological rat experimental data as well as to other published models. In addition, differences in performance between the normal and categorically informed versions of the architecture are compared

    Memristive Computing

    Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is not yet clear which applications would benefit most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited so that memristors perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications; in particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
    The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure
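    The device behaviour described above can be sketched with the linear ion-drift model commonly used for the HP Labs device; the parameter values here are illustrative, not taken from the thesis:

```python
import numpy as np

def simulate_memristor(v, dt=1e-4, r_on=100.0, r_off=16e3,
                       mu=1e-14, d=1e-8, w0=0.5):
    """Linear ion-drift memristor model: memristance
    M = r_on*w + r_off*(1 - w), with the doped fraction w in [0, 1]
    driven by the current, dw/dt = mu * r_on / d**2 * i."""
    w, i_hist, m_hist = w0, [], []
    k = mu * r_on / d**2
    for vt in v:
        m = r_on * w + r_off * (1.0 - w)   # instantaneous memristance
        i = vt / m
        w = np.clip(w + dt * k * i, 0.0, 1.0)
        i_hist.append(i)
        m_hist.append(m)
    return np.array(i_hist), np.array(m_hist)

t = np.arange(0, 0.1, 1e-4)
v = 1.0 * np.sin(2 * np.pi * 10 * t)        # 10 Hz sinusoidal drive
current, memristance = simulate_memristor(v)
# The resistance swings with the drive but stays within physical bounds:
print(memristance.min() >= 100.0, memristance.max() <= 16e3)
```

    Plotting `current` against `v` would trace the pinched hysteresis loop that is the memristor's signature; the state variable `w` is what makes the device a nonvolatile memory.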

    Network analysis of the cellular circuits of memory

    Intuitively, memory is conceived as a collection of static images that we accumulate as we experience the world. But actually, memories are constantly changing through our life, shaped by our ongoing experiences. Assimilating new knowledge without corrupting pre-existing memories is then a critical brain function. However, learning and memory interact: prior knowledge can proactively influence learning, and new information can retroactively modify memories of past events. The hippocampus is a brain region essential for learning and memory, but the network-level operations that underlie the continuous integration of new experiences into memory, segregating them as discrete traces while enabling their interaction, are unknown. Here I show a network mechanism by which two distinct memories interact. Hippocampal CA1 neuron ensembles were monitored in mice as they explored a familiar environment before and after forming a new place-reward memory in a different environment. By employing a network science representation of the co-firing relationships among principal cells, I first found that new associative learning modifies the topology of the cells’ co-firing patterns representing the unrelated familiar environment. I further observed that these neuronal co-firing graphs evolved along three functional axes: the first segregated novelty; the second distinguished individual novel behavioural experiences; while the third revealed cross-memory interaction. Finally, I found that during this process, high activity principal cells rapidly formed the core representation of each memory; whereas low activity principal cells gradually joined co-activation motifs throughout individual experiences, enabling cross-memory interactions. These findings reveal an organizational principle of brain networks where high and low activity cells are differentially recruited into coactivity motifs as building blocks for the flexible integration and interaction of memories.
    Finally, I employ a set of manifold learning and related approaches to explore and characterise the complex neural population dynamics within CA1 that underlie simple exploration
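    A co-firing graph of the kind used in this work can be sketched from binned spike counts; the thresholded-correlation construction below is a generic simplification for illustration, not the thesis's exact method:

```python
import numpy as np

def cofiring_graph(spike_counts, threshold=0.3):
    """Build a binary co-firing graph from binned spike counts
    (cells x time bins): edge (i, j) exists if the Pearson correlation
    of the two cells' count series exceeds the threshold."""
    corr = np.corrcoef(spike_counts)
    adj = corr > threshold
    np.fill_diagonal(adj, False)   # no self-edges
    return adj

rng = np.random.default_rng(0)
base = rng.poisson(5, size=200)            # shared drive, 200 time bins
cells = np.vstack([
    base + rng.poisson(1, 200),            # two cells sharing the drive...
    base + rng.poisson(1, 200),
    rng.poisson(5, size=(2, 200)),         # ...and two firing independently
])
adj = cofiring_graph(cells)
print(adj[0, 1], adj[0, 2])   # coupled pair linked; independent pair not
```

    Graph-theoretic measures (degree, motifs, community structure) computed on such an adjacency matrix are what evolve along the "functional axes" described above.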

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization

    A Novel Cloning Template Designing Method by Using an Artificial Bee Colony Algorithm for Edge Detection of CNN Based Imaging Sensors

    Cellular Neural Networks (CNNs) have recently been widely used in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware-based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Desired CNN behavior such as edge detection can be achieved by correctly setting a cloning template without changing the structure of the CNN, so designing a cloning template that realizes different behaviors effectively is one of the most important research topics in this field. In this study, the edge detection process, used as a preliminary step for segmentation, identification and coding applications, is carried out using CNN structures. To design the cloning template of the goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm, inspired by the foraging behavior of honeybees, is used, and the performance of ABC for this application is examined over multiple runs. The CNN template generated by the ABC algorithm is tested on artificial and real test images. The results are compared subjectively and quantitatively with well-known classical edge detection methods and with other CNN-based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than the other methods
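    The ABC search itself is simple to sketch. The toy below minimizes a sphere function rather than a template-design cost (the paper applies the same search to the entries of a CNN cloning template), but the employed/onlooker/scout structure is the standard algorithm:

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=1):
    """Minimal Artificial Bee Colony: employed, onlooker, and scout
    phases over a population of candidate solutions ('food sources')."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_food, dim))
    fitness = np.apply_along_axis(f, 1, foods)
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        j = rng.integers(n_food - 1)
        j += j >= i                      # random partner k != i
        d = rng.integers(dim)
        cand = foods[i].copy()
        cand[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[j, d])
        return np.clip(cand, lo, hi)

    for _ in range(iters):
        # Employed phase on every source, then onlookers biased
        # toward better sources by roulette-wheel selection.
        probs = 1.0 / (1.0 + fitness)
        probs /= probs.sum()
        for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
            cand = neighbour(i)
            fc = f(cand)
            if fc < fitness[i]:
                foods[i], fitness[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that stopped improving.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi, dim)
            fitness[i] = f(foods[i])
            trials[i] = 0
    best = fitness.argmin()
    return foods[best], fitness[best]

x, fx = abc_minimize(lambda v: float(np.sum(v**2)), dim=3, bounds=(-5, 5))
print(fx)   # converges near the sphere minimum at the origin
```

    In the template-design setting, `f` would be an image-domain error between the CNN's output and a reference edge map, and each food source a candidate template.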