
    Modeling multiple object scenarios for feature recognition and classification using cellular neural networks

    Get PDF
    Cellular neural networks (CNNs) have been adopted in the spatio-temporal processing research field as a paradigm of complexity, owing to the ease with which these networks can be designed for complex spatio-temporal tasks. This has led to increasing adoption of CNNs for on-chip VLSI implementations. This dissertation proposes the use of a cellular neural network to model, detect, and classify objects appearing in multiple-object scenes. The proposed algorithm is based on image scene enhancement through anisotropic diffusion; object detection and extraction through binary edge detection and boundary tracing; and object classification through genetically optimised associative networks and texture histograms. The first classification method optimises the space-invariant feedback template of the zero-input network through genetic operators, while the second computes diffusion-filtered and modified histograms for object classes to generate decision boundaries that can be used to classify the objects. The primary goal is to design analogic algorithms that can perform these tasks. While the use of genetically optimised associative networks for object learning yields an efficiency of over 95%, the use of texture histograms has been found very accurate, though a better technique for histogram comparison still needs to be developed. The results obtained with these analogic algorithms affirm that CNNs are well suited to image processing tasks.
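The histogram-based classification idea can be sketched roughly as follows. This is an illustrative reconstruction, not the dissertation's algorithm: the bin count, the L1 distance, and the synthetic "smooth"/"rough" classes are assumptions, and the actual method uses diffusion-filtered and modified histograms with a different comparison technique.

```python
import numpy as np

def texture_histogram(image, bins=16):
    """Normalized intensity histogram of a grayscale patch (values in [0, 1])."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def classify(patch, class_histograms, bins=16):
    """Assign the class whose stored histogram is closest (L1 distance)."""
    h = texture_histogram(patch, bins)
    distances = {label: np.abs(h - ref).sum()
                 for label, ref in class_histograms.items()}
    return min(distances, key=distances.get)

# Hypothetical example: two synthetic texture classes
rng = np.random.default_rng(0)
smooth = rng.uniform(0.4, 0.6, (32, 32))   # narrow intensity spread
rough  = rng.uniform(0.0, 1.0, (32, 32))   # wide intensity spread
refs = {"smooth": texture_histogram(smooth), "rough": texture_histogram(rough)}

test_patch = rng.uniform(0.45, 0.55, (32, 32))
print(classify(test_patch, refs))  # → smooth
```

The L1 distance stands in for whatever improved histogram-comparison technique the dissertation calls for; swapping in chi-square or histogram intersection requires only changing the `distances` expression.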

    Non-Uniform Cellular Neural Network and its Applications

    Get PDF
    Cellular neural networks (CNNs) come in continuous-time and discrete-time varieties; this study mainly discusses the latter. The CNN was proposed in 1988 by Professor L. O. Chua and colleagues at the University of California, Berkeley, and research on it is now being actively pursued, chiefly in the United States and Europe. Unlike conventional neural networks, each cell in a CNN connects only to its neighboring cells, which makes integrated-circuit implementation straightforward; CNNs have therefore attracted attention for image processing.
    Chapter 1 briefly reviews trends in neural network research, trends in research on continuous-time CNNs, which process information much as the human eye does, and the background of the discrete-time CNNs discussed in this thesis.
    Chapter 2 proposes a two-phase clocked circuit model as a non-uniform discrete-time CNN and discusses its stability. Because each cell of this model can be realized with a single two-phase clock signal, it has the advantage of easy VLSI implementation. First, the operating regions of the state and output voltages are derived from the model's operating principle; this is important for designing physically realizable CNNs. Next, to discuss stability, a Lyapunov function is defined from an energy function, and a design method for globally stable discrete-time CNNs is derived from the condition that this function decreases monotonically in time.
    Chapter 3 discusses methods for finding the equilibrium points of nonlinear systems. A CNN used for associative memory has many equilibrium points, and the input signal determines which equilibrium point is reached. Designing a robust associative-memory CNN requires examining these equilibria. An algorithm for finding multiple solutions based on solution-curve tracing is proposed. To follow sharp changes in the solution curve efficiently, the algorithm is based on a Hermite predictor and the BDF integration formula, and it adopts Brown's iterative method instead of the Newton-Raphson method so that it can be applied to large-scale systems. Adopting such an algorithm makes robust CNN design possible.
    Chapter 4 describes associative memory with discrete-time CNNs. Associative memory is a basic function of the human brain and has long been an active topic in applied neural-network research. Two storage schemes using discrete CNNs are proposed and analyzed: an outer-product learning algorithm and a midpoint-mapping algorithm. The former sets the weight matrix describing the connections between neurons so that the energy function is minimized for the input patterns; it is based on Hebb's theory. Conditions under which patterns learned in this way can be recalled are also discussed. The midpoint-mapping algorithm defines a neighborhood around the central cell under consideration and represents the states of the cells in that neighborhood as a vector; doing this for all patterns, the weight matrix is set so that the cell pattern produced by the resulting mapping matches the original central cell's pattern. Mathematically, this is based on the theory of the generalized inverse. A feature of this learning method is that every input image can be recalled, which the chapter further demonstrates with application examples.
    Chapter 5 describes discrete-time CNNs applied to image processing: contour extraction, noise removal, and recognition of visual patterns. Many results show that processing time is drastically shorter than with conventional methods. It is also shown that a non-uniform discrete-time CNN can simultaneously recognize many different visual patterns within a single image.
    Chapter 6 summarizes the features of non-uniform discrete-time CNNs and open problems for future work.
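The outer-product (Hebbian) learning scheme of Chapter 4 can be illustrated with a minimal Hopfield-style sketch. The synchronous update rule, the zeroed diagonal, and the two example patterns are assumptions chosen for illustration; the thesis's discrete-time CNN restricts connections to a local neighborhood, which this fully connected sketch omits.

```python
import numpy as np

def hebb_weights(patterns):
    """Outer-product (Hebbian) weight matrix for bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-coupling
    return W / n

def recall(W, state, steps=10):
    """Synchronous discrete-time update, iterated until a fixed point."""
    for _ in range(steps):
        nxt = np.sign(W @ state)
        nxt[nxt == 0] = 1           # break ties toward +1
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state

# Two orthogonal stored patterns of length 8
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = hebb_weights(patterns)

noisy = patterns[0].copy()
noisy[0] = -noisy[0]                # corrupt one bit
print(recall(W, noisy))             # recovers the first stored pattern
```

The energy-minimization view in the thesis corresponds to the fact that each synchronous update here cannot increase the associated quadratic energy for symmetric `W`.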

    Memristive Computing

    Get PDF
    Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, a memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
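Memristive stateful logic can be illustrated abstractly: Boolean values live in memristor resistance states, and material implication (IMPLY) together with an unconditional reset (FALSE) forms a complete logic basis. The sketch below simulates only the truth-table behaviour; it is a hedged illustration of the general technique, not the specific circuits of the thesis.

```python
def imply(p, q):
    """Material implication; in hardware the result is written into q's state."""
    return (not p) or q

def false_op():
    """Unconditionally reset a memristor to logic 0 (high resistance)."""
    return False

def nand(p, q):
    """NAND from one FALSE and two IMPLY steps, the classic stateful-logic
    construction showing functional completeness."""
    s = false_op()          # work memristor initialized to 0
    s = imply(p, s)         # s = NOT p
    s = imply(q, s)         # s = NOT q OR NOT p = NAND(p, q)
    return s

for p in (False, True):
    for q in (False, True):
        print(p, q, nand(p, q))
```

Because NAND is universal, any Boolean function can in principle be evaluated inside the memory array this way, which is the basis for the "processing directly within memristive memory architectures" argument above.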

    A neural network model of adaptively timed reinforcement learning and hippocampal dynamics

    Full text link
    A neural model is described of how adaptively timed reinforcement learning occurs. The adaptive timing circuit is suggested to exist in the hippocampus, and to involve convergence of dentate granule cells on CA3 pyramidal cells, and NMDA receptors. This circuit forms part of a model neural system for the coordinated control of recognition learning, reinforcement learning, and motor learning, whose properties clarify how an animal can learn to acquire a delayed reward. Behavioral and neural data are summarized in support of each processing stage of the system. The relevant anatomical sites are in thalamus, neocortex, hippocampus, hypothalamus, amygdala, and cerebellum. Cerebellar influences on motor learning are distinguished from hippocampal influences on adaptive timing of reinforcement learning. The model simulates how damage to the hippocampal formation disrupts adaptive timing, eliminates attentional blocking, and causes symptoms of medial temporal amnesia. It suggests how normal acquisition of subcortical emotional conditioning can occur after cortical ablation, even though extinction of emotional conditioning is retarded by cortical ablation. The model simulates how increasing the duration of an unconditioned stimulus increases the amplitude of emotional conditioning, but does not change adaptive timing; and how an increase in the intensity of a conditioned stimulus "speeds up the clock", but an increase in the intensity of an unconditioned stimulus does not. Computer simulations of the model fit parametric conditioning data, including a Weber law property and an inverted U property. Both primary and secondary adaptively timed conditioning are simulated, as are data concerning conditioning using multiple interstimulus intervals (ISIs), gradually or abruptly changing ISIs, partial reinforcement, and multiple stimuli that lead to time-averaging of responses.
Neurobiologically testable predictions are made to facilitate further tests of the model. Air Force Office of Scientific Research (90-0175, 90-0128); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-87-16960); Office of Naval Research (N00014-91-J-4100)

    Energy efficient hybrid computing systems using spin devices

    Get PDF
    Emerging spin devices like magnetic tunnel junctions (MTJs), spin valves, and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque-based devices possess several interesting properties that can be exploited for ultra-low-power computation. The analog characteristics of spin currents facilitate non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ∼20 mV, thereby resulting in small computation power. Moreover, since nanomagnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low-energy solutions for complex computation blocks, both digital and analog. Such low-power hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications, based on a device-circuit co-simulation framework, predict more than ∼100x improvement in computation energy as compared to state-of-the-art CMOS designs, for optimal spin-device parameters.
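The majority evaluation used to model a magneto-metallic neuron can be sketched as a threshold function over bipolar inputs. This is an illustrative abstraction: the weights and the sign threshold stand in for spin-current summation and nanomagnet switching, and are assumptions rather than details taken from the work itself.

```python
def majority_neuron(inputs, weights=None):
    """Threshold neuron: fires (+1) if the weighted spin-current sum is positive.

    Inputs are bipolar (+1/-1), mimicking spin-polarized current directions;
    with unit weights this reduces to a plain majority gate.
    """
    if weights is None:
        weights = [1] * len(inputs)
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else -1

# 3-input majority gate (unit weights)
print(majority_neuron([1, 1, -1]))    # → 1
print(majority_neuron([-1, 1, -1]))   # → -1

# Weighted evaluation, as a neuron model
print(majority_neuron([1, -1], [3, 1]))  # → 1
```

Because the summation happens in the analog (current) domain in the proposed hardware, the neuron evaluates its majority in one step rather than through a cascade of Boolean gates.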

    Computers from plants we never made. Speculations

    Full text link
    We discuss possible designs and prototypes of computing systems that could be based on morphological development of roots, interaction of roots, and analog electrical computation with plants, and plant-derived electronic components. In morphological plant processors, data are represented by the initial configuration of roots and the configurations of sources of attractants and repellents; results of computation are represented by the topology of the roots' network. Computation is implemented by the roots following gradients of attractants and repellents, as well as interacting with each other. Problems solvable by plant roots, in principle, include shortest path, minimum spanning tree, Voronoi diagram, α-shapes, and convex subdivision of concave polygons. Electrical properties of plants can be modified by loading the plants with functional nanoparticles or coating parts of plants with conductive polymers. Thus, we are in a position to make living variable resistors, capacitors, operational amplifiers, multipliers, potentiometers, and fixed-function generators. The electrically modified plants can implement summation, integration with respect to time, inversion, multiplication, exponentiation, logarithm, and division. Mathematical and engineering problems to be solved can be represented in plant root networks of resistive or reaction elements. Developments in plant-based computing architectures will trigger the emergence of a unique community of biologists, electronics engineers, and computer scientists working together to produce living electronic devices which future green computers will be made of. Comment: The chapter will be published in "Inspired by Nature. Computing inspired by physics, chemistry and biology. Essays presented to Julian Miller on the occasion of his 60th birthday", Editors: Susan Stepney and Andrew Adamatzky (Springer, 2017)
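As one concrete instance of the graph problems listed above, a minimum spanning tree can be computed with Kruskal's algorithm; a root network would approximate such a structure physically by growth along gradients, whereas the sketch below solves it conventionally. The example graph and its edge weights are hypothetical.

```python
def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm over edges given as (weight, u, v) tuples.

    Uses a union-find structure with path compression to reject
    edges that would close a cycle.
    """
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    tree = []
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Hypothetical 4-node "root network" with weighted connections
edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3), (5, 2, 3)]
print(minimum_spanning_tree(4, edges))
```

A spanning tree of an n-node graph always has n - 1 edges, which gives a quick sanity check on the output.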