
    Continuous-time recurrent neural networks for quadratic programming: theory and engineering applications.

    Liu Shubao. Thesis (M.Phil.) -- Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 90-98). Abstracts in English and Chinese.

    Abstract --- p.i
    摘要 (Chinese abstract) --- p.iii
    Acknowledgement --- p.iv
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Time-Varying Quadratic Optimization --- p.1
    Chapter 1.2 --- Recurrent Neural Networks --- p.3
    Chapter 1.2.1 --- From Feedforward to Recurrent Networks --- p.3
    Chapter 1.2.2 --- Computational Power and Complexity --- p.6
    Chapter 1.2.3 --- Implementation Issues --- p.7
    Chapter 1.3 --- Thesis Organization --- p.9
    Chapter I --- Theory and Models --- p.11
    Chapter 2 --- Linearly Constrained QP --- p.13
    Chapter 2.1 --- Model Description --- p.14
    Chapter 2.2 --- Convergence Analysis --- p.17
    Chapter 3 --- Quadratically Constrained QP --- p.26
    Chapter 3.1 --- Problem Formulation --- p.26
    Chapter 3.2 --- Model Description --- p.27
    Chapter 3.2.1 --- Model 1 (Dual Model) --- p.28
    Chapter 3.2.2 --- Model 2 (Improved Dual Model) --- p.28
    Chapter II --- Engineering Applications --- p.29
    Chapter 4 --- KWTA Network Circuit Design --- p.31
    Chapter 4.1 --- Introduction --- p.31
    Chapter 4.2 --- Equivalent Reformulation --- p.32
    Chapter 4.3 --- KWTA Network Model --- p.36
    Chapter 4.4 --- Simulation Results --- p.40
    Chapter 4.5 --- Conclusions --- p.40
    Chapter 5 --- Dynamic Control of Manipulators --- p.43
    Chapter 5.1 --- Introduction --- p.43
    Chapter 5.2 --- Problem Formulation --- p.44
    Chapter 5.3 --- Simplified Dual Neural Network --- p.47
    Chapter 5.4 --- Simulation Results --- p.51
    Chapter 5.5 --- Concluding Remarks --- p.55
    Chapter 6 --- Robot Arm Obstacle Avoidance --- p.56
    Chapter 6.1 --- Introduction --- p.56
    Chapter 6.2 --- Obstacle Avoidance Scheme --- p.58
    Chapter 6.2.1 --- Equality Constrained Formulation --- p.58
    Chapter 6.2.2 --- Inequality Constrained Formulation --- p.60
    Chapter 6.3 --- Simplified Dual Neural Network Model --- p.64
    Chapter 6.3.1 --- Existing Approaches --- p.64
    Chapter 6.3.2 --- Model Derivation --- p.65
    Chapter 6.3.3 --- Convergence Analysis --- p.67
    Chapter 6.3.4 --- Model Comparison --- p.69
    Chapter 6.4 --- Simulation Results --- p.70
    Chapter 6.5 --- Concluding Remarks --- p.71
    Chapter 7 --- Multiuser Detection --- p.77
    Chapter 7.1 --- Introduction --- p.77
    Chapter 7.2 --- Problem Formulation --- p.78
    Chapter 7.3 --- Neural Network Architecture --- p.82
    Chapter 7.4 --- Simulation Results --- p.84
    Chapter 8 --- Conclusions and Future Work --- p.88
    Chapter 8.1 --- Concluding Remarks --- p.88
    Chapter 8.2 --- Future Prospects --- p.88
    Bibliography --- p.8
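    The models the thesis builds on are continuous-time networks whose state flows toward the KKT point of a quadratic program, so that solving the QP amounts to letting an analog circuit settle. The following is a minimal sketch, not the thesis's dual or improved dual models: it assumes the simplest equality-constrained case, min (1/2)x'Qx + c'x subject to Ax = b, and simulates plain primal-dual gradient-flow dynamics with forward Euler; the problem data are made up for illustration.

```python
# Primal-dual gradient flow for  min (1/2) x'Qx + c'x  s.t.  Ax = b.
# Illustrative sketch of the continuous-time approach, not the thesis's
# model: forward-Euler simulation of the ODE
#   dx/dt = -(Qx + c + A'y),   dy/dt = Ax - b,
# which settles at the KKT point when Q is positive definite.
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite (illustrative)
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])               # one equality constraint
b = np.array([1.0])

x = np.zeros(2)                          # primal state (neuron outputs)
y = np.zeros(1)                          # dual state (Lagrange multiplier)
dt = 1e-3                                # Euler step ~ circuit time constant

for _ in range(20_000):
    dx = -(Q @ x + c + A.T @ y)          # descend the Lagrangian in x
    dy = A @ x - b                       # ascend it in y
    x, y = x + dt * dx, y + dt * dy

print("x* =", x, "  residual Ax - b =", A @ x - b)   # x* ~ [0.2, 0.8]
```

    In hardware the Euler loop disappears: the same dynamics are realized by integrators, so convergence time is governed by circuit time constants rather than iteration counts, which is what makes such networks attractive for the time-varying problems introduced in Chapter 1.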

    Analog CMOS Integration of Clique-Based Neural Networks

    Artificial neural networks solve problems, such as multiple-signal analysis and classification, that classical processors cannot handle without a huge amount of resources. They are also increasingly integrated on-chip, either to extend a processor's computational abilities or to process data in embedded systems, where circuit area and energy consumption are critical parameters. However, the number of connections between neurons is very high, and the weighted connections and complex activation functions of most models make circuit integration difficult. These limitations hinder the integration of networks with hundreds of neurons or more. Clique-based neural networks reduce connection density while offering greater information storage capacity than standard models such as Hopfield networks. The model is therefore well suited to implementing a large number of neurons on chip with low-complexity, low-energy circuits. This thesis introduces a mixed-signal circuit implementing clique-based neural networks, together with several generic architectures that scale to any number of neurons, allowing networks of up to thousands of neurons that consume little energy. To validate the proposed implementation, we fabricated a 30-neuron clique-based neural network prototype in a 65-nm CMOS process with a 1-V supply. The circuit matches the decoding performance of the theoretical model and completes message recovery in 58 ns. The entire network occupies 16,470 µm² of silicon and consumes 145 µW, for a measured energy consumption of at most 423 fJ per neuron. These results show that the fabricated circuit is ten times more efficient than an equivalent digital circuit in both silicon area and latency.
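    The 58-ns message recovery measured on the chip follows the retrieval rule of clique-based (Gripon-Berrou) networks: neurons are grouped into clusters, a stored message is a clique joining one neuron per cluster, and retrieval alternates support-summing with a per-cluster winner-take-all. A small software sketch of that theoretical model follows; the cluster count, cluster size, stored messages, and cue are illustrative, and this digital loop stands in for what the analog circuit does in parallel.

```python
# Message retrieval in a clique-based (Gripon-Berrou) associative memory.
# Software sketch of the theoretical model only; all sizes and messages
# below are illustrative, not taken from the fabricated chip.
import numpy as np

C, L = 4, 8                                   # 4 clusters of 8 neurons each
W = np.zeros((C * L, C * L), dtype=np.uint8)  # binary synaptic matrix

def store(message):
    """Store one message (one neuron index per cluster) as a clique."""
    units = [c * L + m for c, m in enumerate(message)]
    for i in units:
        for j in units:
            if i != j:
                W[i, j] = 1

store([3, 1, 4, 1])
store([2, 7, 1, 0])

# Retrieve from a partial cue: cluster 2's value is unknown (None).
cue = [3, 1, None, 1]
active = np.zeros(C * L, dtype=np.uint8)
for c, m in enumerate(cue):
    if m is None:
        active[c * L:(c + 1) * L] = 1         # unknown: all candidates on
    else:
        active[c * L + m] = 1

for _ in range(3):                            # a few retrieval iterations
    score = W @ active                        # support from active neighbours
    nxt = np.zeros_like(active)
    for c in range(C):                        # winner-take-all per cluster
        s = score[c * L:(c + 1) * L]
        nxt[c * L:(c + 1) * L] = (s == s.max()).astype(np.uint8)
    active = nxt

print([int(s.argmax()) for s in active.reshape(C, L)])   # -> [3, 1, 4, 1]
```

    The connections are unweighted and the per-cluster activation rule is a plain maximum, which is what keeps the cells simple enough for the low-area, low-energy analog implementation described above.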

    Long-Term Memory for Cognitive Architectures: A Hardware Approach Using Resistive Devices

    Get PDF
    A cognitive agent capable of reliably performing complex tasks over a long time will acquire a large store of knowledge. To interact with changing circumstances, the agent will need to quickly search and retrieve knowledge relevant to its current context. Real-time knowledge search and cognitive processing like this is a challenge for conventional computers, which are not optimised for such tasks. This thesis describes a new content-addressable memory, based on resistive devices, that can perform massively parallel knowledge search in the memory array. The fundamental circuit block that supports this capability is a memory cell that closely couples comparison logic with non-volatile storage. By using resistive devices instead of transistors in both the comparison circuit and the storage elements, this cell improves area density by over an order of magnitude compared to state-of-the-art CMOS implementations. The resulting memory needs no power to maintain stored information, and is therefore well suited to cognitive agents with large long-term memories. The memory incorporates activation circuits, which bias the knowledge retrieval process according to past memory access patterns. This is achieved by approximating the widely used base-level activation function, using resistive devices to store, maintain and compare activation values. By distributing an instance of this circuit to every row in memory, the activation of all memory objects can be updated in parallel. A test on the word sense disambiguation task shows this circuit-based activation model incurs only a small loss in accuracy compared to exact base-level calculations. A variation of spreading activation can also be achieved in-memory: memory objects are encoded with high-dimensional vectors that create associations between correlated representations, and by storing these vectors in the new content-addressable memory, activation can be spread to related objects during search operations. The new memory is scalable, power- and area-efficient, and performs operations in parallel that are infeasible in real time for a sequential processor with a conventional memory hierarchy.
    Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 201
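    The activation circuits mentioned above approximate the standard ACT-R base-level equation, B_i = ln(sum_j (t - t_j)^(-d)), where the t_j are the past access times of object i and d is a decay parameter. The sketch below computes that equation exactly in software and uses it to bias a ternary-CAM-style search; the keys, access times, and helper names are illustrative, and the sequential loop stands in for what the hardware does in parallel across rows.

```python
# Activation-biased content-addressable search, in software. The chip
# approximates the base-level activation with resistive devices and
# scores every row in parallel; this sketch computes it exactly and
# sequentially. Keys and access times are illustrative.
import math

d, now = 0.5, 100.0                        # ACT-R decay; current time

# Stored objects: a bit-string key plus the times it was accessed.
memory = [
    ("1010", [10.0, 60.0, 95.0]),          # used often and recently
    ("1011", [5.0]),                       # used once, long ago
    ("0011", [50.0, 90.0]),
]

def base_level(times):
    """Exact ACT-R base-level activation: ln(sum of (now - t_j)^(-d))."""
    return math.log(sum((now - t) ** -d for t in times))

def search(query):
    """Ternary match ('x' = don't care), ties broken by activation."""
    matches = [
        (key, base_level(times))
        for key, times in memory
        if all(q in ("x", k) for q, k in zip(query, key))
    ]
    return max(matches, key=lambda m: m[1], default=None)

print(search("101x"))    # '1010' and '1011' both match; '1010' wins
```

    Keeping one activation value per row and updating them all at once is exactly the property the thesis obtains by distributing an activation circuit to every row of the array.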