
    Autonomously Reconfigurable Artificial Neural Network on a Chip

    The artificial neural network (ANN), an established bio-inspired computing paradigm, has proved very effective in a variety of real-world problems and particularly useful for various emerging biomedical applications using specialized ANN hardware. Unfortunately, these ANN-based systems are increasingly vulnerable to both transient and permanent faults, which can sometimes be catastrophic, due to unrelenting advances in CMOS technology scaling. Their considerable resource and energy consumption and their lack of dynamic adaptability make conventional fault-tolerant techniques unsuitable for future portable medical solutions. Inspired by the self-healing and self-recovery mechanisms of the human nervous system, this research seeks to address the reliability issues of ANN-based hardware by proposing an Autonomously Reconfigurable Artificial Neural Network (ARANN) architectural framework. Leveraging the homogeneous structural characteristics of neural networks, ARANN is capable of adapting its structure and operation, both algorithmically and microarchitecturally, to react to unexpected neuron failures. Specifically, we propose three key techniques, namely Distributed ANN, Decoupled Virtual-to-Physical Neuron Mapping, and Dual-Layer Synchronization, to achieve cost-effective structural adaptation and ensure accurate system recovery. Moreover, an ARANN-enabled self-optimizing workflow is presented to adaptively explore a "Pareto-optimal" neural network structure for a given application on the fly. Implemented and demonstrated on a Virtex-5 FPGA, ARANN can cover and adapt 93% of the chip area (neurons) with less than 1% chip overhead and O(n) reconfiguration latency. A detailed performance analysis has been completed based on various recovery scenarios.
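
    As a rough illustration of the Decoupled Virtual-to-Physical Neuron Mapping idea described above, the following Python sketch keeps a small remapping table that moves a virtual neuron onto a spare physical neuron when a fault is reported. The class name, data layout, and failure-handling policy are assumptions made for illustration; they are not taken from the ARANN implementation.

    class NeuronMapper:
        """Toy virtual-to-physical neuron mapping with spare neurons."""

        def __init__(self, num_virtual, num_physical):
            assert num_physical >= num_virtual
            # Initially, virtual neuron i runs on physical neuron i.
            self.v2p = list(range(num_virtual))
            # Remaining physical neurons are held back as spares.
            self.spares = list(range(num_virtual, num_physical))
            self.failed = set()

        def report_failure(self, physical_id):
            """Mark a physical neuron as failed and remap its virtual neuron."""
            self.failed.add(physical_id)
            for v, p in enumerate(self.v2p):
                if p == physical_id:
                    if not self.spares:
                        raise RuntimeError("no spare neurons left")
                    self.v2p[v] = self.spares.pop(0)  # O(n) scan, O(1) remap
                    return v, self.v2p[v]
            return None  # failed neuron was not in use

    mapper = NeuronMapper(num_virtual=8, num_physical=10)
    print(mapper.report_failure(3))  # virtual neuron 3 moves to spare neuron 8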

    Measuring fault resilience in neural networks

    In an extension of research into modeling a biological network of neurons, this work expands the basic characteristics of an Artificial Neural Network (ANN) computational model to measure the functional compensation exhibited by a biological neural network during damage or loss of structure. Whilst current research has highlighted the availability of various technologies and methods relevant to this area of study, none provides a sufficient description of how fault tolerance is measured or how damage is evaluated. Such metrics must be consistent, reproducible, and applicable to a plethora of neural network architectures and techniques. Furthermore, measuring the fault resilience of biologically inspired ANN architectures provides insight into how biological networks are able to exhibit this remarkable ability. This research brings together previous work into a comprehensive damage-resilient ANN framework and, more importantly, provides consistent measurement of fault tolerance within this framework. The proposed set of fault resilience metrics provides the means to evaluate the efficacy of networks that are subjected to damage. These metrics and their source algorithms rely on the modification of various statistical methods and observations currently used for network training optimization.
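
    One concrete way to make a fault-resilience measurement reproducible, in the spirit of the metrics discussed above, is to record how a network's error grows as an increasing fraction of neurons is disabled. The sketch below does this for a tiny hand-built feed-forward network using NumPy; the network, the ablation model (zeroing hidden activations), and the relative-error metric are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x, W1, W2, mask):
        # mask zeroes out "damaged" hidden neurons
        h = np.tanh(x @ W1) * mask
        return h @ W2

    def resilience_curve(x, y, W1, W2, fractions):
        """Relative error growth as a function of the fraction of ablated neurons."""
        baseline = np.mean((forward(x, W1, W2, np.ones(W1.shape[1])) - y) ** 2)
        curve = []
        for f in fractions:
            mask = (rng.random(W1.shape[1]) >= f).astype(float)
            err = np.mean((forward(x, W1, W2, mask) - y) ** 2)
            curve.append((f, err / baseline))
        return curve

    # Toy network and data, just to exercise the metric.
    x = rng.normal(size=(64, 4))
    W1 = rng.normal(size=(4, 16))
    W2 = rng.normal(size=(16, 1))
    y = forward(x, W1, W2, np.ones(16)) + 0.1 * rng.normal(size=(64, 1))
    for f, rel_err in resilience_curve(x, y, W1, W2, [0.0, 0.1, 0.25, 0.5]):
        print(f"damage fraction {f:.2f} -> relative error {rel_err:.2f}")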

    Soft Computing Techniques and Their Applications in Intelligent Industrial Control Systems: A Survey

    Soft computing involves a series of methods that are compatible with imprecise information and complex human cognition. In the face of industrial control problems, soft computing techniques show strong intelligence, robustness, and cost-effectiveness. This study provides a survey of soft computing techniques and their applications in industrial control systems. The methodologies of soft computing are mainly classified in terms of fuzzy logic, neural computing, and genetic algorithms. The challenges surrounding modern industrial control systems are summarized in terms of the difficulties in information acquisition, in modeling control rules, and in control system optimization, as well as the requirements for robustness. Then, this study reviews soft-computing-related achievements that have been developed to tackle these challenges. Afterwards, we present a retrospective of practical industrial control applications in fields including transportation, intelligent machines, the process industry, and energy engineering. Finally, future research directions are discussed from different perspectives. This study demonstrates that soft computing methods can endow industrial control processes with many merits and thus have great application potential. It is hoped that this survey can serve as a convenient reference for scholars and practitioners in the fields of industrial control and computer science.
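
    For readers unfamiliar with the fuzzy-logic family of methods surveyed above, the following minimal Python sketch shows the flavor of a rule-based fuzzy controller: membership functions fuzzify a measurement, two rules fire to different degrees, and a weighted average defuzzifies the result. The membership functions, rules, and heater scenario are hypothetical examples, not drawn from the survey.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_heater_power(temp_error):
        """Map temperature error (setpoint minus measurement) to heater power in [0, 1]."""
        cold = tri(temp_error, 0.0, 5.0, 10.0)   # "room is too cold"
        ok = tri(temp_error, -2.0, 0.0, 2.0)     # "temperature is about right"
        # Rule 1: IF cold THEN power high (1.0); Rule 2: IF ok THEN power low (0.1)
        num = cold * 1.0 + ok * 0.1
        den = cold + ok
        return num / den if den > 0 else 0.0     # weighted-average defuzzification

    print(fuzzy_heater_power(4.0))   # mostly "cold" -> high power
    print(fuzzy_heater_power(0.5))   # mostly "ok"   -> low power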

    Intrinsically Evolvable Artificial Neural Networks

    Dedicated hardware implementations of neural networks promise to provide faster, lower-power operation than software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. Training is typically done using offline software simulations, and the resulting network is then synthesized and targeted to the hardware. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNNs), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing (RC) systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for RC implementations of BbNNs has also been presented.
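
    To make the block-based idea concrete, the sketch below models a BbNN genome as a flat list of genes for a small grid of neuron blocks and applies a simple genetic-algorithm loop with mutation and truncation selection. The grid size, gene layout, and placeholder fitness function are assumptions for illustration and are far simpler than the FPGA platform described here.

    import random

    random.seed(1)
    ROWS, COLS = 2, 3       # grid of neuron blocks
    GENES_PER_BLOCK = 5     # e.g. 1 connectivity gene + 4 weight genes per block

    def random_genome():
        return [random.uniform(-1, 1) for _ in range(ROWS * COLS * GENES_PER_BLOCK)]

    def mutate(genome, rate=0.1):
        """Point-mutate structure and weight genes with a small probability."""
        return [g + random.gauss(0, 0.2) if random.random() < rate else g
                for g in genome]

    def fitness(genome):
        # Placeholder: on the real platform, fitness would be measured by
        # running the evolved BbNN on-chip against training data.
        return -sum(g * g for g in genome)

    population = [random_genome() for _ in range(8)]
    for generation in range(20):
        population.sort(key=fitness, reverse=True)
        parents = population[:4]          # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in range(4)]
    best = max(population, key=fitness)
    print("best fitness after 20 generations:", round(fitness(best), 4))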

    Hardware Learning in Analogue VLSI Neural Networks


    PROPOSED METHODOLOGY FOR OPTIMIZING THE TRAINING PARAMETERS OF A MULTILAYER FEED-FORWARD ARTIFICIAL NEURAL NETWORK USING A GENETIC ALGORITHM

    An artificial neural network (ANN), or simply "neural network" (NN), is a powerful mathematical or computational model that is inspired by the structure and/or functional characteristics of biological neural networks. Although ANNs have been developing rapidly for many years, there are still challenges in developing an ANN model that performs effectively for the problem at hand. ANNs can be categorized into three main types: single-layer networks, recurrent networks, and multilayer feed-forward networks. In a multilayer feed-forward ANN, performance is highly dependent on the selection of the architecture and training parameters; however, a systematic method for optimizing these parameters is still an active research area. This work focuses on multilayer feed-forward ANNs due to their generalization capability, structural simplicity, and ease of mathematical analysis. Even though several rules for the optimization of multilayer feed-forward ANN parameters are available in the literature, most networks are still calibrated via a trial-and-error procedure that depends mainly on the type of problem and on the past experience and intuition of the expert. To overcome these limitations, there have been attempts to use a genetic algorithm (GA) to optimize some of these parameters; however, most, if not all, of the existing approaches address only part of the architecture and training parameters. In contrast, the GAANN approach presented here covers most aspects of a multilayer feed-forward ANN in a more comprehensive way. This research focuses on the use of a binary-encoded GA to implement efficient search strategies for the optimal architecture and training parameters of a multilayer feed-forward ANN. In particular, the GA is used to determine the optimal number of hidden layers, number of neurons in each hidden layer, type of training algorithm, type of activation function for the hidden and output neurons, initial weights, learning rate, momentum term, and epoch size of a multilayer feed-forward ANN. In this thesis, the approach has been analyzed and algorithms that simulate the new approach have been mapped out.
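
    As a hedged sketch of the binary-encoded GA idea, the Python snippet below decodes a fixed-length bit string into a handful of multilayer feed-forward ANN hyperparameters and applies single-point crossover. The field widths, value ranges, and parameter names are assumptions chosen for illustration; they do not reproduce the encoding used in the thesis.

    import random

    random.seed(0)

    def bits_to_int(bits):
        return int("".join(map(str, bits)), 2)

    def decode(chromosome):
        """Decode a 12-bit chromosome into ANN architecture and training parameters."""
        return {
            "hidden_layers": 1 + bits_to_int(chromosome[0:2]),            # 1..4
            "neurons_per_layer": 2 + 2 * bits_to_int(chromosome[2:5]),    # 2..16
            "activation": ["sigmoid", "tanh"][chromosome[5]],
            "learning_rate": 0.001 * (1 + bits_to_int(chromosome[6:9])),  # 0.001..0.008
            "momentum": round(0.9 * bits_to_int(chromosome[9:12]) / 7, 3) # 0..0.9
        }

    def crossover(a, b):
        point = random.randrange(1, len(a))   # single-point crossover
        return a[:point] + b[point:]

    parent_a = [random.randint(0, 1) for _ in range(12)]
    parent_b = [random.randint(0, 1) for _ in range(12)]
    child = crossover(parent_a, parent_b)
    print(decode(child))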