Fault-tolerant evolvable hardware using field-programmable transistor arrays
The paper presents an evolutionary approach to the design of fault-tolerant VLSI (very large scale integrated) circuits using EHW (evolvable hardware). The EHW research area comprises a set of applications in which GA (genetic algorithms) are used for the automatic synthesis and adaptation of electronic circuits. EHW is particularly suitable for applications that must cope with changes in task requirements, changes in the environment, or faults, through its ability to reconfigure the hardware structure dynamically and autonomously. This capacity for adaptation is achieved through GA search techniques. In our experiments, a fine-grained CMOS (complementary metal-oxide semiconductor) FPTA (field-programmable transistor array) architecture is used to synthesize electronic circuits. The FPTA is a reconfigurable architecture, programmable at the transistor level and specifically designed for EHW applications. The paper demonstrates the power of EA (evolutionary algorithms) to design analog and digital fault-tolerant circuits. It compares two methods to achieve fault-tolerant design, one based on the fitness definition and the other based on the population. The fitness approach explicitly defines the faults that the component may encounter during its lifetime and evaluates the average behavior of each individual across them. The population approach, on the other hand, uses the implicit information of the population statistics accumulated by the GA over many generations. The paper presents experimental results obtained using both approaches for the synthesis of a fault-tolerant digital circuit (XNOR) and a fault-tolerant analog circuit (multiplier).
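As a rough illustration of the fitness-definition approach mentioned above, the sketch below scores each individual by its average behaviour over the fault-free case and a set of explicitly injected faults. The `simulate_circuit` function and the fault scenarios are placeholder assumptions standing in for the actual FPTA simulation; none of these names come from the paper.

```python
# Minimal sketch (not the paper's implementation) of fitness-based fault tolerance:
# an individual's fitness is its average behaviour across enumerated fault scenarios.
import random

# Hypothetical fault list: None means the fault-free circuit.
FAULT_SCENARIOS = [None, "stuck_open_sw3", "stuck_closed_sw7", "short_t12"]

def simulate_circuit(genome, fault=None):
    """Placeholder for the FPTA simulation: return a 0..1 behavioural score
    for the configured array, optionally with one injected fault."""
    base = sum(genome) / len(genome)        # stand-in for a real simulation
    return base * (0.5 if fault else 1.0)   # injected faults degrade behaviour

def fault_tolerant_fitness(genome):
    # Average behaviour across the fault-free case and every injected fault.
    return sum(simulate_circuit(genome, f) for f in FAULT_SCENARIOS) / len(FAULT_SCENARIOS)

population = [[random.random() for _ in range(24)] for _ in range(50)]
best = max(population, key=fault_tolerant_fitness)
print(round(fault_tolerant_fitness(best), 3))
```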
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in the area continues to grow because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm is known for them. In practice, this means that one cannot guarantee finding an exact solution in reasonable time and has to settle for an approximate solution, ideally with known performance guarantees. The goal of approximate methods is to find "good" solutions (low error relative to the true optimum) "quickly" (in reasonable run-times) and with "high" probability. In the last 20 years, a new class of algorithms, commonly called metaheuristics, has emerged; these methods combine heuristics within high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
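To make intensification and diversification concrete, here is a minimal sketch of one metaheuristic, simulated annealing, applied to a toy knapsack-style objective. The problem instance and parameters are invented for illustration and are not taken from the report: accepting improving moves is the intensifying force, while occasionally accepting worse moves at high temperature diversifies the search.

```python
# Illustrative sketch only: simulated annealing on a random 0/1 knapsack instance.
import math, random

random.seed(0)
values = [random.randint(1, 20) for _ in range(30)]
weights = [random.randint(1, 10) for _ in range(30)]
CAPACITY = 80

def objective(x):
    # Total value if the selection fits in the knapsack, otherwise 0 (infeasible).
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    return sum(vi for vi, xi in zip(values, x) if xi) if w <= CAPACITY else 0

x = [0] * 30
best, temp = objective(x), 10.0
for step in range(5000):
    y = x[:]
    y[random.randrange(30)] ^= 1           # flip one item in or out of the knapsack
    delta = objective(y) - objective(x)
    if delta >= 0 or random.random() < math.exp(delta / temp):
        x = y                               # sometimes accept a worse move (diversification)
    best = max(best, objective(x))
    temp *= 0.999                           # cooling gradually favours intensification
print(best)
```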
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
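A toy sketch of the EPANN idea may help: a small unit whose weights change during its "lifetime" through a parameterised Hebbian rule, while an outer evolutionary loop searches over the rule's coefficients. The task, the coefficients (A, B, C, D), the learning rate, and all numbers are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: evolve the coefficients of a generalised Hebbian
# plasticity rule; each genome is scored by how well the plastic unit
# adapts over its lifetime on a trivial copying task.
import random

random.seed(1)

def lifetime_fitness(rule):
    A, B, C, D, eta = rule
    w = [0.0, 0.0]                          # 2-input, 1-output plastic unit
    score = 0.0
    for _ in range(200):                    # lifetime experiences
        x = [random.choice([0.0, 1.0]) for _ in range(2)]
        y = max(0.0, min(1.0, w[0] * x[0] + w[1] * x[1]))
        target = x[0]                       # toy task: learn to copy input 0
        score -= (y - target) ** 2
        for i in range(2):                  # generalised Hebbian weight update
            w[i] += eta * (A * x[i] * y + B * x[i] + C * y + D)
    return score

pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(40)]
for gen in range(30):
    pop.sort(key=lifetime_fitness, reverse=True)
    parents = pop[:10]
    pop = [[p + random.gauss(0, 0.1) for p in random.choice(parents)] for _ in range(40)]
print(round(max(lifetime_fitness(ind) for ind in pop), 2))
```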
FPGA dynamic and partial reconfiguration: a survey of architectures, methods, and applications
Dynamic and partial reconfiguration are key differentiating capabilities of field programmable gate arrays (FPGAs). While they have been studied extensively in academic literature, they find limited use in deployed systems. We review FPGA reconfiguration, looking at architectures built for the purpose and at the properties of modern commercial architectures. We then investigate design flows and identify the key challenges in making reconfigurable FPGA systems easier to design. Finally, we look at applications where reconfiguration has found use, and propose new areas where this capability places FPGAs in a unique position for adoption.
Stochastic optimisation of lookup table networks, for realtime inference on embedded systems
Neural networks running on FPGAs offer great potential for creative applications in realtime audio and sensor processing, but training models to run on these platforms can be challenging. Research in TinyML offers methods for transforming trained neural networks to run on embedded systems. Further gains might be made by training networks directly constructed from lookup tables (LUTs), the basic element of FPGA hardware. A novel method, Stochastic Logic Optimisation, is presented for supervised learning with feed-forward networks of LUTs. The method is found to significantly improve on the use of both a genetic algorithm and memorisation in a beat prediction task.
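For readers unfamiliar with LUT networks, the following hypothetical sketch builds a tiny feed-forward network of lookup tables and tunes its truth-table bits with a plain stochastic hill climb on a parity task. It illustrates the structure only; it is not the paper's Stochastic Logic Optimisation method, and all names and parameters are assumptions.

```python
# Feed-forward network of LUTs: each LUT maps a fixed subset of the previous
# layer's bits to one output bit via a mutable truth table.
import random

random.seed(2)

def make_lut(in_idx):
    return (in_idx, [random.randint(0, 1) for _ in range(2 ** len(in_idx))])

def lut_out(lut, bits):
    in_idx, table = lut
    addr = 0
    for i in in_idx:
        addr = (addr << 1) | bits[i]
    return table[addr]

def forward(layers, bits):
    for layer in layers:
        bits = [lut_out(lut, bits) for lut in layer]
    return bits[0]

# 8-bit input -> two 4-input LUTs -> one 2-input LUT.
layers = [[make_lut([0, 1, 2, 3]), make_lut([4, 5, 6, 7])],
          [make_lut([0, 1])]]

# Toy supervised task: output 1 iff the input has an odd number of ones (parity).
inputs = [[random.randint(0, 1) for _ in range(8)] for _ in range(64)]
data = [(x, sum(x) % 2) for x in inputs]

def accuracy():
    return sum(forward(layers, x) == y for x, y in data) / len(data)

# Stochastic search: flip one random truth-table bit, keep the flip if it does not hurt.
best = accuracy()
for _ in range(3000):
    _, table = random.choice(random.choice(layers))
    j = random.randrange(len(table))
    table[j] ^= 1
    acc = accuracy()
    if acc >= best:
        best = acc
    else:
        table[j] ^= 1                       # revert a harmful flip
print(best)
```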
Intrinsically Evolvable Artificial Neural Networks
Dedicated hardware implementations of neural networks promise to provide faster, lower power operation when compared to software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. The training is typically done using offline software simulations, and the obtained network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNN), the type of artificial neural networks implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve interconnections and internal parameters of functional modules in reconfigurable computing systems such as FPGAs. Functional modules can be any hardware modules such as multipliers, adders, and trigonometric functions. In the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments, and can be adapted and retrained online. The online training capability has been demonstrated using a case study. A performance characterization model for reconfigurable computing (RC) implementations of BbNNs has also been presented.
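A software-only sketch of the functional-level evolution described here: a genetic algorithm that evolves both structural bits (which connections exist) and synaptic weights of a small set of neuron blocks, evaluated on XOR. The encoding, task, and parameters are illustrative assumptions and do not reflect the FPGA design in the paper.

```python
# Hypothetical sketch: GA simultaneously optimises connection structure and weights.
import math, random

random.seed(3)
N_HIDDEN = 4
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def new_genome():
    n = 3 * N_HIDDEN + N_HIDDEN + 1         # hidden-block inputs + output-block inputs
    structure = [random.randint(0, 1) for _ in range(n)]
    weights = [random.uniform(-2, 2) for _ in range(n)]
    return structure, weights

def forward(genome, x):
    structure, weights = genome
    inputs = (x[0], x[1], 1.0)              # constant 1.0 acts as a bias line
    hidden = []
    for h in range(N_HIDDEN):
        s = sum(structure[3 * h + i] * weights[3 * h + i] * inputs[i] for i in range(3))
        hidden.append(sigmoid(s))
    base = 3 * N_HIDDEN
    pre = sum(structure[base + h] * weights[base + h] * hidden[h] for h in range(N_HIDDEN))
    pre += structure[base + N_HIDDEN] * weights[base + N_HIDDEN]   # output bias
    return sigmoid(pre)

def fitness(genome):
    return -sum((forward(genome, x) - y) ** 2 for x, y in XOR)

def mutate(genome):
    structure, weights = [g[:] for g in genome]
    i = random.randrange(len(structure))
    if random.random() < 0.3:
        structure[i] ^= 1                   # structural mutation: toggle a connection
    else:
        weights[i] += random.gauss(0, 0.5)  # parametric mutation: perturb a weight
    return structure, weights

pop = [new_genome() for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
print(round(fitness(max(pop, key=fitness)), 3))
```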