    Design Space Re-Engineering for Power Minimization in Modern Embedded Systems

    Power minimization is a critical challenge for modern embedded system design. Recently, due to the rapid increase in system complexity and power density, there is a growing need for power control techniques at various design levels. Meanwhile, due to technology scaling, leakage power has become a significant part of power dissipation in CMOS circuits, and new techniques are needed to reduce it. As a result, many new power minimization techniques have been proposed, such as voltage islands, gate sizing, multiple supply and threshold voltages, power gating, and input vector control. These design options further enlarge the design space and make it prohibitively expensive to explore for the most energy-efficient design solution. Consequently, heuristic and randomized algorithms are frequently used to explore the design space, seeking sub-optimal solutions that meet time-to-market requirements. These algorithms are based on the idea of truncating the design space and restricting the search to a subset of the original design space. While this approach can effectively reduce search runtime, it may also exclude high-quality design solutions and degrade design quality. When the solution to one problem is used as the base for another problem, such quality degradation accumulates. In modern electronic system design, when several such algorithms are used in series to solve problems at different design levels, the final solution can be far from the optimal one. In my Ph.D. work, I develop a re-engineering methodology to facilitate exploring the design space of power-efficient embedded system design. The direct goal is to enhance the performance of existing low-power techniques. The methodology is based on the idea that design quality can be improved by iteratively "re-shaping" the design space around the "bad" structure in the obtained design solutions, while search runtime can be reduced by guidance from previous exploration. The approach proceeds in three phases: (1) apply the existing techniques to obtain a sub-optimal solution; (2) analyze the solution and expand the design space accordingly; and (3) re-apply the technique to re-explore the enlarged design space. We apply this methodology at different levels of embedded system design to minimize power: (i) switching power reduction in sequential logic synthesis; (ii) gate-level static leakage current reduction; (iii) dual threshold voltage CMOS circuit design; and (iv) a system-level energy-efficient detection scheme for wireless sensor networks. Extensive experiments have been conducted, and the results show that this methodology can effectively enhance the power efficiency of existing embedded system design flows with very little overhead.
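
    The three-phase loop described in this abstract can be illustrated with a minimal Python sketch; the helpers search, analyze_bad_structure, and expand_space are hypothetical placeholders standing in for the thesis's concrete techniques, not its actual algorithms.

```python
# Minimal sketch of the iterative design-space re-engineering loop.
# All helper callables are hypothetical placeholders, not the thesis's
# actual algorithms; solutions are assumed to expose a .power attribute.

def re_engineer(design_space, search, analyze_bad_structure, expand_space,
                max_iterations=10):
    """Iteratively re-shape the design space around a sub-optimal solution."""
    best = search(design_space, seed=None)            # phase 1: existing heuristic
    for _ in range(max_iterations):
        bad_structure = analyze_bad_structure(best)   # phase 2: find "bad" structure
        if not bad_structure:
            break                                     # nothing left to re-shape
        design_space = expand_space(design_space, bad_structure)
        candidate = search(design_space, seed=best)   # phase 3: guided re-exploration
        if candidate.power < best.power:
            best = candidate                          # keep only improvements
    return best
```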

    Exploiting development to enhance the scalability of hardware evolution.

    Evolutionary algorithms do not scale well to the large, complex circuit design problems typical of the real world. Although techniques based on traditional design decomposition have been proposed to enhance hardware evolution's scalability, they often rely on traditional domain knowledge that may not be appropriate for evolutionary search and might limit evolution's opportunity to innovate. It has been proposed that reliance on such knowledge can be avoided by introducing a model of biological development to the evolutionary algorithm, but this approach has not yet achieved its potential. Prior demonstrations of how development can enhance scalability used toy problems that are not indicative of evolving hardware, and prior attempts to apply development to hardware evolution have rarely been successful and have never explored its effect on scalability in detail. This thesis demonstrates that development can enhance scalability in hardware evolution, primarily through a statistical comparison of hardware evolution's performance with and without development on circuit design problems of various sizes. This is reinforced by proposing and demonstrating three key mechanisms through which development enhances scalability: the creation of modules, the reuse of modules, and the discovery of design abstractions. The thesis includes several minor contributions: hardware is evolved using a common reconfigurable architecture at a lower level of abstraction than reported elsewhere, and it is shown that this can allow evolution to exploit the architecture more efficiently and perhaps search more effectively. The benefits of several features of developmental models are also explored through the biases they impose on the evolutionary search. The features explored include the type of environmental context development uses and the constraints on symmetry and information transmission it imposes, genetic operators that may improve the robustness of gene networks, and how development is mapped to hardware. Performance is also compared against contemporary developmental models.
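
    Two of the mechanisms named above, the creation and reuse of modules, can be pictured with a toy developmental mapping in which a compact genotype rule set is expanded recursively into a larger circuit; this is only an illustrative sketch, not the developmental model used in the thesis.

```python
# Toy illustration of module creation and reuse in a developmental mapping:
# a compact "genotype" rule set is expanded recursively into a larger circuit,
# so one module appears many times in the phenotype.  This is not the
# developmental model used in the thesis.

def develop(symbol, rules, depth):
    """Expand a symbol into a flat list of primitive gates."""
    if depth == 0 or symbol not in rules:
        return [symbol]                      # primitive gate, e.g. "NAND"
    phenotype = []
    for child in rules[symbol]:              # the module body is reused verbatim
        phenotype.extend(develop(child, rules, depth - 1))
    return phenotype

# Genotype: one module ("ADDER") built from a reused sub-module ("XOR_BLOCK").
rules = {
    "ADDER": ["XOR_BLOCK", "XOR_BLOCK", "NAND"],
    "XOR_BLOCK": ["NAND", "NAND", "NAND", "NAND"],
}
print(develop("ADDER", rules, depth=3))      # 9 primitive gates from 7 genotype symbols
```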

    An Adaptive Modular Redundancy Technique to Self-regulate Availability, Area, and Energy Consumption in Mission-critical Applications

    As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. A Sustainable Modular Adaptive Redundancy Technique (SMART), composed of a dual-layered organic system, is proposed, analyzed, implemented, and experimentally evaluated. SMART relies upon a variety of self-regulating properties to control availability, energy consumption, and area used in dynamically changing environments that require a high degree of adaptation. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called a Reconfigurable Adaptive Redundancy System (RARS). The software layer supervises the organic activities within the FPGA and extends the self-healing capabilities through application-independent, intrinsic, evolutionary repair techniques that leverage the benefits of dynamic Partial Reconfiguration (PR). A SMART prototype is evaluated using a Sobel edge detection application and is shown to provide sustainability under stressful transient and permanent fault injection procedures while still reducing energy consumption and area requirements. An Organic Genetic Algorithm (OGA) technique is shown to be capable of consistently repairing hard faults while maintaining correct edge detector outputs, by exploiting spatial redundancy in the reconfigurable hardware. A Monte Carlo-driven continuous-time Markov chain (CTMC) simulation is conducted to compare SMART's availability to that of industry-standard Triple Modular Redundancy (TMR) techniques. Based on nine use cases, parameterized with realistic fault and repair rates acquired from publicly available sources, the results indicate that availability is significantly enhanced by the adoption of fast repair techniques targeting aging-related hard faults. Under harsh environments, SMART is shown to improve system availability from 36.02% with lengthy repair techniques to 98.84% with fast ones; this value increases to five nines (99.9998%) under relatively more favorable conditions. Lastly, SMART is compared to twenty-eight standard TMR benchmarks generated by the widely accepted BL-TMR tools. The results show that in seven out of nine use cases SMART is the recommended technique, with power savings ranging from 22% to 29% and area savings ranging from 17% to 24%, while still maintaining the same level of availability.
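
    As a rough illustration of the kind of availability study described above, the following sketch runs a Monte Carlo simulation of a two-state (up/down) continuous-time Markov model with exponential failure and repair times; the rates are illustrative assumptions, not the fault and repair data used for SMART.

```python
import random

# Monte Carlo sketch of a two-state (up/down) continuous-time Markov model:
# exponentially distributed times to failure and to repair.  The rates are
# illustrative assumptions, not the figures used in the SMART study.

def simulate_availability(fail_rate, repair_rate, horizon, runs=10_000):
    """Estimate availability as the fraction of simulated time spent up."""
    up_time = 0.0
    for _ in range(runs):
        t, up = 0.0, True
        while t < horizon:
            rate = fail_rate if up else repair_rate
            dwell = min(random.expovariate(rate), horizon - t)  # exponential sojourn
            if up:
                up_time += dwell
            t += dwell
            up = not up
    return up_time / (runs * horizon)

# Slow vs. fast repair under the same failure rate (events per hour).
print(simulate_availability(fail_rate=0.01, repair_rate=0.05, horizon=1_000))
print(simulate_availability(fail_rate=0.01, repair_rate=5.0, horizon=1_000))
```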

    VLSI Design

    This book presents some recent advances in the design of nanometer VLSI chips. The selected chapters present open problems and challenges, with topics ranging from design tools, new post-silicon devices, GPU-based parallel computing, and emerging 3D integration to antenna design. The book consists of two parts, with chapters such as: VLSI design for multi-sensor smart systems on a chip, Three-dimensional integrated circuits design for thousand-core processors, Parallel symbolic analysis of large analog circuits on GPU platforms, Algorithms for CAD tools VLSI design, and A multilevel memetic algorithm for large SAT-encoded problems.

    The hardware implementation of an artificial neural network using stochastic pulse rate encoding principles

    In this thesis the development of a hardware artificial neuron device and artificial neural network using stochastic pulse rate encoding principles is considered. After a review of neural network architectures and algorithmic approaches suitable for hardware implementation, a critical review of hardware techniques that have been considered in analogue and digital systems is presented. New results are presented demonstrating the potential of two learning schemes which adapt by the use of a single reinforcement signal. The techniques for computation using stochastic pulse rate encoding are presented and extended with novel circuits relevant to the hardware implementation of an artificial neural network. The generation of random numbers is the key to encoding data into the stochastic pulse rate domain. The formation of random numbers and multiple random bit sequences from a single PRBS generator has been investigated. Two techniques, Simulated Annealing and Genetic Algorithms, have been applied successfully to the problem of optimising the configuration of a PRBS random number generator for the formation of multiple random bit sequences and hence random numbers. A complete hardware design for an artificial neuron using stochastic pulse rate encoded signals has been described, designed, simulated, fabricated, and tested before configuration of the device into a network to perform simple test problems. The implementation has shown that the processing elements of the artificial neuron are small and simple, but that there can be a significant overhead for encoding information into the stochastic pulse rate domain. The stochastic artificial neuron has the capability of on-line weight adaptation. The implementation of reinforcement schemes using the stochastic neuron as a basic element is discussed.
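
    The encoding overhead discussed above comes from representing each value as a pulse stream. A generic unipolar stochastic-computing illustration (not the thesis's PRBS-based circuits) is sketched below: a value in [0, 1] becomes the probability of a 1 in the stream, and multiplication reduces to a bitwise AND.

```python
import random

# Generic stochastic-computing illustration: a value in [0, 1] is encoded as
# the probability of a '1' in a pulse stream, and multiplying two such values
# is a bitwise AND of their streams.  Not the circuits developed in the thesis.

def encode(value, length, rng):
    """Unipolar stochastic encoding of a value in [0, 1]."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(42)
a = encode(0.6, 4096, rng)
b = encode(0.5, 4096, rng)
product = [x & y for x, y in zip(a, b)]      # an AND gate multiplies probabilities
print(decode(product))                       # ~0.30, i.e. 0.6 * 0.5
```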

    Modeling and Optimization of Chemical Mechanical Planarization (CMP) Using Neural Networks, ANFIS and Evolutionary Algorithms

    Higher-density nano-devices and more metallization layers in microelectronic chips are unceasing goals of the present semiconductor industry. However, topological imperfections (higher non-uniformity) on wafer surfaces and lower material removal rates (MRR) seriously hamper these pursuits. Since the 1990s, industry has been using chemical mechanical planarization/polishing (CMP) to overcome these obstacles when fabricating integrated circuits (IC) with interconnect geometries below 0.18 µm. The much-needed understanding of this relatively new technique is still derived largely from the ancient lapping process. Modeling and simulation are critical to transform CMP from an engineering 'art' into an engineering 'science'. Many efforts in CMP modeling have been made in the last decade, but the available analytical MRR and surface uniformity models cannot precisely describe this highly complicated process, which involves simultaneous chemical reactions (and etching) and mechanical abrasion. In this investigation, neural network (NN), adaptive network-based fuzzy inference system (ANFIS), and evolutionary algorithm (EA) techniques were applied to successfully overcome the aforementioned modeling and simulation problems. In addition, fine-tuning techniques for re-modifying ANFIS models for the sparse-data case are developed. Furthermore, multi-objective evolutionary algorithms (MOEA) are applied for the first time to search for optimal CMP input settings that trade off higher MRR against lower non-uniformity, using the previously constructed models. The results also show that MOEA-based optimization can provide accurate guidance in the search for optimal CMP input settings that produce less non-uniform wafer surfaces at higher MRR.
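
    The MRR versus non-uniformity trade-off described above can be pictured with a minimal Pareto-filtering random search over a toy surrogate model; the surrogate formulas and parameter ranges below are illustrative assumptions rather than the fitted NN/ANFIS models, and this simple filter stands in for, but is not, the MOEA used in the work.

```python
import random

# Minimal multi-objective sketch: randomly sample CMP input settings, score
# them with a toy surrogate (maximize MRR, minimize non-uniformity), and keep
# the non-dominated (Pareto) set.  Surrogate formulas and ranges are
# illustrative assumptions, not the NN/ANFIS models fitted in this work.

def surrogate(pressure, velocity):
    mrr = pressure * velocity                            # Preston-like removal rate
    non_uniformity = 0.02 * pressure**2 + 0.01 * velocity
    return mrr, non_uniformity

def dominates(a, b):
    """a dominates b if its MRR is no lower and its non-uniformity no higher."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

rng = random.Random(0)
candidates = [(rng.uniform(1, 7), rng.uniform(20, 120)) for _ in range(500)]
scored = [(surrogate(p, v), (p, v)) for p, v in candidates]
pareto = [s for s in scored if not any(dominates(o[0], s[0]) for o in scored)]
for (mrr, nu), (p, v) in sorted(pareto):
    print(f"pressure={p:.2f}, velocity={v:.1f}, MRR={mrr:.1f}, non-uniformity={nu:.3f}")
```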

    A Practical Investigation into Achieving Bio-Plausibility in Evo-Devo Neural Microcircuits Feasible in an FPGA

    Many researchers have conjectured, argued, or in some cases demonstrated, that bio-plausibility can bring about emergent properties such as adaptability, scalability, fault-tolerance, self-repair, reliability, and autonomy in bio-inspired intelligent systems. Evolutionary-developmental (evo-devo) spiking neural networks are a highly bio-plausible mixture of such bio-inspired intelligent systems that has been proposed and studied by a few researchers. However, the general trend is that the complexity, and thus the computational cost, grows with the bio-plausibility of the system. FPGAs (Field-Programmable Gate Arrays) have been used and have proved to be flexible and cost-efficient hardware platforms for the research and development of such evo-devo systems. However, mapping a bio-plausible evo-devo spiking neural network to an FPGA is a daunting task full of constraints and trade-offs that makes it, if not infeasible, very challenging. This thesis explores the challenges, trade-offs, constraints, practical issues, and some possible approaches in achieving bio-plausibility when creating evolutionary-developmental spiking neural microcircuits in an FPGA, through a practical investigation along with a series of case studies. In this study, system performance, cost, reliability, scalability, availability, and design and testing time and complexity are defined as measures of a system's feasibility, while structural accuracy and consistency with current knowledge in biology are defined as measures of its bio-plausibility. The investigation of the challenges starts with hardware platform selection; the neuron, cortex, and evo-devo models, and the integration of these models into a complete bio-inspired intelligent system, are then examined one by one. For further practical investigation, a new PLAQIF Digital Neuron model, a novel Cortex model, and a new multicellular LGRN evo-devo model are designed, implemented, and tested as case studies. Results and their implications for researchers, designers of such systems, and FPGA manufacturers are discussed and summarized in the form of general trends, trade-offs, suggestions, and recommendations.
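
    As a generic illustration of the family of spiking-neuron updates that digital neuron designs such as the one above target, the following sketch implements a plain quadratic integrate-and-fire step with illustrative parameters; it is not the PLAQIF model itself.

```python
# Generic quadratic integrate-and-fire (QIF) neuron update, in the broad family
# of spiking-neuron models that digital neuron designs target.  Parameters are
# illustrative; this is not the PLAQIF model described in the thesis.

def qif_step(v, i_in, dt=0.1, v_rest=-65.0, v_crit=-50.0,
             v_peak=30.0, v_reset=-65.0, a=0.04):
    """Advance the membrane potential by one Euler step; return (v, spiked)."""
    dv = a * (v - v_rest) * (v - v_crit) + i_in
    v = v + dt * dv
    if v >= v_peak:                          # threshold crossing emits a spike
        return v_reset, True
    return v, False

v, spikes = -65.0, 0
for _ in range(1000):
    v, spiked = qif_step(v, i_in=12.0)       # constant input current (arbitrary units)
    spikes += spiked
print(f"spikes over {1000 * 0.1:.0f} simulated time units: {spikes}")
```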