
    Optimizing Dynamic Logic Realizations For Partial Reconfiguration Of Field Programmable Gate Arrays

    Many digital logic applications can take advantage of the reconfiguration capability of Field Programmable Gate Arrays (FPGAs) to dynamically patch design flaws, recover from faults, or time-multiplex between functions. Partial reconfiguration is the process by which a user modifies one or more modules residing on the FPGA device independently of the others. It reduces the granularity of reconfiguration to a set of columns or a rectangular region of the device, which in turn reduces configuration file sizes and, thus, configuration times. Compared to the single bitstream of a non-partially-reconfigured implementation, the smaller modules and their smaller bitstream files allow an FPGA to implement many more hardware configurations, and to switch between them faster, under similar storage requirements. To realize the benefits of partial reconfiguration in a wider range of applications, this thesis begins with a survey of FPGA fault-handling methods, which are compared using performance-based metrics. The performance of the Genetic Algorithm (GA) Offline Recovery method is analyzed, and candidate solutions provided by the GA are partitioned by age to improve its efficiency; the parameters of this aging technique are optimized to increase the occurrence rate of complete repairs. Continuing the discussion of partial reconfiguration, the thesis develops a case-study application with a single partial reconfiguration module to demonstrate the functionality and benefits of time multiplexing and to reveal the improved efficiencies of the latest large-capacity FPGA architectures. The number of active partial reconfiguration modules on a single FPGA device is then increased from one to eight to implement a dynamic video-processing architecture for Discrete Cosine Transform and Motion Estimation functions, demonstrating a 55-fold reduction in bitstream storage requirements and thus improving partial reconfiguration capability.
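
    As a rough, hypothetical illustration of the storage argument above (the file sizes and configuration count below are assumed, not taken from the thesis), here is a short sketch comparing storing one full bitstream per configuration against one static bitstream plus per-module partial bitstreams:

```python
# Illustrative storage comparison for partial reconfiguration (hypothetical sizes).
FULL_BITSTREAM_MB = 10.0      # full-device configuration file (assumed)
PARTIAL_BITSTREAM_MB = 0.5    # one reconfigurable-module configuration (assumed)
NUM_CONFIGURATIONS = 16       # distinct hardware functions to time-multiplex

# Without partial reconfiguration: one full bitstream per configuration.
full_only = NUM_CONFIGURATIONS * FULL_BITSTREAM_MB

# With partial reconfiguration: one static full bitstream plus one partial
# bitstream per configuration of the reconfigurable region.
partial = FULL_BITSTREAM_MB + NUM_CONFIGURATIONS * PARTIAL_BITSTREAM_MB

print(f"full-only storage: {full_only:.1f} MB")
print(f"partial-reconfig:  {partial:.1f} MB")
print(f"storage reduction: {full_only / partial:.1f}x")
```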

    Hierarchical Strategies for Fault-Tolerance in Reconfigurable Architectures

    This thesis presents a novel hierarchical fault-tolerance methodology for fault recovery in reconfigurable devices. As the semiconductor industry moves to producing ever smaller transistors, the number of faults occurring increases. At current technology nodes, unavoidable variations in production cause transistor devices to perform outside of ideal ranges. This variability manifests as faults at higher levels and has a knock-on effect on yields. In some ways, fault tolerance has never been more important. To better explore the area of variability, a novel reconfigurable architecture was designed: the Programmable Analogue and Digital Array (PAnDA). By allowing reconfiguration from the transistor level up to the logic block level, PAnDA enables design space exploration in hardware that was previously only available through simulation. The main advantage of this is that design modifications can be tested almost instantaneously, as opposed to running time-consuming transistor-level simulations. As a result of this design, each level of PAnDA’s configuration contains structural homogeneity, allowing multiple implementations of the same circuit on the same hardware. This creates opportunities for fault tolerance through reconfiguration, and experimental work is performed to discover how best to utilise these properties of PAnDA. The findings show that it is possible to optimise reconfiguration in the event of a fault, even if the nature and location of the fault are unknown.
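
    A minimal software sketch of the recovery idea described above, where a fault of unknown nature and location is handled by trying structurally equivalent alternative configurations; program_device and run_test_vectors are hypothetical stand-ins, not the actual PAnDA toolflow:

```python
# Sketch of fault recovery by reconfiguration: when a circuit starts failing,
# try structurally equivalent placements of the same netlist until one passes.
# `program_device` and `run_test_vectors` are hypothetical stand-ins for the
# real configuration and test interfaces.

def recover_by_reconfiguration(netlist, candidate_placements, program_device, run_test_vectors):
    """Return the first alternative placement that restores correct operation."""
    for placement in candidate_placements:
        program_device(netlist, placement)   # reconfigure without knowing the fault location
        if run_test_vectors():               # a functional test decides pass/fail
            return placement
    return None  # no placement avoided the fault; escalate to higher-level repair
```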

    Leveraging Signal Transfer Characteristics and Parasitics of Spintronic Circuits for Area and Energy-Optimized Hybrid Digital and Analog Arithmetic

    While Internet of Things (IoT) sensors offer numerous benefits in diverse applications, they are limited by stringent constraints in energy, processing area, and memory. These constraints are especially challenging within applications such as Compressive Sensing (CS) and Machine Learning (ML) via Deep Neural Networks (DNNs), which require dot product computations on large data sets. A solution to these challenges has been offered by the development of crossbar array architectures, enabled by recent advances in spintronic devices such as Magnetic Tunnel Junctions (MTJs). Crossbar arrays offer a compact, low-energy, in-memory approach to dot product computation in the analog domain by leveraging intrinsic signal-transfer characteristics of the embedded MTJ devices. The first phase of this dissertation research seeks to build on these benefits by optimizing resource allocation within spintronic crossbar arrays. A hardware approach to non-uniform CS is developed, which dynamically configures sampling rates by deriving the necessary control signals from circuit parasitics. Next, an alternate approach to non-uniform CS based on adaptive quantization is developed, which reduces circuit area in addition to energy consumption. Adaptive quantization is then applied to DNNs by developing an architecture that allows for layer-wise quantization based on relative robustness levels. The second phase of this research focuses on extending the analog computation paradigm by developing an operational-amplifier-based arithmetic unit for generalized scalar operations. This approach allows for a 95% area reduction in scalar multiplications compared to the state-of-the-art digital alternative. Moreover, analog computation of enhanced activation functions allows for significant improvement in DNN accuracy, which can be harnessed through triple modular redundancy to yield an 81.2% reduction in power at the cost of only 4% accuracy loss, compared to a larger network. Together these results substantiate promising approaches to several challenges facing the design of future IoT sensors within the targeted applications of CS and ML.
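
    A minimal sketch of the analog in-memory dot product a crossbar performs, assuming ideal devices: inputs are encoded as row voltages, weights as cross-point conductances, and each column current sums the products. The sizes and conductance ranges below are illustrative, not taken from the dissertation:

```python
import numpy as np

# Idealized crossbar dot product: input voltages drive the rows, each cross-point
# conductance G[i, j] contributes a current V[i] * G[i, j], and column currents
# sum the products (Kirchhoff's current law). Values are illustrative only.
rng = np.random.default_rng(0)
V = rng.uniform(0.0, 0.2, size=8)            # input vector encoded as row voltages [V]
G = rng.uniform(1e-6, 1e-5, size=(8, 4))     # weights encoded as device conductances [S]

I_columns = V @ G                            # analog MAC result read out as column currents [A]
print(I_columns)
```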

    Sound Based Positioning

    With growing interest in non-GPS positioning, navigation, and timing (PNT), sound-based positioning provides a precise way to locate both sound sources and microphones through audible signals of opportunity (SoOPs). Exploiting SoOPs allows for passive location estimation, but attributing each signal to a specific source location when multiple signals are emitting simultaneously proves problematic. Using an array of microphones, unique SoOPs are identified and located through steered-response beamforming. Sound source signals are then isolated through time-frequency masking to provide clear reference stations by which to estimate the location of a separate microphone through time difference of arrival measurements. Results are shown for real data.
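
    A small sketch of the basic measurement behind this approach, estimating the time difference of arrival between two microphones from the peak of their cross-correlation; the signals, sample rate, and delay below are synthetic assumptions, not the thesis data:

```python
import numpy as np

# Estimate the time difference of arrival (TDOA) of one source at two microphones
# by locating the peak of their cross-correlation. Signals here are synthetic.
fs = 16_000                                   # sample rate [Hz]
true_delay = 23                               # delay in samples between the two channels
rng = np.random.default_rng(1)
source = rng.standard_normal(4096)

mic1 = source + 0.05 * rng.standard_normal(source.size)
mic2 = np.roll(source, true_delay) + 0.05 * rng.standard_normal(source.size)

corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (source.size - 1)     # lag of the correlation peak
print(f"estimated TDOA: {lag / fs * 1e3:.2f} ms (true {true_delay / fs * 1e3:.2f} ms)")
```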

    CWI-evaluation - Progress Report 1993-1998


    Perspective on unconventional computing using magnetic skyrmions

    Learning and pattern recognition inevitably require memory of previous events, a feature that conventional CMOS hardware needs to simulate artificially. Dynamical systems naturally provide the memory, complexity, and nonlinearity needed for a plethora of different unconventional computing approaches. In this perspective article, we focus on the unconventional computing concept of reservoir computing and provide an overview of key physical reservoir computing works reported to date. We focus on the promising platform of magnetic structures and, in particular, skyrmions, which potentially allow for low-power applications. Moreover, we discuss skyrmion-based implementations of Brownian computing, which has recently been combined with reservoir computing. This computing paradigm leverages the thermal fluctuations present in many skyrmion systems. Finally, we provide an outlook on the most important challenges in this field. Comment: 19 pages and 3 figures.
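
    A generic, software-only sketch of the reservoir computing concept discussed here: a fixed random dynamical system supplies memory and nonlinearity, and only a linear readout is trained (ridge regression below). In the physical works surveyed, the software reservoir is replaced by a dynamical substrate such as a skyrmion fabric; all parameters below are illustrative:

```python
import numpy as np

# Generic reservoir computing sketch: fixed random recurrent "reservoir",
# trained linear readout. Physical implementations replace this software
# reservoir with a dynamical system such as a magnetic texture.
rng = np.random.default_rng(0)
n_res, n_steps = 100, 1000

u = rng.uniform(-1, 1, n_steps)                       # input sequence
target = np.roll(u, 3)                                # task: recall the input 3 steps ago

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])                  # reservoir provides memory + nonlinearity
    states[t] = x

# Ridge-regression readout trained on the collected states.
ridge = 1e-6 * np.eye(n_res)
W_out = np.linalg.solve(states.T @ states + ridge, states.T @ target)
print("train MSE:", np.mean((states @ W_out - target) ** 2))
```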

    Autonomously Reconfigurable Artificial Neural Network on a Chip

    Artificial neural network (ANN), an established bio-inspired computing paradigm, has proved very effective in a variety of real-world problems and particularly useful for various emerging biomedical applications using specialized ANN hardware. Unfortunately, these ANN-based systems are increasingly vulnerable to both transient and permanent faults due to unrelenting advances in CMOS technology scaling, which can sometimes be catastrophic. The considerable resource and energy consumption and the lack of dynamic adaptability make conventional fault-tolerant techniques unsuitable for future portable medical solutions. Inspired by the self-healing and self-recovery mechanisms of the human nervous system, this research seeks to address the reliability issues of ANN-based hardware by proposing an Autonomously Reconfigurable Artificial Neural Network (ARANN) architectural framework. Leveraging the homogeneous structural characteristics of neural networks, ARANN is capable of adapting its structures and operations, both algorithmically and microarchitecturally, to react to unexpected neuron failures. Specifically, we propose three key techniques (Distributed ANN, Decoupled Virtual-to-Physical Neuron Mapping, and Dual-Layer Synchronization) to achieve cost-effective structural adaptation and ensure accurate system recovery. Moreover, an ARANN-enabled self-optimizing workflow is presented to adaptively explore a "Pareto-optimal" neural network structure for a given application, on the fly. Implemented and demonstrated on a Virtex-5 FPGA, ARANN can cover and adapt 93% of the chip area (neurons) with less than 1% chip overhead and O(n) reconfiguration latency. A detailed performance analysis has been completed based on various recovery scenarios.
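
    A hedged, software-level sketch of the decoupled virtual-to-physical neuron mapping idea: the network is defined over virtual neuron IDs, and a remap table redirects a virtual neuron to a spare physical neuron when its assigned neuron fails. The class and method names are illustrative, not the ARANN hardware interfaces:

```python
# Sketch of decoupled virtual-to-physical neuron mapping: the network topology is
# expressed over virtual neuron IDs, and a remap table hides physical failures.
# This is an illustrative software model, not the ARANN hardware implementation.

class NeuronMapper:
    def __init__(self, num_virtual, num_physical):
        assert num_physical >= num_virtual, "need spare physical neurons for recovery"
        self.v2p = {v: v for v in range(num_virtual)}          # initial identity mapping
        self.spares = list(range(num_virtual, num_physical))   # unused physical neurons

    def physical(self, virtual_id):
        return self.v2p[virtual_id]

    def mark_failed(self, physical_id):
        """Remap any virtual neuron assigned to a failed physical neuron onto a spare."""
        for v, p in self.v2p.items():
            if p == physical_id:
                self.v2p[v] = self.spares.pop(0)               # structural adaptation
                return self.v2p[v]
        return None

mapper = NeuronMapper(num_virtual=8, num_physical=10)
mapper.mark_failed(3)            # physical neuron 3 fails
print(mapper.physical(3))        # virtual neuron 3 now runs on spare neuron 8
```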

    Low-carbon Energy Transition and Planning for Smart Grids

    With growing concerns about climate change and the energy crisis, the transition from fossil-based energy systems to a low-carbon society is an inevitable trend. Power system planning plays an essential role in the energy transition of the power sector, accommodating the integration of renewable energy and meeting the goal of decreasing carbon emissions while maintaining economical, secure, and reliable operation of power systems. In this thesis, a low-carbon energy transition framework and strategies are proposed for the future smart grid, which comprehensively consider the planning and operation of the electricity networks, emission control strategies with the carbon response of end-users, and carbon-related trading mechanisms. The planning approach considers the collaborative planning of different types of networks in the smart grid context. Transportation electrification is a key segment of the energy transition of power systems, so the planning of charging infrastructure for electric vehicles (EVs) and hydrogen refueling infrastructure for fuel cell electric vehicles is solved jointly with the electricity network expansion. Vulnerability assessment tools are proposed to evaluate the coupled networks against extreme events. Based on carbon footprint tracking technologies, emission control can be realized from both the generation side and the demand side. The operation of the low-carbon-oriented power system is modeled in a combined energy and carbon market, which fully considers the carbon emission rights trading and renewable energy certificate trading of the market participants. Several benchmark systems have been used to demonstrate the effectiveness of the proposed planning approach, and comparative studies with existing approaches in the literature, where applicable, have also been conducted. The simulation results verify the practical applicability of the method.
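
    As a toy illustration of how a carbon price enters dispatch decisions in a combined energy and carbon market (all generator data and prices below are made up, and this is not the planning model used in the thesis), here is a merit-order sketch where the effective marginal cost includes an emissions term:

```python
# Toy merit-order dispatch with a carbon price (all numbers hypothetical).
# Effective marginal cost = fuel cost + carbon price * emission rate.
generators = [
    # (name, capacity MW, fuel cost $/MWh, emissions tCO2/MWh)
    ("wind", 300, 5.0,  0.00),
    ("gas",  400, 45.0, 0.35),
    ("coal", 500, 30.0, 0.90),
]
carbon_price = 80.0   # $/tCO2 (assumed)
demand = 900.0        # MW

ranked = sorted(generators, key=lambda g: g[2] + carbon_price * g[3])
dispatch, remaining = {}, demand
for name, cap, fuel, co2 in ranked:
    dispatch[name] = min(cap, remaining)
    remaining -= dispatch[name]
print(dispatch)  # with a high carbon price, coal is dispatched last
```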

    Development of Thermoelastic Cooling Systems

    Thermoelastic cooling, or elastocaloric cooling, is a cutting-edge solid-state alternative to the state-of-the-art vapor compression cooling systems that dominate the world today. Being environmentally friendly with no global warming potential, thermoelastic cooling systems could reduce energy expenses and carbon emissions, since they are potentially more efficient than the vapor compression cycle (VCC). Nevertheless, because the technology is immature, assessing its realistic application potential requires comprehensive research in material fundamentals, cycle design, system simulation, and proof-of-concept prototype development and testing. Therefore, the objectives of this dissertation are to understand the performance potential and limitations of this emerging cooling technology, to build the theoretical framework, and to guide future research. Thermodynamic fundamentals of elastocaloric materials are introduced first. Cycle designs and a theoretical performance evaluation are presented together with a detailed physics-based dynamic model for a water-chiller application. The baseline system coefficient of performance (COP) is 1.7 under a 10 K system temperature lift. To enhance system performance, a novel thermo-wave heat recovery process is proposed based on an analogy with the highly efficient “counter-flow” heat exchanger. Both the theoretical limit of the “counter-flow” thermo-wave heat recovery and its practical limitations have been investigated experimentally. The results indicated that 100% efficiency is possible in theory, while 60% to 80% heat recovery efficiency can be achieved in practice. The world's first of-its-kind proof-of-concept prototype was designed based on the dynamic model, then fabricated and tested using the proposed heat recovery method. A maximum cooling capacity of 65 W and a maximum water-to-water temperature lift of 4.2 K were measured separately on the prototype. Using the validated model, performance improvement potentials without the prototype's manufacturing constraints are investigated and discussed. The COP is 3.4 with the plastic insulation tube and tube-in-tube design, which can be further improved to 4.1 by optimizing the system operating parameters. A quantitative comparison is made between thermoelastic cooling and other not-in-kind cooling technologies in order to provide insights on its limitations, potential applications, and directions for future research. Although at the current state of research the system efficiency is only 0.14 of the Carnot efficiency, which is less than the 0.21 of conventional VCC systems, the framework developed in this dissertation shows a technically viable alternative cooling technology that may change the future of our lives.
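
    A small worked check relating the reported COP values to the Carnot limit for a 10 K lift; the absolute chilled-water temperature (about 283 K) is an assumption, not stated in the abstract:

```python
# Second-law (Carnot) efficiency check for the reported COPs at a 10 K lift.
# Absolute temperatures are assumed (e.g., 283 K chilled water), not from the text.
T_cold = 283.0            # K, assumed cold-side temperature
lift = 10.0               # K, system temperature lift
T_hot = T_cold + lift

cop_carnot = T_cold / (T_hot - T_cold)          # ideal refrigeration COP, about 28.3
for label, cop in [("baseline", 1.7), ("improved design", 3.4), ("optimized", 4.1)]:
    print(f"{label}: COP {cop}, fraction of Carnot {cop / cop_carnot:.2f}")
```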

    Robust design of deep-submicron digital circuits

    With the increasing probability of faults in digital circuits, systems developed for critical environments such as nuclear power plants, aircraft, and space applications must be certified according to industrial standards. This thesis is the result of a CIFRE cooperation between Électricité de France (EDF) R&D and Télécom ParisTech. EDF is one of the largest energy producers in the world and operates numerous nuclear power plants. The instrumentation and control systems used in these plants are based on electronic devices, which must be certified according to industrial standards such as IEC 62566, IEC 60987, and IEC 61513 because of the criticality of the nuclear environment. In particular, the use of programmable devices such as FPGAs can be considered a challenge, since the functionality of the device is defined by the designer only after its physical design. The work presented in this dissertation covers the design of new methods to analyze, as well as to improve, the reliability of digital circuits. The design of circuits to operate in critical environments, such as those used in control-command systems at nuclear power plants, is becoming a great challenge with technology scaling. These circuits have to pass through a number of tests and analysis procedures in order to be qualified to operate. In the case of nuclear power plants, safety is considered a very high priority constraint, and circuits designed to operate in such a critical environment must comply with several technical standards such as IEC 62566, IEC 60987, and IEC 61513. In these standards, reliability is treated as a main consideration, and methods to analyze and improve circuit reliability are highly required. The present dissertation introduces methods to analyze and to improve the reliability of circuits in order to facilitate their qualification according to the aforementioned technical standards. Concerning reliability analysis, we first present a fault-injection-based tool used to assess the reliability of digital circuits. Next, we introduce a method to evaluate the reliability of circuits that takes into account the ability of a given application to tolerate errors. Concerning reliability improvement techniques, two different strategies to selectively harden a circuit are first proposed. Finally, a method to automatically partition a TMR design based on a given reliability requirement is introduced.
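
    A minimal sketch of the fault-injection idea used for reliability analysis: inject a random single bit-flip into an internal signal of a toy gate-level model, compare against the golden run, and estimate how often the fault propagates to the outputs. The circuit and fault model below are illustrative, not the tool described in the dissertation:

```python
import random

# Toy fault-injection campaign on a 1-bit full adder: flip one internal signal
# per run and check whether the primary outputs deviate from the golden result.
def full_adder(a, b, cin, flip=None):
    p = a ^ b                     # propagate
    g = a & b                     # generate
    if flip == "p":
        p ^= 1                    # injected single bit-flip (assumed fault model)
    elif flip == "g":
        g ^= 1
    s = p ^ cin
    cout = g | (p & cin)
    return s, cout

random.seed(0)
trials = 10_000
errors = 0
for _ in range(trials):
    a, b, cin = (random.getrandbits(1) for _ in range(3))
    golden = full_adder(a, b, cin)
    faulty = full_adder(a, b, cin, flip=random.choice(["p", "g"]))
    errors += golden != faulty    # count runs where the fault propagated to an output
print(f"estimated output-error probability under single internal faults: {errors / trials:.2f}")
```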