135 research outputs found

    Testability of Switching Lattices in the Cellular Fault Model

    A switching lattice is a two-dimensional array of cells, each implementing a four-terminal switch. Each switch is linked to its four neighbors: it is connected to them when it is ON and disconnected from them when it is OFF. Recently, with the advent of a variety of emerging nanoscale technologies based on regular arrays of switches, lattices of multi-terminal switches, originally introduced by Akers in 1972, have attracted renewed interest. In this paper, the testability of switching lattices under the Cellular Fault Model (CFM) is defined and analyzed. Moreover, some techniques for improving the testability of lattices are discussed and experimentally evaluated.
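
    To make the lattice model concrete, below is a minimal sketch, assuming the usual convention (not stated above) that the lattice evaluates to 1 exactly when a path of ON switches connects the top edge to the bottom edge; the function name and the example grid are illustrative only.

        from collections import deque

        def lattice_output(on_grid):
            # True iff the ON switches form a path from the top edge to the
            # bottom edge, moving only between the four neighbours of a cell.
            rows, cols = len(on_grid), len(on_grid[0])
            frontier = deque((0, c) for c in range(cols) if on_grid[0][c])
            seen = set(frontier)
            while frontier:
                r, c = frontier.popleft()
                if r == rows - 1:
                    return True                       # bottom edge reached
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        if on_grid[nr][nc] and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            frontier.append((nr, nc))
            return False

        # A 3x3 lattice whose middle column is ON conducts, so the output is 1.
        print(lattice_output([[False, True, False],
                              [False, True, False],
                              [False, True, False]]))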

    Integrated Synthesis Methodology for Crossbar Arrays

    Nano-crossbar arrays have emerged as area- and power-efficient structures aimed at high-performance computing beyond the limits of current CMOS. Due to the stochastic nature of nano-fabrication, nano-arrays differ from conventional technologies at both the structural and the physical device level. These factors introduce random characteristics that must be carefully considered by the synthesis process. For instance, a competent synthesis methodology must consider the basic technology preference for the switching elements, the defect or fault rates of the given nano switching array, and the variation values, as well as their effects on performance metrics including power, delay, and area. The synthesis methodology presented in this study comprehensively covers all of these factors and provides optimization algorithms for each step of the process. This work is part of a project that has received funding from the European Union’s H2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 691178, and was supported by the TUBITAK-Career project #113E76.

    CubeSat Radiation Hardness Assurance Beyond Total Dose: Evaluating Single Event Effects

    Radiation poses known and serious risks to smallsat survivability and mission duration, with effects falling into two categories: long-term total ionizing dose (TID) and instantaneous single event effects (SEE). Although literature exists on the topic of addressing TID in smallsats, few resources exist for addressing SEEs. Many varieties of SEEs exist, such as bit upsets and latch-ups, which can occur in any electronic component containing active semiconductors (such as transistors). SEE consequences range from benign to destructive, so mission reliability can be enhanced by implementing fault protection strategies based on predicted SEE rates. Unfortunately, SEE rates are most reliably estimated through experimental testing that is often too costly for smallsat-scale missions. Prior test data published by larger programs exist, but may be sparse or incompatible with the environment of a particular mission. Despite these limitations, a process may be followed to gain insights and make informed design decisions for smallsats in the absence of hardware testing capabilities or similar test data. This process is: (1) define the radiation environment; (2) identify the most critical and/or susceptible components on a spacecraft; (3) perform a search for compatible prior test data and/or component class data; (4) evaluate mission-specific SEE rates from available data; (5) study the rates alongside the mission requirements to identify high-risk areas for potential mitigation. The methodology developed in this work is based on the multi-institutional, National Science Foundation (NSF) Space Weather Atmospheric Reconfigurable Multiscale Experiment (SWARM-EX) mission. The steps taken during SWARM-EX’s radiation analysis, alongside the detailed methodology, serve as a case study for how these techniques can be applied to increase the reliability of a university-scale smallsat mission.
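
    As a rough illustration of step (4), the sketch below folds a hypothetical device's limiting cross-section with an integral particle flux to obtain a first-order event-rate estimate; the function name and numbers are assumptions for illustration only, and a full analysis would typically integrate a Weibull cross-section curve over the mission's LET spectrum instead.

        def see_rate_per_device(limiting_cross_section_cm2, integral_flux_per_cm2_day):
            # First-order estimate: expected events/day ~ saturated cross-section
            # times the integral flux of particles above the LET threshold.
            return limiting_cross_section_cm2 * integral_flux_per_cm2_day

        # Hypothetical part and orbit, for illustration only.
        rate = see_rate_per_device(1e-7, 2e3)   # cm^2 per device, particles/cm^2/day
        print(f"~{rate:.1e} events/device/day, about one every {1 / rate:.0f} days")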

    Fault tolerant and dynamic evolutionary optimization engines

    Mimicking natural evolution to solve hard optimization problems has played an important role in the artificial intelligence arena. Such techniques are broadly classified as Evolutionary Algorithms (EAs) and have been investigated for around four decades, during which important contributions and advances have been made. One main evolutionary technique which has been widely investigated is the Genetic Algorithm (GA). GAs are stochastic search techniques that follow the Darwinian principle of evolution. Their application to the solution of hard optimization problems has been very successful. Indeed, multi-dimensional problems presenting difficult search spaces with characteristics such as multi-modality, epistasis, non-regularity, deceptiveness, etc., have all been effectively tackled by GAs. In this research, a competitive form of GA known as fine-grained or cellular GAs (cGAs) is investigated, because of its suitability for System on Chip (SoC) implementation when tackling real-time problems. Cellular GAs have also attracted the attention of researchers due to their high performance, ease of implementation and massive parallelism. In addition, cGAs inherently possess a number of structural configuration parameters which make them capable of sustaining diversity during evolution and therefore of promoting an adequate balance between the exploitative and explorative stages of the search. The fast technological development of Integrated Circuits (ICs) has allowed a considerable increase in compactness and therefore in density. As a result, it is nowadays possible to fit circuits with millions of gates and transistors into very small silicon areas. Operational complexity has also significantly increased, and consequently other setbacks have emerged, such as the presence of faults that commonly appear in the form of single or multiple bit flips. Harsh environmental or time-dependent operating conditions can trigger faults in registers and memory locations due to induced radiation, electromigration and dielectric breakdown. These kinds of faults are known as Single Event Effects (SEEs). Research has shown that an effective way of dealing with SEEs consists of a combination of hardware and software mitigation techniques to overcome faulty scenarios. Permanent faults known as Single Hard Errors (SHEs) and temporary faults known as Single Event Upsets (SEUs) are common SEEs. This thesis aims to investigate the inherent abilities of cellular GAs to deal with SHEs and SEUs at the algorithmic level. A hard real-time application is targeted: calculating the attitude parameters for navigation in vehicles using Global Positioning System (GPS) technology. Faulty critical data, which can cause a system’s functionality to fail, are evaluated. The proposed mitigation techniques show the cGAs’ ability to deal with up to 40% stuck-at-zero and 30% stuck-at-one faults in chromosome bits and fitness-score cells. Due to the non-deterministic nature of GAs, dynamic on-the-fly algorithmic and parametric configuration has also attracted the attention of researchers. In this respect, the structural properties of cellular GAs provide a valuable means of influencing their selection pressure. This helps to maintain an adequate exploitation-exploration tradeoff, either from a purely topological perspective or through genetic operations that also make use of the structural characteristics of cGAs. These properties, unique to cGAs, are further investigated in this thesis through a set of medium- to high-difficulty benchmark problems. Experimental results show that the proposed dynamic techniques enhance the overall performance of cGAs in most benchmark problems. Finally, since cGAs are defined by their structure, their dimensionality is another line of investigation. 1D and 2D structures have normally been used to test cGAs at the algorithm and implementation levels. Although 3D-cGAs are an immediate extension, not enough attention has been paid to them, and so a comparative study on the dimensionality of cGAs is carried out. Having shorter radii, 3D-cGAs present a faster dissemination of solutions and have denser neighbourhoods. Empirical results reported in this thesis show that 3D-cGAs achieve better efficiency when solving multi-modal and epistatic problems. In the future, the performance improvements of 3D-cGAs will merge with the latest benefits that 3D integration technology has demonstrated, such as reductions in routing length, interconnection delays and power consumption.
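
    As an algorithm-level illustration of the fault scenarios described above, the sketch below injects stuck-at-zero and stuck-at-one faults into a chromosome bit string; the fractions echo the 40%/30% figures quoted in the abstract, but the function and its parameters are illustrative assumptions, not the thesis's actual fault-injection procedure.

        import random

        def inject_stuck_at_faults(chromosome, frac_sa0=0.4, frac_sa1=0.3, rng=random):
            # Force a random fraction of bit positions to 0 (stuck-at-zero) and
            # another fraction to 1 (stuck-at-one), modelling SEE-like register
            # faults at the algorithmic level rather than in hardware.
            n = len(chromosome)
            n_sa0, n_sa1 = int(frac_sa0 * n), int(frac_sa1 * n)
            positions = rng.sample(range(n), n_sa0 + n_sa1)
            faulty = list(chromosome)
            for i in positions[:n_sa0]:
                faulty[i] = 0
            for i in positions[n_sa0:]:
                faulty[i] = 1
            return faulty

        print(inject_stuck_at_faults([1, 0, 1, 1, 0, 0, 1, 0, 1, 1]))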

    DESIGN AND SYNTHESIS OF HIGH DENSITY INTEGRATED CIRCUITS

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an Integrated Circuit would double every two years. Moore’s prediction held for decades, which is why it is also called “Moore’s law”. The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, interconnection scaling is not optimal. At every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and consequently the dynamic power consumption. Moreover, due to the non-optimal scaling of the interconnections, signal routing becomes more and more challenging with every technology node advancement. Highly scaled technologies make it possible to reach a very high transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two different ways: using ad hoc design techniques on today’s CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. These two approaches correspond to the two parts of this thesis. The first part presents the design of an Associative Memory (AM), focusing on design and logic synthesis techniques that reduce power consumption. The field of applicability of AMs is real-time pattern-recognition tasks. The possible uses range from scientific calculations to image processing for intelligent autonomous devices to image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but it is necessary to select and save only the most interesting tracks. Since the data are compared in parallel, AMs are synchronous ICs with a very peaked power consumption, which must therefore be minimized. This AM is designed within the IMPART and HTT projects in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a “kill signal” that, if one of the bits in a word does not match, inhibits the switching of the following cells. Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven to be functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the future, nanocrossbars are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve synthesis, several algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOP), Dimension-reducible (D-red) functions and autosymmetric functions. The decomposed functions are implemented into lattices using internal and external decomposition methods. Experimental results show that these approaches reduce the complexity of the individual synthesis problems and lead, on average, to a reduction in lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defectivity ratio. To cope with this limitation, some techniques to reduce sensitivity to defects have been studied.
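
    The following behavioural sketch illustrates only the kill-signal idea described above, not the actual 28 nm circuit: a stored word is compared against a query bit by bit, and the first mismatch asserts the kill signal so that the remaining cells never switch.

        def match_with_kill_signal(stored_word, query_word):
            # Returns (match, cells_toggled): as soon as one bit mismatches, the
            # kill signal inhibits the remaining cells, saving dynamic power.
            cells_toggled = 0
            for stored_bit, query_bit in zip(stored_word, query_word):
                cells_toggled += 1                 # this cell performs its comparison
                if stored_bit != query_bit:
                    return False, cells_toggled    # kill signal asserted here
            return True, cells_toggled

        print(match_with_kill_signal([1, 0, 1, 1], [1, 1, 1, 1]))   # -> (False, 2)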

    Compact fermion to qubit mappings for quantum simulation

    Fermions are one of two types of particles that make up matter in the universe, characterised by many-body wavefunctions that are antisymmetric under particle exchange. Electrons, which underpin many physical systems of interest, are included in this group, so the ability to accurately simulate fermionic physics would be a great asset to research. However, the antisymmetric nature of these particles means that classical simulation of systems of multiple fermions is, in general, infeasible due to sign problems. This infeasibility extends even to simplified systems such as the Fermi-Hubbard model on a 2D grid. Simulation of fermions on a quantum device would avoid this problem entirely. A requisite step in simulating fermions on a quantum computer is mapping a many-body fermionic system onto qubits through a fermionic encoding. Significant properties of fermionic encodings include their qubit-to-fermionic-mode ratio and the weight of their encoded fermionic interaction operators. Both affect the runtime of quantum simulation algorithms, so it is ideal to minimise these quantities. This thesis presents the novel “compact” encoding, which outperforms all previous local encodings in these metrics. The construction of the encoding is shown for a number of interaction graph structures and its general properties are explored. Special attention is given to a remarkable feature whereby low-weight undetectable noise on the encoding corresponds to a natural noise process on fermionic systems, indicating that it may have utility in simulation even on imperfect, noisy quantum devices. An interesting feature of the compact encoding and others is an apparent link to topological error-correcting codes like the toric code. Inspection of the compact encoding for a cubic lattice reveals a link to an apparently novel 3D topological code with some unusual properties. The size of its codespace and its code distance are calculated, and the exact form of its logical operators and syndromes is shown. Excitations with fermionic character exist in this code, consistent with the other codes linked with fermionic encodings, pointing to a possible unifying picture for local fermionic encodings. Impact statement: Quantum computers have the potential for a wide-reaching impact. The development of a fault-tolerant quantum computer would allow hitherto infeasible problems to be tackled computationally. A significant application, which has been a primary motivator for the field since its inception, is the simulation of other quantum systems. Systems containing many fermions are of particular interest, not only because they are fundamentally difficult to simulate with normal computers but because they include systems of electrons, the particles which underpin almost all of chemistry. Simulating these systems on a quantum computer is not a simple task, however, as there must be a procedure to map the physics of many indistinguishable fermions onto the physics of stationary qubits, two fundamentally different systems. This procedure is called a fermionic encoding, and the main subject of this thesis is an example of one. The content of this thesis could benefit researchers in a number of fields. It adds to the rich zoo of fermionic encodings and may provide inspiration for further results in the field; it also highlights a possible link between the seemingly disparate local fermionic encodings, which may pave the way to a more unified general theory of representing fermions on qubits. The encoding presented in this work has favourable properties for simulation on noisy devices, so it may benefit research groups working on near-term quantum hardware by providing the means to perform interesting fermionic simulation experiments. The content of the last chapter may also be of interest to the error correction community, as it provides an example of an apparently unclassified topological code, which may lead to the development of new classes of code. This research may also yield benefits outside of academia. The quantum simulation of electronic systems would lead to greater understanding of chemical reactions, such as nitrogen fixation, and of materials such as superconductors and batteries. This understanding could lead to improvements in efficiency or the development of new substances, which would be invaluable to industries including agriculture, transportation and battery production.
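
    For readers unfamiliar with fermionic encodings, the sketch below builds the Pauli strings of the Majorana operators under the standard Jordan-Wigner encoding, the textbook baseline rather than the compact encoding of this thesis: the operator weight grows with the mode index, which is precisely the cost that local encodings aim to avoid.

        def jordan_wigner_majoranas(mode, n_modes):
            # Majorana pair for one fermionic mode under Jordan-Wigner: a Z
            # "parity string" on every earlier mode, then X (or Y) on the mode.
            z_string = 'Z' * mode
            identity_tail = 'I' * (n_modes - mode - 1)
            return (z_string + 'X' + identity_tail,
                    z_string + 'Y' + identity_tail)

        # Mode 3 of 6 already gives weight-4 operators: ('ZZZXII', 'ZZZYII').
        print(jordan_wigner_majoranas(3, 6))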

    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges including: hazardous conditions that limit mission lifetime, such as high radiation levels surrounding interesting destinations like Europa or toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions on short timescales. Yet, even as science objectives are becoming more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity? These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study: 1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail; 2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then upon arrival, would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice; and 3. A MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system. The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed. 
While there was substantial variation in the specifics, there was a common conceptual core that emerged—adaptation in the presence of changing circumstances. These changes were couched in various ways—anomalies, disruptions, discoveries—but they all ultimately had to do with changes in underlying assumptions. Invalid assumptions, whether due to unexpected changes in the environment, or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform the intended functions in the presence of invalid assumptions. Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including 2 other KISS studies.) The study consisted of two workshops, separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and areas of exploration were: 1. Reference missions: address/refine the resilience needs by exploring a set of reference missions 2. Capability survey: collect, document, and assess current efforts to develop capabilities and technology that could be used to address the documented needs, both inside and outside NASA 3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience in our future systems The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from the following three categories: architecture solutions, technology solutions, and process solutions. The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions: 1. There is a clear and definitive need for more resilient space systems. 
During our study period, the key scientists/engineers we engaged to understand potential future missions confirmed the scientific and risk reduction value of greater resilience in the systems used to perform these missions. 2. Resilience can be quantified in measurable terms—project cost, mission risk, and quality of science return. In order to consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified. We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns. 3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research that address the various facets of resilience, some within NASA, and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and application of formal verification methods for software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research as a result of the study. 4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge inherent in considering highly resilient space systems is that the increase in capability can result in an increase in complexity with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems, and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F. 5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program. Despite the interest in, and benefits of, deploying resilient space systems, to date, there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous, uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a real funded plan that considers the required fundamental work and evolution of needed capabilities. Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals. Additional work and future steps are needed to realize the potential of resilient systems—this study provided the necessary catalyst to begin this process.

    Characterizing Errors in Quantum Information Processors

    Error-free computation is an unattainable ideal, yet our world now contains many computers that appear error-free to their users. That such things are possible is explained by sophisticated theorems that demonstrate the possibility of efficiently reducing computational errors introduced by reasonably well-behaved noise.
 My thesis is about the problem of determining whether noise in prototype quantum information processors is sufficiently well-behaved for fault-tolerant quantum computing to be possible. My work is divided into two themes. The first theme is the interpretation of average gate fidelity, a quantity that has become the standard performance metric for assessing progress towards fault tolerance. I have elucidated the connection between average gate fidelity and the requirements of fault-tolerant quantum computing by demonstrating the limits of fidelity as a proxy for error rate, the usual metric in fault-tolerance literature. I thereby conclude that information additional to fidelity is required to assess progress towards fault-tolerance. The second theme is the characterization of two-level defect systems, a particularly deleterious kind of noise that can affect superconducting-integrated-circuit-based quantum computing prototypes. I have designed statistical experimental design algorithms that can rigorously assess the influence of these defect systems, and I helped develop a proposal to mitigate their influence. I thereby demonstrate that existing experimental techniques can become much more powerful by employing advanced data collection procedures.
 My work has immediate implications for current research efforts towards the first working quantum computer. Theoretical work should be directed at assessing noise sources using metrics other than average gate fidelity, and future experimental characterization techniques should become more modular in order to incorporate advanced statistical inference techniques like the ones I develop herein.
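
    As assumed background for the first theme (standard textbook relations, not results of the thesis), Nielsen's formula ties the average gate fidelity in dimension d to the entanglement (process) fidelity, and a single-qubit depolarizing channel gives a concrete worked value:

        % Nielsen's relation between average gate fidelity and the entanglement
        % (process) fidelity F_e of a channel E acting on a d-dimensional system:
        \[
          F_{\mathrm{avg}}(\mathcal{E}) \;=\; \frac{d\,F_{e}(\mathcal{E}) + 1}{d + 1}.
        \]
        % Worked single-qubit example: the depolarizing channel
        % E(rho) = (1 - p) rho + p I/2 has F_e = 1 - 3p/4, hence
        \[
          F_{\mathrm{avg}} \;=\; \frac{2\left(1 - \tfrac{3p}{4}\right) + 1}{3}
                           \;=\; 1 - \frac{p}{2},
        \]
        % so an average fidelity of 0.99 still admits p = 0.02 of depolarizing
        % noise, and worst-case error rates used in fault-tolerance threshold
        % arguments can exceed the infidelity further, especially for coherent
        % errors, which is the gap the first theme examines.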