20 research outputs found

    Data Understanding Applied to Optimization

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that these problems can only be resolved by increasingly smarter problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

    DESIGN AND SYNTHESIS OF HIGH DENSITY INTEGRATED CIRCUITS

    Gordon E. Moore, a co-founder of Fairchild Semiconductor and later of Intel, predicted that after 1980 the complexity of an integrated circuit would double every two years. Moore's prediction held for decades, which is why it is also called "Moore's law". The trend in ICs is driven by a reduction of area and power consumption. Today, scaled CMOS technologies are the main solution for digital processing. However, interconnection scaling is not optimal. At every new technology node, the number of metal layers and their thickness increase, exploiting the vertical direction. The reduction of the minimum distance between interconnections and the growth in the vertical dimension increase the parasitic capacitance and, consequently, the dynamic power consumption. Moreover, due to the non-optimal scaling of the interconnections, signal routing becomes more and more challenging at every technology node advancement. Highly scaled technologies make it possible to reach a very high transistor density, but the design must comply with strict rules for metal interconnections. The aim of this thesis is to find possible solutions to the disadvantages of scaled CMOS technologies. This goal is pursued in two different ways: using ad-hoc design techniques on today's CMOS technologies, and finding new approaches to the logic synthesis of nanocrossbars, an emerging post-CMOS technology. The two approaches correspond to the two parts of this thesis. The first part presents the design of an Associative Memory (AM), focusing on design and logic-synthesis techniques to reduce power consumption. The field of applicability of AMs is real-time pattern-recognition tasks. Possible uses range from scientific calculations to image processing for intelligent autonomous devices and image reconstruction for electro-medical apparatuses. In particular, AMs are used in High Energy Physics (HEP) experiments to detect particle tracks. HEP experiments generate a huge amount of data, but only the most interesting tracks need to be selected and saved. Because the data are compared in parallel, AMs are synchronous ICs with a strongly peaked power consumption, which therefore must be minimized. This AM is designed within the IMPART and HTT projects in 28 nm CMOS technology, using a fully-CMOS approach. The logic is based on the propagation of a "kill signal" that, if one of the bits in a word does not match, inhibits the switching of the following cells. Thanks to this feature, the designed AM array consumes less than 0.7 fJ/bit. A prototype has been fabricated and has proven to be functional. The final chip will be installed in the data acquisition chain of the ATLAS experiment at the HL-LHC at CERN. In the future, nanocrossbars are expected to reduce device dimensions and interconnection complexity with respect to CMOS. Logic functions are obtained with switching lattices of four-terminal switches. The research activity on nanocrossbars is carried out within the NANOxCOMP project. To improve synthesis, algorithmic approaches based on Boolean function decomposition and regularities are used, in particular P-circuits, EXOR-Projected Sums of Products (EP-SOP), Dimension-reducible (D-red) functions, and autosymmetric functions. The decomposed functions are implemented into lattices using internal and external decomposition methods. Experimental results show that these approaches reduce the complexity of the individual synthesis problems and lead, on average, to a reduction of lattice area and synthesis time. Lattices are made of self-assembled structures and have a non-negligible defect rate; to cope with this limitation, techniques to reduce sensitivity to defects have been studied.
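
    The kill-signal mechanism described above can be illustrated with a small behavioral model. The sketch below is a Python sketch with made-up names and an abstract switch-event count; it only mimics the behavior of the comparison logic and is not the fabricated 28 nm design.

    # Behavioral sketch (not the actual circuit) of the "kill signal" idea:
    # once a bit mismatch is found, the kill signal propagates and inhibits
    # switching activity in all following cells, which is what saves dynamic
    # power in the CMOS implementation. Names are illustrative assumptions.
    def match_word_with_kill_signal(stored_bits, input_bits):
        """Compare a stored word with an incoming word bit by bit.

        Returns (is_match, switch_events), where switch_events approximates
        how many cells actually toggled before the kill signal stopped the
        comparison.
        """
        kill = False             # kill signal: asserted after the first mismatch
        switch_events = 0
        for stored, incoming in zip(stored_bits, input_bits):
            if kill:
                # Following cells are inhibited: no comparison, no switching.
                continue
            switch_events += 1   # this cell toggles to perform the comparison
            if stored != incoming:
                kill = True      # mismatch: assert kill, freeze the rest of the word
        return (not kill), switch_events

    if __name__ == "__main__":
        stored = [1, 0, 1, 1, 0, 1, 0, 0]
        pattern = [1, 0, 0, 1, 0, 1, 0, 0]   # mismatch at bit index 2
        ok, toggles = match_word_with_kill_signal(stored, pattern)
        print(ok, toggles)       # False, 3: only 3 of the 8 cells switched

    In the word-matching case sketched here, the earlier a mismatch occurs, the fewer cells toggle, which is the intuition behind the sub-0.7 fJ/bit figure reported for the array.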

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architectures 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Subject Index Volumes 1–200


    Control Architecture for Intentional Island Operation in Distribution Network with High Penetration of Distributed Generation


    Formal Methods Specification and Analysis Guidebook for the Verification of Software and Computer Systems

    This guidebook, the second of a two-volume series, is intended to facilitate the transfer of formal methods to the avionics and aerospace community. The first volume concentrates on administrative and planning issues [NASA-95a], and the second volume focuses on the technical issues involved in applying formal methods to avionics and aerospace software systems. Hereafter, the term "guidebook" refers exclusively to the second volume of the series. The title of this second volume, A Practitioner's Companion, conveys its intent. The guidebook is written primarily for the nonexpert and requires little or no prior experience with formal methods techniques and tools. However, it does attempt to distill some of the more subtle ingredients in the productive application of formal methods. To the extent that it succeeds, those conversant with formal methods will also find the guidebook useful. The discussion is illustrated through the development of a realistic example, relevant fragments of which appear in each chapter. The guidebook focuses primarily on the use of formal methods for analysis of requirements and high-level design, the stages at which formal methods have been most productively applied. Although much of the discussion applies to low-level design and implementation, the guidebook does not discuss issues involved in the later life-cycle application of formal methods.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Explicit or Symbolic Translation of Linear Temporal Logic to Automata

    Formal verification techniques are growing increasingly vital for the development of safety-critical software and hardware in practice. Techniques such as requirements-based design and model checking for system verification have been successfully used to verify systems for air traffic control, airplane separation assurance, autopilots, CPU logic designs, life support, medical equipment, and other functions that ensure human safety. Formal behavioral specifications written early in the system-design process and communicated across all design phases increase the efficiency, consistency, and quality of the system under development. We argue that to prevent the introduction of design or verification errors, it is crucial to test specifications for satisfiability. We advocate for the adoption of a new sanity check via satisfiability checking for property assurance. Our focus here is on specifications expressed in Linear Temporal Logic (LTL). We demonstrate that LTL satisfiability checking reduces to model checking, and that satisfiability checking for the specification, its complement, and the conjunction of all properties should be performed as a first step in LTL model checking. We report on an experimental investigation of LTL satisfiability checking. We introduce a large set of rigorous benchmarks to enable objective evaluation of LTL-to-automaton algorithms in terms of scalability, performance, correctness, and the size of the automata produced. For explicit model checking, we use the Spin model checker; we tested all LTL-to-explicit-automaton translation tools that were publicly available when we conducted our study. For symbolic model checking, we use CadenceSMV, NuSMV, and SAL-SMC both for LTL-to-symbolic-automaton translation and to perform the satisfiability check. Our experiments yield two major findings. First, scalability, correctness, and other debilitating performance issues afflict most LTL translation tools. Second, for LTL satisfiability checking, the symbolic approach is clearly superior to the explicit approach. Ironically, the explicit approach to LTL-to-automata had been heavily studied, while only one algorithm existed for LTL-to-symbolic automata. Since 1994, there had been essentially no new progress in encoding symbolic automata for BDD-based analysis. Therefore, we introduce a set of 30 symbolic automata encodings. The set consists of novel combinations of existing constructs, such as different LTL formula normal forms, with a novel transition-labeled symbolic automaton form, a new way to encode transitions, and new BDD variable orders based on algorithms for tree decomposition of graphs. An extensive set of experiments demonstrates that these encodings yield significant, sometimes exponential, improvements over the current standard encoding for symbolic LTL satisfiability checking. Building upon these ideas, we return to the explicit-automata domain and focus on the most common type of specification used in industrial practice: safety properties. We show that we can exploit the inherent determinism of safety properties to create a set of 26 explicit automata encodings comprising novel aspects including: state numbers versus state labels versus a state look-up table, finite versus infinite acceptance conditions, forward-looking versus backward-looking transition encodings, assignment-based versus BDD-based alphabet representation, state and transition minimization, edge abbreviation, trap-state elimination, and determinization either on the fly or up front using the subset construction. We conduct an extensive experimental evaluation and identify an encoding that offers the best performance in explicit LTL model checking time and is consistently faster than the previous best explicit-automaton encoding algorithm.
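
    The reduction described above can be sketched in a few lines of Python. This is a conceptual sketch only: model_check, universal_model, and is_sat are hypothetical placeholders, not the interfaces of Spin, CadenceSMV, NuSMV, or SAL-SMC used in the study. The idea is that an LTL formula f is satisfiable exactly when a universal model (one admitting every computation over f's atomic propositions) does not satisfy the negation of f, so the model checker's counterexample is a witness for f.

    def is_satisfiable(formula, model_check, universal_model):
        # formula: LTL formula as a string, e.g. "G(req -> F ack)"
        # model_check(model, f): hypothetical checker returning True iff
        #   every computation of the model satisfies f
        # universal_model: hypothetical model over the formula's atomic
        #   propositions that admits every possible computation
        # f is satisfiable  <=>  the universal model does NOT satisfy "!f";
        # the counterexample returned by the model checker is then a
        # witness (a satisfying computation) for f.
        return not model_check(universal_model, "!(" + formula + ")")

    def property_assurance_checks(spec, properties, is_sat):
        # The sanity checks advocated above, run before model checking:
        # the specification, its complement, and the conjunction of all
        # properties should each be satisfiable; otherwise the specification
        # is vacuously true or the properties contradict one another, and a
        # later "successful" verification result is meaningless.
        conjunction = " & ".join("(" + p + ")" for p in properties)
        return {
            "specification satisfiable": is_sat(spec),
            "complement satisfiable": is_sat("!(" + spec + ")"),
            "conjunction satisfiable": is_sat(conjunction),
        }

    In practice is_sat would be built from is_satisfiable by fixing the model checker and the universal model, so the whole sanity check runs on the same tool chain later used for verification.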