240 research outputs found

    Aggregating over Dominated Points by Sorting, Scanning, Zip and Flat Maps

    The prefix aggregation operation (also called scan), and its special case, prefix summation, is an important parallel primitive that enjoys a lot of attention in the research literature. It is also used as one of the steps in many algorithms. Aggregation over dominated points in R^m is a multidimensional generalisation of prefix aggregation. It is also intensively researched, both as a parallel primitive and as a practical problem encountered in computational geometry, spatial databases and data warehouses. In this paper we show that, for a constant dimension m, aggregation over dominated points in R^m can be computed by O(1) basic operations: sorting the whole dataset, zipping sorted lists of elements, computing prefix aggregations of lists of elements, and flat maps, which expand the data size from the initial n to n log^{m-1} n. We thereby establish that prefix aggregation suffices to express aggregation over dominated points in higher dimensions, even though the latter is a far-reaching generalisation of the former. Many problems known to be expressible by aggregation over dominated points thus become expressible by prefix aggregation as well. We rely on a small set of primitive operations, which guarantees an easy transfer to various distributed architectures and certain desired properties of the implementation.
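    To make the building blocks concrete, here is a minimal Python sketch (not taken from the paper) of the one-dimensional special case: after sorting, aggregation over dominated points reduces to an exclusive prefix aggregation (scan). The function name, the use of summation as the aggregation operator, and the assumption of distinct keys are illustrative; the paper's construction for m > 1 additionally composes zips and flat maps and expands the data to n log^{m-1} n.

```python
from itertools import accumulate
import operator

def dominated_prefix_sum_1d(points):
    """For each point (key, value), return the sum of the values of all points
    it dominates, i.e. all points with a strictly smaller key (keys assumed
    distinct). Sorting followed by an exclusive prefix aggregation (scan) is
    the one-dimensional instance of the operations composed in the paper."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    values = [points[i][1] for i in order]
    # exclusive prefix sums: aggregate over everything placed earlier by the sort
    prefix = [0] + list(accumulate(values, operator.add))[:-1]
    result = [0] * len(points)
    for rank, idx in enumerate(order):
        result[idx] = prefix[rank]
    return result

print(dominated_prefix_sum_1d([(3, 10), (1, 5), (2, 7)]))  # -> [12, 0, 5]
```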

    Parallel-prefix structures for binary and modulo {2^n - 1, 2^n, 2^n + 1} adders

    Adders are among the most essential arithmetic units within digital systems. Parallel-prefix structures are efficient for adders because of their regular topology and logarithmic delay. However, how to build parallel-prefix adders is barely discussed in the literature. This work puts emphasis on how to build prefix trees and gives simple algorithms for building these architectures. One particular modification of adders is for use with modulo arithmetic. The most common types of modulo adders are modulo 2^n - 1 and modulo 2^n + 1 adders, because they have a common base that is a power of 2. In order to improve their speed, parallel-prefix structures can also be employed for modulo 2^n +- 1 adders. This dissertation presents the formation of several binary and modulo prefix architectures and their modifications using Ling's algorithm. For all binary and modulo adders, both algorithmic and quantitative analyses are provided to compare the performance of the different architectures. Furthermore, to see how the process impacts the design, three technologies, from the deep-submicron to the nanometer range, are used to collect the quantitative data.
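    As background for the prefix formulation, the following software-level sketch (an illustration, not taken from the dissertation) shows how all carries of an addition arise from a scan with the associative generate/propagate operator; hardware prefix trees such as Sklansky, Brent-Kung, or Kogge-Stone evaluate the same scan in logarithmic depth. The function names and the 8-bit width are assumptions made for the example.

```python
def gp_combine(left, right):
    """Associative prefix operator on (generate, propagate) pairs:
    (G, P) o (G', P') = (G | P&G', P & P'), with `left` the more significant block."""
    g_hi, p_hi = left
    g_lo, p_lo = right
    return (g_hi | (p_hi & g_lo), p_hi & p_lo)

def prefix_add(a, b, n=8):
    """Bit-level model of an n-bit parallel-prefix adder: all carries come from a
    scan with gp_combine; hardware prefix trees compute the same scan in O(log n)
    logic levels instead of this sequential loop."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(n)]   # generate bits
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(n)]   # propagate bits
    carries = [0] * (n + 1)        # carries[i] is the carry into bit i (cin = 0)
    acc = (0, 1)                   # identity element of gp_combine
    for i in range(n):
        acc = gp_combine((g[i], p[i]), acc)
        carries[i + 1] = acc[0]
    s = sum((p[i] ^ carries[i]) << i for i in range(n))
    return s | (carries[n] << n)

assert prefix_add(0b1011, 0b0110) == 0b1011 + 0b0110
print(bin(prefix_add(200, 73)))    # -> 0b100010001 (273)
```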

    Experimental Progress in Computation by Self-Assembly of DNA Tilings

    Approaches to DNA-based computing by self-assembly require the use of DNA nanostructures, called tiles, that have efficient chemistries, expressive computational power, and convenient input and output (I/O) mechanisms. We have designed two new classes of DNA tiles, TAO and TAE, both of which contain three double helices linked by strand exchange. Structural analysis of a TAO molecule has shown that the molecule assembles efficiently from its four component strands. Here we demonstrate a novel method for I/O whereby multiple tiles assemble around a single-stranded (input) scaffold strand. Computation by tiling theoretically results in the formation of structures that contain single-stranded (output) reporter strands, which can then be isolated for subsequent steps of computation if necessary. We illustrate the advantages of the TAO and TAE designs by detailing two examples of massively parallel arithmetic: construction of complete XOR and addition tables by linear assemblies of DNA tiles. The three-helix structures provide flexibility for topological routing of strands in the computation, allowing the implementation of string tile models.
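    As a purely behavioural illustration of what such a linear assembly can compute, the sketch below assumes a cumulative-XOR reading, in which each output bit combines the input bit at that position with the previously produced output bit; the tile- and strand-level chemistry is of course not modelled, and the function name is invented for the example.

```python
def cumulative_xor(inputs):
    """Software analogue of an XOR computation by a linear tile assembly
    (assumed reading): each output bit is the XOR of the input bit at that
    position and the previously produced output bit."""
    outputs, prev = [], 0
    for x in inputs:
        prev ^= x
        outputs.append(prev)
    return outputs

# One row of an XOR "table": input bits read from the scaffold strand
print(cumulative_xor([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```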

    Design of Soft Error Robust High Speed 64-bit Logarithmic Adder

    Continuous scaling of the transistor size and reduction of the operating voltage have led to significant performance improvements of integrated circuits. However, the vulnerability of the scaled circuits to transient data upsets, or soft errors, which are caused by alpha particles and cosmic neutrons, has emerged as a major reliability concern. In this thesis, we have investigated the effects of soft errors in combinational circuits and proposed soft error detection techniques for high speed adders. In particular, we have proposed an area-efficient 64-bit soft error robust logarithmic adder (SRA). The adder employs the carry-merge Sklansky adder architecture, in which carries are generated every 4 bits. Since the particle-induced transient, often referred to as a single event transient (SET), typically lasts for 100-200 ps, the adder uses time redundancy by sampling the sum outputs twice. The sampling instants are set 110 ps apart. In contrast to traditional time redundancy, which requires two clock cycles to generate a given output, the SRA generates an output in a single clock cycle. The sampled sum outputs are compared using a 64-bit XOR tree to detect any possible error. An energy-efficient 4-input transmission-gate-based XOR logic is implemented to reduce the delay and power in this case. Pseudo-static logic (PSL), which has the ability to recover from a particle-induced transient, is used in the adder implementation. In comparison with the space-redundant approach, which requires hardware duplication for error detection, the SRA is 50% more area efficient. The proposed SRA is simulated for different operands with errors inserted at different nodes at the inputs, the carry-merge tree, and the sum generation circuit. The simulation vectors are carefully chosen such that the SET is not masked by the error masking mechanisms inherently present in combinational circuits. Simulation results show that the proposed SRA is capable of detecting 77% of the errors. The undetected errors primarily result when the SET causes an even number of errors or when errors occur outside the sampling window.
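    The detection principle can be summarised in a small behavioural model (an illustrative sketch, not the SRA's circuit): the 64-bit sum is sampled at two instants separated by more than the expected SET width, and a bitwise XOR reduction flags any disagreement. The function name and the example values are assumptions for illustration.

```python
def detect_set(sample_t1, sample_t2, width=64):
    """Time-redundant error detection: compare two samples of the 64-bit sum
    taken ~110 ps apart; a transient that corrupts only one sample shows up
    as a nonzero XOR. Returns (error_detected, per-bit difference mask)."""
    diff = (sample_t1 ^ sample_t2) & ((1 << width) - 1)
    return diff != 0, diff

# A transient flips bit 17 in the first sample only
correct = 0x0123_4567_89AB_CDEF
glitched = correct ^ (1 << 17)
print(detect_set(glitched, correct))  # -> (True, 131072)
```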

    Problems

    I. Definition of the Subject and Its Importance

    FPGA를 이용한 시간 기반 고집적 PET 데이터 수집 장치

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Biomedical Sciences, College of Medicine, August 2019. Jae Sung Lee. Positron emission tomography (PET) is a widely used functional imaging modality for diagnosing cancer and neurodegenerative diseases. PET instrumentation studies focus on improving both spatial resolution and sensitivity, to improve lesion detectability while reducing the radiation exposure of patients. The silicon photomultiplier (SiPM) is a photosensor well suited to high-performance PET scanners owing to its compact size and fast response. However, SiPM-based PET scanners require a large number of readout channels owing to their high granularity; a typical whole-body PET scanner, for example, requires more than 40,000 SiPM channels. Developing high-performance SiPM-based PET scanners therefore requires a highly integrated data acquisition (DAQ) system that can digitize a large number of SiPM signals while preserving their fast temporal response. Time-based signal digitization is a promising approach to such highly integrated DAQ systems owing to its simple circuitry and fast temporal response. This thesis presents studies on developing highly integrated DAQ systems using a field-programmable gate array (FPGA). Firstly, a 10-ps time-to-digital converter (TDC) implemented within an FPGA was developed. FPGA-based TDCs suffer from non-linearity because FPGAs are not originally designed to implement TDCs; we proposed a dual-phase sampling architecture that accounts for the FPGA clock distribution network to mitigate this non-linearity, and developed an on-the-fly calibrator that compensates the innate bin-width variations without introducing dead time. Secondly, a time-based SiPM multiplexing and readout method was developed using the principle of the global positioning system (GPS). The signal traces connecting every SiPM to four timing channels encode the position information, which is recovered from the innate transit-time differences measured by four FPGA-TDCs. In addition, the minimal signal distortion introduced by the multiplexing circuit allowed a time-over-threshold (ToT) method to be used for energy measurement after multiplexing. Thirdly, we proposed a new FPGA-only digitizer. The programmable FPGA input/output (I/O) ports were configured as stub-series terminated logic (SSTL) input receivers, so that each FPGA I/O port functioned as a high-performance voltage comparator with a fast temporal response. We demonstrated that the FPGA can serve as a high-performance DAQ system by directly digitizing time-of-flight (TOF) PET detector signals without any front-end electronics. Lastly, we developed comparator-less charge-to-time converter (QTC) DAQ systems to collect data from a prototype high-resolution brain PET scanner: the energy channel consists of a QTC combined with an SSTL input receiver of the FPGA, and the timing channel is a TDC implemented within the same FPGA. The detailed structures of a brain phantom were well resolved using the developed high-resolution brain PET scanner and the highly integrated time-based DAQ systems.
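    As a rough illustration of the GPS-like multiplexing idea, the sketch below (an assumption-laden model, not the thesis' actual decoder) identifies the firing SiPM from four timing-channel timestamps by matching their differences against the known, position-dependent trace delays; the delay table, tolerance, and pixel layout are invented for the example.

```python
def decode_position(arrival_times, delay_table, tol=0.05):
    """Identify which SiPM fired from four TDC timestamps (in ns).
    Each candidate pixel has known trace delays to the four timing channels;
    the firing pixel is the one whose delay pattern best matches the measured
    time differences (the absolute arrival time cancels out)."""
    def normalize(ts):
        t0 = min(ts)
        return [t - t0 for t in ts]
    meas = normalize(arrival_times)
    best, best_err = None, float("inf")
    for pixel, delays in delay_table.items():
        ref = normalize(delays)
        err = sum((m - r) ** 2 for m, r in zip(meas, ref))
        if err < best_err:
            best, best_err = pixel, err
    return best if best_err < tol else None

# Hypothetical 2x2 example: trace delays (ns) from each pixel to channels A-D
delays = {(0, 0): [0.1, 0.9, 0.9, 1.7],
          (0, 1): [0.9, 0.1, 1.7, 0.9],
          (1, 0): [0.9, 1.7, 0.1, 0.9],
          (1, 1): [1.7, 0.9, 0.9, 0.1]}
print(decode_position([5.02, 5.82, 5.81, 6.63], delays))  # -> (0, 0)
```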

    Decidability and coincidence of equivalences for concurrency

    There are two fundamental problems concerning equivalence relations in concurrency. One is: for which system classes is a given equivalence decidable? The second is: when do two equivalences coincide? Two well-known equivalences are history preserving bisimilarity (hpb) and hereditary history preserving bisimilarity (hhpb). These are both 'independence' equivalences: they reflect causal dependencies between events. Hhpb is obtained from hpb by adding a 'backtracking' requirement. This seemingly small change makes hhpb computationally far harder: hpb is well known to be decidable for finite-state systems, whereas the decidability of hhpb had been a renowned open problem for several years; only recently has it been shown undecidable. The main aim of this thesis is to gain insights into the decidability problem for hhpb, and to analyse when it coincides with hpb; less technically, we might say, to analyse the power of the interplay between concurrency, causality, and conflict. We first examine the backtracking condition, and see that it has two dimensions.

    Degradation Models and Optimizations for CMOS Circuits

    Ensuring the reliability of CMOS circuits is currently one of the greatest challenges in chip and circuit design. With the end of Dennard scaling, every new semiconductor technology generation increases the electric fields inside the transistors. These stronger fields stimulate degradation phenomena (transistor aging, self-heating, noise, etc.), so the transistors degrade ever more strongly. With every new technology generation, transistors therefore suffer ever larger shifts of their electrical parameters. To preserve the functionality and reliability of a circuit, it becomes essential to determine precisely how the weakened transistors affect it. The two most important effects of degradation are slower switching and increased power consumption of the circuit. If these effects are ignored, the reduced switching speed can lead to timing violations (i.e., the circuit cannot complete its computation before the next operation begins) and compromise the circuit's functionality (erroneous outputs, corrupted data, etc.). To account for this degradation of the transistor parameters over time, safety margins (guardbands) are introduced: for example, the clock period of the circuit is artificially lengthened to tolerate slower switching and thus avoid errors. This, however, comes at the cost of performance, since a longer clock period means a lower clock frequency. Determining the right safety margin is crucial: a margin chosen too small leads to errors in the circuit, while one chosen too large causes unnecessary performance loss. Today, industry bases its reliability estimation on the worst possible case (maximally aged circuit, maximum operating temperature at minimum voltage, worst-case fabrication, etc.). This worst-case assumption guarantees that the chip (or integrated circuit) remains functional under all operating conditions that may occur, and it permits many simplifications: for example, the actual operating temperature need not be determined, since the worst (very high) operating temperature can simply be assumed. Unfortunately, this established practice of worst-case analysis (experimental or simulation-based) can no longer be sustained. It implies such harsh operating conditions (maximum temperature, etc.) and requirements (e.g., 25 years of operation) that the transistors, exposed to ever stronger electric fields, suffer enormous degradation: the combination of high temperature, high voltage, and the electric fields that grow with every generation makes the degradation phenomena increase steadily. A safety margin determined under the worst case is therefore enormously pessimistic and turns out far too large, and this degree of pessimism causes substantial performance losses that are unnecessary and hence avoidable.
    While military circuits, for example, must operate for 25 years under harsh conditions, consumer electronics run at lower temperatures and only have to remain functional for the duration of a two-year warranty. For the latter, the safety margins can thus be much smaller, recovering performance that was previously given up in the name of reliability. This work aims to provide tailored safety margins for the individual application scenarios of a circuit. For demanding environments such as space applications (where repair is impossible), the worst case remains relevant. Most applications, however, face less harsh operating conditions (e.g., cooling systems keep temperatures lower); there, safety margins can be determined in a tailored, application-specific way, so that degradation is tolerated exactly and reliability is preserved at minimal cost (performance, etc.). Unfortunately, today's standard design tools are not well equipped for this application-specific determination of safety margins. This work aims to enable standard design tools to meet this need for reliability estimation of arbitrary circuits under arbitrary operating conditions. To this end, we present our research contributions as four steps on the way to application-specific safety margins. Step 1 improves the modelling of the degradation phenomena (transistor aging, self-heating, noise, etc.); its goal is a comprehensive, unified model of these phenomena. Using defect models from materials science, the underlying physical processes of the degradation phenomena are modelled in order to capture their interactions (e.g., phenomenon A can accelerate phenomenon B) and to obtain a unified model in which different phenomena are simulated simultaneously; recently discovered phenomena are modelled and taken into account as well. In total, this allows an accurate degradation modelling of transistors that considers all essential phenomena at once. Step 2 accelerates these degradation models from several minutes per transistor (the physicists' models aim at accuracy rather than performance) to a few milliseconds per transistor. The contributions of this dissertation speed up the models by a large factor, first by simplifying the computations as far as possible (e.g., only the peak degradation values are required, not all values over the course of time) and then by exploiting the parallelism of today's computer hardware; both approaches increase the evaluation speed without affecting the accuracy of the computation. In Step 3, these accelerated degradation models are integrated into the standard tools. The standard tools currently consider only best-case, typical, and worst-case standard cells (digital) or transistors (analog); these three types of cells/transistors are determined experimentally, at great effort, by the foundry (semiconductor manufacturer). Since only these three types are determined, the tools perform no reliability estimation for a specific application (temperature, voltage, activity).
    Simulations with degradation models make such application-specific estimation possible, but this capability first has to be integrated; this integration is one of the contributions of this dissertation. Step 4 accelerates the standard tools. Digital circuit designs that are not based on standard cells, as well as complex analog circuits, currently cannot be evaluated with analog circuit simulators, whose performance does not suffice for simulations of this size; this dissertation presents techniques to accelerate these tools so that such large circuits can be simulated. These research contributions, each spanning several publications, enable standard tools to determine the safety margin for custom application scenarios. For a given circuit lifetime, temperature, voltage, and activity (switching behaviour induced by the software applications), the effects of transistor degradation can be evaluated and thus the required safety margin, neither under- nor overestimated, can be determined. This application-specific safety margin guarantees the reliability and functionality of the circuit for exactly this application at minimal performance cost.
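    To give an impression of what Step 2 describes, here is a hedged sketch: only the end-of-life peak of a generic power-law aging model is evaluated per transistor, and the independent per-transistor evaluations are spread over all CPU cores. The model, its coefficients, and the function names are illustrative assumptions, not the dissertation's actual physics-based models.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def peak_delta_vth(stress):
    """Illustrative aging model: peak threshold-voltage shift after the given
    stress, using a generic power law dVth = A * exp(-Ea / (k*T)) * t^n.
    Only the end-of-life peak is computed, not the full waveform over time."""
    A, Ea, n, k = 5e-3, 0.1, 0.2, 8.617e-5        # made-up fitting constants
    years, temp_k, duty = stress
    t_seconds = years * 3.156e7 * duty            # effective stress time
    return A * math.exp(-Ea / (k * temp_k)) * t_seconds ** n

def evaluate_circuit(stress_per_transistor):
    # Per-transistor evaluations are independent, so they parallelise trivially.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(peak_delta_vth, stress_per_transistor))

if __name__ == "__main__":
    # e.g. 10 years at 350 K with 30% activity for every transistor in a netlist
    shifts = evaluate_circuit([(10, 350.0, 0.3)] * 10_000)
    print(f"worst-case dVth across the netlist: {max(shifts):.4f} V")
```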

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps, from data collection and their summarization and clustering to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and to how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.