
    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML-based automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and it requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. The consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and so that it is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question of what the optimum size of a unit is. A mathematical model linking yield and reliability is therefore developed to answer this question and also to study the effects of such parameters as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by the application of the model to a typical example. Another important result concerns the effect of periodic testing on reliability. It is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
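    The question of optimum unit size can be made concrete with a standard redundancy yield calculation. The sketch below combines a simple Poisson defect model with binomial survival of spare modules; it is a generic illustration under assumed parameters, not the yield-reliability model developed in the thesis.

```python
from math import comb, exp

def module_yield(defect_density, area):
    """Poisson model: probability that a block of the given area is defect-free."""
    return exp(-defect_density * area)

def block_yield(n_needed, n_spares, defect_density, module_area, support_area):
    """Yield of a unit that needs n_needed working modules out of n_needed + n_spares,
    assuming the test/reconfiguration logic (support_area) must itself be defect-free."""
    n_total = n_needed + n_spares
    y_m = module_yield(defect_density, module_area)
    y_modules = sum(comb(n_total, k) * y_m**k * (1 - y_m)**(n_total - k)
                    for k in range(n_needed, n_total + 1))
    return y_modules * module_yield(defect_density, support_area)

# Example: 16 required modules with 2 spares, 0.5 defects/cm^2,
# 0.05 cm^2 per module and 0.01 cm^2 of test/reconfiguration logic.
print(block_yield(16, 2, 0.5, 0.05, 0.01))
```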

    Techniques for Improving Security and Trustworthiness of Integrated Circuits

    The integrated circuit (IC) development process is becoming increasingly vulnerable to malicious activities because untrusted parties could be involved in this IC development flow. There are four typical problems that impact the security and trustworthiness of ICs used in military, financial, transportation, or other critical systems: (i) Malicious inclusions and alterations, known as hardware Trojans, can be inserted into a design by modifying the design during GDSII development and fabrication. Hardware Trojans in ICs may cause malfunctions, lower the reliability of ICs, leak confidential information to adversaries or even destroy the system under specifically designed conditions. (ii) The number of circuit-related counterfeiting incidents reported by component manufacturers has increased significantly over the past few years, with recycled ICs contributing the largest percentage of the total reported counterfeiting incidents. Since these recycled ICs have been used in the field before, the performance and reliability of such ICs have been degraded by aging effects and the harsh recycling process. (iii) Reverse engineering (RE) is the process of extracting a circuit's gate-level netlist and/or inferring its functionality. RE threatens the design because attackers can steal and pirate a design (IP piracy), identify the device technology, or facilitate other hardware attacks. (iv) Traditional tools for uniquely identifying devices are vulnerable to non-invasive or invasive physical attacks. Securing the ID/key is of utmost importance since leakage of even a single device ID/key could be exploited by an adversary to hack other devices or produce pirated devices. In this work, we have developed a series of design and test methodologies to deal with these four challenging issues and thus enhance the security, trustworthiness and reliability of ICs. The techniques proposed in this thesis include: a path delay fingerprinting technique for detection of hardware Trojans, recycled ICs, and other types of counterfeit ICs including remarked, overproduced, and cloned ICs with their unique identifiers; a Built-In Self-Authentication (BISA) technique to prevent hardware Trojan insertions by untrusted fabrication facilities; an efficient and secure split manufacturing via Obfuscated Built-In Self-Authentication (OBISA) technique to prevent reverse engineering by untrusted fabrication facilities; and a novel bit selection approach for obtaining the most reliable bits for SRAM-based physical unclonable function (PUF) across environmental conditions and silicon aging effects.
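    As a rough illustration of the reliable-bit-selection idea for SRAM PUFs (a generic sketch, not the specific selection approach proposed in this work), one can keep only the cells whose power-up value stays constant across repeated measurements taken at different voltage/temperature corners and after aging stress:

```python
import numpy as np

def select_reliable_bits(measurements, min_stability=1.0):
    """measurements: (n_trials, n_bits) array of 0/1 SRAM power-up values collected
    across voltage/temperature corners and after accelerated aging.
    Returns the indices of bits that (almost) never flip across trials."""
    ones_fraction = measurements.mean(axis=0)
    stability = np.maximum(ones_fraction, 1.0 - ones_fraction)  # closeness to always-0/always-1
    return np.where(stability >= min_stability)[0]

# Example: four power-up trials of an 8-cell array; only cells that never flip are kept.
trials = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
                   [0, 1, 0, 0, 1, 0, 1, 1],
                   [0, 1, 1, 0, 1, 0, 0, 1],
                   [0, 1, 1, 0, 1, 1, 0, 1]])
print(select_reliable_bits(trials))   # -> [0 1 3 4 7]
```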

    Optimization of Pseudo Functional Path Delay Test Through Embedded Memories

    Traditional automatic test pattern generation achieves high coverage of logic faults in integrated circuits. Automatic test of embedded memory arrays uses built-in self-test. Testing the memories and logic separately does not fully test the critical timing paths that go into or out of memories. Prior research has developed algorithms and software to test the longest paths into and out of embedded memories. However, in this prior work, the test generation time increased superlinearly with memory size. This is contrary to the intuition that the time should rise approximately linearly with memory size, and it limits the algorithm to circuits with relatively small memories. The focus of this research is to analyze the time complexity of the algorithm and propose changes to reduce the time required to test circuits with large memories. We build on our prior work on pseudo-functional K-longest-paths-per-gate test generation and on the benchmark circuits with embedded memories developed in that work. Since the cells within a memory array are not scan cells, a value that is captured in a memory cell must be moved to a scan cell using low-speed coda cycles. This approach will also support the test of any non-scan flip-flop or latch, in addition to embedded memory arrays. In addition to testing the critical timing paths, testing through memories eliminates the logic "shadows" around the memory where faults cannot be tested. Our complexity analysis has identified the reason for the superlinear increase in test generation time with larger memories, and we have verified this analysis with experimental results. We have also developed and implemented several heuristics to increase performance and evaluated them experimentally. This research also identifies the major algorithm changes required to further increase performance.

    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures, which is why testing such devices is needed to guarantee correct behavior at all times. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to facilitate test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools, hence approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system differently from the normal functional mode, passing through otherwise unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have been mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently being employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has also been a focus of my research. This solution is known as reconfigurable scan networks (RSNs) and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
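    The SBST principle, namely running a test program that exercises a processor block and compacts the observed results into a signature compared against a known-good reference, can be sketched at a very high level. The routine below is a hypothetical Python illustration; a real test program would be written in the processor's own instruction set and stored in system memory:

```python
def alu_sbst_signature(vectors):
    """Fold the results of a few ALU operations on chosen operands into a 16-bit
    signature, mimicking the response compaction done by an SBST test program."""
    sig = 0
    for a, b in vectors:
        for result in ((a + b) & 0xFFFF, a & b, a | b, a ^ b):
            sig = ((sig << 1) | (sig >> 15)) & 0xFFFF   # rotate the signature left by one
            sig ^= result
    return sig

VECTORS = [(0xAAAA, 0x5555), (0xFFFF, 0x0001), (0x1234, 0x4321)]
GOLDEN = alu_sbst_signature(VECTORS)     # recorded once from a known-good reference

def run_sbst():
    """On the device under test, recompute the signature and report the outcome."""
    return "PASS" if alu_sbst_signature(VECTORS) == GOLDEN else "FAIL"

print(run_sbst())
```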

    Rethinking Watermark: Providing Proof of IP Ownership in Modern SoCs

    Intellectual property (IP) cores are essential to creating modern system-on-chips (SoCs). Protecting the IPs deployed in modern SoCs has become more difficult as IP houses have been established across the globe over the past three decades. The threat posed by IP piracy and overuse has been a topic of research for the past decade or so and has led to the creation of a field called IP watermarking. IP watermarking aims to detect unauthorized IP usage by embedding additional, nonfunctional circuitry into the SoC. Unfortunately, prior work has been built upon assumptions that cannot be met within modern SoC design and verification processes. In this paper, we first provide an extensive overview of the current state of the art in IP watermarking. Then, we challenge these dated assumptions and propose a new path for future effective IP watermarking approaches suitable for today's complex SoCs in which IPs are deeply embedded.

    Studies on Core-Based Testing of System-on-Chips Using Functional Bus and Network-on-Chip Interconnects

    Testing a complex system such as a microprocessor-based system-on-chip (SoC) or a network-on-chip (NoC) is difficult and expensive. In this thesis, we propose three core-based test methods that reuse the existing functional interconnects (a flat bus, the hierarchical buses of a multiprocessor SoC (MPSoC), and a NoC) in order to avoid the silicon area cost of a dedicated test access mechanism (TAM). However, the use of functional interconnects as TAMs introduces several new problems. First, during tests the interconnects (including the bus arbitrator, the bus bridges, and the NoC routers) operate in the functional mode to transport the test stimuli and responses, while the cores under test (CUTs) operate in the test mode. Second, the test data is transported to the CUT through the functional bus, and not directly to the test port. Therefore, special core test wrappers that can provide the necessary control signals required by the different functional interconnects are proposed. We developed two types of wrappers: a buffer-based wrapper for the bus-based systems and a pair of complementary wrappers for the NoC-based systems. Using the core test wrappers, we propose test scheduling schemes for the three functionally different types of interconnects. The test scheduling scheme for a flat bus is developed based on an efficient packet scheduling scheme that minimizes both the buffer sizes and the test time under a power constraint. The scheduling scheme is then extended to take advantage of the hierarchical bus architecture of MPSoC systems. The third test scheduling scheme, based on bandwidth sharing, is developed specifically for NoC-based systems. The test scheduling is performed with the objective of co-optimizing the wrapper area cost and the resulting test application time using the two complementary NoC wrappers. For each of the proposed methodologies for the three types of SoC architecture, we conducted a thorough experimental evaluation in order to verify their effectiveness compared to other methods.
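    A minimal sketch of test scheduling under a power constraint, which the proposed schemes solve in packet-based and bandwidth-sharing form: the greedy session-based packing below is a generic textbook-style illustration with hypothetical core data, not the packet-scheduling algorithm developed in the thesis.

```python
def schedule_tests(tests, power_budget):
    """tests: list of (core, test_time, test_power) tuples.
    Greedily packs core tests into sessions so that the summed power of a session
    never exceeds the budget; a session lasts as long as its longest test.
    Returns the sessions and the resulting total test application time."""
    assert all(t[2] <= power_budget for t in tests), "a single test exceeds the budget"
    remaining = sorted(tests, key=lambda t: t[1], reverse=True)   # longest tests first
    sessions = []
    while remaining:
        session, used = [], 0.0
        for t in list(remaining):
            if used + t[2] <= power_budget:
                session.append(t)
                used += t[2]
                remaining.remove(t)
        sessions.append(session)
    total_time = sum(max(t[1] for t in s) for s in sessions)
    return sessions, total_time

# Example: four cores tested over a shared bus with a 100 mW power budget.
cores = [("cpu", 500, 60.0), ("dsp", 400, 50.0), ("mem", 300, 30.0), ("io", 100, 20.0)]
print(schedule_tests(cores, 100.0))
```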

    Aging Monitoring Methodology for Built-In Self-Test Applications

    Master's dissertation, Electrical and Electronic Engineering, Instituto Superior de Engenharia, Universidade do Algarve, 2013. The high integration level achieved, as well as the complexity and performance enhancements of new nanometer technologies, make IC (Integrated Circuit) products very difficult to test. Moreover, long-term operation brings cumulative aging degradation: new processes and materials lead to emerging defect phenomena, and the consequence is products with increased variability in their behaviour, more susceptible to delay faults and with a reduced expected life cycle. The main objectives of this thesis are twofold, as explained in the following. First, a new software tool is presented that generates HDL (Hardware Description Language) descriptions of BIST (Built-In Self-Test) structures targeting delay faults and inserts this new self-test functionality into generic sequential CMOS circuits. The BIST methodology implements a scan-based BIST approach, using a new BIST controller that supports the Launch-On-Shift (LOS) and Launch-On-Capture (LOC) delay-fault techniques. Second, it is shown that multi-VDD tests in circuits with BIST infrastructures can be used to detect gross delay faults during in-field operation and, consequently, can serve as an aging-sensor methodology during the circuits' life cycle. A discrete set of multi-VDD BIST sessions generates a Voltage Signature Collection (VSC), and the presence of a delay fault (or a physical defect) modifies the VSC, enabling the aging-sensor capability. The proposed Design for Testability (DfT) method and tool are demonstrated with extensive SPICE simulations using three ITC'99 benchmark circuits. The work began with a study of the state of the art in this area.
Thus, Chapter 2 presents the main DfT (Design for Testability) techniques available and used by industry, namely Scan Path (SP), BIST, and the scan-based delay-fault techniques LOS and LOC. Chapter 3, still part of the state-of-the-art study, addresses the phenomena that cause aging in digital circuits, in particular NBTI (Negative Bias Temperature Instability), which is considered the most relevant aging factor in integrated circuits (especially in nanometer technologies). Work on the first objective then began by defining the behaviour of the BIST structures and how they would be interconnected. This behaviour was described, block by block, in behavioural VHDL at the RTL (Register Transfer Level) and validated by simulation using ModelSim. The behavioural description was then synthesized with the Synopsys tools, in collaboration with INESC-ID in Lisbon (a partner institution in this research), yielding a gate-level netlist stored in the Verilog hardware description language. Two descriptions of the BIST circuitry were thus obtained: a behavioural one in VHDL and a structural one in Verilog (the structural Verilog description later enables aging simulation and analysis). The new BIST structure is based on the classical BIST model but introduces changes in test-vector generation and in the control and application of those vectors to the circuit, aimed at increasing fault detection and enabling delay-fault testing. It comprises three LFSRs (Linear Feedback Shift Registers): one generates the pseudo-random vectors for the circuit's primary inputs, another generates the vectors for the scan-path input, and the last is used as a counter that controls the number of bits shifted into the scan path. The controller was specifically designed to run a test-per-scan strategy (i.e., a test based on the circuit's existing scan path) and uses a state encoding that implements the Launch-On-Shift (LOS) and Launch-On-Capture (LOC) delay-fault test strategies. In the output section of the new BIST model, response compaction follows the same principle as the traditional model, in this case using a MISR (Multiple Input Signature Register). Still within the first objective, the BISTGen tool was then developed to automate the generation of the BIST structures described above, in both description types, and to insert them automatically into a circuit under test (CUT). The software must handle two kinds of circuit information, a behavioural description in VHDL and a structural description in Verilog, and must output the hardware description mentioned above with all blocks of the BIST structure inserted into a single file in one of the two languages (Verilog or VHDL), chosen beforehand by the user.
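As a rough illustration of the pseudo-random pattern generation performed by the LFSRs described above (a behavioural sketch, not BISTGen's HDL implementation), a Fibonacci (external-feedback) LFSR driven by a primitive polynomial can be modelled as:

```python
def fibonacci_lfsr(poly_taps, width, seed, n_bits):
    """External-feedback (Fibonacci) LFSR used as a pseudo-random pattern generator.
    poly_taps: exponents of the feedback polynomial excluding the constant term,
               e.g. x^4 + x^3 + 1 -> (4, 3); a primitive polynomial gives a
               maximal-length sequence of 2**width - 1 states.
    seed: non-zero initial state (the all-zero state locks the register up)."""
    state = seed & ((1 << width) - 1)
    assert state != 0, "seed must be non-zero"
    for _ in range(n_bits):
        yield state & 1                              # serial output bit
        fb = 0
        for t in poly_taps:
            fb ^= (state >> (width - t)) & 1         # tap positions from the polynomial
        state = (state >> 1) | (fb << (width - 1))   # shift right, feedback into the MSB

# 4-bit LFSR with the primitive polynomial x^4 + x^3 + 1: the register cycles
# through all 15 non-zero states before repeating.
print(list(fibonacci_lfsr((4, 3), 4, 0b0001, 15)))
```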
For the LFSRs and the MISR, the program lets the user choose between standard and modular LFSRs (also known as Fibonacci and Galois forms) and can automatically select, from a database, the feedback taps that define a primitive polynomial for the LFSR. A database structure is also needed to manage the names and the number of inputs and outputs of the circuit under test (CUT), so as to simplify the renaming the user may have to perform. Making the CUT's input and output names known to the program is important, since those names may come in any language or naming convention and may not match the usual default names. Regarding the two languages the program accepts for the CUT, in the VHDL case the final file after BIST insertion always has a similar structure, whatever the input file, varying only with the hardware of the CUT. For Verilog, however, the program must allow the final file to be generated in two forms, depending on the user's choice: the first is a file containing modules, similar to the VHDL case; the second, produced on request, is a unified file without sub-modules, so that the final file contains a single structure, easing its simulation and aging analysis in the following stages. For the second objective, building on previous work on methodologies for detecting delay faults in circuits with BIST, a test methodology was defined to assess, during the circuits' useful life, how they are aging; it is therefore an aging-monitoring methodology for circuits with BIST. A key requirement for this second objective is being able to predict how a circuit will age. For this inherently subjective task, the AgingCalc tool, developed at ISE-UAlg in an earlier master's thesis, was used. The tool starts with the user defining the operating probabilities of the circuit's primary inputs (the probability of each input being '0' or '1'). This is the subjective step of the aging analysis, since it is impossible to predict exactly how a circuit will be used. Based on these operating probabilities, the program uses the circuit structure to compute, first, the probabilities of the circuit nodes being '0' or '1' and, second, the probability of each PMOS transistor being on with its channel under stress (a negative VGS voltage and an electric field applied across the gate dielectric).
Using formulas from the literature that model the transistor threshold voltage Vth under NBTI (Negative Bias Temperature Instability) aging, the program computes, for each year or aging interval to be considered, the Vth shift of each PMOS transistor, based on the previously defined operating probabilities and conditions, obtaining a new Vth for each transistor (the likely values for the aged transistors). The program then invokes the HSPICE simulator to simulate the circuit's logic gates using a description that contains the computed Vth values. This simulation yields the delay of each gate for each aging year considered, from which the aging of every combinational path of the circuit can be predicted. Although the aging prediction is subjective, since it depends on an assumed operating profile, different operating probabilities can be defined so as to establish likely bounds for the aging of each path. With a tool that predicts how the circuit will age, the circuit structure can be modified to introduce the delay faults produced by NBTI aging over the years of operation (modelled through the Vth of the PMOS transistors). Chapter 5 then shows that multi-VDD tests in circuits with BIST infrastructures can detect gross delay faults during in-field operation, in some cases identifying aging-induced variations in short paths, and consequently these tests can be used as an aging-sensor methodology during the circuits' lifetime. A discrete set of multi-VDD BIST sessions generates a Voltage Signature Collection (VSC), and the presence of a delay fault (or a physical defect) modifies the VSC, which thereby acts as an aging sensor. Specifically, the supply voltage is lowered within a given interval and the circuit is subjected to successive BIST sessions at each voltage level, until it returns a signature different from the expected one. This simulation procedure is carried out for an aging span of up to 20 years, and the increment need not be one year; statistically, circuits in their first years of life do not age enough to fail from this effect alone, and the failures that can accelerate the aging process are related to significant manufacturing defects that are nevertheless not enough to make the circuit fail at the beginning of its life cycle, becoming effective only after some time in use. The proposed DfT methods and tools are demonstrated with extensive VHDL and SPICE simulations, using benchmark circuits.
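A self-contained sketch of the aging-to-voltage-signature chain described above, using a generic power-law NBTI shift and an alpha-power delay model; the formulas, constants, and pass/fail criterion are illustrative assumptions rather than the models used in the thesis:

```python
def delta_vth_nbti(stress_prob, years, A=0.05, n=0.16):
    """Generic power-law NBTI shift (volts): dVth = A * (alpha * t)^n, where alpha is
    the fraction of time the PMOS channel is under stress and t is the time in years.
    A and n are illustrative fitting constants."""
    return A * (stress_prob * years) ** n

def gate_delay(vdd, vth, d0=0.1, alpha=1.3):
    """Alpha-power-law delay model (ns): gate delay grows as VDD approaches Vth."""
    return d0 * vdd / (vdd - vth) ** alpha

def path_delay(vdd, vths):
    return sum(gate_delay(vdd, vth) for vth in vths)

def voltage_signature(vths, clock_period, vdd_levels):
    """Emulate successive BIST sessions at decreasing supply voltages; the pattern of
    PASS/FAIL results plays the role of the Voltage Signature Collection (VSC)."""
    return {vdd: ("PASS" if path_delay(vdd, vths) <= clock_period else "FAIL")
            for vdd in vdd_levels}

# Fresh vs. 10-year-aged version of a 5-gate path (nominal Vth = 0.35 V, 50% stress).
fresh = [0.35] * 5
aged = [0.35 + delta_vth_nbti(0.5, 10) for _ in fresh]
levels = [1.2, 1.1, 1.0, 0.9, 0.8]
print(voltage_signature(fresh, clock_period=1.0, vdd_levels=levels))
print(voltage_signature(aged, clock_period=1.0, vdd_levels=levels))
```

Lowering VDD step by step, the aged path starts failing at a higher supply voltage than the fresh one, so the two voltage signature collections differ; this difference is what the multi-VDD BIST sessions exploit as an aging sensor.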

    Design and Validation of Network-on-Chip Architectures for the Next Generation of Multi-synchronous, Reliable, and Reconfigurable Embedded Systems

    NETWORK-ON-CHIP (NoC) design is today at a crossroads. On one hand, the design principles for efficiently implementing interconnection networks in the resource-constrained on-chip setting have stabilized. On the other hand, the requirements on embedded system design are far from stabilizing. Embedded systems are composed by assembling heterogeneous components featuring differentiated operating speeds, and ad-hoc countermeasures must be adopted to bridge frequency domains. Moreover, an unmistakable trend toward enhanced reconfigurability is clearly underway, driven by the increasing complexity of applications. At the same time, the effect of technology scaling is manifold, since it provides unprecedented levels of system integration but also brings severe new constraints to the forefront: power budget restrictions, overheating concerns, circuit delay and power variability, permanent faults, and an increased probability of transient faults. Supporting different degrees of reconfigurability and flexibility in the parallel hardware platform cannot, however, be achieved through the incremental evolution of current design techniques; it requires a disruptive approach and a major increase in complexity. In addition, the new reliability challenges cannot be solved by traditional fault-tolerance techniques alone; the reliability approach must also be part of the overall reconfiguration methodology. In this thesis we take on the challenge of engineering NoC architectures for next-generation systems and provide design methods able to overcome the conventional way of implementing multi-synchronous, reliable, and reconfigurable NoCs. Our analysis is not limited to researching novel approaches to the specific challenges of the NoC architecture; we also co-design the solutions in a single integrated framework. Interdependencies between different NoC features are detected ahead of time, so we avoid engineering highly optimized solutions to specific problems that would coexist inefficiently in the final NoC architecture. To conclude, a silicon implementation by means of a test-chip tape-out and a prototype on an FPGA board validate the feasibility and effectiveness of the approach.