109 research outputs found

    Self-Test Mechanisms for Automotive Multi-Processor System-on-Chips

    The abstract is provided in the attachment.

    PWM controller design with Zynq SoC

    The project is the development of a digital pulse-width modulation (PWM) controller using Zynq technology, a system-on-chip that integrates a processor with programmable logic. This technology enables offloading data processing to the hardware, parallelizing and accelerating the control system to achieve a faster response. It also allows the creation of dedicated subsystems for tasks such as data flow, event synchronization, and management of input and output commands. The system can control up to two variables of a process through a dual-mode control loop, implemented using a combination of software and hardware. The software executes a configuration program and then runs a high-level control algorithm that can be potentially complex. The programmable logic, on the other hand, is pre-configured using a hardware description language that the compiler translates for the specific technology employed. This makes the project flexible, because both the hardware and the software can be reprogrammed according to the requirements, which can vary in nature: increased performance, compatibility with interfacing devices, or the need to isolate subsystems for functional verification. Finally, it was possible to verify the system and its components at each development stage, both internally using integrated logic analyzers and externally with the hardware-in-the-loop methodology.
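    As an aside on the software/hardware split described above, the processor side of such a dual-mode loop often reduces to writing setpoints into memory-mapped registers of the PWM block in the programmable logic. A minimal C sketch under that assumption follows; the AXI base address and register offsets are hypothetical placeholders, not the project's actual register map.

    /*
     * Sketch of the software half of a Zynq PWM control loop: the ARM
     * core computes a duty cycle and writes it to a PWM peripheral in
     * the programmable logic through memory-mapped AXI registers.
     * Base address and offsets are hypothetical placeholders.
     */
    #include <stdint.h>

    #define PWM_BASE        0x43C00000u  /* hypothetical AXI base address */
    #define PWM_REG_PERIOD  0x00u        /* period, in PL clock ticks     */
    #define PWM_REG_DUTY    0x04u        /* high time, in PL clock ticks  */

    static inline void reg_write(uintptr_t base, uintptr_t off, uint32_t val)
    {
        *(volatile uint32_t *)(base + off) = val;
    }

    /* Configure the PL once; afterwards software only adjusts setpoints. */
    void pwm_init(uint32_t period_ticks)
    {
        reg_write(PWM_BASE, PWM_REG_PERIOD, period_ticks);
    }

    /* duty is a fraction in [0, 1]; the PL generates the waveform, so a
     * single register write is all the software does per control step. */
    void pwm_set_duty(uint32_t period_ticks, float duty)
    {
        if (duty < 0.0f) duty = 0.0f;
        if (duty > 1.0f) duty = 1.0f;
        reg_write(PWM_BASE, PWM_REG_DUTY, (uint32_t)(duty * (float)period_ticks));
    }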

    Validation and verification of the interconnection of hardware intellectual property blocks for FPGA-based packet processing systems

    As networks become more versatile, the computational requirements for supporting additional functionality increase. These demands can be met by Field Programmable Gate Arrays (FPGAs), an increasingly popular technology for implementing packet processing systems. The fine-grained parallelism and density of these devices can be exploited to meet the computational requirements and implement complex systems on a single chip. However, the increasing complexity of FPGA-based systems makes them susceptible to errors and difficult to test and debug. To tackle the complexity of modern designs, system-level languages have been developed to provide abstractions suited to the domain of the target system. Unfortunately, the lack of formality in these languages can give rise to errors that are not caught until late in the design cycle. This thesis presents three techniques for verifying and validating FPGA-based packet processing systems described in a system-level description language. First, a type system is applied to the system description language to detect errors before implementation. Second, system-level transaction monitoring is used to observe high-level events on-chip following implementation. Third, the high-level information embodied in the system description language is exploited to instrument the system automatically for on-chip monitoring. This thesis demonstrates that these techniques catch errors which go undetected by traditional verification and validation tools. The locations of faults are pinpointed, and errors are caught earlier in the design flow, which saves time by reducing synthesis iterations.
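    To illustrate the first of these techniques, a type system over interconnections can reject a mismatched connection before any synthesis run. The C sketch below mimics that idea with compile-time assertions; the block names and bus widths are invented for illustration and are not the thesis's actual language.

    /*
     * Illustration of catching an interconnection error statically:
     * each IP-block port carries a bus width, and a connection is
     * only legal when producer and consumer widths agree. With
     * mismatched widths the build fails long before synthesis.
     */
    #include <assert.h>

    #define PARSER_OUT_WIDTH  64  /* packet parser emits 64-bit words  */
    #define FILTER_IN_WIDTH   32  /* filter block expects 32-bit words */

    #define CONNECT(out_w, in_w) \
        static_assert((out_w) == (in_w), "port width mismatch on connection")

    CONNECT(PARSER_OUT_WIDTH, PARSER_OUT_WIDTH);   /* widths agree: OK  */
    /* CONNECT(PARSER_OUT_WIDTH, FILTER_IN_WIDTH); fails to compile     */

    int main(void) { return 0; }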

    Design and Validation of Network-on-Chip Architectures for the Next Generation of Multi-synchronous, Reliable, and Reconfigurable Embedded Systems

    NETWORK-ON-CHIP (NoC) design is today at a crossroads. On one hand, the design principles for efficiently implementing interconnection networks in the resource-constrained on-chip setting have stabilized. On the other hand, the requirements on embedded system design are far from stabilizing. Embedded systems are composed by assembling heterogeneous components featuring differentiated operating speeds, and ad-hoc countermeasures must be adopted to bridge frequency domains. Moreover, an unmistakable trend toward enhanced reconfigurability is clearly underway due to the increasing complexity of applications. At the same time, the technology effect is manifold: it provides unprecedented levels of system integration, but it also brings severe new constraints to the forefront: power budget restrictions, overheating concerns, circuit delay and power variability, permanent faults, and an increased probability of transient faults. Supporting different degrees of reconfigurability and flexibility in the parallel hardware platform cannot, however, be achieved through the incremental evolution of current design techniques; it requires a disruptive approach and a major increase in complexity. In addition, the new reliability challenges cannot be solved by traditional fault-tolerance techniques alone; the reliability approach must also be part of the overall reconfiguration methodology. In this thesis we take on the challenge of engineering NoC architectures for next-generation systems and provide design methods able to overcome the conventional way of implementing multi-synchronous, reliable, and reconfigurable NoCs. Our analysis is not limited to researching novel approaches to the specific challenges of the NoC architecture; we also co-design the solutions in a single integrated framework. Interdependencies between different NoC features are detected ahead of time, so we avoid engineering highly optimized solutions to specific problems that then coexist inefficiently in the final NoC architecture. To conclude, a silicon implementation by means of a test-chip tape-out and a prototype on an FPGA board validate the feasibility and effectiveness of the approach.
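    One textbook countermeasure for bridging frequency domains of the kind mentioned above is a dual-clock FIFO whose read/write pointers cross domains in Gray code: consecutive values differ in exactly one bit, so a pointer sampled mid-transition is either the old or the new value, never a garbled mix. The C sketch below checks that property; it shows the standard technique, not necessarily this thesis's specific design.

    /*
     * Gray-code pointer encoding, the standard building block of a
     * dual-clock FIFO used to cross clock domains safely.
     */
    #include <assert.h>
    #include <stdint.h>

    static uint32_t bin_to_gray(uint32_t b)
    {
        return b ^ (b >> 1);
    }

    static uint32_t gray_to_bin(uint32_t g)
    {
        uint32_t b = 0;
        for (; g; g >>= 1)
            b ^= g;               /* binary bit i = XOR of gray bits >= i */
        return b;
    }

    int main(void)
    {
        for (uint32_t i = 0; i < 1000; i++) {
            uint32_t diff = bin_to_gray(i) ^ bin_to_gray(i + 1);
            assert(diff && (diff & (diff - 1)) == 0);  /* one bit flips */
            assert(gray_to_bin(bin_to_gray(i)) == i);  /* round-trips   */
        }
        return 0;
    }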

    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next-generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data system performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next-generation advances that will serve as a basis for future VLSI design.

    Computer Aided Verification

    This open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems; runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    FAT-DBT engine (framework for application-tailored, co-designed dynamic binary translation engine)

    Doctoral thesis in Electronic and Computer Engineering (PDEEC). Dynamic binary translation (DBT) has emerged as an execution engine that monitors, modifies, and possibly optimizes running applications for specific purposes. DBT is deployed as an execution layer between the application binary and the operating system or host machine, which creates opportunities for collecting runtime information. Initially, DBT supported binary-level compatibility, but based on the collected runtime information it also became popular for code instrumentation, ISA virtualization, and dynamic optimization purposes. Building a DBT system brings many challenges, as it involves integrating complex components and requires deep architectural-level knowledge. Moreover, DBT incurs significant overheads, mainly due to code decoding and translation, as well as execution along with the emulation of general functionalities. While initially conceived with high-end architectures for performance-demanding applications in mind, such challenges become even more evident when directing DBT to embedded systems. The latter make an effective deployment very challenging due to their complexity, tight memory constraints, and limited performance and power. Legacy support and binary compatibility are topics of relevant interest in such systems, due to their broad dissemination in industrial environments and their wide, long-standing use in sensing and monitoring processes, with considerable maintenance and replacement costs. To address such issues, this thesis contributes a solution that leverages an optimized and accelerated dynamic binary translator targeting resource-constrained embedded systems while supporting legacy systems. The developed work allows to: (1) evaluate the potential of DBT for legacy support purposes on resource-constrained embedded systems; (2) achieve a configurable DBT architecture specialized for resource-constrained embedded systems; (3) address DBT translation, execution, and emulation overheads through the combination of software and hardware; and (4) promote DBT utilization as a legacy support tool for industry as an end product. This thesis was supported by a PhD scholarship from Fundação para a Ciência e Tecnologia, SFRH/BD/81681/201
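    The execution engine described above typically revolves around a translate-and-dispatch loop: look up the current guest block in a translation cache, translate it on a miss, then run the translated code natively. The self-contained C toy below sketches that generic DBT pattern; the two-block guest program and direct-mapped cache are invented stand-ins, not the thesis's engine.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t guest_addr;
    typedef guest_addr (*host_block_fn)(void);   /* "native" code stub */

    #define HALT ((guest_addr)0xFFFFFFFFu)

    /* Toy translated blocks standing in for JIT-emitted host code. */
    static guest_addr block_0(void)  { puts("block @0x00"); return 0x10; }
    static guest_addr block_16(void) { puts("block @0x10"); return HALT; }

    /* Toy translator: a real engine decodes guest instructions and
     * emits host code; this one just maps addresses to stubs. */
    static host_block_fn translate_block(guest_addr pc)
    {
        printf("translating block at 0x%x\n", pc);
        return pc == 0 ? block_0 : block_16;
    }

    /* Tiny direct-mapped translation cache. */
    #define CACHE_SIZE 64
    static struct { guest_addr pc; host_block_fn fn; } cache[CACHE_SIZE];

    static host_block_fn cache_lookup(guest_addr pc)
    {
        unsigned i = pc % CACHE_SIZE;
        return (cache[i].fn && cache[i].pc == pc) ? cache[i].fn : NULL;
    }

    static void cache_insert(guest_addr pc, host_block_fn fn)
    {
        unsigned i = pc % CACHE_SIZE;
        cache[i].pc = pc;
        cache[i].fn = fn;
    }

    /* Dispatch loop: translate on a miss, then execute natively; the
     * miss path is the overhead the thesis attacks with hardware. */
    static void dbt_run(guest_addr entry)
    {
        for (guest_addr pc = entry; pc != HALT; ) {
            host_block_fn fn = cache_lookup(pc);
            if (fn == NULL) {
                fn = translate_block(pc);
                cache_insert(pc, fn);
            }
            pc = fn();   /* the block returns the next guest PC */
        }
    }

    int main(void)
    {
        dbt_run(0);
        return 0;
    }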

    Synthesis Techniques for Semi-Custom Dynamically Reconfigurable Superscalar Processors

    The accelerated adoption of reconfigurable computing foreshadows a computational paradigm shift, aimed at fulfilling the need for customizable yet high-performance flexible hardware. Reconfigurable computing fulfills this need by allowing the physical resources of a chip to be adapted to the computational requirements of a specific program, thus achieving higher levels of computing performance. This dissertation evaluates the area requirements for reconfigurable processing, an important yet often disregarded assessment for partial reconfiguration. Common reconfigurable computing approaches today attempt to create custom circuitry in static co-processor accelerators. We instead focused on a new approach that synthesized semi-custom general-purpose processor cores. Each superscalar processor core's execution units can be customized for a particular application, yet the processor retains its standard microprocessor interface. We analyzed the area consumption of these computational components by studying the synthesis requirements of different processor configurations. This area/performance assessment aids designers when constraining processing elements to a fixed-size area slot, a requirement for modern partial reconfiguration approaches. Our results provide a more deterministic evaluation of performance density, making the area cost analysis less ambiguous when optimizing dynamic systems for coarse-grained parallelism. The results showed that even though performance density decreases with processor complexity, the additional area still provides a positive contribution to the aggregate parallel processing performance. This evaluation of parallel execution density contributes to ongoing efforts in the field of reconfigurable computing by providing a baseline for area/performance trade-offs in partial reconfiguration and multi-processor systems.
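    The trade-off just described can be made concrete with a toy calculation: compare the performance density (performance per unit area) of progressively wider-issue cores against the throughput each delivers from a fixed-size reconfigurable slot. All numbers below are invented for illustration, not measurements from the dissertation.

    /*
     * Toy area/performance trade-off for a fixed-size partial-
     * reconfiguration slot holding one core: wider-issue cores have
     * lower performance density yet still raise slot throughput, the
     * aggregate effect the dissertation quantifies. Invented numbers.
     */
    #include <stdio.h>

    struct core_cfg {
        const char *name;
        double area;   /* area consumed inside the slot */
        double perf;   /* per-core throughput           */
    };

    int main(void)
    {
        const struct core_cfg cfgs[] = {
            { "scalar",      100.0, 100.0 },   /* density 1.00 */
            { "2-way issue", 220.0, 180.0 },   /* density 0.82 */
            { "4-way issue", 500.0, 320.0 },   /* density 0.64 */
        };

        /* Density drops with complexity, but each step still adds
         * throughput: the extra area contributes positively. */
        for (size_t i = 0; i < sizeof cfgs / sizeof cfgs[0]; i++)
            printf("%-12s density %.2f, slot throughput %.0f\n",
                   cfgs[i].name, cfgs[i].perf / cfgs[i].area, cfgs[i].perf);
        return 0;
    }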

    Design and test of readout electronics for medical and astrophysics applications

    Applied particle physics has a strong R&D tradition aimed at raising instrumentation performance to achieve results relevant to the scientific community. The know-how gained in developing particle detectors can be applied to apparently divergent fields such as hadrontherapy and cosmic ray detection. A proof of this fact is presented in this doctoral thesis, where the results of three different projects are discussed in corresponding macro-chapters. A brief introduction (Chapter 1) reports the basic features characterizing a typical particle detector system. This section follows the data transmission path: from the sensor, the data move through the front-end electronics to be read out and collected, ready for data manipulation. After this general section, the thesis describes the results achieved in two projects developed through the collaboration between the medical physics group of the University of Turin and the Turin section of the Italian National Institute for Nuclear Physics (INFN). Chapter 2 focuses on the TERA09 project. TERA09 is a 64-channel custom chip realized to equip the front-end readout electronics of the new generation of beam monitor chambers for particle therapy applications. In this field, the trend in accelerator development is moving toward compact solutions providing high-intensity pulsed beams. However, such a high intensity would saturate the present readout electronics. To overcome this critical issue, the TERA09 chip is able to cope with the expected maximum intensity while keeping high resolution, working over a wide conversion-linearity range that extends from hundreds of pA to hundreds of μA. The chip gain spread is on the order of 1-3% (r.m.s.), with a 200 fC charge resolution. The thesis author took part in the chip design and fully characterized the device. The same group is currently working, within the MoVeIT collaboration, on the development of a new silicon strip detector prototype for particle therapy applications. Chapter 3 presents the technical aspects of this project, focusing on the author's contribution: the front-end electronics design. The sensor adopted for the MoVeIT project is based on 50 μm thin sensors with internal gain, aiming to detect single beam particles and count them at fluxes up to 10^9 cm^-2 s^-1 with a pileup probability below 1%. Such an approach would be a drastic step forward compared with the classical, widely used monitoring systems based on gas ionization chambers. Regarding the front-end electronics, the group's strategy has been to design two custom front-end prototypes: one based on a transimpedance preamplifier with resistive feedback, the other based on a charge-sensitive amplifier. The challenging specifications for the electronics are the charge range, 3-150 fC, and the instantaneous rate of hundreds of MHz (100 MHz as the milestone, ideally up to 250 MHz). Chapter 4 reports on the trigger logic development for the Mini-EUSO detector. Mini-EUSO is a telescope designed by the JEM-EUSO Collaboration to map the Earth in the UV range from the vantage point of the International Space Station (ISS), in low Earth orbit. This approach will lay the groundwork for the detection of Extreme Energy Cosmic Rays (EECRs) from space. Thanks to its 2.5 μs time resolution, Mini-EUSO is capable of detecting a wide range of UV phenomena in the Earth's atmosphere. In order to maximize the scientific return of the mission, it is necessary to implement a multi-level trigger logic for data selection over different timescales. This logic is key to the success of the mission and thus must be thoroughly tested and carefully integrated into the data processing system prior to launch. The author took part in the hardware integration of the trigger and in laboratory trigger tests, and also developed the firmware of the trigger's ancillary blocks. Chapter 5 closes this doctoral thesis with a dedicated summary for each of the three macro-chapters.
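    The single-particle counting requirement quoted above carries an implicit Poisson pileup budget: with mean hit rate r on a channel and resolving time tau, the probability that a second particle arrives within tau of the first is about 1 - exp(-r*tau). The C sketch below evaluates that estimate for a few assumed per-channel rates; the rates and resolving time are illustrative, not the thesis's detector parameters.

    /*
     * Poisson pileup estimate for a single-particle counting detector:
     * two hits closer than the resolving time tau cannot be separated,
     * so the per-hit pileup probability is roughly 1 - exp(-r * tau).
     * Rates and resolving time are illustrative assumptions.
     */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double tau = 1e-9;                   /* 1 ns resolving time */
        const double rates[] = { 1e6, 1e7, 1e8 };  /* hits per second     */

        for (int i = 0; i < 3; i++) {
            double p = 1.0 - exp(-rates[i] * tau);
            printf("rate %.0e Hz -> pileup probability %.3f%%\n",
                   rates[i], 100.0 * p);
        }
        return 0;
    }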