
    High-speed Analog-to-digital Converters For Modern Satellite Receivers: Design Verification Test And Sensitivity Analysis

    Mixed-signal system-on-chip devices such as analog-to-digital converters (ADCs) have become increasingly prevalent in the semiconductor industry. Because the complexity and application of each device differ, sophisticated testing and characterization methods are required. In particular, signal integrity in I/O interfaces demands that standard RF design and test techniques be integrated into mixed-signal flows. While such techniques can be difficult to implement, on-chip test vehicles and RF circuitry open the possibility of wireless approaches to chip testing, which would eliminate the expensive wafer-probing solutions currently required to verify high-speed ADC functionality during product evaluation. This thesis describes a new high-speed analog-to-digital converter test methodology. The target systems use on-chip digital de-multiplexing and clock distribution. A detailed sequence of performance-testing operations is presented. Digital outputs are post-processed and fed into a computer-aided ADC performance characterization tool custom-developed as a MATLAB GUI. The problems of high-sampling-rate ADC testing are described, and the proposed test methodologies reduce test costs and overcome many test-hardware limitations. Because the focus is on satellite receiver systems, emphasis is placed on measuring inter-modulation distortion and effective resolution bandwidth. Fourier analysis is used as the primary characterization component, and the issue of sample-window adjustment is addressed to eliminate spectral leakage and false spur generation. A 6-bit, 800 MSamples/s dual-channel SiGe-based ADC is used as a target example and evaluated across corner-lot process variations to determine the sensitivity of the ADC to critical process-parameter variations.
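    As an illustration of the windowing issue mentioned above, the following minimal Python sketch (not the thesis's MATLAB tool; record length, sampling rate, and tone frequency are assumed values) applies a window to a captured ADC output record before the FFT to suppress spectral leakage, then estimates SNDR and ENOB from the resulting spectrum.

```python
import numpy as np

def adc_spectrum_metrics(codes, fs, f_in):
    """Windowed-FFT characterization of an ADC output record.

    codes : captured ADC output codes (1-D array)
    fs    : sampling rate in Hz
    f_in  : input tone frequency in Hz
    """
    n = len(codes)
    # A Blackman window suppresses leakage when sampling is not perfectly coherent.
    win = np.blackman(n)
    spec = np.abs(np.fft.rfft((codes - np.mean(codes)) * win)) ** 2

    # Bin of the fundamental; a few neighbouring bins absorb the window spreading.
    k_sig = int(round(f_in / fs * n))
    sig_bins = range(max(k_sig - 3, 1), min(k_sig + 4, len(spec)))
    p_signal = sum(spec[k] for k in sig_bins)
    p_rest = spec[1:].sum() - p_signal          # noise + distortion (DC excluded)

    sndr_db = 10 * np.log10(p_signal / p_rest)
    enob = (sndr_db - 1.76) / 6.02
    return sndr_db, enob

# Synthetic 6-bit record with assumed parameters, for illustration only
fs, f_in, n_bits, n = 800e6, 97e6, 6, 4096
t = np.arange(n) / fs
ideal = np.round((2**n_bits - 1) * (0.5 + 0.49 * np.sin(2 * np.pi * f_in * t)))
print(adc_spectrum_metrics(ideal, fs, f_in))
```

    With coherent sampling the window could be omitted, but the windowed estimate is the safer default when the tone frequency cannot be locked exactly to a bin.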

    A Methodology for Implementing RF BiSTs in Production Testing to Replace RF Conventional Tests

    Production testing of radio-frequency (RF) devices is challenging because of the complex tests that must be performed to verify functionality. This dissertation details a methodology for replacing complex and expensive RF functional tests with defect-oriented Built-in Self Tests (BiSTs). If a design has sufficient margin to its RF specifications, RF tests can be replaced with structural tests using a new data analysis technique, quadrant analysis, which is presented here. A Texas Instruments 65 nm RF SoC containing a Bluetooth and an FM core is used as a case study; data from the analysis of over one million production units of this System on Chip (SoC) is presented along with the results. The BiST techniques employed and the defect models used to develop them are discussed. The scenario in which a design does not have sufficient margin to specification is also addressed; the data analysis method required in that case is regression analysis, and data from such an analysis is shown. The results prove that expensive conventional RF tests can be replaced with structural tests, and that modern RFCMOS process technology and design advances such as the Digital Radio Processor (DRP™) technology enable this. The Defective Parts Per Million (DPPM) impact of the replacement is 27 units, which is acceptable for high-volume RFCMOS products. Finally, data showing a test-cost reduction of about 38% resulting from the elimination of conventional RF tests is presented.
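    The dissertation's quadrant analysis is not detailed in this abstract, so the sketch below is only a minimal illustration of the general idea under assumed single-sided test limits: each unit is classified by its BiST verdict versus its conventional RF test verdict, and the disagreement quadrants give estimates of test escapes (DPPM impact) and overkill if the RF test were removed.

```python
from collections import Counter

def quadrant_analysis(units, bist_limit, rf_limit):
    """Classify production units by (BiST verdict, RF functional verdict).

    units: iterable of (bist_value, rf_value) measurements per unit.
    Limits are assumed to be upper limits; real tests may be two-sided.
    """
    counts = Counter()
    for bist, rf in units:
        key = ("bist_pass" if bist <= bist_limit else "bist_fail",
               "rf_pass" if rf <= rf_limit else "rf_fail")
        counts[key] += 1

    total = sum(counts.values())
    # Units the BiST would ship although the RF functional test rejects them:
    escapes = counts[("bist_pass", "rf_fail")]
    # Units the BiST rejects although they meet the RF specification:
    overkill = counts[("bist_fail", "rf_pass")]
    return {
        "quadrants": dict(counts),
        "dppm_escape_estimate": 1e6 * escapes / total if total else 0.0,
        "overkill_rate": overkill / total if total else 0.0,
    }

# Illustrative data only (values are invented)
example = [(0.8, 0.7), (0.9, 1.2), (1.3, 0.6), (0.5, 0.4)]
print(quadrant_analysis(example, bist_limit=1.0, rf_limit=1.0))
```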

    Design and Realization of Fully-digital Microwave and Mm-wave Multi-beam Arrays with FPGA/RF-SOC Signal Processing

    There has been a constant increase in data traffic and device connections in mobile wireless communications, which has led fifth-generation (5G) implementations to exploit mm-wave bands at 24/28 GHz. Next-generation wireless access points (6G and beyond) will need to adopt large-scale transceiver arrays that combine multiple-input multiple-output (MIMO) theory with fully digital multi-beam beamforming. The resulting high-gain array factors will overcome the high path losses at mm-wave bands, and the simultaneous multi-beams will exploit the multi-directional channels created by multi-path effects and improve the signal-to-noise ratio. Such access points will be based on electronic systems that depend heavily on the integration of RF electronics with digital signal processing performed in field-programmable gate arrays (FPGAs) or RF systems-on-chip (RF-SoCs). This dissertation is directed towards the investigation and realization of fully digital phased arrays that can produce wideband simultaneous multi-beams with FPGA or RF-SoC digital back-ends. The first proposed approach is a spatial bandpass (SBP) IIR filter-based beamformer built on the concept of space-time network resonance; a 2.4 GHz, 16-element array receiver has been built for real-time experimental verification of this approach. The second and third approaches are based on Discrete Fourier Transform (DFT) theory and on a lens plus focal-plane array theory, respectively; the lens-based approach is essentially an analog realization of the DFT. These two approaches are verified on a 28 GHz mm-wave implementation with 800 MHz of bandwidth, using an RF-SoC as the digital back-end. For all proposed multi-beam beamformer implementations, the measured beams align well with the simulated ones. The proposed approaches differ in their architectures, hardware complexity, and cost, as discussed throughout the dissertation. The dissertation also presents an application of the multi-beam approaches to RF directional sensing for exploring white spaces within spatio-temporal spectral regions; a real-time directional sensing system is proposed to capture white spaces within the 2.4 GHz Wi-Fi band. Further, the dissertation investigates the effect of electromagnetic (EM) mutual coupling in antenna arrays on the real-time performance of fully digital transceivers, and proposes algorithms to compensate for mutual coupling in the digital domain. The first is based on deriving the mutual-coupling transfer function from the measured S-parameters of the antenna array and employing it in a Frost FIR filter in the beamforming back-end. The second uses fast algorithms to realize the inverse of the mutual-coupling matrix via tridiagonal Toeplitz matrices with sparse factors. A 5.8 GHz, 32-element array and a 1-7 GHz, 7-element tightly coupled dipole array (TCDA) have been employed as proofs of concept for these algorithms.
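    The DFT-based approach mentioned above can be illustrated with a minimal sketch (element count, spacing, and signal model are assumed here for illustration, not taken from the dissertation): applying an N-point FFT across the snapshots of an N-element uniform linear array produces N simultaneous orthogonal beams, one per DFT bin.

```python
import numpy as np

def dft_multibeam(snapshots):
    """Form N simultaneous beams from an N-element uniform linear array.

    snapshots: complex array of shape (n_samples, n_elements);
    each row is one time sample across the array.
    Returns shape (n_samples, n_elements): one column per beam.
    """
    # An FFT across the element axis is an orthogonal DFT beamforming matrix;
    # beam k points where the inter-element phase step equals 2*pi*k/N.
    return np.fft.fft(snapshots, axis=1) / snapshots.shape[1]

# Simulate a plane wave hitting an 8-element, half-wavelength-spaced array
n_elem, n_samp = 8, 256
theta = np.deg2rad(20.0)                      # assumed arrival angle
phase_step = np.pi * np.sin(theta)            # 2*pi*(d/lambda)*sin(theta), d = lambda/2
steering = np.exp(1j * phase_step * np.arange(n_elem))
tone = np.exp(1j * 2 * np.pi * 0.05 * np.arange(n_samp))[:, None]
snapshots = tone * steering + 0.05 * (np.random.randn(n_samp, n_elem)
                                      + 1j * np.random.randn(n_samp, n_elem))

beams = dft_multibeam(snapshots)
powers = np.mean(np.abs(beams) ** 2, axis=0)
print("strongest beam index:", int(np.argmax(powers)))
```

    A lens plus focal-plane array performs the same transform in the analog domain, which is why the two approaches are described as duals in the abstract above.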

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms, which in turn improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML-based automated approaches that have been introduced for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    On Energy Efficient Computing Platforms

    In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms to offer not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, have become an indispensable part of society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. In this thesis we therefore study and propose energy-optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provides higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance for achieving higher system reliability and, ultimately, energy efficiency. In this thesis we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power-gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
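    The thesis abstract does not give a concrete control algorithm, so the sketch below is only a minimal illustration of the agent idea described above: a per-region agent monitors link utilization and steps the local voltage/frequency operating point up or down. The thresholds and operating points are assumed values, not figures from the thesis.

```python
from dataclasses import dataclass

# Assumed (voltage, frequency) operating points, highest performance first.
OPERATING_POINTS = [(1.1, 1000e6), (1.0, 800e6), (0.9, 600e6), (0.8, 400e6)]

@dataclass
class RegionAgent:
    """Per-region agent that adapts the DVFS level to observed NoC utilization."""
    level: int = 0            # index into OPERATING_POINTS
    high: float = 0.75        # raise performance above this utilization
    low: float = 0.30         # lower performance below this utilization

    def update(self, utilization: float) -> tuple:
        if utilization > self.high and self.level > 0:
            self.level -= 1                     # heavy load: faster, higher voltage
        elif utilization < self.low and self.level < len(OPERATING_POINTS) - 1:
            self.level += 1                     # light load: slower, lower voltage
        return OPERATING_POINTS[self.level]

agent = RegionAgent()
for u in [0.2, 0.1, 0.5, 0.9, 0.95]:            # sample utilization readings
    v, f = agent.update(u)
    print(f"util={u:.2f} -> {v:.1f} V, {f/1e6:.0f} MHz")
```

    Power gating would extend the same loop with an "off" state entered after a sustained idle period; the thesis analyses these alternatives at different granularities.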

    Didactic platform with a DSP to support the teaching of digital signal processing

    If we put ourselves in the context of an engineering student, we discover that one of the greatest motivations for learning is laboratory work. This final degree thesis covers the research and development of a didactic platform for the course "Digital Signal Processing", taught during the third academic year of the ICT Systems Engineering degree. The platform we are going to develop will include a digital signal processor (DSP) and the peripherals required for students and teachers to create projects in a rapid-prototyping environment. Additionally, the provided annexes should meet the requirements for anyone interested in manufacturing our design with little trouble.

    Low-Power Embedded Design Solutions and Low-Latency On-Chip Interconnect Architecture for System-On-Chip Design

    This dissertation presents three design solutions that address several key system-on-chip (SoC) issues to achieve low power and high performance: 1) joint source and channel decoding (JSCD) schemes for low-power SoCs used in portable multimedia systems, 2) an efficient on-chip interconnect architecture for massive multimedia data streaming on multiprocessor SoCs (MPSoCs), and 3) a data processing architecture for low-power SoCs in distributed sensor network systems, together with its implementation. The first part includes a low-power embedded low-density parity-check (LDPC) / H.264 joint decoding architecture that lowers the baseband energy consumption of a channel decoder through joint source decoding and dynamic voltage and frequency scaling (DVFS). A low-power multiple-input multiple-output (MIMO) and H.264 video joint detector/decoder that minimizes energy for portable, wireless embedded systems is also designed. The second part proposes a link-level quality-of-service (QoS) scheme using unequal error protection (UEP) for low-power network-on-chip (NoC) designs, together with low-latency on-chip network designs for MPSoCs. It contains WaveSync, a low-latency-focused network-on-chip architecture for globally-asynchronous locally-synchronous (GALS) designs, and a simultaneous dual-path routing (SDPR) scheme that exploits the path diversity present in typical mesh-topology networks-on-chip. SDPR is akin to having a wider link, but without the significant hardware overhead associated with simple bus-width scaling. The last part presents data-processing-unit designs for embedded SoCs. We propose a data processing and control logic design for a new radiation-detection sensor system that generates data at or above the petabit-per-second level. Implementation results show that the intended clock rate is achieved within the power target of less than 200 mW. We also present a digital signal processing (DSP) accelerator supporting configurable MAC, FFT, FIR, and 3-D cross-product operations for embedded SoCs; it consumes 12.35 mW and occupies 0.167 mm² at 333 MHz.
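    The SDPR idea mentioned above can be sketched as follows. This is an illustrative reconstruction rather than the dissertation's implementation (router micro-architecture and flit format are not described in the abstract): a packet's flits are split over two link-disjoint minimal paths of a 2-D mesh, the XY route and the YX route, so that the pair behaves like a wider link.

```python
def xy_route(src, dst):
    """Dimension-ordered route in a 2-D mesh: move along X first, then Y."""
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

def yx_route(src, dst):
    """Complementary route: move along Y first, then X."""
    return [(px, py) for (py, px) in xy_route(src[::-1], dst[::-1])]

def sdpr_split(flits, src, dst):
    """Send even-indexed flits on the XY path and odd-indexed flits on the YX path.
    When src and dst share a row or column the two routes coincide, so all flits
    take the single available minimal path."""
    a, b = xy_route(src, dst), yx_route(src, dst)
    if a == b:
        return [(a, flits)]
    return [(a, flits[0::2]), (b, flits[1::2])]

# Example: 8 flits from router (0, 0) to router (2, 3) in a mesh
for path, part in sdpr_split(list(range(8)), (0, 0), (2, 3)):
    print(path, part)
```

    Because the XY and YX minimal paths only share their endpoints, the two flit streams never compete for the same intermediate link, which is what makes the scheme resemble a doubled link width.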

    Reliable Design of Three-Dimensional Integrated Circuits


    ToSHI - Towards Secure Heterogeneous Integration: Security Risks, Threat Assessment, and Assurance

    The semiconductor industry is entering a new age in which device scaling and cost reduction will no longer follow the decades-long pattern. Packing more transistors onto a monolithic IC at each node is becoming more difficult and expensive. Companies in the semiconductor industry are therefore increasingly seeking technological solutions to close the gap and enhance cost-performance while providing more functionality through integration. Putting all functions on a single chip (a system on a chip, or SoC) presents several issues, including increased cost and greater design complexity. Heterogeneous integration (HI), which uses advanced packaging technology to merge components that may be designed and manufactured independently using the best-suited process technology for each, is an attractive alternative. However, although the industry is motivated to move towards HI, many design and security challenges must be addressed. This paper presents a three-tier security approach for secure heterogeneous integration by investigating supply-chain security risks, threats, and vulnerabilities at the chiplet, interposer, and system-in-package levels. Furthermore, possible trust-validation and attack-mitigation methods are proposed for every level of heterogeneous integration. Finally, we share our vision of a roadmap toward developing security solutions for secure heterogeneous integration.

    Conception et test des circuits et systèmes numériques à haute fiabilité et sécurité (Design and Test of Highly Reliable and Secure Digital Circuits and Systems)

    The research activities I have carried out since my appointment as Chargé de Recherche deal with the definition of methodologies and tools for the design, test and reliability of secure digital circuits and for trustworthy manufacturing. More recently, we have started a new research activity on the test of 3D stacked integrated circuits based on the use of Through Silicon Vias (TSVs). Moreover, thanks to the relationships I have maintained after my post-doc in Italy, I have continued cooperating with Politecnico di Torino on topics related to the test and reliability of memories and microprocessors.

    Secure and Trusted Devices
    Security is a critical part of information and communication technologies and is the necessary basis for obtaining confidentiality, authentication, and integrity of data. The importance of security is confirmed by the extremely high growth of the smart-card market over the last 20 years. According to the article "Computer Crime and Security Survey" reported in "Le Monde Informatique" in 2007, financial losses due to attacks on "secure objects" in the digital world exceed $11 billion. Since the race between the developers of these secure devices and attackers keeps accelerating, also because of the heterogeneity and sheer number of new systems, improving the resistance of such components is today's major challenge.

    Among all possible security threats, the vulnerability of electronic devices that implement cryptographic functions (including smart cards and electronic passports) has become the Achilles' heel of the last decade. Even though recent crypto-algorithms have been proven resistant to cryptanalysis, certain fraudulent manipulations of the hardware implementing such algorithms allow confidential information to be extracted. So-called side-channel attacks were the first type of attack targeting the physical device. They are based on information gathered from the physical implementation of a cryptosystem: for instance, by correlating the power consumed with the data manipulated by the device, it is possible to discover the secret encryption key. This point is now widely addressed, and integrated circuit (IC) manufacturers have already developed several kinds of countermeasures.

    More recently, new threats have menaced secure devices and the security of the manufacturing process. A first issue is the trustworthiness of the manufacturing process. On one side, secure devices must ensure a very high production quality so as not to leak confidential information because of a malfunction; possible defects due to manufacturing imperfections must therefore be detected. This requires high-quality test procedures that rely on test features which increase the controllability and observability of internal points of the circuit. Unfortunately, such features are harmful from a security point of view, and access to them must be protected from unauthorized users. Another hazard is the possibility for an untrusted manufacturer to make malicious alterations to the design (for instance, to bypass or disable the security fence of the system). Nowadays, many steps of the production cycle of a circuit are outsourced, and for economic reasons manufacturing is often carried out by foundries located in foreign countries.
    The threat posed by so-called Hardware Trojan Horses, long considered theoretical, is beginning to materialize.

    A second issue is the hazard of faults that can appear during the circuit's lifetime and that may affect its behavior through soft errors or deliberate manipulations, called fault attacks. These can be based on intentionally modifying the circuit's environment (e.g., applying extreme temperatures, exposing the IC to radiation, X-rays, ultraviolet or visible light, or tampering with the clock frequency) so that the function implemented by the device produces an erroneous result. The attacker can then discover secret information by comparing the erroneous result with the correct one. In-the-field detection of any failing behavior is therefore of prime interest for taking further action, such as discontinuing operation or triggering an alarm. In addition, today's smart cards use 90 nm technology and, according to the various chip suppliers, 65 nm technology will be in use on the horizon of 2013-2014. Since the energy required to make a transistor switch is reduced in these new technologies, next-generation secure systems will become even more sensitive to various classes of fault attacks.

    Based on these considerations, within the group I work with, we have proposed new methods, architectures and tools to address the following problems:

    • Test of secure devices: unfortunately, classical digital-circuit testing techniques cannot easily be used in this context. Classical testing solutions rely on Design-for-Testability techniques that add hardware components to the circuit to provide full controllability and observability of internal states. Because crypto-processors and other cores in a secure system must pass high-quality test procedures to ensure that data are correctly processed, the testing of crypto chips faces a dilemma: design-for-testability schemes aim to provide high controllability and observability of the device, while security requires minimal controllability and observability in order to hide the secret. We have therefore proposed, on the one hand, enhanced scan-based test techniques that exploit compaction schemes to reduce the observability of internal information while preserving a high level of testability and, on the other hand, the use of Built-In Self-Test for such devices in order to avoid scan-chain-based testing.

    • Reliability of secure devices: we proposed an on-line self-test architecture for hardware implementations of the Advanced Encryption Standard (AES). The solution exploits the inherent spatial replication of a parallel architecture to implement functional redundancy at low cost.

    • Fault attacks: one of the most powerful types of attack on secure devices is based on intentionally injecting faults (for instance, using a laser beam) into the system while an encryption is taking place. By comparing the outputs of the circuit with and without fault injection, it is possible to identify the secret key. To address this problem we have analyzed how error detection and correction codes can be used as a countermeasure against this type of attack, and we have proposed a new code-based architecture.
    Moreover, we have proposed a bulk built-in current sensor that detects the presence of undesired current in the substrate of the CMOS device.

    • Fault simulation: to evaluate the effectiveness of countermeasures against fault attacks, we developed an open-source fault simulator able to perform fault simulation for the most classical fault models as well as for user-defined electrical-level fault models, in order to accurately model the effect of laser injection on CMOS circuits.

    • Side-channel attacks: these exploit physical, data-related information leaking from the device (e.g., current consumption or electromagnetic emission). One of the most intensively studied attacks is Differential Power Analysis (DPA), which relies on observing the chip's power fluctuations during data processing. I studied this type of attack in order to evaluate the influence of countermeasures against fault attacks on the power consumption of the device: introducing countermeasures against one type of attack could insert circuitry whose power consumption is related to the secret key, thereby making another type of attack easier. We have developed a flexible, integrated simulation-based environment for validating a digital circuit subjected to this kind of attack; all the architectures we designed have been validated with this tool. Moreover, we developed a methodology that drastically reduces the time required to validate countermeasures against this type of attack.

    TSV-based 3D Stacked Integrated Circuits Test
    The stacking of integrated circuits using Through Silicon Vias (TSVs) is a promising technology that keeps integration developing beyond Moore's law, since TSVs make it possible to tightly integrate several dies in a 3D fashion. Nevertheless, 3D integrated circuits present many test challenges, including testing at the different stages of the 3D fabrication process: pre-, mid-, and post-bond tests. Pre-bond test targets the individual dies at wafer level, testing not only classical logic (digital logic, I/Os, RAM, etc.) but also unbonded TSVs. Mid-bond test targets partially assembled 3D stacks, whereas post-bond test targets the final circuit.

    The activities carried out within this topic cover two main issues:

    • Pre-bond test of TSVs: the electrical model of a TSV buried in the substrate of a CMOS circuit is a capacitance connected to ground (when the substrate is grounded). The main assumption is that a defect may affect the value of that capacitance; by measuring the variation of the capacitance it is possible to check whether the TSV has been correctly fabricated. We have proposed a method to measure the capacitance based on the charge/discharge delay of the RC network containing the TSV.

    • Test infrastructures for 3D stacked integrated circuits: testing a die before stacking it onto another die introduces the problem of a dynamic test infrastructure, where test data must be routed to a specific die depending on the fabrication step reached. Solutions proposed in the literature allow the test paths within the circuit to be reconfigured according to on-the-fly requirements.
    We have started working on an extension of the IEEE P1687 test standard that makes use of automatic die detection based on pull-up resistors.

    Memory and Microprocessor Test and Reliability
    Thanks to device shrinking and the miniaturization of fabrication technology, the performance of microprocessors and memories has grown by more than five orders of magnitude over the last 30 years. With this technology trend, new problems and challenges must be faced, such as reliability, transient errors, variability, and aging.

    Over the last five years I have worked in cooperation with the Testgroup of Politecnico di Torino (Italy) to propose a new method for on-line validation of the correctness of a microprocessor's program execution. The main idea is to monitor a small set of the processor's control signals in order to identify incorrect activation sequences. This approach can detect both permanent and transient errors in the internal logic of the processor.

    Concerning the test of memories, we have proposed a new approach to automatically generate test programs starting from a functional description of the possible faults in the memory.

    Moreover, we proposed a new methodology, based on microprocessor error-probability profiling, that aims to estimate fault-injection results without the need for a typical fault-injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static, very fast analysis of the control and data flow of the target software application to compute its probability of success.
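    As a minimal illustration of the last methodology described above, the sketch below combines pre-characterized per-instruction success probabilities with an execution profile of the target application to estimate its overall probability of correct execution. The probability values, the instruction classes, the profile, and the independence assumption are all assumptions made for this sketch, not data from the original work.

```python
import math

# Assumed probability that each instruction class executes correctly in the
# presence of soft errors, obtained once from a fault-injection characterization.
INSTR_SUCCESS_PROB = {
    "add": 0.99995,
    "mul": 0.99990,
    "load": 0.99980,
    "store": 0.99985,
    "branch": 0.99992,
}

def application_success_probability(execution_profile):
    """Estimate the probability that the whole application runs correctly.

    execution_profile: dict mapping instruction class -> dynamic execution count,
    e.g. obtained from a static control/data-flow analysis of the target software.
    Assumes independent instruction outcomes.
    """
    log_p = 0.0
    for instr, count in execution_profile.items():
        log_p += count * math.log(INSTR_SUCCESS_PROB[instr])
    return math.exp(log_p)

# Hypothetical profile of a small kernel (counts are invented)
profile = {"add": 1200, "mul": 80, "load": 450, "store": 300, "branch": 250}
print(f"estimated probability of correct execution: "
      f"{application_success_probability(profile):.4f}")
```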