
    Automated Hardware Prototyping for 3D Network on Chips

    More than 50 years ago, Intel co-founder Gordon Moore made a prediction about the development of transistor technology: the number of transistors in integrated circuits would double every two years. His statement still holds, but the end of Moore's law is in sight. With the end of Moore's law, new approaches must be explored to keep increasing the performance of integrated circuits. Two possible approaches for "More than Moore" are 3D integration techniques and heterogeneous systems. At the same time, there is a trend towards multi-core processors based on networks on chips (NoCs). Beyond the end of Moore's law, ever-shrinking technology nodes, especially below 60 nm, raise new challenges. One difficulty is heat dissipation in large-scale integrated circuits and the resulting overheating of the chip. To address this problem in modern multi-core architectures, the power dissipation of the network resources must also be reduced substantially. This work comprises a hardware-controlled combination of frequency scaling and power gating for 3D on-chip networks, including an FPGA prototype. To this end, a clock-synchronous 2D network was extended to a three-dimensional asynchronous network with multiple frequency domains. In addition, a scalable online power-management system with low resource overhead was developed. The verification of new hardware components is one of the most time-consuming steps in the development process of highly integrated digital circuits. To accelerate this task and to enable parallel software development, an automated and user-friendly tool for the design of new hardware projects was developed as part of this work.
A graphical user interface covering the entire design flow, from architecture creation, parameter declaration, simulation and synthesis to test, is part of this tool. Moreover, the size of the architecture poses a particular challenge for prototyping. Previous work has failed to realize fast and uncomplicated prototyping, especially of architectures with more than 50 processor cores. This work includes a design space exploration and FPGA-based prototypes of various 3D NoC implementations with more than 80 processors.
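The hardware-driven power management summarized above (frequency scaling plus power gating per network resource) can be pictured with a small sketch. The policy below is a hypothetical Python illustration, not the thesis' actual controller; the thresholds, frequency levels and the `select_power_state` helper are invented:

```python
# Hypothetical per-router power-management policy (illustrative only).
def select_power_state(buffer_utilization, idle_cycles,
                       gate_after=1000, levels=(0.25, 0.5, 1.0)):
    """Return (power_gated, frequency_scale) for one NoC router."""
    if idle_cycles >= gate_after:
        return True, 0.0            # long-idle router is power-gated
    if buffer_utilization < 0.3:
        return False, levels[0]     # light traffic: lowest clock domain
    if buffer_utilization < 0.7:
        return False, levels[1]     # moderate traffic: mid frequency
    return False, levels[2]         # heavy traffic: full frequency

print(select_power_state(0.8, 10))    # busy router: (False, 1.0)
print(select_power_state(0.1, 2000))  # idle router: (True, 0.0)
```

In the actual architecture such decisions would be taken per frequency domain by the on-chip power-management hardware rather than by software.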

    An Interactive System Level Simulation Environment for Systems-on-Chip

    This article presents an interactive simulation environment for high-level models intended for the Design Space Exploration of Systems-on-Chip. The existing open-source development environment TTool supports the MARTE-compliant UML profile DIPLODOCUS and enables the designer to create, simulate and formally verify models. The goal is to obtain first performance estimations of the system under design while minimizing the modeling effort. The contribution outlined in this paper is an additional module providing means for controlling the simulation in real time by performing step-wise execution, saving and restoring simulation states, as well as animating UML models of the system. Moreover, the paper elaborates on the integration of these new features into the existing framework, consisting of a simulation engine on the one hand and a graphical user interface on the other.
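The step-wise execution with saving and restoring of simulation states can be pictured with a minimal sketch; the `SimulationEngine` class below is a toy stand-in for illustration, not TTool's actual simulation engine or its command interface:

```python
import copy

class SimulationEngine:
    """Toy stand-in for a discrete-event simulation engine."""
    def __init__(self):
        self.clock = 0
        self.state = {}

    def step(self):
        # advance the simulation by one step
        self.clock += 1
        self.state["last_event"] = self.clock

    def save(self):
        # snapshot everything needed to resume later
        return copy.deepcopy((self.clock, self.state))

    def restore(self, snapshot):
        # roll the simulation back to a previously saved point
        self.clock, self.state = copy.deepcopy(snapshot)

sim = SimulationEngine()
sim.step(); sim.step()
snap = sim.save()        # checkpoint at clock == 2
sim.step()
sim.restore(snap)        # undo the last step
print(sim.clock)         # 2
```

The deep copies matter: saving a shallow reference to mutable state would let later steps corrupt the checkpoint.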

    Platform-based design, test and fast verification flow for mixed-signal systems on chip

    This research provides methodologies to enhance the design phase, from architectural space exploration and system study to verification of the whole mixed-signal system. At the beginning of the work, some innovative digital IPs were designed to implement efficient signal conditioning for sensor systems-on-chip; these have been included in commercial products. After this phase, the main focus was the creation of a re-usable and versatile test of the device after tape-out, which is close to becoming one of the major cost factors for IC companies. By strongly linking this test to the models' test-benches, re-design phases and multi-environment scenarios are avoided, yielding a single, fast and reliable multi-level verification environment. All of this work generated several publications in the scientific literature. The compound scenario concerning the development of sensor systems is presented in Chapter 1, together with an overview of the related market, with a particular focus on the latest MEMS and MOEMS technology devices and their applications in various segments. Chapter 2 introduces the state of the art for sensor interfaces: the generic sensor interface concept (based on sharing the same electronics among similar applications, achieving cost savings at the expense of area and performance loss) versus the Platform Based Design methodology, which overcomes the drawbacks of the classic solution by keeping generality at the highest design layers and customizing the platform for a target sensor, achieving optimized performance. An evolution of Platform Based Design, achieved by implementing the ISIF (Intelligent Sensor InterFace) platform in silicon, is then presented.
ISIF is a highly configurable mixed-signal chip which allows designers to perform an effective design space exploration and to evaluate the system performance directly on silicon, avoiding the critical and time-consuming analysis required by the standard platform-based approach. Chapter 3 describes the design of a smart sensor interface for conditioning next-generation MOEMS. The adoption of a new high-performance, highly integrated technology allows us to integrate not only a versatile platform but also a powerful ARM processor and various IPs, so that the platform can serve not only for conditioning but also as a processing unit for the application. This chapter describes the various blocks, with particular emphasis on the IP developed to grant the highest degree of flexibility with minimum area occupation. The architectural space evaluation and the application prototyping with ISIF have enabled an effective, rapid and low-risk development of a new high-performance platform: a flexible sensor system for MEMS and MOEMS monitoring and conditioning. The platform has been designed to cover very challenging test-benches, such as a laser-based projector device. In this way the platform can effectively handle not only the sensor but also the whole system built around it, reducing the need for further electronics and resulting in an efficient test bench for the algorithms developed to drive the system. The high costs of ASIC development are mainly related to re-design phases caused by missing complete top-level tests, since the analog and digital design flows are verified separately. Starting from these considerations, the last chapter presents a complete test environment for complex mixed-signal chips.
A semi-automatic VHDL-AMS flow providing a fully matching top-level is described, and then an evolution towards fast self-checking test development for both model and real-chip verification is proposed. Through a Python interface, the designer can easily perform interactive tests covering the verification of all features (e.g. calibration and trimming) in the design phase, and then check them all with the same environment on the real chip after tape-out. This strategy has been tested on a 3D gyroscope for consumer applications, in collaboration with SensorDynamics AG.
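As a rough illustration of such a Python-driven self-checking test, the sketch below runs the same test against interchangeable back ends; the driver class, register names and trim code are invented for illustration, not the actual interface of the flow:

```python
class ModelDriver:
    """Stand-in back end for the simulated model; a silicon back end
    would expose the same read/write interface, so the identical test
    script runs on both the model and the real chip."""
    def __init__(self):
        self.regs = {"TRIM": 0, "CAL": 0}

    def write(self, reg, value):
        self.regs[reg] = value

    def read(self, reg):
        return self.regs[reg]

def trimming_test(driver, code=0x2A):
    """Self-checking test: program a trim code and verify the read-back."""
    driver.write("TRIM", code)
    return driver.read("TRIM") == code

print(trimming_test(ModelDriver()))  # True on a fault-free device
```

The key design point is that the test is self-checking: it returns pass/fail itself instead of relying on a separately maintained expected-results file.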

    Design and test of digital circuits and systems with high reliability and security

    Research activities I have carried on since my nomination as Chargé de Recherche deal with the definition of methodologies and tools for the design, test and reliability of secure digital circuits and trustworthy manufacturing. More recently, we have started a new research activity on the test of 3D stacked Integrated Circuits, based on the use of Through Silicon Vias. Moreover, thanks to the relationships I have maintained after my post-doc in Italy, I have kept cooperating with Politecnico di Torino on topics related to the test and reliability of memories and microprocessors.

Secure and Trusted Devices

Security is a critical part of information and communication technologies and is the necessary basis for obtaining confidentiality, authentication, and integrity of data. Its importance is confirmed by the extremely high growth of the smart-card market in the last 20 years. "Le Monde Informatique" reported in the article "Computer Crime and Security Survey" in 2007 that financial losses due to attacks on "secure objects" in the digital world exceed $11 billion. Since the race between the developers of these secure devices and attackers keeps accelerating, also due to the heterogeneity and sheer number of new systems, improving the resistance of such components is today's major challenge.

Among all possible security threats, the vulnerability of electronic devices that implement cryptographic functions (including smart cards and electronic passports) has become the Achilles' heel of the last decade. Indeed, even though recent crypto-algorithms have been proven resistant to cryptanalysis, certain fraudulent manipulations of the hardware implementing such algorithms can allow confidential information to be extracted. So-called Side-Channel Attacks were the first type of attack to target the physical device. They are based on information gathered from the physical implementation of a cryptosystem.
For instance, by correlating the power consumed with the data manipulated by the device, it is possible to discover the secret encryption key. Nevertheless, this point is widely addressed, and integrated circuit (IC) manufacturers have already developed different kinds of countermeasures.

More recently, new threats have menaced secure devices and the security of the manufacturing process. A first issue is the trustworthiness of the manufacturing process. On one side, secure devices must assure a very high production quality in order not to leak confidential information due to a malfunction of the device. Therefore, possible defects due to manufacturing imperfections must be detected. This requires high-quality test procedures that rely on test features that increase the controllability and observability of inner points of the circuit. Unfortunately, this is harmful from a security point of view, and therefore access to these test features must be protected from unauthorized users. Another harm is the possibility for an untrusted manufacturer to make malicious alterations to the design (for instance, to bypass or disable the security fence of the system). Nowadays, many steps of the production cycle of a circuit are outsourced. For economic reasons, the manufacturing process is often carried out by foundries located in foreign countries. The threat brought by so-called Hardware Trojan Horses, long considered theoretical, is beginning to materialize.

A second issue is the hazard of faults that can appear during the circuit's lifetime and that may affect the circuit's behavior by way of soft errors or deliberate manipulations, called Fault Attacks.
They can be based on the intentional modification of the circuit's environment (e.g., applying extreme temperatures, exposing the IC to radiation, X-rays, ultra-violet or visible light, or tampering with the clock frequency) in such a way that the function implemented by the device generates an erroneous result. The attacker can discover secret information by comparing the erroneous result with the correct one. In-the-field detection of any failing behavior is therefore of prime interest for taking further action, such as discontinuing operation or triggering an alarm. In addition, today's smart cards use 90 nm technology and, according to the various chip suppliers, 65 nm technology will be in effect on the 2013-2014 horizon. Since the energy required to force a transistor to switch is reduced in these new technologies, next-generation secure systems will become even more sensitive to various classes of fault attacks.

Based on these considerations, within the group I work with, we have proposed new methods, architectures and tools to solve the following problems:

• Test of secure devices: unfortunately, classical techniques for digital circuit testing cannot be easily used in this context. Indeed, classical testing solutions are based on Design-for-Testability techniques that add hardware components to the circuit, aiming to provide full controllability and observability of internal states. Because crypto-processors and other cores in a secure system must pass through high-quality test procedures to ensure that data are correctly processed, the testing of crypto chips faces a dilemma: design-for-testability schemes want to provide high controllability and observability of the device, while security wants minimal controllability and observability in order to hide the secret.
We have therefore proposed, on one side, the use of enhanced scan-based test techniques that exploit compaction schemes to reduce the observability of internal information while preserving a high level of testability. On the other side, we have proposed the use of Built-In Self-Test for such devices in order to avoid scan-chain-based test.

• Reliability of secure devices: we proposed an on-line self-test architecture for hardware implementations of the Advanced Encryption Standard (AES). The solution exploits the inherent spatial replication of a parallel architecture to implement functional redundancy at low cost.

• Fault Attacks: one of the most powerful types of attack on secure devices is based on the intentional injection of faults (for instance, by using a laser beam) into the system while an encryption occurs. By comparing the outputs of the circuit with and without the injected fault, it is possible to identify the secret key. To face this problem we have analyzed how to use error detection and correction codes as a countermeasure against this type of attack, and we have proposed a new code-based architecture. Moreover, we have proposed a bulk built-in current sensor that allows detecting the presence of undesired currents in the substrate of the CMOS device.

• Fault simulation: to evaluate the effectiveness of countermeasures against fault attacks, we developed an open-source fault simulator able to perform fault simulation for the most classical fault models, as well as for user-defined electrical-level fault models, to accurately model the effect of laser injections on CMOS circuits.

• Side-Channel Attacks: these exploit physical, data-related information leaking from the device (e.g. current consumption or electro-magnetic emission). One of the most intensively studied attacks is Differential Power Analysis (DPA), which relies on the observation of the chip's power fluctuations during data processing.
I studied this type of attack in order to evaluate the influence of countermeasures against fault attacks on the power consumption of the device. Indeed, the introduction of countermeasures against one type of attack could lead to the insertion of circuitry whose power consumption is related to the secret key, thus enabling another type of attack more easily. We have developed a flexible integrated simulation-based environment that allows validating a digital circuit subjected to this attack. All the architectures we designed have been validated through this tool. Moreover, we developed a methodology that drastically reduces the time required to validate countermeasures against this type of attack.

TSV-based 3D Stacked Integrated Circuits Test

The stacking of integrated circuits using TSVs (Through Silicon Vias) is a promising technology that keeps integration developing beyond Moore's law, as TSVs enable various dies to be tightly integrated in a 3D fashion. Nevertheless, 3D integrated circuits present many test challenges, including testing at the different stages of the 3D fabrication process: pre-, mid- and post-bond tests. Pre-bond test targets the individual dies at wafer level, testing not only classical logic (digital logic, IOs, RAM, etc.) but also unbonded TSVs. Mid-bond test targets partially assembled 3D stacks, whereas post-bond test targets the final circuit.

The activities carried out within this topic cover two main issues:

• Pre-bond test of TSVs: the electrical model of a TSV buried within the substrate of a CMOS circuit is a capacitance connected to ground (when the substrate is connected to ground). The main assumption is that a defect may affect the value of that capacitance. By measuring the variation of the capacitance's value it is possible to check whether the TSV is correctly fabricated or not.
We have proposed a method to measure the value of the capacitance based on the charge/discharge delay of the RC network containing the TSV.

• Test infrastructures for 3D stacked Integrated Circuits: testing a die before stacking it onto another die introduces the problem of a dynamic test infrastructure, where test data must be routed to a specific die depending on the fabrication step reached. Solutions proposed in the literature allow reconfiguring the test paths within the circuit based on on-the-fly requirements. We have started working on an extension of the IEEE P1687 test standard that makes use of automatic die detection based on pull-up resistors.

Memory and Microprocessor Test and Reliability

Thanks to device shrinking and the miniaturization of fabrication technology, the performance of microprocessors and memories has grown by more than five orders of magnitude in the last 30 years. With this technology trend, it is necessary to face new problems and challenges, such as reliability, transient errors, variability and aging.

In the last five years I have worked in cooperation with the Testgroup of Politecnico di Torino (Italy) to propose a new method to validate on-line the correctness of the program execution of a microprocessor. The main idea is to monitor a small set of control signals of the processor in order to identify incorrect activation sequences. This approach can detect both permanent and transient errors in the internal logic of the processor.

Concerning the test of memories, we have proposed a new approach to automatically generate test programs starting from a functional description of the possible faults in the memory. Moreover, we proposed a new methodology, based on microprocessor error-probability profiling, that aims at estimating fault-injection results without the need for a typical fault-injection setup.
The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success.
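A sketch of how the two ideas combine, assuming independent per-instruction outcomes; the opcode names and probabilities below are invented for illustration, not the characterized values from the work:

```python
from math import prod

# One-time characterization step: probability that each instruction class
# still executes correctly when hit by a soft error (illustrative values).
instr_success = {"add": 0.999, "ld": 0.98, "st": 0.97, "br": 0.995}

def program_success_probability(trace):
    """Static estimate: multiply the per-instruction success probabilities
    over the instruction sequence extracted from the control/data flow,
    assuming errors in different instructions are independent."""
    return prod(instr_success[op] for op in trace)

p = program_success_probability(["ld", "add", "add", "st"])
print(round(p, 4))  # ~0.9487
```

The static pass is what makes the estimate fast: no fault-injection campaign is run per application, only the one-time architectural characterization.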

    Multi-core devices for safety-critical systems: a survey

    Multi-core devices are envisioned to support the development of next-generation safety-critical systems, enabling the on-chip integration of functions of different criticality. This integration provides multiple potential system-level benefits such as cost, size, power, and weight reduction. However, safety certification becomes a challenge and several fundamental safety technical requirements must be addressed, such as temporal and spatial independence, reliability, and diagnostic coverage. This survey provides a categorization and overview, at different device abstraction levels (nanoscale, component, and device), of selected key research contributions that support compliance with these fundamental safety requirements.

This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P, the Basque Government under grant KK-2019-00035, and the HiPEAC Network of Excellence. The Spanish Ministry of Economy and Competitiveness has also partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717).

    Low-power CMOS circuit design for fast infrared imagers

    This master's thesis details novel circuit techniques for the design of compact, low-power and fully programmable digital CMOS readout integrated circuits, intended for high-speed IR applications operating at room temperature. The work collects and considerably extends the research started in the Final Degree Project "Tècniques de disseny CMOS per a sistemes de visió híbrids de pla focal modular", obtaining specific results in three different areas: research into the optimal FPA architecture, from the functional and physical-construction points of view; design of a complete set of basic blocks for self-biasing, input-capacitance and dark-current compensation, A/D conversion and an exclusively digital I/O interface with FPN compensation; and a real industrial application, namely the integration of three different pixel versions for PbSe IR sensors and the fabrication of monolithic and hybrid ROIC modules in standard 0.35 µm 2-poly-Si 4-metal CMOS technology, together with the electrical and preliminary optical characterization of the design libraries.

    Embedded electronic systems driven by run-time reconfigurable hardware

    This doctoral thesis addresses the design of embedded electronic systems based on run-time reconfigurable hardware technology, available through SRAM-based FPGA/SoC devices, aimed at contributing to enhancing the quality of life of human beings. This work investigates the conception of the system architecture and of the reconfiguration engine that provides the FPGA with the capability of dynamic partial reconfiguration, in order to synthesize, by means of hardware/software co-design, a given application partitioned into processing tasks that are multiplexed in time and space. Its physical implementation (silicon area, processing time, complexity, flexibility, functional density, cost and power consumption) is thus optimized in comparison with alternatives based on static hardware (MCU, DSP, GPU, ASSP, ASIC, etc.). The design flow of this technology is evaluated through the prototyping of several engineering applications (control systems, mathematical coprocessors, complex image processors, etc.), showing a level of maturity high enough for its exploitation in industry.
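The time-and-space multiplexing of hardware tasks on a reconfigurable region can be sketched as follows; this is a schematic Python illustration (task names and bitstream files are hypothetical), not the thesis' actual reconfiguration engine:

```python
class ReconfigurableRegion:
    """Models one dynamically reconfigurable FPGA region."""
    def __init__(self):
        self.loaded = None
        self.reconfig_count = 0

    def load(self, bitstream):
        # skip the costly partial reconfiguration if already loaded
        if self.loaded != bitstream:
            self.loaded = bitstream
            self.reconfig_count += 1

def run_schedule(region, tasks):
    """tasks: list of (name, partial_bitstream); time-multiplexes the
    hardware tasks on the single region and returns the execution order."""
    order = []
    for name, bitstream in tasks:
        region.load(bitstream)   # swap in this task's circuit
        order.append(name)       # then execute it
    return order

r = ReconfigurableRegion()
order = run_schedule(r, [("fir", "fir.bit"), ("fft", "fft.bit"),
                         ("fir", "fir.bit")])
print(order, r.reconfig_count)  # ['fir', 'fft', 'fir'] 3
```

The redundant-load check mirrors a real design concern: reconfiguration time is overhead, so a scheduler tries to avoid reloading a bitstream that is already resident.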

    Quantum Communication, Sensing and Measurement in Space

    Get PDF
    The main theme of the conclusions drawn for classical communication systems operating at optical or higher frequencies is that there is a well-understood performance gain in photon efficiency (bits/photon) and spectral efficiency (bits/s/Hz) by pursuing coherent-state transmitters (classical ideal laser light) coupled with novel quantum receiver systems operating near the Holevo limit (e.g., joint detection receivers). However, recent research indicates that these receivers will require nonlinear and nonclassical optical processes and components at the receiver. Consequently, the implementation complexity of Holevo-capacity-approaching receivers is not yet fully ascertained. Nonetheless, because the potential gain is significant (e.g., the projected photon efficiency and data rate of MIT Lincoln Laboratory's Lunar Lasercom Demonstration (LLCD) could be achieved with a factor-of-20 reduction in the modulation bandwidth requirement), focused research activities on ground-receiver architectures that approach the Holevo limit in space-communication links would be beneficial. The potential gains resulting from quantum-enhanced sensing systems in space applications have not been laid out as concretely as some of the other areas addressed in our study. In particular, while the study period has produced several interesting high-risk and high-payoff avenues of research, more detailed seedling-level investigations are required to fully delineate the potential return relative to the state-of-the-art. Two prominent examples are (1) improvements to pointing, acquisition and tracking systems (e.g., for optical communication systems) by way of quantum measurements, and (2) possible weak-valued measurement techniques to attain high-accuracy sensing systems for in situ or remote-sensing instruments.
While these concepts are technically sound and have very promising bench‐top demonstrations in a lab environment, they are not mature enough to realistically evaluate their performance in a space‐based application. Therefore, it is recommended that future work follow small focused efforts towards incorporating practical constraints imposed by a space environment. The space platform has been well recognized as a nearly ideal environment for some of the most precise tests of fundamental physics, and the ensuing potential of scientific advances enabled by quantum technologies is evident in our report. For example, an exciting concept that has emerged for gravity‐wave detection is that the intermediate frequency band spanning 0.01 to 10 Hz—which is inaccessible from the ground—could be accessed at unprecedented sensitivity with a space‐based interferometer that uses shorter arms relative to state‐of‐the‐art to keep the diffraction losses low, and employs frequency‐dependent squeezed light to surpass the standard quantum limit sensitivity. This offers the potential to open up a new window into the universe, revealing the behavior of compact astrophysical objects and pulsars. As another set of examples, research accomplishments in the atomic and optics fields in recent years have ushered in a number of novel clocks and sensors that can achieve unprecedented measurement precisions. These emerging technologies promise new possibilities in fundamental physics, examples of which are tests of relativistic gravity theory, universality of free fall, frame‐dragging precession, the gravitational inverse‐square law at micron scale, and new ways of gravitational wave detection with atomic inertial sensors. While the relevant technologies and their discovery potentials have been well demonstrated on the ground, there exists a large gap to space‐based systems. 
To bridge this gap and to advance fundamental‐physics exploration in space, focused investments that further mature promising technologies, such as space‐based atomic clocks and quantum sensors based on atom‐wave interferometers, are recommended. Bringing a group of experts from diverse technical backgrounds together in a productive interactive environment spurred some unanticipated innovative concepts. One promising concept is the possibility of utilizing a space‐based interferometer as a frequency reference for terrestrial precision measurements. Space‐based gravitational wave detectors depend on extraordinarily low noise in the separation between spacecraft, resulting in an ultra‐stable frequency reference that is several orders of magnitude better than the state of the art of frequency references using terrestrial technology. The next steps in developing this promising new concept are simulations and measurement of atmospheric effects that may limit performance due to non‐reciprocal phase fluctuations. In summary, this report covers a broad spectrum of possible new opportunities in space science, as well as enhancements in the performance of communication and sensing technologies, based on observing, manipulating and exploiting the quantum‐mechanical nature of our universe. In our study we identified a range of exciting new opportunities to capture the revolutionary capabilities resulting from quantum enhancements. We believe that pursuing these opportunities has the potential to positively impact the NASA mission in both the near term and in the long term. In this report we lay out the research and development paths that we believe are necessary to realize these opportunities and capitalize on the gains quantum technologies can offer

    Optical Frequency Domain Interferometry for the Characterization and Development of Complex and Tunable Photonic Integrated Circuits

This PhD thesis covers the characterization of complex photonic integrated circuits (PICs) using Optical Frequency Domain Interferometry (OFDI). OFDI has a fairly simple implementation and interrogates the device under test (DUT) by providing its time-domain response, in which the different optical paths followed by the light manifest as contributions carrying position, amplitude, and phase information. Using an OFDI setup built in our laboratory together with integrated test structures involving devices such as ring resonators and interferometers, we propose and implement techniques to obtain crucial optical parameters such as waveguide group refractive index, chromatic dispersion, polarization rotation, and propagation loss, as well as to characterize optical couplers. Direct optical phase assessment is performed in different experiments, permitting, among other applications, the characterization of on-chip heating effects. The thesis culminates with the co-integration of the OFDI interferometers with the DUT, conceived as an integrated characterization structure. The use of integrated waveguides provides high stability and adaptation to the DUT, as well as an inherent dispersion de-embedding mechanism. An analysis and experimental proof of concept are presented with an arrayed waveguide grating as the DUT on a silicon nitride platform. A considerable leap forward is then taken by proposing a novel three-way interferometer architecture that reduces measurement complexity. Extensive experimental validation is carried out using different laboratory equipment, horizontal and vertical chip coupling, and different DUTs in silicon nitride and silicon-on-insulator technologies.

Bru Orgiles, LA. (2022). Optical Frequency Domain Interferometry for the Characterization and Development of Complex and Tunable Photonic Integrated Circuits [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181635
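The core OFDI idea described above — each optical path in the DUT produces a distinct beat tone in the swept-frequency interferogram, and a Fourier transform maps those tones to delays from which parameters such as the group index follow — can be sketched numerically. This is a minimal illustrative simulation, not the thesis setup: the sweep span, path delays, amplitudes, and waveguide length are all assumed values chosen for the example.

```python
import numpy as np

# Minimal OFDI sketch (assumed parameters, not the thesis setup):
# a swept-frequency interferogram containing two optical paths yields
# beat tones whose FFT reveals each path's delay; the group index then
# follows from n_g = c * tau / L.
c = 3e8                            # speed of light (m/s)
n_points = 2**14
df = 5e12 / n_points               # uniform grid over an assumed 5 THz sweep
f = np.arange(n_points) * df       # optical frequency offset from sweep start

# Two hypothetical DUT paths relative to the reference arm (assumptions)
delays = [50e-12, 120e-12]         # 50 ps and 120 ps
amps = [1.0, 0.4]
signal = sum(a * np.cos(2 * np.pi * f * t) for a, t in zip(amps, delays))

# FFT of the windowed interferogram gives the time-domain response
response = np.abs(np.fft.rfft(signal * np.hanning(n_points)))
t_axis = np.fft.rfftfreq(n_points, d=df)   # delay axis in seconds

# Delay of the strongest contribution, and the group index it implies
# for an assumed 1 cm waveguide
tau = t_axis[np.argmax(response)]
n_g = c * tau / 0.01
```

With these numbers the strongest peak sits at the 50 ps path, giving n_g = 1.5 for the assumed 1 cm length; the delay resolution is the inverse of the sweep span (0.2 ps here), which is why a wide optical sweep is valuable in practice.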