18 research outputs found

    Doctor of Philosophy

    Recent breakthroughs in silicon photonics technology are enabling the integration of optical devices into silicon-based semiconductor processes. Photonics technology enables high-speed, high-bandwidth, and high-fidelity communications at the chip scale, an important development in an increasingly communications-oriented semiconductor world. Significant developments in silicon photonic manufacturing and integration are also enabling investigations into applications beyond traditional telecom: sensing, filtering, signal processing, quantum technology, and even optical computing. In effect, we are now seeing a convergence of communications and computation, where the traditional roles of optics and microelectronics are becoming blurred. As applications for opto-electronic integrated circuits (OEICs) are developed and manufacturing capabilities expand, design support is necessary to fully exploit the potential of this optics technology. Such design support for moving beyond custom design to automated synthesis and optimization is not well developed. Scalability requires abstractions, which in turn enable and require the use of optimization algorithms and design methodology flows. Design automation represents an opportunity to take OEIC design to a larger scale, facilitating design-space exploration and laying the foundation for current and future optical applications, thus fully realizing the potential of this technology. This dissertation proposes design automation for integrated optic system design. Using a building-block model for optical devices, we provide an EDA-inspired design flow and methodologies for optical design automation. Underlying these flows and methodologies are new supporting techniques in behavioral and physical synthesis, as well as device-resynthesis techniques for thermal-aware system integration. We also provide modeling for optical devices and determine optimization and constraint parameters that guide the automation techniques. Our techniques and methodologies are then applied to the design and optimization of optical circuits and devices. Experimental results are analyzed to evaluate their efficacy. We conclude with discussions on the contributions and limitations of the approaches in the context of optical design automation, and describe the tremendous opportunities for future research in design automation for integrated optics.
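
    To illustrate the kind of building-block abstraction such a design flow rests on, the short Python sketch below models optical devices as parameterised blocks with an area footprint, insertion loss and thermal sensitivity, and composes them into a simple netlist with first-order loss and thermal-detuning estimates. The class names, parameters and numbers are hypothetical illustrations and are not taken from the dissertation.

# Illustrative sketch of a building-block model for integrated-optic design
# automation. All names and parameter values are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class OpticalBlock:
    name: str
    footprint_um2: float           # physical area on the chip
    insertion_loss_db: float       # optical loss through the block
    thermal_shift_nm_per_K: float  # wavelength drift per kelvin

@dataclass
class OpticalNetlist:
    blocks: list = field(default_factory=list)
    connections: list = field(default_factory=list)  # (src, dst) name pairs

    def add(self, block):
        self.blocks.append(block)
        return block

    def connect(self, src, dst):
        self.connections.append((src.name, dst.name))

    def total_loss_db(self):
        # First-order estimate: sum of per-block insertion losses.
        return sum(b.insertion_loss_db for b in self.blocks)

    def thermal_detuning_nm(self, delta_T_K):
        # Worst-case resonance drift under a uniform temperature rise.
        return max(b.thermal_shift_nm_per_K * delta_T_K for b in self.blocks)

# Example: a tiny link made of a modulator, a ring filter, and a photodetector.
net = OpticalNetlist()
mod = net.add(OpticalBlock("modulator", 5000.0, 3.0, 0.02))
ring = net.add(OpticalBlock("ring_filter", 800.0, 0.5, 0.08))
pd = net.add(OpticalBlock("photodetector", 1200.0, 1.0, 0.01))
net.connect(mod, ring)
net.connect(ring, pd)

print("total insertion loss [dB]:", net.total_loss_db())
print("worst-case detuning at +10 K [nm]:", net.thermal_detuning_nm(10.0))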

    The Customizable Virtual FPGA: Generation, System Integration and Configuration of Application-Specific Heterogeneous FPGA Architectures

    Over the past three decades, the development of Field Programmable Gate Arrays (FPGAs) has been strongly influenced by Moore's law, process technology (scaling), and commercial markets. On the one hand, state-of-the-art FPGAs are moving closer to general-purpose devices; on the other hand, as FPGAs increasingly displace application-specific integrated circuits (ASICs) from their traditional domains, efficiency expectations are rising. With the end of Dennard scaling, efficiency gains can no longer rely on technology scaling alone. These facets, together with trends towards reconfigurable system-on-chips (SoCs) and new low-power applications such as cyber-physical systems and the Internet of Things, call for better customization of the target FPGAs. Beyond the trend towards mainstream use of FPGAs in everyday products and services, the recent move to deploy FPGAs in data centers and cloud services in particular makes it necessary to guarantee immediate portability of applications across current and future FPGA devices. In this context, hardware virtualization can be a seamless means of achieving platform independence and portability. Admittedly, the goals of customization and virtualization are actually in conflict, since customization is intended to increase efficiency, whereas virtualization adds area overhead. However, virtualization not only benefits from customization, it also adds flexibility, since the architecture can be changed at any time. This property can be exploited for adaptive systems. Neither the customization nor the virtualization of FPGA architectures has so far received much attention in industry. Despite some existing academic work, these techniques can still be considered largely unexplored and are emerging research areas. The main goal of this work is the generation of FPGA architectures tailored for efficient adaptation to the application. In contrast to the usual approach with commercial FPGAs, where the FPGA architecture is taken as given and the application is mapped onto the available resources, this work follows a new paradigm in which the application or application class is fixed and the target architecture is tailored for efficient adaptation to it. This results in customized application-specific FPGAs. The three pillars of this work are the aspects of virtualization, customization, and the framework. The central element is a largely parameterizable virtual FPGA architecture called the V-FPGA, whose primary target is to be mapped onto any commercial FPGA while applications are executed on the virtual layer. This provides portability and migration down to the bitstream level, since the specification of the virtual layer remains constant while the physical platform can be exchanged. Furthermore, this technique is used to enable dynamic and partial reconfiguration on platforms that do not natively support it. In addition to virtualization, the V-FPGA architecture is also intended to be integrated as an embedded FPGA into an ASIC, offering efficient yet flexible system-on-chip solutions.
Therefore, target technology mapping methods are addressed for both virtualization and physical implementation, and an example of a physical implementation in a 45 nm standard-cell approach is presented. The highly flexible V-FPGA architecture can be customized with more than 20 parameters, including LUT size, clustering, 3D stacking, routing structure, and more. The impact of these parameters on the area and performance of the architecture is investigated, and an extensive analysis of over 1400 benchmark runs shows a high parameter sensitivity, with deviations of up to ±95.9% in area and ±78.1% in performance, demonstrating the high importance of customization for efficiency. To adapt the parameters systematically to the needs of the application, a parametric design-space exploration method based on suitable area and timing models is proposed. One challenge of customized architectures is the design effort and the need for customized tools. Therefore, this work includes a framework for architecture generation, design-space exploration, application mapping, and evaluation. Above all, the V-FPGA is designed in fully synthesizable, generic Very High Speed Integrated Circuit Hardware Description Language (VHDL) code that is highly flexible and eliminates the need for external code generators. System designers can benefit from various kinds of generic SoC architecture templates to reduce development time. All necessary design steps for developing applications and mapping them onto the V-FPGA are supported by a design automation tool flow that leverages a collection of existing commercial and academic tools, adapted through suitable models and complemented by a new tool called the V-FPGA-Explorer. This new tool not only acts as the back-end tool for application mapping onto the V-FPGA, but is also a graphical configuration and layout editor, a bitstream generator, an architecture file generator for the place & route tools, a script generator, and a testbench generator. A special feature is the support for just-in-time compilation with fast algorithms for in-system application mapping. The work concludes with several use cases from the areas of industrial process automation, medical imaging, adaptive systems, and education, in which the V-FPGA is used.
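
    As a rough illustration of the parametric design-space exploration described above, the Python sketch below sweeps two architecture parameters (LUT size and cluster size) against simple placeholder area and delay models and reports the lowest-cost configuration. The models, parameter ranges and cost weights are invented for illustration and do not reproduce the V-FPGA framework's actual area and timing models.

# Illustrative parametric design-space exploration over virtual-FPGA
# architecture parameters. The area/delay models below are placeholders,
# not the models used by the V-FPGA framework.
from itertools import product

def area_model(lut_size, cluster_size, num_luts=1000):
    # A k-input LUT needs 2**k configuration bits; clustering adds local
    # interconnect that grows roughly quadratically with cluster size.
    lut_area = (2 ** lut_size) * num_luts
    intra_cluster_mux = cluster_size ** 2 * (num_luts / cluster_size)
    return lut_area + intra_cluster_mux

def delay_model(lut_size, cluster_size, depth=12):
    # Larger LUTs absorb more logic (fewer levels) but are slower per level;
    # larger clusters shorten inter-cluster routing.
    levels = depth / max(1.0, lut_size / 4.0)
    per_level = 0.5 + 0.1 * lut_size + 2.0 / cluster_size
    return levels * per_level

def explore(lut_sizes, cluster_sizes, area_weight=1.0, delay_weight=1000.0):
    best = None
    for k, n in product(lut_sizes, cluster_sizes):
        cost = area_weight * area_model(k, n) + delay_weight * delay_model(k, n)
        if best is None or cost < best[0]:
            best = (cost, k, n)
    return best

cost, lut_size, cluster_size = explore(lut_sizes=[3, 4, 5, 6], cluster_sizes=[2, 4, 8, 10])
print(f"selected LUT size K={lut_size}, cluster size N={cluster_size}, cost={cost:.1f}")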

    Provably Trustworthy and Secure Hardware Design with Low Overhead

    Due to the globalization of IC design in the semiconductor industry and the outsourcing of chip manufacturing, third-party intellectual property (3PIP) cores have become vulnerable to IP piracy, reverse engineering, counterfeit ICs, and hardware Trojans. To thwart such attacks, ICs can be protected using logic encryption techniques; however, strongly resilient techniques incur significant overheads. Side-channel attacks (SCAs) further complicate matters by introducing potential attacks after fabrication. One of the most severe SCAs is the power analysis (PA) attack, in which an attacker observes the power variations of the device and analyzes them to extract the secret key. PA attacks can be mitigated by adding substantial extra hardware; however, the overheads of such solutions can render them impractical, especially under power and area constraints. In our first approach, we present two techniques to prevent conventional attacks. The first is based on inserting a number of multiplexers (MUXes) equal to half or all of the number of output bits. In the second technique, we first design polymorphic logic gates (PLGs) using silicon nanowire field-effect transistors (SiNW FETs) and then replace some logic gates in the original design with their SiNW FET-based PLG counterparts. In our second approach, we use SiNW FETs to produce obfuscated ICs that are resistant to advanced reverse engineering attacks. Our method is based on designing a small block whose output is untraceable, namely URSAT. Since URSAT may not offer very strong resilience against the combined AppSAT-removal attack, S-URSAT is realized using only CMOS logic gates, which increases the security level of the design to robustly thwart all existing attacks. In our third approach, we present the use of all-spin logic devices (ASLDs) to produce secure and resilient circuits that withstand IC attacks (during fabrication) and PA attacks (after fabrication). First, we show that ASLDs have unique features that can be used to prevent PA and IC attacks. For all three approaches, we evaluate each design based on its performance overheads and security guarantees.
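
    For readers unfamiliar with logic encryption, the Python sketch below shows the general idea on a toy netlist: key-controlled gates are inserted so that the circuit computes the intended function only when the correct key is applied, while a wrong key corrupts the outputs. It is a generic illustration of key-based locking, not the dissertation's specific MUX-insertion scheme or its SiNW FET-based polymorphic gates.

# Toy illustration of logic encryption: XOR key gates inserted on internal
# wires corrupt the output unless the correct key is supplied. This is a
# generic example, not the specific technique evaluated in the dissertation.

def original_circuit(a, b, c):
    # Unprotected design: out = (a AND b) OR c
    w1 = a & b
    return w1 | c

def locked_circuit(a, b, c, key):
    # Same design with two key-controlled XOR gates inserted.
    # Correct key = (0, 1): the XORs then cancel out and restore the logic.
    k0, k1 = key
    w1 = (a & b) ^ k0           # key gate on internal wire w1
    out = (w1 | c) ^ (k1 ^ 1)   # key gate on the output
    return out

correct_key, wrong_key = (0, 1), (1, 0)
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

assert all(locked_circuit(a, b, c, correct_key) == original_circuit(a, b, c)
           for a, b, c in inputs)
mismatches = sum(locked_circuit(a, b, c, wrong_key) != original_circuit(a, b, c)
                 for a, b, c in inputs)
print(f"wrong key corrupts {mismatches} of {len(inputs)} input patterns")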

    Integrated Circuits/Microchips

    With the world marching inexorably towards the fourth industrial revolution (IR 4.0), we are now embracing lives with artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR) and 5G technology. Wherever we are, whatever we are doing, there are electronic devices that we rely on indispensably. While some of these technologies, such as those fueled by smart, autonomous systems, seem strikingly new, others have existed for quite a while. These devices range from simple home appliances and entertainment media to complex aeronautical instruments. Clearly, the daily lives of mankind today are interwoven seamlessly with electronics. Surprising as it may seem, the cornerstone that empowers these electronic devices is nothing more than a diminutive semiconductor block. More colloquially referred to as a Very-Large-Scale Integration (VLSI) chip, an integrated circuit (IC) chip, or simply a microchip, this semiconductor block, approximately the size of a grain of rice, is composed of millions to billions of transistors. The transistors are interconnected in such a way that electrical circuitry for certain applications can be realized. Some of these chips serve specific permanent applications and are known as Application-Specific Integrated Circuits (ASICs), while others are computing processors that can be programmed for diverse applications. A computer processor, together with its supporting hardware and user interfaces, is known as an embedded system. In this book, a variety of topics related to microchips are extensively illustrated. The topics encompass the physics of the microchip device, as well as its design methods and applications.

    A fully integrated CMOS microelectrode system for electrochemistry

    Electroanalysis has proven to be one of the most widely used technologies for point-of-care devices. Owing to the direct recording of the intrinsic properties of biochemical functions, the field has been involved in the study of biology since electrochemistry's conception in the 1800s. With the advent of microelectronics, humanity has welcomed self-monitoring portable devices such as the glucose sensor into its everyday routine. The sensitivity of amperometry and voltammetry has been enhanced by the use of microelectrodes. Their arrangement into microelectrode arrays (MEAs) was a step forward in sensing biomarkers, DNA and pathogens at a multitude of sites. Integrating these devices and their operating circuits monolithically on CMOS miniaturised these systems even further, improved the noise response and enabled parallel data collection. Adding microfluidics to this type of device led to the birth of Lab-on-a-Chip technology. Despite the technology's inclusion in many bioanalytical instruments, there is still room for enhancing its capabilities and application possibilities. Even though research has been conducted on the selective preparation of microelectrodes with different materials in a CMOS MEA to sense several biomarkers, limited effort has been devoted to improving the parallel electroanalytical capabilities of these devices. Living and chemical materials tend to alter their composition over time; therefore, analysing a biochemical sample with as many electroanalytical methods as possible simultaneously could offer a more complete diagnostic snapshot. This thesis describes the development of a CMOS Lab-on-a-Chip device composed of many electrochemical cells, capable of performing simultaneous amperometric and voltammetric measurements in the same fluidic chamber. The chip is named an electrochemical cell microarray (ECM) and contains an MEA controlled by independent integrated potentiostats. The key stages in this work were: to investigate techniques for electrochemical cell isolation through simulations; to design and implement a CMOS ECM ASIC; to prepare the CMOS chip for use in an electrochemical environment and encapsulate it to work with liquids; to test and characterise the CMOS chip housed in an experimental system; and to make parallel measurements by applying different electroanalytical methods simultaneously. It is envisaged that results from the system could be combined with multivariate analysis to describe a molecular profile rather than only concentration levels. Simulations to determine the microelectrode structure and the potentiostat design capable of constructing isolated electrochemical cells were made using the Cadence CAD software package. The electrochemical environment and the microelectrode structure were modelled using a netlist of resistors and capacitors. The netlist was introduced into Cadence and simulated with potentiostat designs to produce 3-D potential distribution and electric field intensity maps of the chemical volume. The combination of a coaxial microelectrode structure and a fully differential potentiostat was found to result in independent electrochemical cells isolated from each other. A 4 × 4 integrated ECM, controlled by on-chip fully differential potentiostats and made up of a 16 × 16 working-electrode MEA (laid out with the coaxial structure), was designed in an unmodified 0.35 ÎŒm CMOS process.
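
    To give a flavour of the resistor-capacitor modelling mentioned above, the Python sketch below represents a single electrode-electrolyte interface as a charge-transfer resistance in parallel with a double-layer capacitance, in series with the solution resistance, and evaluates its impedance over frequency. The component values are arbitrary and the model is a textbook Randles-style simplification, not the netlist used in the Cadence simulations.

# Minimal RC model of an electrode-electrolyte interface (Randles-style):
# solution resistance in series with (charge-transfer resistance || double-
# layer capacitance). Component values are arbitrary illustrative numbers.
import math

R_S = 10e3      # solution resistance [ohm]
R_CT = 1e6      # charge-transfer resistance [ohm]
C_DL = 100e-12  # double-layer capacitance [F]

def interface_impedance(freq_hz):
    """Complex impedance of the simplified cell model at a given frequency."""
    omega = 2 * math.pi * freq_hz
    z_cdl = 1 / (1j * omega * C_DL)              # capacitor impedance
    z_parallel = (R_CT * z_cdl) / (R_CT + z_cdl)  # R_CT || C_DL
    return R_S + z_parallel

for f in (1e1, 1e3, 1e5):
    z = interface_impedance(f)
    phase = math.degrees(math.atan2(z.imag, z.real))
    print(f"f = {f:8.0f} Hz  |Z| = {abs(z):12.1f} ohm  phase = {phase:6.1f} deg")
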
The working electrodes were connected to a circuit capable of multiplexing them during a voltammetric measurement while maintaining their diffusion layers during stand-by time. Two readout methods were integrated: a simple resistor for an analogue readout and a discrete-time digital current-to-frequency charge-sensitive amplifier. The working electrodes were designed with a 20 ÎŒm side length, while the counter and reference electrodes had an 11 ÎŒm width. The microelectrodes were designed using the aluminium top metal layer of the CMOS process. The chips were received from the foundry unmodified and passivated, so they were post-processed with photolithographic steps. The passivation layer had to be thinned over the MEA and completely removed on top of the microelectrodes. The openings were made 25% smaller than the top-metal electrode size to ensure full coverage of the easily corroded Al metal. Two batches of chips were prepared: one with biocompatible Au on all the microelectrodes, and one altered with Pd on the counter and Ag on the reference electrodes. The chips were packaged in ceramic pin grid array packages and encapsulated using chemically resistant materials. Electroplating was verified to deposit Au with increased roughness on the microelectrodes, and a cleaning step was performed prior to electrochemical experiments. An experimental setup containing a PCB, a PXIe system by National Instruments, and software programs coded for use with the ECM was prepared. The programs were written to conduct various voltammetric and amperometric methods as well as to analyse the results. The first batch of post-processed encapsulated chips was used for characterisation and experimental measurements. The on-chip potentiostat was verified to perform comparably to a commercial potentiostat, tested with microelectrode samples prepared to mimic the coaxial structure of the ECM. The on-chip potentiostat's fully differential design achieved a 5.2 V potential window, high for a CMOS device. An experiment was also devised in which a 12.3% cell-to-cell electrochemical cross-talk was found. The system was characterised with a 150 kHz bandwidth, enabling fast-scan cyclic voltammetry (CV) experiments to be performed. A relatively high 1.39 nA limit of detection was recorded compared to other CMOS MEAs, which is nevertheless adequate for possible applications of the ECM. Due to the lack of a current-polarity output, the digital current readout was only suitable for amperometric measurements, so the analogue readout was used for the rest of the measurements. The capability of the ECM system to perform independent parallel electroanalytical measurements was demonstrated with three different experimental techniques. The first was a new voltammetric technique made possible by the ECM's unique characteristics: named multiplexed cyclic voltammetry, it increased the acquisition speed of a voltammogram through a parallel potential scan on all the electrochemical cells. The second technique measured a chemical solution containing 5 mM ferrocene with constant-potential amperometry, staircase cyclic voltammetry, normal pulse voltammetry, and differential pulse voltammetry simultaneously on different electrochemical cells. Lastly, a chemical solution with two analytes (ferrocene and decamethylferrocene) was prepared, and they were sensed separately with constant-potential amperometry and staircase cyclic voltammetry on different cells. The potential settings of each electrochemical cell were adjusted to detect its respective analyte.
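
    As a simple illustration of how independent electrochemical cells can run different electroanalytical methods at the same time, the Python sketch below generates a staircase cyclic voltammetry waveform and constant-potential amperometry waveforms and assigns them to different cells of a 4 × 4 array. The step sizes, potentials and timings are arbitrary example values and do not correspond to the settings used with the ECM chip.

# Illustrative generation of per-cell potential waveforms for parallel
# electroanalysis on a 4 x 4 electrochemical cell microarray. All voltages,
# step sizes, and timings are arbitrary example values.

def staircase_cv(e_start, e_vertex, e_step, samples_per_step=10):
    """Staircase cyclic voltammetry: stepped ramp up to the vertex and back."""
    n_up = int(round((e_vertex - e_start) / e_step))
    forward = [e_start + i * e_step for i in range(n_up + 1)]
    waveform = forward + forward[-2::-1]  # sweep up, then back down
    return [e for e in waveform for _ in range(samples_per_step)]

def constant_potential(e_hold, n_samples):
    """Constant-potential amperometry: hold a fixed potential."""
    return [e_hold] * n_samples

# Assign a method to each of the 16 cells: staircase CV on the first row,
# constant-potential amperometry (at row-dependent potentials) elsewhere.
cv = staircase_cv(e_start=-0.2, e_vertex=0.6, e_step=0.01)
cell_waveforms = {}
for row in range(4):
    for col in range(4):
        if row == 0:
            cell_waveforms[(row, col)] = cv
        else:
            cell_waveforms[(row, col)] = constant_potential(0.3 + 0.1 * row, len(cv))

print("samples per cell:", len(cv), "| cells configured:", len(cell_waveforms))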

    Topical Workshop on Electronics for Particle Physics

    The purpose of the workshop was to present results and original concepts for electronics research and development relevant to particle physics experiments, as well as accelerator and beam instrumentation at future facilities; to review the status of electronics for the LHC experiments; to identify and encourage common efforts for the development of electronics; and to promote information exchange and collaboration in the relevant engineering and physics communities.