
    Extension of the L1Calo PreProcessor System for the ATLAS Phase-I Calorimeter Trigger Upgrade

    For the Run-3 data-taking period at the Large Hadron Collider (LHC), the hardware-based Level-1 Calorimeter Trigger (L1Calo) of the ATLAS experiment was upgraded. Through new and sophisticated algorithms, the upgrade increases trigger performance in a challenging, high-pileup environment while maintaining low selection thresholds. The Tile Rear Extension (TREX) modules are the latest addition to the L1Calo PreProcessor system. Hosting state-of-the-art FPGAs and high-speed optical transceivers, the TREX modules provide digitised hadronic transverse energies from the ATLAS Tile Calorimeter to the new feature extractor (FEX) processors every 25 ns. In addition, the modules are designed to maintain compatibility with the original trigger processors. The system of 32 TREX modules has been developed, produced, and successfully installed in ATLAS. The thesis describes the functional implementation of the modules and their detailed integration and commissioning into the ATLAS detector.
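    To make the data path concrete, the sketch below models how per-tower transverse energies might be packed into a fixed-width word once per 25 ns bunch crossing. It is a minimal Python illustration: the field width, tower count, and the name pack_tower_word are assumptions for exposition, not the actual TREX data format.

        # Illustrative sketch of a TREX-like data path: digitised hadronic
        # transverse energies packed into a fixed-width word once per
        # 25 ns bunch crossing. Field widths and layout are assumptions
        # for illustration, not the actual TREX output format.

        BUNCH_CROSSING_NS = 25          # LHC bunch-crossing period
        ENERGY_BITS = 8                 # assumed width of one tower energy field

        def pack_tower_word(energies):
            """Pack a list of per-tower transverse energies into one word."""
            word = 0
            for i, et in enumerate(energies):
                saturated = min(et, (1 << ENERGY_BITS) - 1)  # clamp to field width
                word |= saturated << (i * ENERGY_BITS)
            return word

        # One hypothetical bunch crossing: four tower energies (ADC counts)
        towers = [17, 0, 203, 45]
        print(f"t = {BUNCH_CROSSING_NS} ns word: {pack_tower_word(towers):#010x}")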

    Advanced photonic and electronic systems - WILGA 2017

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists, for young scientists, for two decades. It traditionally gathers more than 350 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. Wilga is a good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over Wilga is held by the Elektronika technical journal (SEP), IJET (PAN), and Proceedings of SPIE; the latter series publishes more than 200 papers from Wilga annually. Wilga 2017 was the XL (40th) edition of the meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. This article is a digest of selected works presented during the Wilga 2017 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445.

    Novel Transistor Resistance Variation-based Physical Unclonable Functions with On-Chip Voltage-to-Digital Converter Designed for Use in Cryptographic and Authentication Applications

    Security mechanisms such as encryption, authentication, and feature activation depend on the integrity of embedded secret keys. Currently, this keying material is stored as digital bitstrings in non-volatile memory on FPGAs and ASICs. However, secrets stored this way are not secure against a determined adversary, who can use specialized probing attacks to uncover them. Furthermore, storing pre-determined bitstrings has the disadvantage that the key cannot be generated only when needed. Physical Unclonable Functions (PUFs) have emerged as a superior alternative. A PUF is an embedded Integrated Circuit (IC) structure designed to leverage random variations in the physical parameters of on-chip components as a source of entropy for generating random and unique bitstrings. PUFs also incorporate an on-chip infrastructure for measuring and digitizing these variations in order to produce bitstrings, and they are designed to reproduce a bitstring on demand, eliminating the need for on-chip storage. In this work, two novel PUFs are presented that leverage the random variations observed in the resistance of transistors. A thorough analysis of the randomness, uniqueness, and stability characteristics of the bitstrings generated by these PUFs is presented. All results are based on exhaustive testing of a set of 63 chips, each carrying numerous copies of the PUFs, fabricated in a 90 nm nine-metal-layer technology. An on-chip voltage-to-digital conversion technique is also presented and tested on the same set of 63 chips. Statistical results of the bitstrings generated by the on-chip digitization technique are compared with those of the voltage-derived bitstrings to evaluate the efficacy of the digitization technique. One of the most important quality metrics of the PUF and the on-chip voltage-to-digital converter, stability, is evaluated through lengthy temperature-voltage testing over the range of -40 °C to +85 °C and supply-voltage variations of +/- 10% of nominal. The stability of both the bitstrings and the underlying physical parameters is evaluated using data collected from the hardware experiments, supported by software simulations of the devices. Several novel techniques are proposed and successfully tested that address the known instability of PUFs under changing temperature and voltage conditions, making these PUFs more resilient to the conditions faced in practical use. Lastly, an analysis of the stability under temperature and voltage variations of a third PUF, which leverages random variations in the resistance of the metal wires in the power and ground grids of a chip, is also presented.
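    The quality metrics discussed above (uniqueness across chips, stability across re-measurements) can be illustrated with a toy software model. The sketch below is a Python illustration under stated assumptions: resistances drawn from an arbitrary Gaussian, bits derived by comparing adjacent elements, and a noise term standing in for temperature-voltage effects. None of these parameters reflect the fabricated 90 nm chips.

        # Toy model of a resistance-variation PUF (parameters are illustrative
        # assumptions, not those of the fabricated chips). Each "chip" carries
        # an array of transistor resistances with random process variation;
        # a bit is produced by comparing adjacent resistances, and noise
        # models measurement instability across environmental corners.
        import random

        N_CHIPS, N_BITS = 63, 256
        NOISE = 0.01  # assumed measurement noise (std. dev.)

        def make_chip():
            # one resistance per comparison element
            return [random.gauss(1.0, 0.05) for _ in range(2 * N_BITS)]

        def bitstring(chip, noise=0.0):
            measured = [r + random.gauss(0.0, noise) for r in chip]
            return [int(measured[2*i] > measured[2*i + 1]) for i in range(N_BITS)]

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        chips = [make_chip() for _ in range(N_CHIPS)]
        keys = [bitstring(c) for c in chips]

        # Uniqueness: mean inter-chip Hamming distance (ideal ~50% of bits)
        pairs = [(i, j) for i in range(N_CHIPS) for j in range(i + 1, N_CHIPS)]
        inter = sum(hamming(keys[i], keys[j]) for i, j in pairs) / len(pairs)

        # Stability: bits flipped when the same chip is re-measured with noise
        intra = sum(hamming(keys[i], bitstring(chips[i], NOISE))
                    for i in range(N_CHIPS)) / N_CHIPS

        print(f"uniqueness: {100 * inter / N_BITS:.1f}% of bits differ between chips")
        print(f"instability: {100 * intra / N_BITS:.2f}% of bits flip on re-measurement")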

    MakerFluidics: low cost microfluidics for synthetic biology

    Recent advancements in multilayer, multicellular, genetic logic circuits often rely on manual intervention throughout the computation cycle and on orthogonal signals for each chemical “wire”. These constraints can prevent genetic circuits from scaling. Microfluidic devices can be used to mitigate these constraints. However, continuous-flow microfluidics are largely designed through artisanal processes involving hand-drawn features and visual design rule checks: processes that are likewise inextensible. Additionally, continuous-flow microfluidic routing is only a consideration during chip design; once built, the routing structure becomes “frozen in silicon,” or for many microfluidic chips “frozen in polydimethylsiloxane (PDMS),” and any changes to fluid routing often require an entirely new device and control infrastructure. The cost of fabricating and controlling a new device is high in both time and money; attempts to reduce one cost are generally paid for by increases in the other. This work has three main thrusts: to create a microfluidic fabrication framework, called MakerFluidics, that lowers the barrier to entry for designing and fabricating microfluidics in a manner amenable to automation; to prove this methodology can design, fabricate, and control complex and novel microfluidic devices; and to demonstrate that the methodology can be used to solve biologically relevant problems. Utilizing accessible technologies, rapid prototyping, and scalable design practices, the MakerFluidics framework has demonstrated its ability to design, fabricate, and control novel, complex, and scalable microfluidic devices. This was proven through the development of a reconfigurable, continuous-flow routing fabric driven by a modular, scalable primitive called a transposer. In addition to creating complex microfluidic networks, MakerFluidics was deployed in support of cutting-edge, application-focused research at the Charles Stark Draper Laboratory. Informed by a design-of-experiments approach using the parametric rapid prototyping capabilities made possible by MakerFluidics, a plastic blood–bacteria separation device was optimized, demonstrating that the new device geometry can separate bacteria from blood while operating at a 275% greater flow rate and reducing the power requirement by 82% for equivalent separation performance compared with the state of the art. Ultimately, MakerFluidics demonstrated the ability to design, fabricate, and control complex and practical microfluidic devices while lowering the barrier to entry to continuous-flow microfluidics, thus democratizing cutting-edge technology beyond a handful of well-resourced and specialized labs.
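    As a rough illustration of a transposer-driven routing fabric, the Python sketch below treats each transposer as a 2x2 switching element with pass-through and swap states. That semantics is an assumption inferred from the description of a modular routing primitive; the actual MakerFluidics transposer may differ.

        # Toy model of a reconfigurable continuous-flow routing fabric built
        # from 2x2 "transposer" primitives. Modeling a transposer as a switch
        # with BAR (pass-through) and CROSS (swap) states is an assumption
        # for illustration only.

        BAR, CROSS = 0, 1

        def route(streams, stages):
            """Push labelled fluid streams through columns of transposers.

            streams: list of stream labels, one per channel.
            stages:  list of columns; each column is a list of (i, state)
                     pairs, where the transposer spans channels i and i+1.
            """
            lanes = list(streams)
            for column in stages:
                for i, state in column:
                    if state == CROSS:
                        lanes[i], lanes[i + 1] = lanes[i + 1], lanes[i]
            return lanes

        # Reconfigure routing without fabricating a new chip: the same
        # fabric, two different switch settings.
        inputs = ["sample", "buffer", "reagent", "waste"]
        print(route(inputs, [[(0, CROSS), (2, BAR)]]))
        print(route(inputs, [[(1, CROSS)], [(0, CROSS), (2, CROSS)]]))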

    Introduction to Logic Circuits & Logic Design with VHDL

    The overall goal of this book is to fill a void that has appeared in the instruction of digital circuits over the past decade due to the rapid abstraction of system design. Up until the mid-1980s, digital circuits were designed using classical techniques, which relied heavily on manual design practices for the synthesis, minimization, and interfacing of digital systems. Corresponding to this design style, academic textbooks were developed that taught classical digital design techniques. Around 1990, large-scale digital systems began being designed using hardware description languages (HDLs) and automated synthesis tools, and broad-scale adoption of this modern design approach spread through the industry during that decade. Around 2000, hardware description languages and the modern digital design approach began to be taught in universities, mainly at the senior and graduate level. There were a variety of reasons that the modern digital design approach did not penetrate the lower levels of academia during this time. First, the design and simulation tools were difficult to use and overwhelmed freshman and sophomore students. Second, implementing the designs in a laboratory setting was infeasible: the modern design tools at the time targeted custom integrated circuits, which are cost- and time-prohibitive to implement in a university setting. Between 2000 and 2005, rapid advances in programmable logic and design tools allowed the modern digital design approach to be implemented in a university setting, even in lower-level courses. This allowed students to learn the modern design approach based on HDLs and prototype their designs in real hardware, mainly field-programmable gate arrays (FPGAs). This spurred an abundance of textbooks teaching hardware description languages and higher levels of design abstraction, a trend that has continued to this day. While abstraction is a critical tool for engineering design, the rapid movement toward teaching only the modern digital design techniques has left a void for freshman- and sophomore-level courses in digital circuitry. Legacy textbooks that teach the classical design approach are outdated and do not contain sufficient coverage of HDLs to prepare students for follow-on classes. Newer textbooks that teach the modern digital design approach move immediately into high-level behavioral modeling with minimal or no coverage of the underlying hardware used to implement the systems. As a result, students are not given the resources to understand the fundamental hardware theory that lies beneath the modern abstraction, such as interfacing, gate-level implementation, and technology optimization. Students moving too rapidly into high levels of abstraction have little understanding of what is going on when they click the “compile and synthesize” button of their design tool. This leads to graduates who can model a breadth of different systems in an HDL but have no depth of understanding of how those systems are implemented in hardware, which becomes problematic when an issue arises in a real design and there is no foundational knowledge to fall back on in order to debug the problem.
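    The abstraction gap described above can be made concrete with a small example: the same 2-to-1 multiplexer written behaviorally and at gate level. Python is used here purely as a vehicle for illustration (the book itself uses VHDL); the point is the correspondence between the two descriptions.

        # The same 2-to-1 multiplexer at two levels of abstraction. The
        # behavioral form is what a student writes in an HDL; the gate-level
        # form is roughly what "compile and synthesize" reduces it to.
        # (Python used only as a vehicle for illustration.)

        def mux_behavioral(a, b, sel):
            """High-level model: 'if sel then b else a'."""
            return b if sel else a

        def mux_gate_level(a, b, sel):
            """Gate-level model: y = (a AND NOT sel) OR (b AND sel)."""
            not_sel = 1 - sel                   # inverter
            return (a & not_sel) | (b & sel)    # two AND gates into an OR

        # The two descriptions agree on every input combination.
        for a in (0, 1):
            for b in (0, 1):
                for sel in (0, 1):
                    assert mux_behavioral(a, b, sel) == mux_gate_level(a, b, sel)
        print("behavioral and gate-level mux agree on all 8 input cases")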

    Introduction to Logic Circuits & Logic Design with Verilog

    The overall goal of this book is to fill a void that has appeared in the instruction of digital circuits over the past decade due to the rapid abstraction of system design. Up until the mid-1980s, digital circuits were designed using classical techniques, which relied heavily on manual design practices for the synthesis, minimization, and interfacing of digital systems. Corresponding to this design style, academic textbooks were developed that taught classical digital design techniques. Around 1990, large-scale digital systems began being designed using hardware description languages (HDLs) and automated synthesis tools, and broad-scale adoption of this modern design approach spread through the industry during that decade. Around 2000, hardware description languages and the modern digital design approach began to be taught in universities, mainly at the senior and graduate level. There were a variety of reasons that the modern digital design approach did not penetrate the lower levels of academia during this time. First, the design and simulation tools were difficult to use and overwhelmed freshman and sophomore students. Second, implementing the designs in a laboratory setting was infeasible: the modern design tools at the time targeted custom integrated circuits, which are cost- and time-prohibitive to implement in a university setting. Between 2000 and 2005, rapid advances in programmable logic and design tools allowed the modern digital design approach to be implemented in a university setting, even in lower-level courses. This allowed students to learn the modern design approach based on HDLs and prototype their designs in real hardware, mainly field-programmable gate arrays (FPGAs). This spurred an abundance of textbooks teaching hardware description languages and higher levels of design abstraction, a trend that has continued to this day. While abstraction is a critical tool for engineering design, the rapid movement toward teaching only the modern digital design techniques has left a void for freshman- and sophomore-level courses in digital circuitry. Legacy textbooks that teach the classical design approach are outdated and do not contain sufficient coverage of HDLs to prepare students for follow-on classes. Newer textbooks that teach the modern digital design approach move immediately into high-level behavioral modeling with minimal or no coverage of the underlying hardware used to implement the systems. As a result, students are not given the resources to understand the fundamental hardware theory that lies beneath the modern abstraction, such as interfacing, gate-level implementation, and technology optimization. Students moving too rapidly into high levels of abstraction have little understanding of what is going on when they click the “compile and synthesize” button of their design tool. This leads to graduates who can model a breadth of different systems in an HDL but have no depth of understanding of how those systems are implemented in hardware, which becomes problematic when an issue arises in a real design and there is no foundational knowledge to fall back on in order to debug the problem.