15 research outputs found

    Advanced photonic and electronic systems WILGA 2018

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers around 400 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. WILGA is a very good digest of Ph.D. work carried out at technical universities in electronics, photonics, and information sciences throughout Poland and some neighboring countries. Publishing patronage over WILGA is held by the Elektronika technical journal (SEP), IJET, and Proceedings of SPIE; the latter worldwide editorial series publishes more than 200 papers from WILGA annually. WILGA 2018 was the XLII edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. This article is a digest of selected works presented during the WILGA 2018 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445; WILGA 2018 works were published in Proc. SPIE vol. 10808.

    Hardware Implementation of Statecharts for FPGA-based Control in Scientific Facilities

    Date of Conference: 20-22 Nov. 2019; Conference Location: Bilbao, Spain. [Abstract] This paper addresses the problem of generating complex synchronization patterns using automated tools. The work was originally motivated by the need for fast and jitter-free synchronization in scientific facilities, where a large number of sensors and actuators must be controlled at exactly the right time in a variety of situations. Programmable processors cannot meet the real-time requirements, forcing the use of dedicated circuits to produce and transmit the control signals. Designing application-specific hardware by hand is a slow and error-prone task. Hence, a set of tools is required that allows control systems to be specified clearly and efficiently and synthesizable HDL (hardware description language) code to be produced automatically. Statechart diagrams have been selected as the input method, and this work focuses on how to translate those diagrams into HDL code. We present a tool that analyzes a Statecharts specification and implements the required control systems using FPGAs. A number of solutions are provided to deal with multiple triggering events and concurrent super-states. An alternative microprogrammed implementation is also proposed. This work was funded in part by the Ministry of Economy and Competitiveness of Spain, Project TIN2016-75845-P (AEI/FEDER, UE), and by Xunta de Galicia and FEDER funds of the EU under the Consolidation Program of Competitive Reference Groups (ED431C 2017/04) and under the Centro Singular de Investigación de Galicia accreditation 2016-2019 (ED431G/01).
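
    To make the translation step concrete, the sketch below generates a synthesizable VHDL state machine from a flat transition table. It is a minimal illustration of the kind of output such a tool produces; the toy statechart, the emit_vhdl helper, and the entity name sync_ctrl are hypothetical, not the paper's actual format or API, and the hierarchical and concurrent features of full Statecharts are omitted.

```python
# Minimal sketch of statechart-to-HDL translation, in the spirit of the
# tool described above. All names and formats here are illustrative.

# A flat toy statechart: states, and transitions keyed by (state, event).
STATES = ["IDLE", "ARMED", "FIRING"]
TRANSITIONS = {
    ("IDLE", "arm"): "ARMED",
    ("ARMED", "trigger"): "FIRING",
    ("FIRING", "done"): "IDLE",
}
EVENTS = sorted({ev for (_, ev) in TRANSITIONS})

def emit_vhdl(entity="sync_ctrl"):
    """Emit a synthesizable single-process FSM from the transition table."""
    lines = [
        "library ieee; use ieee.std_logic_1164.all;",
        f"entity {entity} is port (",
        "  clk, rst : in std_logic;",
        "  " + "; ".join(f"{ev} : in std_logic" for ev in EVENTS) + ");",
        f"end {entity};",
        f"architecture rtl of {entity} is",
        "  type state_t is (" + ", ".join(STATES) + ");",
        "  signal state : state_t;",
        "begin",
        "  process(clk) begin",
        "    if rising_edge(clk) then",
        "      if rst = '1' then state <= " + STATES[0] + ";",
        "      else",
        "        case state is",
    ]
    for s in STATES:
        # one if/elsif chain per state, one branch per outgoing transition
        arms = [f"if {ev} = '1' then state <= {dst};"
                for (src, ev), dst in TRANSITIONS.items() if src == s]
        body = (" els".join(arms) + " end if;") if arms else "null;"
        lines.append(f"          when {s} => " + body)
    lines += ["        end case;", "      end if;", "    end if;",
              "  end process;", "end rtl;"]
    return "\n".join(lines)

print(emit_vhdl())
```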

    A Survey on FPGA-Based Heterogeneous Clusters Architectures

    In recent years, the most powerful supercomputers have already reached megawatt power consumption levels, an important issue that challenges sustainability and shows the impossibility of maintaining this trend. To date, the prevalent approach to supercomputing has been dominated by CPUs and GPUs. Given their fixed architectures with generic instruction sets, they have been favored with many tools and mature workflows, which has led to mass adoption and further growth. However, reconfigurable hardware such as FPGAs has repeatedly proven to offer substantial advantages over this supercomputing approach with respect to performance and power consumption. In this survey, we review the most relevant works that advanced the field of heterogeneous supercomputing using FPGAs, focusing on their architectural characteristics. Each work is divided into three main parts: network, hardware, and software tools. All implementations face challenges that involve all three parts, and these dependencies result in compromises that designers must take into account. The advantages and limitations of each approach are discussed and compared in detail. The classification and study of the architectures illustrate the trade-offs of the solutions and help identify open problems and research lines.

    HyperFPGA: SoC-FPGA Cluster Architecture for Supercomputing and Scientific Applications

    Since their inception, supercomputers have addressed problems that far exceed those of a single computing device. Modern supercomputers are made up of tens of thousands of CPUs and GPUs in racks that are interconnected via elaborate and often ad hoc networks. These large facilities provide scientists with unprecedented and ever-growing computing power capable of tackling more complex and larger problems. In recent years, the most powerful supercomputers have already reached megawatt power consumption levels, an important issue that challenges sustainability and shows the impossibility of maintaining this trend. With more pressure on energy efficiency, an alternative to traditional architectures is needed. Reconfigurable hardware, such as FPGAs, has repeatedly been shown to offer substantial advantages over the traditional supercomputing approach with respect to performance and power consumption. Several works that advanced the field of heterogeneous supercomputing using FPGAs are described in this thesis \cite{survey-2002}. Each cluster and its architectural characteristics can be studied from three interconnected domains: network, hardware, and software tools, resulting in intertwined challenges that designers must take into account. The classification and study of the architectures illustrate the trade-offs of the solutions and help identify open problems and research lines, which in turn served as inspiration and background for the HyperFPGA.
    In this thesis, the HyperFPGA cluster is presented as a way to build scalable SoC-FPGA platforms to explore new architectures for improved performance and energy efficiency in high-performance computing, with a focus on flexibility and openness. The HyperFPGA is a modular platform based on a SoM (system-on-module) that includes power-monitoring tools and high-speed general-purpose interconnects to offer a great level of flexibility and introspection. By exploiting the reconfigurability and programmability of the HyperFPGA infrastructure, which combines FPGAs and CPUs with high-speed general-purpose connectors, novel computing paradigms can be implemented. A custom Linux OS and drivers, along with a custom script for hardware definition, provide a uniform interface from application to platform for a programmable framework that integrates existing tools. The development environment is demonstrated using the N-Queens problem, a classic benchmark for evaluating the performance of parallel computing systems. Overall, the results of the HyperFPGA on the N-Queens problem highlight the platform's ability to handle computationally intensive tasks and demonstrate its suitability for supercomputing experiments.
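
    For context, the N-Queens benchmark counts all placements of N mutually non-attacking queens on an N x N board, a search problem that parallelizes well and is therefore a common yardstick for clusters. The sketch below is a generic bitboard backtracking counter of the kind typically used as a software baseline; it is illustrative only and is not the thesis implementation.

```python
# Generic software baseline for the N-Queens counting benchmark.
# Bitboard backtracking; not the HyperFPGA implementation.

def count_queens(n: int) -> int:
    """Count all placements of n non-attacking queens on an n x n board."""
    full = (1 << n) - 1  # bitmask with one bit per column

    def solve(cols: int, diag1: int, diag2: int) -> int:
        if cols == full:          # every row has a queen
            return 1
        total = 0
        free = full & ~(cols | diag1 | diag2)  # squares not attacked
        while free:
            bit = free & -free    # lowest free square
            free ^= bit
            # place the queen and shift the diagonal masks for the next row
            total += solve(cols | bit,
                           ((diag1 | bit) << 1) & full,
                           (diag2 | bit) >> 1)
        return total

    return solve(0, 0, 0)

# Known values make validation easy: 8 -> 92, 10 -> 724.
assert count_queens(8) == 92
print(count_queens(10))  # 724
```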

    Timing Architecture for ESS

    Programa Oficial de Doutoramento en Investigación en Tecnoloxías da Información. 5023V01. [Abstract] The timing system is a key component for the control and synchronization of industrial and scientific facilities, such as particle accelerators. In this thesis we tackle the specification and development of the timing system for the European Spallation Source (ESS), the largest neutron source currently under construction. We approach this work at two levels: the specification of the timing system and the physical implementation of control systems using reconfigurable hardware. Regarding the specification of the timing system, we designed and implemented the configuration of the timing protocol to fulfil the requirements of ESS, and devised an operation mode and an application for the configuration and control of the timing system. We also present a tool and a methodology for implementing control systems using FPGAs, such as the nodes of the timing system. Both are based on statecharts, a graphical representation of systems that expands the concept of finite state machines, targeted at systems that need to be reconfigured quickly in multiple locations while minimizing the chance of errors. The tool automatically creates synthesizable VHDL code from a statechart of the system. The methodology explains the procedure to implement the statechart as a microprogrammed architecture on FPGAs.
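
    The microprogrammed alternative mentioned above replaces hardwired next-state logic with a control store that a small sequencer walks through. The toy simulation below illustrates the idea; the microinstruction encoding (outputs, event to test, and two next addresses) is a hypothetical illustration, not the encoding defined by the thesis.

```python
# Toy simulation of a microprogrammed statechart implementation: the
# statechart is compiled into a control store and a tiny sequencer walks
# it. The encoding here is purely illustrative.

# Each microinstruction: (outputs, event_to_test, addr_if_set, addr_if_clear)
CONTROL_STORE = [
    ("idle",   "arm",     1, 0),  # 0: wait for 'arm'
    ("armed",  "trigger", 2, 1),  # 1: wait for 'trigger'
    ("firing", "done",    0, 2),  # 2: drive outputs until 'done'
]

def run(events, steps=6, pc=0):
    """Step the sequencer once per 'clock', printing the active outputs."""
    for cycle in range(steps):
        outputs, cond, taken, fallthrough = CONTROL_STORE[pc]
        fired = cond in events.get(cycle, set())
        print(f"cycle {cycle}: pc={pc} outputs={outputs} {cond}={int(fired)}")
        pc = taken if fired else fallthrough

# 'arm' arrives at cycle 1, 'trigger' at cycle 2, 'done' at cycle 4.
run({1: {"arm"}, 2: {"trigger"}, 4: {"done"}})
```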

    Research and development for the data, trigger and control card in preparation for Hi-Lumi LHC

    When the Large Hadron Collider (LHC) increases its luminosity by an order of magnitude in the coming decade, the experiments that sit upon it must also be upgraded to maintain their physics performance in the increasingly demanding environment. To achieve this, the Compact Muon Solenoid (CMS) experiment will make use of tracking information in the Level-1 trigger for the first time, meaning that track reconstruction must be achieved in less than 4 μs in an all-FPGA architecture. MUonE is an experiment aiming to make an accurate measurement of the hadronic contribution to the anomalous magnetic moment of the muon. It will achieve this by making use of apparatus similar to that designed for CMS and will benefit from the research and development efforts there. This thesis presents development and testing work for the readout chain from tracker module to back-end processing card, as well as the results and analysis of a beam test used to validate this chain for both CMS and the MUonE experiment.

    Characterization and optimization of the prototype DEPFET modules for the Belle II Pixel Vertex Detector

    The Belle detector was located at the electron-positron collider KEKB in Tsukuba, Japan. It operated from 1999 to 2010, running mostly at the Y(4S) resonance, and achieved an integrated luminosity of 1041 fb^-1. The main research topic was CP violation in the B meson system. The measured results on B meson decays confirmed the theory of Kobayashi and Maskawa (Nobel Prize 2008) on the origin of CP violation within the Standard Model. Since the Standard Model nevertheless leaves many open questions, the upgrade of KEKB to SuperKEKB has the potential to find New Physics beyond the Standard Model. SuperKEKB will increase the world-record instantaneous luminosity of KEKB by a further factor of 40, to 8x10^35 cm^-2 s^-1, using the nano-beam scheme, in which the particle beams are collimated to about 50 nanometers at the interaction point. The physics goals are the precise measurement of CP violation and the search for rare or even "forbidden" decays of B mesons, looking for small deviations from the Standard Model with larger statistics and more precise measurements than ever before. To cope with the high luminosity of SuperKEKB, various components of Belle need to be upgraded to form the Belle II detector. Not only does the number of events increase, but so does the background, in particular the inevitable two-photon process. To minimize the extrapolation errors of the decay vertices of the B mesons, the vertex detector should be situated as close as possible to the beam pipe. A silicon strip detector, as used in Belle, is not able to cope with the high background at SuperKEKB. Therefore, a novel pixel vertex detector (PXD) will be installed, featuring monolithic sensors based on the DEPFET (DEPleted p-channel Field Effect Transistor) technology. The sensors can be thinned down to only 75 µm to minimize multiple scattering, offer a high signal-to-noise ratio, provide a high intrinsic position resolution of ~15 µm, support fast readout within 20 µs, and have low power consumption. The PXD consists of 40 sensor modules, each equipped with 14 custom-made ASICs for control and readout, mounted in two layers around the beam pipe.
    This thesis focuses on the characterization and optimization of the first full-size prototypes of the final sensor modules for the PXD. The combined control and readout electronics were investigated, improved, and optimized on prototype modules equipped with the complete set of ASICs: six Switchers per module enable the pixel rows sequentially (rolling-shutter mode) so that the signal-amplified drain currents of the DEPFET pixels can be measured and the pixel cells reset. A total of 1000 ADCs on each module sample the drain currents, resulting in a readout frequency of 50 kHz for the PXD. Switcher control sequences were simulated and applied to the prototypes to control the pixels properly. System-related aspects such as inter-ASIC communication, control sequences, and synchronization issues were studied and optimized. In addition, measurements with radioactive sources and lasers were performed to determine the optimal operating voltages for the different operation modes. The rolling-shutter readout mode is problematic when transient, periodic high background is present, for instance during the top-up injection of particle bunches into SuperKEKB. To address this issue, a new readout mode was proposed and investigated that allows a "gated", shutter-controlled operation of the detector: the pixel vertex detector is made blind for the short interval of 1-2 µs in which high background is expected. A prototype module was operated in this Gated Mode; the causes of the problems encountered were identified, and the resulting improvements contributed to the final module layout. Furthermore, two different kinds of prototype modules were operated successfully in a beam-test campaign. The cluster charge distributions, position resolutions, and efficiencies were studied, showing that the sensors are well suited for operation at Belle II.
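
    As a consistency check on the numbers above, the 50 kHz frame rate follows directly from reading the whole sensor within one 20 µs rolling-shutter frame:

```latex
% Frame rate implied by the 20 us rolling-shutter readout quoted above
f_{\mathrm{frame}} = \frac{1}{t_{\mathrm{frame}}}
                   = \frac{1}{20\,\mu\mathrm{s}} = 50\,\mathrm{kHz}
```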

    Design and validation of a scalable Digital Wireless Channel Emulator using an FPGA computing cluster

    A Digital Wireless Channel Emulator (DWCE) is a system capable of emulating the RF environment for a group of wireless devices. The use of digital wireless channel emulators with networking radios is hampered by the inability to efficiently scale a DWCE to a large number of nodes. If such a large-scale digital wireless channel emulator existed, a significant amount of time and money could be saved by testing networking radios in a laboratory before running lengthy and costly field tests. By utilizing the repeatability of a laboratory environment, it becomes possible to investigate and solve issues more quickly and efficiently, so that the performance of the radios is known with a high degree of certainty before they are brought to the field. This dissertation investigates the use of an FPGA cluster configured as a distributed system to provide the computational and network structure needed to scale a DWCE to support 1250 or more wireless devices, approximately two orders of magnitude more than any other documented system. In this dissertation, the term "scale" as applied to a DWCE is defined as an increase in three key factors: the number of wireless devices, the signal bandwidth emulated, and the fidelity of the emulation. It is possible to make tradeoffs and reduce any one of these to increase the other two. This dissertation presents a DWCE that can increase all of these factors in an efficient manner and thoroughly investigates the fidelity of the emulation it produces.
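
    To see why node count dominates the scaling problem, note that emulating the channel between every pair of devices grows quadratically with the number of nodes. The back-of-the-envelope sketch below assumes one emulated channel per directed node pair; that modeling assumption is ours for illustration and is not a figure taken from the dissertation.

```python
# Back-of-the-envelope scaling for a pairwise DWCE. Assumes one emulated
# channel per directed node pair; illustrative only, not from the thesis.

def channel_count(nodes: int) -> int:
    """Directed node pairs, i.e. independent channels to emulate."""
    return nodes * (nodes - 1)

for n in (12, 125, 1250):   # spanning two orders of magnitude
    print(f"{n:>5} nodes -> {channel_count(n):>9,} channels")
# 1250 nodes imply ~1.56 million channels, which is why the computation
# must be distributed across an FPGA cluster rather than a single device.
```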