36 research outputs found
A novel fluorescence-based assay for the rapid detection and quantification of cellular deoxyribonucleoside triphosphates
Current methods for measuring deoxyribonucleoside triphosphates (dNTPs) employ reagent- and labor-intensive assays utilizing radioisotopes in DNA polymerase-based assays and/or chromatography-based approaches. We have developed a rapid and sensitive 96-well fluorescence-based assay to quantify cellular dNTPs utilizing a standard real-time PCR thermocycler. This assay relies on the principle that incorporation of a limiting dNTP is required for primer extension and Taq polymerase-mediated 5′→3′ exonuclease hydrolysis of a dual-quenched fluorophore-labeled probe, resulting in fluorescence. The concentration of the limiting dNTP is directly proportional to the fluorescence generated. The assay demonstrated excellent linearity (R² > 0.99) and can be modified to detect between ~0.5 and 100 pmol of dNTP. The limits of detection (LOD) and quantification (LOQ) for all dNTPs were defined as <0.77 and <1.3 pmol, respectively. The intra-assay and inter-assay variation coefficients were determined to be <4.6% and <10%, respectively, with an accuracy of 100 ± 15% for all dNTPs. The assay quantified intracellular dNTPs with results similar to those obtained from a validated LC-MS/MS approach and successfully measured quantitative differences in dNTP pools in human cancer cells treated with inhibitors of thymidylate metabolism. This assay has important applications in research investigating the influence of pathological conditions or pharmacological agents on dNTP biosynthesis and regulation.
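Since the limiting dNTP concentration is proportional to the fluorescence generated, quantification reduces to fitting a standard curve and interpolating unknowns against it. The sketch below illustrates that principle in Python with hypothetical standards and ICH-style LOD/LOQ estimates; it is not the published analysis pipeline.

```python
# Hypothetical illustration of standard-curve quantification: fluorescence is
# proportional to the limiting dNTP, so a linear fit of known standards lets
# unknown samples be interpolated.
import numpy as np

# Hypothetical standards (pmol of limiting dNTP) and endpoint fluorescence (a.u.)
standards_pmol = np.array([0.5, 1, 2, 5, 10, 25, 50, 100])
fluorescence   = np.array([60, 115, 230, 570, 1140, 2840, 5690, 11400])

slope, intercept = np.polyfit(standards_pmol, fluorescence, 1)
predicted = slope * standards_pmol + intercept
ss_res = np.sum((fluorescence - predicted) ** 2)
ss_tot = np.sum((fluorescence - fluorescence.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot          # should exceed 0.99 for a usable curve

def pmol_from_fluorescence(signal):
    """Interpolate an unknown sample against the standard curve."""
    return (signal - intercept) / slope

# Common ICH-style estimates of LOD/LOQ from the residual standard deviation
sigma = np.std(fluorescence - predicted, ddof=2)
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

print(f"R^2 = {r_squared:.4f}, LOD ~ {lod:.2f} pmol, LOQ ~ {loq:.2f} pmol")
print(f"Unknown at 3000 a.u. ~ {pmol_from_fluorescence(3000):.1f} pmol")
```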
Development of test methods for the qualification of electronic components and systems adapted to the radiation environments of high-energy accelerators
The Large Hadron Collider (LHC), the largest and most powerful particle accelerator in the world, started operation in 2008 and is the last stage of CERN's accelerator complex. The LHC consists of a 27-kilometre ring of superconducting magnets that accelerates two beams up to 7 TeV before colliding them at 14 TeV in one of the five experiments monitoring the results of the collisions. The LHC notably allowed the discovery of the Higgs boson and of baryonic particles predicted by the Standard Model. The radiation environment of the LHC and its injection lines is composed of different particles over a large spectrum of energies, from the GeV level down to the meV level (e.g. thermal neutrons). The electronic equipment operating in such a harsh radiation environment, mostly based on Commercial Off The Shelf (COTS) components, can experience failures induced by radiation effects. The criticality of this equipment can be very high: in the best case, the failure of a control system leads to a beam dump, which can drastically reduce the availability of the beam for science, and in the worst case, the failure of a safety system can lead to the destruction of part of the machine. The upgrade of the LHC planned for 2025, the High Luminosity LHC (HL-LHC), will achieve an annual luminosity five times higher than that of the current LHC. Consequently, the radiation levels generated by the operation of the machine will also increase drastically. At such high radiation levels, a significant number of COTS-based systems will be exposed to levels they cannot withstand. This will require either designing more radiation-tolerant COTS-based systems and/or preventively replacing systems before their end of life. Thus, while in previous years Single Event Effects (SEEs) were the dominant cause of failure, in the future cumulative radiation effects will also become a major concern. While a large effort has been made in the past on the qualification process against SEE-induced failures, the qualification process for cumulative radiation effects has remained mostly unchanged. The aim of this work was therefore to investigate how CERN's Radiation Hardness Assurance (RHA) could be improved to respond to this new challenge and ensure that no system failures will impact LHC operations. This involved several activities: (i) the study of the particularities of the LHC radiation environment and its impact on the components and systems exposed to it, (ii) the study of the suitability of current qualification methods and the development of approaches adapted to CERN's needs, and (iii) the study of reliable system lifetime estimation methods.
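On the last activity, a minimal sketch of one common way to reason about preventive substitution under cumulative effects is to divide a derated failure dose by the annual dose expected at the installation point. The parameters below are hypothetical, and this is not necessarily the estimation method developed in the thesis.

```python
# Simplified sketch (not the thesis's actual method) of preventive-replacement
# scheduling for COTS-based systems exposed to cumulative radiation effects:
# the failure dose observed in testing, derated by a safety margin, is divided
# by the annual dose expected at the installation location.
def years_before_replacement(failure_dose_gy, annual_dose_gy, safety_factor=2.0):
    """Estimate how many years a system can stay installed before a
    preventive substitution is warranted. All parameters are hypothetical."""
    allowed_dose = failure_dose_gy / safety_factor   # derated tolerance
    return allowed_dose / annual_dose_gy

# Example: a board failing at 200 Gy in TID testing, installed where the
# HL-LHC era is expected to deposit 20 Gy per year.
print(f"{years_before_replacement(200.0, 20.0):.1f} years")  # -> 5.0 years
```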
General Purpose and Neural Network Approach for Benchmarking Microcontrollers Under Radiation
In this work, a testing methodology for microcontrollers exposed to radiation is proposed. General-purpose benchmarks are reviewed to provide a means of testing all the macro-areas of a microcontroller, and a neural network benchmark is introduced as a representative of a novel class of computing algorithms for IoT devices. Metrics from the literature are reviewed and a new metric, the Mean Energy per Unit Workload Between Failure, is introduced. It combines computing performance and energy consumption in a single unit, making it especially useful for benchmarking battery-operated edge nodes. A method to analyse reset causes is also introduced, giving important insights into failure mechanisms and potential patterns. The testing strategy has been validated on a representative set of four Cortex-M0+ and Cortex-M4 microcontrollers irradiated under a 200 MeV proton beam at different fluences. Results from the irradiation campaign are presented and discussed to validate the proposed benchmarks and metrics.
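The abstract does not spell out how the Mean Energy per Unit Workload Between Failure is tabulated, so the sketch below is only one plausible reading: the energy drawn is divided by the workload completed before each failure and averaged over failure-terminated runs. The run data and numbers are hypothetical.

```python
# Hedged sketch of how a metric like the Mean Energy per Unit Workload Between
# Failure could be computed from an irradiation log; the authors' exact
# definition may differ.
from dataclasses import dataclass

@dataclass
class Run:
    workload_units: float   # e.g. benchmark iterations completed before a failure
    energy_joules: float    # energy drawn by the device during the run

def mean_energy_per_unit_workload_between_failure(runs):
    """Average, over failure-terminated runs, of the energy spent per unit of
    workload completed before the failure (hypothetical formulation)."""
    per_run = [r.energy_joules / r.workload_units for r in runs if r.workload_units > 0]
    return sum(per_run) / len(per_run)

runs = [Run(1.2e6, 54.0), Run(0.8e6, 39.5), Run(2.1e6, 96.2)]   # hypothetical data
print(f"{mean_energy_per_unit_workload_between_failure(runs):.2e} J per workload unit")
```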
Analysis of Bipolar Integrated Circuit Degradation Mechanisms Against Combined TID–DD Effects
Integrated circuits sensitive to both total ionizing dose (TID) and displacement damage (DD) effects can exhibit degradation profiles resulting from a combination of degradation mechanisms induced by both effects. This work presents circuit simulations based on experimental data to explain the degradation mechanisms induced by combined TID and DD effects on a bipolar IC current source. First, the effect of the degradation of each internal transistor on the circuit's response is evaluated by applying electrical parametric changes. Then, simulations are performed for different degradation scenarios based on observed circuit behaviors to reproduce the different TID, DD, and combined TID–DD responses. These simulations show that a synergistic interaction between a DD-induced current leakage in a transistor located in the bandgap-reference part and the gain degradation of a current mirror induced by both TID and DD appears to be responsible for the combined TID–DD response. It is also shown that the circuit degradation rate depends on the DDD/TID rate ratios encountered during exposure.
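As a simplified illustration of the kind of parametric sweep described above, the ratio of a basic two-transistor BJT current mirror, I_out/I_ref = 1/(1 + 2/beta), can be evaluated for progressively degraded current gains. The gain values below are hypothetical, and the circuit model in the paper is considerably more detailed.

```python
# Minimal, hypothetical sketch of a parametric study: sweep the current gain
# (beta) of the mirror transistors, as TID/DD degrade it, and watch the ideal
# two-transistor mirror ratio I_out/I_ref = 1/(1 + 2/beta) fall away from unity.
def mirror_ratio(beta):
    """Output-to-reference current ratio of an ideal two-BJT current mirror."""
    return 1.0 / (1.0 + 2.0 / beta)

for beta in (150, 50, 15, 5):   # hypothetical pre- and post-irradiation gains
    print(f"beta = {beta:4d} -> I_out/I_ref = {mirror_ratio(beta):.3f}")
```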
Design of a Radiation Tolerant System for Total Ionizing Dose Monitoring Using Floating Gate and RadFET Dosimeters
The need to improve the accuracy of Total Ionizing Dose (TID) measurements in CERN's radiation zones has driven the search for new TID-measuring candidates. For this purpose, a TID Monitoring System (TIDMon) is designed that investigates the effects of TID on a Floating Gate Dosimeter (FGDOS) compared to Radiation-sensing Field-Effect Transistors (RadFETs). The monitoring system is characterized in the CERN test facilities where the LHC mixed radiation field is reproduced. The architecture of the TIDMon, the radiation tolerance techniques, and the design choices adopted for the system are presented in this work.
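As background to the dosimeter comparison, RadFET readout generally relates the radiation-induced threshold-voltage shift to absorbed dose through a calibration law; a power-law form is sketched below with made-up coefficients. The actual TIDMon readout and conversion details may differ.

```python
# Illustrative only: RadFET dosimetry is often described by a calibration law
# of the form dV = a * D**b, with a and b obtained during calibration. The
# coefficients below are hypothetical.
def dose_from_radfet_shift(delta_v_mv, a=12.0, b=0.85):
    """Invert a hypothetical power-law calibration to recover dose in Gy."""
    return (delta_v_mv / a) ** (1.0 / b)

print(f"{dose_from_radfet_shift(240.0):.1f} Gy")  # hypothetical reading
```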
Investigation on the Sensitivity of a 65 nm Flash-Based FPGA for CERN Applications
The continuous need to upgrade the instrumentation and control electronics operating in various CERN experiments and along the LHC accelerator has driven the qualification of the new flash-based SmartFusion2 FPGAs, a leading candidate to be embedded in future systems. The radiation testing conditions have been chosen to fit the unique CERN environment, while the setup is carefully designed to qualify all the subcomponents of the SoC FPGA that the engineering teams of CERN are considering for exploitation.
FPGA Qualification and Failure Rate Estimation Methodology for LHC Environments Using Benchmark Test Circuits
When studying the behavior of a field programmable gate array (FPGA) under radiation, the most commonly used methodology consists of evaluating the single-event effect (SEE) cross section of its elements individually. However, this method does not allow the estimation of the device failure rate when a custom design is used. An alternative approach based on benchmark circuits is presented in this article. It allows standardized application-level testing, which makes the comparison between different FPGAs easier. Moreover, it allows the evaluation of the FPGA failure rate independently of the application that will be implemented. The employed benchmark circuit belongs to the ITC'99 benchmark suite developed at Politecnico di Torino. Using the proposed methodology, the response of four FPGAs (the NG-Medium, the ProASIC3, the SmartFusion2, and the PolarFire) was evaluated under high-energy protons. Radiation tests with thermal neutrons were also conducted on the PolarFire to assess its potential sensitivity to them. Moreover, its performance in terms of total ionizing dose (TID) effects has been evaluated by measuring the degradation of the propagation delay during irradiation.
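For context, the failure-rate bookkeeping behind such benchmark-based qualification typically starts from the measured cross section (observed events per unit fluence) and multiplies it by the flux expected at the installation location. The sketch below uses hypothetical numbers and is not taken from the article.

```python
# Standard cross-section bookkeeping for failure-rate estimation: the
# per-device cross section measured under beam is multiplied by the particle
# flux expected at the installation location (values below are hypothetical).
def cross_section(n_events, fluence_cm2):
    """SEE cross section in cm^2 per device: observed events / particle fluence."""
    return n_events / fluence_cm2

def failure_rate(sigma_cm2, flux_cm2_s):
    """Expected failures per second at a location with the given particle flux."""
    return sigma_cm2 * flux_cm2_s

sigma = cross_section(n_events=42, fluence_cm2=1.0e11)   # -> 4.2e-10 cm^2
rate = failure_rate(sigma, flux_cm2_s=1.0e3)             # failures per second
print(f"sigma = {sigma:.2e} cm^2, MTBF ~ {1.0 / (rate * 3600 * 24):.0f} days")
```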
Exploring Radiation-Induced Vulnerabilities in RFICs Through Traditional RF Metrics
This article describes how to analyze radiation-induced effects using traditional radio-frequency (RF) metrics in RF integrated circuits (RFICs) intended for the implementation of software-defined radios (SDRs). The impacts of total ionizing dose (TID) and single-event effects (SEEs) on the device characteristics are shown, and their consequences for an SDR are discussed. The analysis is based on the error vector magnitude (EVM), the carrier frequency offset (CFO), and SEEs in the device configuration. It has been applied to the AD9361 RF Agile Transceiver. Experimental results were obtained using the 200 MeV high-energy proton beam at the Paul Scherrer Institute. Two different applications are analyzed to emphasize the impact that radiation effects can have on different communication schemes: quadrature phase shift keying (QPSK) and quadrature amplitude modulation (64-QAM). Results show that single-event transients (SETs) in the analog circuitry of the RF transceiver or single-event upsets (SEUs) inside the configuration registers can lead to EVM degradation. In addition, TID effects lead to a frequency drift of the RF carrier, generating an offset between the transmitter and receiver nodes that needs to be taken into account when selecting the recovery algorithms in the receiver.
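The EVM figure of merit mentioned above has a standard definition: the RMS error vector normalized to the RMS reference symbol amplitude. The short sketch below computes it for a synthetic QPSK constellation with additive noise, which is far simpler than the measured SDR chain in the article.

```python
# Self-contained sketch of the error vector magnitude (EVM) metric, computed
# for an ideal QPSK constellation with synthetic additive noise.
import numpy as np

rng = np.random.default_rng(0)
ideal = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
received = ideal + (rng.normal(0, 0.05, 1000) + 1j * rng.normal(0, 0.05, 1000))

# RMS EVM: RMS of the error vectors normalized by the RMS of the reference symbols
evm_rms = np.sqrt(np.mean(np.abs(received - ideal) ** 2) /
                  np.mean(np.abs(ideal) ** 2))
print(f"EVM = {100 * evm_rms:.2f} %")
```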
Impact of flux selection, pulsed beams and operation mode on system failure observability during radiation qualification
Systems and Systems on Chip (SoCs) under radiation can exhibit complex failure modes with different probabilities. Depending on its modes of operation and on the selected test flux, the failure mode with the highest probability of occurrence may mask the others, preventing them from being detected. Flux selection therefore becomes a key parameter in system-level testing to increase the observability of such events and prevent them from remaining hidden. The inability to detect and identify these events could lead to unexpected failures during operation. This work proposes a methodology to evaluate the degree of observability of low-probability failure modes by varying the flux and demonstrates its validity through measurements.
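One way to see why the test flux changes observability is to assume Poisson statistics and a fixed recovery dead time after each dominant-mode failure: at high flux, an increasing share of the fluence arrives while the system is down, hiding the rarer mode. The sketch below uses this simplified model with hypothetical cross sections; it is not the paper's methodology.

```python
# Hedged illustration (not the paper's exact model) of why flux selection
# affects observability: if the dominant failure mode forces a recovery dead
# time after each occurrence, a high test flux hides an increasing share of
# the fluence from the rarer mode, which can then go undetected.
import math

def prob_observe_rare_mode(fluence, flux, sigma_dominant, sigma_rare, recovery_s):
    """Probability of seeing >=1 rare-mode event during a test of the given
    total fluence (cm^-2) at the given flux (cm^-2 s^-1), assuming Poisson
    statistics and a fixed dead time after each dominant-mode failure."""
    live_fraction = 1.0 / (1.0 + sigma_dominant * flux * recovery_s)
    expected_rare = sigma_rare * fluence * live_fraction
    return 1.0 - math.exp(-expected_rare)

for flux in (1e5, 1e7, 1e9):   # hypothetical test fluxes
    p = prob_observe_rare_mode(1e11, flux, sigma_dominant=1e-9,
                               sigma_rare=3e-11, recovery_s=30.0)
    print(f"flux {flux:.0e} cm^-2 s^-1 -> P(observe rare mode) = {p:.2f}")
```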