
    Disruption prediction at JET through deep convolutional neural networks using spatiotemporal information from plasma profiles

    In view of future high-power nuclear fusion experiments, early identification of disruptions is a mandatory requirement, and the main goal is presently moving from disruption mitigation to disruption avoidance and control. In this work, a deep convolutional neural network (CNN) is proposed to provide early detection of disruptive events at JET. The CNN's ability to learn relevant features, avoiding hand-engineered feature extraction, is exploited to extract the spatiotemporal information from 1D plasma profiles. The model is trained on regularly terminated discharges and on the automatically selected disruptive phases of disrupted discharges from the recent ITER-like-wall experiments. The prediction performance is evaluated on a set of discharges representative of different operating scenarios, and an in-depth analysis is made of how the performance evolves with the experimental conditions considered. Finally, as real-time triggers and termination schemes are being developed at JET, the proposed model has been tested on a set of recent experiments dedicated to plasma termination for disruption avoidance and mitigation. The CNN model demonstrates very high performance, and the use of 1D plasma profiles as model input allows the underlying physical phenomena behind the predictor's decisions to be understood.
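
    The abstract contains no code; purely as an illustration of the kind of model it describes, the sketch below shows one plausible way a CNN could ingest windows of stacked 1D plasma profiles (radius x time, one channel per profile) and output a disruption-alarm probability. The `DisruptionCNN` class, layer sizes, window dimensions, and profile choices are assumptions for this sketch, not the authors' architecture.

```python
# Hypothetical sketch: a 2D CNN over (profile channel, radius, time) windows.
# All shapes and layers are illustrative assumptions, not the JET model.
import torch
import torch.nn as nn

class DisruptionCNN(nn.Module):
    def __init__(self, n_profiles=3, n_radial=32, n_time=40):
        super().__init__()
        # Treat each window as an "image": radius x time, with one channel
        # per profile (e.g. temperature, density, radiation).
        self.features = nn.Sequential(
            nn.Conv2d(n_profiles, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # logit of the "disruptive" probability
        )

    def forward(self, x):  # x: (batch, n_profiles, n_radial, n_time)
        return self.classifier(self.features(x))

# Example: a batch of 8 windows, 3 profiles, 32 radial points, 40 time slices.
logits = DisruptionCNN()(torch.randn(8, 3, 32, 40))
probs = torch.sigmoid(logits)  # alarm when the probability exceeds a tuned threshold
```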

    Machine Learning and Deep Learning applications for the protection of nuclear fusion devices

    This thesis addresses the use of artificial intelligence methods for the protection of nuclear fusion devices, with reference to the Joint European Torus (JET) tokamak and the Wendelstein 7-X (W7-X) stellarator. JET is currently the world's largest operational tokamak and the only one operated with deuterium-tritium fuel, while W7-X is the world's largest and most advanced stellarator. For the work on JET, the research focused on the prediction of "disruptions", sudden terminations of plasma confinement. For the development and testing of machine learning classifiers, a total of 198 disrupted discharges and 219 regularly terminated discharges from JET were used. Convolutional neural networks (CNNs) were proposed to extract the spatiotemporal characteristics from plasma temperature, density and radiation profiles. Since the CNN is a supervised algorithm, a label must be explicitly assigned to each time window of the dataset during training. All segments belonging to regularly terminated discharges were labelled as 'stable'. For each disrupted discharge, the 'unstable' label was assigned by automatically identifying the pre-disruption phase with an algorithm developed during the PhD. The CNN performance was evaluated using disrupted and regularly terminated discharges from a decade of JET experimental campaigns, from 2011 to 2020, showing the robustness of the algorithm. Concerning W7-X, the research involved the real-time measurement of heat fluxes on plasma-facing components. THEODOR is a code currently used at W7-X for computing heat fluxes offline; for heat-load control, however, fast heat-flux estimation in real time is required. Part of the PhD work was therefore dedicated to refactoring and optimizing the THEODOR code, with the aim of speeding up calculation times and making it compatible with real-time use. In addition, a physics-informed neural network (PINN) model was proposed to bring the heat-flux computation to GPUs for real-time implementation.
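
    As a minimal sketch of the labelling scheme described above (stable windows from regular terminations, unstable windows from the automatically detected pre-disruption phase), the snippet below assumes each disrupted shot comes with a hypothetical `pre_disruption_start` time; the detection algorithm itself is not reproduced here.

```python
# Hypothetical sketch of window labelling; the pre-disruption start time is
# assumed to be provided by an external detection algorithm (not shown).
import numpy as np

def label_windows(t_windows, disrupted, pre_disruption_start=None):
    """t_windows: array of window end times (s); returns 0 = stable, 1 = unstable."""
    labels = np.zeros(len(t_windows), dtype=int)
    if disrupted and pre_disruption_start is not None:
        # Windows falling in the automatically identified pre-disruption
        # phase are marked unstable; everything earlier stays stable.
        labels[t_windows >= pre_disruption_start] = 1
    return labels

# Regularly terminated discharge: every window is stable.
print(label_windows(np.arange(40.0, 60.0, 0.5), disrupted=False))
# Disrupted discharge with a (hypothetical) pre-disruption phase from t = 58 s.
print(label_windows(np.arange(40.0, 60.0, 0.5), disrupted=True,
                    pre_disruption_start=58.0))
```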

    2022 Review of Data-Driven Plasma Science

    Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections, in addition to the introduction and summary. Following an overview of fundamental data-driven science, five further sections cover widely studied topics of plasma science and technology: basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents a summary and outlook. Despite the recent impressive progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and to identify where further innovations are required.

    Neural Network Methods for Radiation Detectors and Imaging

    Recent advances in image data processing through machine learning, and especially deep neural networks (DNNs), allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While widely used for edge computing, electronics-based hardware accelerators, ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs), are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.
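
    To make the "train offline, deploy to the edge" workflow mentioned above concrete, here is a small illustrative sketch using PyTorch dynamic quantization and TorchScript export. The tiny placeholder model, file name, and layer sizes are assumptions; the paper does not prescribe this toolchain.

```python
# Illustrative edge-deployment sketch: quantize a trained model and export it
# as a self-contained TorchScript artifact. The model here is a placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()  # trained weights would normally be loaded here

# Dynamic quantization: weights of Linear layers are stored in int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# TorchScript export produces an artifact an edge runtime can load directly.
scripted = torch.jit.trace(quantized, torch.randn(1, 256))
scripted.save("detector_edge_model.pt")  # hypothetical file name
```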

    Neural Networks for Modeling and Control of Particle Accelerators

    We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
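
    Purely as an illustration of what a neural network-based control loop can look like in outline, the skeleton below maps recent sensor readings to a corrective actuator setpoint once per control cycle. The policy network, signal choices, and plant interface are all hypothetical; they are not the FAST resonance-control system.

```python
# Illustrative control-loop skeleton: a small neural network maps recent
# measurements (e.g. cavity detuning, temperatures) to a corrective setpoint.
# The model, signals, and plant interface are assumptions for this sketch.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))

def read_sensors():
    # Placeholder for detuning and temperature readings from the machine.
    return torch.randn(4)

def apply_setpoint(u):
    # Placeholder for writing the actuator command (e.g. a chiller setpoint).
    print(f"setpoint correction: {u:+.3f}")

with torch.no_grad():
    for _ in range(5):  # one correction per control cycle
        x = read_sensors()
        u = policy(x).item()
        apply_setpoint(u)
```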

    Advanced photonic and electronic systems - WILGA 2017

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers more than 350 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. WILGA is a very good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over WILGA is held by the Elektronika technical journal (SEP), IJET (PAN), and Proceedings of SPIE; the latter editorial series publishes more than 200 papers from WILGA annually. WILGA 2017 was the XL (40th) edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. This article is a digest of selected works presented during the WILGA 2017 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445.

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
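
    For readers unfamiliar with the central quantity in the issue's title, the short sketch below computes the Shannon entropy of an image's gray-level histogram, H = -sum(p_i * log2(p_i)); it is a generic illustration, not an algorithm from any of the collected articles.

```python
# Shannon entropy of an image's gray-level histogram (bits per pixel).
import numpy as np

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128)                      # uniform image: ~0 bits
noisy = rng.integers(0, 256, size=(64, 64))        # near-maximal entropy
print(image_entropy(flat), image_entropy(noisy))   # ~0.0 vs ~8.0 bits
```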

    Biomedical Sensing and Imaging

    This book mainly deals with recent advances in biomedical sensing and imaging. More recently, wearable and smart biosensors and devices, which facilitate diagnostics in non-clinical settings, have become a hot topic. Combined with machine learning and artificial intelligence, they could revolutionize the biomedical diagnostic field. The aim of this book is to provide a research forum on biomedical sensing and imaging and to extend the scientific frontier of this very important and significant biomedical endeavor.

    Ion Irradiation-induced Microstructural Change in SiC

    The high-temperature radiation resistance of nuclear materials has become a key issue in developing future nuclear reactors. Because of its mechanical stability under high-energy neutron irradiation and high temperature, silicon carbide (SiC) has great potential as a structural material in advanced nuclear energy systems. A newly developed nano-engineered (NE) 3C SiC with a nano-layered stacking fault (SF) structure has recently been considered a prospective choice because enhanced point defect annihilation between the layered structures leads to outstanding radiation durability. The objective of this project was to advance the understanding of gas bubble formation mechanisms in SiC under irradiation. In this work, the microstructural evolution induced by helium implantation and ion irradiation was investigated in single-crystal and NE SiC. Elastic recoil detection analysis confirmed that the as-implanted helium depth profile did not change under irradiation to 30 dpa at 700 °C. Helium bubbles were found in NE SiC after heavy-ion irradiation at a lower temperature than in previous literature results. These results expand the current understanding of the helium migration mechanism in NE SiC under high-temperature irradiation environments. No obvious bubble growth was observed after ion irradiation at 700 °C, suggesting a long helium bubble incubation process under continued irradiation at this temperature and dose. As determined by electron energy loss spectroscopy measurements, only 1% of the implanted helium atoms are trapped in bubbles. Helium redistribution and release were observed in the TEM samples under in-situ irradiation at 800 °C. In-situ TEM analysis revealed that the nano-layered SF structure is radiation tolerant below a dose of about 15 dpa at 800 °C, but continued irradiation to 20 dpa under these in-situ conditions leads to loss of the stacking fault structure, which may be an artifact of irradiating thin TEM foils. The irradiation stability of the SF structure under bulk irradiation remains unknown. The stacking fault structure is critical since it suppresses the formation of the dislocation loops normally observed under these irradiation conditions. Systematic studies were performed toward understanding the role of defect migration under irradiation in the evolution of helium bubbles in NE SiC.

    Psr1p interacts with SUN/sad1p and EB1/mal3p to establish the bipolar spindle

    During mitosis, interpolar microtubules from the two spindle pole bodies (SPBs) interdigitate to create an antiparallel microtubule array that accommodates numerous regulatory proteins. Among these proteins, the kinesin-5 cut7p/Eg5 is the key player responsible for sliding apart antiparallel microtubules and thus helps establish the bipolar spindle. At the onset of mitosis, the two SPBs are adjacent to one another, with most microtubules running nearly parallel toward the nuclear envelope, creating an unfavorable microtubule configuration for the kinesin-5 motors. How the cell organizes the antiparallel microtubule array in the first place at mitotic onset therefore remains enigmatic. Here, we show that a novel protein, psr1p, localizes to the SPB and plays a key role in organizing the antiparallel microtubule array. The absence of psr1+ leads to a transient monopolar spindle and massive chromosome loss. Further functional characterization demonstrates that psr1p is recruited to the SPB through interaction with the conserved SUN protein sad1p and that psr1p physically interacts with the conserved microtubule plus-tip protein mal3p/EB1. These results suggest a model in which psr1p serves as a linker between sad1p/SUN and mal3p/EB1, allowing microtubule plus ends to be coupled to the SPBs for organization of an antiparallel microtubule array. Thus, we conclude that psr1p is involved in organizing the antiparallel microtubule array at mitotic onset through interaction with SUN/sad1p and EB1/mal3p, thereby establishing the bipolar spindle.