169 research outputs found
ELM regime classification by conformal prediction on an information manifold
Characterization and control of the plasma instabilities known as edge-localized modes (ELMs) is crucial for the operation of fusion reactors. Recently, machine learning methods have demonstrated good potential for making useful inferences from stochastic fusion data sets. However, traditional classification methods do not offer an inherent estimate of the goodness of their predictions. In this paper, a distance-based conformal predictor classifier integrated with a geometric-probabilistic framework is presented. The first benefit of the approach lies in its comprehensive treatment of highly stochastic fusion data sets, by modeling the measurements with probability distributions in a metric space. This enables calculation of a natural distance measure between probability distributions: the Rao geodesic distance. Second, the predictions are accompanied by estimates of their accuracy and reliability. The method is applied to the classification of regimes characterized by different types of ELMs, based on measurements of global parameters and their error bars. This yields promising success rates and outperforms state-of-the-art automatic techniques for recognizing ELM signatures. The estimates of the goodness of the predictions increase the confidence of ELM experts in the classification, allow more reliable decisions regarding plasma control, and increase the robustness of the control system.
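To make the idea concrete, below is a minimal sketch of a distance-based inductive conformal classifier in which each measurement (value plus error bar) is modeled as a univariate Gaussian and compared with the closed-form Fisher-Rao geodesic distance. The data layout, the nearest-neighbour nonconformity score and all function names are illustrative assumptions, not the implementation described in the paper.

```python
# Hedged sketch: inductive conformal classification with a Rao-distance
# nonconformity measure. Each example is a list of (mean, error bar) pairs,
# one pair per measured global parameter; this layout is an assumption.
import numpy as np

def rao_distance(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao geodesic distance between two univariate Gaussians."""
    f = np.sqrt((mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2)
    F = np.sqrt((mu1 - mu2) ** 2 / 2.0 + (sigma1 + sigma2) ** 2)
    return np.sqrt(2.0) * np.log((F + f) / (F - f + 1e-12))

def example_distance(x, xi):
    """Sum of Rao distances over the (mean, error bar) pairs of all measured signals."""
    return sum(rao_distance(m1, s1, m2, s2) for (m1, s1), (m2, s2) in zip(x, xi))

def nonconformity(x, y, X_train, Y_train):
    """Nearest same-class distance divided by nearest other-class distance."""
    d = np.array([example_distance(x, xi) for xi in X_train])
    return d[Y_train == y].min() / (d[Y_train != y].min() + 1e-12)

def conformal_p_values(x, X_train, Y_train, X_cal, Y_cal, labels):
    """Inductive conformal prediction: one p-value per candidate ELM regime."""
    cal = np.array([nonconformity(xc, yc, X_train, Y_train)
                    for xc, yc in zip(X_cal, Y_cal)])
    return {lab: (np.sum(cal >= nonconformity(x, lab, X_train, Y_train)) + 1)
                 / (len(cal) + 1.0)
            for lab in labels}
```

The largest p-value gives the point prediction, its value the credibility, and one minus the second-largest p-value the confidence, which is how conformal predictors typically report the goodness of each classification.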
Test-bed of a real time detection system for L/H and H/L transitions implemented with the ITMS platform
A basic requirement of the data acquisition systems used in long-pulse fusion experiments is to detect events of interest in the acquired signals in real time. Developing such applications is usually a complex task, so a set of hardware and software tools is needed to simplify their implementation. An example of such tools is the Intelligent Test and Measurement System (ITMS), which offers distributed data acquisition, distribution and real-time processing capabilities with advanced, but easy to use, software tools that simplify application development and system setup. This paper presents the application of the ITMS platform to the problem of detecting L/H and H/L transitions in real time using efficient pattern recognition algorithms.
Exploiting graphic processing units parallelism to improve intelligent data acquisition system performance in JET's correlation reflectometer
The performance of intelligent data acquisition systems relies heavily on their processing capabilities and local bus bandwidth, especially in applications with high sample rates or a large number of channels. This is the case for the self-adaptive sampling rate data acquisition system installed as a pilot experiment in the KG8B correlation reflectometer at JET. The system, which is based on the ITMS platform, continuously adapts the sample rate during the acquisition depending on the signal bandwidth. In order to do so, it must transfer the acquired data to a memory buffer in the host processor and run computationally heavy algorithms on each data block. The processing capabilities of the host CPU and the bandwidth of the PXI bus limit the maximum sample rate that can be achieved, thereby limiting the maximum bandwidth of the phenomena that can be studied. Graphics processing units (GPUs) are becoming an alternative for speeding up compute-intensive kernels of scientific, imaging and simulation applications. However, integrating this technology into data acquisition systems is not a straightforward step, not to mention exploiting their parallelism efficiently. This paper discusses the use of GPUs with new high-speed data bus interfaces to improve the performance of the self-adaptive sampling rate data acquisition system installed on JET. Integration issues are discussed and performance evaluations are presented.
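The kind of kernel that benefits from GPU offload here is the per-block spectral analysis used to estimate the occupied bandwidth. The sketch below, assuming a CUDA-capable GPU and the CuPy library, is only an illustration of that pattern; the block size, energy threshold and function name are not taken from the JET system.

```python
# Illustrative GPU offload of the per-block bandwidth estimate (not the JET code).
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

def block_bandwidth_gpu(block, fs, energy_fraction=0.99):
    """Frequency below which `energy_fraction` of the block's spectral energy lies."""
    x = cp.asarray(block, dtype=cp.float32)          # host -> device transfer
    spectrum = cp.abs(cp.fft.rfft(x)) ** 2           # FFT computed on the GPU
    cumulative = cp.cumsum(spectrum) / cp.sum(spectrum)
    k = int(cp.searchsorted(cumulative, cp.asarray(energy_fraction)))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    return freqs[min(k, len(freqs) - 1)]
```

In such a scheme, each acquired block is streamed to the GPU, its bandwidth estimated there, and only the resulting decimation decision returned to the host, relieving the CPU and the local bus.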
Real time plasma disruptions detection in JET implemented with the ITMS platform using FPGA based IDAQ
The use of FPGAs in data acquisition cards for processing purposes allows efficient real-time implementation of pattern recognition algorithms. Using 13 waveforms from JET's database, an algorithm for detecting incoming plasma disruptions has been implemented. This algorithm is written in MATLAB using floating-point representation. In this work we show the methodology used to implement the real-time version of the algorithm using Intelligent Data Acquisition Cards (IDAQ), DAQ devices with a field-programmable gate array (FPGA) for local processing. This methodology is based on the translation of the MATLAB code to LabVIEW and the final coding of specific pieces of code in LabVIEW for FPGA in fixed-point format. The whole system for evaluating the real-time disruption detection (RTDD) has been implemented using the Intelligent Test and Measurement System (ITMS) platform. ITMS offers distributed data acquisition, distribution and real-time processing capabilities with advanced, but easy to use, software tools that simplify application development and system setup. The RTDD implementation uses a standard PXI/PXIe architecture. Two 8-channel analog output cards play back JET's database signals, and two 8-channel DAQ cards with FPGA acquire the signals and compute a feature vector based on FFT analysis. Finally, the acquired feature vector is used by the system CPU to execute a pattern recognition algorithm that estimates an incoming disruption.
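As a rough picture of the processing chain, the sketch below computes an FFT-based feature vector per sliding window and applies a simple distance-to-pattern alarm on the host side. The window length, band split and threshold rule are illustrative assumptions; the actual JET/ITMS feature definition and classifier are not reproduced here.

```python
# Hedged sketch of an FFT-based feature vector plus a simple alarm stage.
import numpy as np

def fft_feature_vector(window, n_bands=8):
    """Split the window's power spectrum into bands and return the log energy of each band."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log10(np.array([b.sum() for b in bands]) + 1e-12)

def disruption_alarm(feature_vector, reference_vectors, threshold):
    """Raise an alarm when the feature vector is close to known pre-disruptive patterns."""
    distances = np.linalg.norm(reference_vectors - feature_vector, axis=1)
    return bool(distances.min() < threshold)
```

On the FPGA, the equivalent of `fft_feature_vector` would be coded in fixed point, which is why the MATLAB-to-LabVIEW-FPGA translation step is emphasized in the paper.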
New information processing methods for control in magnetic confinement nuclear fusion
Thermonuclear plasmas are complex and highly non-linear physical objects; therefore, in the most advanced present-day devices for the study of magnetic confinement fusion, thousands of signals have to be acquired in each experiment in order to progress with the understanding indispensable for the final reactor. On the other hand, the resulting massive databases, more than 40 Tbytes in the case of the JET Joint Undertaking, pose significant problems. In this paper, solutions to reduce the sheer amount of data by means of different compression techniques and adaptive sampling frequency architectures are presented. As an example of methods capable of providing significant help in data analysis and real-time control, Classification and Regression Tree (CART) software is applied to the problem of regime identification, to discriminate automatically whether the plasma is in the L or H confinement mode.
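For the regime identification step, a classification tree of the CART family can be trained directly on global plasma parameters. The snippet below, using scikit-learn, is only in the spirit of that approach; the feature set, file names and tree settings are assumptions for illustration, not the JET configuration.

```python
# Illustrative CART-style L/H regime classifier (assumed data layout and files).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# X: one row per time slice of global plasma parameters (e.g. heating power,
# density, stored energy, D-alpha level); y: 0 = L-mode, 1 = H-mode.
X_train = np.load("features_train.npy")   # hypothetical file names
y_train = np.load("labels_train.npy")

tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50)
tree.fit(X_train, y_train)

# In (near) real-time use, one feature vector per time slice would be evaluated:
confinement_mode = tree.predict(X_train[:1])[0]   # 0 -> L-mode, 1 -> H-mode
```

A shallow tree of this kind is attractive for control applications because the resulting decision rules are explicit and cheap to evaluate online.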
Self-adaptive sampling rate data acquisition in JET’s correlation reflectometer
Data acquisition systems with self-adaptive sampling rate capabilities have been proposed as a solution to reduce the sheer amount of data collected in every discharge of present fusion devices. This paper discusses the design of such a system for use in the KG8B correlation reflectometer at JET. The system, which is based on the ITMS platform, continuously adapts the sample rate during the acquisition depending on the signal bandwidth. Data are acquired continuously at the expected maximum sample rate and transferred to a memory buffer in the host processor. Thereafter, the rest of the process is based on software. Data are read from the memory buffer in blocks and an intelligent decimation algorithm is applied to each block. The decimation algorithm determines the signal bandwidth of each block in order to choose the optimum sample rate for that block, and from there the decimation factor to be used. Memory buffers are used to adapt the throughput of the three main software modules (data acquisition, processing, and storage) following a typical producer-consumer architecture. The system optimizes the amount of data collected while preserving the same information. Design issues are discussed and results of the performance evaluation are presented.
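A minimal sketch of such a block-wise decimation loop is given below: estimate the occupied bandwidth of each block, pick the largest power-of-two decimation factor whose new Nyquist frequency still covers it, then decimate. The energy threshold, maximum factor and staged decimation are illustrative choices, not the values used in the KG8B system.

```python
# Hedged sketch of block-wise adaptive decimation (illustrative parameters).
import numpy as np
from scipy.signal import decimate

def occupied_bandwidth(block, fs, energy_fraction=0.99):
    """Frequency below which `energy_fraction` of the block's spectral energy lies."""
    spectrum = np.abs(np.fft.rfft(block)) ** 2
    cumulative = np.cumsum(spectrum) / spectrum.sum()
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    return freqs[min(np.searchsorted(cumulative, energy_fraction), len(freqs) - 1)]

def adaptive_decimate(block, fs, max_factor=64):
    """Largest power-of-two decimation factor whose Nyquist stays above the bandwidth."""
    bw = occupied_bandwidth(block, fs)
    factor = 1
    while factor * 2 <= max_factor and fs / (2.0 * factor * 2) > bw:
        factor *= 2
    out, remaining = block, factor
    while remaining > 1:              # decimate in stages of 2 for gentler filtering
        out = decimate(out, 2)
        remaining //= 2
    return out, factor
```

In the producer-consumer architecture described above, the acquisition thread fills the block buffer, this processing stage chooses and applies the factor, and the storage thread writes the decimated block together with its effective sample rate.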
Design of an advanced intelligent instrument with waveform recognition based on the ITMS platform
Searching for similar behavior in previous data plays a key role in fusion research, but can be quite challenging to implement from a practical point of view. This paper describes the design of an intelligent measurement instrument that uses similar waveform recognition systems (SWRS) to extract knowledge from the signals it acquires. The system is perceived as an Ethernet measurement instrument that makes it possible to acquire several waveforms simultaneously and to identify similar behaviors by searching in previous data using distributed SWRS. The implementation is another example of the advantages that local processing capabilities can provide in data acquisition applications.
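The retrieval step of a similar waveform recognition system can be pictured as reducing each waveform to a compact, normalized signature and running a nearest-neighbour query against a library of past discharges. The sketch below assumes this simple resample-and-normalize signature; the actual SWRS feature encoding and index structure are not specified in the abstract.

```python
# Illustrative waveform-similarity search (assumed signature and flat search).
import numpy as np

def waveform_signature(samples, n_points=64):
    """Resample to a fixed length and normalize so shapes can be compared across discharges."""
    idx = np.linspace(0, len(samples) - 1, n_points)
    sig = np.interp(idx, np.arange(len(samples)), samples)
    return (sig - sig.mean()) / (sig.std() + 1e-12)

def most_similar(query, library_signatures, shot_numbers, k=5):
    """Return the shot numbers of the k most similar stored waveforms."""
    d = np.linalg.norm(library_signatures - waveform_signature(query), axis=1)
    return [shot_numbers[i] for i in np.argsort(d)[:k]]
```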
Results of the JET real-time disruption predictor in the ITER-like wall campaigns
The impact of disruptions at JET became even more important with the replacement of the previous Carbon Fiber Composite (CFC) wall with a more fragile full-metal ITER-like wall (ILW). The development of robust disruption mitigation systems is crucial for JET (and also for ITER). Moreover, a reliable real-time (RT) disruption predictor is a prerequisite for any mitigation method. The Advanced Predictor Of DISruptions (APODIS) has been installed in the JET Real-Time Data Network (RTDN) for the RT recognition of disruptions. The predictor operates with the new ILW although it has been trained only with discharges belonging to campaigns with the CFC wall. Seven real-time signals are used to characterize the plasma status (disruptive or non-disruptive) at regular intervals of 1 ms. After the first three JET ILW campaigns (991 discharges), the success rate of the predictor is 98.36% (alarms are triggered on average 426 ms before the disruptions). The false alarm and missed alarm rates are 0.92% and 1.64%, respectively.
On the potential of physics-informed neural networks to solve inverse problems in tokamaks
Magnetic confinement nuclear fusion holds great promise as a source of clean and sustainable energy for the future. However, achieving net energy from fusion reactors requires a more profound understanding of the underlying physics and the development of efficient control strategies. Plasma diagnostics are vital to these efforts, but accessing local information often involves solving very ill-posed inverse problems. Regrettably, many of the current approaches to solving these problems rely on simplifying assumptions, sometimes inaccurate or not completely verified, with consequently imprecise outcomes. In order to overcome these challenges, the present study proposes employing physics-informed neural networks (PINNs) to tackle inverse problems in tokamaks. PINNs are a versatile type of neural network that can offer several benefits over traditional methods, such as their ability to handle incomplete physics equations, to cope with noisy data, and to operate mesh-independently. In this work, PINNs are applied to three typical inverse problems in tokamak physics: equilibrium reconstruction, interferometer inversion, and bolometer tomography. The reconstructions are compared with measurements from other diagnostics and correlated phenomena, and the results clearly show that PINNs can be easily applied to these types of problems, delivering accurate results. Furthermore, we discuss the potential of PINNs as a powerful tool for integrated data analysis. Overall, this study demonstrates the great potential of PINNs for solving inverse problems in magnetic confinement thermonuclear fusion and highlights the benefits of using advanced machine learning techniques for the interpretation of various plasma diagnostics.
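To illustrate the inverse-problem idea behind PINNs, the sketch below jointly fits a network to noisy measurements and learns an unknown physical coefficient through a physics-residual loss. The toy equation u'' + k u = 0, the network size and the optimizer settings are stand-ins chosen for the example; they do not represent the equilibrium, interferometry or bolometry operators treated in the paper.

```python
# Minimal generic PINN sketch (PyTorch): fit u(x) to noisy data while learning
# the unknown coefficient k in u''(x) + k * u(x) = 0. Everything here is illustrative.
import torch

torch.manual_seed(0)
x_data = torch.linspace(0, 1, 50).unsqueeze(1)
u_data = torch.sin(2.0 * x_data) + 0.01 * torch.randn_like(x_data)  # true k = 2**2 = 4

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
log_k = torch.nn.Parameter(torch.zeros(1))            # unknown coefficient, learned jointly
opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-3)

x_phys = torch.linspace(0, 1, 200).unsqueeze(1).requires_grad_(True)  # collocation points

for step in range(5000):
    opt.zero_grad()
    loss_data = torch.mean((net(x_data) - u_data) ** 2)               # data misfit
    u = net(x_phys)
    du = torch.autograd.grad(u, x_phys, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_phys, torch.ones_like(du), create_graph=True)[0]
    loss_phys = torch.mean((d2u + torch.exp(log_k) * u) ** 2)          # physics residual
    (loss_data + loss_phys).backward()
    opt.step()

print("recovered k:", torch.exp(log_k).item())  # should approach 4 after training
```

The same structure (data-misfit term plus differential-operator residual evaluated by automatic differentiation on collocation points) carries over to mesh-free reconstructions such as tomography, which is what makes PINNs attractive for these diagnostics.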
Alternative Definitions of Complexity for Practical Applications of Model Selection Criteria
Defining and quantifying complexity is one of the major challenges of modern science and contemporary societies. This task is particularly critical for model selection, which is aimed at properly identifying the most adequate equations to interpret the available data. The traditional solution of equating the complexity of the models with the number of their parameters is clearly unsatisfactory. Three alternative approaches are proposed in this work. The first estimates the flexibility of the proposed models to quantify their potential to overfit. The second interprets complexity as lack of stability and is implemented by computing the variations in the predictions due to uncertainties in their parameters. The third alternative focuses on assessing the consistency of extrapolation of the candidate models. All the upgrades are easy to implement, typically outperform the traditional versions of the model selection criteria, and constitute a good set of alternatives to be deployed depending on the priorities of the investigators and the characteristics of the application.
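As one possible reading of the second proposal (complexity as lack of stability), the sketch below perturbs the fitted parameters within their estimated covariance and uses the resulting spread of the predictions as the complexity term of a selection criterion. The specific weighting and the use of curve_fit are illustrative assumptions, not the formulation of the paper.

```python
# Hedged sketch: stability-based complexity penalty for model selection.
import numpy as np
from scipy.optimize import curve_fit

def stability_complexity(model, x, popt, pcov, n_samples=500, rng=None):
    """Mean std of predictions when parameters are sampled from their covariance."""
    rng = np.random.default_rng(rng)
    thetas = rng.multivariate_normal(popt, pcov, size=n_samples)
    preds = np.array([model(x, *theta) for theta in thetas])
    return preds.std(axis=0).mean()

def stability_criterion(model, x, y, p0):
    """Data misfit plus a stability-based complexity penalty (illustrative weighting)."""
    popt, pcov = curve_fit(model, x, y, p0=p0)
    residual = np.sum((y - model(x, *popt)) ** 2)
    n = len(x)
    return n * np.log(residual / n) + 2.0 * stability_complexity(model, x, popt, pcov)
```

Candidate models would then be ranked by this score, with the least stable (most complex) models penalized even when their parameter counts are identical.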
- …