
    On the potential of physics-informed neural networks to solve inverse problems in tokamaks

    Magnetic confinement nuclear fusion holds great promise as a source of clean and sustainable energy for the future. However, achieving net energy from fusion reactors requires a more profound understanding of the underlying physics and the development of efficient control strategies. Plasma diagnostics are vital to these efforts, but accessing local information often involves solving severely ill-posed inverse problems. Unfortunately, many current approaches to these problems rely on simplifying assumptions, sometimes inaccurate or not completely verified, with consequently imprecise outcomes. To overcome these challenges, the present study proposes employing physics-informed neural networks (PINNs) to tackle inverse problems in tokamaks. PINNs are a versatile type of neural network that can offer several benefits over traditional methods, such as the capability of handling incomplete physics equations, of coping with noisy data, and of operating mesh-independently. In this work, PINNs are applied to three typical inverse problems in tokamak physics: equilibrium reconstruction, interferometer inversion, and bolometer tomography. The reconstructions are compared with measurements from other diagnostics and correlated phenomena, and the results clearly show that PINNs can be easily applied to these types of problems, delivering accurate results. Furthermore, we discuss the potential of PINNs as a powerful tool for integrated data analysis. Overall, this study demonstrates the great potential of PINNs for solving inverse problems in magnetic confinement thermonuclear fusion and highlights the benefits of using advanced machine learning techniques for the interpretation of various plasma diagnostics.
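    To convey the core idea of a PINN, the composite loss combining a data-mismatch term with a physics-residual term can be sketched as below. This is a conceptual illustration only: a toy ODE u'' + u = 0 stands in for the tokamak equations, and central finite differences stand in for the automatic differentiation a real PINN would use; all names and the equation are our own assumptions, not the paper's formulation.

```python
import math

def pinn_loss(u, xs_data, u_data, xs_phys, h=1e-3):
    """Composite PINN-style loss for a candidate solution u(x).

    Toy physics: u'' + u = 0 (an illustrative stand-in for the tokamak
    equations); the second derivative is approximated by central
    differences rather than automatic differentiation.
    """
    # Data term: mismatch with (possibly noisy) measurements
    data_loss = sum((u(x) - d) ** 2 for x, d in zip(xs_data, u_data)) / len(xs_data)

    # Physics term: squared ODE residual at collocation points
    def residual(x):
        u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
        return u_xx + u(x)

    phys_loss = sum(residual(x) ** 2 for x in xs_phys) / len(xs_phys)
    return data_loss + phys_loss

# u = sin satisfies u'' + u = 0 exactly, so both terms are ~0
xs = [0.1 * k for k in range(1, 20)]
loss = pinn_loss(math.sin, xs, [math.sin(x) for x in xs], xs)
```

    Training a PINN then amounts to minimizing such a loss over the network parameters; candidates that violate either the data or the physics are penalized.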

    On the Development of Information Theoretic Model Selection Criteria for the Analysis of Experimental Data (Complexity: Frontiers in Data-Driven Methods for Understanding, Prediction, and Control of Complex Systems, 2022)

    It can be argued that the identification of sound mathematical models is the ultimate goal of any scientific endeavour. On the other hand, particularly in the investigation of complex systems and nonlinear phenomena, discriminating between alternative models can be a very challenging task. Quite sophisticated model selection criteria are available, but their deployment in practice can be problematic. In this work, the Akaike Information Criterion is reformulated with the help of purely information theoretic quantities, namely the Gibbs-Shannon entropy and the Mutual Information. Systematic numerical tests have proven the improved performance of the proposed upgrades, including increased robustness against noise and the presence of outliers. The same modifications can be implemented to rewrite Bayesian statistical criteria as well, such as the Schwarz indicator, in terms of information-theoretic quantities, proving the generality of the approach and the validity of the underlying assumptions.
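    For reference, the standard (pre-reformulation) Akaike criterion for least-squares fits is easy to state; the sketch below illustrates generic AIC-based model ranking, not the entropy/mutual-information upgrade proposed in the paper, and the two candidate models are invented for illustration.

```python
import math

def aic_least_squares(n, k, rss):
    """Standard AIC for a Gaussian least-squares fit:
    AIC = n * ln(RSS / n) + 2k; lower values indicate a better model."""
    return n * math.log(rss / n) + 2 * k

# Two hypothetical candidates fitted to n = 100 points: a 3-parameter
# model, and a 6-parameter model with only a marginally smaller
# residual sum of squares.
n = 100
aic_simple = aic_least_squares(n, k=3, rss=10.0)
aic_complex = aic_least_squares(n, k=6, rss=9.8)
```

    The 2k penalty makes the simpler model win unless the extra parameters buy a substantial reduction in the residuals, which is exactly the trade-off the information-theoretic reformulation aims to make more robust.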

    Considerations on Stellarator's Optimization from the Perspective of the Energy Confinement Time Scaling Laws

    The Stellarator is a magnetic configuration considered a realistic candidate for a future commercial thermonuclear fusion reactor. The most widely accepted scaling law of the energy confinement time for the Stellarator is ISS04, which employs a renormalisation factor, fren, specific to each device and to each level of optimisation of individual machines. The fren coefficient is believed to account for higher-order effects not ascribable to variations in the 0D quantities, the only ones included in the database used to derive ISS04, the International Stellarator Confinement database. This hypothesis is put to the test with symbolic regression, which allows relaxing the assumption that the scaling laws must be in power-law monomial form. Specific and more general scaling laws for the different magnetic configurations have been identified and perform better than ISS04, even without relying on any renormalisation factor. The proposed new scalings typically present a coefficient of determination R2 around 0.9, which indicates that they essentially exploit all the information included in the database. More importantly, the different optimisation levels are correctly reproduced and can be traced back to variations in the 0D quantities. These results indicate that fren is not indispensable for interpreting the data, because the different levels of optimisation leave clear signatures in the 0D quantities. Moreover, the main mechanism dominating transport in reasonably optimised configurations is expected to be turbulence, as confirmed by a comparative analysis of the Tokamak in L mode, which shows very similar values of the energy confinement time. Not resorting to any renormalisation factor, the new scaling laws can also be extrapolated to the parameter regions of the most important reactor designs available.
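    The quality measure quoted above, the coefficient of determination R2, is computed as follows; the power-law scaling in the example is a purely hypothetical placeholder (its constant and exponents are invented for illustration, and are neither ISS04 nor the paper's new scalings).

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: the fraction of variance in the
    observed confinement times explained by the scaling's predictions."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def tau_e(a, b_t, n_e, p, c=0.1, alpha=2.0, beta=0.8, gamma=0.5, delta=-0.6):
    """Hypothetical power-law monomial scaling for the confinement time,
    tau_E = C * a^alpha * B^beta * n^gamma * P^delta (all values are
    illustrative placeholders, not fitted exponents)."""
    return c * a ** alpha * b_t ** beta * n_e ** gamma * p ** delta
```

    Symbolic regression generalizes this setting by searching over functional forms beyond the single monomial, while R2 remains the figure of merit for comparing candidates.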

    Image-based methods to investigate synchronization between time series relevant for plasma fusion diagnostics

    Advanced time series analysis and causality detection techniques have been successfully applied to the assessment of synchronization experiments in tokamaks, such as Edge Localized Mode (ELM) and sawtooth pacing. Lag synchronization is a typical strategy for fusion plasma instability control by pace-making techniques. The major difficulty in evaluating the efficiency of the pacing methods is the coexistence of the causal effects with the periodic or quasi-periodic nature of the plasma instabilities. In the present work, a set of methods based on the image representation of time series is investigated as tools for evaluating the efficiency of pace-making techniques. The main options rely on the Gramian Angular Field (GAF) and the Markov Transition Field (MTF), previously used for time series classification, and the Chaos Game Representation (CGR), employed for the visualization of large collections of long time series. The paper proposes an original variation of the Markov Transition Matrix, defined for a pair of time series. Additionally, a recently proposed method, based on the mapping of time series as cross-visibility networks and their representation as images, is included in this study. The performance of the methods is evaluated on synthetic data, and the methods are then applied to JET measurements.
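    To convey the flavour of the transition-matrix representation, the sketch below quantizes two series into a few amplitude bins and counts transitions from the bin of x at time t to the bin of y at time t+1. This is an illustrative simplification of a cross-series transition matrix, with uniform binning chosen as our own assumption; the paper's actual construction may differ.

```python
def quantize(series, n_bins):
    """Map each sample to a uniform amplitude bin index in [0, n_bins)."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0   # guard against a constant series
    return [min(int((v - lo) / width), n_bins - 1) for v in series]

def cross_transition_matrix(x, y, n_bins=4):
    """Count transitions bin(x[t]) -> bin(y[t+1]) for a pair of series;
    rendering these counts as a heat map gives an MTF-like image."""
    bx, by = quantize(x, n_bins), quantize(y, n_bins)
    m = [[0] * n_bins for _ in range(n_bins)]
    for t in range(len(x) - 1):
        m[bx[t]][by[t + 1]] += 1
    return m

# A simple periodic ramp correlated with itself
m = cross_transition_matrix([0, 1, 2, 3, 0, 1, 2, 3],
                            [0, 1, 2, 3, 0, 1, 2, 3])
```

    For synchronized series, the counts concentrate on a few off-diagonal cells reflecting the lag; for unrelated series, they spread more uniformly.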

    Alternative Detection of n = 1 Modes Slowing Down on ASDEX Upgrade

    Disruptions in tokamaks are very often associated with the slowing down of magneto-hydrodynamic (MHD) instabilities and their subsequent locking to the wall. To improve the understanding of the chain of events ending with a disruption, a statistically robust and physically based criterion has been devised to track the slowing down of modes with toroidal mode number n = 1 and mostly poloidal mode number m = 2, providing an alternative and earlier detection tool compared to simple threshold-based indicators. A database of 370 discharges of the axially symmetric divertor experiment upgrade (AUG) has been studied and the results compared with other indicators used in real time. The estimator is based on a weighted average of the fast Fourier transform of the perturbed radial n = 1 magnetic field caused by the rotation of the modes. The use of a carrier sinusoidal wave helps alleviate the spurious influence of non-sinusoidal magnetic perturbations induced by other instabilities such as Edge Localized Modes (ELMs). The indicator constitutes a good candidate for further studies, including machine learning approaches for mitigation and avoidance since, by deploying it systematically to evaluate the expected time of locking, multi-machine databases can be populated. Furthermore, it can be thought of as a contribution to a wider approach to dynamically tracking the chain of events leading to disruptions.
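    The core of such an estimator, a magnitude-weighted average frequency of the spectrum, can be sketched as follows. A plain O(N^2) DFT is used for self-containedness, and the details of the actual AUG indicator (windowing, carrier wave, thresholds) are deliberately omitted; the signal is invented for illustration.

```python
import cmath
import math

def weighted_frequency(signal, dt):
    """Magnitude-weighted mean frequency of a real signal: a simple
    spectral-centroid estimate of the dominant mode rotation frequency.
    As the mode slows down, this estimate drifts toward zero."""
    n = len(signal)
    num = den = 0.0
    for k in range(1, n // 2):  # positive frequencies only, skip DC
        x_k = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        mag = abs(x_k)
        num += (k / (n * dt)) * mag
        den += mag
    return num / den

# A pure 8-cycle sinusoid over 64 samples at dt = 1 s -> 0.125 Hz
sig = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
f_hat = weighted_frequency(sig, dt=1.0)
```

    Tracking this quantity over successive time windows yields the slowing-down trajectory of the mode before locking.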

    Effects of environmental conditions on COVID-19 morbidity as an example of multicausality: a multi-city case study in Italy

    The coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), broke out in December 2019 in Wuhan city, in the Hubei province of China. Since then, it has spread practically all over the world, disrupting many human activities. In temperate climates, overwhelming evidence indicates that its incidence increases significantly during the cold season. Italy was one of the first nations in which COVID-19 reached epidemic proportions, already at the beginning of 2020. There is therefore enough data to perform a systematic investigation of the correlation between the spread of the virus and the environmental conditions. The objective of this study is the investigation of the relationship between the virus diffusion and the weather, including temperature, wind, humidity and air quality, before the rollout of any vaccine and including rapid variations of the pollutants (not only their long-term effects, as reported in the literature). Regarding the methodology, given the complexity of the problem and the sparse data, robust statistical tools based on ranking (Spearman and Kendall correlation coefficients) and innovative dynamical system analysis techniques (recurrence plots) have been deployed to disentangle the different influences. In terms of results, the evidence indicates that, even if temperature plays a fundamental role, the morbidity of COVID-19 depends also on other factors. At the aggregate level of major cities, air pollution and the environmental quantities affecting it, particularly the wind intensity, have a non-negligible effect. This evidence should motivate a rethinking of the public policies related to the containment of this type of airborne infectious disease, particularly information gathering and traffic management.
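    Of the rank-based tools mentioned, the Spearman coefficient is simply the Pearson correlation computed on the ranks of the data, which makes it robust to outliers and monotone nonlinearities. A minimal version (assuming no tied values; real data would require average ranks for ties) is:

```python
def ranks(values):
    """Rank of each value (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

    Any monotonically increasing relationship, however nonlinear, yields a coefficient of exactly 1, which is why rank-based measures are preferred for sparse and noisy epidemiological data.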

    An Unsupervised Spectrogram Cross-Correlation Method to Assess ELM Triggering Efficiency by Pellets

    The high confinement mode (H-mode) is considered the optimal regime for the production of energy through nuclear fusion for industrial purposes, since it increases the energy confinement time of the plasma roughly by a factor of two. Consequently, it has been selected at the moment as the standard scenario for the next generation of devices, such as ITER. However, pressure-driven edge instabilities, known as edge localized modes (ELMs), are a distinct feature of this plasma regime. Their extrapolated thermal and particle peak loads on the plasma-facing components (PFCs) of the next generation of devices are expected to be so high as to damage such structures, compromising the normal operation of the reactors themselves. Consequently, the induced loads have to be controlled; this can be achieved by mitigating ELMs. One possibility lies in increasing the ELM frequency to lower the loads on the PFCs. As already demonstrated at JET, pellet pacing of ELMs is considered one of the most promising techniques for this purpose, and its optimization is therefore of great interest for present and future operations of nuclear fusion facilities. In this work, we propose a method to extract the primary pieces of information needed to perform statistics and to assess and characterize the pacing efficiency. The method, tested on JET data, is based on the clustering (k-means) of convoluted signals, using so-called spectrogram cross-correlation, between the measured pellet and ELM time traces. Results have also been obtained by taking advantage of a new type of diagnostic for measuring the ELM dynamics, based on synthetic diamond sensors, faster than the standard spectroscopic cameras used at JET.
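    A greatly simplified stand-in for the classification step: cluster pellet-to-ELM delays with a one-dimensional k-means (k = 2), so that a short-delay population (plausibly triggered ELMs) separates from a long-delay one (plausibly spontaneous). The paper's method works on spectrogram cross-correlations rather than raw delays, and the delay values below are invented for illustration.

```python
def kmeans_1d(values, n_iter=50):
    """Two-cluster 1-D k-means (Lloyd's algorithm) with deterministic
    initialisation at the extremes; assumes at least two distinct values."""
    c0, c1 = min(values), max(values)
    for _ in range(n_iter):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

# Hypothetical pellet-to-ELM delays (ms): a short-delay and a
# long-delay population
delays = [1.0, 1.2, 0.9, 1.1, 10.0, 11.0, 10.5]
short_c, long_c = kmeans_1d(delays)
```

    The fraction of events falling in the short-delay cluster then serves as a simple proxy for the triggering efficiency of the pacing.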

    Maximum likelihood bolometry for ASDEX upgrade experiments

    Bolometry is an essential diagnostic for calculating the power balance and for understanding different physical aspects of tokamak experiments. The reconstruction method based on the Maximum Likelihood (ML) principle, developed initially for JET, has been implemented for ASDEX Upgrade. Due to the availability of only a limited number of views, the reconstruction problem is mathematically ill-posed. A regularizing procedure, based on the assumption of smoothness along the magnetic surfaces given by the plasma equilibrium, must therefore also be implemented. A new anisotropic smoothing technique, which acts along locally oriented kernels, has been implemented. The performance of the method has been evaluated, in terms of shapes, resolution and the derived radiated power, and compared with the bolometry method used routinely on ASDEX Upgrade. The specific advantage of the ML reconstruction algorithm is the possibility of assessing the uncertainties of the reconstruction and of deriving confidence intervals in the emitted radiation levels. The importance of this capability is illustrated.

    Dust tracking techniques applied to the STARDUST facility: First results

    An important issue related to future nuclear fusion reactors fuelled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is on the order of microns (between 0.1 and 1000 μm). Almost all of this dust remains in the vacuum vessel (VV). This radiological dust can be re-suspended in case of a LOVA (loss of vacuum accident), and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermofluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments, such as ITER, in case of a LOVA. The dust used inside the STARDUST facility presents particle sizes and physical characteristics comparable with those created inside the VV of nuclear fusion experiments. In this facility, an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of a LOVA in ITER and suggested by the General Safety and Security Report, ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing, with the objective of determining the velocity field values of the dust re-suspended during a LOVA.
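    The velocity estimation rests on locating the displacement that maximizes the correlation between successive frames; a minimal one-dimensional version of that correlation search is sketched below (the real processing is two-dimensional and includes windowing and filtering, and the intensity profiles here are invented for illustration).

```python
def best_shift(frame_a, frame_b, max_shift):
    """Displacement (in samples) that maximizes the cross-correlation
    between two intensity profiles taken at successive frames."""
    best, best_score = 0, float("-inf")
    n = len(frame_a)
    for s in range(-max_shift, max_shift + 1):
        score = sum(frame_a[i] * frame_b[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best, best_score = s, score
    return best

# A bright dust 'particle' at pixel 5 moves to pixel 8 in the next frame
a = [0.0] * 16
a[5] = 1.0
b = [0.0] * 16
b[8] = 1.0
shift = best_shift(a, b, max_shift=5)
```

    Multiplying the recovered shift by the pixel size and the frame rate converts it into a velocity, from which the full velocity field is assembled window by window.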