1,048 research outputs found

    A machine learning approach based on generative topographic mapping for disruption prevention and avoidance at JET

    Predictive capabilities greater than 95% with very limited false alarms are demanding requirements for reliable disruption prediction systems in tokamaks such as JET or, in the near future, ITER. The prediction of an upcoming disruption must be provided sufficiently in advance to apply effective disruption avoidance or mitigation actions and prevent the machine from being damaged. In this paper, following the typical machine learning workflow, a generative topographic mapping (GTM) of the operational space of JET has been built using a set of disrupted and regularly terminated discharges. In order to build the predictive model, a suitable set of dimensionless, machine-independent, physics-based features has been synthesized, making use of 1D plasma profile information rather than simple 0-D time series. The use of such predictive features, together with the power of the GTM in fitting the model to the data, yields, in an unsupervised way, a 2D map of the multi-dimensional parameter space of JET, on which it is possible to identify a boundary separating the disruption-free region from the disruption region. In addition to aiding operational boundary studies, the GTM map can also be used for disruption prediction, exploiting the potential of the developed GTM toolbox to monitor the discharge dynamics. Following the trajectory of a discharge on the map through the different regions, an alarm is triggered depending on the disruption risk of those regions. The proposed approach has been evaluated on a training set and an independent test set, achieving very good performance with only one late detection and a limited number of false detections. The warning times are suitable for avoidance purposes and, more importantly, the detections are consistent with the physical causes and mechanisms that destabilize the plasma and lead to disruptions. Peer reviewed
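    The map-monitoring logic described above lends itself to a compact illustration. The following Python sketch is not the authors' GTM toolbox: `project_to_map` and `risk_map` are hypothetical stand-ins for the trained GTM projection and the per-region disruption risk estimated from training discharges.

```python
def monitor_discharge(feature_windows, project_to_map, risk_map, threshold=0.8):
    """Follow a discharge trajectory on the 2D map; return the first
    high-risk time index, or None if no alarm is ever raised."""
    for t, features in enumerate(feature_windows):
        i, j = project_to_map(features)   # grid cell of this time window on the map
        if risk_map[i][j] >= threshold:   # estimated disruption risk of that region
            return t                      # alarm raised at this window
    return None                           # no alarm: consistent with regular termination
```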

    Machine Learning and Deep Learning applications for the protection of nuclear fusion devices

    This Thesis addresses the use of artificial intelligence methods for the protection of nuclear fusion devices, with reference to the Joint European Torus (JET) Tokamak and the Wendelstein 7-X (W7-X) Stellarator. JET is currently the world's largest operational Tokamak and the only one operated with Deuterium-Tritium fuel, while W7-X is the world's largest and most advanced Stellarator. For the work on JET, research focused on the prediction of "disruptions", sudden terminations of plasma confinement. For the development and testing of machine learning classifiers, a total of 198 disrupted discharges and 219 regularly terminated discharges from JET were used. Convolutional Neural Networks (CNNs) were proposed to extract the spatiotemporal characteristics of plasma temperature, density and radiation profiles. Since the CNN is a supervised algorithm, a label must be explicitly assigned to each time window of the dataset during training. All segments belonging to regularly terminated discharges were labelled as 'stable'. For each disrupted discharge, the 'unstable' label was assigned by automatically identifying the pre-disruption phase using an algorithm developed during the PhD. The CNN performance has been evaluated using disrupted and regularly terminated discharges from a decade of JET experimental campaigns, from 2011 to 2020, showing the robustness of the algorithm. Concerning W7-X, the research involved the real-time measurement of heat fluxes on plasma-facing components. THEODOR is a code currently used at W7-X for computing heat fluxes offline; heat load control, however, requires fast heat flux estimation in real time. Part of the PhD work was therefore dedicated to refactoring and optimizing the THEODOR code, with the aim of speeding up calculation times and making it compatible with real-time use. In addition, a Physics Informed Neural Network (PINN) model was proposed to bring heat flux computation to GPUs for real-time implementation.
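    A minimal sketch of the labelling scheme described above, with hypothetical names; the actual pre-disruption phase detection algorithm developed in the Thesis is not reproduced here, only the resulting label assignment.

```python
def label_window(window_end, pre_disruption_start):
    """Label one time window; pre_disruption_start is None for
    regularly terminated discharges."""
    if pre_disruption_start is None:
        return "stable"       # any segment of a regularly terminated discharge
    if window_end >= pre_disruption_start:
        return "unstable"     # window falls in the automatically detected pre-disruption phase
    return "stable"           # early, still-stable phase of a disrupted discharge
```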

    Performance Comparison of Machine Learning Disruption Predictors at JET

    Reliable disruption prediction (DP) and disruption mitigation systems are considered indispensable for International Thermonuclear Experimental Reactor (ITER) operations and in view of next-step fusion reactors such as the DEMOnstration Power Plant (DEMO) and the China Fusion Engineering Test Reactor (CFETR). In the last two decades, a great number of DP systems have been developed using data-driven methods. The performance of DP models has improved over the years, both through a more appropriate choice of diagnostics and input features and through the availability of increasingly powerful data-driven modelling techniques. However, a direct comparison among the proposals has not yet been conducted. Such a comparison is mandatory, at least for the same device, to learn lessons from all these efforts and finally choose the best set of diagnostic signals and the best modelling approach. A first effort towards this goal is made in this paper, where different DP models are compared using the same performance indices on the same device. In particular, the performance of a conventional Multilayer Perceptron Neural Network (MLP-NN) model is compared with that of two more sophisticated models, based on Generative Topographic Mapping (GTM) and Convolutional Neural Networks (CNN), using the same real-time diagnostic signals from several experiments at the JET tokamak. The most common performance indices have been used to compare the different DP models, and the results are discussed in depth. The comparison confirms the soundness of all the investigated machine learning approaches and the chosen diagnostics, highlights the pros and cons of each model, and helps to consciously choose the approach that best matches the plasma protection needs.
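    The performance indices mentioned above are commonly defined along the lines of the sketch below; the names, the minimum useful warning time `t_min`, and the alarm bookkeeping are assumptions for illustration, not the paper's exact conventions.

```python
def dp_indices(alarm_times, disruption_times, n_false_alarms, n_safe, t_min=0.03):
    """Per disrupted discharge: alarm_times[i] is the alarm time (None = missed),
    disruption_times[i] the disruption time. t_min is the minimum useful warning
    time in seconds; n_safe is the number of regularly terminated discharges."""
    n_dis = len(alarm_times)
    missed = sum(a is None for a in alarm_times)
    tardy = sum(a is not None and (d - a) < t_min        # alarm too close to the disruption
                for a, d in zip(alarm_times, disruption_times))
    return {
        "successful_prediction_rate": (n_dis - missed - tardy) / n_dis,
        "missed_alarm_rate": missed / n_dis,
        "tardy_detection_rate": tardy / n_dis,
        "false_alarm_rate": n_false_alarms / n_safe,     # alarms on safe discharges
    }
```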

    Aircraft Flight Envelope Determination using Upset Detection and Physical Modeling Methods

    The development of flight control systems to enhance aircraft safety during periods of vehicle impairment or degraded operations has been the focus of extensive work in recent years. Conditions adversely affecting aircraft flight operations and safety may result from a number of causes, including environmental disturbances, degraded flight operations, and aerodynamic upsets. To enhance the effectiveness of adaptive and envelope-limiting control systems, it is desirable to examine methods for identifying the occurrence of anomalous conditions and for assessing the impact of these conditions on the aircraft operational limits. This paper describes initial work performed toward this end, examining the use of fault detection methods applied to the aircraft for identifying aerodynamic performance degradation, and model-based methods for envelope prediction. Results are presented in which a model-based fault detection filter is applied to the identification of aircraft control surface and stall departure failures/upsets. This application is supported by a distributed-loading aerodynamics formulation for the flight dynamics system reference model. Extensions for estimating the flight envelope under generalized aerodynamic performance degradation are also described.
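    The residual principle behind model-based fault detection can be illustrated with a generic sketch (this is not the paper's detection filter): compare measured outputs against a reference-model prediction and declare a fault only when the residual norm persists above a threshold.

```python
import numpy as np

def detect_fault(y_measured, y_model, threshold=0.5, persist=5):
    """Flag a fault when the output residual stays above `threshold`
    for `persist` consecutive samples; threshold/persist are illustrative."""
    residual = np.linalg.norm(y_measured - y_model, axis=1)  # per-sample residual norm
    above = residual > threshold
    for t in range(len(above) - persist + 1):
        if above[t:t + persist].all():    # sustained exceedance, not a single spike
            return t                      # fault declared at this sample index
    return None                           # no fault detected
```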

    A study and evaluation of image analysis techniques applied to remotely sensed data

    An analysis of phenomena causing nonlinearities in the transformation from Landsat multispectral scanner coordinates to ground coordinates is presented. Experimental results comparing RMS errors at ground control points indicated a slight improvement when a nonlinear (8-parameter) transformation was used instead of an affine (6-parameter) transformation. Using a preliminary ground truth map of a test site in Alabama covering the Mobile Bay area and six Landsat images of the same scene, several classification methods were assessed. A methodology was developed for automatic change detection using classification/cluster maps. A coding scheme was employed to generate change depiction maps indicating specific types of changes. Inter- and intraseasonal data of the Mobile Bay test area were compared to illustrate the method. An initial study of data compression was made by applying a Karhunen-Loève transform technique to a small section of the test data set. The second part of the report provides formal documentation of the several programs developed for the analyses and assessments presented.
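    The affine versus 8-parameter comparison admits a short worked example. The sketch below assumes the 8-parameter form is bilinear (adding x·y cross terms to the affine model); a projective mapping is another common 8-parameter choice, and the form actually used in the report is not specified here.

```python
import numpy as np

def fit_and_rms(xy_scanner, XY_ground, bilinear=False):
    """Least-squares fit of scanner (x, y) -> ground (X, Y) at ground control
    points; returns the RMS residual over both coordinates."""
    x, y = xy_scanner[:, 0], xy_scanner[:, 1]
    cols = [np.ones_like(x), x, y]        # affine: 3 coefficients per coordinate = 6
    if bilinear:
        cols.append(x * y)                # extra xy term: 4 per coordinate = 8
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, XY_ground, rcond=None)  # fits X and Y columns at once
    residuals = XY_ground - A @ coeffs
    return np.sqrt((residuals ** 2).mean())

# Hypothetical usage with GCP arrays of shape (n, 2):
# rms_affine   = fit_and_rms(gcp_scanner, gcp_ground)
# rms_bilinear = fit_and_rms(gcp_scanner, gcp_ground, bilinear=True)
```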

    Outlier detection in scatterometer data: Neural network approaches

    Satellite-borne scatterometers are used to measure backscattered microwave radiation from the ocean surface. These data may be used to infer surface wind vectors where no direct measurements exist. Inherent in the data are outliers, owing to aberrations on the water surface and measurement errors within the equipment. We present two techniques for identifying outliers using neural networks; the outliers may then be removed to improve models derived from the data. First, the generative topographic mapping (GTM) is used to create a probability density model; data with low probability under the model may be classed as outliers. In the second part of the paper, a sensor model with input-dependent noise is used, and outliers are identified based on their probability under this model. GTM was successfully modified to incorporate prior knowledge of the shape of the observation manifold; however, GTM could not learn the double-skinned nature of the manifold. Learning this double-skinned manifold necessitated the use of a sensor model that imposes strong constraints on the mapping. The results using GTM with a fixed noise level suggested that the noise level may vary as a function of wind speed. This was confirmed by experiments using a sensor model with input-dependent noise, where the variation in noise is most sensitive to the wind speed input. Both models successfully identified gross outliers, with the largest differences between models occurring at low wind speeds. © 2003 Elsevier Science Ltd. All rights reserved.
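    The first technique reduces to a density-thresholding test. Since GTM itself is not available in scikit-learn, the sketch below substitutes a Gaussian mixture as the density model; the part being illustrated is the thresholding logic, which flags the lowest-likelihood points as outliers.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_outliers(X, n_components=16, quantile=0.01):
    """Fit a density model to X and flag the lowest-likelihood points.
    n_components and quantile are illustrative choices."""
    model = GaussianMixture(n_components=n_components).fit(X)
    log_density = model.score_samples(X)          # per-point log-likelihood
    cutoff = np.quantile(log_density, quantile)   # e.g. lowest 1% flagged
    return log_density < cutoff                   # boolean outlier mask
```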