6 research outputs found

    Manifold learning techniques and statistical approaches applied to the disruption prediction in tokamaks

    Nuclear fusion stands out as a clean energy source capable of meeting the energy needs of the entire world in the future. Today, several experimental fusion devices are operating to optimize the fusion process, confining the plasma by means of magnetic fields. Magnetic plasma confinement can be achieved with linear cylindrical or toroidal configurations, e.g., the stellarator, the reversed-field pinch, or the tokamak. Among the magnetic confinement concepts explored so far, the tokamak is considered the most reliable. Unfortunately, the tokamak is vulnerable to instabilities that, in the most severe cases, can lead to a loss of magnetic confinement; this phenomenon is called a disruption. Disruptions are dangerous and irreversible events during which the plasma energy is suddenly released onto the first-wall components and the vacuum vessel, producing runaway electrons, large mechanical forces, and intense thermal loads that may severely damage the vessel wall and the plasma-facing components. Present devices are designed to withstand disruptive events, so disruptions are generally tolerable today; indeed, one of their aims is the investigation of disruptive boundaries in the operational space. However, future devices such as ITER, which must operate at high density and high plasma current, will tolerate only a limited number of disruptions. For these reasons, disruptions in tokamaks must be avoided and, when a disruption is unavoidable, its severity must be minimized. Finding appropriate mitigating actions to reduce damage to the reactor components is therefore accepted as a fundamental objective in the fusion community. The physical phenomena that lead a plasma to disrupt are non-linear and very complex.
The present understanding of disruption physics does not yet provide an analytical model describing the onset of these instabilities, so the main effort has been devoted to developing data-based methods. In this thesis the development of a reliable disruption prediction system has been investigated using several data-based approaches, starting from the strengths and drawbacks of the methods proposed in the literature. The literature reports numerous studies on disruption prediction using data-based models such as neural networks. Even if the results are encouraging, they are not sufficient to explain the intrinsic structure of the data used to describe the complex behavior of the plasma. Recent studies have demonstrated the urgency of developing sophisticated control schemes that allow the operating limits of a tokamak to be explored in order to increase reactor performance. For this reason, one goal of this thesis is to identify and develop tools for the visualization and analysis of multidimensional data from the numerous plasma diagnostics available in the machine database. Identifying the boundaries of the disruption-free region of the plasma parameter space would increase our knowledge of disruptions. A viable approach to understanding disruptive events consists of identifying the intrinsic structure of the data used to describe the plasma operational space; manifold learning algorithms attempt to identify these structures in order to find a low-dimensional representation of the data. Data for this thesis come from ASDEX Upgrade (AUG), a medium-size tokamak experiment located at the Max-Planck-Institut für Plasmaphysik (IPP), Garching bei München, Germany; at present it is the largest tokamak in Germany. Among the available methods, attention has mainly been devoted to data clustering techniques.
Data clustering consists of grouping a set of data so that data in the same group (cluster) are more similar to each other than to those in other groups. Owing to its inherent suitability for visualization, the most popular and widely used clustering technique, the Self-Organizing Map (SOM), was investigated first. The SOM extracts information from the multidimensional operational space of AUG using seven plasma parameters from successfully terminated (safe) and disruption-terminated (disrupted) pulses. Data to train and test the SOM were extracted from AUG experiments performed between July 2002 and November 2009. The SOM made it possible to display the AUG operational space and to identify regions with a high risk of disruption (disruptive regions) and regions with a low risk (safe regions). Besides visualization, the SOM can also be used to monitor the time evolution of the discharges during an experiment. Thus, the SOM has been used as a disruption predictor by introducing a suitable criterion based on the trend of the trajectories on the map through the different regions. When a plasma configuration with a high risk of disruption is recognized, a disruption alarm is triggered, allowing disruption avoidance or mitigation actions to be performed. Data-based models such as the SOM are affected by the so-called "ageing effect", i.e., the degradation of predictor performance over time. It arises because, during the operation of the predictor, new data may come from experiments different from those used for training. To reduce this effect, retraining of the predictor has been proposed: a new training procedure in which the plasma configurations from more recent experimental campaigns are added to the training set.
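The SOM-based mapping and alarm scheme outlined above can be illustrated with a minimal NumPy sketch. This is not the thesis implementation: the grid size, decay schedules, and feature dimensionality are illustrative stand-ins for the seven AUG plasma parameters.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal Self-Organizing Map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))           # unit weight vectors
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)  # unit grid positions
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            t = step / n_steps
            lr = lr0 * (1 - t)                  # decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-3     # decaying neighborhood radius
            # best-matching unit (BMU): unit whose weights are closest to x
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU on the 2-D grid
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
            step += 1
    return w

def bmu_of(w, x):
    """Project a sample onto the map: return its best-matching unit's grid cell."""
    d = np.linalg.norm(w - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

In the setting described above, samples from safe and disrupted pulses would be projected with `bmu_of`; cells reached mostly by disrupted samples mark the disruptive regions, and an alarm fires when a discharge trajectory enters them.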
Retraining supplies new information to the model in order to increase its prediction performance. Another drawback of the SOM, common to all data-based models proposed in the literature, is the need for a dedicated set of experiments terminated by a disruption in order to build the predictive model. Indeed, future fusion devices like ITER will tolerate only a limited number of disruptive events, so such a disruption database will not be available. To overcome this shortcoming, a disruption prediction system for AUG built using only input signals from safe pulses has been implemented. The predictor is based on a Fault Detection and Isolation (FDI) approach. FDI is an important and active research field that makes it possible to monitor a system and determine when a fault occurs. The majority of model-based FDI procedures rely on a statistical analysis of residuals. Given an empirical model identified on a reference dataset obtained under Normal Operating Conditions (NOC), the discrepancies between new observations and the values estimated by the NOC model (the residuals) are computed. The residuals are treated as a random process with known statistical properties; if a fault occurs, a change in these properties is detected. In this thesis, safe pulses are taken as the normal operating conditions of the process and disruptions as the fault state, so only safe pulses are used to train the NOC model. To allow a graphical representation of the pulse trajectories, only three plasma parameters have been used to build the NOC model. By monitoring the time evolution of the residuals and introducing an alarm criterion based on a suitable threshold on the residual values, the NOC model properly identifies an incoming disruption. Data for training and testing the NOC model were extracted from AUG experiments executed between July 2002 and November 2009.
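The residual-monitoring idea can be sketched as follows, assuming (purely for illustration) a linear NOC model that predicts one signal from the others by least squares; the thesis's actual NOC model and alarm criterion are not reproduced here.

```python
import numpy as np

def fit_noc_model(safe_data, target_col):
    """Fit a least-squares model of one signal from the others on safe pulses only.
    Returns the coefficients and the standard deviation of the NOC residuals."""
    X = np.delete(safe_data, target_col, axis=1)
    X = np.column_stack([X, np.ones(len(X))])   # intercept term
    y = safe_data[:, target_col]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.std()

def residual_alarm(coef, sigma, new_data, target_col, k=3.0):
    """Flag samples whose residual exceeds k standard deviations of the NOC residuals."""
    X = np.delete(new_data, target_col, axis=1)
    X = np.column_stack([X, np.ones(len(X))])
    resid = new_data[:, target_col] - X @ coef
    return np.abs(resid) > k * sigma
```

A discharge whose residuals persistently trip the threshold would be treated as approaching the fault (disruptive) state.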
Determining a specific disruptive phase for each disruptive discharge is a relevant issue in understanding disruptive events. Up to now, disruption precursors at AUG have been assumed to appear within a fixed time window, the last 45 ms of every disrupted discharge. Such a fixed temporal window can limit prediction performance, since it generates ambiguous information for disruptions whose disruptive phase differs from 45 ms. In this thesis, the Mahalanobis distance is applied to define a specific disruptive phase for each disruption. In particular, a different length of the disruptive phase was selected for each disrupted pulse in the training set by labeling each sample as safe or disruptive depending on its Mahalanobis distance from the set of safe discharges. With this new training set, the operational space of AUG was then mapped using Generative Topographic Mapping (GTM). The GTM is inspired by the SOM algorithm and aims to overcome its limitations. The GTM has been investigated in order to identify regions with a high risk of disruption and those with a low risk; for comparison, a second SOM was built, and GTM and SOM were then tested as disruption predictors. Data for training and testing the SOM and the GTM were extracted from AUG experiments executed from May 2007 to November 2012. The last method studied and applied in this thesis is the logistic regression model (Logit). Logistic regression is a well-known statistical method for analyzing problems with dichotomous dependent variables. In this study, the Logit model estimates the probability that a generic sample belongs to the non-disruptive or the disruptive phase. The time evolution of the Logit model output (LMO) is used as a disruption proximity index by introducing a suitable threshold.
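The Mahalanobis-distance labeling described above admits a compact sketch; the threshold value and the three-feature data here are illustrative assumptions, not the thesis's calibrated choices.

```python
import numpy as np

def mahalanobis_labels(safe_samples, pulse_samples, threshold):
    """Label each sample of a disrupted pulse as safe (False) or disruptive (True)
    according to its Mahalanobis distance from the distribution of safe samples."""
    mu = safe_samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(safe_samples, rowvar=False))
    diff = pulse_samples - mu
    # squared Mahalanobis distance of every sample, d^2 = diff . C^-1 . diff
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return np.sqrt(d2) > threshold
```

The onset of a pulse's disruptive phase can then be taken as the first sample after which the labels remain disruptive, giving each discharge its own disruptive-phase length.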
Data for training and testing the Logit models were extracted from AUG experiments executed from May 2007 to November 2012, with disruptive samples selected through the Mahalanobis distance criterion. Finally, in order to interpret the behavior of the data-based predictors, a manual classification of disruptions was performed for experiments carried out from May 2007 to November 2012. The manual classification was based on a visual analysis of several plasma parameters for each disruption. Moreover, the specific chains of events were detected and used to classify disruptions and, when possible, the same classes introduced for JET are adopted.
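A minimal version of the Logit proximity index can be sketched with plain NumPy gradient descent; the features, training schedule, and alarm threshold are illustrative, not the statistical tooling actually used in the thesis.

```python
import numpy as np

def fit_logit(X, y, lr=0.1, epochs=500):
    """Logistic regression by gradient descent: y = 1 disruptive, y = 0 safe."""
    Xb = np.column_stack([X, np.ones(len(X))])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))        # modeled disruption probability
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of the log-loss
    return w

def logit_output(w, X):
    """Disruption proximity index: modeled probability of the disruptive phase."""
    Xb = np.column_stack([X, np.ones(len(X))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))
```

Monitoring `logit_output` along a discharge and triggering an alarm when it crosses a threshold mirrors the use of the LMO as a proximity index.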


    Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Magnetically confined fusion plasmas pose several data analysis challenges due to massive data sets, substantial measurement uncertainty, stochasticity, high data dimensionality, and often nonlinear interactions between measured quantities. Recently, methods from the fields of machine learning and probability theory - some standard, some more advanced - have come to play an increasingly important role in analyzing data from fusion experiments. The capability offered by such methods to efficiently extract, possibly in real time, additional information from the data that is not immediately apparent to human experts has attracted the attention of an increasing number of researchers. In addition, innovative methods for real-time data processing can play an important role in plasma control, ensuring safe and reliable operation of the machine. Pattern recognition is a discipline within the information sciences concerned with the exploration of structure in (multidimensional) data sets using computer-based methods and algorithms. In this doctoral work, pattern recognition techniques are developed and applied to data from tokamak plasmas in order to contribute to a systematic analysis of edge-localized modes (ELMs). ELMs are magnetohydrodynamic (MHD) instabilities occurring in the edge region of high-confinement (H-mode) fusion plasmas. The type I ELMy H-mode is the reference scenario for operation of the next-step fusion device ITER. On the one hand, ELMs have a beneficial effect on plasma operation through their role in impurity control. On the other hand, ELMs eject energy and particles from the plasma and, in ITER, large unmitigated ELMs are expected to cause intolerable heat loads on the plasma-facing components (PFCs). In interpreting experiments focused on ELM understanding and control, a significant challenge lies in handling the measurement uncertainties and the inherent stochasticity of ELM properties.
In this work, we employ probabilistic models (distributions) for a quantitative data description geared towards an enhanced systematization of ELM phenomenology. We start from the point of view that the fundamental object resulting from the observation of a system is a probability distribution, with every single measurement providing a sample from this distribution. We argue that, particularly for richly stochastic phenomena like ELMs, the probability distribution of a physical quantity contains significantly more information than a mere average. Consequently, in exploring the patterns emerging from the various ELM regimes and relations, we need methods that can handle the intrinsically probabilistic nature of the data. The original contributions of this work are twofold. First, several novel pattern recognition methods in non-Euclidean spaces of probability distribution functions (PDFs) are developed and validated. The second main contribution lies in the application of these and other techniques to a systematic analysis of ELMs in tokamak plasmas. Regarding the methodological aims of the work, we employ the framework of information geometry to develop pattern visualization and classification methods in spaces of probability distributions. In information geometry, a family of probability distributions is considered as a Riemannian manifold: every point on the manifold represents a single PDF and the distribution parameters provide local coordinates on the manifold. The Fisher information plays the role of a Riemannian metric tensor, enabling the calculation of geodesic curves on the surface. The length of such a curve yields the geodesic distance (GD) on probabilistic manifolds, which is a natural similarity (distance) measure between PDFs. Equipped with a suitable distance measure, we extend several distance-based pattern recognition methods to the manifold setting.
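As a concrete one-parameter illustration of the geodesic distance (chosen here for its simple closed form, not taken from the thesis): for exponential distributions Exp(λ), the Fisher information is I(λ) = 1/λ², so the line element is ds = |dλ|/λ and the geodesic distance between two rates reduces to the absolute log-ratio.

```python
import numpy as np

def fisher_rao_exponential(lam1, lam2):
    """Fisher-Rao geodesic distance between Exp(lam1) and Exp(lam2).
    With I(lam) = 1/lam**2, integrating ds = |d lam| / lam along the
    one-dimensional manifold gives |log(lam1) - log(lam2)|."""
    return abs(np.log(lam1) - np.log(lam2))
```

Unlike the Euclidean distance |λ1 - λ2|, this distance is scale-invariant: Exp(1) and Exp(2) are exactly as far apart as Exp(10) and Exp(20), which is the kind of behavior that makes the GD a natural measure between distributions.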
The extended methods include k-nearest neighbor (kNN) and conformal predictor (CP) methods for classification, as well as multidimensional scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids, while PGC takes the shape of the class on the manifold into account by determining the minimum distance to the principal geodesic of each class. The methods are validated by applying them to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove to be computationally efficient and highly accurate, and they clearly demonstrate the adequacy of the GD, and its superiority over the Euclidean distance, for comparing PDFs. This also demonstrates the utility and adaptability of the developed methods for a wide range of applications beyond ELMs, which are the prime focus of this work. The second main goal of the work targets ELM analysis on three fronts, using pattern recognition and probabilistic modeling. (i) We first concentrate on the visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. Such maps can provide physicists and machine operators with a convenient and useful tool for plasma monitoring and for studying data patterns reflecting key regimes and their underlying physics. In particular, GD-based MDS is used to represent the complete distributions of the multidimensional data characterizing the operational space of ELMs on two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified, and the maps enable the tracking of trends in plasma parameters across the operational space.
It is shown that the maps can also be used, with reasonable accuracy, to predict the values of the plasma parameters at a given point in the operational space. (ii) Our second application concerns fast, standardized and automated classification of ELM types. ELM types have so far been identified and characterized on an empirical and phenomenological basis. The presented classification schemes are aimed at complementing this phenomenological characterization with standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort required of ELM experts in identifying ELM types. To this end, different classification paradigms (parametric and non-parametric) are explored and put to use. Discriminant analysis (DA) is used to determine a linear separation boundary between type I and type III ELMs in terms of global plasma parameters, which can then be used for the prediction of ELM types as well as for the study of ELM occurrence boundaries and ELM physics. However, DA makes an assumption about the underlying class distribution and presently cannot be applied in spaces of probability distributions, leading to a sub-optimal treatment of stochasticity. This is circumvented by the use of GD-based CP and kNN classifiers. CP provides estimates of its own accuracy and reliability, and kNN is a simple yet powerful classifier of ELM types. It is shown that a classification based on the distribution of ELM properties, namely the inter-ELM time intervals and the distribution of global plasma parameters, is more informative and accurate than one based on average parameter values. (iii) Finally, the correlation between ELM energy loss (ELM size) and ELM waiting times (inverse ELM frequency) is studied for individual ELMs in a set of plasmas from the JET tokamak upgraded with the ITER-like wall (ILW).
Typically, ELM control methods rely on the empirically observed inverse dependence of the average ELM energy loss on the average ELM frequency, even though ELM control is targeted at reducing the size of individual ELMs and not the average ELM loss. The analysis finds that for individual ELMs the correlation between ELM energy loss and waiting time varies from zero to a moderately positive value. A comparison is made with the results from a set of carbon-wall (CW) JET plasmas and nitrogen-seeded ILW JET plasmas. A high correlation between ELM energy loss and waiting time, comparable to that of CW plasmas, is found only in nitrogen-seeded ILW plasmas. Furthermore, most of the unseeded JET ILW plasmas have ELMs that are followed by a second phase referred to as the slow transport event (STE). The effect of the STEs on the distribution of ELM durations is studied, as well as their influence on the correlation between ELM energy loss and waiting times. This analysis has a clear outcome for the optimization of ELM control methods, while presenting insights for an improved physics understanding of ELMs.
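The GD-based kNN classification of ELM types mentioned under (ii) can be sketched as a kNN rule with a pluggable distance, so that a geodesic distance between fitted distributions replaces the Euclidean one; the distance function, data, and type labels below are purely illustrative.

```python
import numpy as np

def knn_predict(train_points, train_labels, query, k=3, dist=None):
    """k-nearest-neighbor classification with a pluggable distance function,
    defaulting to the Euclidean distance."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    d = np.array([dist(p, query) for p in train_points])
    nearest = np.asarray(train_labels)[np.argsort(d)[:k]]     # k closest labels
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                          # majority vote
```

Passing a geodesic distance between distribution parameters (e.g. the log-ratio distance between fitted rate parameters of inter-ELM waiting-time distributions, used here as a hypothetical example) turns this into a manifold kNN classifier of the kind the work validates.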
Mustererkennung ist eine Disziplin der Informationstheorie welche sich mit der Erforschung von Strukturen in multidimensionalen DatensĂ€tzen durch computergestĂŒtzte Methoden und Algorithmen beschĂ€ftigt. In dieser Doktorarbeit werden Methoden der Mustererkennung auf Daten von Tokamakexperimenten fĂŒr eine systematische Analyse von edge-localized modes (ELMs) angewendet. ELMs sind magnetohydrodynamische (MHD) InstabilitĂ€ten die am Plasmarand in ‘high-confinement‘ (H-mode) Fusionsplasmen auftreten. Die ‘Typ I ELMy H-mode' ist das Referenz-Betriebsszenario fĂŒr das zukĂŒnftige ITER Experiment. ELMs spielen einerseits eine positive Rolle fĂŒr den Plasmabetrieb da sie zur Verunreinigungskontrolle beitragen. Andererseits werfen ELMs Teilchen und Energie aus dem Plasma und könnten daher in ITER die IntegritĂ€t der ersten Wand gefĂ€hrden. Eine signifikante Herausforderung bei der Interpretation von Experimenten welche sich mit dem VerstĂ€ndnis und der Kontrolle von ELMs beschĂ€ftigen liegt in der Behandlung der Messunsicherheiten sowie der inhĂ€renten StochastizitĂ€t der ELM Parameter. In der vorliegenden Arbeit werden probabilistische Modelle (Verteilungen) zur quantitativen Beschreibung der Daten mit dem Ziel einer verbesserten systematischen Einteilung der ELM-PhĂ€nomenologie verwendet. Dabei wird davon ausgegangen, dass die fundamentale GrĂ¶ĂŸe eines Systems eine Wahrscheinlichkeitsverteilung ist, wobei jede Einzelmessung eine Stichprobe dieser Verteilung darstellt. Dabei wird angenommen dass, im Besonderen fĂŒr stark stochastische Ereignisse wie ELMs, die Wahrscheinlichkeitsverteilung der physikalischen Parameter deutlich mehr Information enthĂ€lt als deren Mittelwerte. Folglich erfordert die Erforschung der Struktur der unterschiedlichen ELM Regimes Methoden, welche die intrinsisch stochastische Natur der Daten berĂŒcksichtigen kann. 
This thesis makes two fundamentally new contributions. First, novel structure-recognition methods in non-Euclidean spaces of probability distributions are developed and validated. The second principal contribution lies in the application of these and other methods to a systematic analysis of ELMs in tokamak plasmas. From a methodological point of view, information geometry is employed to develop methods for pattern recognition and classification in spaces of probability distributions. In information geometry, a family of probability distributions is treated as a Riemannian manifold. Each point on the manifold represents a probability distribution, and the distribution parameters serve as local coordinates on the manifold. The Fisher information plays the role of the Riemannian metric tensor and allows geodesic curves on the manifold to be computed. The length of such a curve yields the geodesic distance (GD) on the manifold, which is a natural measure of the distance between distribution functions. Equipped with this suitable distance measure, several distance-based pattern-recognition methods are applied on the manifold. These include the k-nearest-neighbor (kNN) and conformal predictor (CP) classification methods, as well as multidimensional scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization aimed at dimensionality reduction. Furthermore, two new classification methods are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimal geodesic distance to the centroid of the data, while PGC accounts for the shape of a class on the manifold by measuring the distance to each class's principal geodesic.
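As a concrete illustration of the geodesic-distance machinery described above: for univariate normal distributions the Fisher-Rao geodesic distance has a closed form (the parameter manifold is a hyperbolic plane), which suffices to sketch a toy distance-to-centroid (D2C) decision rule. This is a minimal sketch under simplifying assumptions — each class is summarized by a single hypothetical (mean, standard deviation) point, and the centroid values are invented — not the thesis implementation.

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao geodesic distance between the univariate
    normals N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (4.0 * sigma1 * sigma2))

def classify_d2c(sample, centroids):
    """Toy D2C rule: assign the (mu, sigma) sample to the class whose
    representative point is geodesically closest."""
    return min(centroids,
               key=lambda c: fisher_rao_gaussian(*sample, *centroids[c]))

# Hypothetical per-class representatives of some ELM property
centroids = {"type-I": (1.0, 0.5), "type-III": (4.0, 1.5)}
print(classify_d2c((1.2, 0.6), centroids))  # prints "type-I"
```

The factor of sqrt(2) and the arccosh argument come from the hyperbolic-plane form of the Fisher metric for Gaussians, ds^2 = (d mu^2 + 2 d sigma^2) / sigma^2; identical distributions give distance zero.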
These methods are validated by applying them to the classification and reconstruction of colored texture images in the wavelet domain. Both methods turn out to be computationally efficient and highly accurate, with the geodesic distance clearly outperforming the Euclidean distance, confirming it as an appropriate measure for comparing distribution functions. This also demonstrates the suitability of the developed methods for a wide range of applications beyond the field of ELMs, which is the main focus of this work. The second main objective of the thesis is the analysis of ELMs by means of pattern recognition and probabilistic modeling in three areas. (i) First, the visualization of ELM properties is addressed by constructing maps that project multidimensional ELM data. Such maps can provide physicists and machine operators with a useful tool for monitoring the plasma discharge, and they also serve in studies of data patterns that characterize the principal regimes and their underlying physics. In particular, GD-based MDS is used to represent the full distribution of the multidimensional data describing ELM occurrence in two-dimensional maps. Clusters in which type-I and type-III ELMs occur are identified, and the map makes it possible to discern trends in the variation of plasma parameters across the parameter space. It is shown that these maps can also be used to predict the plasma parameters for a given point in the operational space. (ii) A second application concerns a fast, standardized classification of the ELM type. ELM types have so far been identified on an empirical-phenomenological basis.
The classification schemes presented here complement the phenomenological description with standardized methods that are less prone to subjective perception and interpretation, and they are also intended to reduce the effort required to determine the ELM type. Several classification methods, both parametric and non-parametric, are investigated and deployed. Discriminant analysis (DA) is used to determine a linear boundary between type-I and type-III ELMs in terms of global plasma parameters, which is then used both to predict the ELM type and to study the regions in which the different ELM types occur. However, DA relies on an assumption about the underlying class distributions and, in its current form, cannot be applied to spaces of distribution functions, which leads to an inadequate treatment of the stochasticity. This is remedied by using GD-based CP and kNN classifiers. CP provides an estimate of its own accuracy and reliability, while kNN is a simple yet powerful classifier for ELM types. It is shown that a classification based on the distribution of ELM properties, namely the inter-ELM time intervals and the distribution of global plasma parameters, carries more information than a classification based on averaged values. (iii) Finally, the correlation between ELM energy loss (ELM size) and ELM waiting times (inverse ELM frequency) is studied for individual ELMs in a database of plasma discharges from the JET tokamak in the ITER-like wall (ILW) configuration. ELM control methods typically rely on the empirically observed inverse relationship between average ELM loss and average ELM frequency, even though ELM control targets the reduction of the size of individual ELMs.
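A GD-based kNN vote of the kind mentioned above can be sketched in a few lines, again using the closed-form Fisher-Rao distance for univariate normals as the metric. The labelled (mean, standard deviation) summaries below are invented for illustration; the thesis works with richer distributional descriptions.

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between two univariate normals."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + num / (4.0 * sigma1 * sigma2))

def knn_classify(sample, labelled, k=3):
    """Majority vote among the k geodesically nearest labelled items.
    `labelled` is a list of ((mu, sigma), label) pairs."""
    nearest = sorted(labelled,
                     key=lambda item: fisher_rao_gaussian(*sample, *item[0]))[:k]
    votes = {}
    for _, lab in nearest:
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)

# Invented (mu, sigma) summaries of inter-ELM intervals with ELM-type labels
labelled = [((1.0, 0.5), "I"), ((1.1, 0.6), "I"), ((1.3, 0.55), "I"),
            ((4.0, 1.5), "III"), ((3.8, 1.4), "III")]
print(knn_classify((1.2, 0.6), labelled))  # prints "I"
```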
The analysis shows that for individual ELMs the correlation between ELM energy loss and waiting time is generally low. This result is compared with a data set of JET plasmas in the carbon-wall (CW) configuration and with a data set of nitrogen-seeded ILW JET plasmas. It is found that a high correlation between ELM energy loss and waiting time, comparable to that in CW plasmas, occurs only in nitrogen-seeded ILW plasmas. Furthermore, in most JET ILW plasmas without nitrogen seeding, ELMs are followed by a second phase called the slow transport event (STE). The effect of the STEs on the distribution of ELM durations, as well as their influence on the correlation between ELM energy loss and waiting time, is investigated. This study is directly relevant to the optimization of ELM control methods and, in addition, contributes to a deeper insight into the physics underlying ELMs.
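At its core, the per-ELM correlation study described above amounts to computing a sample correlation coefficient between two per-event sequences. A minimal sketch with purely synthetic waiting-time and energy-loss values (not JET data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic per-ELM waiting times [ms] and energy losses [kJ]
wait = [12.0, 15.0, 9.0, 20.0, 11.0, 17.0]
loss = [30.0, 33.0, 31.0, 36.0, 28.0, 35.0]
print(round(pearson_r(wait, loss), 2))
```

Values near 1 indicate the strong per-ELM coupling seen in CW and nitrogen-seeded plasmas; values near 0 correspond to the generally low correlation found in unseeded ILW plasmas.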

    Tracking of the Plasma States in a Nuclear Fusion Device using SOMs

    Knowledge discovery consists of finding new knowledge in databases whose dimension, complexity, or amount of data is prohibitively large for human observation alone. The Self-Organizing Map (SOM) is a powerful neural-network method for the analysis and visualization of high-dimensional data. The need for efficient data visualization and clustering arises frequently, for instance, in the analysis, monitoring, fault detection, or prediction of various engineering plants. In this paper, the use of a SOM-based method for the prediction of disruptions in experimental devices for nuclear fusion is investigated. The choice of the SOM size, which heavily affects the performance of the mapping, is addressed first. Then, the high-dimensional operational space of the ASDEX Upgrade tokamak is mapped onto a 2-dimensional SOM, and, finally, the current process state and its history in time are visualized as a trajectory on the map in order to predict the safe or disruptive state of the plasma. © 2009 Springer-Verlag
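The workflow summarized above — train a SOM on operational-space samples, then follow a discharge as the time-ordered sequence of best-matching units (BMUs) — can be sketched from scratch. Grid size, decay schedules, and the two-region toy data are illustrative assumptions, not the paper's actual settings.

```python
import math
import random

def train_som(data, rows, cols, iters=1000, lr0=0.5, seed=0):
    """Minimal online SOM: returns a dict (row, col) -> weight vector."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(r, c): [rng.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    sigma0 = max(rows, cols) / 2.0
    for t in range(iters):
        x = rng.choice(data)
        # best-matching unit: node whose weights are closest to the sample
        bmu = min(w, key=lambda n: sum((wi - xi) ** 2
                                       for wi, xi in zip(w[n], x)))
        lr = lr0 * (1.0 - t / iters)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / iters), 0.5)  # shrinking neighborhood
        for n in w:
            d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian neighborhood
            w[n] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[n], x)]
    return w

def bmu_trajectory(samples, w):
    """Project a time-ordered discharge onto the map as a BMU trajectory."""
    return [min(w, key=lambda n: sum((wi - xi) ** 2
                                     for wi, xi in zip(w[n], x)))
            for x in samples]

# Toy 2-D "operational space" with a safe and a disruptive region
data = [[0.1, 0.1], [0.15, 0.12], [0.9, 0.9], [0.88, 0.92]]
som = train_som(data, 3, 3)
print(bmu_trajectory(data, som))
```

In the paper's setting each trajectory point would be a multidimensional diagnostic sample, and a trajectory drifting into a map region populated by disruptive discharges would serve as the warning signal.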

    Tracking of the Plasma States in a Nuclear Fusion Device using SOMs

    Knowledge discovery consists of finding new knowledge in databases whose dimension, complexity, or amount of data is prohibitively large for human observation alone. The need for efficient data visualization and clustering arises frequently, for instance, in the analysis, monitoring, fault detection, or prediction of various engineering plants. In this paper, two clustering techniques, K-means and Self-Organizing Maps, are used to identify characteristic regions of plasma scenarios in nuclear fusion experimental devices. The choice of the number of clusters, which heavily affects the performance of the mapping, is addressed first. Then, the high-dimensional operational space of the ASDEX Upgrade tokamak is mapped into lower-dimensional maps, allowing the detection of regions with a high risk of disruption, and, finally, the current process state and its history in time are visualized as a trajectory on the Self-Organizing Map in order to predict the safe or disruptive state of the plasma.
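A plain Lloyd's K-means of the kind used for region identification can be sketched as follows; the two-region toy data stand in for the high-dimensional AUG operational space and are purely illustrative.

```python
import random

def kmeans(data, k, iters=50, seed=0):
    """Plain Lloyd's K-means on lists of feature vectors.
    Returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(data, k)]
    labels = [0] * len(data)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[j])))
                  for p in data]
        # update step: each centroid becomes the mean of its members
        for j in range(k):
            members = [p for p, lab in zip(data, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids, labels

# Toy 2-D plasma-state features: a "safe" and a "disruptive" region
data = [[0.1, 0.2], [0.12, 0.18], [0.9, 0.85], [0.95, 0.9]]
cents, labels = kmeans(data, 2)
print(labels)
```

Choosing k — the step the paper addresses first — is typically done by scanning k and comparing a quality measure such as within-cluster variance across runs.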