
    Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Magnetically confined fusion plasmas pose several data analysis challenges due to massive data sets, substantial measurement uncertainty, stochasticity, high data dimensionality, and often nonlinear interactions between measured quantities. Recently, methods from the fields of machine learning and probability theory - some standard, some more advanced - have come to play an increasingly important role in analyzing data from fusion experiments. The capability of such methods to efficiently extract, possibly in real time, additional information from the data that is not immediately apparent to human experts has attracted attention from an increasing number of researchers. In addition, innovative methods for real-time data processing can play an important role in plasma control, ensuring safe and reliable operation of the machine. Pattern recognition is a discipline within the information sciences that concerns the exploration of structure in (multidimensional) data sets using computer-based methods and algorithms. In this doctoral work, pattern recognition techniques are developed and applied to data from tokamak plasmas, in order to contribute to a systematic analysis of edge-localized modes (ELMs). ELMs are magnetohydrodynamic (MHD) instabilities occurring in the edge region of high-confinement (H-mode) fusion plasmas. The type I ELMy H-mode is the reference scenario for operation of the next-step fusion device ITER. On the one hand, ELMs have a beneficial effect on plasma operation through their role in impurity control. On the other hand, ELMs eject energy and particles from the plasma and, in ITER, large unmitigated ELMs are expected to cause intolerable heat loads on the plasma-facing components (PFCs). In interpreting experiments focused on ELM understanding and control, a significant challenge lies in handling the measurement uncertainties and the inherent stochasticity of ELM properties. In this work, we employ probabilistic models (distributions) for a quantitative data description geared towards an enhanced systematization of ELM phenomenology. Hence, we start from the point of view that the fundamental object resulting from the observation of a system is a probability distribution, with every single measurement providing a sample from this distribution. We argue that, particularly for richly stochastic phenomena like ELMs, the probability distribution of a physical quantity contains significantly more information than the mere average. Consequently, in exploring the patterns emerging from the various ELM regimes and relations, we need methods that can handle the intrinsically probabilistic nature of the data. The original contributions of this work are twofold. First, several novel pattern recognition methods in non-Euclidean spaces of probability distribution functions (PDFs) are developed and validated. The second main contribution lies in the application of these and other techniques to a systematic analysis of ELMs in tokamak plasmas. In regard to the methodological aims of the work, we employ the framework of information geometry to develop pattern visualization and classification methods in spaces of probability distributions. In information geometry, a family of probability distributions is considered as a Riemannian manifold. Every point on the manifold represents a single PDF, and the distribution parameters provide local coordinates on the manifold.
The Fisher information plays the role of a Riemannian metric tensor, enabling the calculation of geodesic curves on the manifold. The length of such a curve yields the geodesic distance (GD) on probabilistic manifolds, a natural similarity (distance) measure between PDFs. Equipped with a suitable distance measure, we extend several distance-based pattern recognition methods to the manifold setting. This includes k-nearest neighbor (kNN) and conformal predictor (CP) methods for classification, as well as multidimensional scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids, while PGC takes the shape of a class on the manifold into account by determining the minimum distance to the principal geodesic of each class. The methods are validated by application to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove computationally efficient and highly accurate, and they clearly demonstrate the adequacy of the GD, and its superiority over the Euclidean distance, for comparing PDFs. This also demonstrates the utility and adaptability of the developed methods for a wide range of applications beyond ELMs, which are the prime focus of analysis in this work.
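    For univariate Gaussian distributions, the GD (the Fisher-Rao distance) has a well-known closed form, which already suffices to build distance-based classifiers of the kind described above. The minimal Python sketch below (toy data and names are ours, not the thesis code) computes this distance and uses it in a small kNN classifier over (mean, standard deviation) pairs:

```python
import numpy as np

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao geodesic distance between the univariate
    normal distributions N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    den = (mu1 - mu2) ** 2 + 2.0 * (sigma1 + sigma2) ** 2
    return 2.0 * np.sqrt(2.0) * np.arctanh(np.sqrt(num / den))

def knn_predict(train, labels, query, k=3):
    """kNN on the Gaussian manifold: every sample is a (mu, sigma) pair."""
    dists = [fisher_rao_gaussian(*query, *p) for p in train]
    nearest = np.argsort(dists)[:k]
    vals, counts = np.unique([labels[i] for i in nearest], return_counts=True)
    return vals[np.argmax(counts)]

# Toy usage: two classes of distributions differing mainly in spread.
train = [(0.0, 1.0), (0.1, 1.2), (0.0, 5.0), (-0.2, 4.5)]
labels = ["narrow", "narrow", "wide", "wide"]
print(knn_predict(train, labels, query=(0.05, 4.0)))   # -> "wide"
```

    For families of distributions without a closed-form GD, the geodesic length must be computed numerically from the Fisher metric; the D2C and PGC schemes then replace the neighbor search with the distance to class centroids or principal geodesics.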
The second main goal of the work targets ELM analysis on three fronts, using pattern recognition and probabilistic modeling. (i) We first concentrate on the visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. Such maps can provide physicists and machine operators with a convenient means and a useful tool for plasma monitoring and for studying data patterns reflecting key regimes and their underlying physics. In particular, GD-based MDS is used for representing the complete distributions of the multidimensional data characterizing the operational space of ELMs on two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified, and the maps enable tracking of trends in plasma parameters across the operational space. It is shown that the maps can also be used, with reasonable accuracy, for predicting the values of the plasma parameters at a given point in the operational space. (ii) Our second application concerns fast, standardized and automated classification of ELM types. ELM types have so far been identified and characterized on an empirical and phenomenological basis. The presented classification schemes are aimed at complementing the phenomenological characterization with standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort of ELM experts in identifying ELM types. To this end, different classification paradigms (parametric and non-parametric) are explored and put to use. Discriminant analysis (DA) is used for determining a linear separation boundary between type I and type III ELMs in terms of global plasma parameters, which can then be used for the prediction of ELM types as well as the study of ELM occurrence boundaries and ELM physics. However, DA makes an assumption about the underlying class distribution and presently cannot be applied in spaces of probability distributions, leading to a sub-optimal treatment of stochasticity. This is circumvented by the use of GD-based CP and kNN classifiers. CP provides estimates of its own accuracy and reliability, while kNN is a simple yet powerful classifier of ELM types. It is shown that a classification based on the distribution of ELM properties, namely inter-ELM time intervals and the distribution of global plasma parameters, is more informative and accurate than a classification based on average parameter values. (iii) Finally, the correlation between ELM energy loss (ELM size) and ELM waiting times (inverse ELM frequency) is studied for individual ELMs in a set of plasmas from the JET tokamak upgraded with the ITER-like wall (ILW). Typically, ELM control methods rely on the empirically observed inverse dependence of average ELM energy loss on average ELM frequency, even though ELM control is targeted at reducing the size of individual ELMs, not the average ELM loss. The analysis finds that, for individual ELMs, the correlation between ELM energy loss and waiting time varies from zero to a moderately positive value. A comparison is made with the results from a set of carbon-wall (CW) JET plasmas and nitrogen-seeded ILW JET plasmas. A high correlation between ELM energy loss and waiting time, comparable to that in CW plasmas, is found only in the nitrogen-seeded ILW plasmas. Furthermore, most of the unseeded JET ILW plasmas have ELMs that are followed by a second phase referred to as the slow transport event (STE). The effect of the STEs on the distribution of ELM durations is studied, as well as their influence on the correlation between ELM energy loss and waiting times. This analysis has clear implications for the optimization of ELM control methods, while providing insights towards an improved physics understanding of ELMs.
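    For the discriminant-analysis step in (ii), the sketch below shows what fitting a linear separation boundary between two ELM types looks like in practice, using scikit-learn and purely synthetic stand-in "global plasma parameters" (the thesis data and parameter set are not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-ins for two global plasma parameters per ELM regime.
type_I   = rng.normal(loc=[2.0, 1.5], scale=0.3, size=(100, 2))
type_III = rng.normal(loc=[1.0, 0.8], scale=0.3, size=(100, 2))
X = np.vstack([type_I, type_III])
y = np.array([1] * 100 + [3] * 100)          # ELM type labels

lda = LinearDiscriminantAnalysis().fit(X, y)
# The separation boundary is the line w . x + b = 0.
print("w =", lda.coef_[0], " b =", lda.intercept_[0])
print("predicted ELM type:", lda.predict([[1.6, 1.2]])[0])
```

    As the abstract notes, such a parametric boundary operates on point values; the GD-based CP and kNN classifiers instead compare whole distributions of ELM properties.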

    Statistical analysis of high-dimensional biomedical data: a gentle introduction to analytical goals, common approaches and challenges

    Background: In high-dimensional data (HDD) settings, the number of variables associated with each observation is very large. Prominent examples of HDD in biomedical research include omics data with a large number of variables such as many measurements across the genome, proteome, or metabolome, as well as electronic health records data that have large numbers of variables recorded for each patient. The statistical analysis of such data requires knowledge and experience, sometimes of complex methods adapted to the respective research questions. Methods: Advances in statistical methodology and machine learning methods offer new opportunities for innovative analyses of HDD, but at the same time require a deeper understanding of some fundamental statistical concepts. Topic group TG9 "High-dimensional data" of the STRATOS (STRengthening Analytical Thinking for Observational Studies) initiative provides guidance for the analysis of observational studies, addressing particular statistical challenges and opportunities for the analysis of studies involving HDD. In this overview, we discuss key aspects of HDD analysis to provide a gentle introduction for non-statisticians and for classically trained statisticians with little experience specific to HDD. Results: The paper is organized with respect to subtopics that are most relevant for the analysis of HDD, in particular initial data analysis, exploratory data analysis, multiple testing, and prediction. For each subtopic, main analytical goals in HDD settings are outlined. For each of these goals, basic explanations for some commonly used analysis methods are provided. Situations are identified where traditional statistical methods cannot, or should not, be used in the HDD setting, or where adequate analytic tools are still lacking. Many key references are provided. Conclusions: This review aims to provide a solid statistical foundation for researchers, including statisticians and non-statisticians, who are new to research with HDD or simply want to better evaluate and understand the results of HDD analyses.
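    As one concrete example of the multiple-testing subtopic: with tens of thousands of hypothesis tests, controlling the false discovery rate (FDR) is a standard HDD tool. The sketch below (our illustration, not code from the paper) implements the Benjamini-Hochberg procedure on simulated p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest i with p_(i) <= alpha*i/m
        reject[order[: k + 1]] = True
    return reject

# Toy example: 10,000 "genes", 100 of which carry a real signal.
rng = np.random.default_rng(1)
p_alt = rng.beta(0.1, 10.0, size=100)    # small p-values for true effects
p_null = rng.uniform(size=9900)
mask = benjamini_hochberg(np.concatenate([p_alt, p_null]))
print("rejected hypotheses:", int(mask.sum()))
```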

    Models for Identifying Structures in the Data: A Performance Comparison

    This paper reports on the unsupervised analysis of seismic signals recorded in Italy, on the Vesuvius volcano near Naples and on the Stromboli volcano north of eastern Sicily. The Vesuvius dataset is composed of earthquakes and false events such as thunder, man-made quarry blasts and undersea explosions. The Stromboli dataset consists of explosion-quakes, landslides and volcanic microtremor signals. The aim of this paper is to apply three projection methods to these datasets - linear Principal Component Analysis (PCA), the Self-Organizing Map (SOM), and Curvilinear Component Analysis (CCA) - in order to compare their performance. Since these algorithms are well known for their ability to exploit structure in data and to provide a clear framework for understanding and interpreting the relationships within it, this work examines the kind of structural information they can provide on our specific sets. Moreover, the paper suggests a new application area for the SOM, used here for clustering the different seismic signals. The results show that, among the three techniques, the SOM best visualizes the complex set of high-dimensional data, discovering its intrinsic structure and appropriately clustering the different signal typologies under examination: it discriminates the explosion-quakes from the landslides and microtremor recorded at the Stromboli volcano, and the earthquakes from the natural (thunder) and artificial (quarry blasts and undersea explosions) events recorded at the Vesuvius volcano.
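    A minimal sketch of the kind of comparison described above, assuming synthetic stand-in feature vectors and the third-party minisom package for the SOM (the paper's own implementation details are not reproduced):

```python
import numpy as np
from sklearn.decomposition import PCA
from minisom import MiniSom   # third-party: pip install minisom

rng = np.random.default_rng(2)
# Stand-in for per-event spectral features (rows = seismic events).
signals = rng.normal(size=(300, 64))

# Linear projection onto the first two principal components.
coords = PCA(n_components=2).fit_transform(signals)

# SOM: each event is assigned to its best-matching unit (BMU) on an
# 8x8 grid; events sharing a BMU form a candidate cluster.
som = MiniSom(8, 8, input_len=64, sigma=1.5, learning_rate=0.5, random_seed=2)
som.train_random(signals, num_iteration=5000)
bmus = [som.winner(s) for s in signals]
print("PCA coords:", coords.shape, "first BMUs:", bmus[:3])
```

    On real data, the BMU occupancy map and the PCA scatter would then be inspected to see which method separates the event classes more cleanly.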

    Effective and Trustworthy Dimensionality Reduction Approaches for High Dimensional Data Understanding and Visualization

    In recent years, the huge expansion of digital technologies has vastly increased the volume of data to be explored. Reducing the dimensionality of data is an essential step in data exploration and visualisation. The integrity of a dimensionality reduction technique relates to how well it maintains the structure of the data: a low-dimensional visualisation that has not captured the structure of the high-dimensional space is untrustworthy. How much of the data structure a method maintains depends on several factors, such as the type of data considered (linear or nonlinear) and tuning parameters such as the number of neighbours or the perplexity. In practice, most data under consideration are nonlinear, and tuning the parameters can be costly since it depends on the number of data samples considered. Existing dimensionality reduction approaches suffer from one or more of the following problems: 1) they only work well with linear data; 2) the amount of data structure they maintain depends on the number of data samples considered; and/or 3) they exhibit the tear problem and the false-neighbours problem. To deal with all of these problems, this research has developed the Same Degree Distribution (SDD), multi-SDD (MSDD) and parameter-free SDD approaches, which 1) save computational time, because their tuning does not depend on the number of data samples considered; 2) produce more trustworthy visualisations, by using a degree distribution that is smooth enough to capture both local and global data structure; and 3) do not suffer from the tear and false-neighbours problems, since the same degree distribution is used in the high- and low-dimensional spaces to calculate the similarities between data samples. The developed dimensionality reduction methods are tested on several popular synthetic and real datasets. How well the data structure is maintained is evaluated using different quality metrics, i.e., Kendall's Tau coefficient, trustworthiness, continuity, LCMC, and the co-ranking matrix. Also, the theoretical analysis of the impact of the dissimilarity measure on structure capture is supported by simulation results on two different datasets, evaluated with Kendall's Tau and the co-ranking matrix. The SDD, MSDD, and parameter-free SDD methods do not outperform global methods such as Isomap on data with a large fraction of large pairwise distances; addressing this remains future work, as does reducing the computational complexity.
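    The quality metrics mentioned above can be computed independently of the embedding method. A short sketch of the evaluation protocol, using scikit-learn's trustworthiness (SDD itself is not publicly packaged, so PCA stands in purely to demonstrate the metric):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

# A classic nonlinear test set: the swiss roll.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Any embedding can be scored this way; PCA is only a stand-in baseline.
X_2d = PCA(n_components=2).fit_transform(X)

# Fraction of low-dimensional neighbours that are also neighbours in the
# high-dimensional space; values near 1.0 mean few false neighbours.
print("trustworthiness:", trustworthiness(X, X_2d, n_neighbors=12))
```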

    Generalized Multi-manifold Graph Ensemble Embedding for Multi-View Dimensionality Reduction

    In this paper, we propose a new dimension reduction (DR) algorithm called ensemble graph-based locality preserving projections (EGLPP), to overcome the sensitivity to the neighborhood size k in locality preserving projections (LPP). EGLPP constructs a homogeneous ensemble of adjacency graphs by varying the neighborhood size k and uses the integrated embedded graph to optimize the low-dimensional projections. Furthermore, to appropriately handle the intrinsic geometrical structure of multi-view data and overcome the curse of dimensionality, we propose a generalized multi-manifold graph ensemble embedding framework (MLGEE). MLGEE utilizes multi-manifold graphs for adjacency estimation, automatically weighting each manifold to derive the integrated heterogeneous graph. Experimental results on various computer vision databases verify the effectiveness of the proposed EGLPP and MLGEE over existing DR methods.
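    A minimal sketch of the ensemble-graph idea behind EGLPP (our reading, with a simple unweighted average of the adjacency graphs; the paper's integration scheme may differ): average symmetrised kNN graphs over several k, then solve the standard LPP generalised eigenproblem on the resulting graph:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def ensemble_lpp(X, k_values=(5, 10, 15), n_components=2, ridge=1e-6):
    """EGLPP-style projection: ensemble adjacency graph + LPP eigenproblem."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for k in k_values:
        A = kneighbors_graph(X, n_neighbors=k, mode='connectivity').toarray()
        W += np.maximum(A, A.T)              # symmetrise each kNN graph
    W /= len(k_values)                       # ensemble = simple average
    D = np.diag(W.sum(axis=1))
    L = D - W                                # graph Laplacian
    # Generalised eigenproblem: X^T L X a = lambda X^T D X a
    A_mat = X.T @ L @ X
    B_mat = X.T @ D @ X + ridge * np.eye(X.shape[1])
    _, vecs = eigh(A_mat, B_mat)             # eigenvalues in ascending order
    return X @ vecs[:, :n_components]        # smallest ones give the embedding

X = np.random.default_rng(3).normal(size=(200, 10))
print(ensemble_lpp(X).shape)                 # -> (200, 2)
```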

    Automics: an integrated platform for NMR-based metabonomics spectral processing and data analysis

    Background: Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and freely licensed software tools available to assist with these tasks, researchers usually have to use multiple software packages for their studies because each package generally focuses on specific tasks. It would be beneficial to have a highly integrated platform in which these tasks can be completed within one package. Moreover, with an open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public. Results: In this paper, we report an open source software tool, Automics, which is specifically designed for NMR-based metabonomics studies. Automics is a highly integrated platform that provides functions covering almost all the stages of NMR-based metabonomics studies. It provides high-throughput automatic modules implementing recently proposed algorithms, as well as powerful manual modules, for 1D NMR spectral processing. In addition to spectral processing functions, powerful features for data organization, data pre-processing, and data analysis have been implemented. Nine statistical methods can be applied, covering feature selection (Fisher's criterion), data reduction (PCA, LDA, ULDA), unsupervised clustering (K-means) and supervised regression and classification (PLS/PLS-DA, KNN, SIMCA, SVM). Moreover, Automics has a user-friendly graphical interface for visualizing NMR spectra and data analysis results. The functional ability of Automics is demonstrated with an analysis of a type 2 diabetes metabolic profile. Conclusion: Automics facilitates high-throughput 1D NMR spectral processing and high-dimensional data analysis for NMR-based metabonomics applications. Using Automics, users can complete spectral processing and data analysis within one software package in most cases. Moreover, thanks to its open source architecture, interested researchers can further develop and extend the software based on the existing infrastructure.
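    As an illustration of one of the supervised steps such a platform offers, a PLS-DA classification of binned spectra can be sketched with scikit-learn (synthetic stand-in data; this is not Automics code, which is a standalone tool):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Stand-in for binned 1D NMR spectra: 120 samples x 200 chemical-shift bins.
X = rng.normal(size=(120, 200))
y = np.array([0] * 60 + [1] * 60)            # e.g. control vs. diabetic
X[y == 1, :10] += 0.8                        # inject a class-dependent signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# PLS-DA = PLS regression on the class label, thresholded at 0.5.
pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
y_hat = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("test accuracy:", (y_hat == y_te).mean())
```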