
    Feature Selection and Classifier Development for Radio Frequency Device Identification

    The proliferation of simple and low-cost devices, such as IEEE 802.15.4 ZigBee and Z-Wave, in Critical Infrastructure (CI) increases security concerns. Radio Frequency Distinct Native Attribute (RF-DNA) Fingerprinting facilitates biometric-like identification of electronic device emissions that arise from variances in device hardware. Developing reliable classifier models using RF-DNA fingerprints is thus important for device discrimination, enabling reliable Device Classification (a one-to-many "looks most like" assessment) and Device ID Verification (a one-to-one "looks how much like" assessment). AFIT's prior RF-DNA work focused on Multiple Discriminant Analysis/Maximum Likelihood (MDA/ML) and Generalized Relevance Learning Vector Quantized Improved (GRLVQI) classifiers. This work 1) introduces a new GRLVQI-Distance (GRLVQI-D) classifier that extends prior GRLVQI work by supporting alternative distance measures, 2) formalizes a framework for selecting competing distance measures for GRLVQI-D, 3) introduces response surface methods for optimizing GRLVQI and GRLVQI-D algorithm settings, 4) develops an MDA-based Loadings Fusion (MLF) Dimensional Reduction Analysis (DRA) method for improved classifier-based feature selection, 5) introduces the F-test as a DRA method for RF-DNA fingerprints, 6) provides a phenomenological understanding of test statistics and p-values, with KS-test and F-test statistic values being superior to p-values for DRA, and 7) introduces quantitative dimensionality assessment methods for DRA subset selection.
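
    As a companion to item 5, the following is a minimal sketch of F-test based feature ranking for DRA: each fingerprint feature is scored by its one-way F statistic across device classes and the top-k features are retained. The fingerprint matrix, labels and subset size are synthetic stand-ins for illustration, not data or settings from the work.

```python
# Minimal sketch: F-statistic feature ranking for DRA (synthetic data).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_devices, n_bursts, n_features = 4, 50, 120
X = rng.normal(size=(n_devices * n_bursts, n_features))  # stand-in RF-DNA fingerprints
y = np.repeat(np.arange(n_devices), n_bursts)            # device labels

# One-way F statistic per feature: how well does the feature separate devices?
f_stats = np.array([
    f_oneway(*[X[y == d, j] for d in range(n_devices)]).statistic
    for j in range(n_features)
])

# Rank by the statistic value (not the p-value) and keep the top-k features.
k = 20
selected = np.argsort(f_stats)[::-1][:k]
print("selected feature indices:", selected)
```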

    Applications of Modern Statistical Methods to Analysis of Data in Physical Science

    Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970s, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960s, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting the data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960s and 1970s respectively, formed the basis of multivariate cluster analysis methodology for many years. However, these methods have several shortcomings, including strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic-algorithm-based Expectation-Maximization (GEM) process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We showcase how these algorithms can be used to process multivariate data from astronomical observations.
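
    A minimal sketch of the information-scoring idea for choosing the number of mixture components: Gaussian mixtures with different component counts are fitted and the count with the lowest BIC is selected. The sketch uses plain EM from scikit-learn on synthetic data; the genetic-algorithm-based GEM routine described above is not reproduced here.

```python
# Minimal sketch: pick the number of mixture components by information score (BIC).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([
    rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 200),
    rng.multivariate_normal([5, 1], [[1.5, -0.6], [-0.6, 0.7]], 150),
])

scores = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          n_init=5, random_state=0).fit(X)
    scores[k] = gmm.bic(X)          # lower BIC = better information score

best_k = min(scores, key=scores.get)
print("BIC per component count:", scores)
print("selected number of mixture components:", best_k)
```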

    Machine learning: statistical physics based theory and smart industry applications

    The increasing computational power and the availability of data have made it possible to train ever-bigger artificial neural networks. These so-called deep neural networks have been used for impressive applications, like advanced driver assistance and support in medical diagnoses. However, various vulnerabilities have been revealed, and there are many open questions concerning the workings of neural networks. Theoretical analyses are therefore essential for further progress. One current question is: why do networks with Rectified Linear Unit (ReLU) activation seemingly perform better than networks with sigmoidal activation? We contribute to the answer to this question by comparing ReLU networks with sigmoidal networks in diverse theoretical learning scenarios. Rather than analysing specific datasets, we use theoretical modelling based on methods from statistical physics, which gives the typical learning behaviour for chosen model scenarios. We analyse both the learning behaviour on a fixed dataset and on a data stream in the presence of a changing task. The emphasis is on the analysis of the network's transition to a state wherein specific concepts have been learnt. We find significant benefits of ReLU networks: they exhibit continuous increases of their performance and adapt more quickly to changing tasks. In the second part of the thesis we treat applications of machine learning: we design a quick quality control method for material in a production line and study the relationship with product faults. Furthermore, we introduce a methodology for the interpretable classification of time series data.
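
    As a purely empirical illustration of the ReLU-versus-sigmoid question (the thesis itself works with statistical-physics model scenarios rather than benchmarks), the following sketch trains two small scikit-learn networks that differ only in their activation function; the dataset and hyperparameters are arbitrary assumptions.

```python
# Minimal sketch: same architecture, ReLU vs. sigmoidal ('logistic') activation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for act in ("relu", "logistic"):          # 'logistic' = sigmoidal activation
    net = MLPClassifier(hidden_layer_sizes=(32,), activation=act,
                        max_iter=500, random_state=0).fit(X_tr, y_tr)
    print(f"{act:8s} test accuracy: {net.score(X_te, y_te):.3f}")
```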

    Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2

    This volume includes papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston-Clear Lake, held 1-3 June 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days, approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and applications; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

    Semi-Supervised Learning Vector Quantization method enhanced with regularization for anomaly detection in air conditioning time-series data

    Researchers of semi-supervised learning methods have been developing the family of Learning Vector Quantization (LVQ) models, which originated from the well-known Self-Organizing Map algorithm. Models of this type can be characterized as prototype-based, self-explanatory and flexible. The thesis contributes to the development of one of the LVQ models, the Semi-Supervised Relational Prototype Classifier for dissimilarity data. The model implementation is developed based on the related research work and the thesis author's findings, and is applied to the task of anomaly detection in real-time air conditioning data. We propose a regularization algorithm for gradient descent in order to achieve better convergence, and a new strategy for initializing prototypes. We develop an innovative framework involving a human expert as a source of labeled data. The framework detects anomalies in environment parameters in both real-time and long-run observations and updates the model according to its findings. The data set used for the experiments is collected in real time from sensors installed inside the Aalto Mechanical Engineering building located at Otakaari 4, Espoo. The installation was done as part of a project of VTT and the Korean National Research Institute. The data consist of three main parameters: air temperature, humidity and CO2 concentration. The total number of deployed sensors is around 150. One month of recorded observations contains approximately 1.5 million data points. The results of the project demonstrate the efficiency of the developed regularized LVQ method for classification in the given settings. The regularized version generally outperforms its parent and various baseline methods on air conditioning, synthetic and UCI data. Together with the proposed classification framework, the system has shown its robustness and efficiency and is ready for deployment to a production environment.
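
    The following is a minimal sketch of the prototype-based idea behind LVQ anomaly detection: nearest-prototype (LVQ1) training followed by a distance threshold. It is not the semi-supervised relational classifier or the regularized gradient descent developed in the thesis; the data, learning rate and threshold are synthetic assumptions.

```python
# Minimal sketch: LVQ1 prototype training plus distance-based anomaly flagging.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))   # winner prototype
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] += sign * lr * (x - P[i])                  # attract or repel winner
    return P

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
y = np.repeat([0, 1], 100)
P = lvq1_train(X, y, X[[0, 100]].copy(), np.array([0, 1]))

def is_anomaly(x, prototypes, threshold=3.0):
    # A sample far from every prototype is flagged as anomalous.
    return np.min(np.linalg.norm(prototypes - x, axis=1)) > threshold

print(is_anomaly(np.array([10.0, 10.0, 10.0]), P))   # True: far from both classes
print(is_anomaly(np.array([0.2, -0.1, 0.5]), P))     # False: near class-0 prototype
```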

    Integration of Auxiliary Data Knowledge in Prototype Based Vector Quantization and Classification Models

    This thesis deals with the integration of auxiliary data knowledge into machine learning methods, especially prototype based classification models. The problem of classification is diverse, and evaluating the result by using only the accuracy is not adequate in many applications. Therefore, the classification tasks are analyzed more deeply. Possibilities to extend prototype based methods to integrate extra knowledge about the data or the classification goal are presented to obtain problem adequate models. One of the proposed extensions is Generalized Learning Vector Quantization for direct optimization of statistical measurements besides the classification accuracy. Modifying the metric adaptation of the Generalized Learning Vector Quantization for functional data, i.e. data with lateral dependencies in the features, is also considered. Contents: Symbols and Abbreviations; 1 Introduction (1.1 Motivation and Problem Description; 1.2 Utilized Data Sets); 2 Prototype Based Methods (2.1 Unsupervised Vector Quantization: 2.1.1 C-means, 2.1.2 Self-Organizing Map, 2.1.3 Neural Gas, 2.1.4 Common Generalizations; 2.2 Supervised Vector Quantization: 2.2.1 The Family of Learning Vector Quantizers - LVQ, 2.2.2 Generalized Learning Vector Quantization; 2.3 Semi-Supervised Vector Quantization: 2.3.1 Learning Associations by Self-Organization, 2.3.2 Fuzzy Labeled Self-Organizing Map, 2.3.3 Fuzzy Labeled Neural Gas; 2.4 Dissimilarity Measures: 2.4.1 Differentiable Kernels in Generalized LVQ, 2.4.2 Dissimilarity Adaptation for Performance Improvement); 3 Deeper Insights into Classification Problems - From the Perspective of Generalized LVQ (3.1 Classification Models; 3.2 The Classification Task; 3.3 Evaluation of Classification Results; 3.4 The Classification Task as an Ill-Posed Problem); 4 Auxiliary Structure Information and Appropriate Dissimilarity Adaptation in Prototype Based Methods (4.1 Supervised Vector Quantization for Functional Data: 4.1.1 Functional Relevance/Matrix LVQ, 4.1.2 Enhancement Generalized Relevance/Matrix LVQ; 4.2 Fuzzy Information About the Labels: 4.2.1 Fuzzy Semi-Supervised Self-Organizing Maps, 4.2.2 Fuzzy Semi-Supervised Neural Gas); 5 Variants of Classification Costs and Class Sensitive Learning (5.1 Border Sensitive Learning in Generalized LVQ: 5.1.1 Border Sensitivity by Additive Penalty Function, 5.1.2 Border Sensitivity by Parameterized Transfer Function; 5.2 Optimizing Different Validation Measures by the Generalized LVQ: 5.2.1 Attention Based Learning Strategy, 5.2.2 Optimizing Statistical Validation Measurements for Binary Class Problems in the GLVQ; 5.3 Integration of Structural Knowledge about the Labeling in Fuzzy Supervised Neural Gas); 6 Conclusion and Future Work; My Publications; A Appendix (A.1 Stochastic Gradient Descent (SGD), A.2 Support Vector Machine, A.3 Fuzzy Supervised Neural Gas Algorithm Solved by SGD); Bibliography; Acknowledgements.
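
    For orientation, a minimal sketch of the Generalized LVQ cost that the listed extensions build on: the relative distance difference mu(x) = (d_plus - d_minus) / (d_plus + d_minus), here with a per-feature relevance weighting in the spirit of relevance LVQ. The data, prototypes and relevance profile are synthetic assumptions, not material from the thesis.

```python
# Minimal sketch: GLVQ cost with a relevance-weighted squared Euclidean distance.
import numpy as np

def glvq_cost(X, y, prototypes, proto_labels, relevances):
    # Weighted squared Euclidean distance, one relevance weight per feature.
    d = np.array([[np.sum(relevances * (x - p) ** 2) for p in prototypes]
                  for x in X])
    cost = 0.0
    for i, label in enumerate(y):
        correct = proto_labels == label
        d_plus = d[i, correct].min()      # closest prototype with the correct label
        d_minus = d[i, ~correct].min()    # closest prototype with a wrong label
        cost += (d_plus - d_minus) / (d_plus + d_minus)   # in [-1, 1], lower is better
    return cost / len(X)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)
prototypes = np.array([X[:50].mean(axis=0), X[50:].mean(axis=0)])
relevances = np.ones(4) / 4               # uniform relevance profile
print("mean GLVQ cost:", glvq_cost(X, y, prototypes, np.array([0, 1]), relevances))
```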

    13th SC@RUG 2016 proceedings 2015-2016
