
    Improving learning vector quantization using data reduction

    Learning Vector Quantization (LVQ) is a supervised learning algorithm commonly used for statistical classification and pattern recognition. The competitive layer in LVQ learns from the input vectors and classifies them into the correct classes. The amount of data involved in the learning process can be reduced by using data reduction methods. In this paper, we propose a data reduction method that uses the geometrical proximity of the data. The basic idea is to drop sets of data points that are highly similar to one another and keep a single representative for each set. With appropriate adjustments, the data reduction method decreases the amount of data involved in the learning process while still maintaining the existing accuracy. The amount of data involved in the learning process can be reduced to 33.22% for the abalone dataset and 55.02% for the bank marketing dataset, respectively.
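    As a rough illustration of the proximity-based reduction the abstract describes, the sketch below drops training points that lie within a distance eps of an already-kept point of the same class, keeping one representative per neighbourhood. The eps threshold, function name, and random data are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def reduce_by_proximity(X, y, eps=0.15):
    """Keep one representative per neighbourhood of close, same-class points.

    eps is an illustrative distance threshold; the paper's exact
    similarity criterion and adjustments may differ.
    """
    kept = []
    for i in range(len(X)):
        # Drop X[i] if a kept point of the same class is already within eps.
        redundant = any(
            y[j] == y[i] and np.linalg.norm(X[i] - X[j]) < eps
            for j in kept
        )
        if not redundant:
            kept.append(i)
    return X[kept], y[kept]

# Toy usage: the reduced set is what LVQ would then be trained on.
rng = np.random.default_rng(0)
X, y = rng.random((500, 4)), rng.integers(0, 3, 500)
X_red, y_red = reduce_by_proximity(X, y)
print(f"kept {len(X_red)} of {len(X)} samples")
```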

    Computerised electrocardiogram classification

    Advances in computing have resulted in many engineering processes being automated. Electrocardiogram (ECG) classification is one such process. The analysis of ECGs can benefit from the wide availability and power of modern computers. This study presents the use of computer technology in the field of computerised ECG classification. Computerised electrocardiogram classification can help to reduce healthcare costs by enabling suitably equipped general practitioners to refer to hospital only those people with serious heart problems. Computerised ECG classification can also be very useful in shortening hospital waiting lists and saving lives by discovering heart disease early. The thesis investigates the automatic classification of ECGs into different disease categories using Artificial Intelligence (AI) techniques. A comparison of the use of different feature sets and AI classifiers is presented. The feature sets include conventional cardiological features, as well as features taken directly from time domain samples of an ECG. The benchmark AI classifiers tested include those based on neural network, k-Nearest Neighbour and inductive learning techniques. The research proposes two modifications to the learning vector quantisation (LVQ) neural network, namely the All Weights Updating-LVQ (AWU-LVQ) algorithm and the Neighbouring Weights Updating-LVQ (NWU-LVQ) algorithm, yielding an "intelligent" diagnostic heart system with higher accuracy and reduced training time compared to existing AI techniques.
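    For context on what the two proposed variants modify, here is a minimal sketch of the standard LVQ1 update rule, in which only the single winning prototype is attracted to (or repelled from) each training vector. The AWU-LVQ and NWU-LVQ rules, per the abstract, instead update all weights or a neighbourhood of weights; their exact update equations are in the thesis and are not reproduced here.

```python
import numpy as np

def lvq1_epoch(prototypes, proto_labels, X, y, lr=0.05):
    """One epoch of standard LVQ1: only the winning prototype is updated."""
    for x, label in zip(X, y):
        d = np.linalg.norm(prototypes - x, axis=1)  # distance to each prototype
        w = int(np.argmin(d))                       # index of the winner
        sign = 1.0 if proto_labels[w] == label else -1.0
        # Attract the winner toward x on a correct match, repel it otherwise.
        prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes

# Toy usage: 4 prototypes (2 per class) on random 4-D data.
rng = np.random.default_rng(0)
X, y = rng.random((100, 4)), rng.integers(0, 2, 100)
protos, labels = rng.random((4, 4)), np.array([0, 0, 1, 1])
protos = lvq1_epoch(protos, labels, X, y)
```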

    An Optimization Framework for Generalized Relevance Learning Vector Quantization with Application to Z-Wave Device Fingerprinting

    Z-Wave is a low-power, low-cost Wireless Personal Area Network (WPAN) technology supporting Critical Infrastructure (CI) systems that are interconnected by government-to-internet pathways. Given that Z-Wave is a relatively insecure technology, Radio Frequency Distinct Native Attribute (RF-DNA) Fingerprinting is considered here to augment security by exploiting statistical features from selected signal responses. Related RF-DNA efforts include the use of Multiple Discriminant Analysis (MDA) and Generalized Relevance Learning Vector Quantization-Improved (GRLVQI) classifiers, with GRLVQI outperforming MDA using empirically determined parameters. GRLVQI is optimized here for Z-Wave using a full factorial experiment with spreadsheet search and response surface methods. Two optimization measures are developed for assessing Z-Wave discrimination: 1) Relative Accuracy Percentage (RAP) for device classification, and 2) Mean Area Under the Curve (AUCM) for device identity (ID) verification. Primary benefits of the approach include: 1) generalizability to other wireless device technologies, and 2) improvement in GRLVQI device classification and device ID verification performance.
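    The full factorial experiment the abstract mentions is an exhaustive sweep over every combination of algorithm-setting levels. The sketch below shows that scaffold under stand-in assumptions: since no off-the-shelf GRLVQI implementation is assumed available here, a k-NN classifier and two illustrative factors take its place, with mean cross-validated accuracy standing in for the paper's Relative Accuracy Percentage (RAP).

```python
from itertools import product
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier  # stand-in for GRLVQI

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Two illustrative factors; the paper's factors are GRLVQI-specific settings.
k_levels = [1, 3, 5, 7]
weight_levels = ["uniform", "distance"]

results = {}
for k, w in product(k_levels, weight_levels):  # full factorial: every cell
    clf = KNeighborsClassifier(n_neighbors=k, weights=w)
    # Mean CV accuracy plays the role of the paper's RAP measure here.
    results[(k, w)] = cross_val_score(clf, X, y, cv=5).mean()

best = max(results, key=results.get)
print("best factorial cell:", best, "score:", round(results[best], 3))
```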

    An Approach to Pattern Recognition by Evolutionary Computation

    Evolutionary Computation (EC) has been inspired by the natural phenomena of evolution. It provides a quite general heuristic exploiting a few basic concepts: reproduction of individuals, variation phenomena that affect the likelihood of survival of individuals, and inheritance of parents' features by offspring. EC has been widely used in recent years to effectively solve hard, nonlinear and very complex problems. Among others, EC-based algorithms have also been used to tackle classification problems. Classification is a process according to which an object is attributed to one of a finite set of classes or, in other words, is recognized as belonging to a set of equal or similar entities identified by a label. Arguably, the main aspect of classification concerns the generation of prototypes to be used to recognize unknown patterns. The role of prototypes is to represent the patterns belonging to the different classes defined within a given problem. For most problems of practical interest, the generation of such prototypes is very hard, since a prototype must be able to represent patterns belonging to the same class that may be significantly dissimilar from each other. Prototypes must also be able to discriminate patterns belonging to classes other than the one they represent. Moreover, a prototype should contain the minimum amount of information required to satisfy these requirements. The research presented in this thesis has led to the definition of an EC-based framework for prototype generation. The framework is not tied to any particular kind of prototype: it can generate any kind of prototype once an encoding scheme for that prototype has been defined. This generality can be exploited to develop many applications. The framework has been employed to implement two specific applications for prototype generation. These applications have been tested on several data sets, and the results compared with those obtained by other approaches previously presented in the literature.
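    A bare-bones version of the abstract's three ingredients (reproduction, variation, inheritance) applied to prototype generation might look like the sketch below: individuals are prototype sets, fitness is nearest-prototype classification accuracy, and Gaussian mutation provides variation. All parameters and the toy data are illustrative assumptions; the thesis framework is far more general.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(protos, X, y):
    # Nearest-prototype accuracy; prototype i represents class i.
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return float(np.mean(np.argmin(d, axis=1) == y))

# Toy two-class, two-dimensional data.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Population of candidate prototype sets (one prototype per class).
pop = [rng.normal(1.5, 2.0, (2, 2)) for _ in range(20)]
for gen in range(50):
    # Variation: Gaussian mutation of each parent (inheritance with noise).
    offspring = [p + rng.normal(0, 0.3, p.shape) for p in pop]
    # Selection: the fittest half of parents plus offspring survives.
    pop = sorted(pop + offspring, key=lambda p: -fitness(p, X, y))[:20]

print("best nearest-prototype accuracy:", fitness(pop[0], X, y))
```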

    Feature Selection and Classifier Development for Radio Frequency Device Identification

    The proliferation of simple and low-cost devices, such as IEEE 802.15.4 ZigBee and Z-Wave, in Critical Infrastructure (CI) increases security concerns. Radio Frequency Distinct Native Attribute (RF-DNA) Fingerprinting facilitates biometric-like identification of electronic devices' emissions from variances in device hardware. Developing reliable classifier models using RF-DNA fingerprints is thus important for device discrimination, enabling reliable Device Classification (a one-to-many "looks most like" assessment) and Device ID Verification (a one-to-one "looks how much like" assessment). AFIT's prior RF-DNA work focused on Multiple Discriminant Analysis/Maximum Likelihood (MDA/ML) and Generalized Relevance Learning Vector Quantization-Improved (GRLVQI) classifiers. This work 1) introduces a new GRLVQI-Distance (GRLVQI-D) classifier that extends prior GRLVQI work by supporting alternative distance measures, 2) formalizes a framework for selecting competing distance measures for GRLVQI-D, 3) introduces response surface methods for optimizing GRLVQI and GRLVQI-D algorithm settings, 4) develops an MDA-based Loadings Fusion (MLF) Dimensional Reduction Analysis (DRA) method for improved classifier-based feature selection, 5) introduces the F-test as a DRA method for RF-DNA fingerprints, 6) provides a phenomenological understanding of test statistics and p-values, with KS-test and F-test statistic values being superior to p-values for DRA, and 7) introduces quantitative dimensionality assessment methods for DRA subset selection.
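    Point 5), using the F-test for DRA, has a direct analogue in common tooling. A minimal sketch, assuming the per-feature ANOVA F-statistic and ranking by the raw statistic rather than the p-value (consistent with point 6)); the synthetic matrix stands in for real RF-DNA fingerprints:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

# Synthetic stand-in for an RF-DNA fingerprint matrix (samples x features).
X, y = make_classification(n_samples=400, n_features=50,
                           n_informative=8, random_state=0)

F, p = f_classif(X, y)  # per-feature ANOVA F-statistic and p-value
top_k = 10
# Rank by the raw statistic, not the p-value, per the abstract's finding.
selected = np.argsort(F)[::-1][:top_k]
print("top features by F-statistic:", selected)
```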

    A review of neural networks in plant disease detection using hyperspectral data

    This paper reviews advanced Neural Network (NN) techniques available to process hyperspectral data, with a special emphasis on plant disease detection. First, we provide a review of NN mechanisms, types, models, and classifiers that use different algorithms to process hyperspectral data. We then highlight the current state of imaging and non-imaging hyperspectral data for early disease detection. The hybrid NN-hyperspectral approach has emerged as a powerful tool for disease detection and diagnosis. A Spectral Disease Index (SDI) is a ratio of different spectral bands of pure disease spectra. Subsequently, we introduce NN techniques for the rapid development of SDIs. We also highlight current challenges and future trends in the use of hyperspectral data.
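    Since the abstract defines an SDI as a ratio of spectral bands, a generic band-ratio computation over a hyperspectral cube can be sketched as below. The normalized-difference form, band indices, and threshold are illustrative assumptions, not a specific published index.

```python
import numpy as np

def spectral_disease_index(cube, band_a, band_b):
    """Generic band-ratio index over a hyperspectral cube (H x W x bands).

    The normalized-difference form and the band choices are illustrative,
    not a particular published SDI.
    """
    a = cube[..., band_a].astype(float)
    b = cube[..., band_b].astype(float)
    return (a - b) / (a + b + 1e-9)  # small epsilon avoids division by zero

# Usage on a synthetic 64x64 cube with 100 bands.
cube = np.random.rand(64, 64, 100)
sdi = spectral_disease_index(cube, band_a=70, band_b=40)
mask = sdi > 0.2  # threshold flags candidate diseased pixels (illustrative)
print("flagged pixel fraction:", mask.mean())
```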

    Intelligent strategies for mobile robotics in laboratory automation

    In this thesis a new intelligent framework is presented for mobile robots in laboratory automation, which includes: a new multi-floor indoor navigation method and an intelligent multi-floor path-planning approach; a new signal-filtering method that allows the robots to forecast their indoor coordinates; a new human-feature-based strategy for smart robot-human collision avoidance; a new robot power forecasting method for deciding distributed transportation tasks; and a new blind approach to arm manipulation for the robots.
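    Multi-floor path planning of the kind described reduces naturally to shortest-path search over a graph whose nodes carry a floor label and whose lift edges join floors. The sketch below is a minimal Dijkstra over a made-up two-floor map; all node names and edge costs are illustrative assumptions, not the thesis's method.

```python
import heapq

# Nodes are (floor, place); the lift edge joins floor 1 to floor 2.
graph = {
    (1, "lab"): [((1, "lift"), 4)],
    (1, "lift"): [((1, "lab"), 4), ((2, "lift"), 10)],  # lift ride
    (2, "lift"): [((1, "lift"), 10), ((2, "store"), 3)],
    (2, "store"): [((2, "lift"), 3)],
}

def dijkstra(start, goal):
    frontier, seen = [(0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None

print(dijkstra((1, "lab"), (2, "store")))
```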

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: on the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems; on the other hand, it explores the potential benefits of Soft Computing methodologies as novel, effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. The work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here aims to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. The document is therefore assembled as a monograph that summarizes the outcomes of these projects across 12 chapters, structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following areas: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis is the development of two evolutionary approaches for global optimization. These were tested on complex benchmark datasets and showed promising results, opening the way for future applications. Moreover, the applications to Computer Vision and medicine presented in this work highlight the utility of different Soft Computing methodologies for solving problems in these fields. A milestone in this area is the translation of Computer Vision and medical problems into optimization problems. Additionally, this work strives to provide tools for combating public health issues by extending these concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the rapid growth in the incidence of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. The research conducted here therefore contributes an important piece to expanding these developments. The applications presented in this work are intended to serve as technological tools that can be used in the development of new devices.
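    The claim that vision tasks can be recast as optimization problems is easy to make concrete with thresholding. The sketch below assumes an Otsu-style between-class-variance objective (our choice of objective, not necessarily the thesis's) and finds the threshold that maximizes it by exhaustive search; an evolutionary algorithm would search the same objective when the parameter space is too large to enumerate.

```python
import numpy as np

def between_class_variance(img, t):
    # Otsu's objective: weighted squared gap between the two classes cut at t.
    lo, hi = img[img < t], img[img >= t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / img.size, hi.size / img.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

# Synthetic bimodal "image" intensities in [0, 255].
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(170, 15, 5000)]).clip(0, 255)
best_t = max(range(1, 255), key=lambda t: between_class_variance(img, t))
print("optimal threshold:", best_t)
```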

    Evaluation of LVQ4 artificial neural network model for predicting spatial distribution pattern of Tuta absoluta in Ramhormoz, Iran

    In this research, a Learning Vector Quantization (LVQ) neural network model was developed to predict the spatial distribution of Tuta absoluta in tomato fields of the city of Ramhormoz, Iran. Pest density was assessed on a 10 m × 10 m grid pattern over the field, with a total of 100 sampling units. Statistical tests, including comparisons of means, variances and statistical distributions, were performed between the sampling-point data and the estimated pest values in order to evaluate the quality of the predicted pest distribution. In both the training and test phases, there was no significant difference in mean, variance, statistical distribution or coefficient of determination at the 95% confidence level. The results suggest that the LVQ neural network can learn the pest density model precisely, and the trained network showed a high capability (88%) of predicting pest density at non-sampled points. The LVQ neural network successfully predicted and mapped the spatial distribution of Tuta absoluta, whose aggregated distribution implies the possibility of using site-specific pest control in the field.
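    The evaluation protocol described (comparing means, variances, and distributions of observed versus estimated densities) can be sketched with standard tests, assuming a t-test, Levene's test, and a two-sample KS test as concrete choices; the synthetic numbers below merely stand in for the field data.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins: observed densities at 100 grid cells and the
# corresponding model estimates (the real values are the paper's data).
observed = np.random.gamma(2.0, 1.5, 100)
predicted = observed + np.random.normal(0, 0.4, 100)

t_stat, p_mean = stats.ttest_ind(observed, predicted)  # means comparison
lev_stat, p_var = stats.levene(observed, predicted)    # variance comparison
ks_stat, p_dist = stats.ks_2samp(observed, predicted)  # distribution shape

# p > 0.05 on all three supports "no significant difference at 95%".
print(p_mean > 0.05, p_var > 0.05, p_dist > 0.05)
```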