1,843 research outputs found

    A Comparative Study of Machine Learning Models for Tabular Data Through Challenge of Monitoring Parkinson's Disease Progression Using Voice Recordings

    People with Parkinson's disease must be monitored regularly by their physician to observe how the disease is progressing and, if necessary, adjust treatment plans to mitigate the symptoms. Monitoring disease progression through a voice recording captured by the patient at home can make the process faster and less stressful. Using a dataset of voice recordings from 42 people with early-stage Parkinson's disease collected over a span of six months, we applied multiple machine learning techniques to relate the voice recordings to the patient's motor UPDRS score. We approached this problem with a range of both regression and classification techniques. Much of this paper is dedicated to mapping the voice data to motor UPDRS scores using regression techniques in order to obtain a more precise value for unknown instances. Through this comparative study of different machine learning methods, we found that older methods such as tree-based models outperform cutting-edge deep learning models on numerous tabular datasets.
    Comment: Accepted at "HIMS'20 - The 6th Int'l Conf on Health Informatics and Medical Systems"; https://americancse.org/events/csce2020/conferences/hims2
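    A minimal sketch of the kind of tabular comparison this abstract describes is shown below, using scikit-learn: a tree-based ensemble versus a small neural network scored with cross-validated error against a UPDRS-like target. The feature matrix, target, and model settings are placeholders, not the paper's dataset or configuration.

```python
# Hedged sketch: tree ensemble vs. MLP on a placeholder tabular regression task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(588, 16))                                        # placeholder voice features
y = 20 + 3 * X[:, 0] - X[:, 1] ** 2 + rng.normal(scale=2, size=588)  # placeholder motor UPDRS scores

models = {
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    # 5-fold cross-validated mean absolute error for each model
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.2f}")
```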

    Comparative study of state-of-the-art machine learning models for analytics-driven embedded systems

    Analytics-driven embedded systems are gaining a foothold faster than ever in the current digital era. The innovation of the Internet of Things (IoT) has generated an entire ecosystem of devices communicating and exchanging data automatically in an interconnected global network. The ability to efficiently process and utilize the enormous amount of data generated by an ensemble of embedded devices such as RFID tags and sensors enables engineers to build smart real-world systems. An analytics-driven embedded system explores and processes the data in situ or remotely to identify patterns in the behavior of the system, which in turn can be used to automate actions and impart decision-making capability to a device. Designing an intelligent data-processing model is paramount for reaping the benefits of data analytics, because a poorly designed analytics infrastructure would degrade the system's performance and effectiveness. Many aspects of this data make the analytics task complex and challenging, and hence a suitable candidate for big data techniques. Big data is mainly characterized by its high volume, widely varied data types, and high velocity; all of these properties dictate the choice of data mining techniques used to design the analytics model. Image datasets, such as face recognition or satellite imagery, tend to perform better with deep learning algorithms, while time-series datasets, such as sensor data from wearable devices, give better results with clustering and supervised learning models. A regression model suits a multivariate dataset such as appliances energy prediction or forest fire data. Each machine learning task has a wide range of algorithms, which can be used in combination to create an intelligent data analysis model. In this study, a comprehensive comparative analysis was conducted using different datasets freely available in an online machine learning repository to analyze the performance of state-of-the-art machine learning algorithms. The WEKA data mining toolkit was used to evaluate C4.5, Naïve Bayes, Random Forest, kNN, SVM, and Multilayer Perceptron for classification models. Linear Regression, Gradient Boosting Machine (GBM), Multilayer Perceptron, kNN, Random Forest, and Support Vector Machines (SVM) were applied to datasets suited to regression. Datasets were trained and analyzed in different experimental setups, and a qualitative comparative analysis was performed with k-fold cross-validation (CV) and a paired t-test in the WEKA experimenter.
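    The sketch below gives a rough Python analogue of this evaluation protocol (the study itself used the WEKA experimenter): two classifiers are scored on the same k-fold splits and their fold-wise accuracies are compared with a paired t-test. Note that WEKA's experimenter applies a corrected variant of this test; the dataset here is a bundled toy set, not one of the study's datasets.

```python
# Hedged sketch: k-fold comparison of two classifiers with a paired t-test over folds.
import numpy as np
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

# Same splits for both models, so fold scores are paired observations.
rf_scores = cross_val_score(RandomForestClassifier(random_state=1), X, y, cv=cv)
nb_scores = cross_val_score(GaussianNB(), X, y, cv=cv)

t_stat, p_value = stats.ttest_rel(rf_scores, nb_scores)   # paired t-test over folds
print(f"Random Forest: {rf_scores.mean():.3f}  Naive Bayes: {nb_scores.mean():.3f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```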

    Modeling Financial Time Series with Artificial Neural Networks

    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regression models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis.
    Funding: CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
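    As a toy illustration of the comparison this review surveys, the sketch below forecasts one step ahead from lagged values with a linear autoregressive baseline and a small neural network. The series is synthetic and the models are arbitrary choices; the review itself covers real financial data and many more architectures.

```python
# Hedged sketch: linear AR baseline vs. MLP on lagged windows of a synthetic series.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(1500)
series = np.sin(0.05 * t) + 0.3 * np.sin(0.23 * t) + rng.normal(scale=0.1, size=t.size)

lags = 10
X = np.column_stack([series[i:i - lags] for i in range(lags)])  # rows of 10 past values
y = series[lags:]                                               # next value to predict
split = 1200
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

for name, model in [("linear AR", LinearRegression()),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0))]:
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.4f}")
```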

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of the many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
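    The universal-approximation idea mentioned above can be illustrated with the small sketch below: an MLP fitted to a noisy nonlinear 1-D function, with early stopping as one example of the generalization techniques such papers survey. The target function and network size are arbitrary choices, not the paper's experiments.

```python
# Hedged sketch: MLP approximating a nonlinear function, regularized by early stopping.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(2 * x[:, 0]) + 0.1 * rng.normal(size=2000)   # noisy samples of sin(2x)

net = MLPRegressor(hidden_layer_sizes=(50, 50),
                   early_stopping=True,          # hold out part of the training data
                   validation_fraction=0.1,      # and stop when validation loss stalls
                   max_iter=5000,
                   random_state=0)
net.fit(x, y)

x_test = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.round(net.predict(x_test), 2))          # network output
print(np.round(np.sin(2 * x_test[:, 0]), 2))     # true underlying function
```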

    Adaptive nonlinear system identification and channel equalization using functional link artificial neural network

    In system theory, characterization and identification are fundamental problems. When the plant behavior is completely unknown, it may be characterized using a suitable model, and its identification may then be carried out with artificial neural networks (ANNs) such as the multilayer perceptron (MLP) or the functional link artificial neural network (FLANN), using learning rules such as the back-propagation (BP) algorithm. These networks offer flexibility, adaptability, and versatility, so that a variety of approaches may be used to meet a specific goal, depending upon the circumstances and the requirements of the design specifications. The primary aim of the present thesis is to provide a framework for the systematic design of adaptation laws for nonlinear system identification and channel equalization. While constructing an artificial neural network, the designer is often faced with the problem of choosing a network of the right size for the task. The advantages of using a smaller neural network are cheaper computation and better generalization ability; however, a network that is too small may never solve the problem, while a larger network may even have the advantage of a faster learning rate. It therefore makes sense to start with a large network and then reduce its size. For this reason, a Genetic Algorithm (GA) based pruning strategy is reported. The GA is based upon the process of natural selection and does not require error gradient statistics; as a consequence, it is able to find a global error minimum. Transmission bandwidth is one of the most precious resources in digital communication systems. Communication channels are usually modeled as band-limited linear finite impulse response (FIR) filters with a low-pass frequency response. When the amplitude and the envelope delay response are not constant within the bandwidth of the filter, the channel distorts the transmitted signal, causing intersymbol interference (ISI). The addition of noise during propagation also degrades the quality of the received signal. All the signal processing methods used at the receiver to compensate for the channel distortion and recover the transmitted symbols are referred to as channel equalization techniques. When the nonlinearity associated with the system or the channel is strong, the number of branches in the FLANN increases, and in some cases the performance is poor. To reduce the number of branches and improve the performance, a two-stage FLANN called the cascaded FLANN (CFLANN) is proposed. This thesis presents a comprehensive study of artificial neural network (ANN) implementations for nonlinear system identification and channel equalization. Three ANN structures, the MLP, FLANN, and CFLANN, and their conventional gradient-descent training methods are studied extensively. Simulation results demonstrate that the FLANN and CFLANN methods are directly applicable to a large class of nonlinear control systems and communication problems.
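    To make the FLANN idea concrete, the sketch below implements a minimal trigonometric FLANN in Python: the scalar input is expanded through a small trigonometric basis, and a single linear output layer is adapted with an LMS-style update to identify an illustrative static nonlinear plant. The plant, expansion order, and step size are arbitrary choices, not those of the thesis.

```python
# Hedged sketch: trigonometric FLANN with an LMS update for system identification.
import numpy as np

def expand(x, order=2):
    """Trigonometric functional expansion of a scalar input."""
    terms = [1.0, x]
    for k in range(1, order + 1):
        terms += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
    return np.array(terms)

def plant(x):
    # Illustrative unknown nonlinear system to be identified.
    return 0.6 * x + 0.3 * x ** 2 - 0.1 * x ** 3

rng = np.random.default_rng(0)
w = np.zeros(expand(0.0).size)      # weights of the single linear output layer
mu = 0.05                           # LMS step size

for _ in range(20000):
    x = rng.uniform(-1, 1)
    phi = expand(x)
    e = plant(x) + rng.normal(scale=0.01) - w @ phi   # output error vs. noisy plant
    w += mu * e * phi                                 # LMS weight update

for x in (-0.8, -0.3, 0.2, 0.7):
    print(f"x={x:+.1f}  plant={plant(x):+.3f}  flann={w @ expand(x):+.3f}")
```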

    Missing Value Imputation With Unsupervised Backpropagation

    Many data mining and data analysis techniques operate on dense matrices or complete tables of data. Real-world data sets, however, often contain unknown values. Even many classification algorithms that are designed to operate with missing values still exhibit deteriorated accuracy. One approach to handling missing values is to fill in (impute) the missing values. In this paper, we present a technique for unsupervised learning called Unsupervised Backpropagation (UBP), which trains a multi-layer perceptron to fit the manifold sampled by a set of observed point-vectors. We evaluate UBP on the task of imputing missing values in datasets, and show that UBP is able to predict missing values with significantly lower sum-squared error than other collaborative filtering and imputation techniques. We also demonstrate, with 24 datasets and 9 supervised learning algorithms, that classification accuracy is usually higher when randomly withheld values are imputed using UBP rather than with other methods.
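    The PyTorch sketch below captures the spirit of this setup rather than the authors' implementation: a latent vector per row and a small decoder network are trained jointly by backpropagating squared error on the observed entries only, and imputed values for the missing entries are then read off the decoder's output. The synthetic matrix, missingness rate, and network sizes are placeholders.

```python
# Hedged sketch: latent vectors + decoder MLP trained on observed entries only.
import torch

torch.manual_seed(0)
n_rows, n_cols, n_latent = 200, 8, 3

# Synthetic data matrix with ~20% of entries masked as "missing".
true = torch.randn(n_rows, n_latent) @ torch.randn(n_latent, n_cols)
mask = torch.rand(n_rows, n_cols) > 0.2            # True where an entry is observed

latent = torch.randn(n_rows, n_latent, requires_grad=True)
decoder = torch.nn.Sequential(
    torch.nn.Linear(n_latent, 32), torch.nn.ReLU(), torch.nn.Linear(32, n_cols)
)
opt = torch.optim.Adam([latent, *decoder.parameters()], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    pred = decoder(latent)
    loss = ((pred - true)[mask] ** 2).mean()       # squared error on observed entries only
    loss.backward()                                # gradients flow to decoder AND latent vectors
    opt.step()

imputed = decoder(latent).detach()
rmse = ((imputed - true)[~mask] ** 2).mean().sqrt()
print(f"RMSE on held-out (missing) entries: {rmse:.3f}")
```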

    Predictive modelling of hospital readmissions in diabetic patients clusters

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence.
    Diabetes is a global public health problem with increasing incidence over the past 10 years. The social and economic impacts of the disease are widely assessed worldwide, showing a direct and gradual decrease in the individual's ability to work, a gradual loss in quality of life, and a burden on personal finances. Recurrent hospitalisation is one of the most significant indicators for measuring the quality of care and the opportunity to optimise resources. Numerous techniques, such as LACE and HOSPITAL, identify the patients who will need to be readmitted. The purpose of this study was to use a dataset on the risk of hospital readmission in patients with diabetes, first to cluster subgroups of patients by similarity and then to build a predictive analysis with the main algorithms in order to identify the best-performing methodology. Several approaches were used to prepare the dataset for these two interventions. The first phase, with K = 3, produced clusters characterized by the total number of hospital readmissions and by total administrative costs. In the second phase, the best algorithm found was Neural Network 3, with an ROC of 0.68 and a misclassification rate of 0.37. When the same algorithm was applied within the clusters, there were no gains in the confidence of the indexes, suggesting that there is no substantial gain in dividing the population into subgroups, since the disease shows the same behaviour and needs throughout its development.
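    The two-phase pipeline described here can be outlined roughly as below with scikit-learn: K-means clustering of the patients, followed by a cross-validated classifier evaluated on the whole population and within each cluster. Features, target, cluster count, and model are placeholders; the dissertation's data, preprocessing, and tooling differ.

```python
# Hedged sketch: cluster first, then compare classifier AUC overall vs. per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 12))                                            # placeholder patient features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=3000) > 0.8).astype(int)   # placeholder readmission label

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

def auc(X_, y_):
    """Cross-validated ROC AUC of a small neural network on the given subset."""
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    return cross_val_score(clf, X_, y_, cv=5, scoring="roc_auc").mean()

print(f"whole population: AUC = {auc(X, y):.3f}")
for c in range(3):
    idx = clusters == c
    print(f"cluster {c} (n={idx.sum()}): AUC = {auc(X[idx], y[idx]):.3f}")
```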