452 research outputs found

    Prediction of nonlinear nonstationary time series data using a digital filter and support vector regression

    No full text
    Volatility is a key parameter when measuring the size of the errors made in modelling returns and other nonlinear, nonstationary time series data. The Autoregressive Integrated Moving-Average (ARIMA) model is a linear time series process, whilst for nonlinear systems the Generalised Autoregressive Conditional Heteroskedasticity (GARCH) and Markov Switching GARCH (MS-GARCH) models have been widely applied. In statistical learning theory, Support Vector Regression (SVR) plays an important role in predicting nonlinear and nonstationary time series data. We propose a new class of model combining a novel derivative of Empirical Mode Decomposition (EMD), the averaging intrinsic mode function (aIMF), with a novel multiclass SVR using mean reversion and the coefficient of variance (CV) to predict financial data, i.e. EUR-USD exchange rates. The proposed aIMF is capable of smoothing and reducing noise, whereas the novel multiclass SVR model can predict exchange rates. Our simulation results show that our model significantly outperforms the state-of-the-art ARIMA, GARCH, MS-GARCH, Markov Switching Regression (MSR) and Markov chain Monte Carlo (MCMC) regression models.
    Open Access
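The SVR stage of such a pipeline can be sketched on a toy series. This is a minimal illustration, not the authors' aIMF+SVR model: it fits a plain scikit-learn SVR on sliding windows of a synthetic nonlinear, nonstationary signal, and every parameter below (window length, kernel, C, epsilon) is an assumption.

```python
import numpy as np
from sklearn.svm import SVR

# Toy nonlinear, nonstationary series: trend + oscillation + noise
rng = np.random.default_rng(0)
t = np.arange(300)
series = 0.01 * t + np.sin(0.2 * t) + 0.1 * rng.standard_normal(300)

# Sliding-window regression: predict x[t] from the previous `lag` values
lag = 10
X = np.array([series[i - lag:i] for i in range(lag, len(series))])
y = series[lag:]

split = 250  # train on the first 250 windows, test on the rest
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead test RMSE: {rmse:.3f}")
```

In the paper's setting the input windows would first be smoothed by the aIMF step rather than fed raw to the regressor.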

    Electroencephalographic Signal Processing and Classification Techniques for Noninvasive Motor Imagery Based Brain Computer Interface

    Get PDF
    In motor imagery (MI) based brain-computer interface (BCI), success depends on reliable processing of the noisy, non-linear and non-stationary brain activity signals to extract features, classify MI activity effectively and translate it into the corresponding intended actions. In this study, signal processing and classification techniques are presented for electroencephalogram (EEG) signals for motor imagery based brain-computer interface. EEG signals have been acquired by placing electrodes following the international 10-20 system. The acquired signals have been pre-processed to remove artifacts using empirical mode decomposition (EMD) and two extended versions of EMD, ensemble empirical mode decomposition (EEMD) and multivariate empirical mode decomposition (MEMD), leading to a better signal-to-noise ratio (SNR) and reduced mean square error (MSE) compared to independent component analysis (ICA). EEG signals have been decomposed into intrinsic mode functions (IMFs) that are further processed to extract features such as sample entropy (SampEn) and band power (BP). The extracted features have been used in support vector machines to characterize and identify MI activities. EMD and its variants, EEMD and MEMD, have been compared with common spatial patterns (CSP) for different MI activities. SNR values from EMD, EEMD and MEMD (4.3, 7.64, 10.62) are much better than from ICA (2.1), but the accuracy of MI activity identification is slightly better for ICA than for EMD using BP and SampEn. Further work is outlined to include more features with a larger database for better classification accuracy.
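The band-power feature extraction and SVM classification steps can be sketched as follows. This is a hedged toy example, not the study's EMD pipeline: it computes Welch band power in the 8-13 Hz mu band on synthetic trials and feeds it to a scikit-learn SVC; the sampling rate, band edges and trial construction are all assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

fs = 160  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)

def band_power(sig, fs, lo, hi):
    """Mean Welch PSD estimate within [lo, hi] Hz."""
    f, psd = welch(sig, fs=fs, nperseg=fs)
    return psd[(f >= lo) & (f <= hi)].mean()

# Synthetic 2 s trials: class 0 has stronger 10 Hz (mu-band) power than
# class 1, loosely mimicking event-related desynchronization during MI.
def make_trial(label):
    t = np.arange(2 * fs) / fs
    amp = 1.0 if label == 0 else 0.3
    return amp * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

labels = np.array([0, 1] * 40)
X = np.array([[band_power(make_trial(l), fs, 8, 13)] for l in labels])

clf = SVC(kernel="rbf").fit(X[:60], labels[:60])
acc = (clf.predict(X[60:]) == labels[60:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

In the study, the band power would be computed on EMD/EEMD/MEMD-denoised IMFs rather than on the raw trials, and combined with sample entropy.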

    Gait Cycle-Inspired Learning Strategy for Continuous Prediction of Knee Joint Trajectory from sEMG

    Full text link
    Predicting lower limb motion intent is vital for controlling exoskeleton robots and prosthetic limbs. Surface electromyography (sEMG) has attracted increasing attention in recent years as it enables ahead-of-time prediction of motion intentions before actual movement. However, estimating human joint trajectories remains a challenging problem due to inter- and intra-subject variation. The former is related to physiological differences (such as height and weight) and the preferred walking patterns of individuals, while the latter is mainly caused by irregular and gait-irrelevant muscle activity. This paper proposes a model integrating two gait cycle-inspired learning strategies to address this challenge in predicting human knee joint trajectory. The first strategy is to decouple knee joint angles into motion patterns and amplitudes: the former exhibit low variability while the latter show high variability among individuals. By learning through separate network entities, the model manages to capture both the common and the personalized gait features. In the second strategy, muscle principal activation masks are extracted from gait cycles in a prolonged walk. These masks are used to filter out components unrelated to walking from raw sEMG and provide auxiliary guidance to capture more gait-related features. Experimental results indicate that our model can predict knee angles with an average root mean square error (RMSE) of 3.03 (0.49) degrees, 50 ms ahead of time. To our knowledge, this is the best performance reported in the relevant literature, reducing RMSE by at least 9.5%.
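The 50 ms ahead-of-time prediction setup can be sketched as a data-pairing step: each sEMG window is matched with the knee angle a fixed lead time later, so any regressor trained on these pairs predicts ahead of the actual movement. Everything below (sampling rate, window length, placeholder signals) is an assumption for illustration, not the paper's actual preprocessing.

```python
import numpy as np

fs = 1000               # assumed sEMG sampling rate (Hz)
lead = int(0.050 * fs)  # 50 ms prediction horizon, as reported in the abstract
win = 200               # assumed feature-window length (samples)

rng = np.random.default_rng(2)
emg = rng.standard_normal(5000)                  # placeholder sEMG channel
knee = np.sin(2 * np.pi * np.arange(5000) / fs)  # placeholder knee angle (1 Hz gait)

# Pair each sEMG window ending at sample i with the knee angle `lead`
# samples later; a model fit on (X, y) is then a 50 ms-ahead predictor.
X = np.array([emg[i - win:i] for i in range(win, len(emg) - lead)])
y = knee[win + lead:]
print(X.shape, y.shape)
```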

    Deep Learning Techniques for Music Generation -- A Survey

    Full text link
    This paper is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. We propose a methodology based on five dimensions for our analysis:
    - Objective: What musical content is to be generated (e.g. melody, polyphony, accompaniment or counterpoint)? For what destination and for what use: to be performed by a human (in the case of a musical score) or by a machine (in the case of an audio file)?
    - Representation: What are the concepts to be manipulated (e.g. waveform, spectrogram, note, chord, meter and beat)? What format is to be used (e.g. MIDI, piano roll or text)? How will the representation be encoded (e.g. scalar, one-hot or many-hot)?
    - Architecture: What type(s) of deep neural network is (are) to be used (e.g. feedforward network, recurrent network, autoencoder or generative adversarial network)?
    - Challenge: What are the limitations and open challenges (e.g. variability, interactivity and creativity)?
    - Strategy: How do we model and control the process of generation (e.g. single-step feedforward, iterative feedforward, sampling or input manipulation)?
    For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This typology is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation selected from the relevant literature. These systems are described and used to exemplify the various choices of objective, representation, architecture, challenge and strategy. The last section includes some discussion and some prospects.
    Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 201
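The one-hot / piano-roll encoding named under the Representation dimension can be shown in a few lines. This is a generic sketch; the melody and array layout are arbitrary choices for illustration, not taken from the survey.

```python
import numpy as np

# Hypothetical toy melody as MIDI note numbers (a C major arpeggio)
melody = [60, 64, 67, 72]

# One-hot / piano-roll encoding over the 128 MIDI pitches:
# one time step per row, a single 1 at the sounding pitch.
roll = np.zeros((len(melody), 128), dtype=np.int8)
for step, pitch in enumerate(melody):
    roll[step, pitch] = 1

print(roll.shape, int(roll.sum()))
```

A many-hot variant would simply allow several 1s per row, one per simultaneously sounding pitch (polyphony).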

    Recent Advances in Embedded Computing, Intelligence and Applications

    Get PDF
    The latest proliferation of Internet of Things deployments and edge computing combined with artificial intelligence has led to new exciting application scenarios, where embedded digital devices are essential enablers. Moreover, new powerful and efficient devices are appearing to cope with workloads formerly reserved for the cloud, such as deep learning. These devices allow processing close to where data are generated, avoiding bottlenecks due to communication limitations. The efficient integration of hardware, software and artificial intelligence capabilities deployed in real sensing contexts empowers the edge intelligence paradigm, which will ultimately foster the offloading of processing functionalities to the edge. In this Special Issue, researchers have contributed nine peer-reviewed papers covering a wide range of topics in the area of edge intelligence. Among them are hardware-accelerated implementations of deep neural networks, IoT platforms for extreme edge computing, neuro-evolvable and neuromorphic machine learning, and embedded recommender systems

    Deep Learning in Medical Image Analysis

    Get PDF
    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and understanding the underlying biological process. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis

    Bioinformatics Applications Based On Machine Learning

    Get PDF
    The great advances in information technology (IT) have implications for many sectors, such as bioinformatics, and have considerably increased their possibilities. This book presents a collection of 11 original research papers, all of them related to the application of IT-related techniques within the bioinformatics sector: from new applications created from the adaptation and application of existing techniques to the creation of new methodologies to solve existing problems

    Music emotion recognition: a multimodal machine learning approach

    Get PDF
    Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community, and searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical content in online databases such as Last.fm has expanded exponentially, requiring substantial manual effort to manage and keep updated. Therefore, the demand for innovative and adaptable search mechanisms, which can be personalized according to users' emotional state, has gained increasing consideration in recent years. This thesis concentrates on addressing the music emotion recognition problem by presenting several classification models fed by textual features as well as audio attributes extracted from the music. In this study, we build both supervised and semi-supervised classification designs under four research experiments that address the emotional role of audio features, such as tempo, acousticness and energy, and also the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multimodal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1500 labeled song lyrics, together with an unlabeled corpus of more than 2.5 million Turkish documents, in order to build an accurate automatic emotion classification system. The analytical models were built by applying several algorithms to cross-validated data using Python. In conclusion, the best performance attained was 44.2% accuracy when employing only audio features, whereas with textual features better performances were observed, with accuracy scores of 46.3% and 51.3% under the supervised and semi-supervised learning paradigms, respectively. Finally, although we created a comprehensive feature set combining audio and textual features, this approach did not yield any significant improvement in classification performance.
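The TF-IDF-plus-classifier route for lyrics can be sketched with scikit-learn. The snippets and labels below are invented for illustration and are not from the thesis dataset; the thesis additionally uses Word2Vec features and audio attributes, which are omitted here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Invented toy lyric snippets with emotion labels (not the thesis data)
lyrics = [
    "dancing all night under shining lights",
    "joy and laughter fill the summer air",
    "tears fall down in the lonely rain",
    "broken heart in an empty room",
]
labels = ["happy", "happy", "sad", "sad"]

# TF-IDF turns each lyric into a sparse term-weight vector
vec = TfidfVectorizer()
X = vec.fit_transform(lyrics)

# A linear classifier on the TF-IDF vectors
clf = LinearSVC().fit(X, labels)
pred = clf.predict(vec.transform(["lonely tears in the rain"]))
print(pred[0])
```

Semi-supervised variants, as in the thesis, would additionally exploit the unlabeled corpus, e.g. by self-training on confidently classified documents.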

    Adversarial training to improve robustness of adversarial deep neural classifiers in the NOvA experiment

    Get PDF
    The NOvA experiment is a long-baseline neutrino oscillation experiment consisting of two functionally identical detectors situated off-axis in Fermilab's NuMI neutrino beam. The Near Detector observes the unoscillated beam at Fermilab, while the Far Detector observes the oscillated beam 810 km away. This allows for measurements of the oscillation probabilities for multiple oscillation channels, ν_µ → ν_µ, anti-ν_µ → anti-ν_µ, ν_µ → ν_e and anti-ν_µ → anti-ν_e, leading to measurements of the neutrino oscillation parameters sin^2 θ_23, ∆m^2_32 and δ_CP. These measurements are produced from an extensive analysis of the recorded data. Deep neural networks are deployed at multiple stages of this analysis. The Event CVN network is deployed to identify and classify the interaction types of selected neutrino events. The effects of the systematic uncertainties present in the measurements on the network performance are investigated and found to cause negligible variations. The robustness of these network trainings is therefore demonstrated, which further justifies their current usage in the analysis beyond the standard validation. The effects on network performance of larger systematic alterations to the training datasets, beyond the systematic uncertainties, such as an exchange of the neutrino event generators, are investigated. The differences in network performance corresponding to the introduced variations are found to be minimal. Domain adaptation techniques are implemented in the AdCVN framework. These methods are deployed to improve the Event CVN robustness in scenarios with systematic variations in the underlying data
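The robustness check described (comparing classifier performance on nominal versus systematically altered datasets) can be sketched with a toy model. The Gaussian "event" features, the 0.2 shift and the logistic classifier are all assumptions, vastly simpler than the Event CVN study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Two-class toy "events": nominal simulation vs a systematically
# shifted variant (standing in for, e.g., a generator exchange)
def sample(n, shift=0.0):
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on nominal data only
X_train, y_train = sample(500)
clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on a nominal and a systematically altered test set
X_nom, y_nom = sample(500)
X_sys, y_sys = sample(500, shift=0.2)
acc_nom = clf.score(X_nom, y_nom)
acc_sys = clf.score(X_sys, y_sys)
print(f"nominal {acc_nom:.2f}, shifted {acc_sys:.2f}")
```

A small gap between the two accuracies corresponds to the "negligible variations" reported; domain adaptation (as in AdCVN) would aim to shrink that gap further when the shift is large.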