
    Linear Time Feature Selection for Regularized Least-Squares

    We propose a novel algorithm for greedy forward feature selection for regularized least-squares (RLS) regression and classification, also known as the least-squares support vector machine or ridge regression. The algorithm, which we call greedy RLS, starts from the empty feature set and, on each iteration, adds the feature whose addition provides the best leave-one-out cross-validation performance. Our method is considerably faster than previously proposed ones, since its time complexity is linear in the number of training examples, the number of features in the original data set, and the desired size of the set of selected features. As a side effect, we therefore obtain a new training algorithm for learning sparse linear RLS predictors, which can be used for large-scale learning. This speed is made possible by matrix-calculus shortcuts for leave-one-out estimation and feature addition. We experimentally demonstrate the scalability of our algorithm and its ability to find feature sets of good quality. Comment: 17 pages, 15 figures.
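    A minimal sketch of the selection loop described above, using the closed-form leave-one-out identity for ridge regression. This naive version recomputes a hat matrix per candidate and so does not achieve the paper's linear-time complexity; function names and the regularization value are illustrative.

```python
import numpy as np

def loo_error(X, y, lam=1.0):
    """Closed-form leave-one-out MSE for ridge regression (RLS),
    via the identity e_i = (y_i - yhat_i) / (1 - H_ii)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    H = X @ np.linalg.solve(A, X.T)          # hat matrix
    loo = (y - H @ y) / (1.0 - np.diag(H))   # leave-one-out residuals
    return float(np.mean(loo ** 2))

def greedy_rls(X, y, k, lam=1.0):
    """Greedy forward selection: starting from the empty set, add on each
    iteration the feature whose addition minimizes the LOO error."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = {j: loo_error(X[:, selected + [j]], y, lam) for j in remaining}
        best = min(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```

    In this sketch each step costs a full ridge solve per candidate; the paper's contribution is precisely the matrix-calculus updates that avoid this recomputation.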

    DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout

    The paper presents a novel, principled approach to training recurrent neural networks from the Reservoir Computing family that are robust to missing part of the input features at prediction time. Building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology in the context of Reservoir Computing models, targeting applications characterized by input sources that are unreliable or prone to disconnection, such as pervasive wireless sensor networks and ambient intelligence. We provide an experimental assessment using real-world data from such application domains, showing how the DropIn methodology maintains predictive performance comparable to that of a model without missing features, even when 20%-50% of the inputs are not available.
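    The core training idea, dropout applied to the inputs, can be sketched with a plain linear readout standing in for a reservoir: at each update a random subset of input features is zeroed (with inverted-dropout rescaling), so the learned weights remain usable when features go missing at prediction time. The simple gradient loop and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def train_dropin(X, y, p_keep=0.7, epochs=200, lr=0.05, seed=0):
    """Train a linear readout with input dropout: each step zeroes a random
    subset of input features, so the model implicitly becomes a committee
    of predictors over feature subsets."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        mask = (rng.random(d) < p_keep) / p_keep  # inverted-dropout scaling
        Xm = X * mask                             # inputs with features dropped
        grad = Xm.T @ (Xm @ w - y) / n            # MSE gradient on masked inputs
        w -= lr * grad
    return w
```

    At prediction time a missing feature corresponds to a zeroed input, which the training distribution has already exposed the model to.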

    Identification of nonlinear time-varying systems using an online sliding-window and common model structure selection (CMSS) approach with applications to EEG

    The identification of nonlinear time-varying systems using linear-in-the-parameters models is investigated. A new, efficient Common Model Structure Selection (CMSS) algorithm is proposed to select a common model structure. The main idea and key procedure are as follows: first, generate K+1 data sets (the first K are used for training, and the (K+1)th is used for testing) using an online sliding-window method; then detect significant model terms to form a common model structure that fits all K training data sets using the newly proposed CMSS approach; finally, estimate and refine the time-varying parameters of the identified common-structured model using a Recursive Least Squares (RLS) parameter estimation method. The new method can effectively detect and adaptively track the transient variation of nonstationary signals. Two examples, including an application to an EEG data set, are presented to illustrate the effectiveness of the new approach.
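    The final stage, tracking the time-varying parameters of the common-structured model with Recursive Least Squares, can be sketched as follows. The structure-selection stage is omitted, and the forgetting factor and initialization are assumed details, not taken from the paper.

```python
import numpy as np

def rls_track(Phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam: tracks
    time-varying parameters theta_t in y_t = phi_t^T theta_t + e_t.
    Phi holds one regressor vector phi_t per row."""
    n, d = Phi.shape
    theta = np.zeros(d)
    P = delta * np.eye(d)                     # large initial covariance
    history = np.zeros((n, d))
    for t in range(n):
        phi = Phi[t]
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        theta = theta + k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam  # covariance update with forgetting
        history[t] = theta
    return history
```

    A forgetting factor below 1 discounts old data, which is what lets the estimate follow transient variation in nonstationary signals.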

    In silico case studies of compliant robots: AMARSI deliverable 3.3

    In Deliverable 3.2 we presented how the morphological computation approach can significantly facilitate the control strategy in several scenarios, e.g. quadruped locomotion, bipedal locomotion, and reaching. In particular, the Kitty experimental platform is an example of the use of morphological computation to achieve quadruped locomotion. In this deliverable we continue with simulation studies on the application of the different morphological computation strategies to control a robotic system.

    A Training Sample Sequence Planning Method for Pattern Recognition Problems

    In solving pattern recognition problems, many classification methods, such as the nearest-neighbor (NN) rule, need to determine prototypes from a training set. To improve the performance of these classifiers in finding an efficient set of prototypes, this paper introduces a training sample sequence planning method. In particular, by estimating the relative nearness of the training samples to the decision boundary, the proposed approach incrementally increases the number of prototypes until the desired classification accuracy has been reached. The approach has been tested with an NN classification method and a neural network training approach. Studies based on both artificial and real data demonstrate that higher classification accuracy can be achieved with fewer prototypes.
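    A minimal sketch of the idea with a 1-NN classifier: nearness to the decision boundary is estimated as the distance to the nearest opposite-class sample, and boundary-near samples are added as prototypes until the target training accuracy is reached. The nearness estimate and stopping rule here are simple stand-ins for the paper's method.

```python
import math

def boundary_order(X, y):
    """Rank training samples by estimated nearness to the decision boundary,
    here the distance to the nearest opposite-class sample."""
    scores = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        d = min(math.dist(xi, xj) for xj, yj in zip(X, y) if yj != yi)
        scores.append((d, i))
    return [i for _, i in sorted(scores)]

def nn_predict(protos_X, protos_y, x):
    """1-NN prediction from the current prototype set."""
    j = min(range(len(protos_X)), key=lambda k: math.dist(protos_X[k], x))
    return protos_y[j]

def select_prototypes(X, y, target_acc=1.0):
    """Add boundary-near samples as prototypes until the 1-NN training
    accuracy reaches target_acc."""
    protos = []
    for i in boundary_order(X, y):
        protos.append(i)
        pX = [X[k] for k in protos]
        py = [y[k] for k in protos]
        acc = sum(nn_predict(pX, py, x) == t for x, t in zip(X, y)) / len(X)
        if acc >= target_acc:
            break
    return protos
```

    Ordering candidates by boundary nearness is what allows the loop to stop with fewer prototypes than the full training set.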

    Development of Neurofuzzy Architectures for Electricity Price Forecasting

    In the 20th century, many countries liberalized their electricity markets. This liberalization has exposed generation companies, as well as wholesale buyers, to considerably greater risk than under the old centralized framework. In this setting, electricity price prediction has become crucial to every market player's decision-making process and strategic planning. In this study, a prototype asymmetric-based neuro-fuzzy network (AGFINN) architecture has been implemented for short-term electricity price forecasting for the ISO New England market. The AGFINN framework has been designed with two different defuzzification schemes. Fuzzy clustering has been explored as an initial step for defining the fuzzy rules, while an asymmetric Gaussian membership function has been utilized in the fuzzification part of the model. Results for the minimum and maximum electricity prices for ISO New England emphasize the superiority of the proposed model over well-established learning-based models.
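    The fuzzification building block mentioned above, an asymmetric Gaussian membership function, uses a different spread on each side of its centre. A minimal sketch (the clustering and defuzzification stages of AGFINN are omitted, and the parameter names are illustrative):

```python
import math

def asym_gauss(x, c, sigma_left, sigma_right):
    """Asymmetric Gaussian membership function: spread sigma_left applies
    below the centre c and sigma_right above it, so the membership curve
    can be skewed to fit asymmetric data clusters."""
    s = sigma_left if x < c else sigma_right
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))
```

    The asymmetry lets one fuzzy set cover, for example, a price cluster with a sharp lower edge and a long upper tail, which a symmetric Gaussian cannot.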

    Time-varying model identification for time-frequency feature extraction from EEG data

    A novel modelling scheme that can be used to estimate and track time-varying properties of nonstationary signals is investigated. This scheme is based on a class of time-varying AutoRegressive with eXogenous input (ARX) models in which the associated time-varying parameters are represented by multi-wavelet basis functions. The orthogonal least squares (OLS) algorithm is then applied to refine the parameter estimates of the time-varying ARX model. The main feature of the multi-wavelet approach is that it enables smooth trends to be tracked while also capturing sharp changes in the time-varying process parameters. Simulation studies and applications to real EEG data show that the proposed algorithm can provide important transient information on the inherent dynamics of nonstationary processes.
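    The key reduction can be sketched as follows: expanding each time-varying parameter on a set of basis functions turns the time-varying ARX model into an ordinary least-squares problem in the expansion coefficients. A generic caller-supplied basis stands in here for the paper's multi-wavelet basis, and the first-order model form is an illustrative assumption.

```python
import numpy as np

def fit_tvarx(y, u, basis):
    """Fit y_t = a(t) y_{t-1} + b(t) u_t, with a(t) and b(t) expanded on
    the given basis functions of normalized time s = t/T:
    a(t) = sum_j alpha_j B_j(s).  The expansion makes the problem linear
    in the coefficients, so ordinary least squares solves it."""
    T = len(y)
    rows, targets = [], []
    for t in range(1, T):
        B = [f(t / T) for f in basis]
        rows.append([y[t - 1] * b for b in B] + [u[t] * b for b in B])
        targets.append(y[t])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    m = len(basis)
    a_hat = lambda t: sum(c * f(t / T) for c, f in zip(coef[:m], basis))
    b_hat = lambda t: sum(c * f(t / T) for c, f in zip(coef[m:], basis))
    return a_hat, b_hat
```

    With a multi-wavelet basis, smooth and sharp parameter variations can both be represented; the OLS refinement step of the paper would then prune insignificant coefficients.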

    Feedback control by online learning an inverse model

    A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. This framework uses two learning modules, one for creating an inverse model and the other for actually controlling the plant. Except for their inputs, they are identical. The inverse model learns from the exploration performed by the not-yet-fully-trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller, which can be applied to a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework to several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
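    The interplay between the two modules can be sketched on a toy scalar linear plant. The inverse model learns online, from transitions generated by the exploring controller, to predict which action maps the current state to a desired next state; the controller then reuses the same model with the reference as its target input. The plant, the LMS update, and all constants are illustrative assumptions, not the paper's setup.

```python
import random

def inverse_model_control(a=0.8, b=0.5, x_ref=1.0, steps=2000, lr=0.2, seed=1):
    """Online inverse-model control of the plant x_next = a*x + b*u.
    The inverse model u = w1*x + w2*x_next is trained by LMS on observed
    transitions; the controller is the same model with x_next := x_ref.
    Exploration noise drives learning while the model is still poor."""
    rng = random.Random(seed)
    w1 = w2 = 0.0
    x = 0.0
    for _ in range(steps):
        u = w1 * x + w2 * x_ref + rng.gauss(0, 0.3)  # controller + exploration
        x_next = a * x + b * u                        # plant response
        # train the inverse model on the transition (x, x_next) -> u
        err = u - (w1 * x + w2 * x_next)
        w1 += lr * err * x
        w2 += lr * err * x_next
        x = x_next
    return x, (w1, w2)
```

    For this plant the exact inverse is u = -(a/b) x + (1/b) x_next, so the learned weights should approach (-1.6, 2.0) and the closed loop should settle near the reference.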

    Extreme Learning Machine Based Non-Iterative and Iterative Nonlinearity Mitigation for LED Communications

    This work concerns receiver design for light-emitting diode (LED) communications, where LED nonlinearity can severely degrade communication performance. We propose extreme learning machine (ELM) based non-iterative and iterative receivers to effectively handle the LED nonlinearity and memory effects. For the iterative receiver design, we also develop a data-aided receiver, where data is used as a virtual training sequence in ELM training. It is shown that the ELM-based receivers significantly outperform conventional polynomial-based receivers; iterative receivers can achieve a huge performance gain compared to non-iterative receivers; and the data-aided receiver can reduce training overhead considerably. This work can also be extended to radio frequency communications, e.g., to deal with the nonlinearity of power amplifiers.
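    The ELM building block itself is simple: random, fixed input weights and a nonlinear hidden layer, with only the linear readout trained by least squares. A minimal memoryless sketch of using it to invert a nonlinear channel follows; the tanh channel model, 4-PAM levels, and network size are illustrative assumptions, and the paper's memory effects and iterative data-aided scheme are omitted.

```python
import numpy as np

def elm_train(X, T, hidden=40, seed=0):
    """Extreme learning machine: random fixed input weights and biases,
    tanh hidden layer; only the linear readout beta is trained,
    by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, T, rcond=None) # least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

    Trained on a pilot sequence of transmitted symbols and their nonlinearly distorted observations, the ELM learns the inverse mapping, after which received samples can be equalized and sliced to the nearest constellation point.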