
    Training Echo State Networks with Regularization through Dimensionality Reduction

    In this paper we introduce a new framework for training an Echo State Network to predict real-valued time series. The method consists of projecting the output of the network's internal layer onto a lower-dimensional space before training the output layer to learn the target task. Notably, we enforce a regularization constraint that leads to better generalization capabilities. We evaluate the performance of our approach on several benchmark tests, using different techniques to train the readout of the network, and achieve superior predictive performance with the proposed framework. Finally, we provide insight into the effectiveness of the implemented mechanism through a visualization of the trajectory in phase space, relying on the methodologies of nonlinear time-series analysis. By applying our method to well-known chaotic systems, we provide evidence that the lower-dimensional embedding retains the dynamical properties of the underlying system better than the full-dimensional internal states of the network.
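    A minimal sketch of the general idea, not the paper's exact procedure: reservoir states are collected while the network is driven by the input, projected onto a handful of principal components (one possible low-dimensional projection), and a regularized ridge readout is fit on the reduced states. The reservoir construction, all hyperparameters, and the toy series below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def run_reservoir(u, n_reservoir=300, spectral_radius=0.9, seed=0):
    """Drive a randomly initialized reservoir with input u and collect its states."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_reservoir)
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # heuristic echo-state scaling
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)                                      # shape (T, n_reservoir)

# One-step-ahead prediction of a toy scalar series: reduce the reservoir states
# before fitting the readout (dimensions chosen only for illustration).
y = np.sin(0.3 * np.arange(2000)) + 0.05 * np.random.default_rng(1).standard_normal(2000)
states = run_reservoir(y[:-1])
targets = y[1:]

states_low = PCA(n_components=20).fit_transform(states)   # dimensionality reduction
readout = Ridge(alpha=1e-3).fit(states_low, targets)      # regularized readout
pred = readout.predict(states_low)
```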

    Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization

    Principal component analysis (PCA) is widely used for dimensionality reduction, with well-documented merits in various applications involving high-dimensional data, including computer vision, preference measurement, and bioinformatics. In this context, the fresh look advocated here combines the benefits of variable selection and compressive sampling to robustify PCA against outliers. A least-trimmed-squares estimator of a low-rank bilinear factor analysis model is shown to be closely related to the estimator obtained from an $\ell_0$-(pseudo)norm-regularized criterion that encourages sparsity in a matrix explicitly modeling the outliers. This connection suggests robust PCA schemes based on convex relaxation, which lead naturally to a family of robust estimators encompassing Huber's optimal M-class as a special case. Outliers are identified by tuning a regularization parameter, which amounts to controlling the sparsity of the outlier matrix along the whole robustification path of (group) least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its neat ties to robust statistics, the developed outlier-aware PCA framework is versatile enough to accommodate novel and scalable algorithms that: i) track the low-rank signal subspace robustly as new data are acquired in real time; and ii) determine principal components robustly in (possibly) infinite-dimensional feature spaces. Synthetic and real data tests corroborate the effectiveness of the proposed robust PCA schemes when used to identify aberrant responses in personality assessment surveys, unveil communities in social networks, and detect intruders in video surveillance data.
    Comment: 30 pages, submitted to IEEE Transactions on Signal Processing.
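    To make the bilinear-plus-outliers idea concrete, the sketch below alternates a rank-r fit with entrywise soft thresholding of the residual, i.e. it minimizes $\|X - L - O\|_F^2 + \lambda \|O\|_1$ subject to $\mathrm{rank}(L) \le r$. This is only one simple relaxation-style variant; the paper's estimators, group-Lasso robustification path, and online/kernelized algorithms are not reproduced, and the toy data and the value of $\lambda$ are assumptions.

```python
import numpy as np

def soft_threshold(A, tau):
    """Entrywise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def outlier_aware_pca(X, rank, lam, n_iter=50):
    """Alternate a best rank-r fit with a sparse outlier update for
    min ||X - L - O||_F^2 + lam * ||O||_1  s.t.  rank(L) <= rank."""
    O = np.zeros_like(X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X - O, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # best rank-r approximation
        O = soft_threshold(X - L, lam / 2.0)          # prox step on the outlier matrix
    return L, O

# Toy example: low-rank data corrupted by a few gross outliers.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 20))
rows, cols = rng.integers(0, 100, 10), rng.integers(0, 20, 10)
X[rows, cols] += 25.0
L_hat, O_hat = outlier_aware_pca(X, rank=3, lam=5.0)
print("nonzero outlier entries identified:", np.count_nonzero(O_hat))
```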

    Noise Effects on a Proposed Algorithm for Signal Reconstruction and Bandwidth Optimization

    The development of wireless technology in recent years has increased the demand for channel resources within a limited spectrum. The system's performance can be improved through bandwidth optimization, as the spectrum is a scarce resource. To reconstruct a signal given incomplete knowledge of the original, signal reconstruction algorithms are needed. In this paper, we propose a new scheme for reducing the effect of additive white Gaussian noise (AWGN) using a noise reject filter (NRF) on a previously discussed algorithm for baseband signal transmission and reconstruction, which can reconstruct most of the signal's energy without needing to transmit most of the signal's concentrated power, as conventional methods do, thus achieving bandwidth optimization. The proposed noise-reduction scheme was tested on a single pulse and on pulse streams at different rates (2, 4, 6, and 8 Mbps), showed good reconstruction performance in terms of the normalized mean squared error (NMSE), and achieved an average enhancement of around 48%. The proposed schemes for signal reconstruction and noise reduction can be applied to different applications, such as ultra-wideband (UWB) communications, radio frequency identification (RFID) systems, mobile communication networks, and radar systems.
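    The NRF design itself is not described in this abstract, so the sketch below only illustrates the evaluation setup: a square-pulse stream is corrupted by AWGN at a chosen SNR, a low-pass Butterworth filter stands in for the noise reject filter, and reconstruction quality is scored with the NMSE. The sampling rate, bit rate, SNR, and filter order are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def add_awgn(x, snr_db, rng):
    """Add white Gaussian noise at a prescribed signal-to-noise ratio (dB)."""
    p_noise = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), x.shape)

def nmse(x_ref, x_hat):
    """Normalized mean squared error between reference and reconstruction."""
    return np.sum((x_ref - x_hat) ** 2) / np.sum(x_ref ** 2)

rng = np.random.default_rng(0)
fs = 100e6                                             # sampling rate, Hz (illustrative)
t = np.arange(0, 5e-6, 1 / fs)
pulses = (np.mod(t, 0.5e-6) < 0.25e-6).astype(float)   # 2 Mbps-style pulse stream

noisy = add_awgn(pulses, snr_db=10, rng=rng)
b, a = butter(4, 0.2)                                  # stand-in noise reject filter (low-pass)
filtered = filtfilt(b, a, noisy)
print(f"NMSE before filtering: {nmse(pulses, noisy):.3f}")
print(f"NMSE after filtering:  {nmse(pulses, filtered):.3f}")
```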

    On the Use of KPCA to Extract Artifacts in One-Dimensional Biomedical Signals

    Kernel principal component analysis (KPCA) is a nonlinear projective technique that can be applied to decompose multi-dimensional signals, extract informative features, and reduce noise contributions. In this work we extend KPCA to extract and remove artifact-related contributions as well as noise from one-dimensional signal recordings. We introduce an embedding step that transforms the one-dimensional signal into a multi-dimensional vector, which is then decomposed in feature space to extract artifact-related contaminations. We further address the preimage problem and propose an initialization procedure for the fixed-point algorithm that renders it more efficient. Finally, we apply KPCA to extract dominant electrooculogram (EOG) artifacts contaminating electroencephalogram (EEG) recordings in a frontal channel.
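    A rough sketch of the embed-decompose-subtract pipeline, assuming a time-delay embedding and scikit-learn's KernelPCA. The learned inverse transform used here is only an approximation of the preimage step; the paper's fixed-point preimage algorithm and its initialization are not reproduced, and the synthetic EEG/EOG signals and parameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def embed(x, dim):
    """Time-delay embedding: map a 1-D signal to overlapping dim-dimensional vectors."""
    return np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])

def reconstruct(vectors, dim, n):
    """Average the overlapping embedded vectors back into a 1-D signal."""
    out, count = np.zeros(n), np.zeros(n)
    for i, v in enumerate(vectors):
        out[i:i + dim] += v
        count[i:i + dim] += 1
    return out / count

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 1000)
eeg = 0.3 * rng.standard_normal(1000)               # stand-in EEG background activity
artifact = 2.0 * np.exp(-((t - 2.0) / 0.2) ** 2)    # slow, high-amplitude EOG-like blink
signal = eeg + artifact

dim = 30
X = embed(signal, dim)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1, fit_inverse_transform=True)
Z = kpca.fit_transform(X)
artifact_vectors = kpca.inverse_transform(Z)        # approximate preimages of the projections
artifact_est = reconstruct(artifact_vectors, dim, len(signal))
cleaned = signal - artifact_est                     # remove the dominant artifact contribution
```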

    Inverse filtering and principal component analysis techniques for speech dereverberation

    In this work, we present a single-channel approach for early and late reverberation suppression. The approach can be decomposed into two stages. The first stage employs an inverse filter to increase the signal-to-reverberant energy ratio. The second stage uses the kernel PCA algorithm to enhance the resulting dereverberated signal by extracting the main nonlinear features from the speech signal after inverse filtering. Our approach appears to be effective mainly in far-field conditions and in highly reverberant environments.
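    A sketch of the first stage only, assuming the room impulse response is available so a least-squares inverse filter can be computed directly; in the paper's single-channel setting the inverse filter would have to be estimated from the observed signal, and the kernel PCA enhancement stage is omitted here. The toy impulse response, filter length, and delay are assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def ls_inverse_filter(h, length, delay):
    """Least-squares FIR inverse of h: find g such that h * g approximates a delayed impulse."""
    n = length + len(h) - 1
    H = toeplitz(np.r_[h, np.zeros(length - 1)], np.zeros(length))  # convolution matrix
    d = np.zeros(n)
    d[delay] = 1.0                                                   # target: delayed delta
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return g

rng = np.random.default_rng(0)
decay = np.exp(-np.arange(199) / 40)
h = np.r_[1.0, 0.6 * rng.standard_normal(199) * decay]   # toy room impulse response
clean = rng.standard_normal(4000)                        # stand-in speech-like excitation
reverberant = lfilter(h, [1.0], clean)

g = ls_inverse_filter(h, length=400, delay=50)
dereverbed = lfilter(g, [1.0], reverberant)              # stage 1: inverse filtering
# Stage 2 (not shown): kernel PCA feature extraction on the dereverberated signal.
```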

    Feature Reduction Based on Sum-of-SNR (SOSNR) Optimization

    Dimensionality reduction plays an important role in machine learning techniques. In classification, data transformation aims to reduce the number of feature dimensions while enhancing class separability. To this end, we propose a new classifier-independent criterion called 'Sum-of-Signal-to-Noise-Ratio' (SoSNR). A framework for maximizing this criterion is presented, and three types of algorithms, based respectively on (1) gradient, (2) deflation, and (3) sparsity, are proposed. The techniques are evaluated on standard UCI databases and compared with other related methods. Results show trade-offs between computational complexity and classification accuracy among the different approaches.
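    How the gradient-based variant might look under one plausible reading of the criterion: the per-dimension SNR is taken as the squared difference of projected class means over the pooled projected variance, summed across the reduced dimensions, and the projection is updated by numerical gradient ascent with re-orthonormalization. The SNR definition, the optimization details, and the toy data are assumptions; the paper's exact criterion and its deflation and sparsity variants are not reproduced.

```python
import numpy as np

def sosnr(W, X0, X1):
    """Sum over projected dimensions of (mean difference)^2 / pooled variance
    for a two-class problem (one illustrative per-dimension SNR)."""
    P0, P1 = X0 @ W, X1 @ W
    num = (P0.mean(axis=0) - P1.mean(axis=0)) ** 2
    den = P0.var(axis=0) + P1.var(axis=0) + 1e-12
    return np.sum(num / den)

def gradient_ascent_sosnr(X0, X1, k, steps=300, lr=1e-2, eps=1e-5):
    """Maximize the SoSNR-style criterion with a numerical gradient,
    re-orthonormalizing the projection after every step (a sketch only)."""
    d = X0.shape[1]
    rng = np.random.default_rng(0)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(steps):
        base = sosnr(W, X0, X1)
        grad = np.zeros_like(W)
        for i in range(d):
            for j in range(k):
                Wp = W.copy()
                Wp[i, j] += eps
                grad[i, j] = (sosnr(Wp, X0, X1) - base) / eps
        W, _ = np.linalg.qr(W + lr * grad)       # keep the projection columns orthonormal
    return W

# Toy two-class data in 10 dimensions, reduced to 2 discriminative features.
rng = np.random.default_rng(1)
X0 = rng.standard_normal((200, 10))
X1 = rng.standard_normal((200, 10)) + np.r_[2.0, 1.5, np.zeros(8)]
W = gradient_ascent_sosnr(X0, X1, k=2)
```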