
    Fourth Moments and Independent Component Analysis

    In independent component analysis it is assumed that the components of the observed random vector are linear combinations of latent independent random variables, and the aim is then to find an estimate for a transformation matrix back to these independent components. In the engineering literature, there are several traditional estimation procedures based on the use of fourth moments, such as FOBI (fourth order blind identification), JADE (joint approximate diagonalization of eigenmatrices), and FastICA, but the statistical properties of these estimates are not well known. In this paper various independent component functionals based on the fourth moments are discussed in detail, starting with the corresponding optimization problems, deriving the estimating equations and estimation algorithms, and finding asymptotic statistical properties of the estimates. Comparisons of the asymptotic variances of the estimates in wide independent component models show that in most cases JADE and the symmetric version of FastICA perform better than their competitors. Published at http://dx.doi.org/10.1214/15-STS520 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
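    The FOBI procedure mentioned above is simple enough to sketch in a few lines: whiten the data, then eigendecompose a fourth-moment scatter matrix. The following NumPy sketch (toy data, illustrative names, not the paper's implementation) separates two mixed sources with distinct kurtoses, which is the condition FOBI needs:

```python
import numpy as np

def fobi(X):
    """FOBI sketch: whiten, then eigendecompose the fourth-moment matrix.

    X : (n, p) array of observations.
    Returns the (p, p) unmixing matrix W and the estimated sources.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whitening matrix Cov^{-1/2} via symmetric eigendecomposition
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    cov_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = Xc @ cov_isqrt
    # Fourth-moment scatter B = E[||y||^2 y y^T]
    B = (Y * (Y ** 2).sum(axis=1, keepdims=True)).T @ Y / n
    _, U = np.linalg.eigh(B)
    W = U.T @ cov_isqrt
    return W, Xc @ W.T

rng = np.random.default_rng(0)
n = 20000
# Independent sources with distinct kurtoses (sub- and super-Gaussian)
S = np.column_stack([rng.uniform(-1, 1, n),
                     rng.laplace(size=n)])
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # mixing matrix
X = S @ A.T
W, S_hat = fobi(X)
# If separation works, the gain matrix W @ A is close to a scaled permutation
G = W @ A
G = G / np.abs(G).max(axis=1, keepdims=True)
print(np.round(G, 2))
```

    After row-normalization each row of the gain matrix should have a single dominant entry, indicating that each estimated component picks out one latent source up to sign and scale.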

    Stationary subspace analysis based on second-order statistics

    In stationary subspace analysis (SSA) one assumes that the observable p-variate time series is a linear mixture of a k-variate nonstationary time series and a (p-k)-variate stationary time series. The aim is then to estimate the unmixing matrix which transforms the observed multivariate time series into stationary and nonstationary components. In the classical approach multivariate data are projected onto stationary and nonstationary subspaces by minimizing a Kullback-Leibler divergence between Gaussian distributions, and the method only detects nonstationarities in the first two moments. In this paper we consider SSA in a more general multivariate time series setting and propose SSA methods which are able to detect nonstationarities in mean, variance and autocorrelation, or in all of them. Simulation studies illustrate the performance of the proposed methods, and it is shown that especially the method that detects all three types of nonstationarities performs well in various time series settings. The paper is concluded with an illustrative example.
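    The mean-nonstationarity part of the idea can be illustrated with a toy NumPy sketch: whiten the series, split it into intervals, and eigendecompose the scatter of the interval means, so that directions with large eigenvalues are flagged as mean-nonstationary. This is only a sketch of the underlying principle, not the paper's estimator (which also handles variance and autocorrelation):

```python
import numpy as np

def ssa_mean(X, n_intervals=10):
    """Toy SSA-style search for directions that are nonstationary in mean.

    Returns eigenvalues (ascending) of the interval-mean scatter and the
    corresponding unmixing matrix W, rows ordered stationary -> nonstationary.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = Xc @ white
    # Scatter matrix of the interval means in the whitened space
    means = np.array([blk.mean(axis=0) for blk in np.array_split(Y, n_intervals)])
    M = means.T @ means / n_intervals
    lam, U = np.linalg.eigh(M)
    W = U.T @ white
    return lam, W

rng = np.random.default_rng(1)
n = 5000
stationary = rng.normal(size=n)                    # white noise
drift = 2 * np.sin(np.linspace(0, 6 * np.pi, n))   # slowly moving mean
nonstationary = drift + rng.normal(size=n)
S = np.column_stack([stationary, nonstationary])
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = S @ A.T
lam, W = ssa_mean(X)
rec = X @ W[-1]                                    # most nonstationary direction
corr = np.corrcoef(rec, nonstationary)[0, 1]
print(np.round(lam, 3), round(abs(corr), 2))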

    Digital natives or digital immigrants? The digital readiness of Finnish higher education students

    The article examines the digital study readiness of Finnish higher education students on the basis of data collected in the Eurostudent VIII research project. The analysis of the survey data aims to map students' digital study readiness and its connection to study progress and study motivation, and examines the adequacy of study guidance and support. The article also discusses the results in the light of earlier literature and research. The survey was carried out in Finland in spring 2022 as an online questionnaire. The sample comprised about 25,000 higher education students; the data used in this work consist of the responses of 6,837 students. Both descriptive statistical methods and statistical modelling are used in the analysis. Based on the results, students over 40 years of age, women, and international degree students were at a greater risk of experiencing their digital study readiness as insufficient. Good study conditions reduced this risk, with the amount of remote and contact teaching emerging as a significant factor. Students were least satisfied with online study counselling. The pandemic had the greatest negative effect on the time-to-completion of full-time students at universities of applied sciences, and it affected study motivation and the quality of teaching most negatively. Regardless of degree type and mode of delivery, students wished for a reduction in the amount of remote teaching.

    Dimension Reduction for Time Series in a Blind Source Separation Context Using R

    Multivariate time series observations are increasingly common in multiple fields of science, but the complex dependencies of such data often translate into intractable models with a large number of parameters. An alternative is given by first reducing the dimension of the series and then modelling the resulting uncorrelated signals univariately, avoiding the need for any covariance parameters. A popular and effective framework for this is blind source separation. In this paper we review the dimension reduction tools for time series available in the R package tsBSS. These include methods for estimating the signal dimension of second-order stationary time series, dimension reduction techniques for stochastic volatility models, and supervised dimension reduction tools for time series regression. Several examples are provided to illustrate the functionality of the package.

    2022 International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME)

    Many modern multivariate time series datasets contain a large amount of noise, and the first step of the data analysis is to separate the noise channels from the signals of interest. A crucial part of this dimension reduction is determining the number of signals. In this paper we approach this problem by considering a noisy latent variable time series model which comprises many popular blind source separation models. We propose a general framework for the estimation of the signal dimension that is based on testing for sub-sphericity, and give examples of different tests suitable for time series settings. In the inference we rely on bootstrap null distributions. Several simulation studies are used to demonstrate the performance of the tests in different time series settings.
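    The sub-sphericity idea can be sketched as follows: under the noisy model the trailing p-k covariance eigenvalues are all equal to the noise variance, so for each candidate signal dimension k one measures how far the trailing eigenvalues are from being equal. The NumPy sketch below uses a naive plug-in statistic with an arbitrary cutoff; the paper instead calibrates such statistics with bootstrap null distributions:

```python
import numpy as np

def subsphericity_profile(X):
    """For each candidate signal dimension k, measure how 'spherical' the
    trailing p-k covariance eigenvalues are (0 = exactly equal)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    lam = np.linalg.eigvalsh(Xc.T @ Xc / n)[::-1]   # descending eigenvalues
    stats = []
    for k in range(p):
        tail = lam[k:]
        stats.append(tail.var() / tail.mean() ** 2)
    return np.array(stats)

rng = np.random.default_rng(2)
n, p, k_true = 2000, 6, 2
# Two strong latent signals plus isotropic noise on all six channels
Z = rng.normal(size=(n, k_true)) * np.array([3.0, 2.0])
A = rng.normal(size=(p, k_true))
X = Z @ A.T + rng.normal(size=(n, p))
stats = subsphericity_profile(X)
# First k whose trailing eigenvalues look spherical (cutoff is ad hoc here)
k_hat = int(np.argmax(stats < 0.02))
print(np.round(stats, 3), k_hat)
```

    The statistic drops sharply once all remaining eigenvalues belong to the noise floor, which is what a formal test (with a bootstrap null distribution instead of the fixed cutoff) would detect.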

    A review of second-order blind identification methods

    Second-order source separation (SOS) is a data analysis tool which can be used for revealing hidden structures in multivariate time series data or as a tool for dimension reduction. Such methods are nowadays increasingly important as more and more high-dimensional multivariate time series data are measured in numerous fields of applied science. Dimension reduction is crucial, as modeling such high-dimensional data with multivariate time series models is often impractical: the number of parameters describing dependencies between the component time series is usually too high. SOS methods have their roots in the signal processing literature, where they were first used to separate source signals from an observed signal mixture. The SOS model assumes that the observed time series (signals) is a linear mixture of latent time series (sources) with uncorrelated components. The methods make use of the second-order statistics, hence the name "second-order source separation." In this review, we discuss the classical SOS methods and their extensions to more complex settings. An example illustrates how SOS can be performed. This article is categorized under: Statistical Models > Time Series Models; Statistical and Graphical Methods of Data Analysis > Dimension Reduction; Data: Types and Structure > Time Series, Stochastic Processes, and Functional Data.
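    The simplest member of this family, AMUSE, can be sketched in a few lines: whiten the series, then eigendecompose a single symmetrized lagged autocovariance matrix (SOBI generalizes this by jointly diagonalizing several lags). The following NumPy sketch uses toy AR(1) sources with clearly different lag-1 autocorrelations, which is the condition AMUSE needs:

```python
import numpy as np

def amuse(X, lag=1):
    """AMUSE sketch: whiten, then eigendecompose one symmetrized
    lag-`lag` autocovariance matrix of the whitened series."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(cov)
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = Xc @ white
    R = Y[:-lag].T @ Y[lag:] / (n - lag)
    R = (R + R.T) / 2                      # symmetrize
    _, U = np.linalg.eigh(R)
    return U.T @ white                     # unmixing matrix

def ar1(phi, n, rng):
    # AR(1) source with lag-1 autocorrelation phi
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

rng = np.random.default_rng(3)
n = 10000
S = np.column_stack([ar1(0.9, n, rng), ar1(-0.5, n, rng)])
A = np.array([[1.0, 2.0], [2.0, 1.0]])
X = S @ A.T
W = amuse(X)
# Gain matrix close to a scaled permutation means the sources are recovered
G = W @ A
G = G / np.abs(G).max(axis=1, keepdims=True)
print(np.round(G, 2))
```

    Because the sources have distinct lag-1 autocorrelations (0.9 versus -0.5), the eigendecomposition of the lagged autocovariance is unique up to signs and order, and each row of the gain matrix has a single dominant entry.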

    On Independent Component Analysis with Stochastic Volatility Models

    Consider a multivariate time series where each component series is assumed to be a linear mixture of latent mutually independent stationary time series. Classical independent component analysis (ICA) tools, such as fastICA, are often used to extract the latent series, but they do not utilize any information on temporal dependence. Also, financial time series often have periods of low and high volatility, and in such settings second order source separation methods, such as SOBI, fail. We review here some classical methods used for time series with stochastic volatility, and suggest modifications of them by proposing a family of vSOBI estimators. These estimators use different nonlinearity functions to capture the nonlinear autocorrelation of the time series and extract the independent components. A simulation study shows that the proposed method outperforms the existing methods when the latent components follow GARCH and SV models. This paper is an invited extended version of the paper presented at the CDAM 2016 conference.
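    The nonlinear autocorrelation that vSOBI exploits is easy to see in simulation: a GARCH series has (near) zero linear autocorrelation but clearly positive autocorrelation in its squares, which is why linear second-order methods fail while nonlinearities such as squaring succeed. The NumPy sketch below illustrates only this feature, not the vSOBI estimator itself:

```python
import numpy as np

def acf_of_squares(x, lag=1):
    """Lag-`lag` autocorrelation of the squared series: near zero for an
    i.i.d. sequence, clearly positive under volatility clustering."""
    s = x ** 2 - (x ** 2).mean()
    return (s[:-lag] * s[lag:]).mean() / (s * s).mean()

def garch11(n, omega=0.05, alpha=0.2, beta=0.7, rng=None):
    # GARCH(1,1): sigma2_t = omega + alpha * x_{t-1}^2 + beta * sigma2_{t-1}
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(n)
    sigma2 = omega / (1 - alpha - beta)    # stationary variance as start value
    for t in range(n):
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * x[t] ** 2 + beta * sigma2
    return x

rng = np.random.default_rng(4)
n = 20000
a = acf_of_squares(garch11(n, rng=rng))     # volatility clustering
b = acf_of_squares(rng.standard_normal(n))  # i.i.d. benchmark
print(round(a, 3), round(b, 3))
```

    The GARCH series gives a clearly positive statistic while the i.i.d. benchmark stays near zero; vSOBI builds its separation criterion from cross-(auto)covariances of such nonlinear transforms.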