6 research outputs found

    Novel whitening approaches in functional settings

    Whitening is a critical normalization method that reparametrizes data to unit covariance in order to support subsequent statistical reduction. This article introduces the notion of whitening for random functions assumed to reside in a real separable Hilbert space. We compare the properties of different whitening transformations stemming from the factorization of a bounded precision operator under a particular geometrical structure. The practical performance of the estimators is shown in a simulation study, providing helpful insights into their optimization. Computational algorithms for the estimation of the proposed whitening transformations in terms of basis expansions of a functional data set are also provided.
    Funding: Ministry of Science and Innovation, Spain (MICINN); Instituto de Salud Carlos III; Spanish Government PID2020-113961GB-I00; Methusalem, Vlaamse regering (Flemish Government).
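
    The idea is easiest to see in the finite-dimensional case. The following NumPy sketch (not from the article, and only a multivariate analogue of its operator-based setting) shows two standard factorizations of the precision matrix, ZCA and PCA whitening, both of which map the data to approximately unit covariance.

        import numpy as np

        rng = np.random.default_rng(0)
        # toy data with a non-trivial covariance (illustrative only)
        X = rng.standard_normal((500, 3)) @ np.array([[2.0, 0.5, 0.0],
                                                      [0.0, 1.0, 0.3],
                                                      [0.0, 0.0, 0.7]])

        Xc = X - X.mean(axis=0)                  # center the data
        S = np.cov(Xc, rowvar=False)             # sample covariance
        evals, evecs = np.linalg.eigh(S)         # S = U diag(evals) U^T

        # two factorizations W of the precision matrix, i.e. W^T W = S^{-1}
        W_zca = evecs @ np.diag(evals ** -0.5) @ evecs.T   # ZCA: symmetric square root
        W_pca = np.diag(evals ** -0.5) @ evecs.T           # PCA: rotate, then rescale

        Z = Xc @ W_zca.T                         # whitened data, Cov(Z) is close to the identity
        print(np.round(np.cov(Z, rowvar=False), 2))

    Any orthogonal rotation of such a W is again a valid whitening matrix; this non-uniqueness is what motivates comparing transformations obtained under different geometrical structures.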

    Independent component analysis for non-standard data structures

    Independent component analysis is a classical multivariate tool used for estimating independent sources among collections of mixed signals. However, modern forms of data are typically too complex for the basic theory to handle adequately. In this thesis, extensions of independent component analysis to three cases of non-standard data structures are developed: noisy multivariate data, tensor-valued data and multivariate functional data. In each case we define the corresponding independent component model along with the related assumptions and implications. The proposed estimators are mostly based on the use of kurtosis and its analogues for the considered structures, resulting in functionals of rather unified form regardless of the type of the data. We prove the Fisher consistency of the estimators, and particular weight is given to their limiting distributions, which are used to compare the methods.
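
    As a rough finite-dimensional illustration of the kurtosis-based approach, the sketch below implements a FOBI-style estimator (whitening followed by an eigendecomposition of a fourth-moment matrix). It is only a sketch of the general idea; the thesis's noisy, tensor-valued and functional extensions and the limiting distributions are not reproduced, and the toy data are arbitrary.

        import numpy as np

        def fobi(X):
            """Estimate independent components of the (n, p) data matrix X, up to sign and order."""
            Xc = X - X.mean(axis=0)
            d, U = np.linalg.eigh(np.cov(Xc, rowvar=False))
            W = U @ np.diag(d ** -0.5) @ U.T                 # symmetric whitening matrix
            Z = Xc @ W.T                                     # whitened observations
            # fourth-moment (kurtosis) matrix E[||z||^2 z z^T]
            B = (Z * (Z ** 2).sum(axis=1, keepdims=True)).T @ Z / Z.shape[0]
            _, V = np.linalg.eigh(B)                         # its eigenbasis gives the final rotation
            return Z @ V

        # toy mixture of a uniform and a Laplace source (distinct kurtoses, as FOBI requires)
        rng = np.random.default_rng(1)
        S_true = np.column_stack([rng.uniform(-1, 1, 2000), rng.laplace(size=2000)])
        X = S_true @ np.array([[1.0, 0.6], [0.4, 1.0]]).T
        components = fobi(X)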

    A review of second-order blind identification methods

    Second-order source separation (SOS) is a data analysis tool which can be used for revealing hidden structures in multivariate time series data or as a tool for dimension reduction. Such methods are nowadays increasingly important, as more and more high-dimensional multivariate time series data are measured in numerous fields of applied science. Dimension reduction is crucial, as modeling such high-dimensional data with multivariate time series models is often impractical: the number of parameters describing dependencies between the component time series is usually too high. SOS methods have their roots in the signal processing literature, where they were first used to separate source signals from an observed signal mixture. The SOS model assumes that the observed time series (signals) are linear mixtures of latent time series (sources) with uncorrelated components. The methods make use of second-order statistics, hence the name "second-order source separation." In this review, we discuss the classical SOS methods and their extensions to more complex settings. An example illustrates how SOS can be performed.
    This article is categorized under:
    Statistical Models > Time Series Models
    Statistical and Graphical Methods of Data Analysis > Dimension Reduction
    Data: Types and Structure > Time Series, Stochastic Processes, and Functional Data
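
    As a minimal illustration of the SOS recipe under this model, the AMUSE-type sketch below whitens the observed series and then diagonalizes a single symmetrized lag autocovariance; SOBI-type methods jointly diagonalize several lags instead. It is not code from the review, and the toy AR(1) sources are arbitrary.

        import numpy as np

        def amuse(X, lag=1):
            """X: (T, p) multivariate time series; returns estimated source series."""
            Xc = X - X.mean(axis=0)
            d, U = np.linalg.eigh(np.cov(Xc, rowvar=False))
            W = U @ np.diag(d ** -0.5) @ U.T                 # whitening matrix
            Z = Xc @ W.T
            S_lag = Z[:-lag].T @ Z[lag:] / (Z.shape[0] - lag)
            S_lag = (S_lag + S_lag.T) / 2                    # symmetrized lag autocovariance
            _, V = np.linalg.eigh(S_lag)                     # its eigenvectors give the rotation
            return Z @ V

        # toy example: two AR(1) sources with different autocorrelations, mixed linearly
        rng = np.random.default_rng(2)
        T = 3000
        s1, s2 = np.zeros(T), np.zeros(T)
        for t in range(1, T):
            s1[t] = 0.9 * s1[t - 1] + rng.standard_normal()
            s2[t] = 0.2 * s2[t - 1] + rng.standard_normal()
        X = np.column_stack([s1, s2]) @ np.array([[1.0, 0.5], [0.3, 1.0]]).T
        sources = amuse(X)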

    ICS for multivariate functional anomaly detection with applications to predictive maintenance and quality control

    Multivariate functional anomaly detection has received a large amount of attention recently. Accounting for both the time dimension and the correlations between variables is challenging due to the existence of different types of outliers and the dimension of the data. Most existing methods focus on a small number of variables. In the context of predictive maintenance and quality control, however, data sets often contain a large number of functional variables. Moreover, in fields with high reliability standards, detecting a small number of potential multivariate functional outliers with as few false positives as possible is crucial. In such a context, the adaptation of the Invariant Coordinate Selection (ICS) method from the multivariate to the multivariate functional case is of particular interest. Two extensions of ICS are proposed: point-wise and global. For both methods, the choice of the relevant components together with outlier identification and interpretation are discussed. A comparison is made on a predictive maintenance example from the avionics field and a quality control example from the microelectronics field. It appears that in such a context, point-wise and global ICS with a small number of selected components are complementary and can be recommended.
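
    For intuition, the sketch below shows plain multivariate ICS with the conventional Cov/Cov4 scatter pair; observations with extreme scores on the first (or last) invariant coordinates are candidate outliers. The paper's point-wise and global functional extensions, component selection rules and outlier cut-offs are not reproduced here.

        import numpy as np
        from scipy.linalg import eigh

        def ics_scores(X):
            """Return invariant coordinates (ordered by descending generalized eigenvalue) and the eigenvalues."""
            n, p = X.shape
            Xc = X - X.mean(axis=0)
            S1 = np.cov(Xc, rowvar=False)                    # first scatter: covariance
            S1_inv = np.linalg.inv(S1)
            d2 = np.einsum('ij,jk,ik->i', Xc, S1_inv, Xc)    # squared Mahalanobis distances
            S2 = (Xc * d2[:, None]).T @ Xc / (n * (p + 2))   # second scatter: Cov4
            rho, B = eigh(S2, S1)                            # generalized eigenproblem S2 b = rho S1 b
            order = np.argsort(rho)[::-1]
            return Xc @ B[:, order], rho[order]

        # small synthetic example with a few planted anomalies
        rng = np.random.default_rng(3)
        X = rng.standard_normal((300, 5))
        X[:5] += 6.0
        scores, rho = ics_scores(X)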

    Independent component analysis for multivariate functional data

    We extend two methods of independent component analysis, fourth order blind identification and joint approximate diagonalization of eigen-matrices, to vector-valued functional data. Multivariate functional data occur naturally and frequently in modern applications, and extending independent component analysis to this setting allows us to distill important information from this type of data, going a step further than functional principal component analysis. To allow the inversion of the covariance operator, we assume that the dependency between the component functions lies in a finite-dimensional subspace. In this subspace we define fourth cross-cumulant operators and use them to construct two novel, Fisher-consistent methods for solving the independent component problem for vector-valued functions. Both simulations and an application to a hand gesture data set show the usefulness and advantages of the proposed methods over functional principal component analysis. (Peer reviewed)
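
    The general recipe can be sketched for a single functional component as follows: project each observed curve onto a small fixed basis and run a multivariate kurtosis-based (FOBI-style) independent component analysis on the stacked coefficients. This is only a simplified stand-in for the paper's operator-level construction; the Fourier basis, its dimension and the simulated curves below are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 101)                           # evaluation grid
        basis = np.column_stack([np.ones_like(t),
                                 np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                                 np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])

        # toy curves: linear mixtures of sources with distinct kurtoses (needed by FOBI)
        n = 400
        sources = np.column_stack([rng.uniform(-1, 1, n), rng.laplace(size=n),
                                   rng.standard_normal(n), rng.exponential(size=n) - 1,
                                   rng.standard_normal(n) ** 3])
        A = rng.standard_normal((5, 5))
        curves = sources @ A.T @ basis.T                     # n observed functions on the grid

        # step 1: basis expansion (least-squares projection of each curve)
        C = np.linalg.lstsq(basis, curves.T, rcond=None)[0].T    # n x 5 coefficient vectors

        # step 2: FOBI on the coefficients
        Cc = C - C.mean(axis=0)
        d, U = np.linalg.eigh(np.cov(Cc, rowvar=False))
        Z = Cc @ (U @ np.diag(d ** -0.5) @ U.T).T            # whitened coefficients
        B = (Z * (Z ** 2).sum(axis=1, keepdims=True)).T @ Z / n
        _, V = np.linalg.eigh(B)
        ic_scores = Z @ V                                    # estimated independent component scores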