
    Predicting Fluid Intelligence of Children using T1-weighted MR Images and a StackNet

    In this work, we utilize T1-weighted MR images and a StackNet to predict fluid intelligence in adolescents. Our framework comprises feature extraction, feature normalization, feature denoising, feature selection, StackNet training, and fluid-intelligence prediction. The extracted features are the distributions of different brain tissues across brain parcellation regions. The proposed StackNet consists of three layers and 11 models; each layer uses the predictions from all previous layers, including the input layer. The proposed StackNet is evaluated on the public Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge 2019 benchmark, achieving a mean squared error of 82.42 on the combined training and validation set with 10-fold cross-validation and 94.25 on the test data. The source code is available on GitHub.
    Comment: 8 pages, 2 figures, 3 tables. Accepted by MICCAI ABCD-NP Challenge 2019; Added ND
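    The layer-wise stacking described above, where each layer's models see the original features plus the predictions of every earlier layer, can be sketched as follows. This is an illustrative toy using closed-form ridge regressors, not the authors' actual 11-model configuration, and a production stack would use out-of-fold predictions rather than in-sample ones to avoid leakage.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression with a bias column:
    # w = (X^T X + lam I)^(-1) X^T y
    Xb = np.hstack([X, np.ones((len(X), 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def ridge_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w

def stacknet_fit(X, y, layer_sizes=(3, 2, 1), lams=(0.1, 1.0, 10.0)):
    """Fit a toy StackNet: each layer's models are trained on the
    original features concatenated with all earlier predictions."""
    layers, feats = [], X
    for n_models in layer_sizes:
        ws = [ridge_fit(feats, y, lam) for lam in lams[:n_models]]
        preds = np.column_stack([ridge_predict(w, feats) for w in ws])
        layers.append(ws)
        feats = np.hstack([feats, preds])  # pass predictions forward
    return layers

def stacknet_predict(layers, X):
    feats = X
    for ws in layers:
        preds = np.column_stack([ridge_predict(w, feats) for w in ws])
        feats = np.hstack([feats, preds])
    return preds[:, -1]  # single model in the final layer
```

    The essential design choice is that later layers can correct systematic errors of earlier ones, because they see both the raw features and the earlier predictions.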

    Combining and Steganography of 3D Face Textures

    One of the serious issues in communication between people is hiding information from others, and the best way to do this is to deceive them. Since face images are nowadays mostly used in three-dimensional format, in this paper we apply steganography to 3D face images, making their detection by curious observers impossible. Because only the texture matters for face detection, we separate the texture from the shape matrices; to eliminate half of the extra information, steganography is applied only to the face texture, and any other shape can be used to reconstruct the 3D face. Moreover, we show how two 3D faces can be combined by using two textures. For a complete description of the process, 2D faces are first used as input for building 3D faces, and the 3D textures are then hidden within other images.
    Comment: 6 pages, 10 figures, 16 equations, 5 sections
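    The abstract does not fix a particular embedding scheme, but the core idea of hiding one image (a face texture) inside another can be sketched with plain bit-plane substitution; the parameter `n_bits` and the assumption of 8-bit grayscale arrays are illustrative choices, not the paper's method:

```python
import numpy as np

def hide_texture(cover, texture, n_bits=2):
    """Replace the n_bits lowest bits of the cover image with the
    n_bits highest bits of the texture (both uint8 arrays)."""
    cover_hi = (cover >> n_bits) << n_bits   # keep cover's high bits
    texture_hi = texture >> (8 - n_bits)     # texture's high bits
    return cover_hi | texture_hi

def recover_texture(stego, n_bits=2):
    # Extract the embedded bits and shift them back to high positions.
    return (stego & ((1 << n_bits) - 1)) << (8 - n_bits)
```

    With `n_bits=2` the stego image differs from the cover by at most 3 gray levels per pixel, while the recovered texture is a 2-bit quantization of the original.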

    Selecting the rank of truncated SVD by Maximum Approximation Capacity

    Truncated Singular Value Decomposition (SVD) calculates the closest rank-k approximation of a given input matrix. Selecting the appropriate rank k is a critical model-order choice in most applications of SVD. To obtain a principled cut-off criterion for the spectrum, we convert the underlying optimization problem into a noisy channel coding problem. The optimal approximation capacity of this channel controls the appropriate strength of regularization to suppress noise. In simulation experiments, this information-theoretic method for determining the optimal rank competes with state-of-the-art model selection techniques.
    Comment: 7 pages, 5 figures. Presented at the IEEE International Symposium on Information Theory (ISIT) 2011; the conference version has only 5 pages, while this version has an extended appendix
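    For reference, the rank-k truncation itself (the object whose rank k is being selected) is, by the Eckart-Young theorem, the closest rank-k matrix in the Frobenius norm and takes a few lines of NumPy:

```python
import numpy as np

def truncated_svd(A, k):
    """Closest rank-k approximation of A in the Frobenius norm,
    obtained by keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]
```

    The approximation error is the root sum of squares of the discarded singular values, which is why a principled cut-off on the spectrum translates directly into a choice of k.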

    Noise and nonlinearities in high-throughput data

    High-throughput data analyses are becoming common in biology, communications, economics and sociology. The vast amounts of data are usually represented in the form of matrices and can be considered as knowledge networks. Spectra-based approaches have proved useful in extracting hidden information within such networks and for estimating missing data, but these methods are essentially based on linear assumptions. The physical models of matching, when applicable, often suggest non-linear mechanisms that may sometimes be identified as noise. The use of non-linear models in data analysis, however, may require the introduction of many parameters, which lowers the statistical weight of the model. Depending on the quality of the data, a simpler linear analysis may be more convenient than more complex approaches. In this paper, we show how a simple non-parametric Bayesian model may be used to explore the role of non-linearities and noise in synthetic and experimental data sets.
    Comment: 12 pages, 3 figures

    Capture of manufacturing uncertainty in turbine blades through probabilistic techniques

    Efficient design of turbine blades is critical to the performance of an aircraft engine. An area of significant research interest is the capture of manufacturing uncertainty in the shapes of these turbine blades. The data available for estimating this manufacturing uncertainty inevitably contain the effects of measurement error/noise. In the present work, we propose the application of Principal Component Analysis (PCA) for denoising the measurement data and quantifying the underlying manufacturing uncertainty. Once the PCA is performed, we propose a dimensionality-reduction method that utilizes prior information on the variance of measurement error for different measurement types. Numerical studies indicate that approximately 82% of the variation in the measurements from their design values is accounted for by the manufacturing uncertainty, while the remaining 18% is filtered out as measurement error.
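    A minimal sketch of the variance split described above, assuming the measurement-noise variance is known a priori and that a principal component is attributed to manufacturing variation when its eigenvalue exceeds that noise variance; this simple thresholding rule is an illustrative stand-in, not necessarily the authors' exact procedure:

```python
import numpy as np

def pca_split_variance(measurements, noise_var):
    """Fraction of total deviation-from-design variance attributed
    to manufacturing: variance in principal components whose
    eigenvalue exceeds the known measurement-noise variance."""
    X = measurements - measurements.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending order
    manufacturing = eigvals[eigvals > noise_var].sum()
    return manufacturing / eigvals.sum()
```

    On data with a strong low-dimensional signal plus isotropic noise, this fraction approaches the signal's share of the total variance (the 82%/18% split reported above).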

    Exact Dimensionality Selection for Bayesian PCA

    We present a Bayesian model selection approach to estimate the intrinsic dimensionality of a high-dimensional dataset. To this end, we introduce a novel formulation of the probabilistic principal component analysis model based on a normal-gamma prior distribution. In this context, we exhibit a closed-form expression of the marginal likelihood which allows us to infer an optimal number of components. We also propose a heuristic based on the expected shape of the marginal likelihood curve in order to choose the hyperparameters. In non-asymptotic frameworks, we show on simulated data that this exact dimensionality selection approach is competitive with both Bayesian and frequentist state-of-the-art methods.

    Model order selection criteria: comparative study and applications

    A practical application of information theoretic criteria is presented in this paper. The AIC, MDL and MIBS criteria, based on eigenvalue decomposition of the signal correlation matrix, are investigated and used for online estimation of time-varying parameters of harmonic signals in power systems. [Polish abstract, translated:] The article presents criteria for, and a comparison of, methods of process model order reduction. Various criteria based on eigenvalue decomposition of the correlation matrix are presented and compared: AIC, MDL and MIBS. The comparison was performed on harmonic signals corresponding to a non-stationary system.
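    As an illustration of eigenvalue-based order selection of this kind, the classical Wax-Kailath MDL criterion picks the order k that minimizes a goodness-of-fit term (how far the tail eigenvalues are from being equal, via the ratio of their geometric to arithmetic mean) plus a complexity penalty; MIBS is not sketched here:

```python
import numpy as np

def mdl_order(eigvals, N):
    """Wax-Kailath MDL estimate of model order, given the descending
    eigenvalues of a sample correlation matrix built from N snapshots."""
    p = len(eigvals)
    scores = []
    for k in range(p):
        tail = eigvals[k:]
        gm = np.exp(np.mean(np.log(tail)))  # geometric mean of tail
        am = np.mean(tail)                  # arithmetic mean of tail
        scores.append(-N * (p - k) * np.log(gm / am)
                      + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))
```

    When the tail eigenvalues are nearly equal (pure noise), the first term is close to zero, so the penalty stops k from growing; large leading eigenvalues make small k expensive.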

    Bayesian dimensionality reduction with PCA using penalized semi-integrated likelihood

    We discuss the problem of estimating the number of principal components in Principal Components Analysis (PCA). Despite the importance of the problem and the multitude of solutions proposed in the literature, it comes as a surprise that there does not exist a coherent asymptotic framework which would justify different approaches depending on the actual size of the data set. In this paper we address this issue by presenting an approximate Bayesian approach based on the Laplace approximation and introducing a general method for building model selection criteria, called PEnalized SEmi-integrated Likelihood (PESEL). Our general framework encompasses a variety of existing approaches based on probabilistic models, such as the Bayesian Information Criterion for Probabilistic PCA (PPCA), and allows for the construction of new criteria depending on the size of the data set at hand. Specifically, we define PESEL when the number of variables substantially exceeds the number of observations. We also report results of extensive simulation studies and real-data analysis, which illustrate the good properties of our proposed criteria compared to state-of-the-art methods and very recent proposals. In particular, these simulations show that PESEL-based criteria can be quite robust against deviations from the probabilistic model assumptions. Selected PESEL-based criteria for estimating the number of principal components are implemented in the R package varclust, which is available on GitHub (https://github.com/psobczyk/varclust).
    Comment: 31 pages, 7 figures