252 research outputs found

    Neural activity with spatial and temporal correlations as a basis to simulate fMRI data

    In the development of data analysis techniques, simulation studies are gaining ever more interest. The greatest challenge in setting up a simulation study is to create realistic data. This is especially true for generating fMRI data, since there is no consensus about the biological and physical relationships underlying the BOLD signal. Most existing simulation studies start from empirically acquired resting data to obtain realistic noise and add known activity (e.g., Bianciardi et al., 2004). However, since the noise structure of such data is not under the researcher's control, these data are hard to use in simulation studies. Others use the Bloch equations to simulate fMRI data (e.g., Drobnjak et al., 2006). Although this yields realistic data, the process is very slow and involves many calculations that may be unnecessary in a simulation study. We propose a new basis for generating fMRI data, starting from a neural activation map in which the neural activity is correlated between different locations, both spatially and temporally. A biologically inspired model can then be used to simulate the BOLD response.
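The generative idea described above, spatially and temporally correlated neural activity passed through a hemodynamic model, can be sketched in a few lines. The sketch below is a minimal illustration in Python, not the authors' implementation; the grid size, the smoothing kernels used to induce the correlations, and the double-gamma HRF parameters are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gamma

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 20 x 20 slice, 100 scans, TR = 2 s.
nx, ny, n_scans, tr = 20, 20, 100, 2.0

# Neural activity: white noise smoothed with Gaussian kernels to induce
# spatial correlation (across the slice) and temporal correlation (across scans).
neural = rng.standard_normal((n_scans, nx, ny))
neural = gaussian_filter(neural, sigma=(0.0, 2.0, 2.0))  # spatial
neural = gaussian_filter(neural, sigma=(1.5, 0.0, 0.0))  # temporal

# A double-gamma HRF sampled at the TR (SPM-style default parameters).
t = np.arange(0.0, 30.0, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Convolve every voxel time series with the HRF to obtain a BOLD-like signal.
bold = np.apply_along_axis(lambda ts: np.convolve(ts, hrf)[:n_scans], 0, neural)
```

A full simulator would add scanner noise and drift on top of this signal; the point here is only the ordering of the two steps: correlated neural activity first, hemodynamics second.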

    neuRosim: an R package for simulation of fMRI magnitude data with realistic noise

    Statistical analysis techniques for highly complex structured data such as fMRI data should be thoroughly validated. In this process, knowing the ground truth is essential. Unfortunately, establishing the ground truth of fMRI data is only possible with highly invasive procedures (e.g., intracranial EEG). Therefore, generating the data artificially is often the only viable solution. However, there is currently no consensus among researchers on how to simulate fMRI data. Research groups develop their own methods and use only in-house software routines. A general validation of these methods is lacking, probably due to the nonexistence of well-documented and freely available software.

    Secondary generalisation in categorisation: an exemplar-based account

    The parallel rule activation and rule synthesis (PRAS) model is a computational model for generalisation in category learning, proposed by Vandierendonck (1995). An important concept underlying the PRAS model is the distinction between primary and secondary generalisation. In Vandierendonck (1995), an empirical study is reported that provides support for the concept of secondary generalisation. In this paper, we re-analyse the data reported by Vandierendonck (1995) by fitting three different variants of the Generalised Context Model (GCM), none of which relies on secondary generalisation. Although some of the GCM variants outperformed the PRAS model in terms of global fit, they all had difficulty providing a qualitatively good fit of a specific critical pattern.
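All GCM variants build on the same core computation: similarity to stored exemplars decays exponentially with distance, and category probabilities follow from summed similarities. A minimal sketch of that computation, not the fitted models from the paper; the stimuli and parameter values below are hypothetical:

```python
import numpy as np

def gcm_probability(probe, exemplars, labels, c=1.0, r=1.0):
    """Category probabilities for a probe under a basic GCM.

    exemplars: (n, k) array of stored exemplars; labels: length-n category labels.
    Distance is a Minkowski metric (r=1: city-block); similarity decays
    exponentially with scaled distance (sensitivity c).
    """
    d = np.sum(np.abs(exemplars - probe) ** r, axis=1) ** (1.0 / r)
    sim = np.exp(-c * d)  # exponential similarity gradient
    cats = np.unique(labels)
    summed = np.array([sim[labels == k].sum() for k in cats])
    return dict(zip(cats, summed / summed.sum()))

# Toy example: two 2-D exemplars per category (hypothetical stimuli).
ex = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
lab = np.array([0, 0, 1, 1])
p = gcm_probability(np.array([0.5, 0.5]), ex, lab)
```

A probe near the first cluster receives most of the probability mass for category 0, which is the exemplar-based "primary" generalisation the paper contrasts with rule-based secondary generalisation.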

    A review of R-packages for random-intercept probit regression in small clusters

    Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random effects distributions, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R packages for random-intercept probit regression relying on the Laplace approximation, adaptive Gaussian quadrature (AGQ), Penalized Quasi-Likelihood (PQL), an MCMC implementation, and integrated nested Laplace approximation within the GLMM framework, and on robust diagonally weighted least squares estimation within the SEM framework. In terms of bias for the fixed and random effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
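The data-generating model being estimated here, a random-intercept probit for binary outcomes in small clusters, is compact enough to write down directly. The following Python sketch simulates such data; the number of clusters, the effect sizes, and the random-intercept standard deviation are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical design: 200 clusters of size two, one binary covariate.
n_clusters, cluster_size = 200, 2
beta0, beta1, sigma_u = -0.5, 1.0, 0.8  # fixed effects; random-intercept SD

u = rng.normal(0.0, sigma_u, n_clusters)            # cluster-level random intercepts
x = rng.integers(0, 2, (n_clusters, cluster_size))  # within-cluster covariate
eta = beta0 + beta1 * x + u[:, None]                # linear predictor
y = (rng.random((n_clusters, cluster_size)) < norm.cdf(eta)).astype(int)
```

With only two observations per cluster, each cluster carries very little information about its own intercept, which is exactly the regime where the likelihood approximations compared in the paper start to diverge.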

    The influence of problem features and individual differences on strategic performance in simple arithmetic

    The present study examined the influence of features differing across problems (problem size and operation) and features differing across individuals (daily arithmetic practice, amount of calculator use, arithmetic skill, and gender) on simple-arithmetic performance. Regression analyses were used to investigate the role of these variables in both strategy selection and strategy efficiency. Results showed that more skilled and highly practiced students used memory retrieval more often and executed their strategies more efficiently than less skilled and less practiced students. Furthermore, calculator use was correlated with retrieval efficiency and procedural efficiency but not with strategy selection. Only very small associations with gender were observed, with boys retrieving slightly faster than girls. Implications of the present findings for models of mental arithmetic are discussed.

    Small sample solutions for structural equation modeling

    Structural equation modeling (SEM) is a widely used statistical technique for studying relationships in multivariate data. Unfortunately, when the sample size is small, several problems may arise. Some of these problems relate to point estimation, whereas others relate to small-sample inference. This chapter presents several potential solutions for point estimation, including penalized likelihood estimation, a method based on model-implied instrumental variables, two-step estimation, and factor score regression. It also contains a brief discussion of inference, including several corrections for the chi-square test statistic, local fit statistics, and some suggestions to improve the quality of standard errors and confidence intervals.
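Factor score regression, one of the point-estimation strategies listed above, replaces the latent variables by estimated factor scores and then runs an ordinary regression on those scores. A minimal Python sketch with illustrative data (not from the chapter); note that this naive two-step estimator is biased unless a correction such as Croon's is applied:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Hypothetical small sample: two latent factors, three indicators each,
# with a true structural effect of 0.6 between them.
n = 50
fx = rng.standard_normal(n)
fy = 0.6 * fx + 0.8 * rng.standard_normal(n)
X = np.outer(fx, [0.9, 0.8, 0.7]) + 0.5 * rng.standard_normal((n, 3))
Y = np.outer(fy, [0.9, 0.8, 0.7]) + 0.5 * rng.standard_normal((n, 3))

# Step 1: estimate factor scores separately for each measurement block.
sx = FactorAnalysis(n_components=1, random_state=0).fit_transform(X).ravel()
sy = FactorAnalysis(n_components=1, random_state=0).fit_transform(Y).ravel()

# Step 2: ordinary least squares on the factor scores. Without a correction
# this two-step estimate is attenuated by measurement error in the scores.
slope = np.polyfit(sx, sy, 1)[0]
```

The appeal for small samples is that each step involves a much smaller model than a full SEM, at the cost of the bias the chapter's corrections are designed to remove.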

    lavaan: An R Package for Structural Equation Modeling

    Structural equation modeling (SEM) is a vast field, widely used by applied researchers in the social and behavioral sciences. Over the years, many software packages for structural equation modeling have been developed, both free and commercial. However, perhaps the best state-of-the-art packages in this field are still closed-source and/or commercial. The R package lavaan has been developed to provide applied researchers, teachers, and statisticians with a free, fully open-source, yet commercial-quality package for latent variable modeling. This paper explains the aims behind the development of the package, gives an overview of its most important features, and provides some examples to illustrate how lavaan works in practice.

    Activation detection in event-related fMRI through clustering of wavelet distributions

    We propose a new method for the detection of activated voxels in event-related BOLD fMRI data. We model the statistics of the wavelet histograms derived from each voxel time series independently through a generalized Gaussian distribution (GGD). We perform k-means clustering of the GGDs characterizing the voxel data in a synthetic data set, using the symmetrized Kullback-Leibler divergence (KLD) as a similarity measure. We compare our technique with GLM modeling and with another clustering method for activation detection that directly uses the wavelet coefficients as features. Our method is shown to be considerably more stable against realistic hemodynamic variability.
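The dissimilarity driving the clustering, the symmetrized KLD between two generalized Gaussian densities, can be sketched numerically. The sketch below is an illustration rather than the authors' implementation: it evaluates the divergence by quadrature instead of the closed form, and the parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import gennorm
from scipy.integrate import quad

def sym_kld(p1, p2, lim=20.0):
    """Symmetrized KL divergence between two generalized Gaussians.

    Each parameter pair is (alpha, beta): scale and shape of the GGD
    p(x) = beta / (2 alpha Gamma(1/beta)) exp(-(|x|/alpha)^beta),
    which is scipy's gennorm with scale=alpha. Integration is truncated
    to [-lim, lim] for simplicity.
    """
    def kld(a, b):
        (sa, ba), (sb, bb) = a, b
        f = lambda x: gennorm.pdf(x, ba, scale=sa) * (
            gennorm.logpdf(x, ba, scale=sa) - gennorm.logpdf(x, bb, scale=sb))
        return quad(f, -lim, lim)[0]
    return kld(p1, p2) + kld(p2, p1)

# Sanity checks on hypothetical voxel GGD fits.
d_same = sym_kld((1.0, 2.0), (1.0, 2.0))  # identical distributions -> 0
d_diff = sym_kld((1.0, 2.0), (3.0, 0.8))  # clearly different -> large
```

Plugging such a dissimilarity into k-means requires replacing the Euclidean assignment step with this measure, as the paper does; the sketch covers only the divergence itself.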

    Factor score path analysis: an alternative for SEM
