A three domain covariance framework for EEG/MEG data
In this paper we introduce a covariance framework for the analysis of EEG and
MEG data that takes into account observed temporal stationarity on small time
scales and trial-to-trial variations. We formulate a model for the covariance
matrix, which is a Kronecker product of three components that correspond to
space, time and epochs/trials, and consider maximum likelihood estimation of
the unknown parameter values. An iterative algorithm that finds approximations
of the maximum likelihood estimates is proposed. We perform a simulation study
to assess the performance of the estimator and investigate the influence of
different assumptions about the covariance factors on the estimated covariance
matrix and on its components. In addition, we illustrate our method on real
EEG and MEG data sets.
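The three-factor Kronecker structure can be illustrated numerically. The sketch below (a minimal illustration with arbitrary stand-in factor matrices, not the authors' estimation algorithm or real EEG/MEG estimates) builds the full covariance as a Kronecker product of space, time and trial components, and checks the log-determinant identity that makes such structured models computationally attractive:

```python
import numpy as np

# Hypothetical small positive-definite covariance factors
# (arbitrary stand-ins, not estimates from real EEG/MEG data).
def random_spd(n, seed):
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

S = random_spd(3, 0)   # spatial factor (3 sensors)
T = random_spd(4, 1)   # temporal factor (4 samples)
E = random_spd(2, 2)   # epoch/trial factor (2 trials)

# Full covariance: Kronecker product of the three components.
Sigma = np.kron(E, np.kron(T, S))     # shape (24, 24)

# The Kronecker structure makes the log-determinant cheap:
# log det(E (x) T (x) S) = nT*nS*log det(E) + nE*nS*log det(T) + nE*nT*log det(S)
nS, nT, nE = 3, 4, 2
logdet_direct = np.linalg.slogdet(Sigma)[1]
logdet_kron = (nT * nS * np.linalg.slogdet(E)[1]
               + nE * nS * np.linalg.slogdet(T)[1]
               + nE * nT * np.linalg.slogdet(S)[1])
print(np.isclose(logdet_direct, logdet_kron))  # True
```

The same factorization lets inverses and Cholesky factors be computed component-wise, which is what makes maximum likelihood estimation in such models tractable for high-dimensional sensor arrays.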
The proposed covariance model is applicable in a variety of cases where
spontaneous EEG or MEG acts as source of noise and realistic noise covariance
estimates are needed for accurate dipole localization, such as in evoked
activity studies, or where the properties of spontaneous EEG or MEG are
themselves the topic of interest, such as in combined EEG/fMRI experiments in
which the correlation between EEG and fMRI signals is investigated.
Comment: 25 pages, 8 figures, 1 table
A non-Markovian model for cell population growth: speed of convergence and central limit theorem
In De Gunst (1989) a stochastic model was developed for the growth of a batch culture of plant cells. In this paper the mathematical properties of the model are considered. We investigate the asymptotic behaviour of the population growth predicted by the model as the initial number of cells in the population tends to infinity. In particular, it is shown that the total cell number, which is a non-Markovian counting process, converges almost surely, uniformly on the real line, to a non-random function, and the rate of convergence is established. Moreover, a central limit theorem is proved. Computer simulations illustrate the behaviour of the process, and the model is graphically compared with experimental data.
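The limit behaviour described above, a random counting process collapsing onto a deterministic curve as the population grows, can be illustrated with a generic toy process. The sketch below uses i.i.d. exponential event times, not the batch-culture model itself; it only shows the same two phenomena: a non-random limit function and fluctuations that shrink as the initial number grows:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generic illustration: a counting process built from n i.i.d. exponential
# event times. N_n(t)/n converges to the deterministic curve F(t),
# with CLT-scale fluctuations of order 1/sqrt(n).
def normalized_count(n, t, rate=1.0):
    times = rng.exponential(1.0 / rate, size=n)
    return np.mean(times <= t)          # N_n(t) / n

t = 1.0
limit = 1.0 - np.exp(-t)                # deterministic limit F(t)
err_small = abs(normalized_count(100, t) - limit)
err_large = abs(normalized_count(100_000, t) - limit)
print(err_small, err_large)             # error shrinks as n grows
```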
Wild Bootstrap for Counting Process-Based Statistics
The wild bootstrap is a popular resampling method in the context of
time-to-event data analyses. Previous works established its large-sample
properties for applications to different estimators and test statistics. It
can be used to justify the accuracy of inference procedures such as
hypothesis tests or time-simultaneous confidence bands. This paper consists of
two parts: in Part I, a general framework is developed in which the large
sample properties are established in a unified way by using martingale
structures. The framework includes most of the well-known non- and
semiparametric statistical methods in time-to-event analysis and parametric
approaches. In Part II, the Fine-Gray proportional sub-hazards model
exemplifies the theory for inference on cumulative incidence functions given
the covariates. The model falls within the framework if the data are
censoring-complete. A simulation study demonstrates the reliability of the
method and an application to a data set about hospital-acquired infections
illustrates the statistical procedure.
Comment: 2 parts, 115 pages, 2 figures, 13 tables
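As a rough illustration of the multiplier idea behind the wild bootstrap, the sketch below perturbs the increments of a Nelson-Aalen-type estimator with i.i.d. standard-normal multipliers. The data are invented and the setup is far simpler than the paper's general martingale framework; it only shows the resampling mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy right-censored sample (hypothetical times and status, for illustration).
time = np.array([2., 3., 3., 5., 7., 8., 11., 12.])
event = np.array([1, 1, 0, 1, 1, 0, 1, 1])      # 1 = observed event

order = np.argsort(time, kind="stable")
time, event = time[order], event[order]
at_risk = np.arange(len(time), 0, -1)           # risk-set size at each time

# Nelson-Aalen increments dA(t_i) = dN(t_i) / Y(t_i)
increments = event / at_risk
nelson_aalen = np.cumsum(increments)

# Wild bootstrap: multiply each increment by an i.i.d. standard-normal
# multiplier G_i; the resulting processes mimic the limiting distribution
# of the centered estimator and yield e.g. pointwise standard errors.
def wild_bootstrap_paths(n_boot=1000):
    G = rng.standard_normal((n_boot, len(time)))
    return np.cumsum(G * increments, axis=1)

paths = wild_bootstrap_paths()
se_last = paths[:, -1].std()   # bootstrap SE at the last event time
```

Because the multipliers are drawn conditionally on the observed data, repeated resampling needs no re-estimation of the model, which is what makes the approach attractive for time-simultaneous confidence bands.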
Inference via Wild Bootstrap and Multiple Imputation under Fine-Gray Models with Incomplete Data
Fine-Gray models specify the subdistribution hazards for one out of multiple
competing risks to be proportional. The estimators of parameters and cumulative
incidence functions under Fine-Gray models have a simpler structure when data
are censoring-complete than when they are more generally incomplete. This paper
considers the case of incomplete data but exploits the above-mentioned
simpler estimator structure, for which a wild bootstrap approach to inference
exists. The idea is to link the methodology under
censoring-completeness with the more general right-censoring regime with the
help of multiple imputation. In a simulation study, this approach is compared
to the estimation procedure proposed in the original paper by Fine and Gray
when it is combined with a bootstrap approach. An application to a data set
about hospital-acquired infections illustrates the method.
Comment: 32 pages, 2 figures, 1 table
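A multiple-imputation analysis ends with a combining step across the imputed data sets; the standard device for this is Rubin's rules, sketched below with made-up per-imputation estimates (the paper's exact combination procedure may differ in detail):

```python
import numpy as np

# Rubin's rules for pooling estimates across M imputed data sets.
# The input numbers below are invented, purely for illustration.
def pool_rubin(estimates, variances):
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    qbar = estimates.mean()            # pooled point estimate
    ubar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)          # between-imputation variance
    total = ubar + (1 + 1 / m) * b     # total variance of the pooled estimate
    return qbar, total

est, var = pool_rubin([0.50, 0.55, 0.48], [0.04, 0.05, 0.04])
```

The between-imputation term inflates the variance to account for the extra uncertainty introduced by the missing data, which is why combining imputation with a resampling scheme for the complete-data estimator requires care.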
Novel Candidate Genes Associated with Hippocampal Oscillations
The hippocampus is critical for a wide range of emotional and cognitive behaviors. Here, we performed the first genome-wide search for genes influencing hippocampal oscillations. We measured local field potentials (LFPs) using 64-channel multi-electrode arrays in acute hippocampal slices of 29 BXD recombinant inbred mouse strains. Spontaneous activity and carbachol-induced fast network oscillations were analyzed with spectral and cross-correlation methods, and the resulting traits were used for mapping quantitative trait loci (QTLs), i.e., regions on the genome that may influence hippocampal function. Using genome-wide hippocampal gene expression data, we narrowed the QTLs to eight candidate genes, including Plcb1, a phospholipase that is known to influence hippocampal oscillations. We also identified two genes coding for calcium channels, Cacna1b and Cacna1e, which mediate presynaptic transmitter release and have not previously been shown to regulate hippocampal network activity. Furthermore, we showed that the amplitude of the hippocampal oscillations is genetically correlated with hippocampal volume and several measures of novel environment exploration.
LLM3D: a log-linear modeling-based method to predict functional gene regulatory interactions from genome-wide expression data
All cellular processes are regulated by condition-specific and time-dependent interactions between transcription factors and their target genes. While in simple organisms, e.g. bacteria and yeast, a large amount of experimental data is available to support functional transcription regulatory interactions, in mammalian systems the reconstruction of gene regulatory networks still heavily depends on the accurate prediction of transcription factor binding sites. Here, we present a new method, log-linear modeling of 3D contingency tables (LLM3D), to predict functional transcription factor binding sites. LLM3D combines gene expression data, gene ontology annotation and computationally predicted transcription factor binding sites in a single statistical analysis, and offers a methodological improvement over existing enrichment-based methods. We show that LLM3D successfully identifies novel transcriptional regulators of the yeast metabolic cycle, and predicts key regulators of mouse embryonic stem cell self-renewal more accurately than existing enrichment-based methods. Moreover, in a clinically relevant in vivo injury model of mammalian neurons, LLM3D identified peroxisome proliferator-activated receptor γ (PPARγ) as a neuron-intrinsic transcriptional regulator of regenerative axon growth. In conclusion, LLM3D provides a significant improvement over existing methods in predicting functional transcription regulatory interactions in the absence of experimental transcription factor binding data.
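The core idea of a log-linear analysis of a three-way contingency table can be sketched as follows. The 2×2×2 counts are invented, and the sketch fits only the mutual-independence model and its likelihood-ratio statistic, not LLM3D's full model hierarchy:

```python
import numpy as np

# Hypothetical 2x2x2 table cross-classifying genes by, e.g., differential
# expression, GO-category membership and predicted TFBS presence.
# All counts are invented, purely for illustration.
table = np.array([[[30., 10.], [12., 8.]],
                  [[20., 15.], [9., 40.]]])
n = table.sum()

# Expected counts under the mutual-independence log-linear model:
# m_ijk = n * p_i.. * p_.j. * p_..k
p_i = table.sum(axis=(1, 2)) / n
p_j = table.sum(axis=(0, 2)) / n
p_k = table.sum(axis=(0, 1)) / n
expected = n * p_i[:, None, None] * p_j[None, :, None] * p_k[None, None, :]

# Likelihood-ratio statistic G^2; a large value relative to a chi-squared
# reference signals an interaction among the three classifications.
g2 = 2.0 * np.sum(table * np.log(table / expected))
df = table.size - (1 + sum(s - 1 for s in table.shape))  # 8 - 4 = 4
```

Comparing nested log-linear models in this way (independence versus models with interaction terms) is what lets such an analysis decide whether predicted binding sites are associated with expression and annotation jointly, rather than testing each margin separately.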