
    Does Filtering Preclude Us from Studying ERP Time-Courses?

    Filtering can distort signals (Lyons, 2004), a problem well documented for ERP data (see, e.g., Luck, 2005; Kappenman and Luck, 2010; May and Tiitinen, 2010). It is thus recommended to filter ERPs as little as possible (Luck, 2005). Recently, VanRullen (2011) provided a healthy reminder of the dangers of filtering. Using simulated data, VanRullen demonstrated that an effect occurring randomly between 150 and 180 ms post-stimulus can be smeared back in time by a 30-Hz low-pass filter, so that it appears to start at 100 ms. From this result, VanRullen concluded that if researchers filter their data, they cannot interpret the onsets of ERP effects and should limit their conclusions to peak amplitudes and latencies, without interpreting precise ERP time-courses. However, as we are going to see, we can study ERP onsets by using causal filters.
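    The causal versus zero-phase distinction can be illustrated with a short simulation. The Python/SciPy sketch below is not code from the article: the sampling rate, filter order, step onset, and detection threshold are arbitrary choices for illustration. It low-pass filters a simulated effect with a sharp onset at 150 ms using a 30-Hz Butterworth filter, once causally (`lfilter`) and once forward-backward (`filtfilt`, zero-phase and therefore non-causal).

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter, filtfilt

    # Simulated effect with a sharp onset at 150 ms post-stimulus
    fs = 1000                          # sampling rate in Hz (assumed for illustration)
    t = np.arange(-0.2, 0.5, 1 / fs)
    erp = np.where(t >= 0.15, 1.0, 0.0)

    # 4th-order Butterworth low-pass at 30 Hz
    b, a = butter(4, 30 / (fs / 2))

    causal = lfilter(b, a, erp)        # causal: output cannot precede the input
    zero_phase = filtfilt(b, a, erp)   # zero-phase: filters forward then backward

    def onset(x, threshold=0.01):
        """Time of the first sample exceeding the threshold (arbitrary criterion)."""
        return t[np.argmax(x > threshold)]

    print(onset(erp), onset(causal), onset(zero_phase))
    ```

    With the causal filter, the response cannot start before the true onset; with the zero-phase filter, the smoothed edge extends backward in time, which is exactly the smearing VanRullen warned about.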

    Improving standards in brain-behavior correlation analyses

    Associations between two variables, for instance between brain and behavioral measurements, are often studied using correlations, and in particular Pearson correlation. However, Pearson correlation is not robust: outliers can introduce false correlations or mask existing ones. These problems are exacerbated in brain imaging by a widespread lack of control for multiple comparisons and by several issues with data interpretation. We illustrate these important problems associated with brain-behavior correlations, drawing examples from published articles, and we make several propositions to alleviate them.
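    The outlier problem can be shown in a few lines. The Python sketch below is an illustration, not code from the article: the sample size, random seed, and outlier value are made up, and Spearman's rank correlation stands in as one simple robust alternative (the article discusses robust methods more generally). Uncorrelated "brain" and "behavior" scores plus a single extreme participant yield a sizable Pearson correlation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Uncorrelated "brain" and "behavior" measurements for 29 participants
    brain = rng.normal(size=29)
    behavior = rng.normal(size=29)

    # One extreme participant, high on both measures, inflates Pearson's r
    brain = np.append(brain, 6.0)
    behavior = np.append(behavior, 6.0)

    r_pearson, _ = stats.pearsonr(brain, behavior)
    r_spearman, _ = stats.spearmanr(brain, behavior)
    print(r_pearson, r_spearman)
    ```

    The rank-based Spearman correlation treats the outlier as just the top rank, so it stays much closer to the true (null) association.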

    Reaction times and other skewed distributions: problems with the mean and the median

    To summarise skewed (asymmetric) distributions, such as reaction times, the mean or the median is typically used as a measure of central tendency. Using the mean might seem surprising, given that it provides a poor measure of central tendency for skewed distributions, whereas the median provides a better indication of the location of the bulk of the observations. However, the sample median is biased: with small sample sizes, it tends to overestimate the population median. This is not the case for the mean. Based on this observation, Miller (1988) concluded that "sample medians must not be used to compare reaction times across experimental conditions when there are unequal numbers of trials in the conditions". Here we replicate and extend Miller (1988), and demonstrate that his conclusion was ill-advised for several reasons. First, the median's bias can be corrected using a percentile bootstrap bias correction. Second, a careful examination of the sampling distributions reveals that the sample median is median unbiased, whereas the mean is median biased when dealing with skewed distributions. That is, the sample mean estimates the population mean correctly on average, but in the typical (median) sample it does not; the opposite holds for the sample median. In addition, simulations of false and true positives in various situations show that no method dominates. Crucially, neither the mean nor the median is sufficient or even necessary to compare skewed distributions. Different questions require different methods, and it would be unwise to use the mean or the median in all situations. Better tools are available to gain a deeper understanding of how distributions differ: we illustrate the hierarchical shift function, a powerful alternative that relies on quantile estimation. All the code and data needed to reproduce the figures and analyses in the article are available online.
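    The median's small-sample bias and the percentile bootstrap bias correction can be sketched in Python (the article's own code is in R; the distribution, sample size, and simulation settings below are assumptions for illustration). Samples of n = 4 are drawn from a lognormal distribution whose population median is exactly 1, and the bias-corrected estimate is 2 × sample median minus the mean of the bootstrap medians.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n, nsims, nboot = 4, 5000, 500
    pop_median = 1.0              # median of lognormal(mu=0, sigma=1) is exp(0) = 1

    raw = np.empty(nsims)
    corrected = np.empty(nsims)
    for i in range(nsims):
        sample = rng.lognormal(0.0, 1.0, n)
        est = np.median(sample)
        # Percentile bootstrap bias correction:
        # 2 * estimate - mean of the bootstrap estimates
        boot = np.median(rng.choice(sample, (nboot, n)), axis=1)
        raw[i] = est
        corrected[i] = 2 * est - boot.mean()

    bias_raw = raw.mean() - pop_median
    bias_corrected = corrected.mean() - pop_median
    print(bias_raw, bias_corrected)
    ```

    The raw sample median overestimates the population median on average, and the bootstrap correction pulls the estimate back toward it.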

    A Quantile Shift Approach To Main Effects And Interactions In A 2-By-2 Design

    When comparing two independent groups, shift functions compare multiple quantiles rather than a single measure of location, the goal being to get a more detailed understanding of how the distributions differ. Various versions have been proposed and studied. This paper deals with extensions of these methods to main effects and interactions in a between-by-between, 2-by-2 design. Two approaches are studied: one that compares the deciles of the distributions, and one that has a certain connection to the Wilcoxon-Mann-Whitney method. For both methods, we propose an implementation using the Harrell-Davis quantile estimator in conjunction with a percentile bootstrap approach. We report results of simulations of false and true positive rates. (GitHub repository: https://github.com/GRousselet/decinter)
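    The Harrell-Davis estimator at the core of these methods can be sketched in Python (the repository's implementation is in R; this translation and the toy data are assumptions, not the authors' code). The q-th quantile is estimated as a weighted sum of the order statistics, with weights given by increments of a Beta((n+1)q, (n+1)(1−q)) CDF.

    ```python
    import numpy as np
    from scipy.stats import beta

    def harrell_davis(x, q):
        """Harrell-Davis quantile estimator: beta-weighted sum of order statistics."""
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        a, b = (n + 1) * q, (n + 1) * (1 - q)
        edges = np.arange(n + 1) / n
        w = np.diff(beta.cdf(edges, a, b))   # weight for each order statistic
        return np.sum(w * x)

    # Decile shift between two toy groups: difference at each decile (group 1 - group 2)
    g1 = np.arange(1.0, 101.0)
    g2 = np.arange(11.0, 111.0)              # same shape, shifted by +10
    shift = [harrell_davis(g1, q) - harrell_davis(g2, q)
             for q in np.arange(0.1, 1.0, 0.1)]
    print(shift)
    ```

    Because the second group is a pure location shift of the first, every decile difference is the same; for real data, a non-flat shift function reveals where in the distribution the groups differ.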

    The percentile bootstrap: a primer with step-by-step instructions in R.

    The percentile bootstrap is the Swiss Army knife of statistics: it is a nonparametric method based on data-driven simulations. It can be applied to many statistical problems, as a substitute for standard parametric approaches, or in situations for which parametric methods do not exist. In this Tutorial, we cover R code to implement the percentile bootstrap to make inferences about central tendency (e.g., means and trimmed means) and spread in a one-sample example and in an example comparing two independent groups. For each example, we explain how to derive a bootstrap distribution and how to get a confidence interval and a p value from that distribution. We also demonstrate how to run a simulation to assess the behavior of the bootstrap. For some purposes, such as making inferences about the mean, the bootstrap performs poorly. But for other purposes, it is the only known method that works well over a broad range of situations. More broadly, combining the percentile bootstrap with robust estimators (i.e., estimators that are not overly sensitive to outliers) can help users gain a deeper understanding of their data than they would using conventional methods.
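    A minimal Python analogue of the Tutorial's R workflow (an illustration under assumed settings, not the Tutorial's own code): resample the data with replacement, apply the estimator to each resample, and read the confidence interval off the percentiles of the bootstrap distribution. Here the estimator is a 20% trimmed mean, one of the robust estimators the Tutorial pairs with the bootstrap.

    ```python
    import numpy as np
    from scipy.stats import trim_mean

    rng = np.random.default_rng(0)

    def pb_ci(x, estimator, nboot=2000, alpha=0.05):
        """Percentile bootstrap confidence interval for an arbitrary estimator."""
        boot = np.array([estimator(rng.choice(x, x.size)) for _ in range(nboot)])
        lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
        return lo, hi

    x = rng.lognormal(size=50)         # skewed sample, e.g., reaction-time-like data
    tm = trim_mean(x, 0.2)             # 20% trimmed mean (robust point estimate)
    lo, hi = pb_ci(x, lambda s: trim_mean(s, 0.2))
    print(tm, (lo, hi))
    ```

    Swapping in a different estimator (median, correlation, etc.) requires changing only the function passed to `pb_ci`, which is what makes the method so general.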

    Promoting and supporting credibility in neuroscience.

    No abstract available

    Healthy Aging Delays Scalp EEG Sensitivity to Noise in a Face Discrimination Task

    We used a single-trial ERP approach to quantify age-related changes in the time-course of noise sensitivity. A total of 62 healthy adults, aged between 19 and 98, performed a non-speeded discrimination task between two faces. Stimulus information was controlled by parametrically manipulating the phase spectrum of these faces. Behavioral 75% correct thresholds increased with age. This result may be explained by lower signal-to-noise ratios in older brains. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly delayed noise sensitivity in older observers. This age effect is reliable, as demonstrated by test–retest in 24 subjects, and started about 120 ms after stimulus onset. Our analyses also suggest a qualitative change from a young to an older pattern of brain activity at around 47 ± 4 years of age.
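    The single-trial regression logic can be sketched as follows. This is a toy simulation, not the study's data or code: trial counts, the sensitivity window, and effect size are made up. At each time point, single-trial EEG amplitude is regressed on the stimulus manipulation (here, phase coherence); with a single predictor, the model's R2 is the squared correlation, and plotting it over time points recovers when the brain response carries stimulus information.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    ntrials, ntimes = 200, 100
    coherence = rng.uniform(0, 1, ntrials)   # phase coherence of each stimulus

    # Simulated single-trial EEG: noise everywhere, plus a coherence-dependent
    # response in a hypothetical sensitivity window (time points 40-59)
    eeg = rng.normal(size=(ntrials, ntimes))
    eeg[:, 40:60] += 2.0 * coherence[:, None]

    # GLM at each time point: regress amplitude on coherence; with one
    # predictor, R2 equals the squared Pearson correlation
    r2 = np.array([np.corrcoef(coherence, eeg[:, t])[0, 1] ** 2
                   for t in range(ntimes)])
    peak = int(np.argmax(r2))
    print(peak)
    ```

    The R2 time-course is flat at chance outside the simulated window and rises inside it, which is the kind of function whose onset and shape were compared across ages in the study.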