
    Does Filtering Preclude Us from Studying ERP Time-Courses?

    Filtering can distort signals (Lyons, 2004), a problem well documented for ERP data (see, e.g., Luck, 2005; Kappenman and Luck, 2010; May and Tiitinen, 2010). It is thus recommended to filter ERPs as little as possible (Luck, 2005). Recently, VanRullen (2011) provided a healthy reminder of the dangers of filtering. Using simulated data, VanRullen demonstrated that an effect occurring randomly between 150 and 180 ms post-stimulus can be smeared back in time by a 30-Hz low-pass filter, so that it appears to start at 100 ms. From this result, VanRullen concluded that if researchers filter their data, they cannot interpret the onsets of ERP effects and should limit their conclusions to peak amplitudes and latencies, without interpreting precise ERP time-courses. However, as we are going to see, we can study ERP onsets by using causal filters.
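
    A minimal sketch of the phenomenon described above, not the paper's own code: a zero-phase (non-causal) 30-Hz low-pass filter smears an onset backwards in time, whereas a single causal forward pass cannot. The 4th-order Butterworth design, the square-wave "effect", and all parameter values are illustrative assumptions; it requires the R 'signal' package.

    ```r
    library(signal)

    srate <- 1000                           # sampling rate in Hz (assumed)
    t <- seq(-0.2, 0.6, by = 1/srate)       # time in seconds
    erp <- numeric(length(t))
    erp[t >= 0.15 & t <= 0.25] <- 1         # toy "effect" starting at 150 ms

    bf <- butter(4, 30 / (srate / 2), type = "low")   # 30-Hz low-pass Butterworth

    causal    <- as.numeric(filter(bf, erp))    # causal: one forward pass
    zerophase <- as.numeric(filtfilt(bf, erp))  # non-causal: forward + backward pass

    # The zero-phase output rises before 150 ms; the causal output cannot.
    matplot(t, cbind(erp, causal, zerophase), type = "l", lty = 1,
            xlab = "Time (s)", ylab = "Amplitude")
    abline(v = 0.15, lty = 2)
    legend("topright", c("signal", "causal", "zero-phase"), col = 1:3, lty = 1)
    ```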

    Threshold feedback control for a collective flashing ratchet: threshold dependence

    We study the threshold control protocol for a collective flashing ratchet. In particular, we analyze how the current depends on the values of the thresholds. We have found analytical expressions for the small-threshold dependence, both for the few-particle and for the many-particle case. For few particles, the current is a decreasing function of the thresholds; thus, the maximum current is reached for zero thresholds. In contrast, for many particles the optimal thresholds have a nonzero finite value. We have numerically checked the relation that allows one to obtain the optimal thresholds for an infinite number of particles from the optimal period of the periodic protocol. These optimal thresholds for an infinite number of particles give good results for many particles. In addition, they also give good results for few particles, owing to the smooth dependence of the current up to these threshold values.
    Comment: LaTeX, 10 pages, 7 figures, improved version to appear in Phys. Rev.
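
    A minimal simulation sketch of a threshold-controlled collective flashing ratchet, not the authors' code: the sawtooth potential, the hysteresis form of the switching rule (on above u_on, off below u_off), and every parameter value below are assumptions chosen for illustration only.

    ```r
    set.seed(1)
    N <- 100; V0 <- 5; a <- 1/3            # particles, potential height, asymmetry
    kT <- 1; dt <- 1e-4; steps <- 2e4      # overdamped dynamics, unit friction
    u_on <- 0.1; u_off <- -0.1             # thresholds (hysteresis rule, assumed)

    force <- function(x) {                 # force of a sawtooth potential, period 1
      xm <- x %% 1
      ifelse(xm < a, -V0 / a, V0 / (1 - a))
    }

    x <- runif(N); on <- FALSE; x0 <- mean(x)
    for (s in 1:steps) {
      f <- mean(force(x))                  # net force per particle if switched on
      if (!on && f > u_on)  on <- TRUE     # switch on above the upper threshold
      if (on  && f < u_off) on <- FALSE    # switch off below the lower threshold
      x <- x + (if (on) force(x) else 0) * dt + sqrt(2 * kT * dt) * rnorm(N)
    }
    cat("mean velocity:", (mean(x) - x0) / (steps * dt), "\n")
    ```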

    Improving standards in brain-behavior correlation analyses

    Associations between two variables, for instance between brain and behavioral measurements, are often studied using correlations, in particular Pearson correlation. However, Pearson correlation is not robust: outliers can introduce false correlations or mask existing ones. These problems are exacerbated in brain imaging by a widespread lack of control for multiple comparisons and by several issues with data interpretation. We illustrate these important problems associated with brain-behavior correlations, drawing examples from published articles. We make several propositions to alleviate these problems.
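
    A quick illustration of the robustness problem, with made-up data rather than anything from the paper: a single extreme participant can produce a large Pearson correlation where none exists, while a rank-based estimator such as Spearman's is far less affected.

    ```r
    set.seed(42)
    x <- rnorm(20); y <- rnorm(20)          # uncorrelated "brain" and "behavior" scores
    x[21] <- 6; y[21] <- 6                  # one extreme participant

    cor(x, y)                               # Pearson: inflated by the outlier
    cor(x, y, method = "spearman")          # Spearman: close to zero
    cor.test(x, y)$conf.int                 # CI can misleadingly exclude zero
    ```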

    Reaction times and other skewed distributions: problems with the mean and the median

    To summarise skewed (asymmetric) distributions, such as reaction times, the mean or the median is typically used as a measure of central tendency. Using the mean might seem surprising, given that it provides a poor measure of central tendency for skewed distributions, whereas the median provides a better indication of the location of the bulk of the observations. However, the sample median is biased: with small sample sizes, it tends to overestimate the population median. This is not the case for the mean. Based on this observation, Miller (1988) concluded that "sample medians must not be used to compare reaction times across experimental conditions when there are unequal numbers of trials in the conditions". Here we replicate and extend Miller (1988), and demonstrate that his conclusion was ill-advised for several reasons. First, the median's bias can be corrected using a percentile bootstrap bias correction. Second, a careful examination of the sampling distributions reveals that the sample median is median unbiased, whereas the mean is median biased when dealing with skewed distributions. That is, the sample mean estimates the population mean correctly on average, yet in a typical experiment it over- or under-estimates it. In addition, simulations of false and true positives in various situations show that no method dominates. Crucially, neither the mean nor the median is sufficient or even necessary to compare skewed distributions. Different questions require different methods, and it would be unwise to use the mean or the median in all situations. Better tools are available to gain a deeper understanding of how distributions differ: we illustrate the hierarchical shift function, a powerful alternative that relies on quantile estimation. All the code and data needed to reproduce the figures and analyses in the article are available online.
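
    A sketch of the percentile bootstrap bias correction mentioned above, assuming only base R: the median's bias is estimated from bootstrap resamples and subtracted from the sample median. The exponential sample and the number of resamples are illustrative choices, not values from the paper.

    ```r
    set.seed(1)
    x <- rexp(10)                          # small sample from a skewed distribution

    nboot <- 2000
    boot_md <- replicate(nboot, median(sample(x, replace = TRUE)))
    bias <- mean(boot_md) - median(x)      # estimated bias of the sample median
    md_bc <- median(x) - bias              # bias-corrected median estimate
    c(raw = median(x), corrected = md_bc)
    ```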

    A Quantile Shift Approach To Main Effects And Interactions In A 2-By-2 Design

    When comparing two independent groups, shift functions compare multiple quantiles rather than a single measure of location, the goal being to obtain a more detailed understanding of how the distributions differ. Various versions have been proposed and studied. This paper extends these methods to main effects and interactions in a between-by-between, 2-by-2 design. Two approaches are studied: one that compares the deciles of the distributions, and one that has a certain connection to the Wilcoxon-Mann-Whitney method. For both methods, we propose an implementation based on the Harrell-Davis quantile estimator, in conjunction with a percentile bootstrap approach. We report results of simulations of false and true positive rates.
    Comment: GitHub repository: https://github.com/GRousselet/decinter
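
    A sketch of the two-group building block, not the repository's code: decile differences estimated with the Harrell-Davis estimator (Hmisc::hdquantile) and percentile bootstrap confidence intervals. The groups, sample sizes, and bootstrap settings are assumptions; a 2-by-2 design would apply the same logic to main-effect and interaction contrasts.

    ```r
    library(Hmisc)
    set.seed(1)
    g1 <- rexp(60); g2 <- rexp(60) + 0.3   # illustrative independent groups
    probs <- seq(0.1, 0.9, 0.1)            # the nine deciles

    dec_diff <- function(a, b) hdquantile(a, probs) - hdquantile(b, probs)
    obs <- dec_diff(g2, g1)

    nboot <- 1000
    boot <- replicate(nboot,
      dec_diff(sample(g2, replace = TRUE), sample(g1, replace = TRUE)))
    ci <- apply(boot, 1, quantile, c(0.025, 0.975))  # percentile bootstrap CIs
    rbind(difference = obs, ci)
    ```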

    The percentile bootstrap: a primer with step-by-step instructions in R.

    The percentile bootstrap is the Swiss Army knife of statistics: it is a nonparametric method based on data-driven simulations. It can be applied to many statistical problems, as a substitute for standard parametric approaches, or in situations for which parametric methods do not exist. In this Tutorial, we cover R code to implement the percentile bootstrap to make inferences about central tendency (e.g., means and trimmed means) and spread in a one-sample example and in an example comparing two independent groups. For each example, we explain how to derive a bootstrap distribution and how to get a confidence interval and a p value from that distribution. We also demonstrate how to run a simulation to assess the behavior of the bootstrap. For some purposes, such as making inferences about the mean, the bootstrap performs poorly. But for other purposes, it is the only known method that works well over a broad range of situations. More broadly, combining the percentile bootstrap with robust estimators (i.e., estimators that are not overly sensitive to outliers) can help users gain a deeper understanding of their data than they would using conventional methods.
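
    The basic one-sample recipe, sketched in base R rather than taken from the tutorial itself: resample with replacement, compute a robust estimator (here a 20% trimmed mean, an illustrative choice) on each resample, and read the confidence interval off the percentiles of the bootstrap distribution.

    ```r
    set.seed(1)
    x <- rlnorm(30)                        # a skewed sample

    nboot <- 2000
    bt <- replicate(nboot, mean(sample(x, replace = TRUE), trim = 0.2))
    quantile(bt, c(0.025, 0.975))          # 95% percentile bootstrap CI
    ```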

    Parametric study of EEG sensitivity to phase noise during face processing

    <b>Background: </b> The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information, and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERP to faces are shaped by phase information. Subjects performed a two-alternative force choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. <b>Results: </b> Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity to phase noise function observed in response to faces. <b>Conclusion: </b> Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses

    Extraspinal sciatica revealing late metastatic disease from parotid carcinoma

    Sciatica is a clinical symptom usually caused by a disk herniation and less often by other conditions such as tumors, infections, or inflammatory diseases. We report the case of a woman in whom sciatica led to the identification of a large pelvic metastasis from a carcinoma of the parotid gland.