98 research outputs found

    Comments and Controversies: Ten ironic rules for non-statistical reviewers

    Get PDF
    As an expert reviewer, it is sometimes necessary to ensure a paper is rejected. This can sometimes be achieved by highlighting improper statistical practice. This technical note provides guidance on how to critique the statistical analysis of neuroimaging studies to maximise the chance that the paper will be declined. We will review a series of critiques that can be applied universally to any neuroimaging paper and consider responses to potential rebuttals that reviewers might encounter from authors or editors. NeuroImage 61 (2012) 1300-1310. © 2012 Elsevier Inc. All rights reserved.

    Introduction

    This technical note is written for reviewers who may not have sufficient statistical expertise to provide an informed critique during the peer review process, but would like to recommend rejection on the basis of inappropriate or invalid statistical analysis. This guidance follows the 10 simple rules format and hopes to provide useful tips and criticisms for reviewers who find themselves in this difficult position. These rules are presented for reviewers in an ironic way (see footnote 1 below) that makes it easier (and hopefully more entertaining) to discuss the issues from the point of view of both the reviewer and the author, and to caricature both sides of the arguments. Some key issues are presented more formally in (non-ironic) appendices.

    There is a perceived need to reject peer-reviewed papers with the advent of open access publishing and the large number of journals available to authors. Clearly, there may be idiosyncratic reasons to block a paper - to ensure your precedence in the literature, personal rivalry, etc. - however, we will assume that there is an imperative to reject papers for the good of the community: handling editors are often happy to receive recommendations to decline a paper. This is because they are placed under pressure to maintain a high rejection rate. This pressure is usually exerted by the editorial board (and publishers) and enforced by circulating quantitative information about rejection rates (i.e., naming and shaming lenient editors). All journals want to maximise rejection rates, because this increases the quality of submissions, increases their impact factor and underwrites their long-term viability. A reasonably mature journal like NeuroImage would hope to see between 70% and 90% of submissions rejected. Prestige journals usually like to reject over 90% of the papers they receive. As an expert reviewer, it is your role to help editors decline papers whenever possible. In what follows, we will provide 10 simple rules to make this job easier.

    Rule number one: dismiss self-doubt

    Occasionally, when asked to provide an expert opinion on the design or analysis of a neuroimaging study, you might feel underqualified. For example, you may not have been trained in probability theory or statistics or - if you have - you may not be familiar with topological inference and related topics such as random field theory. It is important to dismiss any ambivalence about your competence to provide a definitive critique. You have been asked to provide comments as an expert reviewer and, operationally, this is now your role. By definition, what you say is the opinion of the expert reviewer and cannot be challenged - in relation to the paper under consideration, you are the ultimate authority. You should therefore write with authority, in a firm and friendly fashion.

    Rule number two: avoid dispassionate statements

    A common mistake when providing expert comments is to provide definitive observations that can be falsified.
    Try to avoid phrases like "I believe" or "it can be shown that". These statements invite a rebuttal that could reveal your beliefs or statements to be false. It is much safer, and preferable, to use phrases like "I feel" and "I do not trust". No one can question the veracity of your feelings and convictions. Another useful device is to make your points vicariously; for example, instead of saying "Procedure A is statistically invalid", it is much better to say that "It is commonly accepted that procedure A is statistically invalid". Although authors may be able to show that procedure A is valid, they will find it more difficult to prove that it is commonly accepted as valid. In short, try to pre-empt a prolonged exchange with authors by centring the issues on convictions held by yourself or others, and try to avoid stating facts.

    (Footnote 1: The points made in this paper rest heavily on irony (from the Ancient Greek εἰρωνεία, eirōneía, meaning dissimulation or feigned ignorance). The intended meaning of ironic statements is the opposite of their literal meaning.)

    Rule number three: submit your comments as late as possible

    It is advisable to delay submitting your reviewer comments for as long as possible - preferably after the second reminder from the editorial office. This has three advantages. First, it delays the editorial process and creates an air of frustration, which you might be able to exploit later. Second, it creates the impression that you are extremely busy (providing expert reviews for other papers). Third, it indicates that you have given this paper due consideration, after thinking about it carefully for several months. A related policy, which enhances your reputation with editors, is to submit large numbers of papers to their journal but politely decline invitations to review other people's papers. This shows that you are focused on your science and are committed to producing high quality scientific reports, without the distraction of peer review or other inappropriate demands on your time.

    Rule number four: the under-sampled study

    If you are lucky, the authors will have based their inference on fewer than 16 subjects. All that is now required is a statement along the following lines:

    "Reviewer: Unfortunately, this paper cannot be accepted due to the small number of subjects. The significant results reported by the authors are unsafe because the small sample size renders their design insufficiently powered. It may be appropriate to reconsider this work if the authors recruit more subjects."

    Notice your clever use of the word "unsafe", which means you are not actually saying the results are invalid. This sort of critique is usually sufficient to discourage an editor from accepting the paper; however - in the unhappy event the authors are allowed to respond - be prepared for something like:

    "Response: We would like to thank the reviewer for his or her comments on sample size; however, his or her concerns are statistically misplaced. This is because a significant result (properly controlled for false positives) based on a small sample indicates the treatment effect is actually larger than the equivalent result with a large sample. In short, not only is our result statistically valid, it is quantitatively stronger than the same result with a larger number of subjects."

    Unfortunately, the authors are correct (see Appendix 1).
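    The authors' rebuttal can be checked with a quick calculation. The sketch below is a minimal illustration (not the paper's Appendix 1): for a one-sample t-test at a fixed significance level, the smallest standardized effect size that can reach significance grows as the sample size shrinks, so a significant result from a small sample implies a larger estimated effect.

```python
# Minimal sketch: the smallest effect size (Cohen's d) that reaches
# significance in a one-sample t-test grows as n shrinks, because the
# test statistic is t = d * sqrt(n).
from scipy import stats

alpha = 0.05
for n in (8, 16, 32, 64, 128):
    t_crit = stats.t.ppf(1 - alpha, df=n - 1)  # one-sided critical value
    d_min = t_crit / n ** 0.5
    print(f"n = {n:3d}: minimum significant effect size d = {d_min:.2f}")
```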
    On the bright side, the authors did not resort to the usual anecdotes that beguile handling editors. Responses that one is in danger of eliciting include things like:

    "Response: We suspect the reviewer is one of those scientists who would reject our report of a talking dog because our sample size equals one!"

    Or, a slightly more considered rebuttal:

    "Response: Clearly, the reviewer has never heard of the fallacy of classical inference. Large sample sizes are not a substitute for good hypothesis testing. Indeed, the probability of rejecting the null hypothesis under trivial treatment effects increases with sample size."

    Thankfully, you have heard of the fallacy of classical inference (see Appendix 1) and will call upon it when needed (see the next rule). When faced with the above response, it is often worthwhile trying a slightly different angle of attack; for example (footnote 2: this point was raised by a reviewer of the current paper):

    "Reviewer: I think the authors misunderstood my point here: the point that a significant result with a small sample size is more compelling than one with a large sample size ignores the increased influence of outliers and the lack of robustness of small samples."

    Unfortunately, this is not actually the case and the authors may respond with:

    "Response: The reviewer's concern now pertains to the robustness of parametric tests with small sample sizes. Happily, we can dismiss this concern because outliers decrease the type I error of parametric tests."

    At this point, it is probably best to proceed to rule six.

    Rule number five: the over-sampled study

    If the number of subjects reported exceeds 32, you can now try a less common, but potentially potent, argument of the following sort:

    "Reviewer: I would like to commend the authors for studying such a large number of subjects; however, I suspect they have not heard of the fallacy of classical inference. Put simply, when a study is overpowered (with too many subjects), even the smallest treatment effect will appear significant. In this case, although I am sure the population effects reported by the authors are significant, they are probably trivial in quantitative terms. It would have been much more compelling had the authors been able to show a significant effect without resorting to large sample sizes. However, this was not the case and I cannot recommend publication."

    You could even drive your point home with:

    "Reviewer: In fact, the neurological model would only consider a finding useful if it could be reproduced three times in three patients. If I have to analyse 100 patients before finding a discernible effect, one has to ask whether this effect has any diagnostic or predictive value."

    Most authors (and editors) will not have heard of this criticism but, after a bit of background reading, will probably try to talk their way out of it by referring to effect sizes (see Appendix 2). Happily, there are no rules that establish whether an effect size is trivial or nontrivial. This means that if you pursue this line of argument diligently, it should lead to a positive outcome.
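    The "fallacy of classical inference" invoked above is easy to demonstrate numerically. The sketch below is a minimal illustration (not from the paper): with a trivial but nonzero true effect, the probability of rejecting the null hypothesis climbs towards one as the sample size grows.

```python
# Minimal sketch: with a trivial true effect (d = 0.05), the rejection
# rate of a one-sample t-test approaches 1 as n increases.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n_reps = 0.05, 200
for n in (32, 500, 5000, 50_000):
    rejections = sum(
        stats.ttest_1samp(rng.normal(d, 1.0, n), 0.0).pvalue < 0.05
        for _ in range(n_reps)
    )
    print(f"n = {n:6d}: rejection rate = {rejections / n_reps:.2f}")
```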
    Rule number six: untenable assumptions (nonparametric analysis)

    If the number of subjects falls between 16 and 32, it is probably best to focus on the fallibility of classical inference - namely, its assumptions. Happily, in neuroimaging, it is quite easy to sound convincing when critiquing along these lines; for example:

    "Reviewer: I am very uncomfortable about the numerous and untenable assumptions that lie behind the parametric tests used by the authors. It is well known that MRI data have a non-Gaussian (Rician) distribution, which violates the parametric assumptions of their statistical tests. It is imperative that the authors repeat their analysis using nonparametric tests."

    The nice thing about this request is that it will take some time to perform nonparametric tests. Furthermore, the nonparametric tests will, by the Neyman-Pearson lemma (footnote 3: the Neyman-Pearson lemma states that, when performing a hypothesis test, the likelihood ratio test is the most powerful test for a given size and threshold), be less sensitive than the original likelihood ratio tests reported by the authors - and their significant results may disappear. However, be prepared for the following rebuttal:

    "Response: We would like to thank the reviewer for his or her helpful suggestions about nonparametric testing; however, we would like to point out that it is not the distribution of the data that is assumed to be Gaussian in parametric tests, but the distribution of the random errors. These are guaranteed to be Gaussian for our data, by the central limit theorem (footnote 4: the central limit theorem states the conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed), because of the smoothing applied to the data and because our summary statistics at the between-subject level are linear mixtures of data at the within-subject level."

    The authors are correct here, and this sort of response should be taken as a cue to pursue a different line of critique (rule number seven below).
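    The rebuttal's appeal to the central limit theorem can be checked numerically. The sketch below is a minimal illustration (not from the paper): raw Rician-distributed values are clearly skewed, but averages over modest numbers of samples, like the linear summary statistics the authors describe, are already close to Gaussian.

```python
# Minimal sketch: raw Rician data are skewed, but means of 30 samples
# are nearly symmetric, as the central limit theorem predicts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rician(size):
    # magnitude of a complex Gaussian signal, as in magnitude MRI data
    return np.hypot(rng.normal(1, 1, size), rng.normal(1, 1, size))

raw = rician(100_000)
means = rician((100_000, 30)).mean(axis=1)
print(f"skewness of raw Rician data: {stats.skew(raw):.3f}")
print(f"skewness of 30-sample means: {stats.skew(means):.3f}")
```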
    Rule number seven: question the validity (cross validation)

    At this stage, it is probably best to question the fundaments of the statistical analysis and try to move the authors out of their comfort zone. A useful way to do this is to keep using words like validity and validation; for example:

    "Reviewer: I am very uncomfortable about the statistical inferences made in this report. The correlative nature of the findings makes it difficult to accept the mechanistic interpretations offered by the authors. Furthermore, the validity of the inference seems to rest upon many strong assumptions. It is imperative that the authors revisit their inference using cross validation and perhaps some form of multivariate pattern analysis."

    Hopefully, this will result in the paper being declined or - at least - being delayed for a few months. However, the authors could respond with something like:

    "Response: We would like to thank the reviewer for his or her helpful comments concerning cross validation. However, the inference made using cross validation accuracy pertains to exactly the same thing as our classical inference; namely, the statistical dependence (mutual information) between our explanatory variables and neuroimaging data. In fact, it is easy to prove (with the Neyman-Pearson lemma) that classical inference is more efficient than cross validation."

    This is frustrating, largely because the authors are correct (footnote 5: inferences based upon cross validation tests (e.g., accuracy or classification performance) are not likelihood ratio tests because, by definition, they are not functions of the complete data whose likelihood is assessed; therefore, by the Neyman-Pearson lemma, they are less powerful), and it is probably best to proceed to rule number eight.

    Rule number eight: exploit superstitious thinking

    As a general point, it is useful to instil a sense of defensiveness in editorial exchanges by citing papers that have been critical of neuroimaging data analysis. A useful entree here is when authors have reported effect sizes to supplement their inferential statistics (p values). Effect sizes can include parameter estimates, regression slopes, correlation coefficients or the proportion of variance explained (see Appendix 2). For example:

    "Reviewer: It appears that the authors are unaware of the dangers of voodoo correlations and double dipping. For example, they report effect sizes based upon data (regions of interest) previously identified as significant in their whole brain analysis. This is not valid and represents a pernicious form of double dipping (biased sampling or the non-independence problem). I would urge the authors to read ..."

    "Response: We thank the reviewer for highlighting the dangers of biased sampling, but this concern does not apply to our report: by definition, the effect size pertains to the data used to make an inference and can be regarded as an in-sample prediction of the treatment effect. We appreciate that effect sizes can overestimate the true effect size, especially when the treatment effect is small or statistical thresholds are high. However, the (in-sample) effect size should not be confused with an out-of-sample prediction (an unbiased estimate of the true effect size). We were not providing an out-of-sample prediction but simply following APA guidelines by supplementing our inference ("Always present effect sizes for primary outcomes." Wilkinson and APA Task Force on Statistical Inference, 1999, p. 599)."

    In this case, the authors have invoked the American Psychological Association (APA) guidelines (Wilkinson and APA Task Force on Statistical Inference, 1999) on good practice for statistical reporting in journals. It is difficult to argue convincingly against these guidelines (which most editors are comfortable with). However, do not be too disappointed, because the APA guidelines enable you to create a Catch-22 for authors who have not reported effect sizes:

    "Reviewer: The authors overwhelm the reader with pretty statistical maps and magnificent p-values but at no point do they quantify the underlying effects about which they are making an inference. For ..."
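    The in-sample bias conceded in the authors' response under rule eight - that effect sizes read off thresholded results overestimate small true effects - is easy to reproduce. The sketch below is a minimal illustration (not from the paper's appendices): voxels selected by a significance threshold show an average effect far above the true simulated one.

```python
# Minimal sketch: effect sizes computed in voxels selected by a
# significance threshold overestimate the true (small) effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n_subjects, n_voxels = 0.2, 16, 10_000
data = rng.normal(true_effect, 1.0, (n_subjects, n_voxels))

t, p = stats.ttest_1samp(data, 0.0)
selected = p < 0.001                 # voxels surviving the threshold
print(f"true effect size:               {true_effect:.2f}")
print(f"voxels selected:                {selected.sum()}")
print(f"mean effect in selected voxels: {data[:, selected].mean():.2f}")
```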

    Increasing nature connection in children:A mini review of interventions

    Get PDF
    Alexia Barrable (ORCID: https://orcid.org/0000-0002-5352-8330). Half of the world's population lives in urban environments. Lifestyle changes in the 20th century have led to people spending more time indoors and less in nature. Due to safety concerns, longer hours in formal education, and a lack of suitable outdoor environments, children in particular have been found to spend very little time outdoors. We have a timely and unique opportunity to have our children (re)connect with nature. Nature connection is a subjective state and trait that encompasses affective, cognitive, and experiential aspects; it is positively associated with wellbeing and is a strong predictor of pro-environmental attitudes and behaviors. This mini-review brings together recent studies that report on interventions to increase nature connection in children. Fourteen studies were identified through electronic searches of Web of Science, Scopus, PsycINFO, ERIC, and Google Scholar. The review aims to offer an overview of the interventions identified, provide a snapshot of the current state of the literature, briefly present themes and trends in the studies identified in relation to nature connection in young people, and propose potential guidelines for future work. https://doi.org/10.3389/fpsyg.2020.00492

    Attachment style moderates partner presence effects on pain : A laser-evoked potentials study

    Get PDF
    Social support is crucial for psychological and physical well-being. Yet, in experimental and clinical pain research, the presence of others has been found to both attenuate and intensify pain. To investigate the factors underlying these mixed effects, we administered noxious laser stimuli to 39 healthy women while their romantic partner was present or absent, and measured pain ratings and laser-evoked potentials to assess the effects of partner presence on subjective pain experience and underlying neural processes. Further, we examined whether individual differences in adult attachment style, alone or in interaction with the partner's level of attentional focus (manipulated to be either on or away from the participant), might modulate these effects. We found that the effects of partner presence versus absence on pain-related measures depended on adult attachment style but not on partner attentional focus. The higher participants' attachment avoidance, the higher pain ratings and N2 and P2 local peak amplitudes were in the presence, compared to the absence, of the romantic partner. As laser-evoked potentials are thought to reflect activity relating to the salience of events, our data suggest that partner presence may influence the perceived salience of events threatening the body, particularly in individuals who tend to mistrust others.
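    The reported pattern - presence effects that depend on attachment avoidance - is a moderation effect, conventionally tested with an interaction term. The sketch below is an illustration on simulated data, not the study's analysis (the real repeated-measures design would call for a mixed model); variable names are assumptions.

```python
# Minimal sketch: testing whether attachment avoidance moderates the
# effect of partner presence on pain ratings (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants = 39
avoidance = rng.normal(0, 1, n_participants)
df = pd.DataFrame({
    "presence": np.tile([0, 1], n_participants),   # partner absent / present
    "avoidance": np.repeat(avoidance, 2),
})
# Simulated moderation: presence raises pain only at high avoidance
df["pain"] = 5 + 0.8 * df["presence"] * df["avoidance"] + rng.normal(0, 1, len(df))

fit = smf.ols("pain ~ presence * avoidance", data=df).fit()
print(fit.params)   # the presence:avoidance term carries the moderation
```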

    Interregional compensatory mechanisms of motor functioning in progressing preclinical neurodegeneration.

    Get PDF
    Understanding brain reserve in preclinical stages of neurodegenerative disorders allows determination of which brain regions contribute to normal functioning despite accelerated neuronal loss. Besides the recruitment of additional regions, a reorganisation and shift of relevance between normally engaged regions is a suggested key mechanism. Thus, network analysis methods seem critical for investigation of changes in directed causal interactions between such candidate brain regions. To identify core compensatory regions, fifteen preclinical patients carrying the genetic mutation leading to Huntington's disease and twelve controls underwent fMRI scanning. They accomplished an auditory paced finger sequence tapping task, which challenged cognitive as well as executive aspects of motor functioning by varying the speed and complexity of movements. To investigate causal interactions among brain regions, a single Dynamic Causal Model (DCM) was constructed and fitted to the data from each subject. The DCM parameters were analysed using statistical methods to assess group differences in connectivity, and the relationship between connectivity patterns and predicted years to clinical onset was assessed in gene carriers. In preclinical patients, we found indications for neural reserve mechanisms predominantly driven by bilateral dorsal premotor cortex, which increasingly activated superior parietal cortices the closer individuals were to estimated clinical onset. This compensatory mechanism was restricted to complex movements characterised by high cognitive demand. Additionally, we identified task-induced connectivity changes in both groups of subjects towards pre- and caudal supplementary motor areas, which were linked to either faster or more complex task conditions. Interestingly, coupling of dorsal premotor cortex and supplementary motor area was more negative in controls compared to gene mutation carriers. Furthermore, changes in the connectivity pattern of gene carriers allowed prediction of the years to estimated disease onset in individuals. Our study characterises the connectivity pattern of core cortical regions maintaining motor function in relation to varying task demand. We identified connections of bilateral dorsal premotor cortex as critical for compensation, as well as task-dependent recruitment of pre- and caudal supplementary motor area. The latter finding nicely mirrors a previously published general linear model-based analysis of the same data. Such knowledge about disease-specific inter-regional effective connectivity may help identify foci for interventions based on transcranial magnetic stimulation designed to stimulate functioning, and also to predict their impact on other regions in motor-associated networks.
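    The core object here, a Dynamic Causal Model, couples regional activity through a bilinear state equation whose parameters (the connectivity the study analyses) can be modulated by task input. The sketch below is a minimal illustration with made-up values, not the study's fitted model; the region labels follow the abstract (PMd, SPC, SMA), but the matrices and inputs are assumptions.

```python
# Minimal sketch of the bilinear state equation behind DCM (illustrative
# parameter values, not the study's fitted ones):
#   dx/dt = (A + u(t) * B) x + C u(t)
# x: activity in [PMd, SPC, SMA]; u(t): task input (e.g., complex tapping).
import numpy as np

A = np.array([[-1.0,  0.2,  0.3],    # endogenous (fixed) connectivity
              [ 0.4, -1.0,  0.0],
              [ 0.3,  0.0, -1.0]])
B = np.array([[ 0.0,  0.0,  0.0],    # task modulation of the PMd -> SPC route
              [ 0.5,  0.0,  0.0],
              [ 0.0,  0.0,  0.0]])
C = np.array([1.0, 0.0, 0.0])        # driving input enters via PMd

dt, T = 0.01, 10.0
x, xs = np.zeros(3), []
for step in range(int(T / dt)):
    u = 1.0 if (step * dt) % 2.0 < 1.0 else 0.0   # boxcar task blocks
    x = x + dt * ((A + u * B) @ x + C * u)        # Euler integration
    xs.append(x.copy())
print("mean activity [PMd, SPC, SMA]:", np.mean(xs, axis=0).round(3))
```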

    Learning and comparing functional connectomes across subjects

    Get PDF
    Functional connectomes capture brain interactions via synchronized fluctuations in the functional magnetic resonance imaging signal. If measured during rest, they map the intrinsic functional architecture of the brain. With task-driven experiments they represent integration mechanisms between specialized brain areas. Analyzing their variability across subjects and conditions can reveal markers of brain pathologies and mechanisms underlying cognition. Methods of estimating functional connectomes from the imaging signal have undergone rapid developments and the literature is full of diverse strategies for comparing them. This review aims to clarify links across functional-connectivity methods as well as to expose the different steps needed to perform a group study of functional connectomes.
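    As a concrete anchor for the estimation-and-comparison pipeline the review surveys, the sketch below builds each subject's connectome as a correlation matrix over simulated regional time series and compares subjects with a simple matrix distance. This is an illustration only; real pipelines would use the more refined covariance estimators and comparison metrics the review discusses.

```python
# Minimal sketch: per-subject correlation connectomes from regional time
# series, compared across subjects by Frobenius distance.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 5, 10, 200
connectomes = []
for _ in range(n_subjects):
    ts = rng.standard_normal((n_timepoints, n_regions))
    ts[:, 1] += 0.5 * ts[:, 0]               # induce one shared coupling
    connectomes.append(np.corrcoef(ts, rowvar=False))

for i in range(n_subjects):
    for j in range(i + 1, n_subjects):
        d = np.linalg.norm(connectomes[i] - connectomes[j])
        print(f"subjects {i} vs {j}: distance = {d:.2f}")
```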

    Neuroimaging Research: From Null-Hypothesis Falsification to Out-of-sample Generalization

    Get PDF
    Brain imaging technology has boosted the quantification of neurobiological phenomena underlying human mental operations and their disturbances. Since its inception, drawing inference on neurophysiological effects has hinged on classical statistical methods, especially the general linear model. The tens of thousands of variables per brain scan were routinely tackled by independent statistical tests on each voxel. This circumvented the curse of dimensionality in exchange for neurobiologically imperfect observation units, a challenging multiple comparisons problem, and limited scaling to currently growing data repositories. Yet, the ever-increasing information granularity of neuroimaging data repositories has launched a rapidly increasing adoption of statistical learning algorithms. These scale naturally to high-dimensional data, extract models from data rather than prespecifying them, and are empirically evaluated for extrapolation to unseen data. The present paper portrays commonalities and differences between long-standing classical inference and upcoming generalization inference relevant for conducting neuroimaging research.
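    The two inference regimes the paper contrasts can be put side by side on simulated data. The sketch below is an illustration, not the paper's analysis: the classical route runs one t-test per voxel with Bonferroni correction, while the generalization route scores a cross-validated decoder on the same data.

```python
# Minimal sketch: mass-univariate null-hypothesis testing versus
# cross-validated out-of-sample prediction on the same simulated data.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_voxels = 80, 500
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, n_voxels))
X[y == 1, :20] += 0.5                       # signal in the first 20 voxels

# Classical route: one t-test per voxel, Bonferroni-corrected
t, p = stats.ttest_ind(X[y == 1], X[y == 0])
print("voxels significant after Bonferroni:", np.sum(p < 0.05 / n_voxels))

# Generalization route: cross-validated decoding accuracy
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"out-of-sample accuracy: {acc.mean():.2f}")
```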

    Cardio-metabolic risk factors and cortical thickness in a neurologically healthy male population: results from the psychological, social and biological determinants of ill health (pSoBid) study

    Get PDF
    Introduction: Cardio-metabolic risk factors have been associated with poor physical and mental health. Epidemiological studies have shown peripheral risk markers to be associated with poor cognitive functioning in normal healthy populations and in disease. The aim of the study was to explore the relationship between cardio-metabolic risk factors and cortical thickness in a neurologically healthy, middle-aged, population-based sample.

    Methods: T1-weighted MRI was used to create models of the cortex for calculation of regional cortical thickness in 40 adult males (average age = 50.96 years) selected from the pSoBid study. The relationships between cardio-vascular risk markers and cortical thickness across the whole brain were examined using general linear models. The relationship with various covariates of interest was explored.

    Results: Lipid fractions with greater triglyceride content (TAG, VLDL and LDL) were associated with greater cortical thickness in a number of brain regions. Greater C-reactive protein (CRP) and intercellular adhesion molecule (ICAM-1) levels were associated with cortical thinning in perisylvian regions of the left hemisphere. Smoking status and education status were significant covariates in the model.

    Conclusions: This exploratory study adds to a small body of existing literature increasingly showing a relationship between cardio-metabolic risk markers and regional cortical thickness across a number of brain regions in a neurologically normal, middle-aged sample. A focused investigation of factors determining the inter-individual variations in regional cortical thickness in the adult brain could provide further clarity in our understanding of the relationship between cardio-metabolic factors and cortical structures.
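    A minimal sketch of the kind of model described in the Methods - one region's cortical thickness regressed on a risk marker with smoking and education as covariates - is shown below on simulated data. Variable names and values are assumptions for illustration, not the pSoBid data.

```python
# Minimal sketch: a general linear model relating regional cortical
# thickness to a risk marker (CRP) with covariates (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
crp = rng.gamma(2.0, 1.5, n)                  # risk marker
smoking = rng.integers(0, 2, n)               # covariates
education = rng.integers(0, 2, n)
thickness = 2.5 - 0.02 * crp - 0.05 * smoking + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([crp, smoking, education]))
fit = sm.OLS(thickness, X).fit()
print(fit.summary(xname=["const", "CRP", "smoking", "education"]))
```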

    Neurogenic Bowel Dysfunction Changes after Osteopathic Care in Individuals with Spinal Cord Injuries: A Preliminary Randomized Controlled Trial

    Get PDF
    Background: Neurogenic bowel dysfunction (NBD) indicates bowel dysfunction due to a lack of nervous control after a central nervous system lesion. Bowel symptoms, such as difficulties with evacuation, constipation, and abdominal pain and swelling, are commonly experienced by individuals with spinal cord injury (SCI). Consequently, individuals with SCI experience general dissatisfaction and a lower perceived quality of life (QoL). Several studies have demonstrated the positive effects of manual therapies on NBD, including Osteopathic Manipulative Treatment (OMT). This study aimed to explore OMT effects on NBD in individuals with SCI compared with Manual Placebo Treatment (MPT). Methods: The study was a double-blind randomized controlled trial composed of three phases, each lasting 30 days (i: NBD/drug monitoring; ii: four OMT/MPT sessions; iii: NBD/drug monitoring and follow-up evaluation). Results: The NBD scale, the QoL worries and concerns sub-questionnaire, and the perception of abdominal swelling and constipation improved significantly after treatment compared to baseline, but only for individuals who underwent OMT. Conclusion: These preliminary results show positive effects of OMT on bowel function and QoL in individuals with SCI, but further studies are needed to confirm our results.

    Users' Trust Building Processes During Their Initial Connecting Behavior in Social Networks: Behavioral and Neural Evidence

    Get PDF
    Social networking sites (SNSs) are a ubiquitous phenomenon in today's society, and their economic and social impact is high. However, despite the fact that many SNSs provide increasingly more system features to boost social networking, there is also increasing concern about trust. Users' trust is important for a long-term oriented and successful SNS, built on lively connecting behavior. Nevertheless, so far only a limited number of studies have investigated users' trust perceptions, an important antecedent of connecting behavior in SNSs. We conducted a behavioral study, as well as a brain imaging experiment, to explore trustworthiness judgments in SNSs in order to better understand how pictures and textual information influence users' initial connecting behavior. Preliminary results of this research-in-progress paper show that both pictures and textual information have a strong influence on trustworthiness judgments, and that these judgments are processed differently in users' brains.