91 research outputs found

    Error bars in experimental biology

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
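
    A worked illustration (not taken from the article) of the quantities that different error bars can represent, computed for a single hypothetical sample in Python; the measurement values are invented for demonstration.

        import numpy as np
        from scipy import stats

        # Hypothetical replicate measurements (n = 10)
        x = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2, 3.7, 4.4, 4.1])
        n = x.size

        sd = x.std(ddof=1)                      # standard deviation: spread of the data
        sem = sd / np.sqrt(n)                   # standard error of the mean: precision of the mean
        half = stats.t.ppf(0.975, n - 1) * sem  # half-width of the 95% confidence interval

        print(f"mean = {x.mean():.2f}")
        print(f"SD   = {sd:.2f}  (descriptive error bar)")
        print(f"SEM  = {sem:.2f}  (inferential error bar, always narrower than the SD)")
        print(f"95% CI = [{x.mean() - half:.2f}, {x.mean() + half:.2f}]")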

    Confidence Intervals Permit, but Do Not Guarantee, Better Inference than Statistical Significance Testing

    A statistically significant result and a non-significant result may differ little, although their significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs, but, for full benefit, such highly desirable statistical reform also requires that researchers interpret CIs without recourse to NHST.
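
    A minimal sketch (not from the studies described) of the situation the fictitious results illustrate: two similar effects, one crossing the p < .05 threshold and one not, whose confidence intervals nevertheless overlap heavily. The effect sizes, standard errors, and degrees of freedom below are hypothetical.

        from scipy import stats

        def p_and_ci(diff, se, df):
            # Two-sided p-value and 95% CI for a mean difference with a given standard error
            t = diff / se
            p = 2 * stats.t.sf(abs(t), df)
            half = stats.t.ppf(0.975, df) * se
            return p, (diff - half, diff + half)

        p1, ci1 = p_and_ci(diff=2.0, se=0.9, df=40)   # p ~ .03  -> "statistically significant"
        p2, ci2 = p_and_ci(diff=1.8, se=1.1, df=30)   # p ~ .11  -> "non-significant"

        # The two intervals overlap almost completely, so the results are consistent,
        # even though only one of them falls below the p < .05 threshold.
        print(f"study 1: p = {p1:.3f}, 95% CI = ({ci1[0]:.2f}, {ci1[1]:.2f})")
        print(f"study 2: p = {p2:.3f}, 95% CI = ({ci2[0]:.2f}, {ci2[1]:.2f})")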

    The influence of journal submission guidelines on authors’ reporting of statistics and use of open research practices: Five years later

    Changes in statistical practices and reporting were documented by Giofrè et al. (PLOS ONE 12(4): e0175583, 2017), who investigated ten statistical and open practices in two high-ranking journals (Psychological Science [PS] and Journal of Experimental Psychology: General [JEPG]): null hypothesis significance testing; confidence or credible intervals; meta-analysis of the results of multiple experiments; confidence interval interpretation; effect size interpretation; sample size determination; data exclusion; data availability; materials availability; and preregistered design and analysis plan. The investigation was based on an analysis of all papers published in these journals between 2013 and 2015. The aim of the present study was to follow up changes in both PS and JEPG in subsequent years, from 2016 to 2020, adding code availability as a further open practice. We found improvement in most practices, with some exceptions (i.e., confidence interval interpretation and meta-analysis). Despite these positive changes, our results indicate a need for further improvements in statistical practices and adoption of open practices.

    Impact ionisation electroluminescence in planar GaAs-based heterostructure Gunn diodes: Spatial distribution and impact of doping nonuniformities

    When biased in the negative differential resistance regime, electroluminescence (EL) is emitted from planar GaAs heterostructure Gunn diodes. This EL is due to the recombination of electrons in the device channel with holes that are generated by impact ionisation when the Gunn domains reach the anode edge. The EL forms non-uniform patterns whose intensity shows short-range variations in the direction parallel to the contacts and decreases along the device channel towards the cathode. This paper employs Monte Carlo models, in conjunction with the experimental data, to analyse these non-uniform EL patterns and to study the carrier dynamics responsible for them. It is found that the short-range lateral (i.e., parallel to the device contacts) EL patterns are probably due to non-uniformities in the doping of the anode contact, illustrating the usefulness of EL analysis for the detection of such inhomogeneities. The overall decrease in EL intensity towards the cathode is also discussed in terms of the interaction of holes with the time-dependent electric field due to the transit of the Gunn domains. Due to their lower relative mobility and the low electric field outside of the Gunn domain, freshly generated holes remain close to the anode until the arrival of a new domain accelerates them towards the cathode. When the average over the transit of several Gunn domains is considered, this results in a higher hole density, and hence a higher EL intensity, next to the anode.

    Theory Testing Using Quantitative Predictions of Effect Size

    Traditional Null Hypothesis Testing procedures are poorly adapted to theory testing. The methodology can mislead researchers in several ways, including: (a) a lack of power can result in an erroneous rejection of the theory; (b) the focus on directionality (ordinal tests) rather than more precise quantitative predictions limits the information gained; and (c) the misuse of probability values to indicate effect size. An alternative approach is proposed in which the theory is used to generate explicit effect size predictions that are compared to the effect size estimates and related confidence intervals to test the theoretical predictions. This procedure is illustrated using the Transtheoretical Model. Data from a sample (N = 3,967) of smokers from a large New England HMO system were used to test the model. A total of 15 predictions were evaluated, each involving the relation between Stage of Change and one of the other 15 Transtheoretical Model variables. For each variable, omega-squared and the related confidence interval were calculated and compared to the predicted effect sizes. Eleven of the 15 predictions were confirmed, providing support for the theoretical model. Quantitative predictions represent a much more direct, informative, and strong test of a theory than the traditional test of significance.
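
    A minimal sketch (not the authors' code) of the comparison this procedure relies on: estimating omega-squared for a one-way design, attaching a confidence interval (here a percentile bootstrap, one of several options), and checking whether a predicted effect size falls inside it. The group data and the predicted value are simulated placeholders.

        import numpy as np

        def omega_squared(groups):
            # Point estimate of omega-squared for a one-way (between-groups) design
            k = len(groups)
            n_total = sum(len(g) for g in groups)
            grand_mean = np.concatenate(groups).mean()
            ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
            ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
            ms_within = ss_within / (n_total - k)
            return (ss_between - (k - 1) * ms_within) / (ss_between + ss_within + ms_within)

        rng = np.random.default_rng(0)
        groups = [rng.normal(loc, 1.0, size=80) for loc in (0.0, 0.3, 0.8)]  # simulated stage groups

        est = omega_squared(groups)
        boot = [omega_squared([rng.choice(g, size=len(g), replace=True) for g in groups])
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])

        predicted = 0.10  # hypothetical theory-derived effect size
        print(f"omega^2 = {est:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
        print("prediction confirmed" if lo <= predicted <= hi else "prediction not confirmed")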

    On the (non)persuasive power of a brain image

    The persuasive power of brain images has captivated scholars in many disciplines. Like others, we too were intrigued by the finding that a brain image makes accompanying information more credible (McCabe & Castel in Cognition 107:343-352, 2008). But when our attempts to build on this effect failed, we instead ran a series of systematic replications of the original study, comprising 10 experiments and nearly 2,000 subjects. When we combined the original data with ours in a meta-analysis, we arrived at a more precise estimate of the effect, determining that a brain image exerted little to no influence. The persistent meme of the influential brain image should be viewed with a critical eye.
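
    A minimal sketch (with placeholder numbers, not the effect sizes reported by McCabe & Castel or the replications) of the inverse-variance pooling such a meta-analysis uses to arrive at a more precise estimate.

        import numpy as np

        # Per-study effect estimates and standard errors: hypothetical placeholders only
        effects = np.array([0.30, 0.05, -0.02, 0.04, 0.01, 0.06, -0.03, 0.02, 0.00, 0.05, 0.03])
        ses     = np.array([0.12, 0.07,  0.06, 0.08, 0.07, 0.06,  0.07, 0.08, 0.06, 0.07, 0.07])

        w = 1.0 / ses**2                              # weight each study by its precision
        pooled = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
        pooled_se = np.sqrt(1.0 / np.sum(w))
        lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

        # A pooled estimate near zero with a narrow CI corresponds to "little to no influence".
        print(f"pooled effect = {pooled:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")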

    High impact = high statistical standards? Not necessarily so.

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues published in 2011 in four journals with high impact factors (Science, Nature, The New England Journal of Medicine and The Lancet) and three journals with relatively lower impact factors (Neuropsychology, Journal of Experimental Psychology: Applied and the American Journal of Public Health). Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect sizes, prospective power or model estimation is the prevalent statistical practice in articles published in Nature (89%), followed by articles published in Science (42%). By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors.

    An electrical equivalent circuit to simulate the output power of an AlGaAs/GaAs planar Gunn diode

    The planar Gunn diode offers the potential of a microwave, millimetre-wave and THz oscillator that can be fabricated as part of a monolithic microwave integrated circuit (MMIC). To date, the RF output power has been too low for many applications. This paper presents a simple electrical equivalent circuit model of an aluminium gallium arsenide (AlGaAs) based planar Gunn diode with an active channel length of approximately 4 μm and a width of 120 μm. The model indicated a maximum RF output power of +5 dBm, compared with published experimental results of −19 dBm for similar diodes.
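
    To put the quoted power levels on a linear scale, the sketch below (not from the paper) converts the simulated +5 dBm and the published −19 dBm into milliwatts; dBm is power referenced to 1 mW.

        # P_dBm = 10 * log10(P_mW / 1 mW), so P_mW = 10 ** (P_dBm / 10)
        def dbm_to_mw(p_dbm: float) -> float:
            return 10 ** (p_dbm / 10.0)

        model_mw = dbm_to_mw(5.0)       # ~3.2 mW, simulated maximum output
        measured_mw = dbm_to_mw(-19.0)  # ~0.013 mW, published experimental result
        print(f"{model_mw:.2f} mW vs {measured_mw:.4f} mW "
              f"(~{model_mw / measured_mw:.0f}x gap between model and measurement)")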

    A Tribute to the Mind, Methodology and Mentoring of Wayne Velicer

    Wayne Velicer is remembered for a mind in which mathematical concepts and calculations intrigued him, behavioral science beckoned him, and people fascinated him. Born in Green Bay, Wisconsin on March 4, 1944, he was raised on a farm, although early influences extended far beyond that beginning. His Mathematics BS with a Psychology minor at Wisconsin State University in Oshkosh and his PhD in Quantitative Psychology from Purdue led him to a fruitful and far-reaching career. He was honored several times as a high-impact author, was a renowned scholar in quantitative and health psychology, and had more than 300 scholarly publications and 54,000+ citations of his work, advancing the arenas of quantitative methodology and behavioral health. In his methodological work, Velicer sought out ways to measure, synthesize, categorize, and assess people and constructs across behaviors and time, largely through principal components analysis, time series, and cluster analysis. Further, he and several colleagues developed a method called Testing Theory-based Quantitative Predictions, successfully applied to predicting outcomes and effect sizes in smoking cessation, diet behavior, and sun protection, with the potential for wider applications. With $60,000,000 in external funding, Velicer also helped engage a large cadre of students and other colleagues in studying methodological models for a myriad of health behaviors within the widely applied Transtheoretical Model of Change. Unwittingly, he engendered indelible memories and gratitude in all who crossed his path. Although Wayne Velicer left this world on October 15, 2017 after battling an aggressive cancer, he is still very present among us.