
    The Effect of Mild Exercise-Induced Dehydration on Sport Concussion Assessment Tool 3 (SCAT3) Scores: A Within-Subjects Design.

    # Background
    Sports-related concussions are prevalent in the United States. Various diagnostic tools are utilized to monitor deviations from baseline in memory, reaction time, symptoms, and balance. Evidence indicates that dehydration may also alter the results of diagnostic tests.
    # Purpose
    The purpose was to determine the effect of exercise-induced dehydration on performance related to concussion examination tools.
    # Study Design
    Repeated measures design.
    # Methods
    Seventeen recreationally competitive, non-concussed participants (age: 23.1 ± 3.1 years, height: 168.93 ± 10.71 cm, mass: 66.16 ± 6.91 kg) performed three thermoneutral, counterbalanced sessions (rested control, euhydrated, dehydrated). Participants were either restricted (0.0 L/hr) or provided fluids (1.0 L/hr) while treadmill running for 60 min at an intensity equal to 65-70% of age-predicted maximum heart rate (APMHR). The Sport Concussion Assessment Tool 3 (SCAT3) was utilized to assess symptoms, memory, balance, and coordination.
    # Results
    Statistically significant differences were seen among sessions for symptom severity and symptom total. The rested control session had significantly lower values than the dehydrated session, and its symptom total was also significantly lower than in the euhydrated condition. No statistically significant differences were seen for the BESS or memory scores.
    # Conclusions
    Mild exercise-induced dehydration results in increased self-reported symptoms associated with concussions. Clinicians tasked with monitoring and accurately diagnosing head trauma should take factors such as hydration status into account when assessing patients for concussion with the SCAT3. Clinicians should proceed with caution and not assume concussion is the primary cause of symptom change.
    # Level of evidence
    Level

    Statistical Primer for Athletic Trainers: Using Confidence Intervals and Effect Sizes to Evaluate Clinical Meaningfulness

    Objective: To describe confidence intervals (CIs) and effect sizes and to provide practical examples to assist clinicians in assessing clinical meaningfulness.
    Background: As discussed in our first article in 2015, which addressed the difference between statistical significance and clinical meaningfulness, evaluating the clinical meaningfulness of a research study remains a challenge for many readers. In this paper, we build on that topic by examining CIs and effect sizes.
    Description: A CI is a range estimated from sample data (the data we collect) that is likely to include the population parameter (value) of interest. Conceptually, it gives the lower and upper limits within which, for example, the mean of the unknown population would likely fall. An effect size is the magnitude of the difference between 2 means. When a statistically significant difference exists between 2 means, the effect size describes how large or small that difference actually is. Confidence intervals and effect sizes enhance the practical interpretation of research results.
    Recommendations: Along with statistical significance, the CI and effect size can assist practitioners in better understanding the clinical meaningfulness of a research study.
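The two quantities described above can be sketched in a few lines of Python. This is a minimal illustration using made-up samples, not code from the article: the z-based interval is a normal approximation (a t-interval is more appropriate for small samples), and `cohens_d` uses the pooled-SD form of Cohen's d.

```python
from statistics import mean, stdev, NormalDist

def mean_ci(sample, confidence=0.95):
    """Normal-approximation CI for a sample mean (z used for brevity;
    a t-interval is more appropriate when the sample is small)."""
    m = mean(sample)
    se = stdev(sample) / len(sample) ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return m - z * se, m + z * se

def cohens_d(a, b):
    """Effect size: difference between two means in pooled-SD units."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd
```

A CI that excludes zero (for a difference) and a d near 0.8 or above would usually be read as clinically meaningful; the thresholds, of course, depend on context.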

    Statistical Primer for Athletic Trainers: Understanding the Role of Statistical Power in Comparative Athletic Training Research

    Objective: To describe the concept of statistical power as related to comparative interventions and how various factors, including sample size, affect statistical power.
    Background: Having a sufficiently sized sample is necessary for an investigation to demonstrate that an effective treatment is statistically superior. Many researchers fail to conduct and report a priori sample-size estimates, which makes nonsignificant results difficult to interpret and causes the clinician to question the planning of the research design.
    Description: Statistical power is the probability of statistically detecting a treatment effect when one truly exists. The α level, the magnitude of the difference between groups, the variability of the data, and the sample size all affect statistical power.
    Recommendations: Authors should conduct and report a priori sample-size estimations in the literature. This will assist clinicians in determining whether the lack of a statistically significant treatment effect is due to an underpowered study or to a treatment's actually having no effect.
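The a priori sample-size estimation the authors recommend can be illustrated with the standard normal-approximation formula for a two-sided, two-sample comparison of means. This is a generic sketch, not the authors' procedure, and it slightly underestimates the exact t-test requirement.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided, two-sample
    comparison of means, via the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
    where d is the anticipated standardized effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

For a medium effect (d = 0.5) at α = 0.05 and 80% power, the approximation gives 63 participants per group; smaller anticipated effects drive the requirement up quickly, which is exactly why underpowered studies with nonsignificant results are hard to interpret.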

    Adverse Childhood Experiences in relation to drug and alcohol use in the 30 days prior to incarceration in a county jail.

    Purpose: To characterize the relationship between adverse childhood experiences (ACEs) and substance use among people incarcerated in a county jail.
    Design/methodology/approach: A questionnaire was administered to 199 individuals incarcerated in a Southwest county jail as part of a social-epidemiological exploration of converging co-morbidities in incarcerated populations. Among 96 participants with complete ACEs data, the authors used logistic regression to determine associations between individual ACEs items, as well as a summative score, and use of methamphetamine, heroin, other opiates, or cocaine and binge drinking in the 30 days prior to incarceration.
    Findings: People who self-reported use of methamphetamine, heroin, other opiates, or cocaine in the 30 days prior to incarceration had higher average ACEs scores. Methamphetamine use was significantly associated with having lived with anyone who served time in a correctional facility and with someone trying to make them touch sexually. Opiate use was significantly associated with having lived with anyone who was depressed, mentally ill, or suicidal; having lived with anyone who used illegal street drugs or misused prescription medications; and with an adult touching them sexually. Binge drinking was significantly associated with having lived with someone who was a problem drinker or alcoholic.
    Originality: Significant associations between methamphetamine and opiate use and specific adverse childhood experiences suggest important entry points for improving jail and community programming.
    Social implications: Our findings point to a need for research to understand differences between methamphetamine use and opiate use in relation to particular adverse experiences during childhood, and a need for tailored interventions for people incarcerated in jail.
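As a rough illustration of the kind of association analysis described above: with a single binary exposure (e.g. one ACEs item) and a binary outcome (e.g. methamphetamine use), the univariable logistic-regression odds ratio reduces to the cross-product ratio of the 2x2 table. The sketch below uses that identity with made-up counts; it is not the authors' model, which may have adjusted for covariates.

```python
from math import exp, log
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, confidence=0.95):
    """Odds ratio and Wald CI from a 2x2 table:
        a = exposed cases,   b = exposed non-cases
        c = unexposed cases, d = unexposed non-cases
    Equivalent to exponentiating the coefficient of a univariable
    logistic regression with one binary predictor."""
    or_ = (a * d) / (b * c)
    se = (1 / a + 1 / b + 1 / c + 1 / d) ** 0.5  # SE of log(OR)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)
```

An association would be reported as significant at the 0.05 level when the 95% CI for the odds ratio excludes 1.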

    Modified Tuck Jump Assessment: Reliability and Training of Raters

    We are writing with regard to “Intra- and inter-rater reliability of the modified tuck jump assessment,” by Fort-Vanmeerhaeghe et al. (2017), published in the Journal of Sports Science & Medicine. The authors reported on the reliability of the modified Tuck Jump Assessment (TJA). The purpose of the article was twofold: to introduce a new scoring methodology and to report interrater and intrarater reliability. The authors found the modified TJA to have excellent interrater reliability (ICC = 0.94, 95% CI = 0.88-0.97) and intrarater reliability (rater 1 ICC = 0.94, 95% CI = 0.88-0.9; rater 2 ICC = 0.96, 95% CI = 0.92-0.98) with experienced raters (n = 2) in a sample of 24 elite volleyball athletes. Overall, we found the study to be well conducted and valuable to the field of injury screening; however, the study did not adequately explain how the raters were trained in the modified TJA to improve consistency of scoring, or how the individual flaw “excessive contact noise at landing” was modified. This information is necessary to improve the clinical utility of the TJA and to direct future reliability studies. The TJA has been changed at least three times in the literature: from the initial introduction (Myer et al., 2006), to the most referenced and detailed protocol (Myer et al., 2011), to the publication under discussion (Fort-Vanmeerhaeghe et al., 2017). The initial test protocol was based upon clinical expertise and has evolved over time as new research emerged and problems arose with the original TJA. Initially, the TJA was scored on a visual analog scale (Myer et al., 2006), changed to a dichotomous scale (0 for no flaw, 1 for flaw present) (Myer et al., 2011), and most recently modified to use an ordinal scale (Fort-Vanmeerhaeghe et al., 2017).
A significant disparity in the reported interrater and intrarater reliability arose with the dichotomously scored TJA, between those involved in the development of the TJA (Herrington et al., 2013) and researchers who were not (Dudley et al., 2013). Dudley et al. (2013) reported a lack of clarity in the protocol and rater training in the dichotomous TJA description (Myer et al., 2011), and these limitations may have contributed to the poor to moderate reliability found in their study of varied raters with differing educational backgrounds. Possibly in reference to the issues raised by Dudley et al. (2013), Fort-Vanmeerhaeghe et al. (2017) suggested that a lack of background information and specific training in the TJA led to reliability issues with the dichotomous scoring, which they believed necessitated changing the TJA protocol. However, the authors did not provide a detailed explanation of the training of the raters, nor of their involvement with the creation of the modified TJA. This would have provided important context, as a significant learning effect in scoring was seen with the dichotomous TJA (Dudley et al., 2013), which may inflate the reliability reported in this study (Fort-Vanmeerhaeghe et al., 2017). Further, and perhaps more importantly, the clinical applicability of the new ordinal scoring method is limited because it is not clear what is required to train raters for reliable scoring, especially with a new, more complicated scoring system. Beyond a simple explanation that the raters “watched as many times as necessary and at whatever speeds they needed to score each test,” no other methodology on video scoring was reported (Fort-Vanmeerhaeghe et al., 2017). Several questions are not answered in the study, yet they will significantly impact replication of the findings and use in a clinical setting. Were the raters instructed on calibrating volume? Were the raters instructed in the criteria for scoring?
Did the raters work together to calibrate their scoring prior to the study? If so, for how long and by what methods? To illustrate, for “pause between jumps,” the following criteria are reported: (0) reactive and reflex jumps, (1) small pause between jumps, and (2) large pause between jumps. The authors do not explain the difference between small and large. If the frame rate is not controlled while watching the video frame by frame, a rater may incorrectly score a severe pause between jumps when there is no flaw present. To limit this error, a possible solution is for the rater to watch the video at normal speed and mark a flaw present only if a pause is noticeable. The difference between a large and a small pause could then be determined by timing the pause frame by frame: pauses longer than half a second could constitute a large flaw (2), while shorter pauses would be a small flaw (1). The method of scoring for each flaw needs to be clear and to outline common errors in methodology, especially with new scoring criteria. The flaw “excessive contact noise at landing” appears to have two separate criteria in the modified TJA compared with the dichotomously scored TJA. Fort-Vanmeerhaeghe et al. (2017) provided the following criteria: (0) subtle noise at landing (landing on the balls of the feet), (1) audible noise at landing (heels almost touch the ground at landing), and (2) loud and pronounced noise at landing (contact of the entire foot and heel on the ground between jumps). The text in parentheses was not included in other research on the TJA (Myer et al., 2011). No explanation for this addition is present in the study, and the ambiguity of these criteria will limit reproducibility. If an athlete lands softly and the entire foot and heel touch the ground between jumps, this may be related to the pause-between-jumps flaw. Would this still be scored as excessive contact noise, and scored as a severe flaw, even when the noise is not excessive?
From the study, it is unclear what constitutes excessive contact noise, whether noise was considered in the scoring, whether the raters calibrated volume to a certain level during video analysis, and whether foot-landing strategy should affect scoring; this clarity is needed for reliability, clinical utility, and validity. In closing, our team has found the TJA to be clinically valuable in practice. We suggest more detail on training methodology to achieve adequate reliability among raters with the modified TJA (Dudley et al., 2013), and an improved method for quantifying excessive contact noise.
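The half-second timing rule proposed above for “pause between jumps” could be operationalized directly from video frame counts. The sketch below is illustrative only; the 0.1 s “perceptible pause” cutoff is our assumption, not part of any published protocol.

```python
def pause_flaw(pause_frames, fps):
    """Classify the 'pause between jumps' flaw from video timing.
    pause_frames: number of frames between ground contact and takeoff
    fps: video frame rate (must be known to convert frames to seconds)
    Returns 0 (no flaw), 1 (small pause), or 2 (large pause)."""
    pause_s = pause_frames / fps
    if pause_s < 0.1:   # assumed threshold for a perceptible pause
        return 0
    return 2 if pause_s > 0.5 else 1
```

Tying the flaw to an explicit frame-rate conversion removes the ambiguity of judging pauses at uncontrolled playback speeds, which is the error mode the letter describes.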

    Bone mineral density in Masters Olympic weightlifters

    Presentation given at the American College of Sports Medicine Annual Meeting (thematic), Minneapolis, MN.

    Tuck Jump Assessment: An Exploratory Factor Analysis in a College Age Population

    Due to the high rate of noncontact lower extremity injuries that occur in the collegiate setting, medical personnel are implementing screening mechanisms to identify athletes who may be at risk for certain injuries before the start of a sports season. The tuck jump assessment (TJA) was created as a “clinician friendly” tool to identify lower extremity landing technique flaws during a plyometric activity. Ten technique flaws are each assessed as present or absent during the TJA, and the flaws are then summed for an overall score. Through expert consensus, these 10 technique flaws have been grouped into 5 modifiable risk factors: ligament dominance, quadriceps dominance, leg dominance or residual injury deficits, trunk dominance (“core” dysfunction), and technique perfection. Research has not investigated the psychometric properties of the TJA technique flaws or the modifiable risk factors. The present study is a psychometric analysis of the TJA technique flaws, examining their internal structure with an exploratory factor analysis (EFA) using data from collegiate athletes (n = 90) and a general college cohort (n = 99). The EFA suggested a 3-factor model accounting for 46% of the variance; the 3 factors were defined as fatigue, distal landing pattern, and proximal control. The results differ from the 5 modifiable risk categories previously suggested and may call into question the use of a single TJA score, a unidimensional construct, for injury screening.
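The dichotomous scoring structure described above (10 flaws summed into an overall score and grouped into risk factors) can be sketched as a simple data structure. The flaw names and the partial grouping below are illustrative placeholders, not the published mapping.

```python
# Illustrative only: real TJA groupings come from the published expert
# consensus, and a full mapping would cover all 10 flaws and 5 factors.
RISK_FACTORS = {
    "ligament dominance": ["knee_valgus_at_landing"],
    "technique perfection": ["excessive_contact_noise", "pause_between_jumps"],
}

def tja_scores(flaws):
    """flaws: dict mapping flaw name -> 0/1 (dichotomous scoring).
    Returns the overall score and per-risk-factor subscores."""
    total = sum(flaws.values())
    subscores = {rf: sum(flaws.get(f, 0) for f in names)
                 for rf, names in RISK_FACTORS.items()}
    return total, subscores
```

A single summed total treats the TJA as unidimensional; reporting subscores per factor is one way screening could reflect a multi-factor structure like the one the EFA suggests.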