
    The Effect Of Mild Exercise Induced Dehydration On Sport Concussion Assessment Tool 3 (SCAT3) Scores: A within-subjects design.

    # Background
    Sports-related concussions are prevalent in the United States. Various diagnostic tools are used to monitor deviations from baseline in memory, reaction time, symptoms, and balance. Evidence indicates that dehydration may also alter the results of these diagnostic tests.
    # Purpose
    The purpose was to determine the effect of exercise-induced dehydration on performance on concussion examination tools.
    # Study Design
    Repeated measures design.
    # Methods
    Seventeen recreationally competitive, non-concussed participants (age: 23.1 ± 3.1 years, height: 168.93 ± 10.71 cm, mass: 66.16 ± 6.91 kg) performed three thermoneutral, counterbalanced sessions (rested control, euhydrated, dehydrated). Participants were either restricted (0.0 L/hr) or provided fluids (1.0 L/hr) while treadmill running for 60 min at an intensity equal to 65-70% of age-predicted maximum heart rate (APMHR). The Sport Concussion Assessment Tool 3 (SCAT3) was used to assess symptoms, memory, balance, and coordination.
    # Results
    Statistically significant differences were seen among sessions for symptom severity and symptom total. The rested control session had significantly lower values than the dehydrated session, and the symptom total in the rested control was also significantly lower than in the euhydrated condition. No statistically significant differences were seen for the Balance Error Scoring System (BESS) or memory scores.
    # Conclusions
    Mild exercise-induced dehydration results in increased self-reported symptoms associated with concussions. Clinicians tasked with monitoring and accurately diagnosing head trauma should take factors such as hydration status into account when assessing patients for concussion with the SCAT3. Clinicians should proceed with caution and not assume concussion as the primary cause of symptom change.
    # Level of evidence
    Level

    Reliability of the Tuck Jump Assessment Using Standardized Rater Training

    # BACKGROUND
    The Tuck Jump Assessment (TJA) is a test used to assess technique flaws during a 10-second, high-intensity jumping bout. Although the TJA has broad clinical applicability, there is no standardized training to maximize the TJA measurement properties.
    # HYPOTHESIS/PURPOSE
    To determine the reliability of the TJA among varied healthcare professionals following an online standardized training program. The authors hypothesized that the total score would have moderate to excellent levels of intra- and interrater reliability.
    # STUDY DESIGN
    Cross-sectional reliability.
    # METHODS
    A website was created by a physical therapist (PT) with videos, written descriptors of the 10 TJA technique flaws, and examples of what constituted no flaw, minor flaw, or major flaw (0, 1, 2) using published standards. The website was then validated (both face and content) by four experts. Three raters of different professions, a PT, an athletic trainer (AT), and a Certified Strength and Conditioning Coach (SCCC), were selected due to their expertise with injury and movement. Raters used the online standardized training, scored 41 videos of participants' TJAs, then scored them again two weeks later. Reliability estimates were determined using intraclass correlation coefficients (ICCs) for total scores of 10 technique flaws and Krippendorff α (K α) for the individual technique flaws (ordinal).
    # RESULTS
    Eleven of 50 individual technique flaws were above the acceptable level (K α = 0.80). The total score had moderate interrater reliability in both sessions (Session 1: ICC~2,2~ = 0.64; 95% CI (Confidence Interval) (0.34-0.81); Standard Error of Measurement (SEM) = 0.66 technique flaws and Session 2: ICC~2,2~ = 0.56; 95% CI (0.04-0.79); SEM = 1.30). Rater 1 had good reliability (ICC~2,2~ = 0.76; 95% CI (0.54-0.87); SEM = 0.26), rater 2 had moderate reliability (ICC~2,2~ = 0.62; 95% CI (0.24-0.80); SEM = 0.41), and rater 3 had excellent reliability (ICC~2,2~ = 0.98; 95% CI (0.97-0.99); SEM = 0.01).
# CONCLUSION
All raters had at least good reliability estimates for the total score. The same level of consistency was not seen when evaluating each technique flaw. These findings suggest that the total score may not be as accurate as the individual technique flaws and should be used with caution.
# LEVEL OF EVIDENCE: 3
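The two-way random-effects, average-measures ICC reported above (ICC~2,2~) can be computed directly from a subjects-by-raters score matrix via a two-way ANOVA decomposition. A minimal sketch, with hypothetical ratings rather than the study's data:

```python
import numpy as np

def icc_2k(scores):
    """ICC(2,k): two-way random-effects, absolute-agreement, average-measures ICC.

    scores: an (n subjects x k raters) array-like of ratings.
    """
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Mean squares from a two-way ANOVA without replication
    msr = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((X.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((X - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical example: two raters who agree up to a constant one-point offset
ratings = [[1, 2], [2, 3], [3, 4]]
print(icc_2k(ratings))  # systematic rater bias lowers absolute agreement
```

Because ICC(2,k) penalizes systematic rater differences, the constant offset in this toy example drags the coefficient below 1.0 even though the rank ordering agrees perfectly.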

    Dorsiflexion Range of Motion in Copers and Those with Chronic Ankle Instability

    International Journal of Exercise Science 12(1): 614-622, 2019. The Cumberland Ankle Instability Tool (CAIT) is used to classify individuals as ankle sprain copers, or as suffering from chronic ankle instability (CAI). However, literature examining factors contributing to these classifications on the CAIT is lacking, as the CAIT itself does not offer explanations for specific anthropometric measures that influence a patient’s classification. Therefore, the purpose was to determine whether dorsiflexion active range of motion (AROM) differed between copers, those with CAI, and a healthy control group. Twenty-two individuals with recent ankle sprains were recruited by a convenience sampling method and placed in the coper (5 females, 5 males, age: 21.9 ± 1.5 years, height: 173.74 ± 7.69 cm, weight: 69.75 ± 10.50 kg) or CAI (10 females, 2 males, age: 21.8 ± 2.3 years, height: 173.99 ± 10.86 cm, weight: 68.14 ± 10.63 kg) groups. The remaining 10 individuals (4 females, 6 males, age: 23.2 ± 1.5 years, height: 178.05 ± 12.92 cm, weight: 75.65 ± 8.00 kg) who participated in the study served as the control group, as they had never sustained a previous ankle sprain. Dorsiflexion AROM measurements were evaluated using an inclinometer during a weight-bearing lunge. Three measurements were taken for each participant and used for statistical analysis. There was no statistically significant difference in average dorsiflexion AROM between the coper, control, and CAI groups (F(2,29) = 2.063, p = 0.15, ω = 0.06, 1 − β = 0.40). Further research is needed to determine whether limited dorsiflexion AROM is indeed a contributing factor to an individual’s classification as a coper or as suffering from CAI, as defined by the CAIT.

    Statistical Primer for Athletic Trainers: Using Confidence Intervals and Effect Sizes to Evaluate Clinical Meaningfulness

    Objective: To describe confidence intervals (CIs) and effect sizes and provide practical examples to assist clinicians in assessing clinical meaningfulness. Background: As discussed in our first article in 2015, which addressed the difference between statistical significance and clinical meaningfulness, evaluating the clinical meaningfulness of a research study remains a challenge to many readers. In this paper, we build on that topic by examining CIs and effect sizes. Description: A CI is a range estimated from sample data (the data we collect) that is likely to include the population parameter (value) of interest. Conceptually, this constitutes the lower and upper limits of the sample data, which would likely include, for example, the mean from the unknown population. An effect size is the magnitude of the difference between 2 means. When a statistically significant difference exists between 2 means, effect size is used to describe how large or small that difference actually is. Confidence intervals and effect sizes enhance the practical interpretation of research results. Recommendations: Along with statistical significance, the CI and effect size can assist practitioners in better understanding the clinical meaningfulness of a research study.
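The two quantities described above can be computed in a few lines. A minimal sketch with hypothetical data (the critical t value is supplied from a t table, since the standard library has no t distribution):

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                          / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

def mean_diff_ci(a, b, t_crit):
    """CI for the difference in means. t_crit is the two-tailed critical t
    value for df = len(a) + len(b) - 2, e.g. 2.776 for df = 4 at 95%."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                          / (na + nb - 2))
    diff = mean(a) - mean(b)
    half_width = t_crit * pooled_sd * math.sqrt(1 / na + 1 / nb)
    return diff - half_width, diff + half_width

group1, group2 = [10, 12, 14], [7, 9, 11]   # hypothetical outcome scores
print(cohens_d(group1, group2))              # large effect size
print(mean_diff_ci(group1, group2, 2.776))   # wide CI that crosses zero
```

This toy example illustrates the paper's point: a sample this small can show a large effect size while its CI still crosses zero, so neither number should be read in isolation.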

    Statistical Primer for Athletic Trainers: Understanding the Role of Statistical Power in Comparative Athletic Training Research

    Objective: To describe the concept of statistical power as related to comparative interventions and how various factors, including sample size, affect statistical power. Background: Having a sufficiently sized sample for a study is necessary for an investigation to demonstrate that an effective treatment is statistically superior. Many researchers fail to conduct and report a priori sample-size estimates, which then makes it difficult to interpret nonsignificant results and causes the clinician to question the planning of the research design. Description: Statistical power is the probability of statistically detecting a treatment effect when one truly exists. The α level, the magnitude of the difference between groups, the variability of the data, and the sample size all affect statistical power. Recommendations: Authors should conduct and provide the results of a priori sample-size estimations in the literature. This will assist clinicians in determining whether the lack of a statistically significant treatment effect is due to an underpowered study or to a treatment's actually having no effect.
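The relationship among α, effect size, and sample size can be made concrete with a normal-approximation power calculation. A minimal sketch (a simplification of the t-test power computation that dedicated software performs, not the authors' method):

```python
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means.

    d is Cohen's d (standardized effect size); uses the normal
    approximation, so it slightly understates the t-test sample size.
    """
    z_crit = _Z.inv_cdf(1 - alpha / 2)
    noncentrality = abs(d) * (n_per_group / 2) ** 0.5
    return _Z.cdf(noncentrality - z_crit)

def n_for_power(d, target=0.80, alpha=0.05):
    """Smallest per-group n reaching the target power (a priori estimate)."""
    n = 2
    while two_sample_power(d, n, alpha) < target:
        n += 1
    return n

# Classic benchmark: a medium effect (d = 0.5) at alpha = 0.05 needs
# roughly 64 participants per group for 80% power.
print(two_sample_power(0.5, 64))
print(n_for_power(0.5))
```

Halving the expected effect size roughly quadruples the required n, which is why a priori estimates matter: an "underpowered" nonsignificant result may reflect the sample, not the treatment.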

    Changes in Self-Reported Concussion History after Administration of a Novel Concussion History Questionnaire in Collegiate Recreational Student-Athletes

    Research has shown that exposure to a concussion definition (CD) increases self-reported concussion history (SRCH) immediately; however, no research has examined the effects of exposure to a CD on SRCH over time. Collegiate recreational student-athletes (RSAs) have limited access to monitoring and supervision by medical staff. As such, recognition of concussion symptoms and the need for medical management oftentimes falls upon the RSA. The purpose of this study was to assess the effect of a novel questionnaire on the SRCH of RSAs. A two-part questionnaire was sent to RSAs participating in sports with a greater-than-average risk of concussion at a university in Arizona. Data from 171 RSAs were analyzed to assess the change in RSAs’ suspected concussion estimates pre- and post-exposure to a CD and concussion symptom worksheet, as well as over the short term (2.5 months). Approximately one-third of RSAs reported an increase in suspected concussion estimates immediately following exposure to the questionnaire, but the change was not maintained over the short term. The results suggest that a single exposure to a CD is ineffective at increasing short-term SRCH estimates.

    Adverse Childhood Experiences in relation to drug and alcohol use in the 30 days prior to incarceration in a county jail.

    Purpose: To characterize the relationship between adverse childhood experiences (ACEs) and substance use among people incarcerated in a county jail. Design/methodology/approach: A questionnaire was administered to 199 individuals incarcerated in a Southwest county jail as part of a social-epidemiological exploration of converging co-morbidities in incarcerated populations. Among 96 participants with complete ACEs data, the authors used logistic regression to determine associations between individual ACEs items, and a summative score, and methamphetamine, heroin, other opiate, and cocaine use and binge drinking in the 30 days prior to incarceration. Findings: People who self-reported use of methamphetamine, heroin, other opiates, or cocaine in the 30 days prior to incarceration had higher average ACEs scores. Methamphetamine use was significantly associated with having lived with anyone who served time in a correctional facility and with someone having tried to make them touch sexually. Opiate use was significantly associated with having lived with anyone who was depressed, mentally ill, or suicidal; having lived with anyone who used illegal street drugs or misused prescription medications; and having been touched sexually by an adult. Binge drinking was significantly associated with having lived with someone who was a problem drinker or alcoholic. Originality: Significant associations between methamphetamine use and opiate use and specific adverse childhood experiences suggest important entry points for improving jail and community programming. Social Implications: Our findings point to a need for research to understand differences between methamphetamine use and opiate use in relation to particular adverse experiences during childhood, and a need for tailored intervention for people incarcerated in jail.

    Modified Tuck Jump Assessment: Reliability and Training of Raters

    We are writing with regard to “Intra- and inter-rater reliability of the modified tuck jump assessment,” by Fort-Vanmeerhaeghe et al. (2017), published in the Journal of Sports Science & Medicine. The authors reported on the reliability of the modified Tuck Jump Assessment (TJA). The purpose of the article was twofold: to introduce a new scoring methodology and to report on the interrater and intrarater reliability. The authors found the modified TJA to have excellent interrater reliability (ICC = 0.94, 95% CI = 0.88-0.97) and intrarater reliability (rater 1 ICC = 0.94, 95% CI = 0.88-0.9; rater 2 ICC = 0.96, 95% CI = 0.92-0.98) with experienced raters (n = 2) in a sample of 24 elite volleyball athletes. Overall, we found the study to be well conducted and valuable to the field of injury screening; however, the study did not adequately explain how the raters were trained in the modified TJA to improve consistency of scoring, nor did it explain the modifications to the individual flaw “excessive contact noise at landing.” This information is necessary to improve the clinical utility of the TJA and direct future reliability studies. The TJA has been changed at least three times in the literature: from the initial introduction (Myer et al., 2006) to the most referenced and detailed protocol (Myer et al., 2011) to the publication under discussion (Fort-Vanmeerhaeghe et al., 2017). The initial test protocol was based upon clinical expertise and has evolved over time as new research emerged and problems arose with the original TJA. Initially, the TJA was scored on a visual analog scale (Myer et al., 2006), then changed to a dichotomous scale (0 for no flaw or 1 for flaw present) (Myer et al., 2011), and most recently modified to an ordinal scale (Fort-Vanmeerhaeghe et al., 2017).
A significant disparity in the reported interrater and intrarater reliability arose with the dichotomously scored TJA between those involved in its development (Herrington et al., 2013) and researchers who were not (Dudley et al., 2013). Dudley et al. (2013) reported a lack of clarity in the protocol and rater training in the dichotomous TJA description (Myer et al., 2011), and these limitations may have contributed to the poor to moderate reliability found in their study of varied raters with differing educational backgrounds. Possibly in reference to the issues raised by Dudley et al. (2013), Fort-Vanmeerhaeghe et al. (2017) suggested that a lack of background information and of specific training in the TJA led to reliability issues with the dichotomous scoring, which they believed necessitated changing the TJA protocol. However, the authors did not provide a detailed explanation of how the raters were trained, nor of their involvement in the creation of the modified TJA. This information is important because a significant learning effect in scoring was seen with the dichotomous TJA (Dudley et al., 2013), which may have inflated the reliability reported by Fort-Vanmeerhaeghe et al. (2017). Further, and perhaps more importantly, the clinical applicability of the new ordinal scoring method is limited because it is not clear what is required to train raters for reliable scoring, especially with a new, more complicated scoring system. Beyond a simple explanation that the raters “watched as many times as necessary and at whatever speeds they needed to score each test,” no other methodology on video scoring was reported (Fort-Vanmeerhaeghe et al., 2017). Several questions are not answered in the study but will significantly affect replication of the findings and use in a clinical setting. Were the raters instructed on calibrating volume? Were the raters instructed in the criteria for scoring?
Did the raters work together to calibrate their scoring prior to the study? If so, for how long and by what methods? To illustrate, for “pause between jumps,” the following criteria are reported: (0) reactive and reflex jumps, (1) small pause between jumps, and (2) large pause between jumps. The authors do not explain the difference between small and large. If the frame rate is not controlled while watching the video frame by frame, a rater may incorrectly score a large pause between jumps when no flaw is present. To limit this error, a possible solution is for the rater to watch the video at normal speed and mark a flaw present only if a pause is noticeable. The duration of the pause could then be measured by stepping through the video frame by frame: pauses longer than half a second could constitute a large flaw (2), while shorter noticeable pauses would be a small flaw (1). The method of scoring for each flaw needs to be clear and to outline common errors in methodology, especially with new scoring criteria. The flaw “excessive contact noise at landing” appears to have different criteria in the modified TJA than in the dichotomously scored TJA. Fort-Vanmeerhaeghe et al. (2017) provided the following criteria: (0) subtle noise at landing (landing on the balls of the feet), (1) audible noise at landing (heels almost touch the ground at landing), (2) loud and pronounced noise at landing (contact of the entire foot and heel on the ground between jumps). The text in parentheses was not included in other research on the TJA (Myer et al., 2011). No explanation for this addition is present in the study, and the ambiguity of these criteria will limit reproducibility. If an athlete lands softly and the entire foot and heel touch the ground between jumps, this may be related to the pause between jumps flaw. Would this still be scored as excessive contact noise, and as a severe flaw, even when the noise is not excessive?
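The half-second rule suggested above for distinguishing large from small pauses can be made explicit. A minimal sketch; the 30 fps frame rate and the 0.1 s "noticeable" cutoff are illustrative assumptions, not published criteria:

```python
def score_pause_flaw(pause_frames, fps=30.0,
                     large_pause_s=0.5, noticeable_pause_s=0.1):
    """Map a between-jump pause, counted in video frames, to an ordinal flaw score.

    The 0.5 s cutoff for a large flaw follows the suggestion above; the
    frame rate and the 0.1 s noticeable-pause cutoff are assumptions.
    """
    pause_s = pause_frames / fps
    if pause_s >= large_pause_s:
        return 2  # large pause between jumps
    if pause_s >= noticeable_pause_s:
        return 1  # small but noticeable pause
    return 0      # reactive, reflex-like jumps

# At 30 fps: 20 frames = 0.67 s (large), 6 frames = 0.2 s (small), 1 frame = none
print(score_pause_flaw(20), score_pause_flaw(6), score_pause_flaw(1))
```

Defining the criterion in seconds rather than frames is the point: the same frame count would be scored differently at 30 fps than at 60 fps unless the frame rate is controlled, which is exactly the replication problem raised above.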
From the study, it is unclear what constitutes excessive contact noise, whether noise was considered in the scoring, whether the raters calibrated volume to a certain level during video analysis, and whether foot-landing strategy should affect scoring. This clarity is needed for reliability, clinical utility, and validity. In closing, our team has found the TJA to be clinically valuable in practice. We suggest more detail on training methodology to achieve adequate reliability among raters with the modified TJA (Dudley et al., 2013), and an improved method for quantifying excessive contact noise.

    Bone mineral density in Masters Olympic weightlifters

    No full text
    Presentation given at the American College of Sports Medicine Annual Meeting (thematic), Minneapolis, MN.