The Relationship between Sport-Related Concussion and Sensation-Seeking
Sensation-seeking, or the need for novel and exciting experiences, is thought to play a role in sport-related concussion (SRC), yet much remains unknown regarding these relationships and, more importantly, how sensation-seeking influences SRC risk. The current study assessed sensation-seeking, sport contact level, and SRC history and incidence in a large sample of NCAA collegiate athletes. Data included a full study sample of 22,374 baseline evaluations and a sub-sample of 2037 incident SRCs. Independent-samples t-tests, analyses of covariance, and hierarchical logistic regression models were used to address the study hypotheses. Results showed that (1) among participants without SRC, sensation-seeking scores were higher in athletes playing contact sports than in those playing limited- or non-contact sports (p < 0.001, R2 = 0.007, η2p = 0.003); (2) in the full study sample, a one-point increase in sensation-seeking score was associated with a 21% greater risk of prior SRC (OR = 1.212; 95% CI: 1.154–1.272), and in the incident SRC sub-sample, with a 28% greater risk of prior SRC (OR = 1.278; 95% CI: 1.104–1.480); (3) a one-point increase in sensation-seeking score was associated with a 12% greater risk of incident SRC in the full study sample; and (4) sensation-seeking did not vary as a function of incident SRC (p = 0.281, η2p = 0.000). Our findings demonstrate the potential usefulness of considering sensation-seeking in SRC management.
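The odds ratios reported above can be read directly as per-point percentage changes in odds: in logistic regression, a coefficient b maps to OR = exp(b), and each one-point score increase multiplies the odds by that factor. A minimal sketch using the ORs from the abstract (the helper function is illustrative, not part of the study's code):

```python
import math

def pct_increase(odds_ratio):
    """Percent increase in odds per one-point increase in the predictor."""
    return (odds_ratio - 1.0) * 100.0

# Odds ratios reported in the abstract
print(pct_increase(1.212))  # full sample, prior SRC: ~21%
print(pct_increase(1.278))  # incident sub-sample, prior SRC: ~28%

# Equivalently, an OR is recovered from a (hypothetical) model coefficient b
b = math.log(1.212)
print(math.exp(b))  # back to 1.212
```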
Linking Symptom Inventories using Semantic Textual Similarity
An extensive library of symptom inventories has been developed over time to
measure clinical symptoms, but this variety has led to several long-standing
issues. Most notably, results drawn from different settings and studies are not
comparable, which limits reproducibility. Here, we present an artificial
intelligence (AI) approach using semantic textual similarity (STS) to link
symptoms and scores across previously incongruous symptom inventories. We
tested the ability of four pre-trained STS models to screen thousands of
symptom description pairs for related content - a challenging task typically
requiring expert panels. Models were tasked to predict symptom severity across
four different inventories for 6,607 participants drawn from 16 international
data sources. The STS approach achieved 74.8% accuracy across five tasks,
outperforming other models tested. This work suggests that incorporating
contextual, semantic information can assist expert decision-making processes,
yielding gains for both general and disease-specific clinical assessment.
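The screening step described above reduces to comparing embedding vectors of symptom descriptions. A toy sketch of the idea, with hand-made vectors standing in for the output of a pre-trained STS model (embeddings and the cutoff are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings for three symptom descriptions; a real STS model
# (e.g., a Sentence-BERT variant) would produce high-dimensional vectors.
emb = {
    "difficulty falling asleep": [0.9, 0.1, 0.2],
    "trouble sleeping":          [0.85, 0.15, 0.25],
    "sensitivity to light":      [0.1, 0.9, 0.3],
}

THRESHOLD = 0.95  # assumed screening cutoff for "related content"
pairs = [("difficulty falling asleep", "trouble sleeping"),
         ("difficulty falling asleep", "sensitivity to light")]
for a, b in pairs:
    sim = cosine(emb[a], emb[b])
    label = "RELATED" if sim >= THRESHOLD else "unrelated"
    print(f"{a!r} vs {b!r}: {sim:.3f} {label}")
```

Semantically close descriptions score near 1.0 and are flagged for linking; unrelated ones fall well below the cutoff.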
Sensitivity and Specificity of Computer-Based Neurocognitive Tests in Sport-Related Concussion: Findings from the NCAA-DoD CARE Consortium
To optimally care for concussed individuals, a multi-dimensional approach is critical and a key component of this assessment in the athletic environment is computer-based neurocognitive testing. However, there continues to be concerns about the reliability and validity of these testing tools. The purpose of this study was to determine the sensitivity and specificity of three common computer-based neurocognitive tests (Immediate Post-Concussion Assessment and Cognitive Testing [ImPACT], CNS Vital Signs, and CogState Computerized Assessment Tool [CCAT]), to provide guidance on their clinical utility.
This study analyzed assessments from a cohort of collegiate athletes and non-varsity cadets from the NCAA-DoD CARE Consortium. The data were collected from 2014 to 2018. Study participants were divided into two testing groups: concussed, n = 1414 (baseline/24-48 h), and healthy, n = 8305 (baseline/baseline). For each test type, change scores were calculated for the components of interest. Then the Normative Change method, which used normative data published in a similar cohort, and the Reliable Change Index (RCI) method were used to determine whether the change scores were significant.
Using the Normative Change method, ImPACT performed best with an 87.5% confidence interval and a number-of-components-failed (NCF) threshold of 1 (sensitivity = 0.583, specificity = 0.625, F1 = 0.308). CNS Vital Signs performed best with a 90% confidence interval and an NCF of 1 (sensitivity = 0.587, specificity = 0.532, F1 = 0.314). CCAT performed best with a 75% confidence interval and an NCF of 2 (sensitivity = 0.513, specificity = 0.715, F1 = 0.290). Using the RCI method, ImPACT performed best with an 87.5% confidence interval and an NCF of 1 (sensitivity = 0.626, specificity = 0.559, F1 = 0.297).
When considering all three computer-based neurocognitive tests, the overall low sensitivity and specificity provide additional evidence for the use of a multi-dimensional concussion assessment, including symptom evaluation, postural control assessment, neuropsychological status, and other functional assessments.
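The evaluation metrics quoted above follow the standard confusion-matrix definitions, with concussed as the positive class. A minimal sketch (the counts are invented for illustration, not the study's data):

```python
def metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate among concussed
    specificity = tn / (tn + fp)   # true-negative rate among healthy
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Hypothetical counts for illustration
sens, spec, f1 = metrics(tp=58, fn=42, tn=63, fp=37)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} F1={f1:.3f}")
```

Note that F1 also depends on precision, so with a large healthy group (as in this cohort) F1 can be low even when sensitivity and specificity are moderate.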
Sensitivity and Specificity of the ImPACT Neurocognitive Test in Collegiate Athletes and US Military Service Academy Cadets with ADHD and/or LD: Findings from the NCAA-DoD CARE Consortium
Computer-based neurocognitive tests are widely used in sport-related concussion management, but the performance of these tests is not well understood in the participant population with attention-deficit/hyperactivity disorder (ADHD) and/or learning disorder (LD). This research estimates the sensitivity and specificity performance of the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) computer-based neurocognitive test in identifying concussion in this population.
Study participants consisted of collegiate university athletes and military service academy cadets from the National Collegiate Athletic Association-Department of Defense CARE Consortium who completed the ImPACT test between 2014 and 2021. Participants who self-identified as belonging to one of the subgroups of interest (ADHD with or without LD [ADHD:LD+/-], LD with or without ADHD [LD:ADHD+/-], ADHD and/or LD [ADHD a/o LD]) and completed a baseline (1874 ADHD:LD+/-, 779 LD:ADHD+/-, 2338 ADHD a/o LD) or 24-48 h post-concussion (175 ADHD:LD+/-, 77 LD:ADHD+/-, 216 ADHD a/o LD) ImPACT assessment were included. Sensitivity and specificity were calculated using a normative data method and three machine learning classification methods: logistic regression, classification and regression tree, and random forest.
Using the four methods, participants with ADHD:LD+/- had sensitivities that ranged from 0.474 to 0.697, and specificities that ranged from 0.538 to 0.686. Participants with LD:ADHD+/- had sensitivities that ranged from 0.455 to 0.688, and specificities that ranged from 0.456 to 0.588. For participants with ADHD a/o LD, sensitivities ranged from 0.542 to 0.755, and specificities ranged from 0.451 to 0.724.
For all subgroups and analytical methods, the results show sensitivity and specificity values below the levels typically accepted as indicative of clinical utility. These findings suggest that ImPACT alone may be insufficient to inform concussion diagnosis and encourage the use of a multi-dimensional concussion assessment.
Bridging big data: procedures for combining non-equivalent cognitive measures from the ENIGMA Consortium
Investigators in the cognitive neurosciences have turned to Big Data to address persistent replication and reliability issues by increasing sample sizes, statistical power, and the representativeness of data. While there is tremendous potential to advance science through open data sharing, these efforts unveil a host of new questions about how to integrate data arising from distinct sources and instruments. We focus on the most frequently assessed area of cognition - memory testing - and demonstrate a process for reliable data harmonization across three common measures. We aggregated raw data from 53 studies from around the world that measured at least one of three distinct verbal learning tasks, totaling N = 10,505 healthy and brain-injured individuals. A mega-analysis was conducted using empirical Bayes harmonization to isolate and remove site effects, followed by linear models that adjusted for common covariates. After corrections, a continuous item response theory (IRT) model estimated each individual subject's latent verbal learning ability while accounting for item difficulties. Harmonization significantly reduced inter-site variance by 37% while preserving covariate effects. The effects of age, sex, and education on scores were found to be highly consistent across memory tests. IRT methods for equating scores across auditory verbal learning tests (AVLTs) agreed with held-out data from dually administered tests, and these tools are made available for free online. This work demonstrates that large-scale data sharing and harmonization initiatives can offer opportunities to address reproducibility and integration challenges across the behavioral sciences.
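The site-effect removal step can be illustrated with a deliberately simplified sketch in the spirit of empirical Bayes (ComBat-style) harmonization: estimate each site's location and scale, shrink those estimates toward pooled values, then rescale scores. Real ComBat also models covariates and fits the shrinkage weights from the data; here the data, sites, and shrinkage weight are all invented for illustration.

```python
import statistics

# Hypothetical test scores from three acquisition sites
sites = {
    "siteA": [10.0, 12.0, 11.0, 13.0],
    "siteB": [20.0, 22.0, 21.0, 19.0],
    "siteC": [15.0, 14.0, 16.0, 15.0],
}

all_scores = [x for xs in sites.values() for x in xs]
grand_mean = statistics.mean(all_scores)
grand_sd = statistics.pstdev(all_scores)

SHRINK = 0.8  # assumed weight on the site-level estimate

harmonized = {}
for site, xs in sites.items():
    site_mean = statistics.mean(xs)
    site_sd = statistics.pstdev(xs) or grand_sd  # guard against zero variance
    # Empirical-Bayes-flavored shrinkage of site parameters toward the pool
    mu = SHRINK * site_mean + (1 - SHRINK) * grand_mean
    sd = SHRINK * site_sd + (1 - SHRINK) * grand_sd
    # Standardize by shrunken site parameters, re-express on the pooled scale
    harmonized[site] = [grand_mean + grand_sd * (x - mu) / sd for x in xs]

for site, xs in harmonized.items():
    print(site, [round(x, 2) for x in xs])
```

After this step, site means cluster much more tightly around the pooled mean, which is the "reduced inter-site variance" the abstract reports at full scale.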