Concussion in National Football League Athletes Is Not Associated With Increased Risk of Acute, Noncontact Lower Extremity Musculoskeletal Injury
Background: Impaired neuromuscular function after concussion has recently been linked to increased risk of lower extremity injuries in athletes.
Purpose: To determine if National Football League (NFL) athletes have an increased risk of sustaining an acute, noncontact lower extremity injury in the 90-day period after return to play (RTP) and whether on-field performance differs pre- and postconcussion.
Study Design: Cohort study; Level of evidence, 3.
Methods: NFL concussions in offensive players from the 2012-2013 to the 2016-2017 seasons were studied. Age, position, injury location/type, RTP, and athlete factors were noted. A 90-day RTP postconcussive period was analyzed for lower extremity injuries. Concussion and injury data were obtained from publicly available sources. Nonconcussed, offensive skill position NFL athletes from the same period were used as a control cohort, with the 2014 season as the reference season. Power rating performance metrics were calculated for ±1, ±2, and ±3 seasons pre- and postconcussion. Conditional logistic regression was used to determine associations between concussion and lower extremity injury as well as the relationship of concussions to on-field performance.
Results: In total, 116 concussions were recorded in 108 NFL athletes during the study period. There was no statistically significant difference in the incidence of an acute, noncontact lower extremity injury between concussed and control athletes (8.5% vs 12.8%).
Conclusion: Concussed NFL offensive athletes did not demonstrate increased odds of acute, noncontact lower extremity injury in the 90-day RTP period when compared with nonconcussed controls. Immediate on-field performance of skill position players did not appear to be affected by concussion.
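As a rough illustration of the conditional logistic regression step described in the Methods above, the sketch below fits a stratified (matched) logistic model to simulated data. The variable names, matching scheme, and simulated injury indicators are our assumptions for illustration only; they are not the authors' dataset, matching procedure, or analysis code, and the 8.5% and 12.8% injury rates are used merely to parameterise the simulation.

```python
# Hypothetical sketch of a conditional logistic regression for a matched cohort,
# loosely mirroring the analysis described above. All names and data are simulated.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(42)

rows = []
for stratum in range(108):              # one hypothetical matched set per concussed athlete
    for concussed in (1, 0):            # 1 = concussed athlete, 0 = matched nonconcussed control
        p_injury = 0.085 if concussed else 0.128   # injury rates reported in the Results
        rows.append({
            "stratum": stratum,
            "concussed": concussed,
            "lower_extremity_injury": rng.binomial(1, p_injury),
        })
df = pd.DataFrame(rows)

# Conditional (stratified) logistic regression: outcome ~ exposure, conditioning on the matched set
model = ConditionalLogit(
    df["lower_extremity_injury"],       # binary outcome within the 90-day RTP window
    df[["concussed"]],                  # exposure of interest
    groups=df["stratum"],               # matched-set identifier
)
result = model.fit()
print(np.exp(result.params))            # odds ratio for concussion vs control
```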
Referral patterns to primary mental health services in Western Sydney (Australia): an analysis of routinely collected data (2005-2018)
Background: Regionally specific approaches to primary mental health service provision through Primary Health Networks (PHNs) have been a feature of recent national mental health reforms. No previous studies have investigated local patterns of primary mental health care (PMHC) services in Western Sydney. This study was designed to (i) understand the socio-demographic and economic profiles, (ii) examine inequalities in service access, and (iii) investigate service utilisation patterns among those referred to PMHC services in Western Sydney, Australia.
Methods: This study used routinely collected PMHC data (2005–2018) and population-level general practice and Medicare rebates data (2013–2018) related to mental health conditions for the population catchment of the Western Sydney PHN. Sex- and age-specific PMHC referrals were examined by socio-demographic, diagnostic, referral- and service-level factors, and age-specific referrals to PMHC services were investigated as a percentage of total mental health encounters.
Results: There were 27,897 referrals received for 20,507 clients, of which 79.19% resulted in follow-up services comprising 138,154 sessions. Overall, 60.09% of clients were female, and the median age was 31 years (interquartile range, 16–46 years). Anxiety and depression were the predominant mental health conditions, and 9.88% of referrals were for suicidal risk. Over two-thirds of referrals started treatment during the first month after referral, and 95.1% of the total sessions were delivered face to face. The younger age group (0–24 years) had more referrals as a percentage of total general practitioner visits and Medicare rebates, but demonstrated poorer attendance rates and fewer average sessions per referral than older adults.
Conclusion: Children and young adults were more likely to be referred to PMHC services than older adults, but were less likely to attend services. Further research is needed to identify strategies to address these differences in access to PMHC services to optimise the effectiveness of services.
Impact of Patellar Tendinopathy on Player Performance in the National Basketball Association
Background: The extent to which patellar tendinopathy affects National Basketball Association (NBA) athletes has not been thoroughly elucidated.
Purpose: To assess the impact patellar tendinopathy has on workload, player performance, and career longevity in NBA athletes.
Study Design: Cohort study; Level of evidence, 3.
Methods: NBA players diagnosed with patellar tendinopathy between the 2000-2001 and 2018-2019 seasons were identified through publicly available data. Characteristics, return to play (RTP), player statistics, and workload data were compiled. The season of diagnosis was set as the index year, and the statistical analysis compared post- versus preindex data acutely and in the long term, both within the injured cohort and with a matched healthy NBA control cohort.
Results: A total of 46 NBA athletes were included in the tendinopathy group; all 46 players returned to the NBA after their diagnosis. Compared with controls, the tendinopathy cohort had longer careers (10.50 ± 4.32 vs 7.18 ± 5.28 seasons; P < .001) and played more seasons after return from injury (4.26 ± 2.46 vs 2.58 ± 3.07 seasons; P = .001). Risk factors for patellar tendinopathy included increased workload before injury (games started, 45.83 ± 28.67 vs 25.01 ± 29.77; P < .001) and time played during the season (1951.21 ± 702.09 vs 1153.54 ± 851.05 minutes; P < .001) and during games (28.71 ± 6.81 vs 19.88 ± 9.36 minutes per game; P < .001). Players with increased productivity as measured by player efficiency rating (PER) were more likely to develop patellar tendinopathy compared with healthy controls (15.65 ± 4.30 vs 12.76 ± 5.27; P = .003). When comparing metrics from 1 year preinjury, there was a decrease in games started at 1 year postinjury (-12.42 ± 32.38 starts; P = .028) and total time played (-461.53 ± 751.42 minutes; P = .001); however, PER at 1 and 3 years after injury was unaffected compared with corresponding preinjury statistics.
Conclusion: NBA players with a higher PER and significantly more playing time were more likely to be diagnosed with patellar tendinopathy. Player performance was not affected by the diagnosis of patellar tendinopathy, and athletes were able to RTP without any impact on career longevity.
The Use of MotusBASEBALL For Pitch Monitoring and Injury Prevention
Introduction: MotusBASEBALL (MOTUS) has proven to be a reliable and accurate method for evaluating the multifactorial kinesiology involved with pitching. We sought to review the use of MOTUS in assessment of pitching parameters and identify its practicality as an injury prevention tool across the literature.
Methods: A systematic review of the literature was performed using key words such as MOTUS, baseball, pitcher, sensor, and arm sleeve, identifying 77 total articles. Inclusion criteria entailed original articles that used MOTUS and studied baseball pitchers at any level of sport.
Results: A total of 13 articles met the inclusion criteria, producing a sample of 493 male athletes with a mean age of 18.7 years. Uniformly across studies, elbow torque was a primary metric and was observed in relation to a wide range of variables, such as pitch type, height, weight, and arm length. Additionally, MOTUS was able to detect several other pitching metrics, such as arm speed, shoulder rotation, and arm slot, displaying a wide range of capabilities.
Conclusion: We suspect MOTUS technology could become a significant tool for observing pitching mechanics in real time, as well as an injury prevention tool to be used by players, coaches, and trainers across all levels of baseball.
A Cost-Effectiveness Analysis of the Various Treatment Options for Distal Radius Fractures
PURPOSE: To conduct a cost-effectiveness study of nonsurgical and surgical treatment options for distal radius fractures using distinct posttreatment outcome patterns.
METHODS: We created a decision tree to model the following treatment modalities for distal radius fractures: nonsurgical management, external fixation, percutaneous pinning, and plate fixation. Each node of the model was associated with specific costs in dollars, a utility adjustment (quality-adjusted life year [QALY]), and a percent likelihood. The nodes of the decision tree included uneventful healing, eventful healing with no further intervention, carpal tunnel syndrome, trigger finger, and tendon rupture, as well as the associated treatments for each event. The percent probabilities of each transition state, QALY values, and costs of intervention were gleaned from a systematic review. Rollback and incremental cost-effectiveness ratio analyses were conducted to identify optimal treatment strategies. A threshold of $100,000/QALY was used to distinguish the modalities in the incremental cost-effectiveness ratio analysis.
RESULTS: Both the rollback analysis and the incremental cost-effectiveness ratio analysis identified nonsurgical management as the dominant strategy when compared with the operative modalities. Nonsurgical management dominated external fixation and plate fixation, although it was comparable with percutaneous fixation, yielding a $2,242 lower cost and 0.017 fewer QALYs.
CONCLUSIONS: The cost-effectiveness of nonsurgical management is driven by its lower cost to the health care system. Plate and external fixation were shown to be both more expensive and less effective than the other proposed treatments. Percutaneous pinning demonstrated more favorable effectiveness in the literature than plate and external fixation and thus may be more cost-effective in certain circumstances. Future studies may find value in investigating further clinical aspects of distal radius fractures and their association with nonsurgical management versus plate fixation.
TYPE OF STUDY/LEVEL OF EVIDENCE: Economic/decision analysis II
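As a back-of-the-envelope illustration of the incremental cost-effectiveness comparison reported above, the snippet below recomputes the ICER of percutaneous pinning versus nonsurgical management from the $2,242 cost difference and 0.017-QALY effectiveness difference in the Results, against the $100,000/QALY threshold from the Methods. The framing is our reading of the abstract, not the authors' decision-analysis model.

```python
# Worked ICER comparison (our reading of the figures reported above, not the authors' model code).
WTP_THRESHOLD = 100_000        # willingness-to-pay threshold, dollars per QALY

delta_cost = 2_242             # percutaneous pinning costs $2,242 more than nonsurgical management
delta_qaly = 0.017             # ...and yields 0.017 additional QALYs

icer = delta_cost / delta_qaly # incremental cost per additional QALY gained
print(f"ICER = ${icer:,.0f} per QALY")              # about $131,882 per QALY
print("Below threshold:", icer <= WTP_THRESHOLD)    # False -> favors nonsurgical management
```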
Ferromagnetic dynamics detected via one- and two-magnon NV relaxometry
The nitrogen-vacancy (NV) center in diamond has proven to be a powerful tool for locally characterizing the magnetic response of microwave-excited ferromagnets. To date, this has been limited by the requirement that the ferromagnetic resonance (FMR) excitation frequency be less than the NV spin resonance frequency. Here we report NV relaxometry based on a two-magnon Raman-like process, enabling detection of FMR at frequencies higher than the NV frequency. For high microwave drive powers, we observe an unexpected field shift of the NV response relative to a simultaneous microwave absorption signal from a low-damping ferrite film. We show that the field-shifted NV response is due to a second-order Suhl instability. The instability creates a large population of non-equilibrium magnons which relax the NV spin even when the uniform-mode FMR frequency exceeds the NV spin resonance frequency, ruling out the possibility that the NV is relaxed by a single NV-resonant magnon. We argue that at high frequencies the NV response is due to a two-magnon relaxation process in which the difference frequency of two magnons matches the NV frequency, and at low frequencies we evaluate the lineshape of the one-magnon NV relaxometry response using spin-wave instability theory.
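For clarity, the one- and two-magnon resonance conditions invoked above can be written schematically as follows; the notation (magnon frequencies and the NV transition frequency) is ours, not the authors'.

```latex
% Schematic resonance conditions for NV relaxation by magnons (our notation):
%   one-magnon process: a single magnon at the NV transition frequency
%   two-magnon (Raman-like) process: the difference frequency of a magnon pair matches the NV frequency
\begin{align}
  \omega_{k} &= \omega_{\mathrm{NV}}
    && \text{(one-magnon relaxation)} \\
  \omega_{k_1} - \omega_{k_2} &= \omega_{\mathrm{NV}}
    && \text{(two-magnon, Raman-like relaxation)}
\end{align}
```

Because only the difference frequency of the magnon pair must match the NV transition, the second condition permits NV relaxation even when the FMR drive frequency lies above the NV frequency, consistent with the observation described above.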
Realist synthesis: illustrating the method for implementation research
Background: Realist synthesis is an increasingly popular approach to the review and synthesis of evidence, which focuses on understanding the mechanisms by which an intervention works (or not). There are few published examples of realist synthesis. This paper therefore fills a gap by describing, in detail, the process used for a realist review and synthesis to answer the question 'What interventions and strategies are effective in enabling evidence-informed healthcare?' The strengths and challenges of conducting realist review are also considered.
Methods: The realist approach involves identifying underlying causal mechanisms and exploring how they work under what conditions. The stages of this review included: defining the scope of the review (concept mining and framework formulation); searching for and scrutinising the evidence; extracting and synthesising the evidence; and developing the narrative, including hypotheses.
Results: Based on key terms and concepts related to various interventions to promote evidence-informed healthcare, we developed an outcome-focused theoretical framework. Questions were tailored for each of four theory/intervention areas within the theoretical framework and were used to guide development of a review and data extraction process. The search for literature within our first theory area, change agency, was executed and the screening procedure resulted in inclusion of 52 papers. Using the questions relevant to this theory area, data were extracted by one reviewer and validated by a second reviewer. Synthesis involved organisation of extracted data into evidence tables, theming and formulation of chains of inference, linking between the chains of inference, and hypothesis formulation. The narrative was developed around the hypotheses generated within the change agency theory area.
Conclusions: Realist synthesis lends itself to the review of complex interventions because it accounts for context as well as outcomes in the process of systematically and transparently synthesising relevant literature. While realist synthesis demands flexible thinking and the ability to deal with complexity, the rewards include the potential for more pragmatic conclusions than alternative approaches to systematic reviewing. A separate publication will report the findings of the review.
Longer-term efficiency and safety of increasing the frequency of whole blood donation (INTERVAL): extension study of a randomised trial of 20 757 blood donors
Background:
The INTERVAL trial showed that, over a 2-year period, inter-donation intervals for whole blood donation can be safely reduced to meet blood shortages. We extended the INTERVAL trial for a further 2 years to evaluate the longer-term risks and benefits of varying inter-donation intervals, and to compare routine versus more intensive reminders to help donors keep appointments.
Methods:
The INTERVAL trial was a parallel group, pragmatic, randomised trial that recruited blood donors aged 18 years or older from 25 static donor centres of NHS Blood and Transplant across England, UK. Here we report on the prespecified analyses after 4 years of follow-up. Participants were whole blood donors who agreed to continue trial participation on their originally allocated inter-donation intervals (men: 12, 10, and 8 weeks; women: 16, 14, and 12 weeks). They were further block-randomised (1:1) to routine versus more intensive reminders using computer-generated random sequences. The prespecified primary outcome was units of blood collected per year analysed in the intention-to-treat population. Secondary outcomes related to safety were quality of life, self-reported symptoms potentially related to donation, haemoglobin and ferritin concentrations, and deferrals because of low haemoglobin and other factors. This trial is registered with ISRCTN, number ISRCTN24760606, and has completed.
Findings:
Between Oct 19, 2014, and May 3, 2016, 20 757 of the 38 035 invited blood donors (10 843 [58%] men, 9914 [51%] women) participated in the extension study. 10 378 (50%) were randomly assigned to routine reminders and 10 379 (50%) were randomly assigned to more intensive reminders. Median follow-up was 1·1 years (IQR 0·7–1·3). Compared with routine reminders, more intensive reminders increased blood collection by a mean of 0·11 units per year (95% CI 0·04–0·17; p=0·0003) in men and 0·06 units per year (0·01–0·11; p=0·0094) in women. During the extension study, each week shorter inter-donation interval increased blood collection by a mean of 0·23 units per year (0·21–0·25) in men and 0·14 units per year (0·12–0·15) in women (both p<0·0001). More frequent donation resulted in more deferrals for low haemoglobin (odds ratio per week shorter inter-donation interval 1·19 [95% CI 1·15–1·22] in men and 1·10 [1·06–1·14] in women), and lower mean haemoglobin (difference per week shorter inter-donation interval −0·84 g/L [95% CI −0·99 to −0·70] in men and −0·45 g/L [–0·59 to −0·31] in women) and ferritin concentrations (percentage difference per week shorter inter-donation interval −6·5% [95% CI −7·6 to −5·5] in men and −5·3% [–6·5 to −4·2] in women; all p<0·0001). No differences were observed in quality of life, serious adverse events, or self-reported symptoms (p>0.0001 for tests of linear trend by inter-donation intervals) other than a higher reported frequency of doctor-diagnosed low iron concentrations and prescription of iron supplements in men (p<0·0001).
Interpretation:
During a period of up to 4 years, shorter inter-donation intervals and more intensive reminders resulted in more blood being collected without a detectable effect on donors' mental and physical wellbeing. However, donors had decreased haemoglobin concentrations and more self-reported symptoms compared with the initial 2 years of the trial. Our findings suggest that blood collection services could safely use shorter donation intervals and more intensive reminders to meet shortages, for donors who maintain adequate haemoglobin concentrations and iron stores.
Funding:
NHS Blood and Transplant, UK National Institute for Health Research, UK Medical Research Council, and British Heart Foundation