15 research outputs found

    Exploring the association between recent concussion, subconcussive impacts and depressive symptoms in male Australian Football players

    Objectives: To explore the association between depressive symptoms and recent head-related trauma (diagnosed concussion, subconcussive impacts) in semiprofessional male Australian Football (AF) players. Methods: Sixty-nine semiprofessional male players from a West Australian Football League (WAFL) club participated in the study (M age = 21.81, SD = 2.91 years). Depressive symptoms were measured using the Centre for Epidemiological Studies Depression Scale. Injuries and potential confounding variables (eg, pre-existing mental health condition; alcohol or drug hangovers; experiencing a stressful event) were self-reported anonymously using the WAFL Injury Report Survey. Both tools were administered every 2 weeks over the first 22 weeks of the WAFL season. Controlling for potential confounding variables and other injuries, a repeated measures generalised estimating equations model assessed the risk of clinically relevant depressive symptoms occurring when diagnosed concussion or subconcussive impacts were experienced. Results: A total of 10 concussions and 183 subconcussive impacts were reported. Players who experienced a concussion were almost nine times more likely to experience clinically relevant depressive symptoms (OR 8.88, 95% CI 2.65 to 29.77, p < 0.001). Although elevated, depressive symptoms following subconcussive impacts were not statistically significant (OR 1.13, 95% CI 0.67 to 1.92, p = 0.641). Conclusion: These findings indicate that semiprofessional AF athletes may be at risk of experiencing depressive symptoms after concussion. Severity (concussion vs subconcussive impacts) and dose (number of impacts) appear to have an important relationship with depressive symptom outcomes in this cohort and should be considered for further research and management of player welfare. © Author(s) (or their employer(s)) 2020. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.
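
    As a rough illustration of the analysis described above, the sketch below fits a binomial generalised estimating equations model with an exchangeable working correlation using statsmodels. The data frame and all column names (df, depressed, concussion, and so on) are assumptions for illustration, not the study's actual variables.

```python
# Hypothetical GEE sketch: long-format DataFrame `df` with one row per
# player per fortnight, a binary `depressed` outcome (CES-D above cutoff),
# binary exposures, and confounder columns. All names are illustrative.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

model = smf.gee(
    "depressed ~ concussion + subconcussive + other_injury"
    " + prior_mental_health + hangover + stressful_event",
    groups="player_id",                       # repeated measures per player
    data=df,
    family=sm.families.Binomial(),            # logistic link -> odds ratios
    cov_struct=sm.cov_struct.Exchangeable(),  # within-player correlation
)
result = model.fit()

# Exponentiate coefficients to express effects as odds ratios with 95% CIs
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```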

    Randomised controlled trial comparing daily versus depot vitamin D3 therapy in 0-16-year-old newly settled refugees in Western Australia over a period of 40 weeks

    Vitamin D deficiency is highly prevalent in newly settled refugees in Western Australia (WA). If adherence to daily vitamin D therapy is problematic, depot therapy is a therapeutic alternative. The aim of this study was to compare daily versus depot treatment and the factors influencing the therapeutic outcome. Newly settled refugees (n = 151) with 25(OH)D levels less than 78 nmol/L were randomised to receive daily or depot vitamin D therapy, with follow-up at eight-week intervals to 40 weeks. Biochemical and clinical parameters were collected at each visit. Generalized Linear Mixed Models (GLMM) examined the longitudinal changes over time, controlling for confounders including age, gender, treatment arm, season, country of refuge/origin and sun exposure score. Participants were aged 5.5 months to 16.0 years (75 males, 83 females). Both treatment groups achieved vitamin D sufficiency. The daily treatment group had significantly higher 25(OH)D levels at each visit post baseline and a higher proportion of participants with levels above 50 nmol/L at all time points. Time, treatment group, calcium and sun exposure score were significant predictors of 25(OH)D serum levels. Depot vitamin D therapy is an alternative to daily treatment in this at-risk group of children and adolescents in whom treatment adherence is problematic.
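
    Because 25(OH)D is a continuous outcome, a linear mixed model is a reasonable stand-in for the GLMM reported above; a minimal sketch with statsmodels, assuming a long-format data frame and hypothetical column names:

```python
# Illustrative linear mixed model for longitudinal 25(OH)D levels,
# assuming a DataFrame `df` with one row per participant per 8-weekly
# visit. A simplified stand-in for the GLMM described; names are
# hypothetical.
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "vitd_25oh ~ visit_week * treatment_arm + age + gender"
    " + season + country + sun_exposure_score",
    data=df,
    groups=df["participant_id"],  # random intercept per participant
)
result = model.fit()
print(result.summary())           # fixed effects: time, arm, covariates
```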

    Early infant feeding and adiposity risk: from infancy to adulthood

    Introduction: Systematic reviews suggest that a longer duration of breast-feeding is associated with a reduction in the risk of later overweight and obesity. Most studies examining breast-feeding in relation to adiposity have not used longitudinal analysis. In our study, we aimed to examine early infant feeding and adiposity risk in a longitudinal cohort from birth to young adulthood using new as well as published data. Methods: Data from the Western Australian Pregnancy Cohort (Raine) Study in Perth, W.A., Australia, were used to examine associations between breast-feeding and measures of adiposity at 1, 2, 3, 6, 8, 10, 14, 17, and 20 years. Results: Breast-feeding was measured in a number of ways. Longer breast-feeding (in months) was associated with reductions in weight z-scores between birth and 1 year (β = -0.027; p < 0.001) in the adjusted analysis. At 3 years, breast-feeding for <4 months increased the odds of infants experiencing early rapid growth (OR 2.05; 95% CI 1.43-2.94; p < 0.001). From 1 to 8 years, children breast-fed for ≤4 months compared to ≥12 months had a significantly greater probability of exceeding the 95th percentile of weight. The age at which breast-feeding was stopped and a milk other than breast milk was introduced (introduction of formula milk) played a significant role in the trajectory of the BMI from birth to 14 years; the 4-month cutoff point was consistently associated with a higher BMI trajectory. Introduction of a milk other than breast milk before 6 months compared to at 6 months or later was a risk factor for being overweight or obese at 20 years of age (OR 1.47; 95% CI 1.12-1.93; p = 0.005). Discussion: Breast-feeding until 6 months of age and beyond should be encouraged and is recommended for protection against increased adiposity in childhood, adolescence, and young adulthood. Adverse long-term effects of early growth acceleration are fundamental in later overweight and obesity. Formula feeding stimulates a higher postnatal growth velocity, whereas breast-feeding promotes slower growth and a reduced likelihood of overweight and obesity. Biological mechanisms underlying the protective effect of breast-feeding against obesity are based on the unique composition and metabolic and physiological responses to human milk.
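
    Adjusted odds ratios like the 20-year estimate above are typically produced by a logistic regression; a minimal sketch of that kind of model, with entirely hypothetical covariate names rather than the study's actual adjustment set:

```python
# Minimal adjusted logistic regression sketch for the 20-year outcome,
# assuming a DataFrame `df` with binary `overweight_20y` and a binary
# exposure `formula_before_6m` (milk other than breast milk introduced
# before 6 months). Covariates are illustrative only.
import numpy as np
import statsmodels.formula.api as smf

result = smf.logit(
    "overweight_20y ~ formula_before_6m + maternal_bmi + sex + birth_weight",
    data=df,
).fit()

print(np.exp(result.params["formula_before_6m"]))          # odds ratio
print(np.exp(result.conf_int().loc["formula_before_6m"]))  # 95% CI
```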

    Epidemiology of musculoskeletal injury in military recruits: A systematic review and meta-analysis

    Background: Injuries are a common occurrence in military recruit training; however, due to differences in the capture of training exposure, injury incidence rates are rarely reported. Our aim was to determine the musculoskeletal injury epidemiology of military recruits, including a standardised injury incidence rate. Methods: Epidemiological systematic review following the PRISMA 2020 guidelines. Five online databases were searched from database inception to 5th May 2021. Prospective and retrospective studies that reported data on musculoskeletal injuries sustained by military recruits after the year 2000 were included. We reported on the frequency, prevalence and injury incidence rate. Incidence rate per 1000 training days (Exact 95% CI) was calculated using meta-analysis to allow comparisons between studies. Observed heterogeneity (e.g., training duration) precluded pooling of results across countries. The Joanna Briggs Institute Quality Assessment Checklist for Prevalence Studies assessed study quality. Results: This review identified 41 studies comprising 451,782 recruits. Most studies (n = 26; 63%) reported the number of injured recruits, and the majority of studies (n = 27; 66%) reported the number of injuries to recruits. The prevalence of recruits with medical attention injuries or time-loss injuries was 22.8% and 31.4%, respectively. Meta-analysis revealed the injury incidence rate for recruits with a medical attention injury may be as high as 19.52 injuries per 1000 training days; and time-loss injury may be as high as 3.97 injuries per 1000 training days. Longer recruit training programs were associated with a reduced injury incidence rate (p = 0.003). The overall certainty of the evidence was low per a modified GRADE approach. Conclusion: This systematic review with meta-analysis highlights a high musculoskeletal injury prevalence and injury incidence rate within military recruits undergoing basic training, with minimal improvement observed over the past 20 years. Longer training programs, which may decrease the degree of overload experienced by recruits, may reduce injury incidence rates. Unfortunately, reporting standards and reporting consistency remain a barrier to generalisability. Trial registration: PROSPERO (Registration number: CRD42021251080)
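
    The "Exact 95% CI" for an incidence rate is conventionally the exact Poisson interval. The sketch below shows one common way to compute a rate per 1000 training days with that interval; the event and exposure counts are made up for illustration, not the review's data.

```python
# Exact (Poisson) 95% CI for an injury incidence rate per 1000 training
# days, as commonly used when standardising rates for meta-analysis.
from scipy.stats import chi2

def exact_poisson_rate_ci(events, exposure_days, per=1000, alpha=0.05):
    """Rate per `per` training days with an exact Poisson CI."""
    rate = events / exposure_days * per
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return rate, lower / exposure_days * per, upper / exposure_days * per

# Illustrative numbers only: 250 injuries over 40,000 recruit-training days
print(exact_poisson_rate_ci(events=250, exposure_days=40_000))
```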

    Paediatric nurses' satisfaction with organisational communication, job satisfaction, and intention to stay: A structural equation modelling analysis

    Aims: To examine the effect of paediatric nurses' organisational communication satisfaction on job satisfaction and intention to stay in their role. Background: Nurses' satisfaction with organisational communication has not been studied in depth in recent years and, specifically, there is a paucity of evidence on paediatric nurses' job satisfaction and intention to stay in their current job. Methods: A cross-sectional quantitative research design using questionnaires was used. Descriptive statistics were used to analyse demographic data, and structural equation modelling was used to analyse the hypothesised models. Findings: The constructs of supervisor relationships, communication climate and media quality had a significant direct effect on paediatric nurses' job satisfaction. Job satisfaction was found to have a significant inverse relationship with intention to leave and looking for another job in nursing. Conclusion: Strategies that promote job satisfaction and communication satisfaction should be disseminated by management in order to reduce paediatric nurses' intention to leave and looking for another job. Implications for nursing management: The study highlighted the need to improve the efficacy of communication systems for upward and downward feedback in order to improve supervisor–subordinate relationships. Further, managers need to facilitate an adequate flow of relevant information to maintain low levels of frustration and prevent the development of grapevine communication amongst nurses. © 2020 Australian College of Nursing Ltd. Published by Elsevier Ltd.
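
    A hypothetical sketch of how a structural model of this shape could be specified in Python with the semopy package (lavaan-style syntax). The constructs and paths are inferred from the abstract, not the authors' actual measurement model.

```python
# Hypothetical SEM specification in semopy. Variable names are
# assumptions based on the constructs named in the abstract.
import semopy

model_desc = """
job_satisfaction ~ supervisor_relations + communication_climate + media_quality
intention_to_leave ~ job_satisfaction
looking_for_job ~ job_satisfaction
"""

model = semopy.Model(model_desc)
model.fit(df)           # df: one row per nurse, columns as above
print(model.inspect())  # path estimates, standard errors, p-values
```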

    Bouncing back from COVID-19: A Western Australian community perspective

    Introduction: This study explored the behavioral profiles of Western Australian residents during a COVID-19 lockdown period and transitions in behavior post-lockdown. Methods: A total of 313 participants (76% female; age: M = 50.1, SD = 15.7 years) completed behavioral and mental health questionnaire items ~2 months after a 3-month COVID-19 lockdown, in October 2020, using retrospective recall to assess their experience during the lockdown period. Latent transition analysis (LTA) was used to identify behavioral profiles and transitions. Indicators were identified by assessing during- versus post-lockdown group differences (Kruskal–Wallis and chi-square tests), and profiles were described using qualitative open-ended questions. Results: Significant indicators included changes in physical activity, leisure screen time, alcohol intake, psychological distress, and loneliness, but not fast food consumption. The significant indicators were used to form the LTA models. The five-latent-class model showed the best model fit (log-likelihood = −1301.66, AIC = 426.12, BIC = 609.68). Approximately one in four participants reported a change in their behavior profile after the lockdown ceased. Key differences between the profiles were age, household income, education, resilience, sense of control, existing mental health issues, and social relations. Washing hands and social distancing were the most recalled and effective health campaigns across the classes, with campaigns addressing physical activity, alcohol consumption, or domestic violence receiving the least attention. Discussion: Overall, while most participants recovered relatively well after the lockdown period, the LTA identified subgroups, such as those who were inactive and lonely, that experienced more difficulties than other groups, and engagement with public health campaigns differed. The results provide important insights for future public health campaigns on how these campaigns might be diversified to effectively target more people and particular groups to maximize engagement for maintaining people's mental health, with additional focus on physical activity, alcohol consumption, and domestic violence.
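
    Full latent transition analysis requires specialised software, but the headline statistic above (the share of participants changing profile after lockdown) reduces to a transition matrix over assigned class memberships. A minimal pandas sketch with hypothetical column names:

```python
# Sketch of summarising profile transitions once modal class assignments
# exist (this is a post-hoc summary, not the LTA itself). Column names
# are hypothetical.
import pandas as pd

# df: one row per participant with during- and post-lockdown classes
transitions = pd.crosstab(
    df["class_during_lockdown"],
    df["class_post_lockdown"],
    normalize="index",  # row-wise transition probabilities
)
print(transitions)

# Share of participants whose behavioural profile changed post-lockdown
changed = (df["class_during_lockdown"] != df["class_post_lockdown"]).mean()
print(f"{changed:.0%} changed behavioural profile")
```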

    Role of lactic acidosis as a mediator of sprint-mediated nausea

    This study aims to determine whether there is a relationship between nausea level and lactic acidosis during recovery from sprinting. In all, 13 recreationally active males completed a 60 s bout of maximal intensity cycling. Prior to and for 45 min following exercise, blood pH, pCO2, and lactate levels were measured together with nausea. In response to sprinting, nausea, lactate, and H+ concentrations increased and remained elevated for at least 10 min (p < .001), whereas pCO2 increased only transiently (p < .001) before falling below pre-exercise levels (p < .001), with all these variables returning toward pre-exercise levels during recovery. Both measures of nausea adopted for analyses (nausea profile, NP; visual analogue scale, VAS) demonstrated significant repeated measures correlation (rmcorr) post-exercise between nausea and plasma lactate (VAS and NP rrm > 0.595, p < .0001) and H+ concentrations (VAS and NP rrm > 0.689, p < .0001), but an inconsistent relationship with pCO2 (VAS rrm = 0.250, p = .040; NP rrm = 0.144, p = .248) and bicarbonate levels (VAS rrm = −0.252, p = .095; NP rrm = −0.397, p = .008). Linear mixed modeling was used to predict the trajectory of nausea over time, with both lactate and H+ concentrations found to be key predictors of nausea (p < .0001). In conclusion, this study reveals a strong positive relationship between nausea and both H+ and lactate concentrations during recovery from sprinting, a finding consistent with H+ and lactate being potential mediators of nausea post-sprinting. However, as the timing of the recovery of both H+ and lactate was delayed compared to that of nausea, further research is required to confirm these findings and investigate other potential mechanisms.
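
    Repeated measures correlation of the kind reported above is implemented in the pingouin package; a short sketch with an assumed long-format data frame and illustrative column names:

```python
# Repeated-measures correlation between nausea (VAS) and plasma lactate
# across post-exercise time points. DataFrame `df` (long format, one row
# per subject per time point) and its column names are assumptions.
import pingouin as pg

res = pg.rm_corr(
    data=df,
    x="lactate",           # plasma lactate (mmol/L)
    y="nausea_vas",        # visual analogue scale nausea score
    subject="subject_id",  # repeated measures grouped by subject
)
print(res)                 # r_rm, dof, p-value, 95% CI
```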

    Self-reported throwing volumes are not a valid tool for monitoring throwing loads in elite Australian cricket players: An observational cohort study

    Objectives: To determine the concurrent validity of player self-reported and independently observed throwing volume, and to examine whether sex, playing position, or the time taken to upload self-reported data post training influences the accuracy of self-reported throwing loads. Design: Cross-sectional cohort study. Methods: A total of 8 female and 18 male elite cricket players participated in the study. Overarm throws from 12 training sessions during the 2020–21 cricket year were observed. Player self-reported throwing volume data were retrieved post training, with the time difference between session completion and self-reported data upload recorded. Results: A moderate positive correlation was found between self-reported and observed throwing loads (rho = 0.65); however, only 22% of players reported values within a 10% level of error. Players reported a mean (SD) absolute inaccuracy of 11.17 (9.77) throws and a mean (SD) relative inaccuracy of 24.76 (16.04) percent. Sex did not influence reporting accuracy (p = 0.41). Females tended to upload self-reported data on the day of training, whereas males tended to report the day following. Players who uploaded their data more than one day after training were the most inaccurate, with a mean relative inaccuracy of 36%. Conclusions: While there is a clear relationship between observed and self-reported throwing volumes, the findings of this study question the validity of using player self-reported throwing load as a marker of true throwing load, with most players recording in excess of 10% error. High-performance staff and players should consider whether the current accuracy of self-reported throwing load justifies the additional reporting burden placed on players during training.
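
    The agreement statistics described above can be reproduced in a few lines of pandas/scipy; a sketch assuming a data frame of paired observed and self-reported throw counts, with hypothetical column names:

```python
# Agreement statistics for self-reported vs observed throwing volume,
# assuming `df` has one row per player-session. Names are illustrative.
import pandas as pd
from scipy.stats import spearmanr

rho, p = spearmanr(df["observed_throws"], df["reported_throws"])

abs_error = (df["reported_throws"] - df["observed_throws"]).abs()
rel_error = abs_error / df["observed_throws"] * 100  # percent error

within_10pct = (rel_error <= 10).mean()              # share within 10%

print(f"rho = {rho:.2f} (p = {p:.3f})")
print(f"mean absolute error = {abs_error.mean():.2f} throws")
print(f"mean relative error = {rel_error.mean():.2f}%")
print(f"within 10% error: {within_10pct:.0%}")
```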