
    Compression under pressure: physiological and methodological factors influencing the effect of compression garments on running economy

    Evidence for the effects of compression garments on sports performance and physiological responses to dynamic exercise remains equivocal. Contradictory findings within the sporting literature are confounded by methodological heterogeneity in terms of the intensity and modality of exercise, the type of garment worn, and the interface pressure produced by the garment. The interface pressure applied by compression clothing is an important measure in evaluating the bio-physical impact of compression. Interface pressure values obtained in vivo with two portable pressure devices (PicoPress and Kikuhime) were compared against a reference standard (HOSY). The PicoPress satisfied the a priori thresholds for acceptable validity at the posterior and lateral orientations with calf stockings and tights, confirming its future use to assess interface pressure. A small, likely beneficial improvement in running economy was observed with correctly fitted (95%:5%:0%; η2 = 0.55) but not oversized compression tights, indicating that a certain level of interface pressure is required. Compression tights improved running economy only at higher relative exercise intensities (77.7-91.5% V̇O2max). The absence of any improvement at lower intensities (67.1-77.6% V̇O2max) suggests that changes in running economy from compression are dependent on relative exercise intensity when %V̇O2max is used as an anchor of exercise intensity. Comparing measures from two portable, wireless near-infrared spectroscopy (NIRS) devices (PortaMon and MOXY), we found that the low-cost and lightweight MOXY device gave tissue oxygen saturation values at rest and during exercise that were physiologically credible and suitable for future research. Compression tights did affect ground contact time, but not tissue oxygen saturation, cardiovascular or other kinematic parameters, during running at intensities equivalent to long-distance race speed. Compression tights can produce small improvements in running economy, but effects are restricted to higher intensity exercise and appear dependent on garment interface pressure. It remains unlikely that this small positive effect on running economy, in very specific conditions, is enough to result in a meaningful impact on running performance.
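    Running economy and relative intensity (%V̇O2max) are the two anchors used throughout this abstract. As a point of reference only, the sketch below shows how these quantities are conventionally derived from steady-state oxygen uptake; all numbers and function names are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: conventional derivation of running economy and
# relative intensity from oxygen-uptake data. Values are hypothetical.

def relative_intensity(vo2, vo2max):
    """Exercise intensity expressed as a percentage of V̇O2max."""
    return 100.0 * vo2 / vo2max

def running_economy(vo2_ml_kg_min, speed_km_h):
    """Oxygen cost of running in mL·kg^-1·km^-1 (lower = more economical)."""
    speed_km_min = speed_km_h / 60.0
    return vo2_ml_kg_min / speed_km_min

vo2max = 60.0        # hypothetical V̇O2max, mL·kg^-1·min^-1
vo2_at_speed = 48.0  # hypothetical steady-state V̇O2 at 14 km·h^-1

print(f"Relative intensity: {relative_intensity(vo2_at_speed, vo2max):.1f}% V̇O2max")
print(f"Running economy:    {running_economy(vo2_at_speed, 14.0):.0f} mL·kg^-1·km^-1")
```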

    The academic backbone: longitudinal continuities in educational achievement from secondary school and medical school to MRCP(UK) and the specialist register in UK medical students and doctors

    Background: Selection of medical students in the UK is still largely based on prior academic achievement, although doubts have been expressed as to whether performance in earlier life is predictive of outcomes later in medical school or post-graduate education. This study analyses data from five longitudinal studies of UK medical students and doctors from the early 1970s until the early 2000s. Two of the studies used the AH5, a group test of general intelligence (that is, intellectual aptitude). Sex and ethnic differences were also analyzed in light of the changing demographics of medical students over the past decades. Methods: Data from five cohort studies were available: the Westminster Study (began clinical studies from 1975 to 1982), the 1980, 1985, and 1990 cohort studies (entered medical school in 1981, 1986, and 1991), and the University College London Medical School (UCLMS) Cohort Study (entered clinical studies in 2005 and 2006). Different studies had different outcome measures, but most had performance on basic medical sciences and clinical examinations at medical school, performance in Membership of the Royal Colleges of Physicians (MRCP(UK)) examinations, and being on the General Medical Council Specialist Register. Results: Correlation matrices and path analyses are presented. There were robust correlations across different years at medical school, and medical school performance also predicted MRCP(UK) performance and being on the GMC Specialist Register. A-levels correlated somewhat less with undergraduate and post-graduate performance, but there was restriction of range in entrants. General Certificate of Secondary Education (GCSE)/O-level results also predicted undergraduate and post-graduate outcomes, though less strongly than A-level results, and may have incremental validity for clinical and post-graduate performance. The AH5 had some significant correlations with outcome, but they were inconsistent. Sex and ethnicity also had predictive effects on measures of educational attainment, undergraduate, and post-graduate performance. Women performed better in assessments but were less likely to be on the Specialist Register. Non-white participants generally underperformed in undergraduate and post-graduate assessments, but were equally likely to be on the Specialist Register. There was a suggestion of smaller ethnicity effects in earlier studies. Conclusions: The existence of the Academic Backbone concept is strongly supported, with attainment at secondary school predicting performance in undergraduate and post-graduate medical assessments, and the effects spanning many years. The Academic Backbone is conceptualized in terms of the development of more sophisticated underlying structures of knowledge ('cognitive capital' and 'medical capital'). The Academic Backbone provides strong support for using measures of educational attainment, particularly A-levels, in student selection.
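    The Results section rests on correlation matrices and path analyses linking school attainment, medical-school examinations and postgraduate outcomes. The sketch below uses entirely invented data and variable names simply to show the kind of correlation matrix such analyses start from; it is not the study's analysis.

```python
# Minimal sketch, not study data: a correlation matrix across a chain of
# attainment measures, of the kind that underlies the path analyses above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
aptitude = rng.normal(size=n)                            # e.g. an AH5-style score
a_level = 0.5 * aptitude + rng.normal(scale=0.9, size=n)
medschool = 0.6 * a_level + rng.normal(scale=0.8, size=n)
mrcp = 0.6 * medschool + rng.normal(scale=0.8, size=n)

df = pd.DataFrame({"aptitude": aptitude, "a_level": a_level,
                   "medschool": medschool, "mrcp": mrcp})
print(df.corr().round(2))   # correlations attenuate along the chain
```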

    Construct-level predictive validity of educational attainment and intellectual aptitude tests in medical student selection: meta-regression of six UK longitudinal studies

    Background: Measures used for medical student selection should predict future performance during training. A problem for any selection study is that predictor-outcome correlations are known only in those who have been selected, whereas selectors need to know how measures would predict in the entire pool of applicants. That problem of interpretation can be solved by calculating construct-level predictive validity, an estimate of the true predictor-outcome correlation across the range of applicant abilities. Methods: Construct-level predictive validities were calculated in six cohort studies of medical student selection and training (student entry, 1972 to 2009) for a range of predictors, including A-levels, General Certificates of Secondary Education (GCSEs)/O-levels, and aptitude tests (AH5 and UK Clinical Aptitude Test (UKCAT)). Outcomes included undergraduate basic medical science and finals assessments, as well as postgraduate measures of Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) performance and entry onto the Specialist Register. Construct-level predictive validity was calculated with the method of Hunter, Schmidt and Le (2006), adapted to correct for right-censorship of examination results due to grade inflation. Results: Meta-regression analyzed 57 separate predictor-outcome correlations (POCs) and construct-level predictive validities (CLPVs). Mean CLPVs are substantially higher (.450) than mean POCs (.171). Mean CLPVs for first-year examinations were high for A-levels (.809; CI: .501 to .935), and lower for GCSEs/O-levels (.332; CI: .024 to .583) and UKCAT (mean = .245; CI: .207 to .276). A-levels had higher CLPVs for all undergraduate and postgraduate assessments than did GCSEs/O-levels and intellectual aptitude tests. CLPVs of educational attainment measures decline somewhat during training, but continue to predict postgraduate performance. Intellectual aptitude tests have lower CLPVs than A-levels or GCSEs/O-levels. Conclusions: Educational attainment has strong CLPVs for undergraduate and postgraduate performance, accounting for perhaps 65% of the true variance in first-year performance. Such CLPVs justify the use of educational attainment measures in selection, but also raise a key theoretical question concerning the remaining 35% of variance (after measurement error, range restriction and right-censorship have been taken into account). Just as in astrophysics ‘dark matter’ and ‘dark energy’ are posited to balance various theoretical equations, so medical student selection must also have its ‘dark variance’, whose nature is not yet properly characterized, but which explains a third of the variation in performance during training. Some of this variance probably relates to factors which are unpredictable at selection, such as illness or other life events, but some is probably also associated with factors such as personality, motivation or study skills.
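    The "perhaps 65% of true variance" figure in the Conclusions follows directly from squaring the construct-level predictive validity reported for A-levels against first-year examinations; the step is made explicit below for clarity.

```latex
% Variance explained is the square of the (construct-level) correlation.
% With \rho = 0.809 (A-levels vs first-year examinations, from the abstract):
\[
  \rho^{2} = 0.809^{2} \approx 0.65
  \qquad\Rightarrow\qquad
  1 - \rho^{2} \approx 0.35 \ \text{of true variance unexplained (the `dark variance').}
\]
```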

    Applied Sports Nutrition Support, Dietary Intake and Body Composition Changes of a Female Athlete Completing 26 Marathons in 26 Days: A Case Study.

    The aim of this case study is to describe the nutrition practices of a female recreational runner (VO2max 48.9 ml·kg-1·min-1) who completed 26 marathons (42.195 km) in 26 consecutive days. Information relating to the nutritional intake of female runners during multi-day endurance events is extremely limited, yet the number of people participating continues to increase year on year. This case study reports the nutrition intervention, dietary intake, body composition changes and performance in the lead-up to and during the 26 days. Prior to undertaking the 26-marathon challenge, three consultations were held between the athlete and a sports nutrition advisor, planning and tailoring the general diet and race-specific strategies to the endurance challenge. During the marathons, mean energy and fluid intakes were 1039.7 ± 207.9 kcal (607.1-1453.2) and 2.39 ± 0.35 L (1.98-3.19), respectively. Mean hourly carbohydrate intake was 38.9 g·hr-1. Eleven days after completion of the 26 marathons, body mass had decreased by 4.6 kg and lean body mass had increased by 0.53 kg compared with 20 days prior. This case study highlights the importance of providing general and event-specific nutrition education when training for such an event. This is particularly prudent for multi-day endurance running events.
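    The hourly carbohydrate figure reported above is simply total in-race carbohydrate divided by finish time. The sketch below illustrates that calculation with invented numbers; the abstract does not report total carbohydrate consumed or finish times, so these values are assumptions for illustration only.

```python
# Hypothetical illustration of how an hourly carbohydrate intake such as
# 38.9 g·hr^-1 would be derived; the inputs below are invented, not study data.
total_carbohydrate_g = 175.0   # carbohydrate consumed during one marathon (assumed)
finish_time_h = 4.5            # marathon finish time in hours (assumed)

hourly_intake = total_carbohydrate_g / finish_time_h
print(f"Carbohydrate intake: {hourly_intake:.1f} g/hr")   # ≈ 38.9 g/hr
```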

    Perspectives from the water: Utilizing fishers' observations to inform SNE/MA windowpane science and management

    Within fisheries, stakeholders often have varying viewpoints regarding natural marine resources, and use different sets of information to evaluate their condition. Evaluating a resource with different sets of information can lead to different conclusions. Windowpane flounder (Scophthalmus aquosus) are a managed finfish species in the northwest Atlantic whose regulations have the potential to limit harvest opportunities for target species. We analyzed commercial trip and catch information from video data to understand local densities of windowpane flounder in conjunction with fisheries-independent surveys. Video monitoring data from three Rhode Island commercial fishers' vessels and fisheries-independent trawl survey data were analyzed to understand the geographic distribution of the stock as well as overlap with temporary closed areas. Biomass data from the fisheries-dependent and fisheries-independent surveys were combined with a spatial-temporal model that accounted for differences in catchability among vessels and spatial autocorrelation. A separate analysis of estimated discard rates with observer data was also conducted to determine how the distribution of windowpane discards in Southern New England compared to the distribution of model-predicted windowpane abundance. In agreement with the fishermen's observations, the temporary closed areas were not located where the highest densities of windowpane flounder occurred. The temporary closed areas, however, were located where the highest rates of discards occurred and thus where fishing had the greatest impact on the stock. The integration of verified fishery-dependent data with the scientific surveys has the potential to create a single set of information that is trusted by all user groups.
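    The key modelling idea here is that fishery-dependent (video-monitored) and fishery-independent (survey) biomass observations can be combined in one model if vessel-specific catchability is estimated alongside the stock trend. The toy sketch below shows only that catchability-adjustment idea on simulated data; it is not the study's spatial-temporal model, and the spatial autocorrelation component is deliberately omitted.

```python
# Toy sketch: estimating vessel-specific catchability alongside year effects.
# Data are simulated; vessel names and values are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
vessels = ["survey", "vessel_A", "vessel_B", "vessel_C"]
catchability = {"survey": 1.0, "vessel_A": 0.6, "vessel_B": 0.8, "vessel_C": 0.5}

rows = []
for year in range(2015, 2020):
    true_density = 10 + 2 * (year - 2015)          # simulated stock trend
    for v in vessels:
        for _ in range(30):                          # 30 tows/trips per vessel per year
            obs = true_density * catchability[v] * rng.lognormal(sigma=0.3)
            rows.append({"year": year, "vessel": v, "log_biomass": np.log(obs)})

df = pd.DataFrame(rows)
# Year effects give a standardized index; vessel effects absorb catchability differences.
fit = smf.ols("log_biomass ~ C(year) + C(vessel)", data=df).fit()
print(fit.params.filter(like="vessel"))   # estimated log-catchability ratios vs the survey
```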

    A pre-post test evaluation of the impact of the PELICAN MDT-TME development programme on the working lives of colorectal cancer team members

    Background - The PELICAN Multidisciplinary Team Total Mesorectal Excision (MDT-TME) Development Programme aimed to improve clinical outcomes for rectal cancer by educating colorectal cancer teams in precision surgery and related aspects of multidisciplinary care. The Programme reached almost all colorectal cancer teams across England. We took the opportunity to assess the impact of participating in this novel team-based Development Programme on the working lives of colorectal cancer team members. Methods - The impact of participating in the Programme on team members' self-reported job stress, job satisfaction and team performance was assessed in a pre-post course study. 333/568 (59%) team members, from the 75 multidisciplinary teams who attended the final year of the Programme, completed questionnaires pre-course and 6-8 weeks post-course. Results - Across all team members, the main sources of job satisfaction related to working in multidisciplinary teams, whilst feeling overloaded was the main source of job stress. Surgeons and clinical nurse specialists reported higher levels of job satisfaction than team members who do not provide direct patient care, whilst MDT coordinators reported the lowest levels of job satisfaction and job stress. Both job stress and satisfaction decreased after participating in the Programme for all team members. There was a small improvement in team performance. Conclusions - Participation in the Development Programme had a mixed impact on the working lives of team members in the immediate aftermath of attending. The decrease in team members' job stress may reflect the improved knowledge and skills conferred by the Programme. The decrease in job satisfaction may be the consequence of being unable to apply these skills immediately in clinical practice because of a lack of required infrastructure and/or equipment. In addition, whilst the Programme raised awareness of the challenges of teamworking, a greater focus on tackling these issues may have improved working lives further.

    Performance comparison of the MOXY and PortaMon near-infrared spectroscopy muscle oximeters at rest and during exercise

    The purpose of the study was to compare muscle oxygenation as measured by two portable, wireless near-infrared spectroscopy (NIRS) devices under resting and dynamic conditions. A recently developed low-cost NIRS device (MOXY) was compared against an established PortaMon system that makes use of the spatially resolved spectroscopy algorithm. The influence of increasing external pressure on the tissue oxygen saturation index (TSI) indicated that both devices are stable between 2 and 20 mmHg; above this pressure, however, the MOXY reports declining TSI values. Analysis of adipose tissue thickness (ATT) and TSI shows a significant, nonlinear difference between devices at rest. The devices report similar TSI (%) values at low ATT; above an ATT of 10 mm the difference remains constant (−14.7 ± 2.8%). The most likely explanation for this difference is the small source-detector separation (2.5 cm) of the MOXY, resulting in lower penetration into muscle in subjects with higher ATT. Interday test-retest reliability of resting TSI was evaluated on five separate occasions, with the PortaMon reporting a lower coefficient of variation (1.8% to 2.5% versus 5.7% to 6.2%). In studies on male subjects with low ATT, decreases in TSI were strongly correlated between devices during isometric exercise, arterial occlusion, and incremental arm-crank exercise. However, the MOXY reports a greater dynamic range, particularly during ischemia induced by isometric contraction or occlusion (Δ74.3% versus Δ43.7%; hyperemia MAX minus occlusion MIN). This study shows that in this subject group both MOXY and PortaMon produce physiologically credible TSI measures during rest and exercise. However, the absolute values obtained during exercise are generally not comparable between devices unless corrected by physiological calibration following an arterial occlusion.
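    The closing sentence refers to physiological calibration, i.e. rescaling raw TSI between the minimum reached during arterial occlusion and the maximum during the subsequent hyperaemia, so that devices with different absolute ranges can be compared on a common scale. Below is a minimal sketch of that normalisation; the function name and numbers are assumptions for illustration, not code or data from the study.

```python
# Minimal sketch of a physiological-calibration rescaling for NIRS TSI values.
def calibrate_tsi(tsi_raw, tsi_min_occlusion, tsi_max_hyperemia):
    """Express TSI as a % of the individual's physiological range
    (0% = occlusion minimum, 100% = post-occlusion hyperaemic maximum)."""
    return 100.0 * (tsi_raw - tsi_min_occlusion) / (tsi_max_hyperemia - tsi_min_occlusion)

# Example: the same exercise state read by two devices with different absolute ranges
print(calibrate_tsi(tsi_raw=55.0, tsi_min_occlusion=30.0, tsi_max_hyperemia=80.0))  # 50.0
print(calibrate_tsi(tsi_raw=45.0, tsi_min_occlusion=20.0, tsi_max_hyperemia=70.0))  # 50.0
```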

    Antiretroviral treatment use, co-morbidities and clinical outcomes among Aboriginal participants in the Australian HIV Observational Database (AHOD)

    Background: There are few data regarding clinical care and outcomes of Indigenous Australians living with HIV, and it is unknown whether these differ from those of non-Indigenous HIV-positive Australians. Methods: AHOD commenced enrolment in 1999 and is a prospective cohort of HIV-positive participants attending HIV outpatient services throughout Australia, of which 20 (74 %) sites report Indigenous status. Data were collected up until March 2013 and compared between Indigenous and non-Indigenous participants. Person-year methods were used to compare death rates, rates of loss to follow-up and rates of laboratory testing during follow-up between Indigenous and non-Indigenous participants. Factors associated with time to first combination antiretroviral therapy (cART) regimen change were assessed using Kaplan-Meier and Cox proportional hazards methods. Results: Forty-two of 2197 (1.9 %) participants were Indigenous. Follow-up amongst Indigenous and non-Indigenous participants was 332 and 16,270 person-years, respectively. HIV virological suppression was achieved in similar proportions of Indigenous and non-Indigenous participants 2 years after initiation of cART (81.0 % vs 76.5 %, p = 0.635). Indigenous status was not independently associated with shorter time to change from first- to second-line cART (aHR 0.95, 95 % CI 0.51-1.76, p = 0.957). Compared with non-Indigenous participants, Indigenous participants had significantly less frequent laboratory monitoring of CD4 count (rate: 2.76 tests/year vs 2.97 tests/year, p = 0.025) and HIV viral load (rate: 2.53 tests/year vs 2.93 tests/year, p < 0.001), while testing rates for lipids and blood glucose were almost half those of non-Indigenous participants (rate: 0.43 tests/year vs 0.71 tests/year, p < 0.001). Loss to follow-up (23.8 % vs 29.8 %, p = 0.496) and death (2.4 % vs 7.1 %, p = 0.361) occurred in similar proportions of Indigenous and non-Indigenous participants, respectively, although causes of death in both groups were mostly non-HIV-related. Conclusions: As far as we are aware, these are the first data comparing clinical outcomes between Indigenous and non-Indigenous HIV-positive Australians. The forty-two Indigenous participants represent over 10 % of all Indigenous Australians ever diagnosed with HIV. Although outcomes were not significantly different, Indigenous patients had lower rates of laboratory testing for HIV and lipid/glucose parameters. Given the elevated risk of cardiovascular disease in the general Indigenous community, the additional risk factor of HIV infection warrants further focus on modifiable risk factors to maximise life expectancy in this population.
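    The testing comparisons above are person-year rates, i.e. total events divided by total follow-up time rather than a proportion of participants. The sketch below shows that calculation with invented follow-up data; it is not AHOD data or the study's code.

```python
# Toy person-year rate calculation; the group totals below are invented.
def rate_per_person_year(events, person_years):
    """Events per person-year of follow-up."""
    return events / person_years

groups = {
    "group_a": {"tests": 900, "person_years": 330.0},
    "group_b": {"tests": 47000, "person_years": 16000.0},
}

for name, g in groups.items():
    rate = rate_per_person_year(g["tests"], g["person_years"])
    print(f"{name}: {rate:.2f} tests/year")
```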