
    The Reliability of the Seated Medicine Ball Throw as Assessed with Accelerometer Instrumentation

    The Seated Medicine Ball Throw (SMBT) is low-risk, easy to perform, requires minimal equipment, and is a valid measure of upper body explosiveness. The Ballistic Ball™ (BB) medicine ball contains inertial sensors that estimate peak velocity and transmit these values to an iPad™ app via Bluetooth™. This method of gathering data may be superior to measuring horizontal distance, as there is less chance of confounding factors and it is easier to administer. The objective of this study was to evaluate the reliability of the BB peak velocity measurement in the SMBT. Twenty healthy, rested, recreationally active undergraduate students volunteered to participate in this study. After a standard dynamic warm-up, subjects were taught proper throwing technique. For familiarization, subjects performed repeated SMBTs with a 10 lb BB until the horizontal distance thrown for 3 consecutive trials was within 0.25 m. After 20 minutes of rest, subjects repeated the warm-up protocol, then performed 6 trials with the same 10 lb BB, during which peak velocity was recorded. Test-retest reliability across these 6 trials was analyzed using intraclass correlation coefficients (ICCs). ICCs between consecutive trials ranged from 0.94 to 0.98. Mean peak velocities for trials 1-6 were 3.85±1.14, 3.86±1.06, 3.94±1.22, 3.85±1.13, 3.95±1.21, and 3.92±1.20 m/s, respectively. The high ICC values indicate excellent reliability of the peak velocity measurement from the BB device. BB peak velocity as assessed during an SMBT is a reliable method for assessing upper body explosiveness.
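The consecutive-trial reliability analysis described above can be sketched with a standard two-way ANOVA computation. This is not the study's analysis code, and the abstract does not state which ICC variant was used, so ICC(3,1) (two-way mixed effects, consistency, single measure) is assumed here:

```python
# Minimal sketch of a test-retest reliability check, assuming ICC(3,1):
# two-way mixed effects, consistency, single measure.
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1) for an (n_subjects, k_trials) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()    # subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()    # trials
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical mini-example: three subjects, two trials, perfectly
# consistent rank order (trial 2 is uniformly 1 m/s faster).
scores = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
print(icc_3_1(scores))  # -> 1.0
```

Because the consistency form removes any systematic trial-to-trial shift, a uniform learning effect across trials does not lower this ICC; only subject-by-trial inconsistency does.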

    The Reliability of the Seated Medicine Ball Throw for Distance

    Measuring distance thrown during the Seated Medicine Ball Throw (SMBT) has been used frequently in the literature to quantify upper body explosiveness, because the test is easy to learn, low-risk, and requires minimal equipment. However, the reliability of distance thrown in the SMBT has not been broadly reported, nor have familiarization protocols been thoroughly documented. The purpose of this study was to assess the reliability of distance thrown during the SMBT as a representative measure of upper body explosiveness in active, recreationally trained adults. Before testing, 20 subjects completed a dynamic warm-up. After learning proper technique, subjects were familiarized with the exercise by completing successive trials with a 10 lb medicine ball, with 1 minute of rest between trials, until three consecutive throws within 0.25 m were achieved. Subjects rested 20 minutes, repeated the warm-up, and then completed 6 trials of the SMBT in which the distance of each throw was measured. Any trial in which technique deviated significantly from the instructions was repeated. Intraclass correlation coefficients (ICCs) were used to assess reliability between trials. Distances thrown for trials 1-6 were 3.43±0.99, 3.41±0.95, 3.48±1.00, 3.48±1.00, 3.46±1.03, and 3.54±1.05 m, respectively. ICCs for consecutive trial pairs ranged from 0.97 to 0.99. These findings suggest that distance thrown is a reliable representative measure of upper body explosiveness in recreationally trained adults, and that the familiarization protocol was sufficient to produce consistent performance.

    Prognostic model to identify and quantify risk factors for mortality among hospitalised patients with COVID-19 in the USA

    Objectives To develop a prognostic model to identify and quantify risk factors for mortality among patients admitted to hospital with COVID-19.
    Design Retrospective cohort study. Patients were randomly assigned to either training (80%) or test (20%) sets. The training set was used to fit a multivariable logistic regression. Predictors were ranked using variable importance metrics. Models were assessed by C-indices, Brier scores and calibration plots in the test set.
    Setting Optum de-identified COVID-19 Electronic Health Record dataset, covering over 700 hospitals and 7000 clinics in the USA.
    Participants 17 086 patients hospitalised with COVID-19 between 20 February 2020 and 5 June 2020.
    Main outcome measure All-cause mortality while hospitalised.
    Results The full model, which included information on demographics, comorbidities, laboratory results and vital signs, had good discrimination (C-index=0.87) and was well calibrated, with some overprediction for the most at-risk patients. Results were similar on the training and test sets, suggesting little overfitting. Age was the most important risk factor. A model that included all demographics and comorbidities (C-index=0.79) performed only slightly better than a model that included age alone (C-index=0.76). Across the study period, predicted mortality was 1.3% for patients aged 18 years, 8.9% for those aged 55 years and 28.7% for those aged 85 years. Predicted mortality across all ages declined over the study period, from 22.4% in March to 14.0% in May.
    Conclusion Age was the most important predictor of all-cause mortality, although vital signs and laboratory results added considerable prognostic information, with oxygen saturation, temperature, respiratory rate, lactate dehydrogenase and white cell count among the most important predictors. Demographic and comorbidity factors did not improve model performance appreciably. The full model had good discrimination and was reasonably well calibrated, suggesting that it may be useful for assessment of prognosis.
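The modelling workflow described above (80/20 split, multivariable logistic regression, C-index and Brier score on the held-out set) can be sketched as follows. This is not the authors' code: the predictors, risk relationship and cohort below are synthetic stand-ins, and for a binary outcome the C-index is equivalent to the ROC AUC.

```python
# Sketch of the evaluation pipeline on a synthetic cohort (assumed data,
# not the Optum EHR dataset): fit on 80%, score discrimination and
# calibration on the held-out 20%.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(18, 90, n)                    # synthetic predictor
spo2 = rng.normal(95, 3, n)                     # synthetic oxygen saturation
logit = -6 + 0.06 * age - 0.02 * (spo2 - 95)    # assumed risk relationship
death = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, spo2])
X_tr, X_te, y_tr, y_te = train_test_split(X, death, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]
print(f"C-index (ROC AUC): {roc_auc_score(y_te, p):.2f}")
print(f"Brier score: {brier_score_loss(y_te, p):.3f}")
```

Calibration plots would additionally bin `p` and compare mean predicted risk with observed mortality in each bin, which is how the overprediction for the highest-risk patients would be detected.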

    Sustainable HIV treatment in Africa through viral-load-informed differentiated care

    There are inefficiencies in current approaches to monitoring patients on antiretroviral therapy in sub-Saharan Africa. Patients typically attend clinics every 1 to 3 months for clinical assessment. Clinic costs are comparable with the costs of the drugs themselves; CD4 counts are measured every 6 months, but patients are rarely switched to second-line therapies. To ensure sustainability of treatment programmes, a transition to more cost-effective delivery of antiretroviral therapy is needed. In contrast to the CD4 count, measurement of the level of HIV RNA in plasma (the viral load) provides a direct measure of the current treatment effect. Viral-load-informed differentiated care is a means of tailoring care so that those with suppressed viral load visit the clinic less frequently and attention is focussed on those with unsuppressed viral load to promote adherence and timely switching to a second-line regimen. The most feasible approach to measuring viral load in many countries is to collect dried blood spot samples for testing in regional laboratories; however, there have been concerns over the sensitivity and specificity of this approach for defining treatment failure, and over the delay in returning results to the clinic. We use modelling to synthesize evidence and evaluate the cost-effectiveness of viral-load-informed differentiated care, accounting for the limitations of dried blood spot testing. We find that viral-load-informed differentiated care using dried blood spot testing is cost-effective and is a recommended strategy for patient monitoring, although further empirical evidence as the approach is rolled out would be of value. We also explore the potential benefits of point-of-care viral load tests that may become available in the future. This article has not been written or reviewed by Nature editors. Nature accepts no responsibility for the accuracy of the information provided.
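The differentiated-care logic described above can be illustrated with a toy decision rule. This is purely illustrative, not the authors' model: the 1000 copies/mL failure threshold and the visit intervals are assumptions chosen for the example.

```python
# Toy sketch of viral-load-informed differentiated care (assumed
# threshold and intervals, not the modelled policy): suppressed patients
# visit less often; unsuppressed patients get close follow-up, adherence
# support, and consideration for a second-line switch.

def next_visit_months(viral_load_copies_per_ml: float, threshold: float = 1000) -> int:
    """Return a suggested number of months until the next clinic visit."""
    if viral_load_copies_per_ml < threshold:
        return 12  # suppressed: infrequent monitoring (assumed interval)
    return 3       # unsuppressed: close follow-up (assumed interval)

print(next_visit_months(50))    # suppressed -> 12
print(next_visit_months(5000))  # unsuppressed -> 3
```

The cost saving in the modelled strategy comes from exactly this asymmetry: most patients are suppressed, so most clinic visits can be safely spaced out while laboratory and clinical attention concentrates on the minority with treatment failure.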

    A map of human genome variation from population-scale sequencing

    The 1000 Genomes Project aims to provide a deep characterization of human genome sequence variation as a foundation for investigating the relationship between genotype and phenotype. Here we present results of the pilot phase of the project, designed to develop and compare different strategies for genome-wide sequencing with high-throughput platforms. We undertook three projects: low-coverage whole-genome sequencing of 179 individuals from four populations; high-coverage sequencing of two mother-father-child trios; and exon-targeted sequencing of 697 individuals from seven populations. We describe the location, allele frequency and local haplotype structure of approximately 15 million single nucleotide polymorphisms, 1 million short insertions and deletions, and 20,000 structural variants, most of which were previously undescribed. We show that, because we have catalogued the vast majority of common variation, over 95% of the currently accessible variants found in any individual are present in this data set. On average, each person is found to carry approximately 250 to 300 loss-of-function variants in annotated genes and 50 to 100 variants previously implicated in inherited disorders. We demonstrate how these results can be used to inform association and functional studies. From the two trios, we directly estimate the rate of de novo germline base substitution mutations to be approximately 10⁻⁸ per base pair per generation. We explore the data with regard to signatures of natural selection, and identify a marked reduction of genetic variation in the neighbourhood of genes, due to selection at linked sites. These methods and public data will support the next phase of human genetic research.
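The reported de novo rate implies roughly a few dozen new mutations per child, which can be checked with back-of-envelope arithmetic. The diploid genome size used here is an assumed round number, not a figure from the abstract:

```python
# Back-of-envelope check of the reported de novo germline rate
# (~1e-8 substitutions per base pair per generation, from the abstract):
# expected new mutations per child across the diploid genome.
MUTATION_RATE = 1e-8        # per base pair per generation (reported)
DIPLOID_GENOME_BP = 6.4e9   # ~2 x 3.2 Gb haploid genome, assumed round number

expected_de_novo = MUTATION_RATE * DIPLOID_GENOME_BP
print(f"Expected de novo substitutions per generation: ~{expected_de_novo:.0f}")
# -> ~64
```

This order of magnitude (tens of new substitutions per generation) is consistent with the trio-based estimate quoted in the abstract.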