
    Backward walking training improves balance in school-aged boys

    Background: Falls remain a major cause of childhood morbidity and mortality. Backward walking (BW) may offer benefits, particularly for balance and motor control, beyond those gained through forward walking (FW), and may be a potential intervention for the prevention of falls. The objective of this study was to investigate the effects of BW training on balance in boys.
    Methods: Sixteen healthy boys (age: 7.19 ± 0.40 y) were randomly assigned to either an experimental or a control group. The experimental group completed a BW training programme (12 weeks, twice weekly, 25 min per session); the control group did not train. Both groups underwent five dynamic balance assessments with a Biodex Stability System (anterior/posterior, medial/lateral and overall stability indices) before, during and after training (weeks 0, 4, 8, 12 and 24). Six control and six experimental boys also took part in a comparison of lower-limb kinematics between FW and BW after the training (week 12).
    Results: The balance of the experimental group was better than that of the control group after 8 weeks of training (P < 0.01), and remained better 12 weeks after the training programme had finished (P < 0.05). The kinematic analysis indicated no difference between the control and experimental groups in either FW or BW gait after the BW training (P > 0.05). Compared with FW, the stance phase of BW tended to be longer, while the swing phase, stride length, walking speed and ranges of motion of the thigh, calf and foot were reduced (P < 0.01).
    Conclusion: Backward walking training can improve balance in school-aged boys.
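    A minimal sketch of the kind of between-group comparison implied by the abstract: comparing an overall stability index between the experimental and control groups at each assessment week. The synthetic data, group sizes and use of independent-samples t-tests below are illustrative assumptions (Python with NumPy/SciPy), not the study's actual data or statistical procedure.

```python
# Illustrative sketch only: synthetic stability-index values, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weeks = [0, 4, 8, 12, 24]          # assessment weeks reported in the abstract
n_per_group = 8                    # 16 boys split into two groups

# Hypothetical overall stability index per boy (lower = better balance).
control = {w: rng.normal(3.0, 0.5, size=n_per_group) for w in weeks}
experimental = {w: rng.normal(3.0 - 0.05 * w, 0.5, size=n_per_group) for w in weeks}

# Independent-samples t-test between groups at each assessment week
# (an assumed analysis; the paper's exact statistics are not reproduced here).
for w in weeks:
    t, p = stats.ttest_ind(experimental[w], control[w])
    print(f"week {w:>2}: t = {t:+.2f}, p = {p:.3f}")
```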

    Bleeding in cardiac patients prescribed antithrombotic drugs: Electronic health record phenotyping algorithms, incidence, trends and prognosis

    Background: Clinical guidelines and public health authorities lack recommendations on scalable approaches to defining and monitoring the occurrence and severity of bleeding in populations prescribed antithrombotic therapy.
    Methods: We examined linked primary care, hospital admission and death registry electronic health records (CALIBER 1998–2010, England) of patients with newly diagnosed atrial fibrillation, acute myocardial infarction, unstable angina or stable angina, with the aim of developing algorithms for bleeding events. Using the developed bleeding phenotypes, we estimated the incidence of bleeding events with Kaplan-Meier plots and used Cox regression models to assess prognosis for all-cause mortality, atherothrombotic events and further bleeding.
    Results: We present electronic health record phenotyping algorithms for bleeding based on bleeding diagnoses in primary or hospital care, symptoms, transfusion, surgical procedures and haemoglobin values. In validation of the phenotype, we estimated a positive predictive value of 0.88 (95% CI 0.64, 0.99) for hospitalised bleeding. Amongst 128,815 patients, 27,259 (21.2%) had at least one bleeding event, with 5-year risks of bleeding of 29.1%, 21.9%, 25.3% and 23.4% following diagnoses of atrial fibrillation, acute myocardial infarction, unstable angina and stable angina, respectively. Rates of hospitalised bleeding per 1000 patients more than doubled, from 1.02 (95% CI 0.83, 1.22) in January 1998 to 2.68 (95% CI 2.49, 2.88) in December 2009, coinciding with increased rates of antiplatelet and vitamin K antagonist prescribing. Patients with hospitalised bleeding and patients with primary care bleeding, with or without markers of severity, were at increased risk of all-cause mortality and atherothrombotic events compared with those with no bleeding. For example, the hazard ratio for all-cause mortality was 1.98 (95% CI 1.86, 2.11) for primary care bleeding with markers of severity and 1.99 (95% CI 1.92, 2.05) for hospitalised bleeding without markers of severity, compared with patients with no bleeding.
    Conclusions: Electronic health record bleeding phenotyping algorithms offer a scalable approach to monitoring bleeding in the population. The incidence of bleeding has doubled since 1998, affects one in four cardiovascular disease patients, and is associated with poor prognosis. Efforts are required to tackle this iatrogenic epidemic.
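    The survival-analysis steps named in the abstract (Kaplan-Meier estimation of bleeding incidence and Cox regression for prognosis) can be sketched as below. The lifelines package, the synthetic data, the column names and the censoring scheme are all illustrative assumptions, not the CALIBER dataset or the authors' exact models.

```python
# Illustrative sketch only: synthetic data, not the CALIBER records.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
follow_up = 5.0                          # administrative censoring at 5 years
t_bleed = rng.exponential(15, n)         # latent time to first bleeding event (years)
t_death = rng.exponential(25, n)         # latent time to death (years)

df = pd.DataFrame({
    "years_to_bleed": np.minimum(t_bleed, follow_up),
    "bleed": (t_bleed <= follow_up).astype(int),
    "years_to_death": np.minimum(t_death, follow_up),
    "died": (t_death <= follow_up).astype(int),
    "hospitalised_bleed": rng.binomial(1, 0.1, n),   # assumed severity marker
    "age": rng.normal(70, 10, n),
})

# Kaplan-Meier: cumulative risk of a bleeding event over 5 years of follow-up.
km = KaplanMeierFitter()
km.fit(df["years_to_bleed"], event_observed=df["bleed"])
print("5-year bleeding risk:", 1 - km.survival_function_at_times(follow_up).iloc[0])

# Cox proportional hazards: all-cause mortality by hospitalised bleeding, adjusted for age.
cox = CoxPHFitter()
cox.fit(df[["years_to_death", "died", "hospitalised_bleed", "age"]],
        duration_col="years_to_death", event_col="died")
print(cox.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```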

    The Standard Account of Moral Distress and Why We Should Keep It

    In the last three decades, considerable theoretical and empirical research has been undertaken on the topic of moral distress among health professionals. Understood as a psychological and emotional response to the experience of moral wrongdoing, there is evidence to suggest that, if unaddressed, it contributes to staff demoralization, desensitization and burnout and, ultimately, to lower standards of patient safety and quality of care. More recently, however, the concept of moral distress has been subjected to important criticisms. Specifically, some authors argue that the standard account of moral distress elucidated by Jameton (AWHONN's Clin Issues Perinat Women's Health 4(4):542-551, 1984) does not refer to a discrete phenomenon and/or is not sufficiently broad, and that this makes measuring its prevalence among health professionals, and other groups of workers, difficult if not impossible. In this paper, we defend the standard account of moral distress. We understand it as a concept that draws attention to the social, political and contextual determinants of moral agency and brings the emotional landscape of the moral realm to the fore. Given the increasing pressure on health professionals worldwide to meet efficiency, financial and corporate targets, and the reported adverse effects of these pressures on the quality and safety of patient care, we believe that further empirical research that deploys the standard account of moral distress is timely and important.