
    Long term cognitive outcomes of early term (37-38 weeks) and late preterm (34-36 weeks) births: a systematic review

    Background: There is a paucity of evidence regarding long-term outcomes of late preterm (34-36 weeks) and early term (37-38 weeks) delivery. The objective of this systematic review was to assess long-term cognitive outcomes of children born at these gestations. Methods: Four electronic databases (Medline, Embase, clinicaltrials.gov and PsycINFO) were searched. The last search was 5th August 2016. Studies were included if they reported gestational age, IQ measure and the ages assessed. The protocol was registered with the International prospective register of systematic reviews (PROSPERO Record CRD42015015472). Two independent reviewers assessed the studies. Data were abstracted and critical appraisal of eligible papers was performed. Results: Of 11,905 potential articles, seven studies reporting on 41,344 children were included. For early term births, four studies (n = 35,711) consistently showed an increase in cognitive scores for infants born at full term (39-41 weeks) compared with those born at early term (37-38 weeks), with increases for each week of term (a difference between 37 and 40 weeks of around 3 IQ points), despite differences in age of testing and method of IQ/cognitive testing. Four studies (n = 5644) reporting childhood cognitive outcomes of late preterm births (34-36 weeks) also differed in study design (cohort and case control), age of testing, and method of IQ testing, and found no differences in outcomes between late preterm and term births, although risk of bias was high in the included studies. Conclusion: Children born at 39-41 weeks have higher cognitive outcome scores than those born at early term (37-38 weeks). This should be considered when discussing timing of delivery. For children born late preterm, the data are scarce and, when compared with full term (37-42 weeks), showed no difference in IQ scores.

    Stable isotope composition of faeces as an indicator of seasonal diet selection in wild herbivores in southern Africa

    We used stable carbon isotopes and nitrogen contents of faeces to investigate diet selection differences among wild grazers, browsers and mixed-feeders at seasonal intervals across a year in the Hluhluwe–Umfolozi Park, South Africa. Faecal δ13C values showed that wildebeest and warthog selected predominantly C4 plant material throughout the year. Impala ingested significantly more C3 plant material during the winter months than in all other months. Nyala also ingested more browse during winter. The nitrogen content of wildebeest faeces was significantly lower in winter than in summer, suggesting a possible decline in diet quality during the dry winter months. No significant seasonal trend in faecal nitrogen content was evident for nyala or warthog. Nitrogen contents of impala faeces were significantly higher in spring than in other seasons. Faecal isotopic and nutrient content analyses appear to be useful indicators of short-term diet selection and nutritional status of free-ranging herbivores. Analyses show resource partitioning among the different herbivores at finer time resolutions than can be obtained from bone collagen or isotopic analysis of tooth enamel.
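    The grazer-versus-browser inference above rests on the contrast between C4 grasses and C3 browse in carbon isotope ratios. As a minimal sketch of how a faecal δ13C value translates into a diet estimate, the two-endmember mixing model below uses typical literature endmember values; these figures and the example faecal value are illustrative assumptions, not numbers reported in the study.

```python
# Illustrative two-endmember carbon isotope mixing model: estimates the
# fraction of C4 grass in the diet from a faecal delta-13C value.
# Endmember values are typical literature figures (assumed), not study data.

DELTA_C3 = -27.0  # per mil, typical C3 browse endmember (assumed)
DELTA_C4 = -13.0  # per mil, typical C4 grass endmember (assumed)

def fraction_c4(delta_faeces: float,
                delta_c3: float = DELTA_C3,
                delta_c4: float = DELTA_C4) -> float:
    """Linear mixing: f_C4 = (d_sample - d_C3) / (d_C4 - d_C3), clipped to [0, 1]."""
    f = (delta_faeces - delta_c3) / (delta_c4 - delta_c3)
    return min(max(f, 0.0), 1.0)

# Example: a faecal value of -15 per mil implies a predominantly C4 (grazing) diet.
print(f"{fraction_c4(-15.0):.0%} C4")  # ~86% C4
```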

    Proportion and number of upper-extremity musculoskeletal disorders attributable to the combined effect of biomechanical and psychosocial risk factors in a working population

    The objective of this paper is to assess the combined effect of occupational biomechanical and psychosocial risk factors on the incidence of work-related upper-extremity musculoskeletal disorders (UEMSDs) and to estimate the proportion and number of incident cases attributable to these risk factors in a working population. Using data from the French COSALI (COhorte des SAlariés LIgériens) cohort (enrolment phase: 2002-2005; follow-up phase: 2007-2010), a complete case analysis including 1246 workers (59% men, mean age 38 ± 8.6 years at baseline) was performed. All participants underwent a standardized clinical examination at enrolment and 1611 workers were re-examined at follow-up. Population attributable fractions and the number of UEMSD cases attributable to occupational risk factors were calculated. During follow-up, 139 UEMSD cases were diagnosed, representing an estimated 129,320 projected incident UEMSD cases in the working population. After adjusting for personal factors, in model 1, 8664 cases (6.7%) were attributable to low social support, 19,010 (14.7%) to high physical exertion, and 20,443 (15.8%) to co-exposure to both factors. In model 2, 16,294 (12.6%) cases were attributable to low social support, 6983 (5.4%) to posture with arms above shoulder level, and 5043 (3.9%) to co-exposure to both factors. Our findings suggest that many cases of UEMSD could potentially be prevented by multidimensional interventions aimed at reducing exposure to high physical exertion and improving social support at work.
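    The attributable-case counts reported above follow from combining a population attributable fraction (PAF) with the projected number of incident cases. The sketch below shows one standard PAF formulation (Levin's formula); the exposure prevalence and relative risk used here are made-up illustrations, not the COSALI estimates, and the study's exact adjustment procedure is not reproduced.

```python
# Illustrative computation of a population attributable fraction (PAF) and the
# corresponding number of attributable cases, using Levin's formula.
# The prevalence and relative-risk values are hypothetical, not study results.

def levin_paf(exposure_prevalence: float, relative_risk: float) -> float:
    """PAF = p_e * (RR - 1) / (1 + p_e * (RR - 1))."""
    excess = exposure_prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

p_exposed = 0.30                # hypothetical share of workers with high physical exertion
rr = 1.8                        # hypothetical adjusted relative risk of UEMSD
total_incident_cases = 129_320  # projected incident cases, as reported in the abstract

paf = levin_paf(p_exposed, rr)
print(f"PAF = {paf:.1%}, attributable cases ~ {paf * total_incident_cases:,.0f}")
```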

    Technology use by people with intellectual and developmental disabilities to support employment activities: A single-subject design meta-analysis

    Objectives: Technology has the potential to improve employment and rehabilitation related outcomes for persons with disabilities. The purpose of this study was to examine the impact of technology use on employment-related outcomes for people with intellectual and developmental disabilities. Study design: A comprehensive search of the literature pertaining to technology use by people with intellectual disabilities was conducted, and a single-subject design meta-analysis was conducted for the subset of those studies that focused on employment and rehabilitation related outcomes. Results: The use of technology to promote outcomes in this area was shown to be generally effective, in particular when universal design features were addressed. Conclusions: Technology has the potential to enable people with intellectual and developmental disabilities to achieve more positive employment and rehabilitation outcomes. It is important to focus on universal design features important to persons with cognitive disabilities, and there is a need for more research in this area.
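    Single-subject design meta-analyses aggregate effects across small-n, phase-based studies rather than group comparisons. The abstract does not state which effect metric the authors used, so the sketch below shows one commonly used option, the percentage of non-overlapping data (PND), with hypothetical phase data purely for illustration.

```python
# Percentage of non-overlapping data (PND): the share of intervention-phase
# observations exceeding the best baseline observation. Shown as an
# illustrative metric only; the review's actual metric is not stated here.

def pnd(baseline: list[float], intervention: list[float]) -> float:
    """Percentage of intervention points above the highest baseline point."""
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100.0 * above / len(intervention)

# Hypothetical task-completion counts for one participant across phases.
baseline_phase = [2, 3, 2, 4]
intervention_phase = [5, 6, 4, 7, 8]
print(f"PND = {pnd(baseline_phase, intervention_phase):.0f}%")  # 80%
```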

    Mental disorders in judicial workers: analysis of sickness absence in a cohort study

    OBJECTIVE: To analyze risk factors for sickness absence due to mental disorders among judicial workers in Bahia, Brazil. METHODS: Retrospective cohort of 2,660 workers of the judicial sector in Bahia, Brazil, with follow-up from 2011 to 2016. The main outcome measures were survival curves estimated for the independent variables using the Kaplan-Meier product limit estimator and risk factors for the first episode of sickness absence calculated based on the Cox regression model. RESULTS: The survival estimate of the study population for the event was 0.90, and from the Cox model the risk factors for the first episode of sickness absence due to mental disorders were: female sex (HR = 1.81), occupation of magistrate (HR = 1.80), and age over 30 years (HR = 1.84). In addition, the risk for new cases of sickness absence among women reached 4.0 times that for men in 2015. The estimated relative risks of sickness absence and the observed decline in survival over time add information to the literature on sociodemographic and occupational factors associated with sickness absence due to mental disorders in the public sector. CONCLUSION: These results highlight the need for further research to more precisely identify vulnerable groups at risk of preventable mental health-related sickness absence in the workplace, to better identify the workplace organizational factors that contribute to these disorders, and to assess the effectiveness of workplace interventions to improve mental health among judicial and other public sector workers.
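    As a minimal sketch of the two survival methods named above (Kaplan-Meier curves and a Cox proportional hazards model), the code below uses the `lifelines` library. The DataFrame columns and values are hypothetical stand-ins for the study's variables (sex, occupation, age group), not the actual cohort data.

```python
# Sketch of Kaplan-Meier and Cox proportional hazards analyses with lifelines.
# All data below are hypothetical; column names mirror the covariates in the abstract.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "days_to_first_absence": [120, 400, 90, 365, 200, 365, 150, 300],
    "absence_event":         [1,   0,   1,  0,   1,   0,   1,   0],  # 1 = first sickness absence observed
    "female":                [1,   0,   1,  1,   0,   0,   1,   0],
    "magistrate":            [0,   1,   0,  0,   1,   0,   0,   1],
    "age_over_30":           [1,   1,   0,  1,   0,   1,   1,   0],
})

# Kaplan-Meier estimate of remaining absence-free over follow-up.
kmf = KaplanMeierFitter()
kmf.fit(df["days_to_first_absence"], event_observed=df["absence_event"])

# Cox model: hazard ratios for sex, occupation, and age group.
cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_first_absence", event_col="absence_event")
cph.print_summary()  # the exp(coef) column gives the hazard ratios
```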

    Crowdsourced assessment of surgical skill proficiency in cataract surgery

    OBJECTIVE: To test whether crowdsourced lay raters can accurately assess cataract surgical skills. DESIGN: Two-armed study: independent cross-sectional and longitudinal cohorts. SETTING: Washington University Department of Ophthalmology. PARTICIPANTS AND METHODS: Sixteen cataract surgeons with varying experience levels submitted cataract surgery videos to be graded by 5 experts and 300+ crowdworkers masked to surgeon experience. Cross-sectional study: 50 videos from surgeons ranging from first-year resident to attending physician, pooled by years of training. Longitudinal study: 28 videos obtained at regular intervals as residents progressed through 180 cases. Surgical skill was graded using the modified Objective Structured Assessment of Technical Skill (mOSATS). Main outcome measures were overall technical performance, reliability indices, and correlation between expert and crowd mean scores. RESULTS: Experts demonstrated high interrater reliability and accurately predicted training level, establishing construct validity for the modified OSATS. Crowd scores correlated with expert scores (r = 0.865, p < 0.0001) but were consistently higher for first-, second-, and third-year residents (p < 0.0001, paired t-test). Longer surgery duration negatively correlated with training level (r = -0.855, p < 0.0001) and expert score (r = -0.927, p < 0.0001). The longitudinal dataset reproduced the cross-sectional study findings for crowd and expert comparisons. A regression equation transforming crowd score plus video length into expert score was derived from the cross-sectional dataset. CONCLUSIONS: Crowdsourced rankings correlated with expert scores but were not equivalent; crowd scores overestimated technical competency, especially for novice surgeons. A novel approach of adjusting crowd scores with surgery duration generated a more accurate predictive model for surgical skill. More studies are needed before crowdsourcing can be reliably used for assessing surgical proficiency.
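    The adjustment described above amounts to a linear regression that maps crowd score plus surgery duration onto expert score. The sketch below illustrates that idea with ordinary least squares; the scores and durations are synthetic, and the paper's fitted coefficients (whose correlation value is truncated in this copy) are not reproduced.

```python
# Sketch of regressing expert score on crowd score and surgery duration.
# All numbers are synthetic stand-ins, not the study's data or coefficients.
import numpy as np

crowd_score = np.array([22.0, 18.5, 25.0, 20.0, 27.5, 24.0])   # hypothetical mOSATS-style crowd scores
duration_min = np.array([35.0, 48.0, 22.0, 40.0, 18.0, 26.0])  # hypothetical case lengths (minutes)
expert_score = np.array([18.0, 14.0, 23.0, 16.0, 26.5, 21.0])  # hypothetical expert ratings

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(crowd_score), crowd_score, duration_min])
coef, *_ = np.linalg.lstsq(X, expert_score, rcond=None)
intercept, b_crowd, b_duration = coef

predicted = intercept + b_crowd * 23.0 + b_duration * 30.0
print(f"Adjusted expert-score estimate for crowd=23, 30 min: {predicted:.1f}")
```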

    The Cronobacter genus: ubiquity and diversity

    Members of the Cronobacter genus (formerly Enterobacter sakazakii) have become associated with neonatal infections and in particular with contaminated reconstituted infant formula. However, this is only one perspective on the organism, since the majority of infections occur in the adult population, and the organism has been isolated from the enteral feeding tubes of neonates on non-formula diets. In recent years, methods of detection from food and environmental sources have improved, though accurate identification has been problematic. Robust identification is essential in order to implement the recent Codex Alimentarius Commission (2008) and related microbiological criteria for powdered infant formula (PIF; intended target age 0-6 months). Genomic analysis of emergent pathogens is of considerable advantage both in improving detection methods and in understanding the evolution of virulence. One ecosystem for Cronobacter is plant material, which may enable the organism to resist desiccation, adhere to surfaces, and resist some antimicrobial agents. These traits may also confer survival mechanisms relevant to food manufacturing, as well as virulence mechanisms.

    Impact of minimal residual disease status in patients with relapsed/refractory acute lymphoblastic leukemia treated with inotuzumab ozogamicin in the phase III INO-VATE trial.

    Minimal residual disease (MRD) negativity is a key prognostic indicator of outcome in acute lymphocytic leukemia. In the INO-VATE trial (clinicaltrials.gov identifier: NCT01564784), patients with relapsed/refractory acute lymphocytic leukemia who received inotuzumab versus standard chemotherapy achieved greater remission and MRD-negativity rates as well as improved overall survival (hazard ratio 0.75; one-sided P = 0.0105). The current analysis assessed the prognostic value of MRD negativity at the end of inotuzumab treatment. All patients who received inotuzumab (n = 164) were included. Among patients with complete remission/complete remission with incomplete hematologic response (CR/CRi; n = 121), MRD-negative status (by multiparametric flow cytometry) was defined as <1 × 10⁻⁴ blasts/nucleated cells. MRD negativity was achieved in 76 patients at the end of treatment. Compared with MRD-positive status, MRD-negative status with CR/CRi was associated with significantly improved overall survival and progression-free survival: hazard ratio 0.512 (97.5% CI 0.313-0.835; one-sided P = 0.0009) and 0.423 (97.5% CI 0.256-0.699; P < 0.0001), respectively. Median overall survival was 14.1 versus 7.2 months in the MRD-negative versus MRD-positive groups. Patients in first salvage who achieved MRD negativity at the end of treatment experienced significantly improved survival versus that seen in MRD-positive patients, particularly those who proceeded to stem cell transplant. Among patients with relapsed/refractory acute lymphocytic leukemia who received inotuzumab, those with MRD-negative CR/CRi had the best survival outcomes.
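    The MRD-negativity cut-off stated above corresponds to fewer than 1 residual blast per 10,000 nucleated cells. The small helper below simply makes that threshold explicit; the flow cytometry event counts in the usage lines are illustrative assumptions, not trial data.

```python
# MRD-negativity threshold from the abstract: < 1 x 10^-4 blasts/nucleated cells
# by multiparametric flow cytometry. Event counts below are illustrative only.

MRD_NEGATIVE_THRESHOLD = 1e-4  # blasts / nucleated cells

def is_mrd_negative(blast_events: int, nucleated_cell_events: int) -> bool:
    """True when the residual blast fraction is below 1 x 10^-4."""
    return blast_events / nucleated_cell_events < MRD_NEGATIVE_THRESHOLD

print(is_mrd_negative(blast_events=3, nucleated_cell_events=500_000))   # True  (6e-6)
print(is_mrd_negative(blast_events=80, nucleated_cell_events=400_000))  # False (2e-4)
```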