
    Literacy skills of Australian Indigenous school children with and without otitis media and hearing loss

    This study examined the relationship between reading, spelling, and the presence of otitis media (OM) and co-occurring hearing loss (HL) in metropolitan Indigenous Australian children, and compared their reading and spelling outcomes with those of their non-Indigenous peers. OM and HL may hinder language development and phonological awareness skills, but there is little empirical evidence to link OM/HL and literacy in this population. Eighty-six Indigenous and non-Indigenous children attending pre-primary, year one and year two at primary schools in the Perth metropolitan area participated in the study. The ear health of the participants was screened by Telethon Speech and Hearing Centre EarBus in 2011/2012. Participants’ reading and spelling skills were tested with culturally modified sub-tests of the Queensland University Inventory of Literacy. Of the 46 Indigenous children, 18 presented with at least one episode of OM and one episode of HL. Results indicated that Indigenous participants had significantly poorer non-word and real word reading and spelling skills than their non-Indigenous peers. There was no significant difference between the groups of Indigenous participants with OM and HL and those with normal ear health on either measure. This research provides evidence to suggest that Indigenous children have ongoing literacy development difficulties and discusses the possibility of OM as one of many impacting factors

    Development of verb inflections among Bangla-speaking children with language disorder

    Background: Children with language disorder across languages have problems with verb morphology. The nature of these problems varies according to the typology of the language. The language analyzed in this paper is the Standard Bangla spoken in Dhaka, Bangladesh, by more than 200 million people. It is an underexplored language with agglutinative features in its verb inflections. Some information on the acquisition of the language by typically developing children is available, but to date we have no information on the nature of ALD. As in many places in the developing world, the circumstances for research into language disorder are challenging, as there is no well‐ordered infrastructure for the identification of these children and approaches to intervention are not evidence based. This study represents the first attempt to characterize the nature of morphosyntactic limitations in standard Bangla‐speaking children with language disorder. Aims: To describe the performance of a group of children with language disorder on elicitation procedures for three Bangla verb inflections of increasing structural complexity—present simple, present progressive and past progressive—and to compare their abilities on these forms with those of a group of typically developing Bangla‐speaking children. Methods & Procedures: Nine children with language disorder (mean age = 88.11 months) were recruited from a special school in Dhaka. Eight of the children also had a differentiating or co‐occurring condition. They responded to three tasks: a semi‐structured conversation to elicit the present simple, and two picture‐based tasks to elicit the present progressive and past progressive. Their performance was compared with data available from a large group of younger typically developing children. Outcomes & Results: Group data indicated a trajectory of performance for the children with language disorder comparable to that of the typically developing children (present simple > present progressive > past progressive), but with significantly lower mean scores. Standard deviations suggested considerable individual variation, and individual profiles constructed for each child revealed varying patterns of ability, some of which did not accord with the typical developmental trajectory and/or substitution patterns. Conclusions & Implications: This study identified verb morphology deficits in Bangla‐speaking children with language disorder who had associated conditions. Variation in performance among the children suggests that individual profiles will be most effective in guiding intervention

    Nationwide abundance and distribution of African forest elephants across Gabon using non-invasive SNP genotyping

    Robust monitoring programs are essential for understanding changes in wildlife population dynamics and distribution over time, especially for species of conservation concern. In this study, we applied a rapid non-invasive sampling approach to the Critically Endangered African forest elephant (Loxodonta cyclotis) at nationwide scale in its principal remaining population strongholds in Gabon. We used a species-specific customized genetic panel and a spatial capture-recapture (SCR) approach, which gave a snapshot of the current abundance and density distribution of forest elephants across the country. We estimated mean forest elephant density at 0.38 (95% Confidence Interval 0.24–0.52) elephants per km² across 18 surveyed sites. We confirm that Gabon is the main forest elephant stronghold, both in terms of estimated population size (95,110; 95% CI 58,872–131,349) and spatial distribution (250,782 km²). Predicted elephant densities were highest in relatively flat areas with a high proportion of suitable habitat not in proximity to the national border. Protected areas and human pressure were not strong predictors of elephant densities in this study. Our nationwide systematic survey of the forest elephants of Gabon serves as a proof of concept for the application of non-invasive genetic sampling to rigorous population monitoring at large spatial scales. To our knowledge, it is the first nationwide DNA-based assessment of a free-ranging large mammal in Africa. Our findings offer a useful national baseline and status update for forest elephants in Gabon and will inform adaptive management and stewardship of elephants and forests in the most important national forest elephant stronghold in Africa
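
As a quick plausibility check of the abundance figures quoted above (not a reproduction of the SCR model), the reported mean density and occupied area can be multiplied together and compared with the reported national estimate. The short Python sketch below does exactly that, using only the numbers stated in the abstract.

```python
# Back-of-envelope check using only the figures quoted in the abstract; the
# published estimate itself comes from site-specific spatial capture-recapture
# (SCR) models, not from this simple multiplication.

mean_density_per_km2 = 0.38      # reported mean density (95% CI 0.24-0.52)
occupied_area_km2 = 250_782      # reported spatial distribution of elephants
reported_abundance = 95_110      # reported national estimate (95% CI 58,872-131,349)

implied_abundance = mean_density_per_km2 * occupied_area_km2
print(f"density x area:    {implied_abundance:,.0f} elephants")
print(f"reported estimate: {reported_abundance:,} elephants")
# The two figures agree to within a few hundred animals, which is consistent
# with the published value being derived from a model-based density surface
# rather than a single national mean density.
```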

    Monitoring and evaluation of malaria in pregnancy – developing a rational basis for control

    Monitoring and evaluation of malaria control in pregnancy is essential for assessing the efficacy and effectiveness of health interventions aimed at reducing the major burden of this disease on women living in endemic areas. Yet there is currently no integrated strategic approach to how this should be achieved. Malaria control in pregnancy is formulated in relation to epidemiological patterns of exposure. Current emphasis is on intermittent preventive treatment in pregnancy (IPTp) with sulphadoxine-pyrimethamine in higher transmission areas, combined with insecticide-treated bed nets (ITNs) and case management. Emphasis in lower transmission areas is primarily on case management. This paper discusses a rational basis for monitoring and evaluation based on: assessments of therapeutic and prophylactic drug efficacy; proportional reductions in parasite prevalence; seasonal effects; rapid assessment methodologies; birthweight and/or anaemia nomograms; case-coverage methods; maternal mortality indices; operational and programmatic indicators; and safety and pharmacovigilance of antimalarials in pregnancy. These approaches should be incorporated more effectively within National Programmes in order to facilitate surveillance and improve identification of high-risk women. Systems for utilizing routinely collected data should be strengthened, with greater attention to safety and pharmacovigilance given the advent of artemisinin combination therapies and the prospect of inadvertent exposures to artemisinins in the first trimester. Integrating monitoring activities within malaria control, reproductive health and adolescent-friendly services will be critical for implementation. Large-scale operational research is required to further evaluate the validity of currently proposed indicators and to clarify the breadth and scale of implementation to be deployed

    Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).

    Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice, but can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment and allows analysis of differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6 and up to 24 months before enrolment) who were identified retrospectively (with baseline and partial follow-up data collected from medical records) and then followed prospectively for 0–18 months (such that the total time of follow-up was 24 months; data collection between Dec-2009 and Oct-2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar-2010 and Oct-2011 and were followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than in the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016). Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take account of differences in registry design and the impact of the recall bias and survivorship bias that are incurred with retrospective enrolment. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF (NCT01090362)
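
For readers unfamiliar with the event-rate metric used above, the sketch below shows how a crude rate per 100 person-years and an approximate Poisson confidence interval are computed. The event counts and person-years are hypothetical placeholders chosen only to land near the published rates (3.04 and 4.05 per 100 person-years); the abstract does not report the underlying counts, and this is not the registry's actual analysis.

```python
import math

def rate_per_100py(events: int, person_years: float, z: float = 1.96):
    """Crude incidence rate per 100 person-years with a log-scale Poisson CI."""
    rate = events / person_years * 100
    se_log = 1 / math.sqrt(events)          # SE of log(rate) under a Poisson model
    return rate, (rate * math.exp(-z * se_log), rate * math.exp(z * se_log))

# Hypothetical counts for illustration only; the abstract reports the rates
# (3.04 vs 4.05 per 100 person-years) but not the underlying events/follow-up.
cohorts = {"retrospectively enrolled": (140, 4_600),
           "prospectively enrolled":   (210, 5_180)}
for label, (events, py) in cohorts.items():
    rate, (lo, hi) = rate_per_100py(events, py)
    print(f"{label}: {rate:.2f} per 100 person-years (95% CI {lo:.2f}-{hi:.2f})")
```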

    Improved risk stratification of patients with atrial fibrillation: an integrated GARFIELD-AF tool for the prediction of mortality, stroke and bleed in patients with and without anticoagulation.

    OBJECTIVES: To provide an accurate, web-based tool for stratifying patients with atrial fibrillation to facilitate decisions on the potential benefits/risks of anticoagulation, based on mortality, stroke and bleeding risks. DESIGN: The new tool was developed, using stepwise regression, for all patients and then applied to lower risk patients. C-statistics were compared with CHA2DS2-VASc using 30-fold cross-validation to control for overfitting. External validation was undertaken in an independent dataset, the Outcome Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). PARTICIPANTS: Data from 39 898 patients enrolled in the prospective GARFIELD-AF registry provided the basis for deriving and validating an integrated risk tool to predict stroke risk, mortality and bleeding risk. RESULTS: The discriminatory value of the GARFIELD-AF risk model was superior to CHA2DS2-VASc for patients with or without anticoagulation. C-statistics (95% CI) for all-cause mortality, ischaemic stroke/systemic embolism and haemorrhagic stroke/major bleeding (treated patients) were: 0.77 (0.76 to 0.78), 0.69 (0.67 to 0.71) and 0.66 (0.62 to 0.69), respectively, for the GARFIELD-AF risk models, and 0.66 (0.64 to 0.67), 0.64 (0.61 to 0.66) and 0.64 (0.61 to 0.68), respectively, for CHA2DS2-VASc (or HAS-BLED for bleeding). In very low to low risk patients (CHA2DS2-VASc 0 or 1 (men) and 1 or 2 (women)), the CHA2DS2-VASc and HAS-BLED (for bleeding) scores offered weak discriminatory value for mortality, stroke/systemic embolism and major bleeding. C-statistics for the GARFIELD-AF risk tool were 0.69 (0.64 to 0.75), 0.65 (0.56 to 0.73) and 0.60 (0.47 to 0.73) for each end point, respectively, versus 0.50 (0.45 to 0.55), 0.59 (0.50 to 0.67) and 0.55 (0.53 to 0.56) for CHA2DS2-VASc (or HAS-BLED for bleeding). Upon validation in the ORBIT-AF population, C-statistics showed that the GARFIELD-AF risk tool was effective for predicting 1-year all-cause mortality using the full and simplified models (C-statistics 0.75 (0.73 to 0.77) and 0.75 (0.73 to 0.77), respectively) and for predicting any stroke or systemic embolism over 1 year (C-statistic 0.68 (0.62 to 0.74)). CONCLUSIONS: Performance of the GARFIELD-AF risk tool was superior to CHA2DS2-VASc in predicting stroke and mortality and superior to HAS-BLED for bleeding, overall and in lower risk patients. The GARFIELD-AF tool has the potential for incorporation in routine electronic systems and, for the first time, permits simultaneous evaluation of ischaemic stroke, mortality and bleeding risks. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF (NCT01090362) and for ORBIT-AF (NCT01165710)
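
The C-statistics reported above measure discrimination: for a binary outcome, the C-statistic equals the probability that a randomly chosen patient who had the event was assigned a higher risk score than one who did not. The sketch below computes this concordance by brute force on toy data; the scores and outcomes are invented for illustration, and the actual GARFIELD-AF analysis used time-to-event endpoints and 30-fold cross-validation.

```python
def c_statistic(scores, outcomes):
    """Probability that a randomly chosen event case scores higher than a
    randomly chosen non-event case (ties count as 0.5)."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    non_events = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e in events for n in non_events)
    return concordant / (len(events) * len(non_events))

# Toy data: a binary 1-year outcome and two competing risk scores standing in
# for a model-based score and CHA2DS2-VASc (all values hypothetical).
outcome     = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
score_model = [7, 2, 3, 6, 1, 5, 4, 2, 6, 3]
score_chads = [3, 2, 3, 2, 1, 3, 4, 2, 2, 1]
print("model score C-statistic:", round(c_statistic(score_model, outcome), 3))
print("CHA2DS2-VASc-style C-statistic:", round(c_statistic(score_chads, outcome), 3))
```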

    Two-year outcomes of patients with newly diagnosed atrial fibrillation: results from GARFIELD-AF.

    AIMS: The relationship between outcomes and time after diagnosis for patients with non-valvular atrial fibrillation (NVAF) is poorly defined, especially beyond the first year. METHODS AND RESULTS: GARFIELD-AF is an ongoing, global observational study of adults with newly diagnosed NVAF. Two-year outcomes of 17 162 patients prospectively enrolled in GARFIELD-AF were analysed in light of baseline characteristics, risk profiles for stroke/systemic embolism (SE), and antithrombotic therapy. The mean (standard deviation) age was 69.8 (11.4) years, 43.8% were women, and the mean CHA2DS2-VASc score was 3.3 (1.6); 60.8% of patients were prescribed anticoagulant therapy with/without antiplatelet (AP) therapy, 27.4% AP monotherapy, and 11.8% no antithrombotic therapy. At 2-year follow-up, all-cause mortality, stroke/SE, and major bleeding had occurred at a rate (95% confidence interval) of 3.83 (3.62; 4.05), 1.25 (1.13; 1.38), and 0.70 (0.62; 0.81) per 100 person-years, respectively. Rates for all three major events were highest during the first 4 months. Congestive heart failure, acute coronary syndromes, sudden/unwitnessed death, malignancy, respiratory failure, and infection/sepsis accounted for 65% of all known causes of death and strokes for <10%. Anticoagulant treatment was associated with a 35% lower risk of death. CONCLUSION: The most frequent of the three major outcome measures was death, whose most common causes are not known to be significantly influenced by anticoagulation. This suggests that a more comprehensive approach to the management of NVAF may be needed to improve outcome. This could include, in addition to anticoagulation, interventions targeting modifiable, cause-specific risk factors for death. CLINICAL TRIAL REGISTRATION: http://www.clinicaltrials.gov. Unique identifier: NCT01090362
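
To put the per-100-person-year rates above into more intuitive terms, the sketch below converts each rate into an approximate 2-year cumulative incidence under a constant-hazard (exponential) assumption. This is a rough translation only; as the abstract notes, event rates were highest during the first 4 months, so the constant-hazard assumption does not strictly hold.

```python
import math

def cumulative_incidence(rate_per_100py: float, years: float) -> float:
    """Approximate cumulative incidence from a crude event rate, assuming a
    constant hazard over the interval (exponential survival model)."""
    hazard = rate_per_100py / 100        # events per person-year
    return 1 - math.exp(-hazard * years)

# Rates reported at the 2-year follow-up (per 100 person-years).
for label, rate in (("all-cause mortality", 3.83),
                    ("stroke/systemic embolism", 1.25),
                    ("major bleeding", 0.70)):
    print(f"{label}: ~{cumulative_incidence(rate, 2):.1%} over 2 years")
```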

    Risk profiles and one-year outcomes of patients with newly diagnosed atrial fibrillation in India: Insights from the GARFIELD-AF Registry.

    BACKGROUND: The Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF) is an ongoing prospective noninterventional registry, which is providing important information on the baseline characteristics, treatment patterns, and 1-year outcomes in patients with newly diagnosed non-valvular atrial fibrillation (NVAF). This report describes data from Indian patients recruited in this registry. METHODS AND RESULTS: A total of 52,014 patients with newly diagnosed AF were enrolled globally; of these, 1388 patients were recruited from 26 sites within India (2012-2016). In India, the mean age was 65.8 years at diagnosis of NVAF. Hypertension was the most prevalent risk factor for AF, present in 68.5% of patients from India and in 76.3% of patients globally (P < 0.001). Diabetes and coronary artery disease (CAD) were prevalent in 36.2% and 28.1% of patients, as compared with global prevalences of 22.2% and 21.6%, respectively (P < 0.001 for both). Antiplatelet therapy was the most common antithrombotic treatment in India. With increasing stroke risk, however, patients were more likely to receive oral anticoagulant therapy [mainly vitamin K antagonist (VKA)], but the international normalized ratio (INR) was lower among Indian patients [median INR 1.6 (interquartile range {IQR}: 1.3-2.3) versus 2.3 (IQR 1.8-2.8); P < 0.001]. Compared with other countries, patients from India had markedly higher rates of all-cause mortality [7.68 per 100 person-years (95% confidence interval 6.32-9.35) vs 4.34 (4.16-4.53), P < 0.0001], while rates of stroke/systemic embolism and major bleeding were lower after 1 year of follow-up. CONCLUSION: Compared to previously published registries from India, the GARFIELD-AF registry describes clinical profiles and outcomes in Indian patients with AF of a different etiology. The registry data show that, compared to the rest of the world, Indian AF patients are younger and have more diabetes and CAD. Patients with a higher stroke risk are more likely to receive anticoagulation therapy with VKA but are underdosed compared with the global average in GARFIELD-AF. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifier: NCT01090362

    Safety, immunogenicity, and reactogenicity of BNT162b2 and mRNA-1273 COVID-19 vaccines given as fourth-dose boosters following two doses of ChAdOx1 nCoV-19 or BNT162b2 and a third dose of BNT162b2 (COV-BOOST): a multicentre, blinded, phase 2, randomised trial


    Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks for 195 countries and territories, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017

    Background The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2017 comparative risk assessment (CRA) is a comprehensive approach to risk factor quantification that offers a useful tool for synthesising evidence on risks and risk–outcome associations. With each annual GBD study, we update the GBD CRA to incorporate improved methods, new risks and risk–outcome pairs, and new data on risk exposure levels and risk–outcome associations. Methods We used the CRA framework developed for previous iterations of GBD to estimate levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs), by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017. This study included 476 risk–outcome pairs that met the GBD study criteria for convincing or probable evidence of causation. We extracted relative risk and exposure estimates from 46 749 randomised controlled trials, cohort studies, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. Using the counterfactual scenario of theoretical minimum risk exposure level (TMREL), we estimated the portion of deaths and DALYs that could be attributed to a given risk. We explored the relationship between development and risk exposure by modelling the relationship between the Socio-demographic Index (SDI) and risk-weighted exposure prevalence and estimated expected levels of exposure and risk-attributable burden by SDI. Finally, we explored temporal changes in risk-attributable DALYs by decomposing those changes into six main component drivers of change as follows: (1) population growth; (2) changes in population age structures; (3) changes in exposure to environmental and occupational risks; (4) changes in exposure to behavioural risks; (5) changes in exposure to metabolic risks; and (6) changes due to all other factors, approximated as the risk-deleted death and DALY rates, where the risk-deleted rate is the rate that would be observed had we reduced the exposure levels to the TMREL for all risk factors included in GBD 2017. Findings In 2017, 34·1 million (95% uncertainty interval [UI] 33·3–35·0) deaths and 1·21 billion (1·14–1·28) DALYs were attributable to GBD risk factors. Globally, 61·0% (59·6–62·4) of deaths and 48·3% (46·3–50·2) of DALYs were attributed to the GBD 2017 risk factors. When ranked by risk-attributable DALYs, high systolic blood pressure (SBP) was the leading risk factor, accounting for 10·4 million (9·39–11·5) deaths and 218 million (198–237) DALYs, followed by smoking (7·10 million [6·83–7·37] deaths and 182 million [173–193] DALYs), high fasting plasma glucose (6·53 million [5·23–8·23] deaths and 171 million [144–201] DALYs), high body-mass index (BMI; 4·72 million [2·99–6·70] deaths and 148 million [98·6–202] DALYs), and short gestation for birthweight (1·43 million [1·36–1·51] deaths and 139 million [131–147] DALYs). In total, risk-attributable DALYs declined by 4·9% (3·3–6·5) between 2007 and 2017. In the absence of demographic changes (ie, population growth and ageing), changes in risk exposure and risk-deleted DALYs would have led to a 23·5% decline in DALYs during that period. Conversely, in the absence of changes in risk exposure and risk-deleted DALYs, demographic changes would have led to an 18·6% increase in DALYs during that period. 
The ratios of observed risk exposure levels to exposure levels expected based on SDI (O/E ratios) increased globally for unsafe drinking water and household air pollution between 1990 and 2017. This result suggests that development is occurring more rapidly than are changes in the underlying risk structure in a population. Conversely, nearly universal declines in O/E ratios for smoking and alcohol use indicate that, for a given SDI, exposure to these risks is declining. In 2017, the leading Level 4 risk factor for age-standardised DALY rates was high SBP in four super-regions: central Europe, eastern Europe, and central Asia; north Africa and Middle East; south Asia; and southeast Asia, east Asia, and Oceania. The leading risk factor in the high-income super-region was smoking, in Latin America and Caribbean was high BMI, and in sub-Saharan Africa was unsafe sex. O/E ratios for unsafe sex in sub-Saharan Africa were notably high, and those for alcohol use in north Africa and the Middle East were notably low. Interpretation By quantifying levels and trends in exposures to risk factors and the resulting disease burden, this assessment offers insight into where past policy and programme efforts might have been successful and highlights current priorities for public health action. Decreases in behavioural, environmental, and occupational risks have largely offset the effects of population growth and ageing, in relation to trends in absolute burden. Conversely, the combination of increasing metabolic risks and population ageing will probably continue to drive the increasing trends in non-communicable diseases at the global level, which presents both a public health challenge and opportunity. We see considerable spatiotemporal heterogeneity in levels of risk exposure and risk-attributable burden. Although levels of development underlie some of this heterogeneity, O/E ratios show risks for which countries are overperforming or underperforming relative to their level of development. As such, these ratios provide a benchmarking tool to help to focus local decision making. Our findings reinforce the importance of both risk exposure monitoring and epidemiological research to assess causal connections between risks and health outcomes, and they highlight the usefulness of the GBD study in synthesising data to draw comprehensive and robust conclusions that help to inform good policy and strategic health planning
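
The attribution machinery described above rests on a counterfactual comparison: the burden observed under the actual exposure distribution is compared with the burden expected if the entire population sat at the theoretical minimum risk exposure level (TMREL). The sketch below illustrates that logic for a single, made-up risk-outcome pair with three exposure categories, plus the observed-to-expected (O/E) exposure ratio mentioned in the Findings; all numbers are hypothetical, and the real GBD CRA works with continuous exposure distributions across 476 risk-outcome pairs.

```python
def attributable_fraction(prevalence, relative_risk):
    """Population attributable fraction relative to a counterfactual in which
    the whole population is at the TMREL (relative risk 1.0 by definition)."""
    observed = sum(p * rr for p, rr in zip(prevalence, relative_risk))
    counterfactual = 1.0                      # everyone at TMREL
    return (observed - counterfactual) / observed

# Hypothetical three-category exposure distribution for one risk-outcome pair.
prevalence    = [0.60, 0.30, 0.10]            # TMREL, moderate, high exposure
relative_risk = [1.00, 1.40, 2.20]            # RR in each exposure category

paf = attributable_fraction(prevalence, relative_risk)
outcome_dalys = 50_000_000                    # hypothetical total DALYs from the outcome
print(f"PAF = {paf:.1%}")
print(f"attributable DALYs = {paf * outcome_dalys:,.0f}")

# O/E ratio: observed risk-weighted exposure prevalence versus the level
# expected for a country's Socio-demographic Index (expected value hypothetical).
observed_exposure, expected_for_sdi = 0.42, 0.35
print(f"O/E ratio = {observed_exposure / expected_for_sdi:.2f}")
```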