
    Correlates of Complete Childhood Vaccination in East African Countries.

    Despite the benefits of childhood vaccinations, vaccination rates in low-income countries (LICs) vary widely. Increasing coverage of vaccines to 90% in the poorest countries over the next 10 years has been estimated to prevent 426 million cases of illness and avert nearly 6.4 million childhood deaths worldwide. Consequently, we sought to provide a comprehensive examination of contemporary vaccination patterns in East Africa and to identify common and country-specific barriers to complete childhood vaccination. Using data from the Demographic and Health Surveys (DHS) for Burundi, Ethiopia, Kenya, Rwanda, Tanzania, and Uganda, we examined the prevalence of complete vaccination for polio, measles, Bacillus Calmette-Guérin (BCG) and DTwPHibHep (DTP), as recommended by the WHO, among children ages 12 to 23 months. We conducted multivariable logistic regression within each country, using backwards stepwise selection, to estimate associations between complete vaccination status and health care access and sociodemographic variables. Vaccination varied significantly by country. In all countries, the majority of children received at least one dose of a WHO-recommended vaccine; however, in Ethiopia, Tanzania, and Uganda, less than 50% of children received the complete schedule of recommended vaccines. Delivery in a public or private institution, compared with delivery at home, was associated with increased odds of complete vaccination. Sociodemographic covariates were not consistently associated with complete vaccination status across countries. Although no consistent set of predictors accounted for complete vaccination status, we observed differences based on region and location of delivery. These differences point to the need to examine the historical, political, and economic context of each country in order to maximize vaccination coverage. Vaccination against these childhood diseases is a critical step towards reaching the Millennium Development Goal of reducing under-five mortality by two-thirds by 2015 and should therefore be a global priority.
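
    As a rough illustration of the analytic approach described in this abstract (not the authors' actual code or data), the sketch below fits a multivariable logistic regression with backwards stepwise elimination on synthetic data; the variable names (facility_delivery, wealth_quintile, etc.), simulated effect sizes, and the 0.05 retention threshold are all hypothetical assumptions.

        # Illustrative sketch only: backwards stepwise multivariable logistic regression
        # of complete vaccination status on synthetic, hypothetical covariates.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 1000
        df = pd.DataFrame({
            "facility_delivery": rng.integers(0, 2, n),   # institutional vs. home delivery
            "maternal_education": rng.integers(0, 4, n),  # hypothetical sociodemographic covariates
            "urban_residence": rng.integers(0, 2, n),
            "wealth_quintile": rng.integers(1, 6, n),
        })
        # Simulate an outcome in which facility delivery raises the odds of complete vaccination.
        logit = 0.8 * df["facility_delivery"] + 0.1 * df["maternal_education"] - 1.0
        df["complete_vaccination"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        def backwards_stepwise_logit(data, outcome, predictors, alpha=0.05):
            """Drop the least significant predictor until every remaining p-value < alpha."""
            predictors = list(predictors)
            while predictors:
                X = sm.add_constant(data[predictors])
                model = sm.Logit(data[outcome], X).fit(disp=0)
                pvals = model.pvalues.drop("const")
                if pvals.max() < alpha:
                    return model
                predictors.remove(pvals.idxmax())
            return None

        final = backwards_stepwise_logit(
            df, "complete_vaccination",
            ["facility_delivery", "maternal_education", "urban_residence", "wealth_quintile"])
        if final is not None:
            print(np.exp(final.params))  # odds ratios for the retained covariates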

    Separate cortical stages in amodal completion revealed by functional magnetic resonance adaptation

    BACKGROUND: Objects in our environment are often partly occluded, yet we effortlessly perceive them as whole and complete. This phenomenon is called visual amodal completion. Psychophysical investigations suggest that the process of completion starts from a representation of the (visible) physical features of the stimulus and ends with a completed representation of the stimulus. The goal of our study was to investigate both stages of the completion process by localizing both the brain regions involved in processing the physical features of the stimulus and the brain regions representing the completed stimulus. RESULTS: Using fMRI adaptation we reveal clearly distinct regions in the visual cortex of humans involved in processing of amodal completion: early visual cortex (presumably V1) processes the local contour information of the stimulus, whereas regions in the inferior temporal cortex represent the completed shape. Furthermore, our data suggest that at the level of inferior temporal cortex the original local contour information is not preserved but is replaced by the representation of the amodally completed percept. CONCLUSION: These findings provide neuroimaging evidence for a multiple-step theory of amodal completion and further insights into the neuronal correlates of visual perception.

    The Deposition and Accumulation of Microplastics in Marine Sediments and Bottom Water from the Irish Continental Shelf

    Microplastics are widely dispersed throughout the marine environment. An understanding of the distribution and accumulation of this form of pollution is crucial for gauging environmental risk. Presented here is the first record of plastic contamination, in the 5 mm–250 μm size range, of Irish continental shelf sediments. Sixty-two microplastics were recovered from 10 of 11 stations using box cores. 97% of recovered microplastics were found to reside shallower than 2.5 cm sediment depth, with the area of highest microplastic concentration being the water-sediment interface and top 0.5 cm of sediments (66%). Microplastics were not found deeper than 3.5 ± 0.5 cm. These findings demonstrate that microplastic contamination is ubiquitous within superficial sediments and bottom water along the western Irish continental shelf. Results highlight that cores need to be at least 4–5 cm deep to quantify the standing stock of microplastics within marine sediments. All recovered microplastics were classified as secondary microplastics as they appear to be remnants of larger items; fibres were the principal form of microplastic pollution (85%), followed by broken fragments (15%). The range of polymer types, colours and physical forms recovered suggests a variety of sources. Further research is needed to understand the mechanisms influencing microplastic transport, deposition, resuspension and subsequent interactions with biota.

    A prospective registry of emergency department patients admitted with infection

    BACKGROUND: Patients with infections account for a significant proportion of Emergency Department (ED) workload, with many hospital patients admitted with severe sepsis initially investigated and resuscitated in the ED. The aim of this registry is to systematically collect quality observational clinical and microbiological data regarding emergency patients admitted with infection, in order to explore in detail the microbiological profile of these patients, and to provide the foundation for a significant programme of prospective observational studies and further clinical research. METHODS/DESIGN: ED patients admitted with infection will be identified through daily review of the computerised database of ED admissions, and clinical information such as site of infection, physiological status in the ED, and components of management will be abstracted from patients' charts. This information will be supplemented by further data regarding results of investigations, microbiological isolates, and length of stay (LOS) from hospital electronic databases. Outcome measures will be hospital and intensive care unit (ICU) LOS, and mortality endpoints derived from a national death registry. DISCUSSION: This database will provide substantial insights into the characteristics, microbiological profile, and outcomes of emergency patients admitted with infections. It will become the nidus for a programme of research into compliance with evidence-based guidelines, optimisation of empiric antimicrobial regimens, validation of clinical decision rules and identification of outcome determinants. The detailed observational data obtained will provide a solid baseline to inform the design of further controlled trials planned to optimise treatment and outcomes for emergency patients admitted with infections.

    Knowledge of ghostwriting and financial conflicts-of-interest reduces the perceived credibility of biomedical research

    BACKGROUND: While the impact of conflicts-of-interest (COI) is of increasing concern in academic medicine, there is little research on the reaction of practicing clinicians to the disclosure of such conflicts. We developed two research vignettes presenting a fictional antidepressant medication study, one in which the principal investigator had no COI and another in which there were multiple COI disclosed. We confirmed the face validity of the COI vignette through consultation with experts. Hospital-based clinicians were randomly assigned to read one of these two vignettes and then administered a credibility scale. FINDINGS: Perceived credibility ratings were much lower in the COI group, with a difference of 11.00 points (31.42%) on the credibility scale total as calculated through the Mann-Whitney U test (95% CI = 6.99 - 15.00, p < .001). Clinicians in the COI group were also less likely to recommend the antidepressant medication discussed in the vignette (Odds Ratio = 0.163, 95% CI = 0.03 - 0.875). CONCLUSIONS: In this study, increased disclosure of COI resulted in lower credibility ratings.
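
    A minimal sketch of the kind of two-group comparison reported here, assuming made-up credibility scores for the two vignette groups; the group sizes, score distributions, and variable names are illustrative assumptions, not the study's data.

        # Illustrative sketch only: Mann-Whitney U comparison of credibility ratings
        # between a no-COI group and a COI group, using synthetic scores.
        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(1)
        no_coi = rng.normal(35, 6, 60)   # hypothetical credibility totals, no-conflict vignette
        coi = rng.normal(24, 6, 60)      # hypothetical totals, multiple-COI vignette

        stat, p = mannwhitneyu(no_coi, coi, alternative="two-sided")
        diff = np.median(no_coi) - np.median(coi)
        print(f"U = {stat:.1f}, p = {p:.4g}, median difference = {diff:.1f} points")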

    Cognitive health among older adults in the United States and in England

    BACKGROUND: Cognitive function is a key determinant of independence and quality of life among older adults. Compared to adults in England, US adults have a greater prevalence of cardiovascular risk factors and disease that may lead to poorer cognitive function. We compared cognitive performance of older adults in the US and England, and sought to identify sociodemographic and medical factors associated with differences in cognitive function between the two countries. METHODS: Data were from the 2002 waves of the US Health and Retirement Study (HRS) (n = 8,299) and the English Longitudinal Study of Ageing (ELSA) (n = 5,276), nationally representative population-based studies designed to facilitate direct comparisons of health, wealth, and well-being. There were differences in the administration of the HRS and ELSA surveys, including use of both telephone and in-person administration of the HRS compared to only in-person administration of the ELSA, and a significantly higher response rate for the HRS (87% for the HRS vs. 67% for the ELSA). In each country, we assessed cognitive performance in non-Hispanic whites aged 65 and over using the same tests of memory and orientation (0 to 24 point scale). RESULTS: US adults scored significantly better than English adults on the 24-point cognitive scale (unadjusted mean: 12.8 vs. 11.4, P < .001; age- and sex-adjusted: 13.2 vs. 11.7, P < .001). The US cognitive advantage was apparent even though US adults had a significantly higher prevalence of cardiovascular risk factors and disease. In a series of OLS regression analyses that controlled for a range of sociodemographic and medical factors, higher levels of education and wealth, and lower levels of depressive symptoms, accounted for some of the US cognitive advantage. US adults were also more likely to be taking medications for hypertension, and hypertension treatment was associated with significantly better cognitive function in the US, but not in England (P = .014 for treatment × country interaction). CONCLUSION: Despite methodological differences in the administration of the surveys in the two countries, US adults aged ≥ 65 appeared to be cognitively healthier than English adults, even though they had a higher burden of cardiovascular risk factors and disease. Given the growing number of older adults worldwide, future cross-national studies aimed at identifying the medical and social factors that might prevent or delay cognitive decline in older adults would make important and valuable contributions to public health.
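
    The following sketch illustrates, on synthetic data, the kind of OLS models described: an age- and sex-adjusted country comparison, a model adding education, and a treatment-by-country interaction term. The variable names (us, htn_treated, education_years) and simulated effects are hypothetical assumptions, not the HRS or ELSA data.

        # Illustrative sketch only: OLS models of a 0-24 cognitive score on country,
        # covariates, and a hypertension-treatment-by-country interaction.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 2000
        df = pd.DataFrame({
            "us": rng.integers(0, 2, n),            # 1 = US (HRS), 0 = England (ELSA)
            "age": rng.integers(65, 95, n),
            "female": rng.integers(0, 2, n),
            "education_years": rng.integers(6, 18, n),
            "htn_treated": rng.integers(0, 2, n),   # taking antihypertensive medication
        })
        # Simulate a cognitive score with a small US advantage partly explained by education.
        df["cognition"] = (10 + 1.2 * df["us"] + 0.3 * df["education_years"]
                           - 0.15 * (df["age"] - 65) + rng.normal(0, 3, n)).clip(0, 24)

        base = smf.ols("cognition ~ us + age + female", data=df).fit()
        adjusted = smf.ols("cognition ~ us + age + female + education_years", data=df).fit()
        interaction = smf.ols("cognition ~ us * htn_treated + age + female", data=df).fit()

        print("US gap, age/sex-adjusted:", round(base.params["us"], 2))
        print("US gap after adding education:", round(adjusted.params["us"], 2))
        print("treatment x country interaction p-value:",
              round(interaction.pvalues["us:htn_treated"], 3))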

    Control of interjoint coordination during the swing phase of normal gait at different speeds

    BACKGROUND: It has been suggested that the control of unconstrained movements is simplified via the imposition of a kinetic constraint that produces dynamic torques at each moving joint such that they are a linear function of a single motor command. The linear relationship between dynamic torques at each joint has been demonstrated for multijoint upper limb movements. The purpose of the current study was to test the applicability of such a control scheme to the unconstrained portion of the gait cycle – the swing phase. METHODS: Twenty-eight neurologically normal individuals walked along a track at three different speeds. Angular displacements and dynamic torques produced at each of the three lower limb joints (hip, knee and ankle) were calculated from segmental position data recorded during each trial. We employed principal component (PC) analysis to determine (1) the similarity of kinematic and kinetic time series at the ankle, knee and hip during the swing phase of gait, and (2) the effect of walking speed on the range of joint displacement and torque. RESULTS: The angular displacements of the three joints were accounted for by two PCs during the swing phase (variance accounted for – PC1: 75.1 ± 1.4%, PC2: 23.2 ± 1.3%), whereas the dynamic joint torques were described by a single PC (variance accounted for – PC1: 93.8 ± 0.9%). Increases in walking speed were associated with increases in the range of motion and magnitude of torque at each joint, although the ratio describing the relative magnitude of torque at each joint remained constant. CONCLUSION: Our results support the idea that the control of leg swing during gait is simplified in two ways: (1) the pattern of dynamic torque at each lower limb joint is produced by appropriately scaling a single motor command, and (2) the magnitude of dynamic torque at all three joints can be specified with knowledge of the magnitude of torque at a single joint. Walking speed could therefore be altered by modifying a single value related to the magnitude of torque at one joint.
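
    A small sketch of the principal component analysis described above, applied to synthetic hip, knee, and ankle torque waveforms over the swing phase; the waveform shape and scaling factors are invented for illustration and are not the study's data.

        # Illustrative sketch only: PCA of three joint torque time series to estimate
        # the variance accounted for by each principal component.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        t = np.linspace(0, 1, 101)              # normalised swing phase (0-100%)
        base = np.sin(2 * np.pi * t)            # shared underlying torque waveform
        torques = np.column_stack([
            1.0 * base + 0.05 * rng.normal(size=t.size),   # hip
            0.6 * base + 0.05 * rng.normal(size=t.size),   # knee
            0.2 * base + 0.05 * rng.normal(size=t.size),   # ankle
        ])

        pca = PCA(n_components=3).fit(torques)
        print(pca.explained_variance_ratio_ * 100)   # % variance accounted for by PC1-PC3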

    Epidemiology and seasonality of respiratory viral infections in hospitalized children in Kuala Lumpur, Malaysia: a retrospective study of 27 years

    BACKGROUND: Viral respiratory tract infections (RTI) are relatively understudied in Southeast Asian tropical countries. In temperate countries, seasonal activity of respiratory viruses has been reported, particularly in association with temperature, while inconsistent correlation of respiratory viral activity with humidity and rain is found in tropical countries. A retrospective study was performed from 1982 to 2008 to investigate the viral etiology of RTI in children (≤ 5 years old) admitted to a tertiary hospital in Kuala Lumpur, Malaysia. METHODS: A total of 10269 respiratory samples from all children ≤ 5 years old received at the hospital's diagnostic virology laboratory between 1982 and 2008 were included in the study. Immunofluorescence staining (for respiratory syncytial virus (RSV), influenza A and B, parainfluenza types 1-3, and adenovirus) and virus isolation were performed. The yearly hospitalization rates and annual patterns of laboratory-confirmed viral RTIs were determined. Univariate ANOVA was used to analyse the demographic parameters of cases. Multiple regression and Spearman's rank correlation were used to analyse the correlation between RSV cases and meteorological parameters. RESULTS: A total of 2708 cases were laboratory-confirmed using immunofluorescence assays and viral cultures, with the most commonly detected being RSV (1913, 70.6%), parainfluenza viruses (357, 13.2%), influenza viruses (297, 11.0%), and adenovirus (141, 5.2%). Children infected with RSV were significantly younger, and children infected with influenza viruses were significantly older. The four main viruses caused disease throughout the year, with a seasonal peak observed for RSV in September-December. Monthly RSV cases were directly correlated with rain days, and inversely correlated with relative humidity and temperature. CONCLUSION: Viral RTIs, particularly due to RSV, are commonly detected in respiratory samples from hospitalized children in Kuala Lumpur, Malaysia. As in temperate countries, RSV infection in tropical Malaysia also caused seasonal yearly epidemics, and this has implications for prophylaxis and vaccination programmes.
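
    As a hedged illustration of the correlation and regression analyses mentioned above, the sketch below computes Spearman rank correlations and a multiple regression between synthetic monthly RSV case counts and meteorological variables; the column names and simulated relationships are assumptions for demonstration only.

        # Illustrative sketch only: Spearman correlations and multiple regression of
        # monthly RSV case counts against rain days, humidity and temperature.
        import numpy as np
        import pandas as pd
        from scipy.stats import spearmanr
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        months = 12 * 27                                  # 27 years of monthly observations
        df = pd.DataFrame({
            "rain_days": rng.integers(5, 25, months),
            "humidity": rng.uniform(70, 95, months),      # mean relative humidity (%)
            "temperature": rng.uniform(26, 30, months),   # mean temperature (deg C)
        })
        # Simulate monthly RSV counts that rise with rain days and fall with humidity.
        df["rsv_cases"] = (5 + 0.5 * df["rain_days"] - 0.3 * (df["humidity"] - 80)
                           + rng.normal(0, 3, months)).round().clip(lower=0)

        for var in ["rain_days", "humidity", "temperature"]:
            rho, p = spearmanr(df["rsv_cases"], df[var])
            print(f"{var}: rho = {rho:.2f}, p = {p:.3f}")

        model = smf.ols("rsv_cases ~ rain_days + humidity + temperature", data=df).fit()
        print(model.params)   # joint association with the meteorological variables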

    Challenges to undertaking randomised trials with looked after children in social care settings.

    BACKGROUND: Randomised controlled trials (RCTs) are widely viewed as the gold standard for assessing effectiveness in health research; however, many researchers and practitioners believe that RCTs are inappropriate and undoable in social care settings, particularly in relation to looked after children. The aim of this article is to describe the challenges faced in conducting a pilot study and phase II RCT of a peer mentoring intervention to reduce teenage pregnancy in looked after children in a social care setting. METHODS: Interviews were undertaken with social care professionals and looked after children, and a survey conducted with looked after children, to establish the feasibility and acceptability of the intervention and research design. RESULTS: Barriers to recruitment and to managing the intervention were identified, including social workers acting as informal gatekeepers; social workers' concerns and misconceptions about the recruitment criteria and the need for and purpose of randomisation; resource limitations, which made it difficult to prioritise research over other demands on their time; and difficulties in engaging and retaining looked after children in the study. CONCLUSIONS: The relative absence of a research infrastructure and culture in social care, and the lack of research support funding available for social care agencies compared to health organisations, have implications for increasing evidence-based practice in social care settings, particularly with this very vulnerable group of young people.

    History of clinical transplantation

    The emergence of transplantation has seen the development of increasingly potent immunosuppressive agents, progressively better methods of tissue and organ preservation, refinements in histocompatibility matching, and numerous innovations in surgical techniques. Such efforts in combination ultimately made it possible to successfully engraft all of the organs and bone marrow cells in humans. At a more fundamental level, however, the transplantation enterprise hinged on two seminal turning points. The first was the recognition by Billingham, Brent, and Medawar in 1953 that it was possible to induce chimerism-associated neonatal tolerance deliberately. This discovery escalated over the next 15 years to the first successful bone marrow transplantations in humans in 1968. The second turning point was the demonstration during the early 1960s that canine and human organ allografts could self-induce tolerance with the aid of immunosuppression. By the end of 1962, however, it had been incorrectly concluded that turning points one and two involved different immune mechanisms. The error was not corrected until well into the 1990s. In this historical account, the vast literature that sprang up during the intervening 30 years has been summarized. Although admirably documenting empiric progress in clinical transplantation, its failure to explain organ allograft acceptance predestined organ recipients to lifetime immunosuppression and precluded fundamental changes in treatment policies. After it was discovered in 1992 that long-surviving organ transplant recipients had persistent microchimerism, it was possible to see the mechanistic commonality of organ and bone marrow transplantation. A clarifying central principle of immunology could then be synthesized with which to guide efforts to induce tolerance systematically to human tissues and perhaps ultimately to xenografts.