Risk factors for hospital admission with RSV bronchiolitis in England: a population-based birth cohort study.
OBJECTIVE: To examine the timing and duration of RSV bronchiolitis hospital admission among term and preterm infants in England and to identify risk factors for bronchiolitis admission.
DESIGN: A population-based birth cohort with follow-up to age 1 year, using the Hospital Episode Statistics database. SETTING: 71 hospitals across England.
PARTICIPANTS: We identified 296,618 individual birth records from 2007/08 and linked them to subsequent hospital admission records during the first year of life.
RESULTS: In our cohort there were 7189 hospital admissions with a diagnosis of bronchiolitis, or 24.2 admissions per 1000 infants under 1 year (95% CI 23.7-24.8); 15% of these (1050/7189) were infants born preterm (47.3 bronchiolitis admissions per 1000 preterm infants, 95% CI 44.4-50.2). The peak age group for bronchiolitis admissions was infants aged 1 month and the median age at admission was 120 days (IQR 61-209 days). The median length of stay was 1 day (IQR 0-3). The relative risk (RR) of a bronchiolitis admission was higher among infants with known risk factors for severe RSV infection, including those born preterm (RR = 1.9, 95% CI 1.8-2.0) compared with infants born at term. Other conditions also significantly increased the risk of bronchiolitis admission, including Down's syndrome (RR = 2.5, 95% CI 1.7-3.7) and cerebral palsy (RR = 2.4, 95% CI 1.5-4.0).
CONCLUSIONS: Most (85%) of the infants admitted to hospital with bronchiolitis in England are born at term with no known predisposing risk factors for severe RSV infection, although the risk of admission is higher in known risk groups. The early age of bronchiolitis admissions has important implications for the potential impact and timing of future active and passive immunisations. More research is needed to explain why babies born with Down's syndrome and cerebral palsy are also at higher risk of hospital admission with RSV bronchiolitis.
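The relative risks quoted above (e.g. RR = 1.9 for preterm birth) are ratios of admission rates between exposed and unexposed infants. A minimal sketch of that calculation, with a 95% confidence interval computed on the log scale (the standard Katz method); the counts here are illustrative, not the study's actual denominators:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event in an exposed group (a events
    among n1 individuals) versus an unexposed group (b events
    among n2), with a Wald-type 95% CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts: 50 admissions per 1000 exposed infants
# versus 25 per 1000 unexposed gives RR = 2.0.
rr, lo, hi = relative_risk(50, 1000, 25, 1000)
```

With these illustrative counts the point estimate is RR = 2.0, with a CI of roughly 1.2 to 3.2; the study's published RRs would come from its own numerators and denominators.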
Measuring diet in primary school children aged 8-11 years: validation of the Child and Diet Evaluation Tool (CADET) with an emphasis on fruit and vegetable intake.
Background/Objectives: The Child And Diet Evaluation Tool (CADET) is a 24-h food diary that measures the nutritional intake of children aged 3-7 years, with a focus on fruit and vegetable consumption. Until now CADET has not been used to measure the nutrient intake of children aged 8-11 years. To ensure that newly assigned portion sizes for this older age group were valid, participants were asked to complete the CADET diary (the school and home food diary) concurrently with a 1-day weighed record. Subjects/Methods: A total of 67 children with a mean age of 9.3 years (s.d. ± 1.4; 51% girls) participated in the study. Total fruit and vegetable intake in grams and other nutrients were extracted to compare the mean intakes from the CADET diary and the weighed record using t-tests and Pearson's r correlations. Bland-Altman analysis was also conducted to assess agreement between the two methods. Results: Correlations between the CADET diary and the weighed record were high for fruit, vegetables, and combined fruit and vegetables (r = 0.7). The Bland-Altman plots revealed a mean difference of 54 g (95% confidence interval: -88, 152) for combined fruit and vegetable intake. CADET is the only tool recommended by the National Obesity Observatory that has been validated in a UK population and provides nutrient-level data on children's diets. Conclusions: CADET can provide high-quality nutrient data suitable for evaluating intervention studies in children aged 3-11 years, with a focus on fruit and vegetable intake.
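A Bland-Altman analysis like the one above summarises agreement between two measurement methods via the mean of the paired differences (the bias) and limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch using made-up paired intakes, not the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired
    measurement methods (e.g. a food diary vs a weighed record).
    Returns (bias, lower limit, upper limit) in the measurement's
    units (here, grams)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical gram intakes from two methods for four children.
bias, lo, hi = bland_altman([100, 120, 80, 150], [90, 125, 85, 140])
```

In practice the differences are also plotted against the per-pair means to check whether agreement varies with intake level.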
Determining the date of diagnosis – is it a simple matter? The impact of different approaches to dating diagnosis on estimates of delayed care for ovarian cancer in UK primary care
Background: Studies of cancer incidence and early management will increasingly draw on routine electronic patient records. However, data may be incomplete or inaccurate. We developed a generalisable strategy for investigating presenting symptoms and delays in diagnosis, using ovarian cancer as an example. Methods: The General Practice Research Database was used to investigate the time between first report of a symptom and diagnosis for 344 women diagnosed with ovarian cancer between 01/06/2002 and 31/05/2008. Effects of possible inaccuracies in the dating of diagnosis on the frequencies and timing of the most commonly reported symptoms were investigated using four increasingly inclusive definitions of first diagnosis/suspicion: 1. "definite diagnosis"; 2. "ambiguous diagnosis"; 3. "first treatment or complication suggesting pre-existing diagnosis"; 4. "first relevant test or referral". Results: The most commonly coded symptoms before a definite diagnosis of ovarian cancer were abdominal pain (41%), urogenital problems (25%), abdominal distension (24%) and constipation/change in bowel habits (23%), with 70% of cases reporting at least one of these. The median time between first report of each of these symptoms and diagnosis was 13, 21, 9.5 and 8.5 weeks respectively. 19% of cases had a code meeting definitions 2 or 3 prior to definite diagnosis, and 73% a code meeting definition 4. However, the proportion with symptoms and the delays were similar for all four definitions except definition 4, where the median delays were 8, 8, 3, 10 and 0 weeks respectively. Conclusion: Symptoms recorded in the General Practice Research Database are similar to those reported in the literature, although their frequency is lower than in studies based on self-report. Generalisable strategies for exploring the impact of recording practice on the date of diagnosis in electronic patient records are recommended, and studies that date diagnoses from GP records should present sensitivity analyses based on investigation, referral and diagnosis data.
Free text information may be essential in obtaining accurate estimates of incidence and for accurate dating of diagnoses.
Correlates of Complete Childhood Vaccination in East African Countries.
Despite the benefits of childhood vaccinations, vaccination rates in low-income countries (LICs) vary widely. Increasing coverage of vaccines to 90% in the poorest countries over the next 10 years has been estimated to prevent 426 million cases of illness and avert nearly 6.4 million childhood deaths worldwide. Consequently, we sought to provide a comprehensive examination of contemporary vaccination patterns in East Africa and to identify common and country-specific barriers to complete childhood vaccination. Using data from the Demographic and Health Surveys (DHS) for Burundi, Ethiopia, Kenya, Rwanda, Tanzania and Uganda, we examined the prevalence of complete vaccination for polio, measles, Bacillus Calmette-Guérin (BCG) and DTwPHibHep (DTP) as recommended by the WHO among children aged 12 to 23 months. We conducted multivariable logistic regression within each country to estimate associations between complete vaccination status and health care access and sociodemographic variables, using backwards stepwise regression. Vaccination varied significantly by country. In all countries, the majority of children received at least one dose of a WHO-recommended vaccine; however, in Ethiopia, Tanzania and Uganda less than 50% of children received the complete schedule of recommended vaccines. Delivery in a public or private institution, compared with delivery at home, was associated with increased odds of complete vaccination. Sociodemographic covariates were not consistently associated with complete vaccination status across countries. Although no consistent set of predictors accounted for complete vaccination status, we observed differences based on region and the location of delivery. These differences point to the need to examine the historical, political and economic context of each country in order to maximize vaccination coverage.
Vaccination against these childhood diseases is a critical step towards reaching the Millennium Development Goal of reducing under-five mortality by two-thirds by 2015 and thus should be a global priority.
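The association reported above (facility delivery versus home delivery and complete vaccination) comes from logistic regression, whose effect estimates are odds ratios. A minimal sketch of the unadjusted version from a 2x2 table, with a Wald 95% CI; the counts here are hypothetical, not the DHS data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio with Wald 95% CI from a 2x2 table:
    a/b = vaccinated/unvaccinated among facility births,
    c/d = vaccinated/unvaccinated among home births."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: 80/20 vaccinated among facility births
# versus 50/50 among home births gives OR = 4.0.
or_, lo, hi = odds_ratio(80, 20, 50, 50)
```

A multivariable model, as used in the study, would additionally adjust this estimate for the sociodemographic covariates.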
Subregional hippocampal morphology and psychiatric outcome in adolescents who were born very preterm and at term
Background: The hippocampus has been reported to be structurally and functionally altered as a sequela of very preterm birth (<33 weeks gestation), possibly due to its vulnerability to hypoxic-ischemic damage in the neonatal period. We examined hippocampal volumes and subregional morphology in very preterm born individuals in mid- and late adolescence, and their association with psychiatric outcome. Methods: Structural brain magnetic resonance images were acquired at two time points (baseline and follow-up) from 65 ex-preterm adolescents (mean age = 15.5 and 19.6 years) and 36 term-born controls (mean age = 15.0 and 19.0 years). Hippocampal volumes and subregional morphometric differences were measured from manual tracings and with three-dimensional shape analysis. Psychiatric outcome was assessed with the Rutter Parents' Scale at baseline, the General Health Questionnaire at follow-up and the Peters Delusional Inventory at both time points. Results: In contrast to previous studies, we did not find a significant difference in cross-sectional or longitudinal hippocampal volumes between individuals born preterm and controls, despite preterm individuals having significantly smaller whole-brain volumes. Shape analysis at baseline revealed subregional deformations, reflecting atrophy, over 28% of the total bilateral hippocampal surface in ex-preterm individuals compared to controls, and over 22% at follow-up. In ex-preterm individuals, longitudinal changes in hippocampal shape accounted for 11% of the total surface, while in controls they reached 20%. In the whole sample (both groups), larger right hippocampal volume and bilateral anterior surface deformations at baseline were associated with delusional ideation scores at follow-up. Conclusions: This study suggests a dynamic association between cross-sectional hippocampal volumes, longitudinal changes, surface deformations and psychosis proneness.
Options for early breast cancer follow-up in primary and secondary care: a systematic review
Background
Both the incidence of breast cancer and survival have increased in recent years, and there is a need to review follow-up strategies. This study aims to assess the evidence for the benefits of follow-up in different settings for women who have had treatment for early breast cancer.
Method
A systematic review was conducted to identify key criteria for follow-up and then to address the research questions. Key criteria were: 1) risk of second breast cancer over time, as incidence compared to the general population; 2) incidence and method of detection of local recurrence and second ipsilateral and contralateral breast cancer; 3) level 1-4 evidence of the benefits of hospital or alternative-setting follow-up for survival and well-being. Data sources used to identify the criteria were MEDLINE, EMBASE, AMED, CINAHL, PsycINFO, ZETOC, Health Management Information Consortium and Science Direct. For the systematic review addressing the research questions, searches were performed using MEDLINE (2011). Studies included were population studies using cancer registry data for the incidence of new cancers, cohort studies with long-term follow-up for recurrence and detection of new primaries, and RCTs not restricted to special populations for trials of alternative follow-up and lifestyle interventions.
Results
Women who have had breast cancer have an increased risk of a second primary breast cancer for at least 20 years compared to the general population. Local recurrences detected by mammography or by women themselves were associated with better survival than those detected by clinical examination. Follow-up in settings other than the specialist clinic is acceptable to women, but trials are underpowered for survival.
Conclusions
Long-term support, surveillance mammography and fast access to medical treatment at the point of need may be better than hospital-based surveillance limited to five years, but further large randomised controlled trials are needed.
Are autistic traits measured equivalently in individuals with and without an Autism Spectrum Disorder? An invariance analysis of the Autism Spectrum Quotient Short Form
It is common to administer measures of autistic traits to those without autism spectrum disorders (ASDs), for example with the aim of understanding autistic personality characteristics in non-autistic individuals. Little research has examined the extent to which measures of autistic traits actually measure the same traits in the same way across those with and without an ASD. We addressed this question using a multi-group confirmatory factor invariance analysis of the Autism Quotient Short Form (AQ-S: Hoekstra et al. in J Autism Dev Disord 41(5):589-596, 2011) across those with (n = 148) and without (n = 168) an ASD. Metric invariance (equality of factor loadings), but not scalar invariance (equality of thresholds), held, suggesting that the AQ-S measures the same latent traits in both groups, but with a bias in the manner in which trait levels are estimated. We therefore argue that the AQ-S can be used to investigate possible causes and consequences of autistic traits in each group separately, but caution is needed when combining or comparing levels of autistic traits across the two groups.
Accuracy of Malaria Rapid Diagnostic Tests in Community Studies and their Impact on Treatment of Malaria in an Area with Declining Malaria Burden in North-Eastern Tanzania.
Despite some problems related to their accuracy and applicability, malaria rapid diagnostic tests (RDTs) are currently the best option in areas with limited laboratory services for improving case management through parasitological diagnosis and reducing over-treatment. This study was conducted in areas with a declining malaria burden to assess: 1) the accuracy of RDTs when used in different community settings; 2) the impact of using RDTs on anti-malarial dispensing by community-owned resource persons (CORPs); and 3) adherence of CORPs to treatment guidelines when providing treatment based on RDT results. Data were obtained from: 1) a longitudinal study of passive case detection of fevers using CORPs in six villages in Korogwe; and 2) cross-sectional surveys (CSS) in six villages of Korogwe and Muheza districts, north-eastern Tanzania. Performance of RDTs was compared with microscopy as a gold standard, and factors affecting their accuracy were explored using a multivariate logistic regression model. Overall sensitivity and specificity of RDTs in the longitudinal study (of 23,793 febrile cases, 18,154 with both microscopy and RDT results) were 88.6% and 88.2%, respectively. In the CSS, the sensitivity was significantly lower (63.4%; χ2 = 367.7, p < 0.001), while the specificity was significantly higher (94.3%; χ2 = 143.1, p < 0.001) compared to the longitudinal study. Among determinants of RDT sensitivity in both studies, a parasite density of <200 asexual parasites/μl was significantly associated with a high risk of false-negative RDTs (OR ≥ 16.60, p < 0.001), while the risk of a false-negative test was significantly lower among cases with fever (axillary temperature ≥37.5°C) (OR ≤ 0.63, p ≤ 0.027). The risk of a false-positive RDT (as a determinant of specificity) was significantly higher in cases with fever compared to afebrile cases (OR ≥ 2.40, p < 0.001). Using RDTs reduced anti-malarial dispensing from 98.9% to 32.1% in cases aged ≥5 years.
Although RDTs had low sensitivity and specificity, which varied widely depending on fever and parasite density, using RDTs significantly reduced over-treatment with anti-malarials. Thus, with declining malaria prevalence, RDTs will potentially identify the majority of febrile cases with parasites and lead to improved management of malaria and non-malarial fevers.
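The sensitivity and specificity figures above come from cross-tabulating RDT results against microscopy as the gold standard. A minimal sketch of the 2x2-table arithmetic, with illustrative counts rather than the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Accuracy of a test (e.g. an RDT) against a gold standard
    (e.g. microscopy), from the four cells of a 2x2 table:
    tp/fn = test positive/negative among truly infected,
    fp/tn = test positive/negative among truly uninfected."""
    sensitivity = tp / (tp + fn)  # detected fraction of infections
    specificity = tn / (tn + fp)  # correctly negative fraction
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for 1000 febrile cases.
sens, spec, ppv, npv = diagnostic_accuracy(85, 30, 15, 870)
```

Unlike sensitivity and specificity, the predictive values depend on prevalence: as the malaria burden declines, the positive predictive value of the same test falls, which is one reason the setting matters for interpreting RDT results.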
Safety, tumor trafficking and immunogenicity of chimeric antigen receptor (CAR)-T cells specific for TAG-72 in colorectal cancer.
Background: T cells engineered to express chimeric antigen receptors (CARs) have established efficacy in the treatment of B-cell malignancies, but their relevance in solid tumors remains undefined. Here we report results of the first human trials of CAR-T cells in the treatment of solid tumors, performed in the 1990s. Methods: Patients with metastatic colorectal cancer (CRC) were treated in two phase 1 trials with first-generation, retrovirally transduced CAR-T cells targeting tumor-associated glycoprotein (TAG)-72 and incorporating a CD3-zeta intracellular signaling domain (CART72 cells). In trials C-9701 and C-9702, CART72 cells were administered in escalating doses up to 10^10 total cells; in trial C-9701 CART72 cells were administered by intravenous infusion, while in trial C-9702 they were administered via direct hepatic artery infusion in patients with colorectal liver metastases. In both trials, a brief course of interferon-alpha (IFN-α) was given with each CART72 infusion to upregulate expression of TAG-72. Results: Fourteen patients were enrolled in C-9701 and nine in C-9702. The CART72 manufacturing success rate was 100%, with an average transduction efficiency of 38%. Ten patients were treated in C-9701 and six in C-9702. Symptoms consistent with low-grade cytokine release syndrome were observed in both trials, without clear evidence of on-target/off-tumor toxicity. Detectable but mostly short-term (≤14 weeks) persistence of CART72 cells was observed in blood; one patient had CART72 cells detectable at 48 weeks. Trafficking to tumor tissues was confirmed in a tumor biopsy from one of three patients. A subset of patients received 111Indium-labeled CART72 cells; trafficking to the liver could be detected, but T cells appeared largely excluded from large metastatic deposits.
Tumor biomarkers carcinoembryonic antigen (CEA) and TAG-72 were measured in serum; there was a precipitous decline of TAG-72, but not CEA, in some patients, due to the induction of an interfering antibody to the TAG-72 binding domain of humanized CC49, reflecting an anti-CAR immune response. No radiologic tumor responses were observed. Conclusion: These findings demonstrate the relative safety of CART72 cells. The limited persistence supports the incorporation of co-stimulatory domains in the CAR design and the use of fully human CAR constructs to mitigate immunogenicity.
