
    Evaluation of stability of directly standardized rates for sparse data using simulation methods.

    Background Directly standardized rates (DSRs) adjust for differing age distributions, enabling disease rates in different populations to be compared directly. They are routinely published, but there is concern that a DSR is not valid when it is based on a “small” number of events. The aim of this study was to determine the threshold below which a DSR should not be published when analyzing real data in England. Methods Standard Monte Carlo simulation techniques were used, assuming that the numbers of events in 19 age groups (i.e., 0–4, 5–9, ... 90+ years) follow independent Poisson distributions. The total number of events, the age-specific risks, and the population sizes in each age group were varied. For each of 10,000 simulations the DSR (using the 2013 European Standard Population weights) was calculated, together with the coverage of three different methods (normal approximation, Dobson, and Tiwari modified gamma) of estimating the 95% confidence intervals (CIs). Results The normal approximation was, as expected, not suitable for use when fewer than 100 events occurred. The Tiwari and Dobson methods produced similar confidence intervals, and either was suitable when the expected or observed number of events was 10 or greater. The accuracy of the CIs was not influenced by the distribution of events across categories (i.e., the degree of clustering, the age distributions of the sampling populations, or the number of categories with no events occurring in them). Conclusions DSRs should not be given when the total observed number of events is less than 10. The Dobson method may be preferred because its formulae are simpler than the Tiwari method's and its coverage is slightly more accurate.
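The simulation design described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the age groups, weights, risks, and population sizes below are hypothetical placeholders (the study used 19 groups and the 2013 ESP weights), and only the normal-approximation CI is checked for coverage.

```python
import math
import random

def poisson(lam, rng):
    """Draw one Poisson variate via Knuth's method (adequate for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical example with 5 age groups (the study used 19)
weights = [0.30, 0.25, 0.20, 0.15, 0.10]       # standard-population weights, sum to 1
risks   = [0.001, 0.002, 0.004, 0.008, 0.016]  # true age-specific event risks
pops    = [20000, 15000, 10000, 5000, 2500]    # population size per age group

true_dsr = sum(w * r for w, r in zip(weights, risks))

rng = random.Random(1)
n_sims, covered = 10_000, 0
for _ in range(n_sims):
    counts = [poisson(r * n, rng) for r, n in zip(risks, pops)]
    dsr = sum(w * x / n for w, x, n in zip(weights, counts, pops))
    var = sum(w * w * x / (n * n) for w, x, n in zip(weights, counts, pops))
    half = 1.96 * math.sqrt(var)          # normal-approximation 95% CI half-width
    if dsr - half <= true_dsr <= dsr + half:
        covered += 1

coverage = covered / n_sims
print(f"true DSR = {true_dsr:.5f}, normal-CI coverage = {coverage:.3f}")
```

With expected counts well above 100 in total, the normal-approximation coverage sits near the nominal 95%; shrinking the risks or populations so that few events occur reproduces the under-coverage the study reports.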

    The chicken or the egg? Exploring bi-directional associations between Newcastle disease vaccination and village chicken flock size in rural Tanzania

    Newcastle disease (ND) is a viral disease of poultry with global importance, responsible for the loss of a potential source of household nutrition and economic livelihood in many low-income food-deficit countries. Periodic outbreaks of this endemic disease result in high mortality amongst free-ranging chicken flocks and may serve as a disincentive for rural households to invest time or resources in poultry-keeping. Sustainable ND control can be achieved through vaccination using a thermotolerant vaccine administered via eyedrop by trained "community vaccinators". This article evaluates the uptake and outcomes of fee-for-service ND vaccination programs in eight rural villages in the semi-arid central zone of Tanzania. It represents part of an interdisciplinary program seeking to address chronic undernutrition in children through improvements to existing poultry and crop systems. Newcastle disease vaccination uptake was found to vary substantially across communities and seasons, with a significantly higher level of vaccination amongst households participating in a longitudinal study of children's growth compared with non-participating households (p = 0.009). Two multivariable model analyses were used to explore associations between vaccination and chicken numbers, allowing for clustered data and socioeconomic and cultural variation amongst the population. Results demonstrated that both (a) households that undertook ND vaccination had a significantly larger chicken flock size in the period between that vaccination campaign and the next compared with those that did not vaccinate (p = 0.018); and (b) households with larger chicken flocks at the time of vaccination were significantly more likely to participate in vaccination programs (p < 0.001). Additionally, households vaccinating in all three vaccination campaigns held over 12 months were identified to have significantly larger chicken flocks at the end of this period (p < 0.001). 
Opportunities to understand causality and complexity through quantitative analyses are limited, and there is a role for qualitative approaches in exploring the decisions made by poultry-keeping households and the motivations, challenges and priorities of community vaccinators. Evidence of a bi-directional relationship, whereby vaccination leads to greater chicken numbers and larger flocks are more likely to be vaccinated, nevertheless offers useful insight into the efficacy of fee-for-service animal health programs. This article concludes that attention should be focused on ways of supporting the participation of vulnerable households in ND vaccination campaigns, and on encouraging regular vaccination throughout the year, as a pathway to strengthen food security, promote resilience and contribute to improved human nutrition.

    Human epididymis protein 4 reference limits and natural variation in a Nordic reference population

    The objectives of this study were to establish reference limits for human epididymis protein 4 (HE4) and to investigate factors influencing HE4 levels in healthy subjects. HE4 was measured in 1,591 samples from the Nordic Reference Interval Project Bio-bank and Database biobank, using the manual HE4 EIA (Fujirebio) for 802 samples and the Architect HE4 (Abbott) for 792 samples. Reference limits were calculated using the statistical software R. The influence of donor characteristics such as age, sex, body mass index, smoking habits, and creatinine on HE4 levels was investigated using a multivariate model. The study showed that age is the main determinant of HE4 in healthy subjects, corresponding to 2% higher HE4 levels at 30 years (compared with 20 years), 9% at 40 years, 20% at 50 years, 37% at 60 years, 63% at 70 years, and 101% at 80 years. HE4 levels are 29% higher in smokers than in nonsmokers. In conclusion, HE4 levels in healthy subjects are associated with age and smoking status. Age-dependent reference limits are suggested.
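The decade-by-decade percentages quoted above can be turned into a rough age multiplier by linear interpolation. This is a sketch built only from the figures in the abstract, not the study's fitted model; the interpolation between decades is an assumption for illustration.

```python
# Relative HE4 level versus a 20-year-old, from the percentages in the abstract
AGE_FACTORS = [(20, 1.00), (30, 1.02), (40, 1.09), (50, 1.20),
               (60, 1.37), (70, 1.63), (80, 2.01)]

def he4_age_multiplier(age):
    """Linearly interpolate the expected relative HE4 level for ages 20-80."""
    if not 20 <= age <= 80:
        raise ValueError("interpolation only defined for ages 20-80")
    for (a0, f0), (a1, f1) in zip(AGE_FACTORS, AGE_FACTORS[1:]):
        if a0 <= age <= a1:
            return f0 + (f1 - f0) * (age - a0) / (a1 - a0)

print(round(he4_age_multiplier(50), 2))  # → 1.2
```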

    Survey context and question wording affects self reported annoyance due to road traffic noise: a comparison between two cross-sectional studies

    Background Surveys are a common way to measure annoyance due to road traffic noise, but the method has some drawbacks. Survey context, question wording and answer alternatives could affect participation and answers, with implications for comparing studies and/or performing pooled analyses. The aim of this study was to investigate differences in annoyance reporting due to road traffic noise between two types of surveys, of which one was introduced broadly and the other with the clearly stated aim of investigating noise and health. Methods Data were collected from two surveys carried out in the municipality of Malmö, southern Sweden, in 2007 and 2008 (n = 2612 and n = 3810). The first survey stated an aim of investigating residential environmental exposure, especially noise and health. The second was a broad public health survey stating a broader aim. The two surveys had comparable questions regarding noise annoyance, although one used a 5-point scale and the other a 4-point scale. We used geographic information systems (GIS) to assess the average road and railway noise (LAeq,24h) at the participants' residential addresses. Logistic regression was used to calculate odds ratios for annoyance in relation to noise exposure. Results Annoyance at least once a week due to road traffic noise was significantly more prevalent in the survey investigating environment and health than in the public health survey at levels > 45 dB(A), but not at lower exposure levels. However, no differences in annoyance were found when comparing the extreme alternatives "never" and "every day". In the survey investigating environment and health, "noise sensitive" persons were more likely to respond readily to the survey and were more annoyed by road traffic noise than the other participants. Conclusions The differences in annoyance reporting between the two surveys were mainly due to the different scales, suggesting that the extreme alternatives are preferable to dichotomization when comparing results between the two. Although some findings suggested that noise-sensitive individuals were more likely to respond to the survey investigating noise and health, we could not find convincing evidence that contextual differences affected either answers or participation.
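The odds ratios above come from logistic regression on the survey data; the underlying quantity is the same one obtained from a dichotomized 2x2 table. A minimal sketch with hypothetical counts (not the study's data), using the standard Wald CI on the log odds ratio:

```python
import math

# Hypothetical 2x2 table: annoyed vs. not annoyed, by exposure above/below 45 dB(A)
a, b = 120, 380   # exposed:   annoyed, not annoyed
c, d = 60, 640    # unexposed: annoyed, not annoyed

odds_ratio = (a * d) / (b * c)
# Wald standard error of log(OR): sqrt of summed reciprocal cell counts
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A regression model, as used in the study, generalizes this by adjusting the same log-odds comparison for covariates.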

    The association between hip fracture and hip osteoarthritis: A case-control study

    Background There have been reports both supporting and refuting an inverse relationship between hip fracture and hip osteoarthritis (OA). We explored this relationship using a case-control study design. Methods Exclusion criteria were previous hip fracture (same or contralateral side), age younger than 60 years, foreign nationality, pathological fracture, rheumatoid arthritis, and cases where radiographic examinations were not found in the archives. We studied all subjects with hip fracture remaining after the exclusion process who were treated at Akureyri University Hospital, Iceland, 1990-2008 (n = 562, 74% women). Hip fracture cases were compared with a cohort of subjects with colon radiographs (n = 803, 54% women) to determine the expected population prevalence of hip OA. Presence of radiographic hip OA was defined as a minimum joint space of 2.5 mm or less on an anteroposterior radiograph, or Kellgren and Lawrence grade 2 or higher. Possible causes of secondary osteoporosis were identified by review of medical records. Results The age-adjusted odds ratio (OR) for subjects with hip fracture having radiographic hip OA was 0.30 (95% confidence interval [95% CI] 0.12-0.74) for men and 0.33 (95% CI 0.19-0.58) for women, compared to controls. The probability that subjects with hip fracture and hip OA had a secondary cause of osteoporosis was three times higher than for subjects with hip fracture without hip OA. Conclusion The results of our study support an inverse relationship between hip fracture and hip OA.

    B Vitamins, Methionine and Alcohol Intake and Risk of Colon Cancer in Relation to BRAF Mutation and CpG Island Methylator Phenotype (CIMP)

    One-carbon metabolism appears to play an important role in DNA methylation reactions. Evidence suggests that a low intake of B vitamins or high alcohol consumption increases colorectal cancer risk. How one-carbon nutrients affect the CpG island methylator phenotype (CIMP) or BRAF mutation status in colon cancer remains uncertain. Utilizing incident colon cancers in a large prospective cohort of women (the Nurses' Health Study), we determined BRAF status (N = 386) and CIMP status (N = 375) by 8 CIMP-specific markers [CACNA1G, CDKN2A (p16), CRABP1, IGF2, MLH1, NEUROG1, RUNX3, and SOCS1] and 8 other CpG islands (CHFR, HIC1, IGFBP3, MGMT, MINT-1, MINT-31, p14, and WRN). We examined the relationship between intake of one-carbon nutrients and alcohol and colon cancer risk, by BRAF mutation or CIMP status. Higher folate intake was associated with a trend towards lower risk of CIMP-low/0 tumors [total folate intake ≥400 µg/day vs. <200 µg/day; multivariate relative risk = 0.73; 95% CI = 0.53-1.02], whereas total folate intake had no influence on CIMP-high tumor risk (P(heterogeneity) = 0.73). Neither vitamin B6, methionine, nor alcohol intake appeared to differentially influence the risks of CIMP-high and CIMP-low/0 tumors. Using the 16-marker CIMP panel did not substantially alter our results. B vitamin, methionine and alcohol intakes did not affect colon cancer risk differentially by BRAF status. This molecular pathological epidemiology study suggests that low folate intake may be associated with an increased risk of CIMP-low/0 colon tumors, but not of CIMP-high tumors. However, the difference between CIMP-high and CIMP-low/0 cancer risks was not statistically significant, and additional studies are necessary to confirm these observations.

    A methodological framework to distinguish spectrum effects from spectrum biases and to assess diagnostic and screening test accuracy for patient populations: Application to the Papanicolaou cervical cancer smear test

    Background A spectrum effect was defined as differences in the sensitivity or specificity of a diagnostic test according to the patient's characteristics or disease features. A spectrum effect can lead to a spectrum bias when subgroup variations in sensitivity or specificity also affect the likelihood ratios and thus the post-test probabilities. We propose and illustrate a methodological framework to distinguish spectrum effects from spectrum biases. Methods Data were collected for 1781 women who had a cervical smear test and colposcopy, followed by biopsy if abnormalities were detected (the reference standard). Logistic models were constructed to evaluate both the sensitivity and specificity, and the likelihood ratios, of the test, and to identify factors independently affecting the test's characteristics. Results For both tests, human papillomavirus test, study setting and age affected the sensitivity or specificity of the smear test (spectrum effect), but only human papillomavirus test and study setting modified the likelihood ratios (spectrum bias) for clinical reading, whereas only human papillomavirus test and age modified the likelihood ratios (spectrum bias) for "optimized" interpretation. Conclusion Fitting sensitivity, specificity and likelihood ratios simultaneously allows the identification of covariates that independently affect diagnostic or screening test results and distinguishes spectrum effects from spectrum biases. We recommend this approach for the development of new tests and for reporting test accuracy in different patient populations.
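The effect/bias distinction above rests on how sensitivity and specificity combine into likelihood ratios, which in turn drive post-test probabilities. A minimal numeric sketch with hypothetical subgroup values (not the study's estimates): two subgroups whose sensitivity and specificity both differ can still share the same LR+ (spectrum effect without bias), while a third subgroup with a different LR+ shifts post-test probabilities (spectrum bias).

```python
def lrs(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1 - spec), (1 - sens) / spec

def post_test_prob(pre, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    odds = pre / (1 - pre) * lr
    return odds / (1 + odds)

lr_pos_a, _ = lrs(0.80, 0.90)    # subgroup A: LR+ = 8
lr_pos_b, _ = lrs(0.60, 0.925)   # subgroup B: different sens/spec, LR+ still 8
                                 #   -> spectrum effect, no bias
lr_pos_c, _ = lrs(0.90, 0.95)    # subgroup C: LR+ = 18 -> spectrum bias

# Post-test probability at a 10% pre-test probability, subgroup A
print(round(post_test_prob(0.10, lr_pos_a), 3))  # → 0.471
```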

    The Genomic Ancestry of Individuals from Different Geographical Regions of Brazil Is More Uniform Than Expected

    Based on pre-DNA racial/color methodology, clinical and pharmacological trials have traditionally considered the different geographical regions of Brazil as being very heterogeneous. We wished to ascertain how such diversity of regional color categories correlated with ancestry. Using a panel of 40 validated ancestry-informative insertion-deletion DNA polymorphisms we estimated individually the European, African and Amerindian ancestry components of 934 self-categorized White, Brown or Black Brazilians from the four most populous regions of the Country. We unraveled great ancestral diversity between and within the different regions. Especially, color categories in the northern part of Brazil diverged significantly in their ancestry proportions from their counterparts in the southern part of the Country, indicating that diverse regional semantics were being used in the self-classification as White, Brown or Black. To circumvent these regional subjective differences in color perception, we estimated the general ancestry proportions of each of the four regions in a form independent of color considerations. For that, we multiplied the proportions of a given ancestry in a given color category by the official census information about the proportion of that color category in the specific region, to arrive at a “total ancestry” estimate. Once such a calculation was performed, there emerged a much higher level of uniformity than previously expected. In all regions studied, the European ancestry was predominant, with proportions ranging from 60.6% in the Northeast to 77.7% in the South. We propose that the immigration of six million Europeans to Brazil in the 19th and 20th centuries - a phenomenon described and intended as the “whitening of Brazil” - is in large part responsible for dissipating previous ancestry dissimilarities that reflected region-specific population histories. 
These findings, of both clinical and sociological importance for Brazil, should also be relevant to other countries with ancestrally admixed populations.
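The "total ancestry" calculation described above is a census-weighted average: multiply the mean ancestry proportion within each color category by that category's census share, then sum. A minimal sketch with hypothetical figures (not the study's regional estimates):

```python
# Hypothetical region: census share of each self-declared color category
census_share = {"White": 0.55, "Brown": 0.35, "Black": 0.10}

# Hypothetical mean European ancestry proportion within each category
european_ancestry = {"White": 0.80, "Brown": 0.65, "Black": 0.45}

# Total European ancestry = sum over categories of (census share x mean ancestry)
total_european = sum(census_share[c] * european_ancestry[c] for c in census_share)
print(f"total European ancestry = {total_european:.3f}")
```

The same weighting applied with each region's own census shares is what lets ancestry be compared across regions independently of how color labels are used locally.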

    Effects of Alcohol on the Acquisition and Expression of Fear Potentiated Startle in Mouse Lines Selectively Bred for High and Low Alcohol Preference

    Rationale: Anxiety disorders and alcohol-use disorders frequently co-occur in humans, perhaps because alcohol relieves anxiety. Studies in humans and rats indicate that alcohol may have greater anxiolytic effects in organisms with an increased genetic propensity for high alcohol consumption. Objectives and Methods: The purpose of this study was to investigate the effects of moderate doses of alcohol (0.5, 1.0, 1.5 g/kg) on the acquisition and expression of anxiety-related behavior using a fear-potentiated startle (FPS) procedure. Experiments were conducted in two replicate pairs of mouse lines selectively bred for high- (HAP1 and HAP2) and low- (LAP1 and LAP2) alcohol preference; these lines have previously shown a genetic correlation between alcohol preference and FPS (HAP > LAP; Barrenha and Chester 2007). In a control experiment, the effect of diazepam (4.0 mg/kg) on the expression of FPS was tested in HAP2 and LAP2 mice. Results: The 1.5 g/kg alcohol dose moderately decreased the expression of FPS in both HAP lines but not in the LAP lines. Alcohol had no effect on the acquisition of FPS in any line. Diazepam reduced FPS to a similar extent in both HAP2 and LAP2 mice. Conclusions: HAP mice may be more sensitive to the anxiolytic effects of alcohol than LAP mice when alcohol is given prior to the expression of FPS. These data, collected in two pairs of HAP/LAP mouse lines, suggest that the anxiolytic response to alcohol in HAP mice may be genetically correlated with their propensity toward high alcohol preference and robust FPS.