42 research outputs found

    The Heritability of Kidney Function Using an Older Australian Twin Population

    Introduction: Twin studies are unique population models that estimate observed rather than inferred genetic components of complex traits. Non-monogenic chronic kidney disease (CKD) is a complex disease process with strong genetic and environmental influences, and is therefore amenable to twin studies. We aimed to assess the heritability of CKD using twin analysis and modeling within Older Australian Twins Study (OATS) data.
    Methods: OATS had 109 dizygotic (DZ) and 126 monozygotic (MZ) twin pairs with paired serum creatinine levels. Heritability of kidney function, as estimated glomerular filtration rate (eGFR, CKD Epidemiology Collaboration [CKD-EPI] equation), was modeled using the ACE model to estimate additive heritability (A) and common (C) and unique (E) environmental factors. Intra-twin-pair analysis using mixed-effects logistic regression allowed analysis of variation in eGFR attributable to established CKD risk factors.
    Results: The median age was 69.71 (interquartile range 78.4–83.0) years, 65% were female, and the mean CKD-EPI eGFR was 82.8 ml/min (SD 6.7). The unadjusted ACE model determined kidney function to be 33% genetically determined (A), 18% common environment (C), and 49% unique environment (E). This remained unchanged when adjusted for age, hypertension, and sex. Hypertension was associated with eGFR; however, inter-twin variance in hypertension did not explain variance in eGFR. Taking two or more antihypertensive medications was associated with decreased eGFR (P = 0.009).
    Conclusion: This study estimates observed heritability at 33%, notably higher than inferred heritability from genome-wide association studies (GWAS) (7.1%–18%). Epigenetics and other genomic phenomena may explain this heritability gap. Differences in antihypertensive medication explain part of the unique environmental exposure, though discordance in hypertension and diabetes does not.
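
    The A/C/E split reported above can be illustrated with Falconer's classical approximation, which backs the three components out of MZ and DZ twin correlations (MZ pairs share essentially all additive genetic variance, DZ pairs about half, and both share the common environment). The study itself fit a formal ACE structural equation model; the sketch below is a minimal illustration, and the twin correlations used are hypothetical values chosen only to reproduce the reported 33/18/49 split, not the study's own estimates.

    ```python
    def ace_from_twin_correlations(r_mz: float, r_dz: float) -> dict:
        """Falconer's approximation to the ACE variance decomposition."""
        a = 2 * (r_mz - r_dz)  # additive genetic variance (A)
        c = 2 * r_dz - r_mz    # common/shared environment (C)
        e = 1 - r_mz           # unique environment plus error (E)
        return {k: round(v, 3) for k, v in {"A": a, "C": c, "E": e}.items()}

    # Hypothetical correlations chosen to reproduce the reported estimates
    print(ace_from_twin_correlations(r_mz=0.51, r_dz=0.345))
    # -> {'A': 0.33, 'C': 0.18, 'E': 0.49}
    ```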

    Outbreak detection algorithms for seasonal disease data: a case study using Ross River virus disease

    Background: Detection of outbreaks is an important part of disease surveillance. Although many algorithms have been designed for detecting outbreaks, few have been specifically assessed against diseases that have distinct seasonal incidence patterns, such as those caused by vector-borne pathogens.
    Methods: We applied five previously reported outbreak detection algorithms to Ross River virus (RRV) disease data (1991-2007) for the four local government areas (LGAs) of Brisbane, Emerald, Redland and Townsville in Queensland, Australia. The methods used were the Early Aberration Reporting System (EARS) C1, C2 and C3 methods, the negative binomial CUSUM (NBC), the historical limits method (HLM), the Poisson outbreak detection (POD) method and the purely temporal SaTScan analysis. Seasonally adjusted variants of the NBC and SaTScan methods were developed. Some of the algorithms were applied using a range of parameter values, resulting in 17 variants of the five algorithms.
    Results: The 9,188 RRV disease notifications that occurred in the four selected regions over the study period showed marked seasonality, which adversely affected the performance of some of the outbreak detection algorithms. Most of the methods examined were able to detect the same major events. The exception was the seasonally adjusted NBC methods, which detected an excess of short signals. The NBC, POD and temporal SaTScan algorithms were the only methods that consistently had high true positive rates and low false positive and false negative rates across the four study areas. The timeliness of outbreak signals generated by each method was also compared, but there was no consistency across outbreaks and LGAs.
    Conclusions: This study has highlighted several issues associated with applying outbreak detection algorithms to seasonal disease data. In lieu of a true gold standard, a quantitative comparison is difficult and caution should be taken when interpreting the true positives, false positives, sensitivity and specificity.
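
    For readers unfamiliar with the EARS family named above, the C1 statistic is essentially a moving z-score: the current day's count is compared against the mean and standard deviation of a short trailing baseline, and an alarm is raised when it exceeds a threshold number of standard deviations. The sketch below is a simplified illustration of that idea (C2 adds a two-day guard band between the baseline and the current day, and C3 accumulates recent C2 values); it is not the implementation assessed in the study, and the series and threshold are illustrative.

    ```python
    import numpy as np

    def ears_c1(counts, threshold=3.0):
        """Flag day t when its count exceeds the mean of the previous
        7 days by more than `threshold` sample standard deviations."""
        counts = np.asarray(counts, dtype=float)
        alarms = []
        for t in range(7, len(counts)):
            baseline = counts[t - 7:t]
            mu, sigma = baseline.mean(), baseline.std(ddof=1)
            sigma = max(sigma, 1e-9)  # guard against a perfectly flat baseline
            if (counts[t] - mu) / sigma > threshold:
                alarms.append(t)
        return alarms

    # Hypothetical daily counts with a spike at index 10
    print(ears_c1([2, 3, 2, 4, 3, 2, 3, 4, 3, 2, 15]))  # -> [10]
    ```

    As the abstract notes, statistics of this form assume a locally stable baseline, which is exactly what strong seasonality violates; hence the seasonally adjusted variants developed in the study.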

    Predictors of significant coronary artery disease in atrial fibrillation: are cardiac troponins a useful measure?

    Background: Cardiac troponin I (cTnI) is frequently measured in patients presenting with symptomatic atrial fibrillation (AF). The significance of elevated cTnI levels in this patient cohort is unclear. We investigated the value of cTnI elevation in this setting and whether it is predictive of significant coronary artery disease (sCAD).
    Methods: We conducted a retrospective, single-center, case–control study of 231 patients who presented with symptomatic AF to The Prince Charles Hospital emergency department, Brisbane, Australia, between 2006 and 2014. Patients who underwent serial cTnI testing and assessment for CAD were included. Clinical variables that are known to predict CAD and could potentially predict cTnI elevation were collected. Binary logistic regression was performed to identify predictors of sCAD and cTnI elevation.
    Results: Cardiac troponin I elevation above the standard cut-off was not predictive of sCAD after adjustment for other predictors (OR 1.62, 95% CI 0.79–3.32, p = 0.19). However, the highest cTnI concentration (cTnI peak) was predictive of sCAD (OR 2.02, 95% CI 1.02–3.97, p = 0.04). Dyspnea on presentation (OR 4.52, 95% CI 1.87–10.91, p = 0.001), known coronary artery disease (OR 3.44, 95% CI 1.42–8.32, p = 0.006), and ST depression on the initial electrocardiogram (OR 2.57, 95% CI 1.11–5.97, p = 0.028) predicted sCAD in our cohort, while heart rate on initial presentation was inversely correlated with sCAD (OR 0.99, 95% CI 0.971–1.00, p = 0.034).
    Conclusion: Troponin elevation is common in patients presenting to hospital with acute symptomatic AF and is not a reliable indicator of underlying sCAD in this patient cohort. However, cTnI peak was a predictor of significant coronary artery disease.
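
    The analysis described is a standard binary logistic regression, with odds ratios obtained by exponentiating the fitted coefficients. A minimal sketch of that workflow is below; the file name and every column name are hypothetical placeholders, and the study's actual variable definitions and model specification are not reproduced here.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical cohort table: one row per patient, binary outcome
    # `significant_cad`, plus candidate predictors as in the abstract.
    df = pd.read_csv("af_cohort.csv")

    model = smf.logit(
        "significant_cad ~ peak_ctni + dyspnea + known_cad"
        " + st_depression + heart_rate",
        data=df,
    ).fit()

    # Exponentiate coefficients and CI bounds to report odds ratios
    ci = model.conf_int()
    print(pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "p": model.pvalues,
    }))
    ```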

    Population screening for glucose-6-phosphate dehydrogenase deficiencies in Isabel Province, Solomon Islands, using a modified enzyme assay on filter paper dried bloodspots

    Background: Glucose-6-phosphate dehydrogenase (G6PD) deficiency poses a significant impediment to primaquine use for the elimination of liver-stage infection with Plasmodium vivax and for gametocyte clearance, because of the risk of life-threatening haemolytic anaemia that can occur in G6PD-deficient patients. Although a range of methods for screening G6PD deficiency have been described, almost all require skilled personnel, expensive laboratory equipment and freshly collected blood, and are time consuming; factors that render them unsuitable for mass-screening purposes.
    Methods: A published WST8/1-methoxy PMS method was adapted to assay G6PD activity in a 96-well format using dried blood spots, and was used to undertake population screening within a malaria survey in Isabel Province, Solomon Islands. The assay results were compared to a biochemical test and a recently marketed rapid diagnostic test.
    Results: Comparative testing with the biochemical and rapid diagnostic tests indicated that results obtained by the filter paper assay were accurate provided that blood spots were assayed within 5 days when stored at ambient temperature, or within 10 days when stored at 4°C. Screening of 8,541 people from 41 villages in Isabel Province, Solomon Islands revealed a prevalence of G6PD deficiency, defined as enzyme activity < 30% of the normal control, of 20.3%, and a prevalence of severe deficiency that would predispose to primaquine-induced haemolysis (WHO Class I-II) of 6.9%.
    Conclusions: The assay enabled simple and quick semi-quantitative population screening in a malaria-endemic region. The study indicated a high prevalence of G6PD deficiency in Isabel Province and highlights the critical need to consider G6PD deficiency in the context of P. vivax malaria elimination strategies in Solomon Islands, particularly in light of the potential role of primaquine mass drug administration.
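
    The screening decision described above reduces to normalising each sample's assayed activity against a normal control and applying activity cut-offs. The sketch below illustrates that logic; the 30% cut-off for deficiency is the study's definition, while the 10% cut-off used here for severe (WHO Class I-II) deficiency is an assumed, commonly cited threshold rather than one taken from the paper.

    ```python
    def classify_g6pd(sample_activity: float, control_activity: float,
                      deficient_cutoff: float = 0.30,
                      severe_cutoff: float = 0.10) -> str:
        """Classify G6PD status from enzyme activity relative to a
        normal control run on the same plate. The 0.30 cut-off follows
        the study; the 0.10 severe cut-off is an assumption."""
        fraction = sample_activity / control_activity
        if fraction < severe_cutoff:
            return "severe deficiency (WHO Class I-II)"
        if fraction < deficient_cutoff:
            return "deficient"
        return "normal"

    print(classify_g6pd(sample_activity=0.8, control_activity=10.0))
    # -> 'severe deficiency (WHO Class I-II)'
    ```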

    Global sequence variation in the histidine-rich proteins 2 and 3 of Plasmodium falciparum: implications for the performance of malaria rapid diagnostic tests

    Background: Accurate diagnosis is essential for prompt and appropriate treatment of malaria. While rapid diagnostic tests (RDTs) offer great potential to improve malaria diagnosis, the sensitivity of RDTs has been reported to be highly variable. One possible factor contributing to variable test performance is the diversity of parasite antigens. This is of particular concern for Plasmodium falciparum histidine-rich protein 2 (PfHRP2)-detecting RDTs, since PfHRP2 has been reported to be highly variable in isolates of the Asia-Pacific region.
    Methods: The pfhrp2 exon 2 fragment from 458 isolates of P. falciparum collected from 38 countries was amplified and sequenced. For a subset of 80 isolates, the exon 2 fragment of the histidine-rich protein 3 gene (pfhrp3) was also amplified and sequenced. DNA sequence and statistical analyses of the variation observed in these genes were conducted. The potential impact of pfhrp2 variation on RDT detection rates was examined by analysing the relationship between sequence characteristics of this gene and the results of the WHO product testing of malaria RDTs: Round 1 (2008), for 34 PfHRP2-detecting RDTs.
    Results: Sequence analysis revealed extensive variation in the number and arrangement of various repeats encoded by the genes in parasite populations worldwide. However, no statistically robust correlation between gene structure and RDT detection rate for P. falciparum parasites at 200 parasites per microlitre was identified.
    Conclusions: The results suggest that, despite extreme sequence variation, diversity of PfHRP2 does not appear to be a major cause of RDT sensitivity variation.
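
    The core of the sequence analysis described above is tallying how often each histidine-rich repeat motif occurs in the translated pfhrp2 exon 2 sequence. The sketch below shows one way to count possibly overlapping motif occurrences; the two motif strings are examples of repeat types discussed in this literature and are given as assumed illustrations, not the study's full repeat catalogue.

    ```python
    import re

    # Example histidine-rich repeat motifs (assumed illustrations)
    REPEAT_MOTIFS = {
        "type2": "AHHAHHAAD",
        "type7": "AHHAAD",
    }

    def count_repeats(protein_seq: str) -> dict:
        """Count overlapping occurrences of each motif using a lookahead."""
        return {name: len(re.findall(f"(?={motif})", protein_seq))
                for name, motif in REPEAT_MOTIFS.items()}

    print(count_repeats("AHHAHHAADAHHAADAHHAHHAAD"))
    # -> {'type2': 2, 'type7': 3}
    ```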

    Spatial-temporal epidemiological analyses of two sympatric, co-endemic alphaviral diseases in Queensland, Australia

    Background: The two most commonly reported mosquito-borne diseases in Queensland, a northern state of Australia, are Ross River virus (RRV) disease and Barmah Forest virus (BFV) disease. Both diseases are endemic in Queensland and have similar clinical symptoms and comparable transmission cycles involving a complex inter-relationship between human hosts, various mosquito vectors, and a range of nonhuman vertebrate hosts, including marsupial mammals that are unique to the Australasian region. Although these viruses are thought to share similar vectors and vertebrate hosts, RRV is four times more prevalent than BFV in Queensland.
    Methods: We performed a retrospective analysis of BFV and RRV human disease notification data collected from 1995 to 2007 in Queensland to ascertain whether there were differences in the incidence patterns of RRV and BFV disease. In particular, we compared the temporal incidence and spatial distribution of both diseases and considered the relationship between their disease dynamics. We also investigated whether a peak in BFV incidence during spring was indicative of the incidence levels of the following RRV and BFV transmission season.
    Results: Although there were large differences in the notification rates of the two diseases, they had similar annual temporal patterns, with regional variations in the length and magnitude of the transmission seasons. During periods of increased disease activity, however, there was no association between the dynamics of the two diseases.
    Conclusions: The results from this study suggest that while RRV and BFV share similar mosquito vectors, there are significant differences in the ecology of these viruses that result in different epidemic patterns of disease incidence. Further investigation into the ecology of each virus is required to determine which factors are important in promoting RRV and BFV disease outbreaks.
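
    One simple way to probe the kind of temporal relationship examined here is to aggregate notifications by month and correlate the two series at a range of lags. The sketch below illustrates this; the input file and its `date`/`disease` columns are hypothetical, and the study's actual spatial-temporal methods were more involved than a lagged correlation.

    ```python
    import pandas as pd

    # Hypothetical line list: one notification per row
    df = pd.read_csv("notifications.csv", parse_dates=["date"])
    monthly = (df.groupby([pd.Grouper(key="date", freq="M"), "disease"])
                 .size()
                 .unstack(fill_value=0))

    rrv, bfv = monthly["RRV"], monthly["BFV"]
    for lag in range(-3, 4):
        r = rrv.corr(bfv.shift(lag))  # pairwise-complete correlation
        print(f"lag {lag:+d} months: r = {r:.2f}")
    ```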

    Can the learning curve of laparoscopic sacrocolpopexy be reduced by a structured training program?

    Objective: The aim of this study was to establish whether the learning curve for laparoscopic sacral colpopexy (LSC) could be significantly reduced by a structured learning program.
    Methods: We conducted a prospective study aimed at mapping the learning curve of LSC in the setting of a structured learning program for a urogynecology fellow at the Royal Brisbane and Women's Hospital. The fellow was naive to laparoscopic suturing and dissection at the commencement of her fellow position and was required to assist in 20 LSCs, video-edit 2 procedures, and undertake laparoscopic suturing and knot-tying training on a laparoscopic trainer for 2 h/wk during the trial period. After the completion of this structured learning program, the fellow began performing LSC as the primary surgeon. Symptomatic assessment of pelvic organ prolapse and pelvic floor dysfunction was undertaken preoperatively and 12 months postoperatively using the Australian Pelvic Floor Questionnaire. Objective success at 12 months was defined as less than stage 2 prolapse in any compartment. Subjective success was defined as no prolapse on questions 28 to 31 of the Australian Pelvic Floor Questionnaire, and patient-determined success was defined as "much better" or "very much better" on the Patient Global Impression of Improvement at 12 months.
    Results: Five consecutive LSCs in 90 minutes or less, without intraoperative or postoperative complications, were achieved by case 18. Overall objective success at 12 months was 91%, and subjective and patient-determined success was 95%.
    Conclusion: Previous studies on LSC that report a similar learning curve have recorded much longer operating times. We believe that the shorter operating time, without compromise to outcomes or complication rates, is a result of the structured learning program.
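
    The proficiency criterion quoted in the results (five consecutive cases in 90 minutes or less) can be expressed as a simple run-detection rule over the sequence of operating times. The sketch below shows that rule; the operating times are made-up illustrative values, not the study's data.

    ```python
    def first_proficient_case(times_min, run=5, cutoff=90):
        """Return the case number (1-based) that completes the first run of
        `run` consecutive procedures finished in `cutoff` minutes or less."""
        streak = 0
        for case, t in enumerate(times_min, start=1):
            streak = streak + 1 if t <= cutoff else 0
            if streak == run:
                return case
        return None

    # Hypothetical operating times (minutes) for 18 consecutive cases
    times = [150, 140, 135, 120, 125, 110, 100, 95, 105, 92,
             98, 88, 93, 85, 84, 89, 86, 82]
    print(first_proficient_case(times))  # -> 18
    ```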

    Factors associated with infants receiving their mother's own breast milk on discharge from hospital in a unit where pasteurised donor human milk is available

    Aim: To determine the proportion of very preterm infants who were exclusively fed breast milk at the time of discharge home, before and after the availability of pasteurised donor human milk (PDHM). Methods: This was an observational retrospective cohort study with historical comparison, comparing two cohorts

    Repeat thermal ablation for local progression of lung tumours: how safe and efficacious is it?

    Aim: To retrospectively evaluate the safety and efficacy of repeat thermal ablation for local progression of lung tumours after prior ablation(s).
    Methods: From December 2009 to March 2017, 13 patients underwent repeat ablation (11 repeat microwave ablations and 2 repeat radiofrequency ablations) of a lung tumour (9 non-small cell lung carcinomas, 3 metastatic colorectal adenocarcinomas, 1 metastatic pelvic sarcoma) for local progression after prior ablation(s). Safety of the procedure was assessed by the presence or absence of adverse events. Efficacy of the procedure was assessed by local tumour response to ablation and survival time.
    Results: Repeat ablation procedures were safe, without major adverse events. Median length of hospital stay was 2 days (interquartile range 1-2). Pneumothorax was the most common complication, occurring in 5 (38%) of 13 repeat ablation procedures. There was one death within 30 days of ablation, but the cause of death and its relation to the procedure were unknown. Of the 12 patients with imaging follow-up (median follow-up 26 months, range 3-62), 10 (83%) had complete ablation and 2 (17%) had local progression. Of all 13 patients, 8 (62%) were alive and 5 (38%) had died, with a median overall survival of 43 months (95% confidence interval 36-49 months).
    Conclusion: Repeat ablation of locally progressing tumours after prior ablation attempt(s) is a safe therapeutic option and often achieves local tumour control.
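
    The median overall survival quoted above would conventionally be estimated with a Kaplan-Meier analysis, which treats patients still alive at last follow-up as censored observations. A minimal sketch using the lifelines library is below; the durations and event flags are hypothetical stand-ins, not the study's patient data.

    ```python
    from lifelines import KaplanMeierFitter

    # Hypothetical follow-up in months; 1 = death observed, 0 = censored
    durations = [3, 8, 12, 20, 26, 36, 43, 44, 50, 55, 58, 62, 62]
    events    = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=events)
    print(kmf.median_survival_time_)  # estimated median OS (months)
    print(kmf.confidence_interval_)   # pointwise 95% CI for S(t)
    ```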