14 research outputs found

    Supporting the timely, easy and cost effective access to high-quality linked data via the Custodian-Controlled Data Repository

    Introduction: While linkage units perform population data linkage with high efficiency, other parts of the workflow from custodians to researchers remain largely outside the control of linkage operators. Most importantly, resource constraints at data custodians often limit quality control (QC) efforts and lead to delays in data delivery to researchers. Objectives and Approach: To overcome these challenges, we have created the Data Integration Unit (DIU) to undertake content data management and delivery in conjunction with the custodians, who remain the principal data curators. Data are managed in the Custodian-Controlled Data Repository (CCDR), a highly secure virtual repository for data storage, analysis and access, established and operated by the DIU. Stringent controls for user access and data flows ensure that data are provided safely by custodians. Overseen by the custodians, DIU staff undertake QC activities, and integrate and deliver multiple datasets for approved linkage projects. Results: Long-term data storage in the CCDR decreases data custodian workloads by reducing the frequency of content data provision to periodic updates. Feedback loops built into the QC process allow custodians to improve their datasets by learning from data issues identified by the DIU. Extensive QC undertaken by DIU staff on individual datasets, together with data validation across multiple datasets held in the CCDR, ensures that the quality of data provided to researchers is improved. Moreover, DIU staff dedicated to data integration provide faster content data delivery to researchers. Lastly, the CCDR reduces the number of custodians researchers need to liaise with for data provision. Conclusion/Implications: Operational since February 2018, the DIU has delivered content data for several linkage projects, based on key datasets stored in the CCDR. The incorporation of additional datasets is currently being negotiated. Recognising recent developments in secure analytics infrastructure, the further evolution of the CCDR towards a cloud-based model is anticipated.

    Should cities hosting mass gatherings invest in public health surveillance and planning? Reflections from a decade of mass gatherings in Sydney, Australia

    Abstract Background: Mass gatherings have been defined by the World Health Organisation as "events attended by a sufficient number of people to strain the planning and response resources of a community, state or nation". This paper explores the public health response to mass gatherings in Sydney, the factors that influenced the extent of deployment of resources and the utility of planning for mass gatherings as a preparedness exercise for other health emergencies. Discussion: Not all mass gatherings of people require enhanced surveillance and additional response. The main drivers of extensive public health planning for mass gatherings reflect geographical spread, number of international visitors, event duration and political and religious considerations. In these instances, the implementation of a formal risk assessment prior to the event, with ongoing daily review, is important in identifying public health hazards. Developing and utilising event-specific surveillance to provide early-warning systems that address the specific risks identified through the risk assessment process is essential. The extent to which additional resources are required will vary and depend on the current level of surveillance infrastructure. Planning the public health response is the third step in preparing for mass gatherings. If the existing public health workforce has been regularly trained in emergency response procedures, then far less effort and resources will be needed to prepare for each mass gathering event. The use of formal emergency management structures and co-location of surveillance and planning operational teams during events facilitates timely communication and action. Summary: One-off mass gathering events can provide a catalyst for innovation and engagement, and result in opportunities for ongoing public health planning, training and surveillance enhancements that outlast each event.

    Mortality after admission for acute myocardial infarction in Aboriginal and non-Aboriginal people in New South Wales, Australia: a multilevel data linkage study

    Background - Heart disease is a leading cause of the gap in burden of disease between Aboriginal and non-Aboriginal Australians. Our study investigated short- and long-term mortality after admission for Aboriginal and non-Aboriginal people admitted with acute myocardial infarction (AMI) to public hospitals in New South Wales, Australia, and examined the impact of the hospital of admission on outcomes. Methods - Admission records were linked to mortality records for 60,047 patients aged 25–84 years admitted with a diagnosis of AMI between July 2001 and December 2008. Multilevel logistic regression was used to estimate adjusted odds ratios (AOR) for 30- and 365-day all-cause mortality. Results - Aboriginal patients admitted with an AMI were younger than non-Aboriginal patients, and more likely to be admitted to lower volume, remote hospitals without on-site angiography. Adjusting for age, sex, year and hospital, Aboriginal patients had a similar 30-day mortality risk to non-Aboriginal patients (AOR: 1.07; 95% CI 0.83-1.37) but a higher risk of dying within 365 days (AOR: 1.34; 95% CI 1.10-1.63). The latter difference did not persist after adjustment for comorbid conditions (AOR: 1.12; 95% CI 0.91-1.38). Patients admitted to more remote hospitals, those with lower patient volume and those without on-site angiography had increased risk of short- and long-term mortality regardless of Aboriginal status. Conclusions - Improving access to larger hospitals and those with specialist cardiac facilities could improve outcomes following AMI for all patients. However, major efforts to boost primary and secondary prevention of AMI are required to reduce the mortality gap between Aboriginal and non-Aboriginal people.
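The study's AORs come from multilevel logistic regression adjusting for several covariates, but the quantity being reported (an odds ratio with a 95% confidence interval) can be illustrated with a minimal, stdlib-only sketch. The counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only (deaths vs survivors by group)
or_, lo, hi = odds_ratio_ci(40, 260, 900, 7800)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An adjusted odds ratio is the same quantity estimated within a regression model that holds the other covariates (here age, sex, year, hospital) constant.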

    Estimation of a lower bound for the cumulative incidence of failure of female surgical sterilisation in NSW: a population-based study.

    Female tubal sterilisation, often referred to as "tubal ligation" but more often performed these days using laparoscopically-applied metal clips, remains a popular form of contraception in women who have completed their families. A review of the literature on the incidence of failure of tubal sterilisation found many reports of case-series and small clinic-based studies, but only a few larger studies with good epidemiological designs, most recently the US CREST study conducted during the 1980s and early 1990s. The CREST study reported a conditional (life-table) cumulative incidence of failure of 0.55, 0.84, 1.18 and 1.85 per 100 women at 1, 2, 4 and 10 years of follow-up respectively. The study described here estimated a lower bound for the incidence of tubal sterilisation failure in NSW by probabilistically linking routinely-collected hospital admission records for women undergoing sterilisation surgery to hospital admission records for the same women which were indicative of subsequent conception or which represented censoring events such as hysterectomy or death in hospital. Data for the period July 1992 to June 2000 were used. Kaplan-Meier and proportional-hazards survival analyses were performed on the resulting linked data set. The conditional cumulative incidence per 100 women at 1, 2, 4 and 8 years of follow-up was estimated to be 0.74 (95% CI 0.68-0.81), 1.05 (0.97-1.13), 1.33 (1.23-1.42) and 1.51 (1.39-1.62) respectively. Forty percent of failures ended in abortion and 14% presented as ectopic pregnancies. Age, private health insurance status and sterilisation in a smaller hospital were all found to be associated with lower rates of failure. Strong evidence of time-limited excess numbers of failures in women undergoing surgery in particular hospitals was also found. The study demonstrates the feasibility of using linked, routinely-collected health data to evaluate relatively rare, long-term outcomes such as sterilisation failure on a population-wide basis.
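The conditional (life-table) cumulative incidence reported above is one minus the Kaplan-Meier survival estimate, which handles censoring events such as hysterectomy or death. A minimal sketch of the estimator, using invented follow-up data rather than the linked NSW records:

```python
def km_cumulative_incidence(times, events):
    """Kaplan-Meier estimate of cumulative incidence (1 - S(t)).
    times: follow-up time per woman; events: 1 = failure, 0 = censored."""
    event_times = sorted({t for t, e in zip(times, events) if e})
    surv, out = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        failures = sum(1 for ti, e in zip(times, events) if e and ti == t)
        surv *= 1 - failures / at_risk   # survival drops at each event time
        out.append((t, 1 - surv))
    return out

# Toy data: follow-up in years, 1 = sterilisation failure, 0 = censored
times  = [1, 1, 2, 2, 3, 4, 5, 6, 7, 8]
events = [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
for t, ci in km_cumulative_incidence(times, events):
    print(f"{t} y: {100 * ci:.1f} per 100 women")
```

Because censored women leave the risk set rather than being counted as non-failures, the estimate is conditional on remaining under observation, which is why the study reports it as a lower bound.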


    Reliability of patient-reported complications following hip or knee arthroplasty procedures

    Abstract Background: Patient-reported outcomes are increasingly used to assess the success of surgical procedures. Patient-reported complications are often included as an outcome. However, these data must be validated to be accurate and useful in clinical practice. Methods: This was a retrospective descriptive study of 364 patients who had completed their six-month follow-up review questionnaire in the Arthroplasty Clinical Outcomes Registry, National (ACORN), an Australian orthopaedic registry. Patient-reported complications following total hip arthroplasty (THA) and total knee arthroplasty (TKA) were compared to surgeon-reported complications recorded in their electronic medical records at their various follow-up appointments. Sensitivity, specificity, positive predictive value and negative predictive value were calculated. Agreement was assessed using percentage agreement and Cohen's kappa. Results: Patient-reported data from the ACORN registry returned overall low sensitivity (0.14), negative predictive value (0.13) and kappa values (0.11), but very high specificity (0.98), positive predictive value (0.98) and agreement values (96.3%) for reporting of complications when compared to surgeon-reported data. Values varied depending on the type and category of complication. Conclusion: Patients are accurate in reporting the absence of complications, but not the presence. Sensitivity of patient-reported complications needs to be improved. Greater attention to the clarity of the questions asked may help in this respect.
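All of the metrics named above derive from a single 2x2 table of patient-reported against surgeon-reported (reference) complications. A stdlib-only sketch with illustrative counts, not the ACORN data:

```python
def agreement_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from a
    2x2 table, treating surgeon-reported complications as the reference.
    tp = both report, fp = patient only, fn = surgeon only, tn = neither."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                                      # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Illustrative counts only
sens, spec, ppv, npv, kappa = agreement_metrics(tp=40, fp=5, fn=10, tn=45)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, kappa {kappa:.2f}")
```

Kappa discounts the agreement expected by chance, which is why a registry can show 96.3% raw agreement yet a kappa of only 0.11 when complications are rare.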

    Comparative effectiveness of aspirin for symptomatic venous thromboembolism prophylaxis in patients undergoing total joint arthroplasty, a cohort study

    Abstract Background: This study compares the symptomatic 90-day venous thromboembolism (VTE) rates in patients receiving aspirin to patients receiving low-molecular weight heparin (LMWH) or direct oral anticoagulants (DOACs), after total hip (THA) and total knee arthroplasty (TKA). Methods: Data were collected from a multi-centre cohort study, including demographics, confounders and prophylaxis type (aspirin alone, LMWH alone, aspirin and LMWH, and DOACs). The primary outcome was symptomatic 90-day VTE. Secondary outcomes were major bleeding, joint related reoperation and mortality within 90 days. Data were analysed using logistic regression, the Student's t and Fisher's exact tests (unadjusted) and multivariable regression (adjusted). Results: There were 1867 eligible patients; 365 (20%) received aspirin alone, 762 (41%) LMWH alone, 482 (26%) LMWH and aspirin and 170 (9%) DOAC. The 90-day VTE rate was 2.7%; lowest in the aspirin group (1.6%), compared to 3.6% for LMWH, 2.3% for LMWH and aspirin and 2.4% for DOACs. After adjusted analysis, predictors of VTE were prophylaxis duration < 14 days (OR = 6.7, 95% CI 3.5–13.1, p < 0.001) and history of previous VTE (OR = 2.4, 95% CI 1.1–5.8, p = 0.05). There were no significant differences in the primary or secondary outcomes between prophylaxis groups. Conclusions: Aspirin may be suitable for VTE prophylaxis following THA and TKA. The comparatively low unadjusted 90-day VTE rate in the aspirin group may have been due to selective use in lower-risk patients. Trial Registration: This study was registered at ClinicalTrials.gov, trial number NCT01899443 (15/07/2013).
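Fisher's exact test, used above for the unadjusted group comparisons, is appropriate when event counts are small. A stdlib-only sketch of the two-sided test via the hypergeometric distribution; the counts in the example are illustrative, not the study's:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]] using the hypergeometric distribution."""
    n, r1, c1 = a + b + c + d, a + b, a + c

    def p(x):  # probability of a table with top-left cell = x
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)

    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # Sum the probabilities of all tables at least as extreme as observed
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

# Illustrative comparison: 6/365 events vs 27/762 events
print(fisher_exact_two_sided(6, 359, 27, 735))
```

In practice one would reach for scipy.stats.fisher_exact, but the enumeration above is the whole algorithm for a 2x2 table.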

    Harmonization of Computable Oncology Regimens and their Application to Australian Cancer Treatment Data

    Background: Observational research into patterns of variation in chemotherapy delivery and their effects on outcomes is greatly facilitated by having a defined baseline treatment protocol prescription against which observed drug exposures can be compared. This requires accurate identification of the intended treatment protocol for each patient, as well as detailed abstraction of its constituent components (i.e. drugs, dose, frequency, duration). With this, it becomes possible to derive measures such as reduced dose intensity, early treatment termination or adjustments to the treatment schedule. Methods: In this work, we first derive a computable form of Australian chemotherapy regimens as published on the eviQ website. This abstraction is then mapped to the existing US HemOnc ontology, before being applied to a real-world drug delivery dataset to demonstrate its utility for detecting variation in cancer care. Results: It was possible to resolve relationships between these vocabularies with high completeness and accuracy (93%). Further to this, real-world drug delivery data were able to be matched to eviQ protocols for up to 87% of delivered regimens using an episodic data model. Of the delivered regimens mapped to eviQ, 92% of a validation subset with known protocol prescriptions matched their nominated regimen. Conclusions: By applying relatively simple, rule-based algorithms to computable protocol abstractions, it is possible to achieve matching between reference regimens and as-delivered treatment with a high degree of accuracy and completeness. These techniques provide a foundation of baseline protocol definitions for future work to characterize patterns of variation between patients' prescribed and delivered systemic anti-cancer treatments.
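The paper's rule-based matching is not specified in this abstract, so the following is a deliberately simplified, hypothetical sketch of the idea: compare the drug set and doses of a delivered episode against computable protocol definitions. Protocol names, doses and the tolerance threshold are all invented; real eviQ/HemOnc abstractions also carry route, frequency and cycle duration:

```python
# Hypothetical computable regimen definitions (drug -> reference dose, mg/m2)
PROTOCOLS = {
    "FOLFOX": {"oxaliplatin": 85, "fluorouracil": 2800, "folinic acid": 400},
    "AC":     {"doxorubicin": 60, "cyclophosphamide": 600},
}

def match_regimen(delivered, tolerance=0.25):
    """Match one delivered episode (drug -> dose) to the protocol whose
    drug set is identical and whose doses fall within the tolerance."""
    for name, ref in PROTOCOLS.items():
        if set(delivered) != set(ref):
            continue  # drug sets must match exactly
        if all(abs(delivered[d] - ref[d]) / ref[d] <= tolerance for d in ref):
            return name
    return None

# A delivered episode with a 10% doxorubicin dose reduction still matches
episode = {"doxorubicin": 54, "cyclophosphamide": 600}
print(match_regimen(episode))  # prints: AC
```

Once an episode is matched to its intended protocol, measures such as relative dose intensity fall out as the ratio of delivered to reference dose.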