
    The promotion of data sharing in pharmacoepidemiology

    This article addresses the role of pharmacoepidemiology in patient safety and the crucial role of data sharing in ensuring that such activities occur. Against the backdrop of proposed reforms of European data protection legislation, it considers whether the current legislative landscape adequately facilitates this essential data sharing. It is argued that rather than maximising and promoting the benefits of such activities by facilitating data sharing, current and proposed legislative landscapes hamper these vital activities. The article posits that current and proposed data protection approaches to pharmacoepidemiology, and more broadly to re-uses of data, should be reoriented towards enabling these important safety-enhancing activities. Two potential solutions are offered: 1) a dedicated working party on data reuse for health research, and 2) the introduction of new, dedicated legislation.

    How pharmacoepidemiology networks can manage distributed analyses to improve replicability and transparency and minimize bias

    Several pharmacoepidemiology networks have been developed over the past decade that use a distributed approach, implementing the same analysis at multiple data sites, to preserve privacy and minimize data sharing. Distributed networks are efficient because they can interrogate data on very large populations. The structure of these networks can also be leveraged to improve replicability, increase transparency, and reduce bias. We describe some features of distributed networks using, as examples, the Canadian Network for Observational Drug Effect Studies, the Sentinel System in the USA, and the European Research Network of Pharmacovigilance and Pharmacoepidemiology. Common protocols, analysis plans, and data models, with policies on amendments and protocol violations, are key features. These tools ensure that studies can be audited and repeated as necessary. Blinding and strict conflict of interest policies reduce the potential for bias in analyses and interpretation. These developments should improve the timeliness and accuracy of information used to support both clinical and regulatory decisions.
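    The distributed pattern described here can be illustrated with a minimal sketch: each site runs the same analysis locally against the common data model and returns only aggregate results, which a coordinating centre then pools. The site counts, function names, and the inverse-variance fixed-effect pooling below are illustrative assumptions, not code from any of the named networks.

```python
# Illustrative sketch of a distributed (federated) analysis: each site computes
# an aggregate estimate locally and shares only that aggregate, never
# patient-level data. Counts and the pooling method are hypothetical.
import math

# Hypothetical site-level 2x2 counts from the common data model:
# (exposed cases, exposed non-cases, unexposed cases, unexposed non-cases).
SITE_COUNTS = {
    "site_A": (40, 960, 25, 975),
    "site_B": (15, 485, 10, 490),
    "site_C": (60, 1440, 42, 1458),
}

def local_log_odds_ratio(a, b, c, d):
    """Run at each data site: return only the log odds ratio and its variance."""
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

def pool_fixed_effect(site_estimates):
    """Run at the coordinating centre: inverse-variance weighted pooling."""
    weights = [1 / var for _, var in site_estimates]
    pooled = sum(w * est for w, (est, _) in zip(weights, site_estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

estimates = [local_log_odds_ratio(*counts) for counts in SITE_COUNTS.values()]
pooled_log_or, se = pool_fixed_effect(estimates)
print(f"Pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(pooled_log_or - 1.96 * se):.2f} to "
      f"{math.exp(pooled_log_or + 1.96 * se):.2f})")
```

    Real networks pool results under a common protocol and typically use richer methods than this toy fixed-effect combination, but the privacy property is the same: only per-site aggregates leave each site.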

    The experience of accommodating privacy restrictions during implementation of a large-scale surveillance study of an osteoporosis medication

    Purpose: To explore whether privacy restrictions developed to protect patients have complicated research within a 15-year surveillance study conducted with US cancer registries. Methods: Data from the enrollment of 27 cancer registries over a 10-year period were examined to describe the amount of time needed to obtain study approval. We also analyzed the proportion of patients who completed a research interview out of the total reported by the registries and examined factors thought to influence this measure. Results: The average length of the research review process from submission to approval was 7 months (range, <1 to 24 months), and it took 6 months or more to obtain approval at 41% of the cancer registries. Most registries (78%) required additional permission steps to gain access to patients for research. After adjustment for covariates, the interview response proportion was 110% greater (ratio of response proportions = 2.1; 95% confidence interval: 1.3, 3.3) when the least restrictive rather than the most restrictive permission steps were required. An interview was more often completed for patients (or proxies) if patients were alive, within a year of being diagnosed, or identified earlier in the study. Conclusions: Lengthy research review processes increased the time between diagnosis and provision of patient information to the researcher. Requiring physician permission for access to patients was associated with lower subject participation. A single national point of entry for use of cancer registry data in health research is worth considering to make the research approval process more efficient. © 2016 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons Ltd.
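    As a quick sanity check on the figures above, a ratio of response proportions translates into a "percent greater" statement via (ratio - 1) * 100; the snippet below simply reproduces that arithmetic for the reported point estimate and confidence limits.

```python
# Converting a ratio of response proportions into a "percent greater" statement:
# percent greater = (ratio - 1) * 100. Values are the estimates quoted above.
ratio, ci_low, ci_high = 2.1, 1.3, 3.3

for label, r in [("point estimate", ratio), ("lower CI", ci_low), ("upper CI", ci_high)]:
    print(f"{label}: ratio {r} -> {(r - 1) * 100:.0f}% greater")
# point estimate: ratio 2.1 -> 110% greater
```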

    Pharmacoepidemiology and the Elderly

    Summary: Pharmacoepidemiology employs the methods of epidemiology to study the frequency, determinants, and outcomes of drug therapy. The field is becoming increasingly important with the aging of Western populations, due to the increased prevalence of medication use among older persons.

    Interactive exploration of population scale pharmacoepidemiology datasets

    Population-scale drug prescription data linked with adverse drug reaction (ADR) data supports the fitting of models large enough to detect drug use and ADR patterns that are not detectable using traditional methods on smaller datasets. However, detecting ADR patterns in large datasets requires tools for scalable data processing, machine learning for data analysis, and interactive visualization. To our knowledge, no existing pharmacoepidemiology tool supports all three requirements. We have therefore created a tool for interactive exploration of patterns in prescription datasets with millions of samples. We use Spark to preprocess the data for machine learning and for analyses using SQL queries. We have implemented models in Keras and the scikit-learn framework. The model results are visualized and interpreted using live Python coding in Jupyter. We apply our tool to explore a dataset of 384 million prescriptions from the Norwegian Prescription Database combined with 62 million prescriptions for elderly patients who were hospitalized. We preprocess the data in two minutes, train models in seconds, and plot the results in milliseconds. Our results demonstrate the value of combining computational power, short computation times, and ease of use for analysis of population-scale pharmacoepidemiology datasets. The code is open source and available at: https://github.com/uit-hdl/norpd_prescription_analyse
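    The pipeline shape described here (Spark preprocessing, a scikit-learn model, plotting of the results) can be sketched roughly as below. This is not the norpd_prescription_analyse code; the file names, column names, and the ADR outcome label are hypothetical assumptions used only to show the pattern.

```python
# Minimal sketch of the described pipeline: Spark for preprocessing,
# scikit-learn for modelling, matplotlib for inspection of results.
# File paths, column names, and the outcome definition are hypothetical.
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession, functions as F
from sklearn.linear_model import LogisticRegression

spark = SparkSession.builder.appName("prescription-sketch").getOrCreate()

# Preprocess with Spark: pivot prescriptions into a patient-by-drug count matrix
# small enough to hand to scikit-learn.
prescriptions = spark.read.csv("prescriptions.csv", header=True, inferSchema=True)
features = (
    prescriptions
    .groupBy("patient_id")
    .pivot("atc_code")
    .agg(F.count("*"))
    .fillna(0)
)
pdf = features.toPandas().set_index("patient_id")

# Hypothetical outcome: 1 if the patient has a registered adverse drug reaction.
outcomes = spark.read.csv("adr_outcomes.csv", header=True, inferSchema=True).toPandas()
y = pdf.index.to_series().isin(outcomes["patient_id"]).astype(int)

# Fit a simple model and inspect which drugs carry the largest coefficients.
model = LogisticRegression(max_iter=1000).fit(pdf.values, y)
top = sorted(zip(pdf.columns, model.coef_[0]), key=lambda kv: abs(kv[1]), reverse=True)[:10]

plt.barh([drug for drug, _ in top], [coef for _, coef in top])
plt.xlabel("logistic regression coefficient")
plt.title("Drugs most associated with the (hypothetical) ADR label")
plt.tight_layout()
plt.show()
```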

    Nine years of comparative effectiveness research education and training: initiative supported by the PhRMA Foundation

    The term comparative effectiveness research (CER) took center stage with passage of the American Recovery and Reinvestment Act (2009). The companion US$1.1 billion in funding prompted the launch of initiatives to train a scientific workforce capable of conducting and using CER. Passage of the Patient Protection and Affordable Care Act (2010) focused these initiatives on patients, coining the term ‘patient-centered outcomes research’ (PCOR). Educational and training initiatives were soon launched. This report describes the initiative of the Pharmaceutical Research and Manufacturers Association of America (PhRMA) Foundation. Through the provision of grant funding to six academic Centers of Excellence and the spearheading and sponsorship of three national conferences, the PhRMA Foundation has made significant contributions to the creation of the scientific workforce that conducts and uses CER/PCOR.

    What does validation of cases in electronic record databases mean? The potential contribution of free text

    Electronic health records are increasingly used for research. The definition of cases or endpoints often relies on the use of coded diagnostic data, using a pre-selected group of codes. Validation of these cases, as ‘true’ cases of the disease, is crucial. There are, however, ambiguities in what is meant by validation in the context of electronic records. Validation usually implies comparison of a definition against a gold standard of diagnosis and the ability to identify false negatives (‘true’ cases which were not detected) as well as false positives (detected cases which did not have the condition). We argue that two separate concepts of validation are often conflated in existing studies: firstly, whether the GP thought the patient was suffering from a particular condition (which we term confirmation or internal validation), and secondly, whether the patient really had the condition (external validation). Few studies have the ability to detect false negatives who have not received a diagnostic code. Natural language processing is likely to open up the use of free text within the electronic record, which will facilitate both the validation of the coded diagnosis and the search for false negatives.
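    The distinction drawn here can be made concrete with a small sketch: validation against a gold standard yields both positive predictive value and sensitivity, but a purely code-based validation can usually estimate only the former, because false negatives never receive a diagnostic code. The record flags below are invented for illustration.

```python
# Illustrative sketch of the two validation questions above: positive predictive
# value (are coded cases true cases?) versus sensitivity (are true cases coded
# at all?). The record flags are hypothetical.
records = [
    # (has_diagnostic_code, truly_has_condition per gold standard)
    (True, True), (True, True), (True, False),
    (False, True),  # a false negative: only findable by going beyond the codes
    (False, False), (False, False), (False, False),
]

tp = sum(coded and true for coded, true in records)
fp = sum(coded and not true for coded, true in records)
fn = sum(not coded and true for coded, true in records)

ppv = tp / (tp + fp)          # what most code-based validation studies can estimate
sensitivity = tp / (tp + fn)  # needs a way to find uncoded true cases (e.g. free text)

print(f"PPV = {ppv:.2f}, sensitivity = {sensitivity:.2f}")
```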