
    Inter-rater reliability of data elements from a prototype of the Paul Coverdell National Acute Stroke Registry

    Background: The Paul Coverdell National Acute Stroke Registry (PCNASR) is a U.S. based national registry designed to monitor and improve the quality of acute stroke care delivered by hospitals. The registry monitors care through specific performance measures, the accuracy of which depends in part on the reliability of the individual data elements used to construct them. This study describes the inter-rater reliability of data elements collected in Michigan's state-based prototype of the PCNASR.
    Methods: Over a 6-month period, 15 hospitals participating in the Michigan PCNASR prototype submitted data on 2566 acute stroke admissions. Trained hospital staff prospectively identified acute stroke admissions, abstracted chart information, and submitted data to the registry. At each hospital 8 randomly selected cases were re-abstracted by an experienced research nurse. Inter-rater reliability was estimated by the kappa statistic for nominal variables, and the intraclass correlation coefficient (ICC) for ordinal and continuous variables. Factors that can negatively impact the kappa statistic (i.e., trait prevalence and rater bias) were also evaluated.
    Results: A total of 104 charts were available for re-abstraction. Excellent reliability (kappa or ICC > 0.75) was observed for many registry variables including age, gender, black race, hemorrhagic stroke, discharge medications, and modified Rankin Score. Agreement was at least moderate (i.e., 0.75 > kappa ≥ 0.40) for ischemic stroke, TIA, white race, non-ambulance arrival, hospital transfer and direct admit. However, several variables had poor reliability (kappa < 0.40) including stroke onset time, stroke team consultation, time of initial brain imaging, and discharge destination. There were marked systematic differences between hospital abstractors and the audit abstractor (i.e., rater bias) for many of the data elements recorded in the emergency department.
    Conclusion: The excellent reliability of many of the data elements supports the use of the PCNASR to monitor and improve care. However, the poor reliability for several variables, particularly time-related events in the emergency department, indicates the need for concerted efforts to improve the quality of data collection. Specific recommendations include improvements to data definitions, abstractor training, and the development of ED-based real-time data collection systems.
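    The kappa statistic used above corrects raw percent agreement for the agreement expected by chance. As an illustrative sketch (the rater labels below are hypothetical, not data from the registry), Cohen's kappa for two abstractors can be computed like this:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed proportion of cases where the two raters agree
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement from each rater's marginal label frequencies
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical stroke-type abstractions: hospital abstractor vs. audit nurse
    hospital = ["ischemic", "ischemic", "TIA", "hemorrhagic", "ischemic", "TIA"]
    audit    = ["ischemic", "TIA",      "TIA", "hemorrhagic", "ischemic", "ischemic"]
    kappa = cohens_kappa(hospital, audit)  # ~0.45, "moderate" on the scale above
    ```

    A kappa of 1.0 means perfect agreement; 0 means agreement no better than chance. Note that kappa is sensitive to trait prevalence, which is why the study also examined prevalence and rater bias directly.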

    Plate-based diversity subset screening generation 2: An improved paradigm for high throughput screening of large compound files

    High throughput screening (HTS) is an effective method for lead and probe discovery that is widely used in industry and academia to identify novel chemical matter and to initiate the drug discovery process. However, HTS can be time-consuming and costly, and the use of subsets as an efficient alternative to screening these large collections has been investigated. Subsets may be selected on the basis of chemical diversity, molecular properties, biological activity diversity, or biological target focus. Previously we described a novel form of subset screening: plate-based diversity subset (PBDS) screening, in which the screening subset is constructed by plate selection (rather than individual compound cherry-picking), using algorithms that select for compound quality and chemical diversity on a plate basis. In this paper, we describe a second-generation approach to the construction of an updated subset, PBDS2, using both plate and individual compound selection, that improves coverage of the chemical space of the screening file whilst selecting the same number of plates for screening. We describe the validation of PBDS2 and its successful use in hit and lead discovery. PBDS2 screening became the default mode of singleton (one compound per well) HTS for lead discovery in Pfizer.
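    The abstract does not disclose the selection algorithms themselves, but the underlying idea, choosing whole plates so that the selected set covers as much chemical space as possible within a fixed plate budget, resembles a greedy set-cover heuristic. The sketch below is a generic illustration under that assumption; the plate IDs and chemical-space "cluster" sets are invented for the example and are not Pfizer's actual method:

    ```python
    # Hypothetical greedy plate selection: each plate covers a set of
    # chemical-space clusters; pick the plate adding the most uncovered
    # clusters, repeat until the plate budget is spent.
    def select_plates(plates, budget):
        """plates: dict plate_id -> set of chemical-space cluster ids."""
        covered, chosen = set(), []
        for _ in range(budget):
            best = max((p for p in plates if p not in chosen),
                       key=lambda p: len(plates[p] - covered))
            gain = len(plates[best] - covered)
            if gain == 0:          # no remaining plate adds new coverage
                break
            chosen.append(best)
            covered |= plates[best]
        return chosen, covered

    plates = {
        "P1": {1, 2, 3},
        "P2": {3, 4},
        "P3": {5, 6, 7, 8},
        "P4": {1, 5},
    }
    chosen, covered = select_plates(plates, budget=2)
    # picks P3 first (4 new clusters), then P1 (3 new clusters)
    ```

    PBDS2's refinement, per the abstract, is to combine this plate-level choice with individual compound selection, gaining coverage without increasing the number of plates screened.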

    Shaping a screening file for maximal lead discovery efficiency and effectiveness: elimination of molecular redundancy

    High throughput screening (HTS) is a successful strategy for finding hits and leads that have the opportunity to be converted into drugs. In this paper we highlight novel computational methods used to select compounds to build a new screening file at Pfizer, and the analytical methods we used to assess their quality. We also introduce the novel concept of molecular redundancy to help decide on the density of compounds required in any region of chemical space in order to be confident of running successful HTS campaigns.
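    The abstract does not define molecular redundancy precisely, but one common way to operationalize it is via fingerprint similarity: a compound is redundant when its region of chemical space already holds enough close neighbours. The sketch below assumes toy fingerprints represented as sets of "on" bits and a Tanimoto-similarity cutoff; the threshold and neighbour count are illustrative parameters, not values from the paper:

    ```python
    def tanimoto(fp_a, fp_b):
        """Tanimoto similarity between two fingerprints (sets of on-bits)."""
        union = len(fp_a | fp_b)
        return len(fp_a & fp_b) / union if union else 0.0

    def redundant(fps, threshold=0.9, max_neighbors=0):
        """Flag a compound as redundant when more than `max_neighbors`
        others lie within `threshold` Tanimoto similarity of it."""
        flags = []
        for i, fp in enumerate(fps):
            neighbors = sum(1 for j, other in enumerate(fps)
                            if j != i and tanimoto(fp, other) >= threshold)
            flags.append(neighbors > max_neighbors)
        return flags

    # Toy fingerprints: the first two are identical, so each makes the
    # other redundant; the last two occupy sparser regions.
    fps = [{1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 6}, {10, 11, 12}]
    flags = redundant(fps)  # [True, True, False, False]
    ```

    In practice such a density check would run on real structural fingerprints (e.g. circular fingerprints from a cheminformatics toolkit), letting file designers keep compound density high enough for confident HTS campaigns without stocking near-duplicates.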

    Personalizing health care: feasibility and future implications

    Considerable variety in how patients respond to treatments, driven by differences in their geno- and/or phenotypes, calls for a more tailored approach. This is already happening, and will accelerate with developments in personalized medicine. However, its promise has not always translated into improvements in patient care due to the complexities involved. There are also concerns that advice for tests has been reversed, current tests can be costly, there is fragmentation of funding of care, and companies may seek high prices for new targeted drugs. There is a need to integrate current knowledge from a payer’s perspective to provide future guidance. Multiple findings, including general considerations; the influence of pharmacogenomics on the response and toxicity of drug therapies; the value of biomarker tests; the limitations and costs of tests; and the potentially high acquisition costs of new targeted therapies, help to give guidance on potential ways forward for all stakeholder groups. Overall, personalized medicine has the potential to revolutionize care. However, current challenges and concerns need to be addressed to enhance its uptake and funding to benefit patients.