
    Predicting implementation from organizational readiness for change: a study protocol

    Background: There is widespread interest in measuring organizational readiness to implement evidence-based practices in clinical care. However, there are a number of challenges to validating organizational measures, including inferential bias arising from the halo effect and method bias, two threats to validity that, while well documented by organizational scholars, are often ignored in health services research. We describe a protocol to comprehensively assess the psychometric properties of a previously developed survey, the Organizational Readiness to Change Assessment. Objectives: Our objective is to conduct a comprehensive assessment of the psychometric properties of the Organizational Readiness to Change Assessment, incorporating methods that specifically address threats from the halo effect and method bias. Methods and Design: We will conduct three sets of analyses using longitudinal, secondary data from four partner projects, each testing interventions to improve the implementation of an evidence-based clinical practice. Partner projects field the Organizational Readiness to Change Assessment at baseline (n = 208 respondents; 53 facilities) and prospectively assess the degree to which the evidence-based practice is implemented. We will assess predictive and concurrent validity using hierarchical linear modeling and multivariate regression, respectively. For predictive validity, the outcome is the change from baseline to follow-up in the use of the evidence-based practice. We will use intra-class correlations derived from hierarchical linear models to assess inter-rater reliability. Two partner projects will also field measures of job satisfaction for convergent and discriminant validity analyses, and will field Organizational Readiness to Change Assessment measures at follow-up for concurrent validity (n = 158 respondents; 33 facilities). Convergent and discriminant validity analyses will test associations between organizational readiness and different aspects of job satisfaction: satisfaction with leadership, which should be highly correlated with readiness, versus satisfaction with salary, which should be less correlated with readiness. Content validity will be assessed using an expert panel and a modified Delphi technique. Discussion: We propose a comprehensive protocol for validating a survey instrument for assessing organizational readiness to change that specifically addresses key threats of bias related to the halo effect, method bias, and questions of construct validity that often go unexplored in research using measures of organizational constructs.
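The inter-rater reliability step above rests on the intra-class correlation: the share of score variance that lies between facilities rather than between respondents within a facility. As an illustrative sketch only (the facility data below are invented, and the protocol itself derives ICCs from hierarchical linear models rather than this simpler estimator), the one-way random-effects ICC(1) can be computed from an ANOVA decomposition:

```python
import numpy as np

def icc1(groups):
    """One-way random-effects ICC(1): between-facility variance share.

    `groups` is a list of 1-D arrays, one per facility, holding the
    readiness scores of that facility's respondents.
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    # Between-facility and within-facility sums of squares
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    # Average facility size (exact for balanced designs)
    n_bar = n_total / k
    return (ms_between - ms_within) / (ms_between + (n_bar - 1) * ms_within)

# Hypothetical: three facilities whose respondents agree closely
# within facility but differ across facilities -> high ICC
facilities = [np.array([4.0, 4.2, 4.1]),
              np.array([2.9, 3.1, 3.0]),
              np.array([3.6, 3.5, 3.7])]
print(round(icc1(facilities), 2))
```

A high ICC(1) here indicates that respondents within a facility rate readiness consistently, which is the justification for aggregating individual responses to the facility level.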

    Workshop/conference report—Quantitative bioanalytical methods validation and implementation: Best practices for chromatographic and ligand binding assays

    For quantitative bioanalytical method validation procedures and requirements, there was relatively good agreement between chromatographic assays and ligand-binding assays. It was recognized that the quantitative and qualitative aspects of bioanalytical method validation should be reviewed and applied appropriately. Some of the major points of difference between the two methodologies related to the acceptable total error for precision and accuracy determination and the acceptance criteria for an analytical run. The acceptable total error for precision and accuracy for both methodologies is less than 30%. The 4–6–15 rule for accepting an analytical run by a chromatographic method remained acceptable, while a 4–6–20 rule was recommended for ligand-binding methodology. The 3rd AAPS/FDA Bioanalytical Workshop clarified issues related to the placement of QC samples, determination of matrix effect, stability considerations, use of internal standards, and system suitability tests. Major concerns were raised with respect to the stability and reproducibility of incurred samples; this should be addressed for all analytical methods employed, and it was left to the investigators to use their scientific judgment to address the issue. In general, the 3rd AAPS/FDA Bioanalytical Workshop provided a forum to discuss and clarify regulatory concerns regarding bioanalytical method validation issues.
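The two run-acceptance rules mentioned above share one shape: at least 4 of 6 QC results (more generally, at least two-thirds) must fall within a stated tolerance of their nominal values, with the tolerance differing between methodologies. A minimal sketch, with an invented QC run and a hypothetical helper name:

```python
def run_acceptable(qc_results, nominals, tolerance=0.15):
    """4-6-X run-acceptance rule: at least two-thirds of QC results
    (e.g. 4 of 6) must fall within `tolerance` of nominal.
    tolerance=0.15 gives the 4-6-15 rule for chromatographic assays;
    tolerance=0.20 gives the 4-6-20 rule recommended for
    ligand-binding assays."""
    within = [abs(r - n) / n <= tolerance for r, n in zip(qc_results, nominals)]
    return sum(within) / len(within) >= 2 / 3

# Hypothetical run: six QC samples at three nominal concentration levels
nominal = [5.0, 5.0, 50.0, 50.0, 500.0, 500.0]
measured = [5.9, 4.0, 52.0, 58.0, 480.0, 510.0]
print(run_acceptable(measured, nominal))        # 4-6-15 -> False (3 of 6 within 15%)
print(run_acceptable(measured, nominal, 0.20))  # 4-6-20 -> True (all within 20%)
```

The example run illustrates the practical difference the workshop debated: the same data fail the chromatographic 4–6–15 criterion but pass the wider ligand-binding 4–6–20 criterion.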

    Dissolution Testing for Generic Drugs: An FDA Perspective

    In vitro dissolution testing is an important tool used for the development and approval of generic dosage forms. The objective of this article is to summarize how dissolution testing is used for the approval of safe and effective generic drug products in the United States. Dissolution testing is routinely used for stability and quality control purposes for both oral and non-oral dosage forms. The dissolution method should be developed and validated appropriately for the dosage form. There are several ways in which dissolution testing plays a pivotal role in regulatory decision-making. It may be used to waive in vivo bioequivalence (BE) study requirements, as BE documentation for Scale-Up and Post-Approval Changes (SUPAC), and to predict the potential for a modified-release (MR) drug product to dose-dump if co-administered with alcoholic beverages. Thus, in vitro dissolution testing plays a major role in FDA’s efforts to reduce the regulatory burden and avoid unnecessary human studies in generic drug development without sacrificing the quality of the drug products.
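The abstract above does not spell out how two dissolution profiles are judged comparable for a BE waiver. One metric widely used by FDA for this purpose is the f2 similarity factor, where f2 ≥ 50 is generally taken to indicate similar profiles (roughly, an average difference of no more than 10% dissolved). A sketch with hypothetical profiles:

```python
import math

def f2_similarity(reference, test):
    """f2 similarity factor for two dissolution profiles given as
    percent dissolved at matched time points:
        f2 = 50 * log10(100 / sqrt(1 + mean squared difference))
    f2 >= 50 is the conventional similarity threshold."""
    n = len(reference)
    mean_sq_diff = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + mean_sq_diff))

# Hypothetical profiles: % dissolved at 15, 30, 45, and 60 minutes
ref = [35, 62, 81, 92]
tst = [30, 58, 78, 90]
print(round(f2_similarity(ref, tst), 1))  # well above 50 -> similar profiles
```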

    Appropriate calibration curve fitting in ligand binding assays

    Calibration curves for ligand binding assays are generally characterized by a nonlinear relationship between the mean response and the analyte concentration; typically, the response exhibits a sigmoidal relationship with concentration. The currently accepted reference model for these calibration curves is the 4-parameter logistic (4-PL) model, which optimizes accuracy and precision over the maximum usable calibration range. Incorporating weighting into the model requires additional effort but generally improves calibration curve performance. For calibration curves with some asymmetry, introducing a fifth parameter (5-PL) may further improve the goodness of fit of the experimental data to the model. Alternative models should be used with caution and with knowledge of their accuracy and precision performance across the entire calibration range, particularly at the upper and lower ends of the concentration range, where the 4- and 5-PL algorithms generally outperform alternative models. Several assay design parameters, such as the placement of calibrator concentrations across the selected range and the assay layout on multiwell plates, should be considered to enable optimal application of the 4- or 5-PL model. The fit of the experimental data to the model should be evaluated by assessing the agreement between nominal and model-predicted values for the calibrators.
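A weighted 4-PL fit and the back-calculation check described above can be sketched as follows. This is illustrative only: the calibrator responses are simulated noise-free from a known curve so the fit is exact, and SciPy's `curve_fit` is just one possible fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero concentration, d: response at infinite
    # concentration, c: inflection point (EC50), b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

# Simulated calibrators spanning the usable range (log-spaced)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
resp = four_pl(conc, 0.05, 1.2, 10.0, 2.0)  # noise-free for clarity

# Passing sigma proportional to the response approximates 1/y^2
# relative weighting, a common choice when response variance grows
# with signal
popt, _ = curve_fit(four_pl, conc, resp, p0=[0.1, 1.0, 5.0, 1.8], sigma=resp)
a_fit, b_fit, c_fit, d_fit = popt

# Back-calculate each calibrator from the fitted curve and compare
# with nominal (the nominal vs model-predicted agreement check)
back = c_fit * ((a_fit - d_fit) / (resp - d_fit) - 1.0) ** (1.0 / b_fit)
print(np.round(back, 2))
```

In practice the same back-calculation would be applied to noisy calibrator data, with acceptance limits on the deviation of back-calculated from nominal concentrations, and the weighting scheme would be chosen from the observed response-variance relationship.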

    Statistical Considerations for Assessment of Bioanalytical Incurred Sample Reproducibility

    Bioanalytical method validation is generally conducted using standards and quality control (QC) samples which are prepared to be as similar as possible to the study samples (incurred samples) which are to be analyzed. However, there are a variety of circumstances in which the performance of a bioanalytical method when using standards and QCs may not adequately approximate that when using incurred samples. The objective of incurred sample reproducibility (ISR) testing is to demonstrate that a bioanalytical method will produce consistent results from study samples when re-analyzed on a separate occasion. The Third American Association of Pharmaceutical Scientists (AAPS)/Food and Drug Administration (FDA) Bioanalytical Workshop and subsequent workshops have led to widespread industry adoption of the so-called “4–6–20” rule for assessing incurred sample reproducibility (i.e., at least 66.7% of the re-analyzed incurred samples must agree within ±20% of the original result), though the performance of this rule in the context of ISR testing has not yet been evaluated. This paper evaluates the performance of the 4–6–20 rule, provides general recommendations and guidance on appropriate experimental designs and sample sizes for ISR testing, discusses the impact of repeated ISR testing across multiple clinical studies, and proposes alternative acceptance criteria for ISR testing based on formal statistical methodology.
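The performance question this paper raises can be made concrete with a small simulation. The sketch below is entirely an illustration under assumptions of my own (independent lognormal measurement error with a common coefficient of variation for the original and repeat analyses, and a hypothetical function name): it estimates how often a study of a given size would pass the 4–6–20 rule as stated in the abstract.

```python
import math
import random

def isr_pass_probability(n_samples=30, cv=0.12, n_sim=5000,
                         tolerance=0.20, seed=1):
    """Monte Carlo estimate of the 4-6-20 rule's pass rate.

    Assumes original and repeat results are independent lognormal
    measurements of the same true value with coefficient of variation
    `cv`. A study passes when at least 2/3 of the re-analyzed samples
    agree within +/-20% of the original result."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1 + cv ** 2))  # lognormal log-scale sd
    passes = 0
    for _ in range(n_sim):
        agree = 0
        for _ in range(n_samples):
            original = math.exp(rng.gauss(0.0, sigma))
            repeat = math.exp(rng.gauss(0.0, sigma))
            if abs(repeat - original) / original <= tolerance:
                agree += 1
        if agree / n_samples >= 2 / 3:
            passes += 1
    return passes / n_sim

# With a 12% assay CV, most 30-sample ISR studies pass the rule
print(isr_pass_probability())
```

Varying `cv` and `n_samples` in a sketch like this reproduces the kind of operating-characteristic analysis the paper performs: pass rates fall steeply as assay variability grows, and small ISR sample sizes make the rule's verdict noticeably more random.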