18 research outputs found

    Satisfaction With Psychology Training In the Veterans Healthcare Administration

    Given that VA is the largest trainer of psychologists in the United States, this study sought to understand satisfaction with VA psychology training and which elements of training best predict trainees' positive perceptions of training (e.g., willingness to choose the training experience again, stated intentions to work in VA). Psychology trainees completed the Learners' Perceptions Survey (LPS) from 2005 to 2017 (N = 5,342). Satisfaction was uniformly high. Trainee satisfaction was significantly associated with level of training, facility complexity, and some patient-mix factors. Learning environment (autonomy, time with patients, etc.), clinical faculty/preceptors (teaching ability, accessibility, etc.), and personal experiences (work/life balance, personal responsibility for patient care, etc.) were the biggest drivers of stated willingness to repeat training experiences in VA and to seek employment there. Results have implications for psychologists involved in providing a training experience valued by trainees.

    Consequences of Model Misspecification for Maximum Likelihood Estimation with Missing Data

    Researchers are often faced with the challenge of developing statistical models with incomplete data. Exacerbating this situation is the possibility that either the researcher’s complete-data model or the model of the missing-data mechanism is misspecified. In this article, we create a formal theoretical framework for developing statistical models and detecting model misspecification in the presence of incomplete data where maximum likelihood estimates are obtained by maximizing the observable-data likelihood function when the missing-data mechanism is assumed ignorable. First, we provide sufficient regularity conditions on the researcher’s complete-data model to characterize the asymptotic behavior of maximum likelihood estimates in the simultaneous presence of both missing data and model misspecification. These results are then used to derive robust hypothesis testing methods for possibly misspecified models in the presence of Missing at Random (MAR) or Missing Not at Random (MNAR) missing data. Second, we introduce a method for the detection of model misspecification in missing data problems using recently developed Generalized Information Matrix Tests (GIMT). Third, we identify regularity conditions for the Missing Information Principle (MIP) to hold in the presence of model misspecification so as to provide useful computational covariance matrix estimation formulas. Fourth, we provide regularity conditions that ensure the observable-data expected negative log-likelihood function is convex in the presence of partially observable data when the amount of missingness is sufficiently small and the complete-data likelihood is convex. Fifth, we show that when the researcher has correctly specified a complete-data model with a convex negative likelihood function and an ignorable missing-data mechanism, then its strict local minimizer is the true parameter value for the complete-data model when the amount of missingness is sufficiently small. 
Our results thus provide new robust estimation, inference, and specification analysis methods for developing statistical models with incomplete data.
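As a toy illustration of estimation under an ignorable (MAR) missing-data mechanism, and not the paper's framework, the sketch below maximizes an observable-data likelihood for a simple bivariate Gaussian model in which Y is missing at random given a fully observed X. Because the likelihood factors as f(x)·f(y | x), its maximizer is the sample mean of all x combined with a complete-case regression of y on x; all parameter names and values here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(1.0, 1.0, n)
y = 2.0 + 0.8 * (x - 1.0) + rng.normal(0.0, 1.0, n)

# MAR mechanism: P(Y missing) depends only on the always-observed X
miss = rng.random(n) < 1.0 / (1.0 + np.exp(-(x - 1.0)))

# Observed-data likelihood factors as f(x; mu_x) * f(y | x; a, b), so the
# MLE is: mu_x from all x, (a, b) by least squares on the complete cases.
mu_x = x.mean()
Xc = np.column_stack([np.ones(n), x])[~miss]
a, b = np.linalg.lstsq(Xc, y[~miss], rcond=None)[0]
mu_y = a + b * mu_x        # implied marginal mean of Y

cc_mean = y[~miss].mean()  # naive complete-case mean of Y: biased under MAR
```

The observed-data MLE recovers the marginal mean of Y even though half the Y values are missing, whereas the complete-case average is pulled down because missingness is more likely at high x.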

    Generalized Information Matrix Tests for Detecting Model Misspecification

    Generalized Information Matrix Tests (GIMTs) have recently been used to detect misspecification in regression models in both randomized controlled trials and observational studies. In this paper, a unified GIMT framework is developed for identifying, classifying, and deriving novel model misspecification tests for finite-dimensional smooth probability models. These GIMTs include previously published as well as newly developed information matrix tests. To illustrate the application of the GIMT framework, we derived and assessed the performance of new GIMTs for binary logistic regression. Although all GIMTs exhibited good level and power performance for the larger sample sizes, GIMT statistics with fewer degrees of freedom, derived using log-likelihood third derivatives, exhibited improved level and power performance.
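Information-matrix tests of this family build on the information matrix equality: at a correctly specified model, the expected Hessian of the log-likelihood and the expected outer product of per-observation scores sum to zero. The sketch below checks that equality empirically for a correctly specified binary logistic regression; it is a minimal White-style illustration under invented parameter values, not any particular GIMT statistic from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = (rng.random(n) < p).astype(float)

# Fit the logistic regression by Newton-Raphson
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])
    beta += np.linalg.solve(hess, grad)

mu = 1.0 / (1.0 + np.exp(-X @ beta))
scores = X * (y - mu)[:, None]                          # per-observation scores
A = -(X.T @ (X * (mu * (1.0 - mu))[:, None])) / n       # mean log-likelihood Hessian
B = scores.T @ scores / n                               # mean outer product of gradients

gap = np.abs(A + B).max()            # information matrix equality: A + B ~ 0
ratio = np.trace(np.linalg.solve(-A, B))  # ~ model dimension (2) if well specified
```

Under misspecification (e.g., an omitted nonlinearity), `A + B` drifts away from zero, which is the discrepancy such tests turn into a formal statistic.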

    Catching Up on Health Outcomes: The Texas Medication Algorithm Project

    OBJECTIVE: To develop a statistic measuring the impact of algorithm-driven disease management programs on outcomes for patients with chronic mental illness that allowed treatment-as-usual controls to “catch up” to the early gains of treated patients. DATA SOURCES/STUDY SETTING: Statistical power was estimated from simulated samples representing effect sizes that grew, remained constant, or declined following an initial improvement. Estimates were based on the Texas Medication Algorithm Project's adult patients (age ≥ 18) with bipolar disorder (n = 267) who received care between 1998 and 2000 at 1 of 11 clinics across Texas. STUDY DESIGN: Study patients were assessed at baseline and at three-month follow-ups for a minimum of one year. Program tracks were assigned by clinic. DATA COLLECTION/EXTRACTION METHODS: Hierarchical linear modeling was modified to account for declining effects. Outcomes were based on the 30-item Inventory of Depressive Symptomatology—Clinician Version. PRINCIPAL FINDINGS: Declining-effect analyses had significantly greater power to detect program differences than traditional growth models in the constant- and declining-effects cases. Bipolar patients with severe depressive symptoms in an algorithm-driven disease management program reported fewer symptoms after three months, with treatment-as-usual controls “catching up” within one year. CONCLUSIONS: In addition to psychometric properties, data collection design, and power, investigators should consider how outcomes unfold over time when selecting an appropriate statistic to evaluate service interventions. Declining-effect analyses may be applicable to a wide range of treatment and intervention trials.
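The "catch-up" pattern the abstract describes can be sketched with a toy simulation (not TMAP data and not the paper's modified hierarchical linear model; every number below is invented): both arms improve over a year, the treated arm shows an early advantage that decays toward zero, and a constant-effect summary dilutes that early gain:

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_arm = 200
times = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # baseline, 3, 6, 9, 12 months
advantage = np.array([0.0, 5.0, 3.0, 1.5, 0.5])  # treated-arm gain that declines

base, slope, sd = 30.0, -8.0, 4.0                # both arms improve over time
control = base + slope * times + rng.normal(0, sd, (n_per_arm, times.size))
treated = base + slope * times - advantage + rng.normal(0, sd, (n_per_arm, times.size))

gap = control.mean(axis=0) - treated.mean(axis=0)  # observed advantage per visit
avg_gap = gap.mean()  # what a constant-effect model estimates: early gain diluted
```

A model that assumes a constant treatment effect averages the large three-month advantage with the near-zero twelve-month advantage, which is why a declining-effect specification can have greater power when the true effect fades.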