79 research outputs found

    Evaluating the Construct Validity of the Norwegian Version of the Level of Personality Functioning Scale - Brief Form 2.0 in a Large Clinical Sample

    The Level of Personality Functioning Scale - Brief Form 2.0 (LPFS-BF 2.0) is a 12-item self-report questionnaire developed to gain a quick impression of the severity of personality pathology according to the DSM-5 Alternative Model for Personality Disorders (AMPD). The current study evaluated the construct validity and reliability of the Norwegian version of the LPFS-BF 2.0 in a large clinical sample (N = 1673). Dimensionality was examined using confirmatory factor analysis and bifactor analysis, followed by an analysis of the distinctiveness of the subscales using the proportional reduction in mean squared error (PRMSE). Concurrent validity was examined using correlations with self-report questionnaires and clinical interviews assessing personality disorders according to Section II of the DSM-5. Taking the dimensionality and concurrent validity findings together, we found moderate to good support for the use of total scores for the Norwegian version of the LPFS-BF 2.0. We advise against the use of subscale scores, since the subscales provided only a small amount of reliable unique variance.
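
    The PRMSE criterion mentioned above can be made concrete with a small numerical sketch. The Python fragment below is a minimal illustration in the spirit of Haberman's PRMSE approach, not code from the study; the data layout and variable names are invented. It compares how well the true subscale score is recovered from the observed subscale score versus from the observed total score; a subscale only adds value if the first PRMSE exceeds the second.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_persons, n_items) score matrix."""
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    def prmse_subscale_vs_total(items, subscale_cols):
        """Return (PRMSE from subscale score, PRMSE from total score).

        Under classical test theory, predicting the true subscale score from
        the observed subscale score gives a PRMSE equal to the subscale
        reliability; predicting it from the observed total score gives the
        squared correlation between true subscale score and total score.
        """
        sub = items[:, subscale_cols].sum(axis=1)      # observed subscale score
        total = items.sum(axis=1)                      # observed total score
        rel_sub = cronbach_alpha(items[:, subscale_cols])
        var_sub, var_tot = sub.var(ddof=1), total.var(ddof=1)
        cov_st = np.cov(sub, total)[0, 1]
        var_true_sub = rel_sub * var_sub               # true-score variance
        cov_true_tot = cov_st - (1 - rel_sub) * var_sub  # strip shared error
        prmse_from_sub = rel_sub
        prmse_from_tot = cov_true_tot**2 / (var_true_sub * var_tot)
        return prmse_from_sub, prmse_from_tot

    A result like the one reported above corresponds to the PRMSE from the total score coming close to the PRMSE from the subscale score itself, leaving little reliable unique variance for the subscales.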

    Robust Automated Test Assembly for Testlet-Based Tests: An Illustration with Analytical Reasoning Items

    In many high-stakes testing programs, testlets are used to increase efficiency. Since responses to items belonging to the same testlet depend not only on the latent ability but also on correct reading, understanding, and interpretation of the shared stimulus, the assumption of local independence does not hold. Testlet response theory (TRT) models have been developed to deal with this dependency: for both logit and probit testlet models, a random testlet effect is added to the standard logit and probit item response theory (IRT) models. Even though this testlet effect makes the IRT models more realistic, applying these models in practice raises new questions, for example in automated test assembly (ATA). In many test assembly models, goals are formulated for the amount of information the test should provide about the candidates: Fisher information is either maximized or required to meet a prespecified target. Since TRT models contain a random testlet effect, Fisher information contains a random effect as well, and the question arises how this random effect should be dealt with in ATA. A method based on robust optimization techniques for handling the uncertainty in test assembly caused by random testlet effects is presented. The method is applied in the context of a high-stakes testing program, and the impact of this robust test assembly method is studied. Results are discussed, advantages of robust test assembly are highlighted, and recommendations for the use of the new method are given.
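
    As a minimal sketch of the underlying issue (not the authors' assembly model; the item parameters, ability point, and testlet-effect standard deviation below are invented), the following Python fragment simulates how a random testlet effect turns item Fisher information into a random quantity, and how a pessimistic lower quantile of that distribution could serve as a robust information value in an ATA objective or constraint.

    import numpy as np

    rng = np.random.default_rng(1)

    def fisher_info_2pl(theta, a, b, gamma):
        """Item information of a 2PL item whose effective difficulty is
        shifted by a person-specific random testlet effect gamma."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b - gamma)))
        return a**2 * p * (1.0 - p)

    theta = 0.0          # ability level at which information is targeted
    a, b = 1.2, -0.3     # illustrative discrimination and difficulty
    sigma_testlet = 0.8  # SD of the random testlet effect

    # Because gamma is random, Fisher information is random as well.
    gammas = rng.normal(0.0, sigma_testlet, size=5000)
    info_draws = fisher_info_2pl(theta, a, b, gammas)

    expected_info = info_draws.mean()            # classical expected information
    robust_info = np.quantile(info_draws, 0.05)  # pessimistic lower bound

    # A robust ATA model would build its test-information targets from values
    # like robust_info, so the assembled test still meets its goals under
    # unfavorable realizations of the testlet effects.
    print(f"expected: {expected_info:.3f}, robust (5% quantile): {robust_info:.3f}")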

    Inventory of assessment practices in people with profound intellectual and multiple disabilities in three European countries

    BACKGROUND: Knowledge about the quality of assessment methods used in the support of people with profound intellectual and multiple disabilities (PIMD) is scarce. This study aimed to provide an overview of the assessment methods used in practice and to examine whether these instruments have been studied for their psychometric properties in people with PIMD. METHOD: Professionals (N = 148) from three European countries completed a survey on assessment practices. We performed a literature search to find information about the psychometric properties of the instruments identified in the survey. RESULTS: Of the participants, 78.1% used assessments that were not developed for people with PIMD. Documentation on psychometric properties was found for 8 out of 116 instruments. CONCLUSIONS: Most of the instruments in use were not designed for people with PIMD, and information about their quality is lacking. Guidelines are needed regarding the use and development of assessment methods for people with PIMD.

    Visualizing Uncertainty to Promote Clinicians’ Understanding of Measurement Error

    Measurement error is an inherent part of any test score. This uncertainty is generally communicated in ways that can be difficult for clinical practitioners to understand. In this empirical study, we evaluated the impact of several communication formats on the interpretation of measurement accuracy and its influence on decision-making in clinical practice. We provided 230 clinical practitioners with score reports in five formats: textual, error bar, violin plot, diamond plot, and quantile dot plot. We found that quantile dot plots significantly increased accuracy in the assessment of measurement uncertainty compared with the other formats. However, a direct relation between visualization format and decision quality could not be found. Although traditional confidence intervals and error bars were favored by many participants due to their familiarity, responses revealed several misconceptions that call the suitability of these formats for communicating uncertainty into question. Our results indicate that new visualization formats can successfully reduce errors in interpretation.
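
    To make the quantile dot plot format concrete, the following Python fragment is a minimal sketch assuming a normal error model; the observed score, standard error of measurement, and bin width are invented for illustration. It draws 20 equally spaced quantiles of the score distribution and stacks them as dots, so that each dot represents 5% of the probability for the true score.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    observed_score, sem = 85.0, 5.0   # illustrative score and standard error
    n_dots, bin_width = 20, 2.5

    # 20 equally spaced quantiles of the assumed true-score distribution.
    quantiles = norm.ppf((np.arange(n_dots) + 0.5) / n_dots,
                         loc=observed_score, scale=sem)

    # Snap the quantiles to bins and stack the dots within each bin.
    bins = np.round(quantiles / bin_width) * bin_width
    fig, ax = plt.subplots()
    heights = {}
    for x in bins:
        heights[x] = heights.get(x, 0) + 1
        ax.plot(x, heights[x], "o", color="steelblue", markersize=12)
    ax.set_xlabel("Plausible true score")
    ax.set_yticks([])
    ax.set_title("Quantile dot plot: each dot is 5% probability")
    plt.show()

    Presenting the error distribution as a small number of countable dots, rather than as a continuous interval, is the design feature usually credited with making such plots easier to read off as probabilities.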
