
    Evaluating manifest monotonicity using Bayes factors

    The assumption of latent monotonicity in item response theory models for dichotomous data cannot be evaluated directly, but observable consequences such as manifest monotonicity facilitate the assessment of latent monotonicity in real data. Standard methods for evaluating manifest monotonicity typically produce a test statistic that is geared toward falsification, which can only provide indirect support in favor of manifest monotonicity. We propose the use of Bayes factors to quantify the degree of support available in the data in favor of or against manifest monotonicity. Through the use of informative hypotheses, this procedure can also be used to determine the support for manifest monotonicity over substantively or statistically relevant alternatives, rendering the procedure highly flexible. The performance of the procedure is evaluated using a simulation study, and its application is illustrated using empirical data.
    Keywords: Bayes factor, essential monotonicity, item response theory, latent monotonicity, manifest monotonicity
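
    The procedure described above can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation, of one common way to obtain a Bayes factor for an order constraint such as manifest monotonicity: the encompassing-prior approach, in which the Bayes factor of the constrained hypothesis against an unconstrained (encompassing) model is the ratio of the posterior to the prior probability mass satisfying the constraint. All counts below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: successes and trials per restscore group for one item.
    successes = np.array([12, 20, 31, 45])
    trials = np.array([30, 40, 50, 60])

    n_draws = 100_000
    # Encompassing model: independent Beta(1, 1) priors on each group proportion.
    prior_draws = rng.beta(1, 1, size=(n_draws, len(trials)))
    posterior_draws = rng.beta(1 + successes, 1 + trials - successes,
                               size=(n_draws, len(trials)))

    def prop_monotone(draws):
        # Proportion of draws in which the group proportions are non-decreasing.
        return np.mean(np.all(np.diff(draws, axis=1) >= 0, axis=1))

    # Bayes factor of the monotonicity hypothesis against the encompassing model.
    bf = prop_monotone(posterior_draws) / prop_monotone(prior_draws)
    print(f"BF(monotone vs. encompassing) ~ {bf:.2f}")

    A Bayes factor well above 1 would indicate support for manifest monotonicity relative to the unconstrained model, while a value well below 1 would indicate support against it.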

    Item-Score Reliability as a Selection Tool in Test Construction

    This study investigates the usefulness of item-score reliability as a criterion for item selection in test construction. Methods MS, λ6, and CA were investigated as item-assessment methods in item selection and compared to the corrected item-total correlation, which was used as a benchmark. An ideal ordering to add items to the test (bottom-up procedure) or omit items from the test (top-down procedure) was defined based on the population test-score reliability. The orderings the four item-assessment methods produced in samples were compared to the ideal ordering, and the degree of resemblance was expressed by means of Kendall's τ. To investigate the concordance of the orderings across 1,000 replicated samples, Kendall's W was computed for each item-assessment method. The results showed that for both the bottom-up and the top-down procedures, item-assessment method CA and the corrected item-total correlation most closely resembled the ideal ordering. Generally, all item-assessment methods resembled the ideal ordering better, and the concordance of the orderings was greater, for larger sample sizes and for greater variance of the item discrimination parameters.
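
    As a rough illustration of the comparisons described above, the following hypothetical sketch (not the study's code) compares sample item orderings to an ideal ordering using Kendall's τ and summarizes the concordance of the orderings across replicated samples with Kendall's W.

    import numpy as np
    from scipy.stats import kendalltau, rankdata

    rng = np.random.default_rng(7)

    ideal_order = np.arange(10)  # ideal position of each of 10 items

    # Hypothetical orderings produced by one item-assessment method in 5 samples.
    sample_orders = np.array([rng.permutation(10) for _ in range(5)])

    # Resemblance of each sample ordering to the ideal ordering (Kendall's tau).
    taus = [kendalltau(ideal_order, order).correlation for order in sample_orders]
    print("Kendall's tau per sample:", np.round(taus, 2))

    # Kendall's W: concordance of the orderings across the replicated samples.
    ranks = np.apply_along_axis(rankdata, 1, sample_orders)  # rank items per sample
    m, n = ranks.shape                                        # m samples, n items
    rank_sums = ranks.sum(axis=0)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    w = 12 * s / (m ** 2 * (n ** 3 - n))
    print(f"Kendall's W across samples: {w:.2f}")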

    Formulation of the Comfort Women Discourse in International Society

    Cystic fibrosis (CF) causes relatively high medical consumption, and a large part of the treatment takes place at home. Because data regarding non-hospital care are lacking, we wished to determine the costs of care of patients with CF outside the hospital. A questionnaire was sent to 73 patients with CF from two Dutch hospitals (response rate 64%, 14 children and 33 adults). Average consumption and average costs per patient per year were calculated for children and adults for six categories: non-hospital medical care; domestic help; diet; travelling because of CF; medication; and devices and special facilities at home, work or school. The average non-hospital costs of care amounted to £4,641 per child per year (range £712-13,269) and £10,242 per adult (range £1,653-26,571). Non-hospital medical care for children and adults accounted for, respectively, 8 and 5% of these costs, domestic help for 15 and 9%, diet for 10 and 7%, travelling because of CF for 4 and 8%, medication for 63 and 67%, and devices and special facilities at home, work or school for 1 and 4%. Non-hospital costs of care of cystic fibrosis are very high and amount to 50% of the total (medical and non-medical) lifetime costs of cystic fibrosis.
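
    As a purely illustrative calculation based on the figures reported above, the approximate per-category costs can be reconstructed from the average totals and the percentage shares (all values are taken from the abstract; rounding means the categories do not sum exactly to the totals).

    # Average non-hospital costs in pounds per patient per year (from the abstract).
    avg_total = {"child": 4641, "adult": 10242}
    shares = {  # (% of total for children, % of total for adults)
        "non-hospital medical care": (8, 5),
        "domestic help": (15, 9),
        "diet": (10, 7),
        "travelling because of CF": (4, 8),
        "medication": (63, 67),
        "devices and special facilities": (1, 4),
    }
    for category, (child_pct, adult_pct) in shares.items():
        child_cost = avg_total["child"] * child_pct / 100
        adult_cost = avg_total["adult"] * adult_pct / 100
        print(f"{category:32s} child ~ {child_cost:6.0f}  adult ~ {adult_cost:7.0f}")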

    Informed decision making about predictive DNA tests: arguments for more public visibility of personal deliberations about the good life

    Since its advent, predictive DNA testing has been perceived as a technology that may have considerable impact on the quality of people's lives. The decision whether or not to use this technology is up to the individual client. However, to enable well-considered decision making, both the negative and the positive freedom of the individual should be supported. In this paper, we argue that current professional and public discourse on predictive DNA testing is lacking when it comes to supporting positive freedom, because it is usually framed in terms of risk and risk management. We show how this ‘risk discourse’ steers thinking on the good life in a particular way. We go on to argue that empirical research into the actual deliberation and decision-making processes of individuals and families may be used to enrich the environment of personal deliberation in three ways: (1) it points to a richer set of values that deliberators can take into account, (2) it acknowledges the shared nature of genes, and (3) it shows how one might frame decisions in a non-binary way. We argue that the public sharing and discussing of stories about personal deliberations offers valuable input for others who face similar choices: it fosters their positive freedom to shape their view of the good life in relation to DNA diagnostics. We conclude by offering some suggestions as to how to realize such public sharing of personal stories.

    Molecular medicine and concepts of disease: the ethical value of a conceptual analysis of emerging biomedical technologies

    Although it is now generally acknowledged that new biomedical technologies often produce new definitions and sometimes even new concepts of disease, this observation is rarely used in research that anticipates potential ethical issues in emerging technologies. This article argues that it is useful to start with an analysis of implied concepts of disease when anticipating ethical issues of biomedical technologies. It shows, moreover, that it is possible to do so at an early stage, i.e., when a technology is only just emerging. The specific case analysed here is that of ‘molecular medicine’. This group of emerging technologies combines a ‘cascade model’ of disease processes with a ‘personal pattern’ model of bodily functioning. Whereas the ethical implications of the first are partly familiar from earlier (albeit controversial) forms of preventive and predictive medicine, those of the second are quite novel and potentially far-reaching.

    Why checking model assumptions using null hypothesis significance tests does not suffice: A plea for plausibility

    This article explores whether the null hypothesis significance testing (NHST) framework provides a sufficient basis for the evaluation of statistical model assumptions. It is argued that while NHST-based tests can provide some degree of confirmation for the model assumption under evaluation (formulated as the null hypothesis), these tests do not inform us of the degree of support that the data provide for the null hypothesis, nor of the extent to which the null hypothesis should be considered plausible after the data have been taken into account. Addressing the prior plausibility of the model assumption is unavoidable if the goal is to determine how plausible it is that the assumption holds. Without assessing the prior plausibility of the model assumptions, it remains fully uncertain whether the model of interest gives an adequate description of the data and thus whether it can be considered valid for the application at hand. Although addressing the prior plausibility is difficult, ignoring it is not an option if we want to claim that the inferences of our statistical model can be relied upon.
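
    The argument can be illustrated with a small numerical sketch (hypothetical numbers): even when the data support a model assumption to the same degree, the plausibility of the assumption after seeing the data depends strongly on how plausible it was beforehand.

    def posterior_probability(prior_prob, bayes_factor):
        # Posterior probability that the assumption holds, given its prior
        # probability and a Bayes factor in favour of the assumption.
        prior_odds = prior_prob / (1 - prior_prob)
        posterior_odds = bayes_factor * prior_odds
        return posterior_odds / (1 + posterior_odds)

    bf = 3.0  # hypothetical: the data are 3 times more likely under the assumption
    for prior in (0.1, 0.5, 0.9):
        print(f"prior = {prior:.1f} -> posterior = {posterior_probability(prior, bf):.2f}")

    With the same evidence, the posterior plausibility ranges from about 0.25 to about 0.96 depending on the prior plausibility, which is why ignoring the prior plausibility leaves the validity of the model assumption undetermined.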

    Evaluating model assumptions in item response theory

    This dissertation deals with the evaluation of model assumptions in the context of item response theory. Item response theory, also known as modern test theory, provides a statistical framework for the measurement of psychological constructs that cannot be observed directly, such as intelligence or depression. Item response theory models attempt to measure these psychological constructs by relating these 'latent traits' to observed responses on a set of items designed to measure them (e.g., an intelligence test). For these models to be valid, the assumptions defining them have to be valid. Two model assumptions are considered in particular: latent monotonicity and invariant item ordering. Observable consequences of these assumptions are presented, and statistical tests are proposed that evaluate these observable consequences, making it possible to evaluate the model assumptions themselves. The application of these procedures is illustrated using empirical data.
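
    As an illustration of the kind of observable consequence mentioned above, the following sketch (hypothetical data, not the dissertation's procedure) checks manifest monotonicity for one item: the proportion of correct responses should be non-decreasing across restscore groups, where the restscore is the total score on the remaining items.

    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.integers(0, 2, size=(500, 6))  # 500 persons, 6 dichotomous items

    def manifest_monotonicity_check(scores, item):
        # Proportion correct on `item` per restscore (total score on the other items).
        rest = scores.sum(axis=1) - scores[:, item]
        levels = np.unique(rest)
        props = np.array([scores[rest == r, item].mean() for r in levels])
        return levels, props

    levels, props = manifest_monotonicity_check(scores, item=0)
    print("restscore levels:   ", levels)
    print("proportion correct: ", np.round(props, 2))
    print("non-decreasing:     ", bool(np.all(np.diff(props) >= 0)))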