
    Influence of Context on Item Parameters in Forced-Choice Personality Assessments

    A fundamental assumption in computerized adaptive testing (CAT) is that item parameters are invariant with respect to context – the items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the influence of context on item parameters by comparing parameter estimates from two FC instruments. The first instrument was compiled from blocks of three items, whereas in the second, the context was manipulated by adding one item to each block, resulting in blocks of four. The item parameter estimates were highly similar. However, a small number of significant deviations were observed, confirming the importance of context when designing adaptive FC assessments. Two patterns of such deviations were identified, and methods to reduce their occurrence in an FC CAT setting were proposed. It was shown that with a small proportion of violations of the parameter invariance assumption, score estimation remained stable.
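The kind of invariance check the abstract describes (comparing item parameter estimates from two calibrations) can be sketched as a per-item z-test on the difference between estimates. This is a minimal illustration under assumed interfaces, not the paper's actual procedure; the dictionary inputs and the 1.96 threshold are assumptions.

```python
import math

def flag_noninvariant_items(est_a, se_a, est_b, se_b, z_crit=1.96):
    """Flag items whose parameter estimates differ significantly between
    two calibrations (e.g., three-item vs. four-item blocks).

    est_a/est_b map item id -> parameter estimate; se_a/se_b map
    item id -> standard error of that estimate.
    """
    flagged = []
    for item in est_a:
        diff = est_a[item] - est_b[item]
        se_diff = math.sqrt(se_a[item] ** 2 + se_b[item] ** 2)
        if abs(diff / se_diff) > z_crit:
            flagged.append(item)
    return flagged
```

Items that survive such a screen can be treated as context-invariant for adaptive administration; flagged items would be candidates for the block-design remedies the paper proposes.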

    Structural, item, and test generalizability of the Psychopathy Checklist-Revised to offenders with intellectual disabilities

    The Psychopathy Checklist–Revised (PCL-R) is the most widely used measure of psychopathy in forensic clinical practice, but the generalizability of the measure to offenders with intellectual disabilities (ID) has not been clearly established. This study examined the structural equivalence and scalar equivalence of the PCL-R in a sample of 185 male offenders with ID in forensic mental health settings, as compared with a sample of 1,212 male prisoners without ID. Three models of the PCL-R’s factor structure were evaluated with confirmatory factor analysis. The 3-factor hierarchical model of psychopathy was found to be a good fit to the ID PCL-R data, whereas neither the 4-factor model nor the traditional 2-factor model fitted. There were no cross-group differences in the factor structure, providing evidence of structural equivalence. However, item response theory analyses indicated metric differences in the ratings of psychopathy symptoms between the ID group and the comparison prisoner group. This finding has potential implications for the interpretation of PCL-R scores obtained with people with ID in forensic psychiatric settings.

    Measuring and Predicting Individual Differences in Executive Functions at 14 Months: A Longitudinal Study.

    This study of 195 children (108 boys) seen twice during infancy (Time 1: 4.12 months; Time 2: 14.42 months) aimed to investigate associations among executive function (EF) tasks at 14 months and their infant predictors. Infants showed high levels of compliance with the EF tasks at 14 months. There was little evidence of cohesion among EF tasks, but simple response inhibition was related to performance on two other EF tasks. Infant attention (but not parent-rated temperament) at 4 months predicted performance on two of the four EF tasks at 14 months. Results suggest that EF skills build on simpler component skills such as attention and response inhibition.

    Conceptualising computerized adaptive testing for measurement of latent variables associated with physical objects

    The notion that more or less of a physical feature affects, to different degrees, users' impressions of an underlying attribute of a product has frequently been applied in affective engineering. However, such attributes exist only as premises that cannot be measured directly, and inferences based on their assessment are therefore error-prone. To establish and improve the measurement of latent attributes, this paper presents the concept of a stochastic framework that applies the Rasch model to a wide range of independent variables, referred to as an item bank. Based on an item bank, computerized adaptive testing (CAT) can be developed. A CAT system can converge on a sequence of items bracketing a user's particular endorsement level, each item conveying maximal information at that level. It is through item banking and CAT that the financial benefits of using the Rasch model in affective engineering can be realised.
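A CAT built on a Rasch-calibrated item bank works roughly as the abstract outlines: each response updates the estimate of the latent level, and the next item is chosen to be maximally informative there. The sketch below is an illustrative minimal version (for the Rasch model, item information peaks where item difficulty equals the current estimate); the function names and the closest-difficulty heuristic are assumptions, not the paper's implementation.

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b
    at latent level theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, administered):
    """Pick the unadministered item whose difficulty is closest to the
    current estimate of theta, where Rasch item information is maximal.

    bank: list of item difficulties; administered: set of used indices.
    """
    candidates = [i for i in range(len(bank)) if i not in administered]
    return min(candidates, key=lambda i: abs(bank[i] - theta))
```

Iterating this selection rule produces the "bracketing" behaviour described above: successive items straddle the user's endorsement level ever more tightly.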

    Curtailment and Stochastic Curtailment to Shorten the CES-D

    The Center for Epidemiologic Studies-Depression (CES-D) scale is a well-known self-report instrument that is used to measure depressive symptomatology. Respondents who take the full-length version of the CES-D are administered a total of 20 items. This article investigates the use of curtailment and stochastic curtailment (SC), two sequential analysis methods that have recently been proposed for health questionnaires, to reduce the respondent burden associated with taking the CES-D. A post hoc simulation based on 1,392 adolescents' responses to the CES-D was used to compare these methods with a previously proposed computerized adaptive testing (CAT) approach. Curtailment lowered average test lengths by as much as 22% while always matching the classification decision of the full-length CES-D. SC and CAT achieved further reductions in average test length, with SC's classifications exhibiting more concordance with the full-length CES-D than do CAT's. Advantages and disadvantages of each method are discussed. © The Author(s) 2012
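Deterministic curtailment, the first of the two methods above, stops testing as soon as the remaining items can no longer change the full-length classification. The sketch below illustrates the idea for a sum-score screener; the 0-3 item scoring of the CES-D is standard, but the cutoff value and the function interface here are assumptions for illustration.

```python
def curtailed_decision(responses, n_items=20, max_item_score=3, cutoff=16):
    """Deterministic curtailment for a sum-score screener such as the CES-D.

    responses: item scores observed so far (each 0..max_item_score).
    Returns 'at-risk' or 'not-at-risk' once the full-length classification
    is already determined, or None if testing must continue.
    """
    observed = sum(responses)
    remaining = n_items - len(responses)
    if observed >= cutoff:
        return "at-risk"          # score can only stay at or above cutoff
    if observed + remaining * max_item_score < cutoff:
        return "not-at-risk"      # cutoff is no longer reachable
    return None
```

Because curtailment stops only when the outcome is fixed, it always matches the full-length classification, consistent with the perfect concordance reported above; stochastic curtailment relaxes the stopping rule to probable rather than certain outcomes, trading a little concordance for shorter tests.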

    Accelerated in vivo proliferation of memory phenotype CD4+ T-cells in human HIV-1 infection irrespective of viral chemokine co-receptor tropism.

    CD4(+) T-cell loss is the hallmark of HIV-1 infection. CD4 counts fall more rapidly in advanced disease, when CCR5-tropic viral strains tend to be replaced by X4-tropic viruses. We hypothesized: (i) that the early dominance of CCR5-tropic viruses results from faster turnover rates of CCR5(+) cells, and (ii) that X4-tropic strains exert greater pathogenicity by preferentially increasing turnover rates within the CXCR4(+) compartment. To test these hypotheses we measured in vivo turnover rates of CD4(+) T-cell subpopulations sorted by chemokine receptor expression, using in vivo deuterium-glucose labeling. Deuterium enrichment was modeled to derive in vivo proliferation (p) and disappearance (d*) rates, which were related to viral tropism data. Thirteen healthy controls and 13 treatment-naive HIV-1-infected subjects (CD4 143-569 cells/µl) participated. CCR5 expression defined a CD4(+) subpopulation of predominantly CD45R0(+) memory cells with accelerated in vivo proliferation (p = 2.50 vs 1.60%/d, CCR5(+) vs CCR5(-); healthy controls; P<0.01). Conversely, CXCR4 expression defined CD4(+) T-cells (predominantly CD45RA(+) naive cells) with low turnover rates. The dominant effect of HIV infection was accelerated turnover of CCR5(+)CD45R0(+)CD4(+) memory T-cells (p = 5.16 vs 2.50%/d, HIV vs controls; P<0.05), naïve cells being relatively unaffected. Similar patterns were observed whether the dominant circulating HIV-1 strain was R5-tropic (n = 9) or X4-tropic (n = 4). Although numbers were small, X4-tropic viruses did not appear to specifically drive turnover of CXCR4-expressing cells (p = 0.54 vs 0.72 vs 0.44%/d in control, R5-tropic, and X4-tropic groups respectively). Our data are most consistent with models in which CD4(+) T-cell loss is primarily driven by non-specific immune activation.
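For intuition, deuterium labeling data are typically fitted with simple accrual/loss kinetics: enrichment rises in proportion to the proliferation rate p during the labeling window and decays at the disappearance rate d* afterwards. The closed form below is one common minimal model and is offered only as an illustration; the study's actual model and parameter values may differ.

```python
import math

def label_enrichment(t, p, d_star, t_label):
    """Fraction of label in a cell population at time t (days).

    p       -- proliferation rate (fraction of pool per day)
    d_star  -- disappearance rate of labelled cells (per day)
    t_label -- duration of the labelling period (days)

    During labelling, enrichment rises towards p/d_star; after the
    label is withdrawn, it decays exponentially at rate d_star.
    """
    if t <= t_label:
        return (p / d_star) * (1.0 - math.exp(-d_star * t))
    peak = (p / d_star) * (1.0 - math.exp(-d_star * t_label))
    return peak * math.exp(-d_star * (t - t_label))
```

Fitting such a curve to measured enrichment yields p and d* estimates of the kind quoted above (e.g., p = 2.50 %/d would enter as p = 0.025).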

    Grifonin-1: A Small HIV-1 Entry Inhibitor Derived from the Algal Lectin, Griffithsin

    Background: Griffithsin, a 121-residue protein isolated from the red alga Griffithsia sp., binds the high-mannose N-linked glycans of virus surface glycoproteins with extremely high affinity, a property that allows it to prevent the entry of primary isolates and laboratory strains of T- and M-tropic HIV-1. We used a portion of griffithsin's sequence as a design template to create smaller peptides with antiviral and carbohydrate-binding properties. Methodology/Results: The new peptides were derived from a trio of homologous β-sheet repeats that comprise the motifs responsible for griffithsin's biological activity. Our most active antiviral peptide, grifonin-1 (GRFN-1), had an EC50 of 190.8±11.0 nM in in vitro TZM-bl assays and an EC50 of 546.6±66.1 nM in p24gag antigen release assays. GRFN-1 showed considerable structural plasticity, assuming different conformations in solvents that differed in polarity and hydrophobicity. At higher concentrations, GRFN-1 formed oligomers based on intermolecular β-sheet interactions. Like its parent protein, GRFN-1 bound the viral glycoproteins gp41 and gp120 via the N-linked glycans on their surface. Conclusion: Its substantial antiviral activity and low toxicity in vitro suggest that GRFN-1 and/or its derivatives may have therapeutic potential as topical and/or systemic agents directed against HIV-1.

    A Natural Human Retrovirus Efficiently Complements Vectors Based on Murine Leukemia Virus

    Background: Murine Leukemia Virus (MLV) is a rodent gammaretrovirus that serves as the backbone for common gene delivery tools designed for experimental and therapeutic applications. Recently, an infectious gammaretrovirus designated XMRV has been identified in prostate cancer patients. The similarity between the MLV and XMRV genomes suggests a possibility that the two viruses may interact when present in the same cell. Methodology/Principal Findings: We tested the ability of XMRV to complement replication-deficient MLV vectors upon coinfection of cultured human cells. We observed that XMRV can facilitate the spread of these vectors from infected to uninfected cells. This functional complementation occurred without any gross rearrangements in the vector structure, and the co-infected cells produced as many as 10⁴ infectious vector particles per milliliter of culture medium. Conclusions/Significance: The possibility of encountering a helper virus when delivering MLV-based vectors to human cells in vitro and in vivo needs to be considered to ensure the safety of such procedures.

    On environment difficulty and discriminating power

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10458-014-9257-1
    This paper presents a way to estimate the difficulty and discriminating power of any task instance. We focus on a very general setting for tasks: interactive (possibly multiagent) environments where an agent acts upon observations and rewards. Instead of analysing the complexity of the environment, the state space or the actions that are performed by the agent, we analyse the performance of a population of agent policies against the task, leading to a distribution that is examined in terms of policy complexity. This distribution is then sliced by the algorithmic complexity of the policy and analysed through several diagrams and indicators. The notion of environment response curve is also introduced, by inverting the performance results into an ability scale. We apply all these concepts, diagrams and indicators to two illustrative problems: a class of agent-populated elementary cellular automata, showing how the difficulty and discriminating power may vary for several environments, and a multiagent system, where agents can become predators or preys, and may need to coordinate. Finally, we discuss how these tools can be applied to characterise (interactive) tasks and (multi-agent) environments. These characterisations can then be used to get more insight about agent performance and to facilitate the development of adaptive tests for the evaluation of agent abilities. I thank the reviewers for their comments, especially those aiming at a clearer connection with the field of multi-agent systems and the suggestion of better approximations for the calculation of the response curves. The implementation of the elementary cellular automata used in the environments is based on the library 'CellularAutomaton' by John Hughes for R [58].
    I am grateful to Fernando Soler-Toscano for letting me know about their work [65] on the complexity of 2D objects generated by elementary cellular automata. I would also like to thank David L. Dowe for his comments on a previous version of this paper. This work was supported by the MEC/MINECO projects CONSOLIDER-INGENIO CSD2007-00022 and TIN 2010-21062-C02-02, GVA project PROMETEO/2008/051, the COST - European Cooperation in the field of Scientific and Technical Research IC0801 AT, and the REFRAME project, granted by the European Coordinated Research on Long-term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA) and funded by the Ministerio de Economía y Competitividad in Spain (PCIN-2013-037).
    José Hernández-Orallo (2015). On environment difficulty and discriminating power. Autonomous Agents and Multi-Agent Systems, 29(3), 402-454. https://doi.org/10.1007/s10458-014-9257-1
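The population-based approach in the abstract (rating a task by how a distribution of policies performs on it) can be sketched as follows. The `env_reward` interface, the fixed success threshold, and reading difficulty as a failure rate are simplifying assumptions for illustration; the paper itself slices the performance distribution by policy complexity rather than using a single aggregate.

```python
def environment_difficulty(env_reward, policies, threshold=0.5):
    """Estimate task difficulty as the proportion of a policy population
    failing to reach an acceptable (normalised) reward.

    env_reward: callable mapping a policy to its average reward in [0, 1].
    """
    failures = sum(1 for pi in policies if env_reward(pi) < threshold)
    return failures / len(policies)

def discriminating_power(env_reward, policies, threshold=0.5):
    """Crude discrimination proxy: a task discriminates best when the
    population splits evenly into successes and failures (p near 0.5)."""
    p = environment_difficulty(env_reward, policies, threshold)
    return 4.0 * p * (1.0 - p)   # 0 for trivial/impossible tasks, 1 at p = 0.5
```

On this reading, tasks that nearly every policy solves (or nearly every policy fails) carry little information about ability, which is the intuition behind using such indicators to build adaptive tests.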

    Psychometrics versus Representational Theory of Measurement

    Erik Angner has argued that simultaneous endorsement of the representational theory of measurement (RTM) and psychometrics leads to inconsistency. His claim rests on an implicit assumption: that RTM and psychometrics are full-fledged approaches to measurement. I argue that RTM and psychometrics are only partial approaches that deal with different aspects of measurement, and that simultaneous endorsement of the two is therefore not inconsistent. The argument has implications for the improvement of measurement practices. The author gratefully acknowledges research funding from the following institutions: the Cambridge AHRC (Arts and Humanities Research Council) Doctoral Training Partnership; the British Society for the Philosophy of Science; the Cambridge Commonwealth, European and International Trust; and Newnham College.