
    The Dutch Individualised Care Scale for patients and nurses: a psychometric validation study

    Aims and objectives: Translating and psychometrically assessing the Individualised Care Scale (ICS) for patients and nurses for the Flemish and Dutch healthcare context. Background: Individualised care interventions have positive effects on health outcomes. However, there are no valid and reliable instruments for evaluating individualised care for the Flemish and Dutch healthcare context. Design: Psychometric validation study. Setting and participants: In Flemish hospitals, data were collected between February and June 2016, and in Dutch hospitals, data were collected between December 2014 and May 2015. Nurses with direct patient contact and at least 6 months' working experience on the ward could participate. Patient inclusion criteria were being an adult, being mentally competent, having an expected hospital stay of at least 1 day, and being able to speak and read the Dutch language. In total, 845 patients and 569 nurses were included. Methods: The ICS was translated into Dutch using a forward–backward translation process. Minimal linguistic adaptations to the Dutch ICS were made to use the scale as a Flemish equivalent. Omega, Cronbach's Alpha, mean inter-item correlations, and standardised subscale correlations established the reliability of the ICS, and confirmatory factor analysis established its construct validity. Results: Internal consistency using Omega (Cronbach's Alpha) ranged from 0.83 to 0.96 (0.82–0.95) for the ICSNurse and from 0.88 to 0.96 (0.87–0.96) for the ICSPatient. Fit indices of the confirmatory factor analysis indicated a good model fit, except for the root mean square error of approximation, which indicated only moderate model fit. Conclusion: The Dutch version of the ICS showed acceptable psychometric performance, supporting its use for the Dutch and Flemish healthcare context.
Relevance to clinical practice: Knowledge of nurses' and patients' perceptions of individualised care will help identify areas in the Dutch and Flemish healthcare context where work is needed to provide individualised nursing care.
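The internal-consistency figures reported above can be illustrated with a short sketch. The following is a minimal, hypothetical computation of Cronbach's Alpha from an item-score matrix; it is not the study's analysis code, and the response data are invented for illustration only.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented 5-point Likert responses: 6 respondents, 4 items
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))
```

Omega differs in that it is computed from factor loadings rather than raw item variances, which is why the paper reports the two coefficients side by side.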

    Can the g Factor Play a Role in Artificial General Intelligence Research?

    In recent years, a trend in AI research has emerged pursuing human-level, general artificial intelligence (AGI). Although the AGI framework is characterised by different viewpoints on what intelligence is and how to implement it in artificial systems, it conceptualises intelligence as flexible, general-purpose, and capable of self-adapting to different contexts and tasks. Two important questions remain open: a) should AGI projects simulate the biological, neural, and cognitive mechanisms realising human intelligent behaviour? and b) what is the relationship, if any, between the concept of general intelligence adopted by AGI and that adopted by psychometricians, i.e., the g factor? In this paper, we address these questions and invite researchers in AI to open a discussion on the theoretical conceptions and practical purposes of the AGI approach.

    Joint perceptual decision-making: a case study in explanatory pluralism.

    Traditionally, different approaches to the study of cognition have been viewed as competing explanatory frameworks. An alternative view, explanatory pluralism, regards different approaches to the study of cognition as complementary ways of studying the same phenomenon, at specific temporal and spatial scales, using appropriate methodological tools. Explanatory pluralism has often been described abstractly, but has rarely been applied to concrete cases. We present a case study of explanatory pluralism. We discuss three separate ways of studying the same phenomenon: a perceptual decision-making task (Bahrami et al., 2010), where pairs of subjects share information to jointly individuate an oddball stimulus among a set of distractors. Each approach analyzed the same corpus but targeted different units of analysis at different levels of description: decision-making at the behavioral level, confidence sharing at the linguistic level, and acoustic energy at the physical level. We discuss the utility of explanatory pluralism for describing this complex, multiscale phenomenon, show ways in which this case study sheds new light on the concept of pluralism, and highlight good practices to critically assess and complement approaches.

    Methods in Psychological Research

    Psychologists collect empirical data with various methods for different reasons. These diverse methods have their strengths as well as weaknesses. Nonetheless, it is possible to rank them in terms of different criteria. For example, the experimental method is used to obtain the least ambiguous conclusion. Hence, it is best suited to corroborate conceptual, explanatory hypotheses. The interview method, on the other hand, gives the research participants a kind of empathic experience that may be important to them. It is for this reason the best method to use in a clinical setting. All non-experimental methods owe their origin to the interview method. Quasi-experiments are suited for answering practical questions when ecological validity is important.

    A computerized test of speed of language comprehension unconfounded by literacy

    A computerised version of the Silly Sentences task developed for use with children (Baddeley et al., 1995) is found to be equivalent to the pencil-and-paper version from the SCOLP Test (Baddeley et al., 1992) with UK undergraduates, and is usable by a sample of young UK children. Because the sentences are presented aloud instead of being written, the computerised test is not affected by literacy skills. Translated into Kiswahili, the task was used in Tanzanian schools, despite the absence of an electricity supply and a very different cultural background. The decision latencies had a test-retest reliability of 0.69 over 5 months, and were independent of age and baseline decision speed. The task appears appropriate for longitudinal studies, including those in developing countries. Given its simplicity and the correlations with the original SCOLP version of the task, it may also be useful in studies on literate adults.

    The Road Ahead for State Assessments

    The adoption of the Common Core State Standards offers an opportunity to make significant improvements to the large-scale statewide student assessments that exist today, and the two US DOE-funded assessment consortia -- the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) -- are making big strides forward. But to take full advantage of this opportunity the states must focus squarely on making assessments both fair and accurate. A new report commissioned by the Rennie Center for Education Research & Policy and Policy Analysis for California Education (PACE), The Road Ahead for State Assessments, offers a blueprint for strengthening assessment policy, pointing out how new technologies are opening up new possibilities for fairer, more accurate evaluations of what students know and are able to do. Not all of the promises can yet be delivered, but the report provides a clear set of assessment-policy recommendations. The Road Ahead for State Assessments includes three papers on assessment policy. The first, by Mark Reckase of Michigan State University, provides an overview of computer adaptive assessment. Computer adaptive assessment is an established technology that offers detailed information on where students are on a learning continuum rather than a summary judgment about whether or not they have reached an arbitrary standard of "proficiency" or "readiness."
Computer adaptivity will support the fair and accurate assessment of English learners (ELs) and lead to a serious engagement with the multiple dimensions of "readiness" for college and careers. The second and third papers give specific attention to two areas in which we know that current assessments are inadequate: assessments in science and assessments for English learners. In science, paper-and-pencil, multiple choice tests provide only weak and superficial information about students' knowledge and skills -- most specifically about their abilities to think scientifically and actually do science. In their paper, Chris Dede and Jody Clarke-Midura of Harvard University illustrate the potential for richer, more authentic assessments of students' scientific understanding with a case study of a virtual performance assessment now under development at Harvard. With regard to English learners, administering tests in English to students who are learning the language, or to speakers of non-standard dialects, inevitably confounds students' content knowledge with their fluency in Standard English, to the detriment of many students. In his paper, Robert Linquanti of WestEd reviews key problems in the assessment of ELs, and identifies the essential features of an assessment system equipped to provide fair and accurate measures of their academic performance. The report's contributors offer deeply informed recommendations for assessment policy, but three are especially urgent. Build a system that ensures continued development and increased reliance on computer adaptive testing. Computer adaptive assessment provides the essential foundation for a system that can produce fair and accurate measurement of English learners' knowledge and of all students' knowledge and skills in science and other subjects.
Developing computer adaptive assessments is a necessary intermediate step toward a system that makes assessment more authentic by tightly linking its tasks and instructional activities and ultimately embedding assessment in instruction. It is vital for both consortia to keep these goals in mind, even in light of current technological and resource constraints. Integrate the development of new assessments with assessments of English language proficiency (ELP). The next generation of ELP assessments should take into consideration an English learner's specific level of proficiency in English. They will need to be based on ELP standards that sufficiently specify the target academic language competencies that English learners need to progress in and gain mastery of the Common Core Standards. One of the report's authors, Robert Linquanti, states: "Acknowledging and overcoming the challenges involved in fairly and accurately assessing ELs is integral and not peripheral to the task of developing an assessment system that serves all students well. Treating the assessment of ELs as a separate problem -- or, worse yet, as one that can be left for later -- calls into question the basic legitimacy of assessment systems that drive high-stakes decisions about students, teachers, and schools." Include virtual performance assessments as part of comprehensive state assessment systems. Virtual performance assessments have considerable promise for measuring students' inquiry and problem-solving skills in science and in other subject areas, because authentic assessment can be closely tied to or even embedded in instruction. The simulation of authentic practices in settings similar to the real world opens the way to assessment of students' deeper learning and their mastery of 21st century skills across the curriculum.
We are just setting out on the road toward assessments that ensure fair and accurate measurement of performance for all students, and support for sustained improvements in teaching and learning. Developing assessments that realize these goals will take time, resources, and long-term policy commitment. PARCC and SBAC are taking the essential first steps down a long road, and new technologies have begun to illuminate what's possible. This report seeks to keep policymakers' attention focused on the road ahead, to ensure that the choices they make now move us further toward the goal of college and career success for all students. This publication was released at an event on May 16, 2011.
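The core idea of computer adaptive assessment described above -- matching the next item's difficulty to the test-taker's current ability estimate -- can be sketched in a few lines. This is a toy illustration under a Rasch (one-parameter logistic) model, not the selection algorithm of either consortium; the item bank and the crude ability-update step are invented for illustration.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """P(correct response) under the Rasch model for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta: float, difficulties, administered) -> int:
    """Pick the unadministered item whose difficulty is closest to theta
    (this maximises Fisher information under the Rasch model)."""
    remaining = [i for i in range(len(difficulties)) if i not in administered]
    return min(remaining, key=lambda i: abs(difficulties[i] - theta))

# Invented item bank (difficulties on the logit scale)
bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta, administered = 0.0, set()
for correct in (True, True, False):          # simulated responses
    item = next_item(theta, bank, administered)
    administered.add(item)
    p = rasch_prob(theta, bank[item])        # model's predicted P(correct)
    theta += 0.5 if correct else -0.5        # crude fixed-step ability update
print(sorted(administered))
```

Because each response shifts the ability estimate, successive items climb (or descend) the difficulty scale, which is how adaptive tests locate a student on a learning continuum with fewer items than a fixed-form test. Operational systems replace the fixed-step update with a maximum-likelihood or Bayesian estimate.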