473,091 research outputs found

    Consistency-Based Reliability Assessment

    Get PDF
    This paper addresses the question of assessing the relative reliability of unknown information sources. We propose to consider a phase during which the consistency of the information they report is analysed, whether it is the consistency of each single report, the consistency of a report with respect to some trusted knowledge, or the consistency of different reports taken together. We adopt an axiomatic approach: we first give postulates characterizing the properties the resulting reliability preorder should have, then define a family of operators for building this preorder and demonstrate that they satisfy the proposed postulates.

    Towards Consistency-Based Reliability Assessment

    Get PDF
    MOTIVATION: Merging information provided by several sources is an important issue, and merging techniques have been extensively studied. When the reliability of the sources is not known, one can apply merging techniques such as majority merging, arbitration merging, or distance-based merging to resolve conflicts between pieces of information. Conversely, if the reliability of the sources is known, represented either quantitatively or qualitatively, it can be used to manage contradictions: information provided by a source is generally weakened or ignored if it contradicts information provided by a more reliable source [1, 4, 6]. Assessing the reliability of information sources is thus crucial, and the present paper addresses this key question. We adopt a qualitative point of view for reliability representation by assuming that the relative reliability of information sources is represented by a total preorder. This work assumes that we have no prior information about the sources; in particular, we do not know whether they are correct (i.e., whether they provide true information). We focus on a preliminary stage of observation and assessment of the sources. We claim that during this stage the key issue is a consistency analysis of the information provided by the sources, whether it is the consistency of single reports, their consistency with respect to trusted knowledge, or the consistency of different reports taken together. We adopt an axiomatic approach: we first give postulates characterizing what this reliability preorder should be, then define a generic operator for building this preorder in agreement with the postulates.
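
    To make the idea concrete, here is a purely illustrative sketch (not the paper's operators, which are defined axiomatically over logical belief bases): sources are ranked into a total preorder by counting how many of the three kinds of consistency checks named above their reports pass. Reports are modelled naively as sets of signed literals, and all names and data in the example are hypothetical.

```python
# Illustrative sketch only: rank sources by how many consistency checks
# their reports pass -- self-consistency, consistency with trusted
# knowledge, and pairwise consistency with other sources' reports.
# A report is a set of signed literals, e.g. {"p", "!q"}; it is
# inconsistent when it contains a literal together with its negation.
from itertools import combinations

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("!") else "!" + lit

def consistent(literals: set[str]) -> bool:
    return not any(negate(l) in literals for l in literals)

def reliability_preorder(reports: dict[str, set[str]], trusted: set[str]):
    """Return sources sorted from most to least reliable by consistency score."""
    scores = {}
    for src, rep in reports.items():
        score = 0
        score += consistent(rep)                      # single-report consistency
        score += consistent(rep | trusted)            # consistency with trusted knowledge
        for other, other_rep in reports.items():
            if other != src:
                score += consistent(rep | other_rep)  # pairwise consistency
        scores[src] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    reports = {
        "s1": {"p", "q"},
        "s2": {"p", "!q"},
        "s3": {"r", "!r"},   # internally inconsistent source
    }
    trusted = {"q"}
    print(reliability_preorder(reports, trusted))   # s1 ranked above s2, s3 last
```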

    Internal Consistency Reliability Of Instruments Measuring Students Satisfaction As An Internal Customer (Application Of Factor Analysis)

    Get PDF
    Reliability is the consistency of an instrument in measuring whatever it measures. Different methods are available for estimating internal consistency reliability based on a single administration of a given assessment. Measures of internal consistency are a popular set of assessments, with Cronbach's alpha (α) being the most favoured. Two other measures of internal consistency are theta (Θ) and omega (Ω). Each of the three measures and its computation is described using an instrument for measuring students' satisfaction as internal customers. Students' satisfaction is the level of a student's felt state resulting from comparing a product's perceived performance (outcome) in relation to the student's expectations. The purpose of this study was to determine which of the three measures is the highest. The research was survey research using simple random sampling. The instrument was based on the definition above and was tried out on 103 postgraduate students of the State University of Jakarta (Universitas Negeri Jakarta). Because alpha (α) is a lower-bound reliability estimate, the following holds for this instrument: α < Θ < Ω. It can be concluded that the questionnaire measuring students' satisfaction has appropriate internal consistency reliability. A further tryout is still needed to standardize the instrument. Key words: internal consistency reliability, α, Θ, Ω, students' satisfaction as an internal customer
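
    As a concrete illustration of how such coefficients are obtained (this is not the paper's own analysis code), the sketch below computes Cronbach's alpha and the largest-eigenvalue theta from a hypothetical respondents-by-items score matrix; omega additionally needs item communalities from a factor analysis and is only noted in a comment. All data are simulated.

```python
# Illustrative sketch: Cronbach's alpha and Armor's theta from an
# items-by-respondents score matrix using NumPy. Omega would further
# require communalities estimated by a factor analysis, so it is omitted.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def armor_theta(scores: np.ndarray) -> float:
    """Theta from the largest eigenvalue of the item correlation matrix."""
    k = scores.shape[1]
    corr = np.corrcoef(scores, rowvar=False)
    lam1 = np.linalg.eigvalsh(corr).max()
    return (k / (k - 1)) * (1 - 1.0 / lam1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(103, 1))                        # hypothetical 103 respondents
    items = latent + rng.normal(scale=0.8, size=(103, 10))    # 10 correlated items
    print("alpha:", round(cronbach_alpha(items), 3))
    print("theta:", round(armor_theta(items), 3))
```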

    Design, validation and dissemination of an undergraduate assessment tool using SimMan® in simulated medical emergencies

    Get PDF
    Background: Increasingly, medical students are being taught acute medicine using whole-body simulator manikins. Aim: We aimed to design, validate and make widely available two simple assessment tools to be used with the Laerdal SimMan® for final-year students. Methods: We designed two scenarios with criterion-based checklists focused on the assessment and management of two medical emergencies. Members of faculty critiqued the assessments for face validity and the checklists were revised. We assessed three groups with different experience levels: Foundation Year 2 doctors, and third-year and final-year medical students. Differences between groups were analysed, and internal consistency and inter-rater reliability were calculated. A generalisability analysis was conducted using scenario and rater as facets in the design. Results: A maximum of two items were removed from either checklist following the initial survey. Significantly different scores for the three experience groups were reported for both scenarios, and inter-rater reliability was high (>0.90). Internal consistency was poor (alpha < 0.5). The generalisability study results suggest that four cases would provide reliable discrimination between final-year students. Conclusions: These assessments proved easy to administer, and we have gone some way towards demonstrating construct validity and reliability. We have made the material available on a simulator website to enable others to reproduce these assessments.
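
    For intuition about the generalisability result (the study's actual G-study is not reproduced here), the Spearman-Brown prophecy formula sketched below shows how composite reliability grows with the number of cases; the single-case reliability of 0.50 is an assumed value, not a figure from the study.

```python
# Back-of-the-envelope illustration only, not the study's analysis.
# Spearman-Brown prophecy: projected reliability of a composite of
# n parallel cases given the reliability of a single case.
def spearman_brown(single_case_reliability: float, n_cases: int) -> float:
    r = single_case_reliability
    return n_cases * r / (1 + (n_cases - 1) * r)

if __name__ == "__main__":
    for n in range(1, 7):
        print(n, "cases ->", round(spearman_brown(0.50, n), 2))
    # With an assumed single-case reliability of 0.50, four cases already
    # reach a composite reliability of 0.80, in the spirit of the conclusion
    # that four cases would discriminate reliably between final-year students.
```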

    Inter-rater reliability, intra-rater reliability and internal consistency of the Brisbane Evidence-Based Language Test

    Get PDF
    Purpose: To examine the inter-rater reliability, intra-rater reliability, internal consistency and practice effects associated with a new test, the Brisbane Evidence-Based Language Test. Methods: Reliability estimates were obtained in a repeated-measures design through analysis of clinician video ratings of stroke participants completing the Brisbane Evidence-Based Language Test. Inter-rater reliability was determined by comparing 15 independent clinicians’ scores of 15 randomly selected videos. Intra-rater reliability was determined by comparing two clinicians’ scores of 35 videos when re-scored after a two-week interval. Results: Intraclass correlation coefficient (ICC) analysis demonstrated almost perfect inter-rater reliability (0.995; 95% confidence interval: 0.990–0.998), intra-rater reliability (0.994; 95% confidence interval: 0.989–0.997) and internal consistency (Cronbach’s α = 0.940; 95% confidence interval: 0.920–1.0). Almost perfect correlations (0.998; 95% confidence interval: 0.995–0.999) between face-to-face and video ratings were obtained. Conclusion: The Brisbane Evidence-Based Language Test demonstrates almost perfect inter-rater reliability, intra-rater reliability and internal consistency. High correlation coefficients and narrow confidence intervals demonstrated minimal practice effects with scoring or influence of years of clinical experience on test scores. Almost perfect correlations between face-to-face and video scoring methods indicate these reliability estimates have direct application to everyday practice. The test is available from brisbanetest.org. Implications for Rehabilitation: The Brisbane Evidence-Based Language Test is a new measure for the assessment of acquired language disorders. The Brisbane Evidence-Based Language Test demonstrated almost perfect inter-rater reliability, intra-rater reliability and internal consistency. High reliability estimates and narrow confidence intervals indicated that test ratings vary minimally when administered by clinicians of different experience levels, or different levels of familiarity with the new measure. The test is a reliable measure of language performance for use in clinical practice and research.
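
    The abstract does not state which ICC model was used, so as a hedged sketch the code below computes a two-way random-effects, absolute-agreement, single-rater ICC(2,1) from an ANOVA decomposition of a subjects-by-raters matrix; the data in the example are simulated, not the study's.

```python
# Rough sketch (not the study's analysis code): ICC(2,1) for inter-rater
# reliability, computed from the ANOVA mean squares of a subjects x raters
# score matrix.
import numpy as np

def icc2_1(x: np.ndarray) -> float:
    """x: subjects (rows) x raters (columns) matrix of scores."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                  # between-subject mean square
    msc = ss_cols / (k - 1)                  # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

if __name__ == "__main__":
    # Hypothetical data: 15 videos each scored by 3 raters.
    rng = np.random.default_rng(1)
    true_scores = rng.uniform(20, 60, size=(15, 1))
    ratings = true_scores + rng.normal(scale=1.0, size=(15, 3))
    print("ICC(2,1):", round(icc2_1(ratings), 3))
```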

    RELIABILITY OF THE DYNAMIC OCCUPATIONAL THERAPY COGNITIVE ASSESSMENT FOR CHILDREN (DOTCA-CH): THAI VERSION OF ORIENTATION, SPATIAL PERCEPTION, AND THINKING OPERATIONS SUBTESTS

    Get PDF
    The Dynamic Occupational Therapy Cognitive Assessment for Children (DOTCA-Ch) is a tool for identifying cognitive problems in school-aged children. However, the DOTCA-Ch was developed in English for Western children and is therefore not appropriate for Thai children because of differences in culture and language. The objectives of this study were to translate the Orientation, Spatial Perception, and Thinking Operations subtests of the DOTCA-Ch into a Thai version using the World Health Organization back-translation process, and to examine its internal consistency, inter-rater reliability and test-retest reliability. Participants consisted of 38 intellectually impaired and learning-disabled individuals between the ages of 6 and 12 years. Results from this study revealed high internal consistency in the Orientation subtest (α = .83), the Spatial Perception subtest (α = .82) and the Thinking Operations subtest (α = .82); high inter-rater reliability in the Orientation subtest (ICC = .83), the Spatial Perception subtest (ICC = .84) and the Thinking Operations subtest (ICC = .74); and high test-retest reliability in the Orientation subtest (ICC = .84), the Spatial Perception subtest (ICC = .86) and the Thinking Operations subtest (ICC = .85). These results indicate that the Thai version of the Orientation, Spatial Perception, and Thinking Operations subtests might be used as an appropriate assessment tool for Thai children, based on psychometric evidence including internal consistency, inter-rater reliability and test-retest reliability. However, additional study of other psychometric properties, including predictive validity, concurrent reliability, and inter-rater reliability during the mediation process of this assessment tool, still needs to be carried out.

    Reliability of team-based self-monitoring in critical events: A pilot study

    No full text
    Background: Teamwork is a critical component during critical events. Assessment is mandatory for remediation and to target training programmes at observed performance gaps. Methods: The primary purpose was to test the feasibility of team-based self-monitoring of crisis resource management with a validated teamwork assessment tool. A secondary purpose was to assess item-specific reliability and content validity in order to develop a modified, context-optimised assessment tool. We conducted a prospective, single-centre study to assess team-based self-monitoring of teamwork after in-situ inter-professional simulated critical events by comparison with an assessment by observers. The Mayo High Performance Teamwork Scale (MHPTS) was used as the assessment tool, with evaluation of internal consistency, item-specific consensus estimates for agreement between participating teams and observers, and content validity. Results: 105 participants and 58 observers completed the MHPTS after a total of 16 simulated critical events over 8 months. Summative internal consistency of the MHPTS, calculated as Cronbach's alpha, was acceptable at 0.712 for observers and 0.710 for participants. The overall consensus estimate for dichotomous data (agreement/non-agreement) was 0.62 (Cohen's kappa; interquartile range 0.31-0.87). 6/16 items had excellent reliability (kappa > 0.8) and 3/16 had good reliability (kappa > 0.6). Short questions concerning easy-to-observe behaviours were more likely to be reliable. The MHPTS was modified using a threshold for good reliability of kappa > 0.6. The result is a 9-item self-assessment tool (TeamMonitor) with a calculated median kappa of 0.86 (interquartile range: 0.67-1.0) and good content validity. Conclusions: Team-based self-monitoring with the MHPTS to assess team performance during simulated critical events is feasible. A context-based modification of the tool is achievable with good internal consistency and content validity. Further studies are needed to investigate whether team-based self-monitoring may be used as part of a programme of assessment to target training programmes at observed performance gaps. © 2013 Stocker et al.; licensee BioMed Central Ltd
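
    As a minimal sketch of the item-level consensus estimate described above (with made-up ratings, not study data), Cohen's kappa for one dichotomously coded MHPTS item can be computed as follows.

```python
# Minimal sketch (hypothetical ratings, not study data): item-level consensus
# between team self-ratings and observer ratings expressed as Cohen's kappa
# on dichotomous agree/disagree codes.
from sklearn.metrics import cohen_kappa_score

# Hypothetical dichotomous codes (1 = behaviour observed, 0 = not observed)
# for one MHPTS item across 16 simulated events.
team_ratings     = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
observer_ratings = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(team_ratings, observer_ratings)
print(f"Cohen's kappa for this item: {kappa:.2f}")
# Items with kappa > 0.6 were retained in the modified 9-item tool (TeamMonitor).
```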

    Assessing victim risk in cases of violent crime

    Get PDF
    Purpose: There is a body of evidence suggesting that a range of psychosocial characteristics place certain adults at an elevated risk of victimisation. To this end, the aim of the current study was to examine consistency between one police force and a corresponding victim support service in England in their assessment of the level of risk faced by victims of violent crime. Methodology: This study explored matched data on 869 adult victims of violent crime gathered from these two key services in Preston, namely Lancashire Constabulary and Victim Support, from which a sub-group of comparable 'domestic violence' cases (n=211) was selected for further examination. Findings: Data analyses revealed methodological inconsistencies in the assessment of victimisation, resulting in discrepancies in recorded levels of risk in domestic violence cases across the two agencies. Practical implications: These findings provide a compelling argument for developing a more uniform approach to victim assessment and indicate a significant training need. Value: This paper highlights areas of good practice and puts forward several recommendations for improved practice, emphasising the integration of empirical research conducted by psychologists to boost the validity and reliability of the risk assessment approaches and tools used.

    Implementation of a Fashion Drawing Evaluation Instrument in Private Tourism-Group Vocational High Schools in Sleman Regency

    Full text link
    The objectives of the study were to determine the application procedure of the fashion drawing evaluation instrument, to analyse the consistency of the evaluation results, and to determine the communicative reporting procedure of the evaluation results in the tourism departments of vocational high schools in Sleman regency, Yogyakarta. This was a survey study. The population of the study comprised teachers and students. Thirty samples were selected through purposive sampling, and six teachers served as the assessors. Data were collected using consistency assessment instruments, and the reliability index was examined. The results of the fashion drawing assessment were reported in student profiles. The data were analysed using quantitative descriptive analysis with percentages. The consistency of the assessment was examined using Cronbach's alpha analysis, while the reporting procedure was analysed descriptively based on the calculation of the final score. The results showed that the application procedure of the evaluation instrument starts with the teachers preparing the assessment instruments, which include the materials, the grid, the test items and the rubric; based on the criteria, this is in the very good category with a mean score of 56.83. The consistency of the assessment results is in the good category with a mean value of 0.800. Based on the competency profile, the mean score of the students' assessment results was 77.3. With the minimum score set at 70, all of the samples in this study were categorised as competent in fashion drawing.