Assessing Information Literacy Skills: A Rubric Approach
Academic librarians should explore new approaches to the assessment of information literacy skills. Satisfaction surveys and input/output measures do not provide librarians with adequate information about what students know and can do, and standardized multiple-choice tests and large-scale performance assessments likewise fail to provide the data librarians need to improve instruction locally. Facing accountability pressures and seeking to improve student learning, librarians require a new approach to library instruction assessment. This study investigated the viability of a rubric approach to information literacy assessment, examining an analytic information literacy rubric designed to assess students' ability to evaluate website authority. The study addressed three questions: (1) To what degree can different groups of raters score student learning artifacts consistently using a rubric? (2) To what degree can raters provide scores consistent with those assigned by the researcher? (3) To what degree can students use authority as a criterion to evaluate websites? The study revealed that multiple raters can use rubrics to score information literacy artifacts of student learning consistently; however, different groups of raters arrived at varying levels of agreement. For example, ENG 101 instructors produced significantly higher reliabilities than NCSU librarians and ENG 101 students, and NCSU librarians produced markedly higher levels of agreement than external instruction and reference librarians. In addition to these findings about the five original rater groups, the study documented the emergence of an "expert" rater group, identified through kappa statistics and a "gold standard" approach to the examination of validity. These raters not only approximated the researcher's scores but also achieved higher levels of agreement than any of the five original groups, and the study suggests that librarians may require substantial training to overcome the barriers to expert rater status. Finally, the study found that most students can cite specific indicators of authority when evaluating a website, and nearly all can locate and identify these indicators in a website; however, many students have difficulty choosing an appropriate website for a specific assignment and providing a rationale for their choice.
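As a rough illustration of the kappa statistics the study used to compare raters against a "gold standard," here is a minimal Python sketch. It is not drawn from the study itself: the three-level scale, the scores, and the rater labels are hypothetical, and Cohen's kappa is just one member of the kappa family of agreement statistics.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: agreement between two raters beyond chance."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        # Observed agreement: proportion of items scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement from each rater's marginal distribution.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical rubric scores (1 = beginning, 2 = developing, 3 = exemplary)
    # for ten student artifacts: one rater vs. the researcher's "gold standard".
    librarian  = [3, 2, 2, 1, 3, 2, 1, 3, 2, 2]
    researcher = [3, 2, 1, 1, 3, 2, 1, 3, 3, 2]
    print(f"kappa = {cohens_kappa(librarian, researcher):.2f}")

A kappa near 1 indicates a rater closely tracks the gold standard; a value near 0 indicates agreement no better than chance, which is how an "expert" rater group could be distinguished from the rest.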
Examining Design and Inter-Rater Reliability of a Rubric Measuring Research Quality across Multiple Disciplines
The paper presents a rubric for evaluating the quality of research projects. The rubric was applied in a competition spanning a variety of disciplines during a two-day research symposium at an institution in the southwestern United States. It was collaboratively designed by a faculty committee at the institution and administered to 204 undergraduate, master's, and doctoral oral presentations by approximately 167 evaluators; 147 of the evaluators received no training or norming on the rubric prior to the competition. The inter-rater reliability analysis nonetheless reveals substantial agreement among the judges, contradicting literature asserting that formal norming must occur before substantial levels of inter-rater reliability can be achieved. By presenting the rubric along with the methodology used in its design and evaluation, the authors hope that others will find it a useful tool for evaluating documents and for teaching research methods.
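When more than two raters score each item, a statistic such as Fleiss' kappa is a common way to quantify agreement. The Python sketch below is purely illustrative, not the symposium's data or analysis: the panel size, score levels, and counts are invented. Conventionally (following Landis and Koch), kappa values between 0.61 and 0.80 are read as "substantial agreement."

    def fleiss_kappa(ratings):
        """Fleiss' kappa for a fixed number of raters per item.

        `ratings` is a list of per-item category counts, e.g. [3, 0, 1]
        means three raters chose category 0 and one chose category 2.
        """
        n_items = len(ratings)
        n_raters = sum(ratings[0])      # raters per item (assumed constant)
        n_cats = len(ratings[0])
        # Per-item agreement: proportion of rater pairs that agree.
        p_i = [
            (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
            for row in ratings
        ]
        p_bar = sum(p_i) / n_items
        # Chance agreement from the overall category proportions.
        totals = [sum(row[j] for row in ratings) for j in range(n_cats)]
        p_j = [t / (n_items * n_raters) for t in totals]
        p_e = sum(p * p for p in p_j)
        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical data: 5 presentations, 4 judges each,
    # counts per score level (low, medium, high).
    scores = [
        [4, 0, 0],
        [0, 4, 0],
        [0, 0, 4],
        [3, 1, 0],
        [0, 1, 3],
    ]
    print(f"Fleiss' kappa = {fleiss_kappa(scores):.2f}")  # ~0.70, substantial

With untrained evaluators, as in the study above, one would feed each panel's rubric scores into a statistic like this to test whether agreement exceeds what norming-focused literature would predict.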