12 research outputs found
A multidimensional measurement approach and analysis of children's motivation for reading, attributional style, and reading achievement
The present study investigates the usefulness of a multi-method approach to the measurement of
reading motivation and achievement. A sample of 127 elementary and middle-school children
aged 10 to 14 responded to measures of motivation, attributions, and achievement both
longitudinally and in a challenging reading context. Novel measures of motivation and
attributions were constructed, validated, and utilized to examine the relationship between
motivation, attributions, and achievement over a one-year period (Study I). The impact of
classroom contexts and instructional practices was also explored through a study of the influence
of topic interest and challenge on motivation, attributions, and persistence (Study II), as well as
through interviews with children regarding motivation and reading in the classroom (Study III).
Creation and validation of novel measures of motivation and attributions supported the use of a
self-report measure of motivation in situation-specific contexts, and confirmed a three-factor
structure of attributions for reading performance in both hypothetical and situation-specific
contexts. A one-year follow-up study of children's motivation and reading achievement
demonstrated declines in all components of motivation across ages 10 through 12, and
particularly strong decreases in motivation with the transition to middle school. Past perceived
competence for reading predicted current achievement after controlling for past achievement,
and showed the strongest relationships with reading-related skills in both elementary and middle
school. Motivation and attributions were strongly related, and children with higher motivation
displayed more adaptive attributions for reading success and failure. In the context of a
developmentally inappropriate challenging reading task, children's motivation for reading,
especially in terms of perceived competence, was threatened. However, interest in the story
buffered some of the negative impacts of challenge, sustaining children's motivation, adaptive
attributions, and reading persistence. Finally, children's responses during interviews outlined
several emotions, perceptions, and aspects of reading tasks and contexts that influence reading
motivation and achievement. Findings revealed that children with comparable motivation and
achievement profiles respond in a similar way to particular reading situations, such as excessive
challenge, but also that motivation is dynamic and individualistic and can change over time and
across contexts. Overall, the present study outlines the importance of motivation and adaptive
attributions for reading success, and the necessity of integrating various methodologies to study
the dynamic construct of achievement motivation
Educational Developer Professional Development Map (EDPDM): A Tool for Educational Developers to Articulate Their Mentoring Network
To optimize their success and effectiveness, educational developers benefit from identifying and cultivating a constellation of collaborators, resources, and mentors. This article describes the development of the Educational Developer Professional Development Map (EDPDM), a tool to help educational developers identify, articulate, and visualize their own mentoring networks. The authors discuss strategies educational developers in various career stages and institutional contexts can use to examine and evaluate their mentoring networks. Reflection questions are offered to promote further personal observation and future action to strengthen the mentoring network
How to Create an Accessibility Resource Index for Teaching and Learning
Step 1: Establish a Development Team
Step 2: Students as Partners (SaP) Approach
Step 3: Visioning and Needs Assessment
Step 4: Environmental Scan
Step 5: Review Resources and Identify Gaps
Step 6: Fill Gaps
Step 7: Categorize Resources
Step 8: Create ARI
Step 9: User Interface/Experience (UI/UX) Testing
Step 10: Communication and Education on ARI
Step 11: Ongoing Evaluation and Review of ARI
In this how-to guide, we share the process and guidelines for institutions to create a centralized Accessibility Resource Index (ARI). The goal of ARI is to compile and categorize internal and external resources to help instructors, students, and staff better understand accessibility and accommodations and find answers to their questions. There are 11 steps outlined in this guide, which will take you from creating a development team through the ongoing review and evaluation of ARI. The how-to guide is available as a PDF and as a Microsoft Word .docx
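As a rough illustration of Steps 7 and 8 (categorizing resources and creating the index), the core of an ARI can be sketched as a small categorized lookup structure. The class and field names below are invented for this sketch and are not part of the guide:

```python
# Minimal sketch of a centralized Accessibility Resource Index (ARI).
# All class and field names here are hypothetical illustrations, not
# a structure prescribed by the how-to guide.
from dataclasses import dataclass, field


@dataclass
class Resource:
    title: str
    category: str  # e.g. "captioning", "accommodations" (Step 7)
    source: str    # "internal" or "external" (Step 5)
    url: str = ""


@dataclass
class AccessibilityResourceIndex:
    resources: list = field(default_factory=list)

    def add(self, resource: Resource) -> None:
        self.resources.append(resource)

    def by_category(self, category: str) -> list:
        # Retrieve all resources filed under one category
        return [r for r in self.resources if r.category == category]


ari = AccessibilityResourceIndex()
ari.add(Resource("Captioning how-to", "captioning", "internal"))
ari.add(Resource("WCAG overview", "standards", "external"))
print(len(ari.by_category("captioning")))  # 1
```

In practice the index would live behind a searchable web page (Step 9's UI/UX testing), but the same category-keyed lookup is the underlying idea.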
How to Create an Accessibility Resource Index for Teaching and Learning (French-language version)
Step 1: Establish a Development Team
Step 2: Students as Partners (SaP) Approach
Step 3: Visioning and Needs Assessment
Step 4: Environmental Scan
Step 5: Review Resources and Identify Gaps
Step 6: Fill Gaps
Step 7: Categorize Resources
Step 8: Create ARI
Step 9: User Interface/Experience (UI/UX) Testing
Step 10: Communication and Education on ARI
Step 11: Ongoing Evaluation and Review of ARI
In this how-to guide, we share the process and guidelines for institutions to create an Accessibility Resource Index (ARI, pronounced ah-ree). The goal of an ARI is to help instructors, students, and staff better understand accessibility and accommodations and find answers to their questions
Development of a Tool to Assess Interrelated Experimental Design in Introductory Biology
Designing experiments and applying the process of science are core competencies for many introductory courses and course-based undergraduate research experiences (CUREs). However, experimental design is a complex process that challenges many introductory students. We describe the development of a tool to assess interrelated experimental design (TIED) in an introductory biology lab course. We describe the interrater reliability of the tool, its effectiveness in detecting variability and growth in experimental-design skills, and its adaptability for use in various contexts. The final tool contained five components, each with multiple criteria in the form of a checklist such that a high-quality response—in which students align the different components of their experimental design—satisfies all criteria. The tool showed excellent interrater reliability and captured the full range of introductory-student skill levels, with few students hitting the assessment ceiling or floor. The scoring tool detected growth in student skills from the beginning to the end of the semester, with significant differences between pre- and post-assessment scores for the Total Score and for the Data Collection and Observations component scores. This authentic assessment task and scoring tool provide meaningful feedback to instructors about the strengths, gaps, and growth in introductory students’ experimental-design skills and can be scored reliably by multiple instructors. The TIED can also be adapted to a number of experimental-design prompts and learning objectives, and therefore can be useful for a variety of introductory courses and CUREs
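The checklist-based scoring the abstract describes (multiple binary criteria per component, summed into component scores and a Total Score) can be sketched in a few lines. The component names and criteria below are invented for illustration and do not reproduce the published tool's items:

```python
# Hypothetical sketch of checklist-style scoring in the spirit of TIED:
# each component holds binary criteria; a component score counts the
# criteria satisfied, and the Total Score sums all components.
def score_response(checklist: dict) -> dict:
    """checklist maps component name -> {criterion description: bool}."""
    scores = {component: sum(met.values())  # True counts as 1
              for component, met in checklist.items()}
    scores["Total Score"] = sum(scores.values())
    return scores


# Example response scored against two invented components
response = {
    "Data Collection": {"variable identified": True,
                        "measurement stated": True},
    "Observations": {"outcome linked to hypothesis": False},
}
print(score_response(response))
# {'Data Collection': 2, 'Observations': 0, 'Total Score': 2}
```

Scoring by counting satisfied criteria, rather than assigning a single holistic rating, is one plausible reason a tool like this can achieve high interrater reliability: each judgment raters make is a yes/no decision on a concrete criterion.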