
    A Tripartite Framework for Leadership Evaluation

    The Tripartite Framework for Leadership Evaluation provides a comprehensive examination of the leadership evaluation landscape and makes key recommendations about how the field of leadership evaluation should proceed. The chief concern addressed by this working paper is the use of student outcome data as a measure of leadership effectiveness. A second concern, drawn from our work with urban leaders, is the absent or superficial treatment of race and equity in nearly all evaluation instruments and processes. Finally, we call for an overhaul of the conventional cycle of inquiry, which rests largely on needs analysis and leader deficits and makes incomplete use of evidence to support recurring short cycles within the larger yearly cycle of inquiry.

    The Road Ahead for State Assessments

    The adoption of the Common Core State Standards offers an opportunity to make significant improvements to the large-scale statewide student assessments that exist today, and the two US DOE-funded assessment consortia -- the Partnership for the Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) -- are making big strides forward. But to take full advantage of this opportunity the states must focus squarely on making assessments both fair and accurate. A new report commissioned by the Rennie Center for Education Research & Policy and Policy Analysis for California Education (PACE), The Road Ahead for State Assessments, offers a blueprint for strengthening assessment policy, pointing out how new technologies are opening up new possibilities for fairer, more accurate evaluations of what students know and are able to do. Not all of the promises can yet be delivered, but the report provides a clear set of assessment-policy recommendations. The Road Ahead for State Assessments includes three papers on assessment policy. The first, by Mark Reckase of Michigan State University, provides an overview of computer adaptive assessment. Computer adaptive assessment is an established technology that offers detailed information on where students are on a learning continuum rather than a summary judgment about whether or not they have reached an arbitrary standard of "proficiency" or "readiness."
Computer adaptivity will support the fair and accurate assessment of English learners (ELs) and lead to a serious engagement with the multiple dimensions of "readiness" for college and careers. The second and third papers give specific attention to two areas in which we know that current assessments are inadequate: assessments in science and assessments for English learners. In science, paper-and-pencil, multiple-choice tests provide only weak and superficial information about students' knowledge and skills -- most specifically about their abilities to think scientifically and actually do science. In their paper, Chris Dede and Jody Clarke-Midura of Harvard University illustrate the potential for richer, more authentic assessments of students' scientific understanding with a case study of a virtual performance assessment now under development at Harvard. With regard to English learners, administering tests in English to students who are learning the language, or to speakers of non-standard dialects, inevitably confounds students' content knowledge with their fluency in Standard English, to the detriment of many students. In his paper, Robert Linquanti of WestEd reviews key problems in the assessment of ELs and identifies the essential features of an assessment system equipped to provide fair and accurate measures of their academic performance. The report's contributors offer deeply informed recommendations for assessment policy, but three are especially urgent. Build a system that ensures continued development and increased reliance on computer adaptive testing. Computer adaptive assessment provides the essential foundation for a system that can produce fair and accurate measurement of English learners' knowledge and of all students' knowledge and skills in science and other subjects.
Developing computer adaptive assessments is a necessary intermediate step toward a system that makes assessment more authentic by tightly linking its tasks and instructional activities and ultimately embedding assessment in instruction. It is vital for both consortia to keep these goals in mind, even in light of current technological and resource constraints. Integrate the development of new assessments with assessments of English language proficiency (ELP). The next generation of ELP assessments should take into consideration an English learner's specific level of proficiency in English. They will need to be based on ELP standards that sufficiently specify the target academic language competencies that English learners need to progress in and gain mastery of the Common Core Standards. One of the report's authors, Robert Linquanti, states: "Acknowledging and overcoming the challenges involved in fairly and accurately assessing ELs is integral and not peripheral to the task of developing an assessment system that serves all students well. Treating the assessment of ELs as a separate problem -- or, worse yet, as one that can be left for later -- calls into question the basic legitimacy of assessment systems that drive high-stakes decisions about students, teachers, and schools." Include virtual performance assessments as part of comprehensive state assessment systems. Virtual performance assessments have considerable promise for measuring students' inquiry and problem-solving skills in science and in other subject areas, because authentic assessment can be closely tied to or even embedded in instruction. The simulation of authentic practices in settings similar to the real world opens the way to assessment of students' deeper learning and their mastery of 21st century skills across the curriculum.
We are just setting out on the road toward assessments that ensure fair and accurate measurement of performance for all students, and support for sustained improvements in teaching and learning. Developing assessments that realize these goals will take time, resources, and long-term policy commitment. PARCC and SBAC are taking the essential first steps down a long road, and new technologies have begun to illuminate what's possible. This report seeks to keep policymakers' attention focused on the road ahead, to ensure that the choices they make now move us further toward the goal of college and career success for all students. This publication was released at an event on May 16, 2011.
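The item-selection idea behind computer adaptive testing can be sketched in a few lines. This is a minimal illustration, not the consortia's actual algorithm: it assumes a one-parameter (Rasch) response model, a toy ability update in place of a real maximum-likelihood estimator, and an invented item bank.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct answer under the 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, used):
    # Choose the unused item whose difficulty is closest to the current
    # ability estimate; under the Rasch model this is the most informative item.
    candidates = [i for i in range(len(bank)) if i not in used]
    return min(candidates, key=lambda i: abs(bank[i] - theta))

def run_cat(bank, answers, theta=0.0, step=0.5):
    # Toy ability update: nudge the estimate up after a correct answer and
    # down after an incorrect one (a stand-in for a real ML/EAP update).
    used = set()
    for _ in range(len(answers)):
        i = next_item(theta, bank, used)
        used.add(i)
        theta += step if answers[i] else -step
    return theta
```

The point of the sketch is the loop structure: each response refines the ability estimate, which in turn selects the next item, so the test converges on where the student sits on the learning continuum rather than reporting a single pass/fail cut.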

    What Every Economist Should Know about the Evaluation of Teaching: A Review of the Literature

    Decades of research consistently show that student evaluations offer limited information on the effectiveness of teaching in economics. Such methods are at best valid for a relatively small set of factors that correlate with "good instruction." Even though some evidence exists that student evaluations are positively correlated with learning, it is clear that strong biases exist. Although these limitations are well established in the literature and widely believed among faculty, the implementation of alternative or complementary forms of assessment is notably lacking. The purpose of this paper is to review the current methods used to assess teaching. In the process, the paper proposes a research agenda for economists who aim to assess the reliability and validity of alternative processes such as peer review of teaching. The paper concludes with a number of recommendations for departments of economics that are serious about enhancing both formative and summative assessments of teaching.
    Keywords: teaching, peer review of teaching, student evaluation of teaching

    A Next Generation of Quality Assurance Models: On Phases, Levels and Circles in Policy Development

    Quality assessment has been part of the feedback mechanisms of European higher education systems since around 1980. Due to internal dynamics, 'erosion' of the effectiveness of first-generation quality assessment systems led to a loss of credibility (legitimacy) of these systems in the late 1990s. External dynamics also necessitate designing a next generation of quality assurance systems. They include notably a loss of transparency (hence, legitimacy) of the European higher education system through increased internationalisation (most notably through the Bologna process), which puts new, increased demands on institutional arrangements for quality assurance. In this paper, we first schematise the developments of quality assurance in higher education by introducing a phase model of the effects of internal and external dynamics. Next, we analyse this phase model from the perspective of argumentative policy inquiry. Finally, we contrast policy developments in higher education with one other example, viz. environmental policy in the Netherlands. The conclusions of this comparison, as well as the new challenges set for quality assurance in higher education by the Bologna process, are the subject matter of the final section of our paper.

    Web-based portfolio assessment: An open source solution for platform design

    Summative assessments of student writing performance have been instrumental in the evaluation of student ability and analysis of educational programs. One method used to perform summative assessments of writing performance in post-secondary education is through the evaluation of student portfolios. Using an evidence-centered design approach, NJIT faculty researchers have developed rubrics to measure the acquired skills of students. Classroom instructors from the department meet periodically to score the students' portfolios containing constructed responses to predetermined writing tasks. The paper-based assessments are then manually key-stroked into Microsoft Excel for storage, with the scores then analyzed in SPSS and SAS. This thesis presents the design and development of a web-based application created to enhance the portfolio assessment process and alleviate the key-stroking burden and introduction of error attendant to a paper-based portfolio scoring system. By enabling readers to rate portfolios in a communal environment in which scoring standards have been mutually established, the application ensures consistent assessment of all students in the writing program. Significantly, the application allows real-time monitoring of portfolio assessments to ensure consistency amongst readers and to immediately address portfolios requiring adjudication of discrepant scores. To ensure that the portfolio assessment platform met its full potential, both rapid prototyping and usability testing were included in the development of this application.
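The double-reading and adjudication workflow described above can be sketched as follows. This is a hypothetical illustration, not NJIT's actual rubric logic: the one-point discrepancy threshold and the averaging rule are assumptions for the example.

```python
def needs_adjudication(score_a, score_b, threshold=1):
    # Flag a portfolio when the two readers' scores differ by more than
    # the agreed threshold (here, more than one rubric point).
    return abs(score_a - score_b) > threshold

def final_score(score_a, score_b, adjudicated=None):
    # Average agreeing scores; discrepant scores defer to an adjudicator.
    if needs_adjudication(score_a, score_b):
        if adjudicated is None:
            raise ValueError("discrepant scores require adjudication")
        return adjudicated
    return (score_a + score_b) / 2
```

Flagging discrepancies at scoring time, rather than after the keystroking step, is the property that lets the web application surface portfolios for adjudication in real time.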

    The use of innovation and practice profiles in the evaluation of curriculum implementation

    Most generic curriculum reform efforts have to deal with a gap between the innovative aspirations of the initial designers and the daily reality of the intended audience of teachers. That tension is not alarming in itself. One might even say that without it no compelling reason for starting development work would exist. Unfortunately, many evaluation studies on the implementation and impact of curriculum development projects show that this discrepancy does not decrease over time. Apparently, not much improvement is made in detecting and reducing potential implementation problems. This article presents some conceptual and instrumental guidelines for dealing with these problems, focusing on the use of 'profiles' during evaluation of curriculum materials. The paper starts with an introduction on the functions of exemplary curriculum materials and their possible representations, on the long road from the original designers' ideas to effects on student learning. Next, we explain the concepts of innovation and practice profiles. We then provide guidelines for the development and use of such profiles, based on previous research experiences and illustrated with some specific examples. Finally, we reflect on the advantages and limitations of working with profiles.

    Development and Validation of an Agricultural Literacy Instrument Using the National Agricultural Literacy Outcomes

    This study was conducted to develop a standardized agricultural literacy assessment using the National Agricultural Literacy Outcomes (NALOs) as benchmarks. The need for such an assessment was born out of previous research, which found that despite numerous programs dedicated to improving agricultural literacy, many students and adults remain at low or very low levels of literacy. Low literacy levels lead to negative associations with the production and processing of food, clothing, and shelter, as well as misinformed public perceptions and policies. Agricultural literacy researchers recognized that the development of a standardized assessment for post-12th grade, or equivalent, could unify both research and program development efforts. The assessment was developed by two expert groups, teaching experts and agricultural content experts, working together in an iterative process to craft 45 questions using research methods and models. The 45 items were placed in an online survey to be tested for validity with a participant group. During the Fall 2018 semester, 515 Utah State University students between 18 and 23 years old participated in the online assessment. The participant data assisted in determining which questions were valid and reliable for determining agricultural literacy, as aligned to the NALO standards. Additional demographic information was also collected from participants. The demographic items asked students to self-report their level of exposure to agriculture and their self-perceived level of agricultural literacy. The study concluded that two separate 15-item Judd-Murray Agricultural Literacy Instruments (JMALI) were valid and reliable for determining agricultural proficiency levels based on the NALOs. Participant scores were reported as a single proficiency stage: exposure, factual literacy, or applicable proficiency.
The study also found that students who reported a “great deal” or higher level of exposure to agriculture showed a strong, positive correlation with a “good” or higher level of agricultural literacy. Participants who reported a “good” level of agricultural literacy likewise showed a positive correlation with performing at either the factual literacy (middle) or applicable proficiency (highest) level on the assessment. The results suggest the JMALI instruments have the potential to assist in improving current agricultural education endeavors by providing a critical tool for determining the agricultural literacy proficiency stages of adult populations.

    An evaluation of outcome as the main requirement for improving the quality of teacher education institution

    The research aims to reveal (1) the indicators of the outcome, (2) the outcome of the teacher education institution, and (3) aspects related to the outcome of the teacher education institution. This study employed a quantitative approach supported by a qualitative approach. The population was 1,558 graduates of the Faculty of Engineering, Yogyakarta State University, from 2001 to 2010. The sampling technique was purposive sampling, taking graduates who pursued the profession of teacher at a vocational high school. An adequate sample size was determined using the Harry King nomogram with an error rate of 5%, which gave a sample of 296 people, or 19% of the population. The results of this research are as follows. (1) The indicators used to reveal the outcome of education in LPTK include: work appraisal, work motivation, career development, competence in the teaching-learning process, school administration, contribution to school development, creativity and innovation, subject-matter mastery, teaching media skill, teaching strategy skill, and evaluation and assessment. (2) LPTK graduates are able to teach productive subject matter very well. Their competence in subject-matter mastery, teaching media, teaching strategy, and evaluation and assessment is categorized as very good. Furthermore, the graduates carry out their duties in vocational high schools very well. The ability to handle school administration and the contribution to school development are mostly categorized as very good, while creativity and innovation are mostly categorized as good. The work motivation of the graduates is categorized as very good, while career development and work appraisal are mostly categorized as good. The advantages possessed by LPTK graduates are subject-matter mastery and work motivation.
(3) The evaluation of aspects related to the outcomes shows that: (a) the LPTK inputs on the curriculum and educational staff aspects are mostly categorized as very good, but student quality and facilities should be improved; (b) the LPTK process, including the teaching-learning process in the classroom, industrial internship, and educational practicum, is categorized as very good; (c) the LPTK output shows that the average GPA is in the range of 3.01 to 3.25 and the length of study is in the range of 4.51 to 5 years.
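As a quick arithmetic check, the sampling figures reported in this abstract (a population of 1,558 graduates and a sample of 296) are consistent with the stated 19% sampling share:

```python
# Figures taken from the abstract; the rounding below is the only assumption.
population = 1558
sample = 296

share = sample / population
print(f"Sample share: {share:.1%}")  # close to the reported 19%
```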