
    Standardized Testing: What is it Good For? A Case Study in Connecticut

    The case study was developed to shed more light on the debate over standardized testing. The goal of the study was to find evidence bearing on whether standardized testing is worth doing in public secondary schools. To investigate this question, the state standardized math test scores of three Connecticut public high schools were analyzed. The average math scores over thirteen years were observed, and statistical analysis was performed to determine whether any significant differences existed between the three schools. Tests were also performed on scores from before and after the change in the state's standardized test. The graduation rates of the schools were observed and compared to the trend in mean CAPT math scores over time. This analysis was then supplemented with responses from a survey distributed to Connecticut high school math teachers, to take the educators' views of standardized testing into consideration. The quantitative and qualitative data yielded conflicting results: standardized test scores appeared to improve over time, while teachers reported that the testing interfered with their teaching and with student learning. Following the analyses, the future implications of using standardized testing, and how it may affect the transition to the Common Core Standards, are discussed.
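    A minimal sketch of the kind of between-school comparison described above is given below: a one-way ANOVA on yearly mean math scores for three schools, plus a simple before/after test around the change in standardized test. All score values are invented for illustration; the study's actual data and choice of statistical tests are not reproduced here.

    ```python
    # Hypothetical sketch of the between-school comparison described above.
    # All score values are invented; the study's actual data and tests are
    # not reproduced here.
    from scipy import stats

    # Yearly mean math scores (hypothetical) for three schools.
    school_a = [238.1, 240.3, 241.0, 243.5, 245.2, 246.8, 248.0]
    school_b = [231.4, 232.0, 234.7, 236.1, 237.9, 239.3, 240.5]
    school_c = [244.9, 245.6, 247.2, 248.8, 250.1, 251.4, 252.6]

    # One-way ANOVA: do the three schools' mean scores differ significantly?
    f_stat, p_value = stats.f_oneway(school_a, school_b, school_c)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    # Before/after comparison around the change in standardized test,
    # here a Welch t-test on one school's yearly means.
    before, after = school_a[:4], school_a[4:]
    t_stat, p_change = stats.ttest_ind(before, after, equal_var=False)
    print(f"Before/after: t = {t_stat:.2f}, p = {p_change:.4f}")
    ```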

    Aspects of accountability and assessment in the Netherlands

    This article describes aspects of test-based accountability in the Netherlands. It provides a description of the design of the educational system in the Netherlands, gives a short introduction to the role of the Dutch Inspectorate of Education in the accountability of schools, and describes the different assessments that are used as sources of information in the accountability system. For each assessment, its primary function in education and its role in the accountability system are discussed. Finally, the factors that can potentially influence the validity of the accountability indicators are identified, along with the strong and weak points of the current system, and some directions for potential development of the system are presented.

    Development and Evaluation of the Nebraska Assessment of Computing Knowledge

    One way to increase the quality of computing education research is to increase the quality of the measurement tools that are available to researchers, especially measures of students' knowledge and skills. This paper represents a step toward increasing the number of available, thoroughly evaluated tests that can be used in computing education research by evaluating the psychometric properties of a multiple-choice test designed to differentiate undergraduate students in terms of their mastery of foundational computing concepts. Classical test theory and item response theory analyses are reported and indicate that the test is a reliable, psychometrically sound instrument suitable for research with undergraduate students. Limitations and the importance of using standardized measures of learning in education research are discussed.
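    To give a flavour of the classical test theory analyses mentioned above, the sketch below computes item difficulty and an internal-consistency estimate (Cronbach's alpha) from a small 0/1 response matrix. The response data are invented, and the paper's actual analyses, including its item response theory models, are not reproduced here.

    ```python
    # Illustrative classical test theory statistics on a small, invented
    # 0/1 response matrix (rows = students, columns = items). The actual
    # instrument and analyses in the paper are not reproduced here.
    import numpy as np

    responses = np.array([
        [1, 1, 0, 1, 0],
        [1, 0, 0, 1, 1],
        [0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 1, 0, 1],
    ])

    # Item difficulty: proportion of students answering each item correctly.
    difficulty = responses.mean(axis=0)

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print("item difficulties:", np.round(difficulty, 2))
    print(f"Cronbach's alpha: {alpha:.2f}")
    ```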

    Closing the gap between software engineering education and industrial needs

    According to various reports, recent software engineering graduates often face difficulties when beginning their professional careers, due to a misalignment between the skills learnt in their university education and those needed in industry. To address this problem, many studies have been conducted on aligning software engineering education with industry needs. To synthesize that body of knowledge, we present in this paper a systematic literature review (SLR) that summarizes the findings of 33 studies in this area. Through a meta-analysis of those studies, drawing on data from 12 countries and over 4,000 data points, this study will enable educators and hiring managers to adapt their education and hiring efforts to best prepare the software engineering workforce.

    Issues in Evaluating Health Department Web-Based Data Query Systems: Working Papers

    Compiles papers on conceptual and methodological topics, such as taxonomy, logic models, indicators, and design, to consider in evaluating state health department systems that provide aggregate data online. Includes surveys and examples of evaluations.

    Beyond the happy sheets! Evaluating learning in information skills teaching

    This paper reviews three years of data measuring students' immediate reactions to a computer-assisted learning package in information skills, and reports on work in progress to establish a more comprehensive programme of evaluation which will assess the longer-term impact on learning of both the courseware itself and the way the courseware is delivered to students. The GAELS courseware was developed in the late 1990s as part of a collaborative project between the Universities of Glasgow and Strathclyde, with funding from the Scottish Higher Education Funding Council. The courseware was designed to teach higher-level information skills and was initially developed for use with postgraduate engineering students; it has subsequently been adapted for use with students in other subject areas, including the biological and physical sciences, and has been embedded for several years now in workshop sessions undertaken with postgraduate and undergraduate students across the Faculties of Science and Engineering at the University of Strathclyde. The courseware is introduced at the start of the academic session and made available on the Web so that students can use it as needed during their course and project work. During the first year, the courseware was used in isolation from other teaching methods (although a librarian was present to support students), whilst in the second and third years it was integrated into more traditional workshop-style teaching sessions (led by a librarian). Following work described in Joint (2003), library staff now wish to carry out this longer-term assessment. However, the existing evaluation data do not adequately support this type of assessment. Teaching sessions are routinely evaluated by means of simple feedback forms, with four questions answered using a five-point Likert scale, collected at the conclusion of each session. According to Fitzpatrick (1998), such feedback forms measure students' reactions and represent only the first level of evaluation. Learning, which can be defined as the extent to which a student changes attitudes, improves knowledge and/or increases skill as a result of exposure to the training, is the second level and is not being measured with these forms. A more comprehensive programme of evaluation, including logging usage of the courseware outside teaching sessions and following up with students several months after their introduction to the courseware, is now being established to support a more meaningful assessment of the impact of the courseware on student learning.
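    The routine feedback forms described above, four questions on a five-point Likert scale, yield only reaction-level summaries. A minimal sketch of that first level of analysis is given below; the responses are invented and the actual GAELS feedback data are not reproduced here.

    ```python
    # Minimal sketch of first-level ("reaction") analysis of feedback forms:
    # per-question summaries of ratings on a five-point Likert scale. The
    # responses are invented; the actual GAELS data are not reproduced here.
    import statistics

    # Each row is one student's answers to the four questions (1-5).
    forms = [
        [4, 5, 3, 4],
        [5, 4, 4, 5],
        [3, 4, 3, 3],
        [4, 4, 5, 4],
    ]

    for q in range(4):
        ratings = [form[q] for form in forms]
        print(f"Q{q + 1}: mean = {statistics.mean(ratings):.2f}, "
              f"median = {statistics.median(ratings)}")
    ```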

    Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences

    This paper presents work on a collaborative project funded by the National Science Foundation that incorporates machine learning as a unifying theme to teach the fundamental concepts typically covered in an introductory Artificial Intelligence course. The project involves the development of an adaptable framework for the presentation of core AI topics. This is accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering.
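    As an illustration of the kind of hands-on machine learning laboratory exercise the project describes, the sketch below trains a simple classifier with scikit-learn (an assumed library choice); it is not one of the project's actual lab assignments.

    ```python
    # Minimal sketch of a hands-on supervised-learning lab exercise of the
    # kind the project describes; not one of the project's actual
    # assignments. Assumes scikit-learn is available.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Train a decision tree, a classic topic bridging AI and ML courses.
    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
    ```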