3,390 research outputs found

    Research to Practice: Leveraging Concept Inventories in Statics Instruction

    There are many common challenges with classroom assessment, especially in large-enrollment first-year courses, including managing high-quality assessment within time constraints and promoting effective study strategies. This paper presents two studies: 1) using the CATS instrument to validate multiple-choice exams for classroom assessment, and 2) using the CATS instrument as a measure of metacognitive growth over time. The first study focused on validation of instructor-generated multiple-choice exams, which are easier to administer, grade, and return for timely feedback, especially in large-enrollment classes. The limitation of multiple-choice exams, however, is that it is very difficult to construct questions that measure higher-order content knowledge beyond the recall of facts. A correlational study compared multiple-choice exam scores with the relevant portions of the CATS assessment (taken within a week of one another). The results indicated a strong relationship between student performance on the CATS assessment and the instructor-generated exams, suggesting that both assessments measured similar content areas. The second study focused on metacognition, more specifically on students' ability to self-assess the extent of their own knowledge. In this study, students ranked their confidence in each CATS item on a Likert-type scale from 1 (not at all confident) to 4 (very confident). The 4-point scale offered no neutral option; students were forced to indicate some degree of confidence or lack of confidence. A regression analysis compared the relationship between performance and confidence across the pre-, post-, and delayed-post assessments. The results suggested that students' self-knowledge of their performance improved over time.
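    The correlational comparison described in the first study can be sketched in a few lines. This is an illustrative sketch only, not the authors' analysis; the score values are invented for the example.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented percent-correct scores for six students on the two assessments.
exam_scores = [62, 75, 80, 55, 90, 70]
cats_scores = [58, 72, 85, 50, 88, 66]

r = pearson_r(exam_scores, cats_scores)
print(f"r = {r:.2f}")  # a value near 1 indicates a strong positive relationship
```

    A strong positive correlation between the two score sets is what licenses the inference that the exams and the CATS items tap similar content.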

    Physical-technical prior competencies of engineering students


    Integrated Testlets and the Immediate Feedback Assessment Technique

    The increased use of multiple-choice (MC) questions in introductory-level physics final exams is largely hindered by reservations about their ability to test the broad cognitive domain that is routinely accessed with typical constructed-response (CR) questions. Thus, there is a need to explore ways in which MC questions can be used pedagogically more like CR questions while maintaining their attendant procedural advantages. We describe how an answer-until-correct MC response format allows for the construction of multiple-choice examinations designed to operate as a hybrid between standard MC and CR testing. With this tool, the immediate feedback assessment technique (IF-AT), students gain complete knowledge of the correct answer to each question during the examination and can use that information to solve subsequent test items. This feature allows for the creation of a new type of context-dependent item set: the "integrated testlet". In an integrated testlet, certain items are purposefully interdependent and are thus presented in a particular order. Such integrated testlets serve as a proxy for typical CR questions, but with a straightforward and uniform marking scheme that also allows partial credit for proximal knowledge. We present a case study of an IF-AT-scored midterm and final examination for an introductory physics course, and discuss specific testlets with varying degrees of integration. In total, the items are found to allow for excellent discrimination, with a mean item-total correlation for the combined 45 items of the two examinations of $\bar{r}' = 0.41 \pm 0.13$ (mean $\pm$ standard deviation) and a final-examination test reliability of $\alpha = 0.82$ ($n = 25$ items). Furthermore, partial credit is shown to be allocated in a discriminating and valid manner in these examinations.
    Comment: 13 pages, 7 figures. Accepted to the American Journal of Physics (August 2013).
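    The two psychometric quantities reported in this abstract, item-total correlation and Cronbach's alpha, can be computed directly from a 0/1 response matrix. The sketch below is not the authors' code, and the response data are invented for illustration.

```python
from statistics import mean, pvariance

def pearson_r(xs, ys):
    """Pearson correlation between two numeric lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented response matrix: rows = students, columns = items, 1 = correct.
responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
]

def item_total_correlation(matrix, item):
    """Corrected item-total correlation: one item vs. the total of the rest."""
    item_scores = [row[item] for row in matrix]
    rest_totals = [sum(row) - row[item] for row in matrix]
    return pearson_r(item_scores, rest_totals)

def cronbach_alpha(matrix):
    """Internal-consistency reliability of the whole test."""
    k = len(matrix[0])
    item_vars = [pvariance([row[i] for row in matrix]) for i in range(k)]
    total_var = pvariance([sum(row) for row in matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.2f}")
```

    Higher item-total correlations indicate more discriminating items, which is the sense in which the abstract reports "excellent discrimination".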

    Development and Uses of Upper-division Conceptual Assessment

    The use of validated conceptual assessments alongside more standard course exams has become standard practice in the introductory courses of many physics departments. These assessments provide a standardized measure of certain learning goals, allowing comparisons of student learning across instructors, semesters, and institutions. Researchers at the University of Colorado Boulder have developed several similar assessments designed to target the more advanced physics content of upper-division classical mechanics, electrostatics, quantum mechanics, and electrodynamics. Here, we synthesize the existing research on our upper-division assessments and discuss some of the barriers and challenges associated with developing, validating, and implementing these assessments, as well as some of the strategies we have used to overcome those barriers.
    Comment: 12 pages, 5 figures, submitted to the Phys. Rev. ST - PER Focused collection on Upper-division PE

    Difficulty as a concept inventory design consideration: An exploratory study of the concept assessment tool for statics (CATS)

    The ability of engineering students to apply mathematical, scientific, and engineering knowledge to real-life problems depends greatly on developing deep conceptual knowledge that structures and relates the meaning of underlying principles. Concept inventories have emerged as a class of tests typically developed for use in higher-education science and engineering courses. Concept Inventories (CIs) are multiple-choice tests designed to assess students' conceptual understanding within a specific content domain. For example, the CI explored within this study, the Concept Assessment Tool for Statics (CATS), is intended to measure students' understanding of the concepts underlying the domain of engineering statics. High-quality, reliable CIs may be used for formative and summative assessment, and help address the need for measures of conceptual understanding. Evidence of test validity is often found through calculation of psychometric parameters. Prior research has applied multiple theoretical measurement models, including classical test theory and item response theory, to find psychometric evidence characterizing student performance on CATS. Common to these approaches is the calculation of item difficulty, a parameter used to distinguish which items are more difficult than others. The purpose of this dissertation study is to provide context and description of what makes some CI items more difficult than others within the content area of statics, based on students' reasoning in response to CATS items. Specifically, the research question guiding this study is: how does student reasoning in response to CATS items explain variance in item difficulty across test items? Think-aloud interviews were conducted in combination with a content analysis of selected CATS items. Thematic analysis was performed on interview transcripts and on CATS development and evaluation documentation.
Two themes emerged as possible explanations for why some CATS items are more difficult than others: (1) a Direction of Problem Solving theme describes the direction of reasoning required or used to respond to CATS items, and may also provide some description of students' reasoning in response to determinate and indeterminate multiple-choice problems; and (2) a Distractor Attractiveness theme describes problematic reasoning that is targeted and observed as argumentation for incorrect CATS responses. The findings from this study hold implications for the interpretation of CATS performance and for the consideration of difficulty in concept inventory design. Specifically, the findings suggest that item difficulty may be associated with complexity, relating to theories of cognitive load. Complexity, as it contributes to item difficulty, is not solely dependent on the content of the concept inventory item; it may also be due to the item design and the context of the test question.
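    In classical test theory, the item difficulty parameter the abstract refers to is simply the proportion of students answering an item correctly (lower p means a harder item). The sketch below illustrates that calculation on an invented response matrix; it is not taken from the dissertation.

```python
# Invented response matrix: rows = students, columns = CATS-style items,
# 1 = correct answer.
responses = [
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 1],
]

def item_difficulty(matrix):
    """Classical-test-theory p-value per item: fraction answering correctly."""
    n = len(matrix)
    k = len(matrix[0])
    return [sum(row[i] for row in matrix) / n for i in range(k)]

p_values = item_difficulty(responses)
# Rank items from hardest (lowest p) to easiest (highest p).
hardest_first = sorted(range(len(p_values)), key=lambda i: p_values[i])
print(p_values, hardest_first)
```

    The study's question is precisely what this number cannot tell us on its own: why, in terms of student reasoning, a low-p item is hard.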

    What is the Role of Legal Systems in Financial Intermediation? Theory and Evidence

    We develop a theory and an empirical test of how the legal system affects the relationship between venture capitalists and entrepreneurs. The theory uses a double moral hazard framework to show how optimal contracts and investor actions depend on the quality of the legal system. The empirical evidence is based on a sample of European venture capital deals. The main results are that, with better legal protection, investors give more non-contractible support and demand more downside protection. These predictions are supported by the empirical analysis. Using a new empirical approach of comparing two sets of fixed-effect regressions, we also find that the investor's legal system is more important than that of the company in determining investor behavior.
    Keywords: Financial Intermediation; Law and Finance; Corporate Governance; Venture Capital

    Changes in students’ problem-solving strategies in a course that includes context-rich, multifaceted problems

    Most students struggle when faced with complex and open-ended tasks because the strategies taught in schools and universities simply require finding and applying the correct formula or strategy to answer well-structured, algorithmic problems. For students to develop their ability to solve ill-structured problems, they must first believe that standardized procedural approaches will not always be sufficient for solving engineering and scientific challenges. In this paper we document the range of beliefs university students hold about problem solving. Students enrolled in a physics course submitted a written reflection, both at the start and at the end of the course, on how they solve problems. We coded approximately 500 of these reflections for the presence of different problem-solving approaches. At the start of the semester, over 50% of the students mentioned in written reflections that they use Rolodex equation matching, i.e., they solve problems by searching for equations that contain the same variables as the knowns and unknowns. We then describe the extent to which students' beliefs about physics problem solving change by the end of a semester-long course that emphasized problem solving via context-rich, multifaceted problems. The frequency of strategies such as the Rolodex method decreases only slightly by the end of the semester. However, there is an increase in students describing more expansive strategies within their reflections; in particular, there is a large increase in descriptions of using diagrams and of thinking about concepts first.
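    The pre/post comparison of coded reflections amounts to comparing the frequency of each strategy code at the two time points. A toy sketch of that tally follows; the code labels and counts are invented, not the study's data.

```python
from collections import Counter

# Invented strategy codes assigned to five start- and five end-of-semester
# reflections (a real corpus would have ~500 per time point).
start_codes = ["rolodex", "rolodex", "diagram", "rolodex", "concept-first"]
end_codes = ["rolodex", "diagram", "diagram", "concept-first", "concept-first"]

def frequencies(codes):
    """Fraction of reflections tagged with each strategy code."""
    counts = Counter(codes)
    total = len(codes)
    return {code: n / total for code, n in counts.items()}

start_f = frequencies(start_codes)
end_f = frequencies(end_codes)
for code in sorted(set(start_f) | set(end_f)):
    print(f"{code}: {start_f.get(code, 0.0):.2f} -> {end_f.get(code, 0.0):.2f}")
```

    A shift like the one the paper reports would appear here as a drop in the "rolodex" fraction alongside rises in "diagram" and "concept-first".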