
    How physics instruction impacts students' beliefs about learning physics: A meta-analysis of 24 studies

    In this meta-analysis, we synthesize the results of 24 studies using the Colorado Learning Attitudes about Science Survey (CLASS) and the Maryland Physics Expectations Survey (MPEX) to answer several questions: (1) How does physics instruction impact students' beliefs? (2) When do physics majors develop expert-like beliefs? and (3) How do students' beliefs impact their learning of physics? We report that in typical physics classes, students' beliefs deteriorate or at best stay the same. A few types of interventions, including an explicit focus on model-building and/or on developing expert-like beliefs, lead to significant improvements in beliefs. Further, small courses and those for elementary education and non-science majors also result in improved beliefs. However, because the available data oversample certain types of classes, it is unclear whether these improvements are actually due to the interventions or to the small class sizes and student populations typical of the kinds of classes in which these interventions are most often used. Physics majors tend to enter their undergraduate education with more expert-like beliefs than non-majors, and these beliefs remain relatively stable throughout their undergraduate careers. Thus, typical physics courses appear to be selecting students who already have strong beliefs rather than supporting students in developing strong beliefs. There is a small correlation between students' incoming beliefs about physics and their gains on conceptual mechanics surveys, suggesting that students with more expert-like incoming beliefs may learn more in their physics courses; this finding should be further explored and replicated. Several questions remain unanswered, and we advocate specific types of future studies to address them.
    Comment: 30 pages. Accepted to Phys Rev ST-PER
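    To make the reported relationship between incoming beliefs and conceptual gains concrete, here is a minimal sketch; it is not taken from the meta-analysis, and the data values and variable names are hypothetical. It correlates assumed incoming CLASS "expert-like" fractions with Hake normalized gains on a mechanics survey.

```python
# Minimal sketch (hypothetical data, not from the paper): correlating
# incoming CLASS expert-like scores with normalized conceptual gains.
import numpy as np

# Hypothetical per-student data: fraction of expert-like CLASS responses
# at the start of the course, and pre/post scores (%) on a mechanics survey.
incoming_beliefs = np.array([0.55, 0.70, 0.40, 0.80, 0.60, 0.65])
pre_scores = np.array([35.0, 50.0, 30.0, 60.0, 45.0, 40.0])
post_scores = np.array([55.0, 75.0, 40.0, 85.0, 65.0, 60.0])

# Hake normalized gain: fraction of the possible improvement actually achieved.
normalized_gain = (post_scores - pre_scores) / (100.0 - pre_scores)

# Pearson correlation between incoming beliefs and normalized gain;
# a small positive r would mirror the weak relationship reported above.
r = np.corrcoef(incoming_beliefs, normalized_gain)[0, 1]
print(f"mean normalized gain: {normalized_gain.mean():.2f}")
print(f"correlation(beliefs, gain): r = {r:.2f}")
```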

    A Runtime Verification and Validation Framework for Self-Adaptive Software

    The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain contexts. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and its primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute this runtime verification or validation. This research confirms that the introduction of runtime verification and validation for self-adaptive systems requires the expansion of existing conceptual models with quality-of-service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. In order to meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased the number of available application servers. The runtime verification and validation system operated as a second-level feedback loop to monitor quality-of-service goals based on internal context, and corrected self-adaptive behavior when goals were violated. Two competing quality-of-service goals were introduced to maintain customer satisfaction while minimizing cost. The research demonstrated that the addition of a second-level runtime verification and validation feedback loop did quantitatively improve self-adaptive system performance, even with simple, static monitoring rules.
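    As an illustration of the second-level feedback loop described above, the following sketch is a deliberately simplified assumption: the goal names, thresholds, and scaling rules are invented for illustration and are not the study's implementation. It shows a primary loop that scales servers with demand and a verification loop that overrides that decision when a quality-of-service goal is violated.

```python
# Illustrative sketch only (goal names, thresholds, and rules are assumptions):
# a second-level feedback loop that checks two competing QoS goals and
# corrects the decision made by the primary adaptation loop.
from dataclasses import dataclass
from typing import Callable

@dataclass
class QosGoal:
    name: str
    satisfied: Callable[[], bool]  # returns True when the goal holds

def primary_loop(demand: int) -> int:
    """Primary feedback loop: naive rule scaling servers with external demand."""
    return max(1, demand // 100)  # one server per 100 requests/s (assumed rule)

def verification_loop(servers: int, response_ms: float, cost_per_hr: float) -> int:
    """Second-level loop: monitor QoS goals and correct the primary decision."""
    goals = [
        QosGoal("customer_satisfaction", lambda: response_ms <= 200.0),
        QosGoal("minimize_cost", lambda: cost_per_hr <= 50.0),
    ]
    violated = [g.name for g in goals if not g.satisfied()]
    if "customer_satisfaction" in violated:
        return servers + 1               # responsiveness violated: scale out
    if "minimize_cost" in violated:
        return max(1, servers - 1)       # over-provisioned: scale in
    return servers

# Hypothetical run: the primary loop picks a server count, then the
# verification loop corrects it because response time exceeds the goal.
servers = primary_loop(demand=450)
servers = verification_loop(servers, response_ms=240.0, cost_per_hr=40.0)
print(f"servers after correction: {servers}")
```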

    Research-based assessment affordances and constraints: Perceptions of physics faculty

    To help faculty use research-based materials in a more significant way, we learn about their perceived needs and desires and use this information to suggest ways for the Physics Education Research community to address those needs. When research-based resources are well aligned with the perceived needs of faculty, faculty members will more readily take them up. We used phenomenographic interviews of ordinary physics faculty and department chairs to identify four families of issues that faculty have around research-based assessments (RBAs). First, many faculty are interested in using RBAs but have practical needs around how to do so: how to find them, which ones exist, and how to administer them. They want help addressing these needs. Second, at the same time, many faculty think that RBAs are limited, do not measure many of the things they care about, or are not applicable in their classes. They want assessments that measure skills, perceptions, and specific concepts. Third, many faculty want to turn to communities of other faculty and experts to help them interpret their assessment results and suggest other ways to do assessment. They want to norm their assessment results by comparing with others and by interacting with faculty from other schools to learn how they do assessment. Fourth, many faculty consider their courses in the broader contexts of accountability and their departments. They want help with assessment in these broader contexts. We also discuss how faculty members' roles in their departments and their type of institution influence their perceived wants and needs around assessment.
    Comment: Submitted to Physical Review Special Topics - Physics Education Research

    Resource Letter RBAI-1: Research-Based Assessment Instruments in Physics and Astronomy

    Citation: Madsen, A., McKagan, S. B., & Sayre, E. C. (2017). Resource Letter RBAI-1: Research-Based Assessment Instruments in Physics and Astronomy. American Journal of Physics, 85(4), 245-264. doi:10.1119/1.4977416
    This resource letter provides a guide to Research-Based Assessment Instruments (RBAIs) of physics and astronomy content. These are standardized assessments that were rigorously developed and revised using student ideas and interviews, expert input, and statistical analyses. RBAIs have had a major impact on physics and astronomy education reform by providing a universal and convincing measure of student understanding that instructors can use to assess and improve the effectiveness of their teaching. In this resource letter, we present an overview of all content RBAIs in physics and astronomy by topic, research validation, instructional level, format, and themes, to help faculty find the best assessment for their course. More details about each RBAI in physics and astronomy are available at PhysPort: physport.org/assessments. (C) 2017 American Association of Physics Teachers.