
    The Case for Dynamic Models of Learners' Ontologies in Physics

    In a series of well-known papers, Chi and Slotta (Chi, 1992; Chi & Slotta, 1993; Chi, Slotta & de Leeuw, 1994; Slotta, Chi & Joram, 1995; Chi, 2005; Slotta & Chi, 2006) have contended that a reason for students' difficulties in learning physics is that they think about concepts as things rather than as processes, and that there is a significant barrier between these two ontological categories. We contest this view, arguing that expert and novice reasoning often and productively traverses ontological categories. We cite examples from everyday, classroom, and professional contexts to illustrate this. We agree with Chi and Slotta that instruction should attend to learners' ontologies; but we find these ontologies are better understood as dynamic and context-dependent, rather than as static constraints. Promoting one ontological description in physics instruction, as suggested by Slotta and Chi, could undermine novices' access to productive cognitive resources they bring to their studies and inhibit their transition to the dynamic ontological flexibility required of experts.
    Comment: The Journal of the Learning Sciences (in press).

    Optimal design of single-tuned passive filters using response surface methodology

    This paper presents an approach based on Response Surface Methodology (RSM) for finding the optimal parameters of single-tuned passive filters for harmonic mitigation. The main advantages of RSM are its easy implementation and computational efficiency. Using RSM, the single-tuned harmonic filter is designed to minimize voltage total harmonic distortion (THDV) and current total harmonic distortion (THDI), with power factor (PF) incorporated in the design procedure as a constraint. To show the validity of the proposed approach, RSM and the classical direct search (grid search) method are evaluated on a typical industrial power system.
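
    As a rough sketch of the kind of workflow described above (not the paper's actual implementation), the example below samples a two-parameter filter design, fits a second-order response surface to the sampled distortion values, and minimizes the fitted surface. The function evaluate_thd, the parameter names (capacitor size Q_c, tuning order n_t), and their ranges are hypothetical placeholders; a real study would obtain the responses from a harmonic power-flow simulation and would also enforce the power-factor constraint.

        import numpy as np
        from itertools import product

        def evaluate_thd(q_c, n_t):
            # Hypothetical stand-in for a harmonic power-flow simulation
            # returning voltage THD (%) for a candidate filter design.
            return 4.0 + 0.0002 * (q_c - 300.0) ** 2 + 10.0 * (n_t - 4.8) ** 2

        # 1. Sample the design space with a small factorial design.
        q_levels = np.linspace(200.0, 400.0, 5)   # candidate capacitor sizes (kvar)
        n_levels = np.linspace(4.5, 5.0, 5)       # candidate tuning orders
        samples = list(product(q_levels, n_levels))
        y = np.array([evaluate_thd(q, n) for q, n in samples])

        # 2. Fit a quadratic response surface by least squares:
        #    y ~ b0 + b1*q + b2*n + b3*q*n + b4*q^2 + b5*n^2
        X = np.array([[1.0, q, n, q * n, q ** 2, n ** 2] for q, n in samples])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        # 3. Minimize the fitted surface on a fine grid to estimate the optimum.
        qq, nn = np.meshgrid(np.linspace(200.0, 400.0, 201), np.linspace(4.5, 5.0, 201))
        surface = (beta[0] + beta[1] * qq + beta[2] * nn + beta[3] * qq * nn
                   + beta[4] * qq ** 2 + beta[5] * nn ** 2)
        i = np.unravel_index(np.argmin(surface), surface.shape)
        print(f"Estimated optimum: Q_c = {qq[i]:.0f} kvar, n_t = {nn[i]:.2f}")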

    Investigating the role of model-based reasoning while troubleshooting an electric circuit

    We explore the overlap of two nationally recognized learning outcomes for physics lab courses, namely, the ability to model experimental systems and the ability to troubleshoot a malfunctioning apparatus. Modeling and troubleshooting are both nonlinear, recursive processes that involve using models to inform revisions to an apparatus. To probe the overlap of modeling and troubleshooting, we collected audiovisual data from think-aloud activities in which eight pairs of students from two institutions attempted to diagnose and repair a malfunctioning electrical circuit. We characterize the cognitive tasks and model-based reasoning that students employed during this activity. In doing so, we demonstrate that troubleshooting engages students in the core scientific practice of modeling.
    Comment: 20 pages, 6 figures, 4 tables; submitted to Physical Review PER.
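
    As a minimal illustration (an invented sketch, not drawn from the study) of the model-based comparison at the heart of troubleshooting, the snippet below predicts the output of an ideal voltage divider, compares it with a measurement, and flags a discrepancy large enough to warrant revising the model or inspecting the apparatus. All names and values are placeholders for the example.

        def predicted_divider_voltage(v_in, r1, r2):
            # Ideal voltage-divider model: V_out = V_in * R2 / (R1 + R2)
            return v_in * r2 / (r1 + r2)

        def check_against_model(v_measured, v_in, r1, r2, tolerance=0.05):
            # A large model-measurement discrepancy signals that the model,
            # the apparatus, or both need revision -- the recursive loop the
            # abstract describes.
            v_model = predicted_divider_voltage(v_in, r1, r2)
            discrepancy = abs(v_measured - v_model) / v_model
            if discrepancy > tolerance:
                return f"{discrepancy:.0%} discrepancy: revise model or inspect apparatus"
            return "measurement consistent with model"

        # A nominal 10 V source with R1 = R2 = 1 kOhm should give 5 V at the tap;
        # measuring 2.1 V flags a fault (wrong resistor, poor connection, ...).
        print(check_against_model(v_measured=2.1, v_in=10.0, r1=1000.0, r2=1000.0))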

    Analytic Framework for Students' Use of Mathematics in Upper-Division Physics

    Many students in upper-division physics courses struggle with the mathematically sophisticated tools and techniques that are required for advanced physics content. We have developed an analytical framework to assist instructors and researchers in characterizing students' difficulties with specific mathematical tools when solving the long and complex problems that are characteristic of upper-division physics. In this paper, we present this framework, including its motivation and development. We also describe an application of the framework to investigations of student difficulties with direct integration in electricity and magnetism (i.e., Coulomb's Law) and approximation methods in classical mechanics (i.e., Taylor series). These investigations provide examples of the types of difficulties encountered by advanced physics students, as well as the utility of the framework for both researchers and instructors.
    Comment: 17 pages, 4 figures, 3 tables, in Phys. Rev. - PER.
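
    To make the two mathematical tools named above concrete, here is a standard textbook illustration (not a problem taken from the paper) that combines direct integration of Coulomb's Law with a Taylor approximation. For the on-axis electric field a distance d from the near end of a uniform line charge of length L and total charge Q = \lambda L,

        E = \frac{1}{4\pi\varepsilon_0} \int_0^L \frac{\lambda \, dx}{(d + x)^2}
          = \frac{Q}{4\pi\varepsilon_0 \, d(d + L)},

    and for d \gg L a Taylor expansion in L/d recovers the expected point-charge limit,

        E = \frac{Q}{4\pi\varepsilon_0 d^2} \left(1 - \frac{L}{d} + \frac{L^2}{d^2} - \cdots\right)
          \approx \frac{Q}{4\pi\varepsilon_0 d^2}.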

    Characterizing lab instructors' self-reported learning goals to inform development of an experimental modeling skills assessment

    The ability to develop, use, and refine models of experimental systems is a nationally recognized learning outcome for undergraduate physics lab courses. However, no assessments of students' model-based reasoning exist for upper-division labs. This study is the first step toward development of modeling assessments for optics and electronics labs. In order to identify test objectives that are likely relevant across many institutional contexts, we interviewed 35 lab instructors about the ways they incorporate modeling in their course learning goals and activities. The study design was informed by the Modeling Framework for Experimental Physics. This framework conceptualizes modeling as consisting of multiple subtasks: making measurements, constructing system models, comparing data to predictions, proposing causes for discrepancies, and enacting revisions to models or apparatus. We found that each modeling subtask was identified by multiple instructors as an important learning outcome for their course. Based on these results, we argue that test objectives should include probing students' competence with most modeling subtasks, and test items should be designed to elicit students' justifications for choosing particular modeling pathways. In addition to discussing these and other implications for assessment, we also identify future areas of research related to the role of modeling in optics and electronics labs.
    Comment: 24 pages, 2 figures, 5 tables; submitted to Phys. Rev. PER.

    How Daubert and its Progeny Have Failed Criminalistics Evidence and a Few Things the Judiciary Could Do About It.

    Part I documents how courts have failed to faithfully apply Daubert’s criteria for scientific validity to this type of evidence. It describes how ambiguities and flaws in the terminology adopted in Daubert, combined with the opaqueness of forensic-science publications and standards, have been exploited to shield some test methods from critical judicial analysis. Simply desisting from these avoidance strategies would be an improvement. Part II notes how part of the U.S. Supreme Court’s opinion in Kumho Tire Co. v. Carmichael has enabled courts to lower the bar for what is presented as scientific evidence by mistakenly maintaining that there is no difference between that evidence and other expert testimony that need not be scientifically validated. It suggests that a version of Rule 702 that explicitly insists on more rigorous validation of evidence that is promoted or understood as being “scientific” would be workable and more clearly compatible with the rule’s common law roots. Part III sketches the various meanings of the terms “reliability” and “validity” in science and statistics, on the one hand, and in the rules and opinions on the admissibility of expert evidence, on the other. It discusses the two-part definition of “validity” in the PCAST report and the report’s proposed criteria for demonstrating the scientific validity of subjective pattern-matching testimony. It contends that if “validity” means that a procedure (even a highly subjective one) for making measurements and drawing inferences is fit for its intended use, then one must still evaluate whether test results with higher error rates than those selected in the report might nevertheless assist fact finders who are also appropriately informed of the evidence’s probative value. Finally, Part IV articulates two distinct approaches to informing judges or jurors of the import of similarities in features: the traditional one, in which examiners opine on the truth and falsity of source hypotheses, and a more finely grained one, in which criminalists report only on the strength of the evidence. It suggests that the rules for admitting scientific evidence need to be flexible enough to accommodate the latter, likelihood-based testimony when it has a satisfactory, empirically established basis.

    An Investigation of Students' Use and Understanding of Evaluation Strategies

    One expected outcome of physics instruction is that students develop quantitative reasoning skills, including evaluation of problem solutions. To investigate students’ use of evaluation strategies, we developed and administered tasks prompting students to check the validity of a given expression. We collected written (N > 673) and interview (N = 31) data at the introductory, sophomore, and junior levels. Tasks were administered in three different physics contexts: the velocity of a block at the bottom of an incline with friction, the electric field due to three point charges of equal magnitude, and the final velocities of two masses in an elastic collision. Responses were analyzed using modified grounded theory and phenomenology. In these three contexts, we explored different facets of students’ use and understanding of evaluation strategies. First, we document and analyze the various evaluation strategies students use when prompted, comparing them to canonical strategies. Second, we describe how the identified strategies relate to prior work, with particular emphasis on how a strategy we describe as grouping relates to the phenomenon of chunking as described in cognitive science. Finally, we examine how the prevalence of these strategies varies across different levels of the physics curriculum. From our quantitative data, we found that while all the surveyed student populations drew from the same set of evaluation strategies, the percentage of students who used sophisticated evaluation strategies was higher in the sophomore and junior/senior populations than in the first-year population. From our case studies of two pair interviews (one pair of first-years and one pair of juniors), we found that while evaluating an expression, both juniors and first-years performed similar actions. However, while the first-year students focused on computation and checked for arithmetic consistency with the laws of physics, the juniors checked for computational correctness and also probed whether the equation accurately described the physical world and obeyed the laws of physics. Our case studies suggest that a key difference between expert and novice evaluation is that experts extract physical meaning from their results and make sense of them by comparing them to other representations of the laws of physics and to real-life experience. We conclude with remarks on implications for classroom instruction as well as suggestions for future work.
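
    To make the phrase "evaluation strategies" concrete, here is a canonical limiting-case check (a standard textbook illustration, not one of the study's actual tasks) applied to the final velocities in a one-dimensional elastic collision of a projectile of mass m_1 and initial speed v_1 with a target of mass m_2 initially at rest:

        v_1' = \frac{m_1 - m_2}{m_1 + m_2} \, v_1, \qquad
        v_2' = \frac{2 m_1}{m_1 + m_2} \, v_1 .

    Checking special cases supports the validity of these expressions: for m_1 = m_2, v_1' = 0 and v_2' = v_1 (the incoming velocity is handed off to the target); for m_2 \to \infty, v_1' \to -v_1 and v_2' \to 0 (the projectile rebounds from an effectively immovable target); and both expressions carry the dimensions of velocity.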