
    Does New Teacher Support Affect Student Achievement? Some Early Research Findings

    We understand the importance of having qualified, effective teachers in every classroom. We have learned from many research studies, particularly those of William Sanders and his colleagues in Tennessee, that students who are taught by effective teachers (defined by Sanders as those whose students consistently post gains in student achievement scores) for several years in a row will experience the benefits throughout the rest of their school careers and beyond. After three years with the most effective teachers, students show achievement gains significantly higher than those of students with the least effective teachers. We can reasonably hypothesize that more experienced teachers will exceed the effectiveness of recently inducted beginning teachers. Further, as is now widely recognized in most states, new teachers need and benefit from support during their induction period. Support during new teachers' first year or two may be just as important to their effectiveness as their pre-service training, their state certification, and their subject-matter skills. To justify assigning resources to support novice teachers, legislators and school district administrators need to be convinced that such support is associated with educational outcomes beyond participant satisfaction. Researchers have shown that induction and mentoring programs may have a positive effect on teacher retention. However, few studies demonstrate any connection between new teacher induction and student achievement, the outcome that is probably of most interest to parents, educators, and legislators. Perhaps the main reason for this is that such studies are difficult to conduct. First, it is hard to obtain the necessary data: many schools and districts do not maintain databases connecting student test scores to teachers, many states do not test students in all grade levels annually, and tests are changed frequently, making it difficult to compare performance from year to year. Also, induction programs vary, and many factors besides the kinds of support beginning teachers receive contribute to changes in student achievement: school variables; family, economic, and social circumstances; other kinds of support such as teacher aides, subject-matter specialists, and tutoring; teaching to the test; language issues; and students' health and mood at the time of testing. Finally, not all educators agree on the validity of using standardized test scores to measure student learning. Imposing an experimental design on treatment and subjects would address all of these issues except the last. However, the most challenging aspect of this field is often securing access to a suitable control or comparison group of any sort, much less one meeting the standards of an experimental design. These dilemmas force compromises that can make interpretation more difficult.

    Sonocrystallisation of Lactose in an Aqueous System

    Although research on sonocrystallisation of lactose has been reported in the literature (yield and crystal size), the effect of ultrasound variables on the nucleation and growth rates of lactose has not been studied. In this study, lactose crystallisation with ultrasound was compared with mechanical agitation using the induction time method at 22 °C. Ultrasound had a significant effect in reducing induction times and narrowing the metastable zone width, but had no effect on individual crystal growth rate or morphology. A rapid decrease in induction time was observed up to a power density of 0.46 W g⁻¹. Sonication for up to 3 min decreased the induction time, but no further reduction was observed beyond 3 min. It was not possible to generate the nucleation rates achieved by sonication using agitation alone. One minute of sonication at a power density of 0.46 W g⁻¹ followed by continuous stirring was found to be the optimum under the experimental conditions tested.
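
    As background for the induction time method mentioned above, the relation below is the standard classical-nucleation-theory reading of induction time measurements; it is not taken from the paper, which may analyse its data differently, and the symbols (solution volume V, pre-exponential factor A, interfacial energy γ, molecular volume v_m, supersaturation ratio S) are assumptions for illustration. A shorter induction time corresponds to a higher nucleation rate, which is why reduced induction times under sonication indicate enhanced nucleation.

    % Classical nucleation theory, mononuclear limit (illustrative, not the paper's analysis)
    \[
      t_{\mathrm{ind}} \;\approx\; \frac{1}{J\,V},
      \qquad
      J \;=\; A\,\exp\!\Big[-\frac{16\pi\,\gamma^{3}\,v_m^{2}}{3\,k_B^{3}\,T^{3}\,(\ln S)^{2}}\Big]
    \]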

    Beyond Q-Resolution and Prenex Form: A Proof System for Quantified Constraint Satisfaction

    We consider the quantified constraint satisfaction problem (QCSP), which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.
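
    To make the decision problem concrete, the sketch below evaluates a non-prenex sentence built from conjunction and quantification over a finite structure by straightforward recursion. It is illustrative only: the structure, the relation name E, the tuple encoding of formulas, and the example sentence are invented here, and the paper's proof system for certifying falsity is not shown.

    # Minimal sketch (not the paper's proof system): deciding a QCSP instance,
    # i.e. evaluating a first-order sentence built from conjunction and
    # quantification (not necessarily prenex) over a finite structure.

    def holds(formula, structure, assignment):
        """Recursively evaluate a formula under a partial variable assignment."""
        kind = formula[0]
        if kind == "rel":                      # ("rel", name, (vars...))
            _, name, args = formula
            return tuple(assignment[v] for v in args) in structure["relations"][name]
        if kind == "and":                      # ("and", f1, f2)
            return holds(formula[1], structure, assignment) and \
                   holds(formula[2], structure, assignment)
        if kind == "exists":                   # ("exists", var, body)
            _, var, body = formula
            return any(holds(body, structure, {**assignment, var: a})
                       for a in structure["domain"])
        if kind == "forall":                   # ("forall", var, body)
            _, var, body = formula
            return all(holds(body, structure, {**assignment, var: a})
                       for a in structure["domain"])
        raise ValueError(f"unknown connective {kind!r}")

    # Example: a directed 3-cycle and the non-prenex sentence
    #   forall x ( exists y E(x,y)  AND  exists z E(z,x) )
    cycle = {"domain": {0, 1, 2},
             "relations": {"E": {(0, 1), (1, 2), (2, 0)}}}
    sentence = ("forall", "x",
                ("and",
                 ("exists", "y", ("rel", "E", ("x", "y"))),
                 ("exists", "z", ("rel", "E", ("z", "x")))))

    print(holds(sentence, cycle, {}))          # True: every vertex has in- and out-edges

    Evaluation by this recursion can take time exponential in the number of nested quantifiers, which is one reason the tractable fragments mentioned in the abstract are of interest.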

    Beyond Covariation: Cues to Causal Structure

    Causal induction has two components: learning about the structure of causal models and learning about causal strength and other quantitative parameters. This chapter argues for several interconnected theses. First, people represent causal knowledge qualitatively, in terms of causal structure; quantitative knowledge is derivative. Second, people use a variety of cues to infer causal structure aside from statistical data (e.g., temporal order, intervention, coherence with prior knowledge). Third, once a structural model is hypothesized, subsequent statistical data are used to confirm, refute, or elaborate the model. Fourth, people are limited in the number and complexity of causal models that they can hold in mind to test, but they can separately learn and then integrate simple models, and revise models by adding and removing single links. Finally, current computational models of learning need further development before they can be applied to human learning.
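
    The role of intervention as a cue beyond covariation can be seen in a small simulation. This is a hypothetical illustration, not from the chapter; the function names and probabilities (0.5, 0.9) are made up. Under the generating model X → Y, X and Y covary in observational data, but fixing Y by intervention severs the X → Y link and removes the dependence, something covariation alone cannot reveal.

    # Illustrative sketch: intervention as a cue to causal structure.
    import random

    def sample_x_causes_y(do_y=None, n=10_000):
        """Simulate X -> Y; optionally intervene by fixing Y (do(Y = do_y))."""
        pairs = []
        for _ in range(n):
            x = random.random() < 0.5                     # exogenous cause
            if do_y is None:
                y = x if random.random() < 0.9 else not x # Y follows X, with 10% noise
            else:
                y = do_y                                  # intervention severs X -> Y
            pairs.append((x, y))
        return pairs

    def agreement(pairs):
        """Fraction of samples where X and Y agree: a crude dependence measure."""
        return sum(x == y for x, y in pairs) / len(pairs)

    print(agreement(sample_x_causes_y()))           # ~0.9: strong observed covariation
    print(agreement(sample_x_causes_y(do_y=True)))  # ~0.5: do(Y) leaves X unaffected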

    Phase Clocks for Transient Fault Repair

    Phase clocks are synchronization tools that implement a form of logical time in distributed systems. For systems tolerating transient faults by self-repair of damaged data, phase clocks can enable reasoning about the progress of distributed repair procedures. This paper presents a phase clock algorithm suited to the model of transient memory faults in asynchronous systems with read/write registers. The algorithm is self-stabilizing and guarantees accuracy of phase clocks within O(k) time following an initial state that is k-faulty. Composition theorems show how the algorithm can be used for the timing of distributed procedures that repair system outputs. Comment: 22 pages, LaTeX
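
    For intuition about what a phase clock provides, here is a toy sketch. It is not the paper's algorithm: it assumes a complete communication graph, uses unbounded counters, and carries no O(k) repair-time guarantee. The rule "advance only when no process lags behind you" lets laggards catch up from an arbitrary initial state, after which all phases stay within one step of each other.

    # Toy shared-memory phase clock sketch (NOT the paper's algorithm).
    import random

    N = 4
    phase = [random.randrange(10) for _ in range(N)]   # arbitrary (possibly corrupted) start

    def step(i):
        """Process i reads every register; it advances only if no one lags behind it."""
        my = phase[i]
        if all(phase[j] >= my for j in range(N)):      # i holds a minimum phase
            phase[i] = my + 1                          # safe to enter the next phase

    print("initial:", phase)
    for _ in range(200):                               # randomly scheduled asynchronous steps
        step(random.randrange(N))
    print("after repair:", phase, "spread =", max(phase) - min(phase))
    # Once stabilized, all phases stay within 1 of each other, so a repair procedure
    # can use the common phase to tell when every process has finished the current round.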

    Completeness of Flat Coalgebraic Fixpoint Logics

    Modal fixpoint logics traditionally play a central role in computer science, in particular in artificial intelligence and concurrency. The mu-calculus and its relatives are among the most expressive logics of this type. However, popular fixpoint logics tend to trade expressivity for simplicity and readability, and in fact often live within the single variable fragment of the mu-calculus. The family of such flat fixpoint logics includes, e.g., LTL, CTL, and the logic of common knowledge. Extending this notion to the generic semantic framework of coalgebraic logic enables covering a wide range of logics beyond the standard mu-calculus including, e.g., flat fragments of the graded mu-calculus and the alternating-time mu-calculus (such as alternating-time temporal logic ATL), as well as probabilistic and monotone fixpoint logics. We give a generic proof of completeness of the Kozen-Park axiomatization for such flat coalgebraic fixpoint logics. Comment: Short version appeared in Proc. 21st International Conference on Concurrency Theory, CONCUR 2010, Vol. 6269 of Lecture Notes in Computer Science, Springer, 2010, pp. 524-53
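
    For readers unfamiliar with the terminology, the following standard examples (not taken from the paper) show why LTL, CTL, and the logic of common knowledge live in the single-variable fragment: each characteristic operator is definable by one fixpoint over one variable. Below them is the Kozen-Park schema in its usual mu-calculus form, fixpoint unfolding plus the Park induction rule; the paper proves completeness of an adaptation of this schema in the coalgebraic setting.

    % Standard flat (single-variable) fixpoint definitions
    \begin{align*}
      \mathsf{EF}\,p &\;\equiv\; \mu X.\, p \lor \mathsf{EX}\,X,
      &
      \mathsf{AG}\,p &\;\equiv\; \nu X.\, p \land \mathsf{AX}\,X,
      \\
      \varphi\,\mathsf{U}\,\psi &\;\equiv\; \mu X.\, \psi \lor (\varphi \land \mathsf{X}X)
      \quad\text{(LTL)},
      &
      C\,\varphi &\;\equiv\; \nu X.\, \textstyle\bigwedge_{i} K_i(\varphi \land X)
      \quad\text{(common knowledge)}.
    \end{align*}
    % Kozen-Park schema: unfolding axiom plus the Park induction rule
    \[
      \varphi[\mu X.\varphi / X] \to \mu X.\varphi,
      \qquad
      \frac{\varphi[\psi / X] \to \psi}{\mu X.\varphi \to \psi}.
    \]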

    Where do statistical models come from? Revisiting the problem of specification

    R. A. Fisher founded modern statistical inference in 1922 and identified its fundamental problems to be specification, estimation, and distribution. Since then the problem of statistical model specification has received scant attention in the statistics literature. The paper traces the history of statistical model specification, focusing primarily on pioneers like Fisher, Neyman, and more recently Lehmann and Cox, and attempts a synthesis of their views in the context of the Probabilistic Reduction (PR) approach. As argued by Lehmann [11], a major stumbling block for a general approach to statistical model specification has been the delineation of the appropriate role for substantive subject matter information. The PR approach demarcates the interrelated but complementary roles of substantive and statistical information, summarized ab initio in the form of a structural and a statistical model, respectively. In an attempt to preserve the integrity of both sources of information, as well as to ensure the reliability of their fusing, a purely probabilistic construal of statistical models is advocated. This probabilistic construal is then used to shed light on a number of issues relating to specification, including the role of preliminary data analysis, structural vs. statistical models, model specification vs. model selection, statistical vs. substantive adequacy, and model validation. Comment: Published at http://dx.doi.org/10.1214/074921706000000419 in the IMS Lecture Notes--Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)
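
    As a minimal illustration of a purely probabilistic construal (this example is mine, not the paper's): specifying a statistical model amounts to laying out, ab initio, a parametrized family of distributions together with its probabilistic assumptions, e.g. the simple Normal model below, with Normality, independence, and identical distribution stated explicitly. Statistical adequacy then asks whether those assumptions hold for the data in hand, separately from whether the substantive (structural) model is adequate.

    % Illustrative specification of a simple Normal statistical model
    \[
      \mathcal{M}_{\boldsymbol{\theta}}(\mathbf{x})
        \;=\; \Bigl\{\, f(\mathbf{x};\boldsymbol{\theta})
          = \prod_{i=1}^{n} \tfrac{1}{\sigma\sqrt{2\pi}}
            \exp\!\Bigl(-\tfrac{(x_i-\mu)^2}{2\sigma^2}\Bigr)
          \;:\; \boldsymbol{\theta} = (\mu,\sigma^2) \in \mathbb{R}\times\mathbb{R}_{+} \Bigr\}
    \]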