966 research outputs found

    School Climate Development Survey

    Over the last twenty-five years, the Consortium on Chicago School Research has engaged in systematic study of more than 400 Chicago Public Schools to determine organizational traits that are related to improvement in student learning. This effort was designed to help explain widely divergent levels of student success between very similar schools in the Chicago system. Initial discussions with educators at all levels, reviews of previous research, pilot studies, and field studies led to the identification of five school contextual factors – the 5Essential Supports – determined to be critical to school success: (1) effective leaders, (2) collaborative teachers, (3) involved families, (4) supportive environment, and (5) ambitious instruction. The framework of the 5Essential Supports served as a theoretical basis for a survey effort designed to measure and report on facets of school culture that could then be used by school leaders and practitioners to guide school improvement efforts. Research related to the 5Essential Supports consistently demonstrates a strong relationship between the presence of these supports and gains in student achievement. Led by Dr. James McMillan and Dr. Charol Shakeshaft from VCU’s School of Education, this MERC study aimed (1) to develop a shortened version of the 5Essentials staff climate survey for the Metropolitan Educational Research Consortium schools, (2) to pilot test the new survey with teachers and administrators, and (3) to determine effective methods of dissemination to support schools’ use of the survey data for school improvement purposes. The piloting and validation phase of the study demonstrated that the core constructs underlying the 5Essentials maintained high levels of validity and reliability in the shortened version. MERC also piloted formats for reporting school climate results and received feedback on them from school leaders.

    Study of the relative efficiency of four tuberculin skin tests

    Although it was as long ago as 1890 that Robert Koch introduced Old Tuberculin, only within comparatively recent years has the practical application of this great discovery been realised to the full. There have been many reasons to account for this tardiness in development. On the one hand the supporters of the then orthodox teaching, if not openly antagonistic to Koch, were quietly cynical as to whether anything would come of this new discovery, while on the other hand many hailed it as the cure for a scourge which for centuries had taken its toll of the civilised world. However, with the passage of years there has been consolidated, from the systematised scientific examination of Old Tuberculin, a vast amount of evidence demonstrating the essential value of this substance, leaving us eternally indebted to Koch for his discovery. From those studies, not only has our understanding of the clinical course of Tuberculosis been clarified, but we have also been given a product that will prove of immense value in the future eradication of Tuberculosis from civilisation. A short historical résumé is given of the evolution of the Tuberculin Skin Tests commonly in use to-day. An attempt is made, by studying the reactions to those tests in 150 tuberculosis contacts, to assess the relative efficiency of each test. Of the Mantoux (1 mgm.) positive reactors, 87% were Vollmer positive, 45% were Pirquet positive, and 68% were Moro positive. Of the Mantoux (0.1 mgm.) positive reactors, 93% were Vollmer positive, 49% were Pirquet positive, and 74% were Moro positive. A method of combined Vollmer and Mantoux testing is suggested with a view to simplifying the technique and eliminating unnecessary pain.

    Epistemic normativity in Kant's “Second Analogy”


    Moral experience: Perception or emotion?


    An empirical investigation of issues relating to software immigrants

    This thesis focuses on the issue of people in software maintenance and, in particular, on software immigrants – developers who are joining maintenance teams to work with large unfamiliar software systems. By means of a structured literature review this thesis identifies a lack of empirical literature in Software Maintenance in general and an even more distinct lack of papers examining the role of People in Software Maintenance. Whilst there is existing work examining what maintenance programmers do, the vast majority of it is from a managerial perspective, looking at the goals of maintenance programmers rather than their day-to-day activities. To help remedy this gap in the research, a series of interviews with maintenance programmers was undertaken across a variety of different companies. Four key results were identified: maintainers specialise; companies do not provide adequate system training; external sources of information about the system are not guaranteed to be available; and, even when they are available, they are not considered trustworthy. These results combine to form a very challenging picture for software immigrants. Software immigrants are maintainers who are new to working with a system, although they are not normally new to programming. Although there is literature on software immigrants and the activities they undertake, there is no comparative literature, that is, literature that examines and compares different ways for software immigrants to learn about the system they have to maintain. Furthermore, a common feature of software immigrants’ learning patterns is the existence and use of mentors to impart system knowledge. However, as the interviews show, mentors are often not available, which makes examining alternative ways of building a software immigrant’s level-of-understanding about the system they must maintain all the more important. As a result, the final piece of work in this thesis is the design, running, and results of a controlled laboratory experiment comparing different work-based approaches to developing a level-of-understanding about a system. Two approaches were compared: one in which subjects actively worked with and altered the code, and one in which subjects took a passive ‘hands-off’ approach. The end result showed no difference in the level-of-understanding gained between the subjects who performed the active task and those who performed the passive task. This means that there is no benefit to taking a hands-off approach to building a level-of-understanding about new code in the hostile environment identified from the literature and interviews, and that software immigrants should start working with the code, fulfilling maintenance requests, as soon as possible.

    II. Prefaces


    LDPC codes from semipartial geometries

    A binary low-density parity-check (LDPC) code is a linear block code that is defined by a sparse parity-check matrix H; that is, H has a low density of 1’s. LDPC codes were originally presented by Gallager in his doctoral dissertation [9], but were largely overlooked for the next 35 years. A notable exception was [29], in which Tanner introduced a graphical representation for LDPC codes, now known as Tanner graphs. However, interest in these codes has greatly increased since 1996 with the publication of [22] and other papers, since it has been realised that LDPC codes are capable of achieving near-optimal performance when decoded using iterative decoding algorithms. LDPC codes can be constructed randomly by using a computer algorithm to generate a suitable matrix H. However, it is also possible to construct LDPC codes explicitly using various incidence structures in discrete mathematics. For example, LDPC codes can be constructed from the points and lines of finite geometries: there are many examples in the literature (see for example [18, 28]). These constructed codes can possess certain advantages over randomly-generated codes; for example, they may admit more efficient encoding algorithms. Furthermore, it can be easier to understand and determine the properties of such codes because of the underlying structure. LDPC codes have also been constructed from incidence structures known as partial geometries [16]. The aim of this research is to provide examples of new codes constructed from structures known as semipartial geometries (SPGs), which are generalisations of partial geometries. Since the commencement of this thesis, [19] was published, which showed that codes could be constructed from semipartial geometries and provided some examples and basic results. By necessity this thesis contains a number of results from that paper. However, it should be noted that the scope of [19] is fairly limited and that the overlap between the current thesis and [19] is consequently small. [19] also contains a number of errors, some of which have been noted and corrected in this thesis.
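
    As a minimal illustration of the parity-check definition above (a hedged sketch, not a construction from a partial or semipartial geometry), the Python snippet below builds a small hand-made sparse matrix H and checks whether a vector is a codeword by testing Hc = 0 over GF(2); the matrix and test vectors are hypothetical examples.

```python
# Minimal sketch of the parity-check definition of a linear block code.
# H is a small hand-made sparse matrix (a [7,4] Hamming-style example),
# not one derived from a partial or semipartial geometry.
import numpy as np

H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8)

def is_codeword(c: np.ndarray) -> bool:
    """c belongs to the code defined by H iff H @ c = 0 (mod 2)."""
    return not np.any((H @ c) % 2)

print(is_codeword(np.array([1, 1, 0, 0, 1, 1, 0], dtype=np.uint8)))  # True: zero syndrome
print(is_codeword(np.array([1, 0, 0, 0, 0, 0, 0], dtype=np.uint8)))  # False: nonzero syndrome
```

    Geometry-based constructions in the literature typically work the same way in principle, except that the rows and columns of H are indexed by the lines and points of the geometry, with a 1 placed exactly where a point is incident with a line.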

    A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

    Background: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools. Discussion: We argue that the reasons for the slow adoption of machine learning tools in systematic reviews are multifactorial, and we focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see “others” in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews address classification problems; therefore, the evidence that these tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared with a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the documentation challenges unique to reproducible software experiments. Conclusion: We discuss adoption barriers with the goal of giving tool developers guidance on how to design and report such evaluations, and of helping end users assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we identify that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.
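
    The abstract does not specify an evaluation protocol; as a hedged sketch of the diagnostic-test style comparison it mentions, the snippet below scores a hypothetical screening tool's include/exclude decisions against a human reviewer's labels and reports precision and recall (all data shown are made-up placeholders).

```python
# Minimal sketch (not from the paper): comparing an automation tool's
# screening decisions with a human reviewer's, diagnostic-test style.
# The decision lists are hypothetical placeholders.
human = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]  # reviewer: 1 = include, 0 = exclude
tool  = [1, 0, 0, 0, 1, 1, 1, 0, 0, 1]  # tool's decisions on the same records

tp = sum(h == 1 and t == 1 for h, t in zip(human, tool))  # agreed inclusions
fp = sum(h == 0 and t == 1 for h, t in zip(human, tool))  # tool-only inclusions
fn = sum(h == 1 and t == 0 for h, t in zip(human, tool))  # missed inclusions

precision = tp / (tp + fp)  # share of the tool's inclusions the reviewer also included
recall = tp / (tp + fn)     # share of the reviewer's inclusions the tool recovered
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```

    In screening, recall is usually weighted more heavily than precision, since a missed relevant study is costlier than an extra record to read; any non-inferiority claim would need to make that trade-off explicit.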
