School Climate Development Survey
Over the last twenty-five years the Consortium on Chicago School Research has engaged in systematic study of more than 400 Chicago Public Schools to determine organizational traits that are related to improvement in student learning. This effort was designed to help explain widely divergent levels of student success between very similar schools in the Chicago system. Initial discussions with educators at all levels, reviews of previous research, pilot studies, and field studies led to the identification of five school contextual factors (the 5Essential Supports) determined to be critical to school success: (1) effective leaders, (2) collaborative teachers, (3) involved families, (4) supportive environment, and (5) ambitious instruction. The framework of the 5Essential Supports served as a theoretical basis for a survey effort designed to measure and report on facets of school culture that could then be used by school leaders and practitioners to guide school improvement efforts. Research related to the 5Essential Supports consistently demonstrates a strong relationship between the presence of these supports and gains in student achievement.
Led by Dr. James McMillan and Dr. Charol Shakeshaft from VCU's School of Education, the purpose of this MERC study was (1) to develop a shortened version of the 5Essentials staff climate survey for the Metropolitan Educational Research Consortium schools, (2) to pilot test the new survey with teachers and administrators, and (3) to determine effective methods of dissemination to support schools' use of the survey data for school improvement purposes. The piloting and validation phase of the study demonstrated that the core constructs underlying the 5Essentials maintained high levels of validity and reliability in the shortened version. MERC also piloted, and received feedback from school leaders on, formats for reporting school climate results.
Study of the relative efficiency of four tuberculin skin tests
Although it was as long ago as 1890 that Robert Koch introduced Old Tuberculin, only within comparatively recent years has the practical application of this great discovery been realised to the full.
There have been many reasons to account for this tardiness in development. On the one hand, the supporters of the then orthodox teaching, if not openly antagonistic to Koch, were quietly cynical as to whether anything would come of this new discovery, while on the other hand many hailed it as the cure for a scourge which for centuries had taken its toll of the civilised world.
However, with the passage of years a vast amount of evidence has been consolidated from the systematised scientific examination of Old Tuberculin, demonstrating the essential value of this substance and leaving us eternally indebted to Koch for his discovery. From those studies, not only has our understanding of the clinical course of Tuberculosis been clarified, but we have been given a product that will prove of immense value in the future eradication of Tuberculosis from civilisation.
A short historical resume is given of the evolution of the Tuberculin Skin Tests commonly in use to-day.
An attempt is made, by the study of the reactions from those tests on 150 tuberculosis contacts, to assess the relative efficiency of each test.
Of the Mantoux (1 mgm.) positive reactors:
87% were Vollmer positive
45% were Pirquet positive
68% were Moro positive.
Of the Mantoux (0.1 mgm.) positive reactors:
93% were Vollmer positive
49% were Pirquet positive
74% were Moro positive.
A method of combined Vollmer and Mantoux testing is suggested, with a view to simplifying the technique and eliminating unnecessary pain.
An empirical investigation of issues relating to software immigrants
This thesis focuses on the issue of people in software maintenance and, in particular, on software immigrants: developers who are joining maintenance teams to work with large unfamiliar software systems. By means of a structured literature review, this thesis identifies a lack of empirical literature in Software Maintenance in general and an even more distinct lack of papers examining the role of People in Software Maintenance. Whilst there is existing work examining what maintenance programmers do, the vast majority of it is from a managerial perspective, looking at the goals of maintenance programmers rather than their day-to-day activities. To help remedy this gap in the research, a series of interviews with maintenance programmers was undertaken across a variety of different companies. Four key results were identified: maintainers specialise; companies do not provide adequate system training; external sources of information about the system are not guaranteed to be available; and even when they are available they are not considered trustworthy. These results combine to form a very challenging picture for software immigrants. Software immigrants are maintainers who are new to working with a system, although they are not normally new to programming. Although there is literature on software immigrants and the activities they undertake, there is no comparative literature, that is, literature that examines and compares different ways for software immigrants to learn about the system they have to maintain. Furthermore, a common feature of software immigrants' learning patterns is the existence and use of mentors to impart system knowledge. However, as the interviews show, mentors are often not available, which makes examining alternative ways of building a software immigrant's level-of-understanding about the system they must maintain all the more important.
As a result, the final piece of work in this thesis is the design, running, and results of a controlled laboratory experiment comparing different work-based approaches to developing a level-of-understanding about a system. Two approaches were compared: one where subjects actively worked on and altered the code, while a second group took a passive "hands-off" approach. The end result showed no difference in the level-of-understanding gained between the subjects who performed the active task and those who performed the passive task. This means that there is no benefit to taking a hands-off approach to building a level-of-understanding about new code in the hostile environment identified from the literature and interviews, and software immigrants should start working with the code, fulfilling maintenance requests, as soon as possible.
Normativity and Representation in Kant's Theory of Cognition
This dissertation examines various aspects of normativity and representation as they figure in Kant's theory of cognition. In particular, I argue that Kant holds that certain forms of representational content constitutively depend on normative constraint. This applies to all of the kinds of content that can be captured by concepts (viz. "kind"-properties, and the objective temporal structures that correspond to the "categories"). Since we perceptually represent objects as exhibiting these features, even the activities that produce perceptions must be normatively constrained. Nevertheless, representation per se does not depend on normative constraint: Kant holds that non-human animals can represent objects, suggesting that he endorses forms of "non-conceptual content" that don't depend on normative constraint.
Chapter 1 explores the preconditions for representing objective temporal sequence, as outlined in the Second Analogy. I argue that Kant's notion of the "necessitation of the subjective order of perceptions" must be understood as a form of normative necessity, so representations of objective temporal sequence constitutively depend on normativity.
Chapter 2 continues the discussion of the Second Analogy by exploring the connection between causation and lawfulness. I argue that Kant holds that the concept of causation contains the notion of lawful connection. He therefore has sound reasons for asserting the Strong Causal Principle (that every event is produced according to a universal causal law) on the basis of the Second Analogy's argument.
Chapter 3 examines the role of schemata in Kant's theory of cognition. Assuming that schemata are rules for synthesis of the imagination, I argue that they should be understood as akin to maxims: mentally represented rules that bring our activities into contact with intersubjective normative standards. I argue that, by bringing synthesis under normative constraint, schemata enable intuitions to represent their objects as bearing "kind"-properties.
Chapter 4 discharges the assumption that schemata are rules for synthesis of imagination, through close reading and criticism of alternative interpretations.
Chapter 5 examines Kant's views about animal minds and what they tell us about his theory of human cognition. I argue that he genuinely credits animals with intuitions of objects. Nevertheless, there are still good motivations for thinking that all human intuitions are produced by the understanding, and that this makes human and animal intuitions different in kind.
The Conclusion brings together material from the preceding five chapters to discuss the extent to which Kant endorses a "normative theory of representation".
AHRC
Leverhulme Trust
LDPC codes from semipartial geometries
A binary low-density parity-check (LDPC) code is a linear block code that is defined by a sparse parity-check matrix H; that is, H has a low density of 1s. LDPC codes were originally presented by Gallager in his doctoral dissertation [9], but were largely overlooked for the next 35 years. A notable exception was [29], in which Tanner introduced a graphical representation for LDPC codes, now known as Tanner graphs. However, interest in these codes has greatly increased since 1996 with the publication of [22] and other papers, since it has been realised that LDPC codes are capable of achieving near-optimal performance when decoded using iterative decoding algorithms.
LDPC codes can be constructed randomly by using a computer algorithm to generate a suitable matrix H. However, it is also possible to construct LDPC codes explicitly using various incidence structures in discrete mathematics. For example, LDPC codes can be constructed based on the points and lines of finite geometries: there are many examples in the literature (see for example [18, 28]). These constructed codes can possess certain advantages over randomly-generated codes. For example, they may provide more efficient encoding algorithms than randomly-generated codes. Furthermore, it can be easier to understand and determine the properties of such codes because of the underlying structure.
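The role of the sparse matrix H can be illustrated with a minimal sketch (not taken from the thesis; the matrix and codewords below are invented for illustration): each row of H is a parity check with few 1s, and a received word is a codeword exactly when its syndrome over GF(2) is all-zero.

```python
# Illustrative sketch: a toy sparse parity-check matrix H and a
# syndrome check over GF(2). H and the words below are invented;
# real LDPC codes use much larger, still sparse, matrices.

def syndrome(H, word):
    """Compute H * word^T over GF(2); an all-zero syndrome means
    the word satisfies every parity check, i.e. it is a codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

# Each row has only three 1s out of six entries (low density).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
]

codeword = [1, 1, 1, 0, 0, 1]   # satisfies all three parity checks
received = [1, 1, 1, 0, 0, 0]   # last bit flipped in transmission

print(syndrome(H, codeword))  # [0, 0, 0] -> valid codeword
print(syndrome(H, received))  # [0, 0, 1] -> error detected
```

The same H also describes the Tanner graph mentioned above: each column is a variable node, each row a check node, and each 1 in H an edge between them, which is what the iterative decoding algorithms operate on.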
LDPC codes have been constructed based on incidence structures known as partial geometries [16]. The aim of this research is to provide examples of new codes constructed based on structures known as semipartial geometries (SPGs), which are generalisations of partial geometries.
Since the commencement of this thesis, [19] was published, which showed that codes could be constructed from semipartial geometries and provided some examples and basic results. By necessity this thesis contains a number of results from that paper. However, it should be noted that the scope of [19] is fairly limited and that the overlap between the current thesis and [19] is consequently small. [19] also contains a number of errors, some of which have been noted and corrected in this thesis.
A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?
Background Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools.
Discussion We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see "others" in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews mimic classification problems. Therefore, the evidence that these automation tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared to a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the unique documentation challenges of reproducible software experiments.
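The diagnostic-test-style comparison described above can be sketched in a few lines; the include/exclude labels here are invented for illustration, with the human reviewer's decisions treated as the reference standard.

```python
# Hypothetical sketch: scoring a screening-automation tool against a
# human reviewer's include (1) / exclude (0) decisions, using the
# precision and recall framing suggested in the text. Data are invented.

def precision_recall(human, tool):
    """Treat the human reviewer's labels as ground truth and score
    the tool's labels as a binary classifier."""
    tp = sum(1 for h, t in zip(human, tool) if h == 1 and t == 1)
    fp = sum(1 for h, t in zip(human, tool) if h == 0 and t == 1)
    fn = sum(1 for h, t in zip(human, tool) if h == 1 and t == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of tool includes, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of true includes, how many the tool found
    return precision, recall

human = [1, 1, 0, 0, 1, 0, 1, 0]   # reviewer's screening decisions
tool  = [1, 1, 1, 0, 0, 0, 1, 0]   # automation tool's decisions

p, r = precision_recall(human, tool)
print(p, r)  # 0.75 0.75
```

For systematic review screening, recall is usually the metric of greatest concern, since a missed eligible study (a false negative) is more damaging to a review than an extra record passed on for manual checking.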
Conclusion We discuss adoption barriers with the goal of providing tool developers with guidance on how to design and report such evaluations, and end users with the means to assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we identify that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.