
    An exploration of the language within Ofsted reports and their influence on primary school performance in mathematics: a mixed methods critical discourse analysis

    This thesis contributes to the understanding of the language of Ofsted reports, their similarity to one another, and associations between the terms used within ‘areas for improvement’ sections and subsequent outcomes for pupils. The research responds to concerns from serving headteachers that Ofsted reports are overly similar, do not capture the unique story of their school, and are unhelpful for improvement. In seeking to answer ‘how similar are Ofsted reports?’, the study uses two tools, plagiarism detection software (Turnitin) and a discourse analysis tool (NVivo), to identify trends within and across a large corpus of reports. The approach is based on critical discourse analysis (Van Dijk, 2009; Fairclough, 1989) but shaped in the form of practitioner enquiry, seeking power in the form of impact on pupils and practitioners rather than a more traditional, sociological application of the method. The research found that in 2017, primary school section 5 Ofsted reports had more than half of their content exactly duplicated within other primary school inspection reports published that same year. Discourse analysis showed that the quality assurance process overrode variables such as inspector designation, gender, or team size, leading to three distinct patterns of duplication: block duplication, self-referencing, and template writing. The most unique part of a report was found to be the ‘area for improvement’ section, which was tracked to externally verified outcomes for pupils using terms linked to ‘mathematics’. Schools required to improve mathematics in their areas for improvement improved progress and attainment in mathematics significantly more than national rates. These findings indicate that there was a positive correlation between the inspection reporting process and a beneficial impact on pupil outcomes in mathematics, and that the significant similarity of one report to another had no bearing on the usefulness of the report for school improvement purposes within this corpus.
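    The corpus-level duplication measurement described in this abstract can be approximated in a few lines. This is an illustrative sketch only: Turnitin's matching algorithm is proprietary, and the shingle size and sample texts below are invented for illustration, not values from the thesis.

    ```python
    # Approximate "exactly duplicated content" between inspection reports
    # using word n-gram shingles. Illustrative only; not Turnitin's method.

    def shingles(text, n=8):
        """Return the set of word n-grams (shingles) in a text."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def duplication_share(report, corpus_reports, n=8):
        """Fraction of a report's shingles that appear in any other report."""
        own = shingles(report, n)
        if not own:
            return 0.0
        others = set().union(*(shingles(r, n) for r in corpus_reports))
        return len(own & others) / len(own)
    ```

    Averaging `duplication_share` over every report in a year's corpus would give a rough analogue of the "more than half of their content exactly duplicated" finding; smaller `n` counts shorter matches and inflates the score, so the choice of shingle size matters.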

    Identifying and responding to people with mild learning disabilities in the probation service

    It has long been recognised that, like many other individuals, people with learning disabilities find their way into the criminal justice system. This fact is not disputed. What has been disputed, however, is the extent to which those with learning disabilities are represented within the various agencies of the criminal justice system and the ways in which the criminal justice system (and society) should address this. Recently, social and legislative confusion over the best way to deal with offenders with learning disabilities and mental health problems has meant that the waters have become even more muddied. Despite current government uncertainty concerning the best way to support offenders with learning disabilities, the probation service is likely to continue to play a key role in the supervision of such offenders. The three studies contained herein aim to clarify the extent to which those with learning disabilities are represented in the probation service, to examine the effectiveness of probation for them, and to explore some of the ways in which probation could be adapted to fit their needs. Study 1 and Study 2 showed that around 10% of offenders on probation in Kent appeared to have an IQ below 75, putting them in the bottom 5% of the general population. Study 3 was designed to assess some of the support needs of those with learning disabilities in the probation service, finding that many of the materials used by the probation service are likely to be too complex for those with learning disabilities to use effectively. To address this, a model for service provision is tentatively suggested. This is based on the findings of the three studies and a pragmatic assessment of what the probation service is likely to be capable of achieving in the near future.

    Consent and the Construction of the Volunteer: Institutional Settings of Experimental Research on Human Beings in Britain during the Cold War

    This study challenges the primacy of consent in the history of human experimentation and argues that privileging cultural frameworks adds nuance to our understanding of the construction of the volunteer in the period 1945 to 1970. Historians and bioethicists have argued that medical ethics codes marked out the parameters of using people as subjects in medical scientific research, and that the consent of the subjects was fundamental to their status as volunteers. However, the temporality of the creation of medical ethics codes means that they need to be understood within their historical context. It needs to be acknowledged that medical ethics codes arose from a specific historical context rather than from a concerted and conscious determination to safeguard the well-being of subjects. The British context of human experimentation is under-researched, and there has been even less focus on the cultural frameworks within which experiments took place. This study demonstrates, through a close analysis of the Medical Research Council's Common Cold Research Unit (CCRU) and the government's military research facility, the Chemical Defence Experimental Establishment, Porton Down (Porton), that the ‘volunteer’ in human experiments was a subjective entity whose identity was specific to the institution which recruited and made use of the subject. By examining representations of volunteers in the British press, the rhetoric of the government's collectivist agenda becomes evident, and this fed into the institutional construction of the volunteer at the CCRU. In contrast, discussions between Porton scientists, staff members, and government officials demonstrate that the use of military personnel in secret chemical warfare experiments was far more complex. Conflicting interests of the military, the government, and the scientific imperative affected how the military volunteer was perceived.

    Development of an intelligent automated recognition system for real-time characterization of road surface conditions using a multi-sensor approach

    The role of a road weather service is to issue forecasts and warnings to road users about the state of the road surface, making it possible to anticipate dangerous driving conditions, particularly in winter. It is therefore important to determine the road surface condition at all times. The objective of this project is to develop an automated multi-sensor detection system for real-time characterization of road surface conditions (snow, ice, wet, dry). This thesis accordingly focuses on developing a deep learning method for fusing image and sound data, based on Dempster-Shafer theory. The direct measurements used to acquire the training data for the fusion model were made with two low-cost, commercially available sensors. The first sensor is a camera that records videos of the road surface. The second is a microphone that records the tyre-road interaction noise characteristic of each surface condition. The system is ultimately intended to run on a single-board computer for real-time acquisition, processing, and dissemination of information, in order to alert road maintenance services and road users. Specifically, the system works as follows: 1) a deep learning architecture classifies each surface condition from video frames, output as probabilities; 2) a deep learning architecture classifies each surface condition from sound, output as probabilities; 3) the probabilities from each architecture are then fed into the fusion model to obtain the final decision. To keep the system lightweight and inexpensive, it was built on architectures combining compactness and accuracy, namely SqueezeNet for images and M5 for sound. During validation, the system demonstrated good performance in detecting surface conditions, notably 87.9% for black ice and 97% for melting snow.
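    The fusion step in this abstract rests on Dempster's rule of combination. As a minimal sketch, assuming each classifier emits singleton (Bayesian) probabilities over four illustrative surface states, the rule reduces to a conflict-normalized element-wise product; the state names and probability values below are invented for illustration and do not come from the thesis.

    ```python
    # Sketch of Dempster-Shafer fusion for two classifiers whose outputs are
    # probabilities over singleton surface states (the Bayesian special case).

    STATES = ["dry", "wet", "ice", "melting_snow"]

    def dempster_fuse(m_image, m_sound):
        """Combine two singleton mass functions with Dempster's rule.

        For purely singleton masses, mass on incompatible state pairs is the
        conflict K; the surviving element-wise products are renormalized
        by 1 - K to yield the fused distribution.
        """
        joint = {s: m_image[s] * m_sound[s] for s in STATES}
        agreement = sum(joint.values())  # equals 1 - K
        if agreement == 0:
            raise ValueError("total conflict: sources share no support")
        return {s: v / agreement for s, v in joint.items()}

    # Hypothetical outputs from the image and sound branches:
    image_probs = {"dry": 0.10, "wet": 0.15, "ice": 0.60, "melting_snow": 0.15}
    sound_probs = {"dry": 0.05, "wet": 0.25, "ice": 0.55, "melting_snow": 0.15}

    fused = dempster_fuse(image_probs, sound_probs)
    ```

    With these illustrative inputs both sensors lean towards ice, so the fused mass on ice exceeds either individual estimate; this mutual reinforcement under agreement, and suppression under conflict, is the behaviour that motivates Dempster-Shafer fusion over simple averaging.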

    Strategies for Early Learners

    Welcome to learning about how to effectively plan curriculum for young children. This textbook will address:
    • Developing curriculum through the planning cycle
    • Theories that inform what we know about how children learn and the best ways for teachers to support learning
    • The three components of developmentally appropriate practice
    • Importance and value of play and intentional teaching
    • Different models of curriculum
    • Process of lesson planning (documenting planned experiences for children)
    • Physical, temporal, and social environments that set the stage for children’s learning
    • Appropriate guidance techniques to support children’s behaviors as their self-regulation abilities mature
    • Planning for preschool-aged children in specific domains, including:
        o Physical development
        o Language and literacy
        o Math
        o Science
        o Creative (the visual and performing arts)
        o Diversity (social science and history)
        o Health and safety
    • Making children’s learning visible through documentation and assessment

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, methods will sometimes capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; and in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.

    Towards a sociology of conspiracy theories: An investigation into conspiratorial thinking on Dönmes

    This thesis investigates the social and political significance of conspiracy theories, which have been an academically neglected topic despite their historical relevance. The academic literature considers the methodology, social significance and political impacts of these theories in isolation from one another, and lacks empirical analyses. In response, this research provides a comprehensive theoretical framework for conspiracy theories by considering their methodology, political impacts and social significance in the light of empirical data. Theoretically, the thesis uses Adorno's semi-erudition theory along with a Girardian approach. It proposes that conspiracy theories are methodologically semi-erudite narratives, i.e. they are biased in favour of a belief and use reason only to prove it. It suggests that conspiracy theories appear in times of power vacuum and provide semi-erudite cognitive maps that relieve the alienation and ontological insecurities of people and groups. In so doing, they enforce social control over their audience through their essentialist, closed-to-interpretation narratives. In order to verify the theory, the study empirically analyses the social and political significance of conspiracy theories about the Dönme community in Turkey. The analysis comprises interviews with conspiracy theorists, conspiracy theory readers and political parties, alongside a frame analysis of popular conspiracy theory books on the Dönmes. These confirm the theoretical framework by showing that the conspiracy theories are fed by the ontological insecurities of Turkish society. Hence, conspiracy theorists, most readers and some political parties respond to their own ontological insecurities and political frustrations by scapegoating the Dönmes. Consequently, this work shows that conspiracy theories are important symptoms of society which, while relieving ontological insecurities, do not provide politically prolific narratives.

    Image classification over unknown and anomalous domains

    A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting. Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each. While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so. In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. 
While recent ideas have focused on developing self-supervised solutions for the one-class setting, this thesis formulates new methods based on transfer learning. Extensive experimental evidence demonstrates that a transfer-based perspective benefits problems recently proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
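    The normalization idea in this abstract, using differing sample statistics for each visual mode rather than one shared batch mean and variance, can be sketched in a few lines. This is a hedged illustration, not the thesis's actual module: it assumes mode labels are available, whereas the latent-domain setting described above must account for hidden modes without such annotations.

    ```python
    import numpy as np

    def batch_norm(x, eps=1e-5):
        """Standard batch normalization: one mean/variance for the whole batch."""
        return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

    def per_mode_norm(x, modes, eps=1e-5):
        """Normalize each sample with the statistics of its own mode's subset.

        When the batch mixes modes with different feature statistics (e.g.
        photos vs. cartoons), shared statistics leave each mode offset from
        zero; per-mode statistics center and scale each mode separately.
        """
        out = np.empty_like(x)
        for m in np.unique(modes):
            idx = modes == m
            out[idx] = batch_norm(x[idx], eps)
        return out
    ```

    Running `per_mode_norm` on a batch drawn from two distributions with different means produces zero-mean features within each mode, whereas the shared `batch_norm` leaves each mode shifted away from zero, which is one way to see why heterogeneous data troubles standard batch normalization.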