
    An exploration of the language within Ofsted reports and their influence on primary school performance in mathematics: a mixed methods critical discourse analysis

    This thesis contributes to the understanding of the language of Ofsted reports: their similarity to one another and the associations between terms used within ‘areas for improvement’ sections and subsequent outcomes for pupils. The research responds to concerns from serving headteachers that Ofsted reports are overly similar, do not capture the unique story of their school, and are unhelpful for improvement. To answer ‘how similar are Ofsted reports?’, the study uses two tools, plagiarism-detection software (Turnitin) and a discourse-analysis tool (NVivo), to identify trends within and across a large corpus of reports. The approach is based on critical discourse analysis (Van Dijk, 2009; Fairclough, 1989) but shaped as practitioner enquiry, locating power in impact on pupils and practitioners rather than in the method's more traditional, sociological application. The research found that in 2017, primary school section 5 Ofsted reports had more than half of their content exactly duplicated within other primary school inspection reports published that same year. Discourse analysis showed that the quality assurance process overrode variables such as inspector designation, gender, or team size, leading to three distinct patterns of duplication: block duplication, self-referencing, and template writing. The most unique part of a report was the ‘area for improvement’ section, which was tracked to externally verified outcomes for pupils using terms linked to ‘mathematics’. Schools required to improve mathematics in their areas for improvement subsequently improved progress and attainment in mathematics significantly more than national rates. These findings indicate a positive correlation between the inspection reporting process and a beneficial impact on pupil outcomes in mathematics, and that the significant similarity of one report to another had no bearing on a report's usefulness for school improvement purposes within this corpus.
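
    The thesis measured duplication with Turnitin; as a rough illustration of the underlying idea (not the study's tooling: the function names and the five-word window size here are assumptions), the share of one report's word windows that recur verbatim in other reports can be computed as follows:

        def shingles(text, n=5):
            """Return the set of n-word windows (shingles) occurring in a text."""
            words = text.lower().split()
            return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

        def duplication_rate(idx, texts, n=5):
            """Fraction of report idx's shingles found verbatim in any other report."""
            own = shingles(texts[idx], n)
            if not own:
                return 0.0
            others = set()
            for j, t in enumerate(texts):
                if j != idx:
                    others |= shingles(t, n)
            return len(own & others) / len(own)

        # reports = [full text of each 2017 section 5 report, ...]
        # rates = [duplication_rate(i, reports) for i in range(len(reports))]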

    Evaluation of image quality and reconstruction parameters in recent PET-CT and PET-MR systems

    In this PhD dissertation, we evaluate the impact of using different PET isotopes for the National Electrical Manufacturers Association (NEMA) performance tests of the GE Signa integrated PET/MR. The methods fall into three closely related categories: NEMA performance measurements, system modelling, and evaluation of the image quality of state-of-the-art clinical PET scanners. NEMA performance measurements characterizing spatial resolution, sensitivity, image quality, the accuracy of attenuation and scatter corrections, and noise equivalent count rate (NECR) were performed using clinically relevant and commercially available radioisotopes. We then modelled the GE Signa integrated PET/MR system using a realistic GATE Monte Carlo simulation and validated it against the results of the NEMA measurements (sensitivity and NECR). Next, the effect of the 3T MR field on the positron range was evaluated for F-18, C-11, O-15, N-13, Ga-68 and Rb-82. Finally, to evaluate the image quality of state-of-the-art clinical PET scanners, a noise reduction study was performed using a Bayesian penalized-likelihood reconstruction algorithm on a time-of-flight PET/CT scanner to investigate whether, and to what extent, noise can be reduced. The outcome of this thesis will allow clinicians to reduce the PET dose, which is especially relevant for young patients. In addition, the Monte Carlo simulation platform for PET/MR developed for this thesis will allow physicists and engineers to better understand and design integrated PET/MR systems.
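
    For reference, the NECR mentioned above has a standard definition in the NEMA NU 2 protocol: NECR = T^2 / (T + S + kR), where T, S and R are the true, scattered and random coincidence rates and k depends on how randoms are estimated. A minimal helper illustrating the formula (not the thesis code):

        def necr(trues, scatters, randoms, k=1.0):
            """Noise equivalent count rate T^2 / (T + S + k*R), all rates in counts/s.
            k = 1 for a smoothed randoms estimate, k = 2 for direct delayed-window
            subtraction."""
            denominator = trues + scatters + k * randoms
            return trues ** 2 / denominator if denominator else 0.0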

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counterfactual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed; in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
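
    As a schematic sketch of the idea (illustrative only; the module names, dimensions, and architecture are assumptions, and a working system would add losses that keep the two branches independent), an autoencoder can route speech through separate content and speaker encoders and reconstruct from their concatenation, so that swapping the speaker branch performs voice conversion:

        import torch
        import torch.nn as nn

        class DisentanglingAutoencoder(nn.Module):
            """Toy model: factor an utterance into frame-level content and a
            single speaker vector, then reconstruct from the pair."""
            def __init__(self, feat_dim=80, content_dim=64, speaker_dim=16):
                super().__init__()
                self.content_enc = nn.GRU(feat_dim, content_dim, batch_first=True)
                self.speaker_enc = nn.GRU(feat_dim, speaker_dim, batch_first=True)
                self.decoder = nn.GRU(content_dim + speaker_dim, feat_dim, batch_first=True)

            def forward(self, mel, speaker_ref=None):
                # Content: one vector per frame; speaker: one vector per utterance.
                content, _ = self.content_enc(mel)
                _, spk = self.speaker_enc(mel if speaker_ref is None else speaker_ref)
                spk = spk[-1].unsqueeze(1).expand(-1, mel.size(1), -1)
                out, _ = self.decoder(torch.cat([content, spk], dim=-1))
                return out

        # Voice conversion: content from utterance A, speaker vector from utterance B:
        # converted = model(mel_a, speaker_ref=mel_b)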

    A framework for the comparative analysis of text summarization techniques

    Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. With the boom of information technology and the IoT (Internet of Things), the volume of data is increasing at an alarming rate. If harnessed and channeled in the right direction, this data can yield meaningful information. The problem is that the data is not always numerical; it may be entirely textual, and some meaning has to be derived from it. Going through these texts manually would take hours or even days to produce concise and meaningful information. This is where the need for an automatic summarizer arises: it eases manual intervention and reduces time and cost while retaining the key information held by the texts. In recent years, new methods and approaches have been developed to do this, and they are applied in many domains; for example, search engines provide snippets as document previews, while news websites produce shortened descriptions of news items, usually as headlines, to make browsing easier. Broadly speaking, there are two main approaches to text summarization: extractive and abstractive. Extractive summarization filters the important sections out of the whole text to form a condensed version. Abstractive summarization interprets and examines the text as a whole and, after discerning its meaning, generates new sentences that describe the important points concisely.
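
    As a concrete sketch of the extractive approach (assuming scikit-learn is available; the full-stop sentence splitting is deliberately naive, and the scoring simply favours sentences with more high-weight terms), sentences can be ranked by their total TF-IDF weight and the top scorers returned in document order:

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer

        def extractive_summary(text, n_sentences=3):
            """Keep the n sentences with the highest total TF-IDF weight,
            returned in their original order."""
            sentences = [s.strip() for s in text.split(".") if s.strip()]
            tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
            scores = np.asarray(tfidf.sum(axis=1)).ravel()
            top = sorted(np.argsort(scores)[-n_sentences:])
            return ". ".join(sentences[i] for i in top) + "."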

    A “spatially just” transition? A critical review of regional equity in decarbonisation pathways

    Spatial justice is a theoretical framework that is increasingly used to examine questions of equity in the low carbon transition (LCT) from a geographical perspective. We conducted a semi-systematic review to define a ‘spatially just’ low carbon transition, considering how spatial dimensions are explicitly or implicitly presented in assessments of the LCT, and the policy and governance approaches that could embed spatial justice. A sample of 75 academic articles was thematically coded. Spatial justice involves the fair distribution of both the benefits and the burdens associated with LCTs, and this often creates problems of equity given the geographic gap between regions that ‘win’ and those that ‘lose’. The studies point to a research gap in exploring fairness implications that go beyond the employment impacts of transition. Acceptance of the LCT is shown to be contingent on perceptions of justice, particularly whether the most responsible and capable actors are taking action. There is similar concern that the LCT may not address, or may reproduce, existing patterns of injustice; this is particularly the case for spatially inequitable land uses and where historic planning policy has had lasting socioeconomic impacts. Policy challenges to making LCTs more spatially just include administrative fragmentation across spatial scales and a lack of coordination in net zero policy. We identify that future transition policymaking could benefit from using spatially targeted interventions and from adopting a whole systems approach. By recognising the multiple economic vulnerabilities of different regions, LCT policymaking can become both more effective and, critically, more just.

    Science and corporeal religion: a feminist materialist reconsideration of gender/sex diversity in religiosity

    This dissertation develops a feminist materialist interpretation of the role the neuroendocrine system plays in the development of gender/sex differences in religion. Data emerging from psychology, sociology, and cognitive science have continually indicated that women are more religious than men, in various senses of those contested terms, but the factors contributing to these findings are little understood and disciplinary perspectives are often unhelpfully siloed. Previous scholarship has tended either to highlight socio-cultural factors while ignoring biological factors or to focus on biological factors while relying on problematic and unsubstantiated gender stereotypes. Addressing gender/sex difference is vital for understanding religion and how we study it. This dissertation interprets this difference by means of a multidisciplinary theoretical and methodological approach. The approach builds upon insights from the cognitive and evolutionary science of religion, affect theory and affective neuroscience, and social neuroendocrinology, and it is rooted in the foundational insights of feminist materialism, including that cultural and micro-sociological forces are inseparable from biological materiality. The dissertation shows how a better way of understanding gender/sex differences in religion emerges through focusing on the co-construction of biological materiality and cultural meanings. This includes deploying a gene-culture co-evolutionary explanation of ultrasociality and an understanding of the biology of performativity to argue that religious behavior and temperaments emerge from the enactment and hormonal underpinnings of six affective adaptive desires: the desires for (1) bonding and attachment, (2) communal mythos, (3) deliverance from suffering, (4) purpose, (5) understanding, and (6) reliable leadership. By hypothesizing the patterns of hormonal release and activation associated with ritualized affects (primarily oxytocin, testosterone, vasopressin, estrogen, dopamine, and serotonin), the dissertation theorizes four dimensions of religious temperament: (1) nurturant religiosity, (2) ecstatic religiosity, (3) protective/hierarchical religiosity, and (4) antagonistic religiosity. The dissertation conceptualizes hormones as chemical messengers that enable the diversity emerging from the imbrication of physical materiality and socio-cultural forces. In doing so, it demonstrates how hormonal aspects of gender/sex and culturally constructed aspects of gender/sex are always already intertwined in their influence on religiosity. This theoretical framework sheds light on both the diversity and the noticeable patterns observed in gender/sex differences in religious behaviors and affects, problematizing the terms of the ‘women are more religious than men’ claim while putting in place a more adequate framework for interpreting the variety of ways it appears in human lives.

    CITIES: Energetic Efficiency, Sustainability; Infrastructures, Energy and the Environment; Mobility and IoT; Governance and Citizenship

    This book collects important contributions on smart cities. It was created in collaboration with the ICSC-CITIES2020, held in San José (Costa Rica) in 2020, and gathers articles on: energetic efficiency and sustainability; infrastructures, energy and the environment; mobility and IoT; and governance and citizenship.

    Command and Persuade

    Levels of violent crime have been in steady decline for centuries, even for millennia; over the past five hundred years, homicide rates have decreased a hundred-fold. We live in a time that is more orderly and peaceful than ever before in human history. Why, then, does fear of crime dominate modern politics? And why, when we have been largely socialized into good behavior, are there more laws governing our behavior than ever before? In Command and Persuade, Peter Baldwin examines the evolution of the state's role in crime and punishment over three thousand years. Baldwin explains that the involvement of the state in law enforcement and crime prevention is relatively recent. In ancient Greece, those struck by lightning were assumed to have been punished by Zeus. In the Hebrew Bible, God was judge, jury, and prosecutor when Cain killed Abel. As the state's power as lawgiver has grown, the sum total of prohibited behavior has expanded continuously. At the same time, as family, community, and church exerted their influences, we have become better behaved and more law-abiding. Even as the state stands as the socializer of last resort, it also defines through law the terrain on which we are schooled into acceptable behavior. This title is also available in an Open Access edition.