35 research outputs found

    To what extent are Teaching Assistants really managed?: ‘I was thrown in the deep end, really; I just had to more or less get on with it’

    Get PDF
    The main aim of this research was to secure a better understanding of how Local Authorities (LAs), Senior Leadership Teams (SLTs) and teachers in state schools perceive their responsibilities for the deployment, leadership and management of teaching assistants (TAs). Current research in the field (some of which has been highly influential on policy) has largely focussed on aspects of TA performance and pupil attainment. Importantly, we have chosen to investigate how TAs and SLTs themselves describe their experiences of management. A total of 71 teaching assistants, together with teachers, senior leaders in primary schools and LA advisors across two Local Authorities, were surveyed. Based on 55 questionnaire responses, 11 interviews and a focus group (n=5), we found evidence of a dislocation of management priorities for effective TA deployment. What emerged was a strong sense of ‘otherness’ felt by many TAs, who believed themselves to be dissociated from their own management. We conclude that TAs make up a workforce that appears to be closely managed but which is in fact often poorly led, resulting in feelings of detachment.

    Laboratory comparison of low-cost particulate matter sensors to measure transient events of pollution

    Get PDF
    Airborne particulate matter (PM) exposure has been identified as a key environmental risk factor, associated especially with diseases of the respiratory and cardiovascular systems and with almost 9 million premature deaths per year. Low-cost optical sensors for PM measurement are desirable for monitoring exposure closer to the personal level and are particularly suited to developing spatiotemporally dense city sensor networks. However, questions remain over the accuracy and reliability of the data they produce, particularly regarding the influence of environmental parameters such as humidity and temperature, and of varying PM sources and concentration profiles. In this study, eight units each of five different models of commercially available low-cost optical PM sensors (40 individual sensors in total) were tested under controlled laboratory conditions, against higher-grade instruments, for: lower limit of detection, response time, responses to sharp pollution spikes lasting <1 min, and the impact of differing humidity and PM source. All sensors detected the generated spikes, with performance varying by model; their responses were sensitive mainly to the source of pollution and to the particle size distribution, with humidity having a lesser impact. The sensitivity to particle size distribution indicates that the sensors may provide additional information beyond PM mass concentrations. It is concluded that improved performance in field monitoring campaigns, including tracking sources of pollution, could be achieved by using a combination of some of the different models, to take advantage of the additional information made available by their differential response.
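    As a rough illustration of the spike-response comparison described above, the sketch below shows one way to score a low-cost sensor trace against a reference instrument; the column names, the 1 Hz sampling assumption and the 90% response-time criterion are illustrative choices, not the study's actual analysis code.

```python
# Minimal sketch (not the study's analysis code): comparing a low-cost PM
# sensor against a reference instrument during a short pollution spike.
# Column names, 1 Hz sampling and the 90 % criterion are assumptions.
import pandas as pd

def spike_metrics(sensor: pd.Series, reference: pd.Series, baseline_s: int = 60) -> dict:
    """Align two 1 Hz PM time series and return simple spike-response metrics."""
    df = pd.concat({"sensor": sensor, "reference": reference}, axis=1).dropna()
    baseline = df.iloc[:baseline_s].mean()        # pre-spike baseline, per column
    rise = df - baseline                          # baseline-corrected signals
    peak_ratio = rise["sensor"].max() / rise["reference"].max()
    t90 = (rise["sensor"] >= 0.9 * rise["sensor"].max()).idxmax()  # first time 90 % of peak is reached
    return {"peak_ratio": peak_ratio,             # sensor peak relative to reference peak
            "t90_index": t90,                     # index label of the 90 % response point
            "pearson_r": df["sensor"].corr(df["reference"])}
```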

    Laboratory comparison of low-cost particulate matter sensors to measure transient events of pollution—part B—particle number concentrations

    Get PDF
    Low-cost Particulate Matter (PM) sensors offer an excellent opportunity to improve our knowledge about this type of pollution. Their size and cost, which support multi-node network deployment, along with their temporal resolution, enable them to report at fine spatio-temporal resolution for a given area. These sensors have known issues across performance metrics. Generally, the literature focuses on the PM mass concentrations reported by these sensors, but some sensor models also report Particle Number Concentrations (PNCs) segregated into different PM size ranges. In this study, eight units each of the Alphasense OPC-R1, Plantower PMS5003 and Sensirion SPS30 were exposed, under controlled conditions, to short-lived peaks of PM generated using two different combustion sources, exposing the sensors to different particle size distributions in order to quantify and better understand their performance under a range of relevant environmental conditions. The PNCs reported by the sensors were analysed to characterise the sensor-reported particle size distribution, to determine whether sensor-reported PNCs can follow the transient variations of PM observed by the reference instruments, and to determine the relative impact of different variables on sensor performance. This study shows that the Alphasense OPC-R1 reported at least five size ranges independently of each other, that the Sensirion SPS30 reported two size ranges independently of each other, and that none of the size ranges reported by the Plantower PMS5003 were independent of each other. It demonstrates that all sensors tested here could track the fine temporal variation of PNCs, that the Alphasense OPC-R1 could closely follow the variations in size distribution between the two sources of PM, and that particle size distribution and composition have a greater impact on sensor measurements than relative humidity.
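    One way to probe the independence-of-size-ranges finding is simply to look at how strongly the bin-wise particle number concentrations co-vary; the short sketch below does this with pairwise correlations (the bin labels and data layout are illustrative assumptions, not the paper's method).

```python
# Minimal sketch (not the paper's method): if the PNC columns reported for
# different size bins are almost perfectly correlated, the bins are unlikely
# to be independent measurements. Bin labels below are invented.
import pandas as pd

def bin_correlations(pnc: pd.DataFrame) -> pd.DataFrame:
    """pnc: rows are timestamps, one column per reported size bin."""
    return pnc.corr(method="pearson")

# Hypothetical usage:
# corr = bin_correlations(df[["pnc_0.3-0.5um", "pnc_0.5-1um", "pnc_1-2.5um"]])
# print(corr.round(2))   # values close to 1 everywhere suggest dependent bins
```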

    De-black-boxing health AI: demonstrating reproducible machine learning computable phenotypes using the N3C-RECOVER Long COVID model in the All of Us data repository

    Get PDF
    Machine learning (ML)-driven computable phenotypes are among the most challenging to share and reproduce. Despite this difficulty, the urgent public health considerations around Long COVID make it especially important to ensure the rigor and reproducibility of Long COVID phenotyping algorithms, such that they can be made available to a broad audience of researchers. As part of the NIH Researching COVID to Enhance Recovery (RECOVER) Initiative, researchers with the National COVID Cohort Collaborative (N3C) devised and trained an ML-based phenotype to identify patients with a high probability of having Long COVID. Supported by RECOVER, N3C and NIH’s All of Us study partnered to reproduce the output of N3C’s trained model in the All of Us data enclave, demonstrating model extensibility across multiple environments. This case study in ML-based phenotype reuse illustrates how open-source software best practices and cross-site collaboration can de-black-box phenotyping algorithms, prevent unnecessary rework, and promote open science in informatics.
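    The reproducibility pattern described here (a trained model artifact plus an explicit feature contract that a second site applies to its own data) can be sketched in a few lines; the code below is a generic, hypothetical illustration of that pattern, not the N3C or All of Us implementation.

```python
# Hypothetical sketch of cross-site model reuse; not the N3C/All of Us code.
# File names, the feature-spec format and the scikit-learn-style model API
# are assumptions made for illustration.
import json
import joblib
import pandas as pd

def apply_phenotype(model_path: str, feature_spec_path: str, cohort: pd.DataFrame) -> pd.Series:
    """Score a locally derived cohort table with a model trained elsewhere."""
    model = joblib.load(model_path)                     # serialised, pre-trained classifier
    with open(feature_spec_path) as fh:
        features = json.load(fh)["feature_names"]       # shared feature contract
    X = cohort.reindex(columns=features, fill_value=0)  # enforce column names and order
    return pd.Series(model.predict_proba(X)[:, 1],
                     index=cohort.index, name="phenotype_probability")
```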

    Field deployment of low power high performance nodes

    No full text
    When deploying a sensor network into a harsh environment, high levels of fault tolerance and efficient use of the available resources become extremely important. This has been achieved by implementing a highly fault-tolerant system based on our Gumsense boards, which combine an ARM-based Linux system with an MSP430 for sensing and power control. The system also allows dynamic schedule modifications based on the available power and can be synchronised with other systems without relying on direct communication; autonomous behaviour in the case of total communications failure is also supported. A deployment on Vatnajökull, the largest ice cap in Europe, has provided a long-term test for the systems and revealed strengths and weaknesses in the design decisions.
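    The power-aware scheduling idea can be illustrated with a trivial policy; the sketch below is a hypothetical example of lengthening the duty cycle as the supply voltage drops, not the actual Gumsense firmware, and the thresholds are invented.

```python
# Hypothetical sketch, not the Gumsense firmware: adapt the sensing schedule
# to the available power by stretching the sleep interval as voltage falls.
# Voltage thresholds and intervals are invented for illustration.
def next_wake_interval(battery_voltage: float) -> int:
    """Seconds to sleep before the next sensing cycle."""
    if battery_voltage > 3.9:       # ample power: sample every 10 minutes
        return 10 * 60
    if battery_voltage > 3.6:       # constrained: hourly sampling
        return 60 * 60
    return 6 * 60 * 60              # survival mode: every 6 hours
```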

    Issues of robustness and high dimensionality in cluster analysis

    No full text
    Finite mixture models are being increasingly used to model the distributions of a wide variety of random phenomena. While normal mixture models are often used to cluster data sets of continuous multivariate data, a more robust clustering can be obtained by considering the t mixture model-based approach. Mixtures of factor analyzers enable model-based density estimation to be undertaken for high-dimensional data where the number of observations n is not very large relative to their dimension p. As the approach using the multivariate normal family of distributions is sensitive to outliers, it is more robust to adopt the multivariate t family for the component error and factor distributions. The computational aspects associated with robustness and high dimensionality in these approaches to cluster analysis are discussed and illustrated.
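    In the usual notation of this literature (not quoted from the paper itself), the model being described is a g-component mixture of multivariate t distributions with a factor-analytic covariance structure:

```latex
% Standard formulation; conventional notation, not copied from this paper.
\[
  f(\mathbf{y};\boldsymbol{\Psi})
    = \sum_{i=1}^{g} \pi_i\,
      f_t\!\left(\mathbf{y};\, \boldsymbol{\mu}_i,\, \boldsymbol{\Sigma}_i,\, \nu_i\right),
  \qquad
  \boldsymbol{\Sigma}_i = \mathbf{B}_i \mathbf{B}_i^{\top} + \mathbf{D}_i,
\]
% where f_t is the multivariate t density with location mu_i, scale matrix Sigma_i
% and nu_i degrees of freedom, the pi_i are mixing proportions, B_i is a p x q
% matrix of factor loadings with q << p, and D_i is diagonal; this reduced
% covariance structure keeps the parameter count manageable when p is large.
```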

    A sample and data management system for ÎŒCT-based X-ray histology

    No full text
    While setting up a facility for X-ray histology (XRH) [1] in Southampton, the challenge arose of how to manage samples, data and associated metadata. This is challenging because the same sample can be analysed multiple times, in different states and with different modalities, including micro-computed tomography (ÎŒCT), conventional (2D) thin-section-based histology/whole-slide imaging, or immunohistochemistry. A fresh piece of tissue may be imaged, frozen, imaged, embedded in paraffin wax, imaged, and then sectioned and processed using histological techniques, all of which needs to be attached to the sample record. Moreover, the sample and data management system needed to be user-friendly. These requirements have led to the development of the XRHMS, a management system that keeps track of all samples within the facility, their data and their metadata, as summarised in Figure 1. Metadata of interest about a sample includes details of the tissue type and origin, preparation, storage requirements and current location. (Figure 1: Information included in the XRHMS.) The combination of storing raw/processed image data and metadata requires careful architectural decisions; the system builds on previous work in this area [2], using a database for the metadata and the file system for the ÎŒCT data. The system can store data from multiple different acquisition systems as well as digitised histological slides, and is designed to be easily extensible. The XRHMS provides full tracking and accounting of all samples, data and processes, and enables cross-linking between related datasets. The XRHMS also forms the basis for automation of data processing, with preview images and slices generated automatically and basic pre-processing performed, and with additional features planned. The system can generate summary reports about the samples, containing information about the scans performed and related images. These provide a useful overview of the scan and guidance on data interpretation for people not familiar with the technique or with viewing 3D datasets.
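    A minimal sketch of the kind of relational layout implied above (metadata in a database, bulk image volumes left on the file system) is shown below; the table and column names are invented for illustration and are not the actual XRHMS schema.

```python
# Hypothetical sketch, not the XRHMS implementation: one sample record linked
# to many imaging acquisitions, with metadata in a database and image data
# referenced by path on the file system. Names are invented for illustration.
import sqlite3

schema = """
CREATE TABLE IF NOT EXISTS sample (
    sample_id    INTEGER PRIMARY KEY,
    tissue_type  TEXT,
    origin       TEXT,
    preparation  TEXT,          -- e.g. fresh, frozen, paraffin-embedded
    location     TEXT           -- current physical storage location
);
CREATE TABLE IF NOT EXISTS acquisition (
    acquisition_id INTEGER PRIMARY KEY,
    sample_id      INTEGER REFERENCES sample(sample_id),
    modality       TEXT,        -- e.g. microCT, whole-slide imaging
    acquired_at    TEXT,
    data_path      TEXT         -- image volumes stay on the file system
);
"""

with sqlite3.connect("xrhms_sketch.db") as conn:
    conn.executescript(schema)
```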