    Child observation and emotional discomfort: the experience of trainee psychologists

    Young Child Observation (YCO) is a foundational component of psychoanalytic training in many parts of the world and has been adapted for various training courses in psychology, psychotherapy, education and social work. While the professional benefits of YCO are established, the experience of observers conducting observations outside of traditional psychoanalytic training settings is under-researched. YCO observers experience significant emotional discomfort; however, this has not been well documented, nor has its impact on observers and their professional development. This study addresses that gap by analysing the emotional discomfort experienced by 10 postgraduate psychology students from a single university, who completed a seven-week YCO and wrote self-reflective reports on their personal experience. Participant reports and notes from each completed observation were analysed using Reflexive Thematic Analysis. Three main themes were identified: Managing the Observer Role, The Struggle for Belonging, and Countertransference. Participants reported a range of experiences eliciting emotional discomfort, which, in the course of individual and supervision group reflection, led to personal and professional development. Findings from this study indicate that a short YCO enriches the quality of professional psychological training, even when this training is not explicitly psychoanalytic in nature.

    Towards a unified approach to formal risk of bias assessments for causal and descriptive inference

    Statistics is sometimes described as the science of reasoning under uncertainty. Statistical models provide one view of this uncertainty, but what is frequently neglected is the invisible portion of uncertainty: that assumed not to exist once a model has been fitted to some data. Systematic errors, i.e. bias, in data relative to some model and inferential goal can seriously undermine research conclusions, and qualitative and quantitative techniques have been created across several disciplines to quantify and generally appraise such potential biases. Perhaps best known are the so-called risk of bias assessment instruments used to investigate the likely quality of randomised controlled trials in medical research. However, the logic of assessing the risks posed by various types of systematic error to statistical arguments applies far more widely. It applies even when statistical adjustment strategies for potential biases are used, as these frequently make assumptions (e.g. data missing at random) that can never be guaranteed in finite samples. Mounting concern about such situations can be seen in the increasing calls for greater consideration of biases caused by nonprobability sampling in descriptive inference (i.e. survey sampling), and of the statistical generalisability of in-sample causal effect estimates in causal inference; both relate to the consideration of model-based and wider uncertainty when presenting research conclusions from models. Given that model-based adjustments are never perfect, we argue that qualitative risk of bias reporting frameworks for both descriptive and causal inferential arguments should be further developed and made mandatory by journals and funders. It is only through clear statements of the limits of statistical arguments that consumers of research can fully judge their value for any specific application.

    Automated classification metrics for energy modelling of residential buildings in the UK with open algorithms

    Estimating residential building energy use across large spatial extents is vital for identifying and testing effective strategies to reduce carbon emissions and improve urban sustainability. This task is underpinned by the availability of accurate models of building stock from which appropriate parameters may be extracted. For example, the form of a building, such as whether it is detached, semi-detached, or terraced, together with its shape, may be used as part of a typology for defining its likely energy use. When these details are combined with information on building construction materials or glazing ratio, they can be used to infer the heat transfer characteristics of different properties. However, these data are not readily available for energy modelling or urban simulation. Although this is not a problem when the geographic scope is a small area and the data can be collected by hand, such manual approaches cannot be easily applied at the city or national scale. In this paper, we demonstrate an approach that can automatically extract this information at the city scale using off-the-shelf products supplied by a National Mapping Agency. We present two novel techniques to create this knowledge directly from input geometry. The first technique identifies built form based upon the physical relationships between buildings. The second determines a more refined internal/external wall measurement and ratio; it has greater metric accuracy and can also be used to address problems identified in extracting the built form. A case study is presented for the City of Nottingham in the United Kingdom using two data products provided by the Ordnance Survey of Great Britain (OSGB): MasterMap and AddressBase. This is followed by a discussion of a new categorisation approach for housing form for urban energy assessment.
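    The core idea of classifying built form from physical relationships between buildings can be sketched as a graph problem: buildings sharing a party wall form a shared-wall graph, and the size of each connected component (plus each building's own adjacency count) distinguishes detached, semi-detached, and terraced properties. The function and identifiers below are illustrative, not the paper's actual implementation, and the adjacency input is assumed to have been derived already from the building polygons.

    ```python
    def classify_built_form(adjacency):
        """Classify built form from a shared-wall graph.

        adjacency: dict mapping building id -> set of ids that share a
        party wall with it (hypothetical pre-computed input).
        Returns dict mapping building id -> form label.
        """
        seen, labels = set(), {}
        for start in adjacency:
            if start in seen:
                continue
            # Depth-first search to collect one connected component.
            component, stack = [], [start]
            while stack:
                b = stack.pop()
                if b in seen:
                    continue
                seen.add(b)
                component.append(b)
                stack.extend(adjacency[b] - seen)
            # Component size determines the coarse form; within a terrace,
            # the adjacency count separates mid- from end-terrace.
            for b in component:
                if len(component) == 1:
                    labels[b] = "detached"
                elif len(component) == 2:
                    labels[b] = "semi-detached"
                else:
                    labels[b] = "mid-terrace" if len(adjacency[b]) >= 2 else "end-terrace"
        return labels

    # A lone house, a semi-detached pair, and a terrace of four:
    adjacency = {
        "a": set(),
        "b": {"c"}, "c": {"b"},
        "d": {"e"}, "e": {"d", "f"}, "f": {"e", "g"}, "g": {"f"},
    }
    labels = classify_built_form(adjacency)
    ```

    In practice the adjacency sets would be derived from topological tests on MasterMap building polygons; this sketch only shows how form labels follow once those relationships are known.
    
    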

    Descriptive inference using large, unrepresentative nonprobability samples: an introduction for ecologists

    Biodiversity monitoring usually involves drawing inferences about some variable of interest across a defined landscape from observations made at a sample of locations within that landscape. If the variable of interest differs between sampled and non-sampled locations, and no mitigating action is taken, then the sample is unrepresentative and inferences drawn from it will be biased. It is possible to adjust unrepresentative samples so that they more closely resemble the wider landscape in terms of "auxiliary variables". A good auxiliary variable is a common cause of sample inclusion and the variable of interest, and if it explains an appreciable portion of the variance in both, then inferences drawn from the adjusted sample will be closer to the truth. We applied six types of survey sample adjustment — subsampling, quasi-randomisation, poststratification, superpopulation modelling, a "doubly robust" procedure, and multilevel regression and poststratification — to a simple two-part biodiversity monitoring problem. The first part was to estimate mean occupancy of the plant Calluna vulgaris in Great Britain in two time periods (1987-1999 and 2010-2019); the second was to estimate the difference between the two (i.e. the trend). We estimated the means and trend using large, but (originally) unrepresentative, samples from a citizen science dataset. Compared to the unadjusted estimates, the means and trends estimated using most adjustment methods were more accurate, although standard uncertainty intervals generally did not cover the true values. Completely unbiased inference is not possible from an unrepresentative sample without knowing and having data on all relevant auxiliary variables. Adjustments can reduce the bias if auxiliary variables are available and selected carefully, but the potential for residual bias should be acknowledged and reported.
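    Of the six adjustments listed, poststratification is the simplest to illustrate: partition the sample by an auxiliary variable, compute the mean within each stratum, and reweight those stratum means by the known population shares. The strata, numbers, and variable names below are invented for illustration and are not taken from the study's data.

    ```python
    def poststratify(sample, pop_shares):
        """Poststratified estimate of a population mean.

        sample: dict stratum -> list of 0/1 occupancy observations
        pop_shares: dict stratum -> known share of the landscape in
        that stratum (must sum to 1). Both inputs are hypothetical.
        """
        estimate = 0.0
        for stratum, share in pop_shares.items():
            obs = sample[stratum]
            stratum_mean = sum(obs) / len(obs)
            estimate += share * stratum_mean
        return estimate

    # Suppose recorders oversample upland cells, where occupancy is high:
    sample = {
        "upland":  [1, 1, 1, 0, 1, 1],  # 6 of 10 observations, mean 5/6
        "lowland": [0, 0, 1, 0],        # 4 of 10 observations, mean 1/4
    }
    pop_shares = {"upland": 0.2, "lowland": 0.8}  # true landscape shares

    n = sum(len(v) for v in sample.values())
    naive = sum(sum(v) for v in sample.values()) / n    # ignores sampling bias
    adjusted = poststratify(sample, pop_shares)          # reweights by stratum
    ```

    The naive mean is pulled upward by the oversampled upland cells; the poststratified estimate down-weights them to their true landscape share. The method is only unbiased to the extent that the stratifying variable captures the causes of both sample inclusion and occupancy, which is the caveat the abstract stresses.
    
    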