
    Reconceptualising the media audience: towards an ethnography of audiences


    Industrial conflict and the mass media


    'It's a film': medium specificity as textual gesture in Red Road and The Unloved

    British cinema has long been intertwined with television. The buzzwords of the transition to digital media, 'convergence' and 'multi-platform delivery', have particular histories in the British context which can be grasped only through an understanding of the cultural, historical and institutional peculiarities of the British film and television industries. Central to this understanding must be two comparisons: first, the relative stability of television in the duopoly period (at its core, the licence-funded BBC) in contrast to the repeated boom and bust of the many different financial/industrial combinations which have comprised the film industry; and second, the cultural and historical connotations of 'film' and 'television'. All readers of this journal will be familiar – possibly over-familiar – with the notion that 'British cinema is alive and well and living on television'. At the end of the first decade of the twenty-first century, when 'the end of medium specificity' is much trumpeted, it might be useful to return to the historical imbrication of British film and television, to explore the possibility that medium specificity may be more nationally specific than much contemporary theorisation suggests, and to consider some of the relationships between film and television manifest at a textual level in two recent films, Red Road (2006) and The Unloved (2009).

    Advanced Technology Large-Aperture Space Telescope (ATLAST): A Technology Roadmap for the Next Decade

    The Advanced Technology Large-Aperture Space Telescope (ATLAST) is a set of mission concepts for the next generation of UVOIR space observatory, with a primary aperture diameter in the 8-m to 16-m range, that will allow us to perform some of the most challenging observations to answer some of our most compelling questions, including "Is there life elsewhere in the Galaxy?" We have identified two different telescope architectures, with similar optical designs, that span the range of viable technologies: a telescope with a monolithic primary mirror, and two variations of a telescope with a large segmented primary mirror. This approach provides several pathways to realizing the mission, which will be narrowed to one as our technology development progresses. The concepts draw on heritage from the HST and JWST designs, but also depart significantly from them to minimize complexity, mass, or both. Our report provides details on the mission concepts, shows the extraordinary scientific progress they would enable, and describes the most important technology development items: the mirrors, the detectors, and the high-contrast imaging technologies, whether internal to the observatory or using an external occulter. Experience with JWST has shown that determined competitors, motivated by the development contracts and flight opportunities of a new observatory, are capable of achieving huge advances in technical and operational performance while keeping construction costs on the same scale as prior great observatories.

    Comment: 22 pages; RFI submitted to the Astro2010 Decadal Committee
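
    The science case rests heavily on aperture, since angular resolution scales with λ/D. As a back-of-envelope illustration (not taken from the report itself), the Rayleigh diffraction limit at the two ends of the proposed 8-m to 16-m range works out as follows:

```python
import math

def diffraction_limit_mas(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion theta ~= 1.22 * lambda / D, converted to milliarcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3_600_000  # degrees -> milliarcseconds

# visible light (500 nm) at the two ends of the proposed aperture range
for aperture in (8.0, 16.0):
    print(f"D = {aperture:4.1f} m -> {diffraction_limit_mas(500e-9, aperture):.1f} mas")
```

    Doubling the aperture halves the diffraction limit, roughly 16 mas at 8 m versus 8 mas at 16 m, which is the kind of margin that matters for resolving faint structure close to bright host stars.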

    Study protocol: Comparison of different risk prediction modelling approaches for COVID-19 related death using the OpenSAFELY platform

    On March 11th 2020, the World Health Organization characterised COVID-19 as a pandemic. Responses aimed at containing the spread of the virus have relied heavily on policies restricting contact between people. Evolving policies on shielding, and individual choices about restricting social contact, will rely heavily on the perceived risk of poor outcomes from COVID-19. To make informed decisions, both individual and collective, good predictive models are required. For outcomes related to an infectious disease, the performance of any risk prediction model will depend heavily on the underlying prevalence of infection in the population of interest. Incorporating measures of how this changes over time may result in important improvements in prediction model performance. This protocol reports details of a planned study to explore the extent to which incorporating time-varying measures of infection burden improves the quality of risk prediction models for COVID-19 death in a large population of adult patients in England. To achieve this aim, we will compare the performance of different modelling approaches to risk prediction, including static cohort approaches typically used in chronic disease settings and landmarking approaches incorporating time-varying measures of infection prevalence and policy change, using COVID-19 related deaths data linked to longitudinal primary care electronic health records within the OpenSAFELY secure analytics platform.
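
    As a rough sketch of how landmarking differs from a static cohort, the following stacks one sub-study per landmark date, attaching a time-updated infection-prevalence covariate at each landmark. All table and column names (`cohort`, `prevalence`, `death_date`, `region`) are illustrative assumptions, not the OpenSAFELY schema:

```python
import pandas as pd

def build_landmark_frame(cohort: pd.DataFrame,
                         prevalence: pd.DataFrame,
                         landmarks: list[pd.Timestamp],
                         horizon_days: int = 28) -> pd.DataFrame:
    """Stack one sub-study per landmark date: patients still at risk at the
    landmark, a time-updated infection-prevalence covariate, and a binary
    outcome for death within the fixed horizon. Column names are illustrative."""
    frames = []
    for s in landmarks:
        # patients still alive (at risk) at the landmark date
        at_risk = cohort[cohort["death_date"].isna() | (cohort["death_date"] > s)].copy()
        at_risk["landmark"] = s
        # time-updated proxy: regional infection prevalence on the landmark date
        at_risk = at_risk.merge(
            prevalence.loc[prevalence["date"] == s, ["region", "prevalence"]],
            on="region", how="left")
        end = s + pd.Timedelta(days=horizon_days)
        at_risk["event_28d"] = ((at_risk["death_date"] > s) &
                                (at_risk["death_date"] <= end)).astype(int)
        frames.append(at_risk)
    return pd.concat(frames, ignore_index=True)
```

    A single model fitted to the stacked frame can then use prevalence as a time-varying predictor, which the static cohort design has no way to exploit.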

    Perspectives on in situ Sensors for Ocean Acidification Research

    As ocean acidification (OA) sensor technology develops and improves, in situ deployment of such sensors is becoming more widespread. However, the scientific value of these data depends on the development and application of best practices for calibration, validation, and quality assurance, as well as on further development and optimization of the measurement technologies themselves. Here, we summarize the results of a 2-day workshop on OA sensor best practices held in February 2018 in Victoria, British Columbia, Canada, drawing on the collective experience and perspectives of the participants. The workshop on in situ Sensors for OA Research was organized around three basic questions: 1) What are the factors limiting the precision, accuracy and reliability of sensor data? 2) What can we do to facilitate the quality assurance/quality control (QA/QC) process and optimize the utility of these data? and 3) What sort of data or metadata are needed for these data to be most useful to future users? A synthesis of the workshop participants' discussion of these questions, and the conclusions drawn, is presented in this paper.

    Predicting COVID-19 related death using the OpenSAFELY platform

    Objectives: To compare approaches for obtaining relative and absolute estimates of risk of 28-day COVID-19 mortality for adults in the general population of England in the context of changing levels of circulating infection.
    Design: Three designs were compared: (A) a case-cohort design, which does not explicitly account for the time-changing prevalence of COVID-19 infection; (B) 28-day landmarking, a series of sequential overlapping sub-studies incorporating time-updating proxy measures of the prevalence of infection; and (C) daily landmarking. Regression models were fitted to predict 28-day COVID-19 mortality.
    Setting: Working on behalf of NHS England, we used clinical data from adult patients from all regions of England held in the TPP SystmOne electronic health record system, linked to Office for National Statistics (ONS) mortality data, using the OpenSAFELY platform.
    Participants: Eligible participants were adults aged 18 or over, registered at a general practice using TPP software on 1st March 2020 with recorded sex, postcode and ethnicity. 11,972,947 individuals were included, and 7,999 participants experienced a COVID-19 related death. The study period lasted 100 days, ending 8th June 2020.
    Predictors: A range of demographic characteristics and comorbidities were used as potential predictors. Local infection prevalence was estimated with three proxies: a modelled estimate based on local prevalence and other key factors; the rate of COVID-19 related A&E attendances; and the rate of suspected COVID-19 cases in primary care.
    Main outcome measures: COVID-19 related death.
    Results: All models discriminated well between patients who did and did not experience COVID-19 related death, with C-statistics ranging from 0.92 to 0.94. Accurate estimates of absolute risk required data on local infection prevalence, with modelled estimates providing the best performance.
    Conclusions: Reliable estimates of absolute risk need to incorporate the changing local prevalence of infection. Simple models can provide very good discrimination and may simplify the implementation of risk prediction tools in practice.
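
    For a fixed 28-day horizon, the C-statistic used above to summarise discrimination is equivalent (given essentially complete follow-up) to the area under the ROC curve. A minimal sketch on synthetic data, not the study's code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 100_000
predicted_risk = rng.beta(1, 50, size=n)   # hypothetical 28-day risk scores
died = rng.random(n) < predicted_risk      # synthetic outcome tied to the score

# C-statistic: probability a randomly chosen death was scored higher than a survivor
print(f"C-statistic: {roc_auc_score(died, predicted_risk):.3f}")
```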

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of the 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-week horizon three to five times larger than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
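
    The "probabilistic error" above refers to scoring quantile forecasts; the hub's evaluations used the weighted interval score (WIS). A minimal sketch of the standard interval-based formulation (following Bracher et al.; not the hub's own evaluation code):

```python
def interval_score(lower: float, upper: float, alpha: float, y: float) -> float:
    """Interval score for a central (1 - alpha) prediction interval."""
    return ((upper - lower)
            + (2 / alpha) * max(lower - y, 0.0)
            + (2 / alpha) * max(y - upper, 0.0))

def weighted_interval_score(quantiles: dict[float, float], y: float) -> float:
    """WIS from a dict mapping quantile level -> predicted value.
    Assumes a median (0.5) plus symmetric pairs (q, 1 - q)."""
    median = quantiles[0.5]
    alphas = sorted(2 * q for q in quantiles if q < 0.5)
    total = 0.5 * abs(y - median)
    for a in alphas:
        total += (a / 2) * interval_score(quantiles[a / 2], quantiles[1 - a / 2], a, y)
    return total / (len(alphas) + 0.5)

# toy weekly-deaths forecast: median plus 50% and 90% central intervals
forecast = {0.05: 120.0, 0.25: 160.0, 0.5: 180.0, 0.75: 205.0, 0.95: 260.0}
print(weighted_interval_score(forecast, y=210.0))  # lower is better
```

    Averaging WIS across locations and weeks, and comparing against the naïve baseline, yields the kind of relative-skill comparison described above.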

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV, assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.