
    (G)hosting television: Ghostwatch and its medium

    This article’s subject is Ghostwatch (BBC, 1992), a drama broadcast on Halloween night which adopted the rhetoric of live non-fiction programming, and which attracted controversy and ultimately censure from the Broadcasting Standards Council. In what follows, we argue that Ghostwatch must be understood as a televisually-specific artwork and artefact. We discuss the programme’s ludic relationship with some key features of television during what Ellis (2000) has termed its era of ‘availability’, principally liveness, mass simultaneous viewing, and the flow of the television super-text. We trace the programme’s television-specific historicity whilst acknowledging its allusions and debts to other media (most notably film and radio). We explore the sophisticated ways in which Ghostwatch’s visual grammar and vocabulary, and its deployment of ‘broadcast talk’ (Scannell 1991), variously ape, comment upon and subvert the rhetoric of factual programming, and the ends to which these strategies are put. We hope that these arguments collectively demonstrate the aesthetic and historical significance of Ghostwatch and identify its relationship to its medium and that medium’s history. We offer the programme as an historically-reflexive artefact, and as an exemplary instance of the work of art in television’s age of broadcasting, liveness and co-presence.

    Combining Information from Two Surveys to Estimate County-Level Prevalence Rates of Cancer Risk Factors and Screening

    Cancer surveillance requires estimates of the prevalence of cancer risk factors and screening for small areas such as counties. Two popular data sources are the Behavioral Risk Factor Surveillance System (BRFSS), a telephone survey conducted by state agencies, and the National Health Interview Survey (NHIS), an area probability sample survey conducted through face-to-face interviews. Both data sources have advantages and disadvantages. The BRFSS is a larger survey, and almost every county is included in the survey; but it has lower response rates, as is typical with telephone surveys, and it does not include subjects who live in households with no telephones. On the other hand, the NHIS is a smaller survey, with the majority of counties not included; but it includes both telephone and non-telephone households and has higher response rates. A preliminary analysis shows that the distributions of cancer screening and risk factors are different for telephone and non-telephone households. Thus, information from the two surveys may be combined to address both nonresponse and noncoverage errors. A hierarchical Bayesian approach that combines information from both surveys is used to construct county-level estimates. The proposed model incorporates potential noncoverage and nonresponse biases in the BRFSS as well as complex sample design features of both surveys. A Markov chain Monte Carlo method is used to simulate draws from the joint posterior distribution of unknown quantities in the model based on the design-based direct estimates and county-level covariates. Yearly prevalence estimates at the county level for 49 states, as well as for the entire state of Alaska and the District of Columbia, are developed for six outcomes using BRFSS and NHIS data from the years 1997-2000. The outcomes include smoking and use of common cancer screening procedures. The NHIS/BRFSS combined county-level estimates are substantially different from those based on BRFSS alone.
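    The combining idea above can be illustrated with a deliberately simplified sketch. This is not the authors' full hierarchical Bayesian model (which uses MCMC over design-based direct estimates, survey design features, and county-level covariates); it only shows the basic intuition of pooling two direct survey estimates by their precision and then shrinking toward a prior mean. All names and values are hypothetical.

    ```python
    def pool_and_shrink(est_a, var_a, est_b, var_b, prior_mean, prior_var):
        # Inverse-variance (precision-weighted) pooling of two direct
        # survey estimates of the same county-level prevalence...
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        pooled = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        pooled_var = 1.0 / (w_a + w_b)
        # ...then shrinkage toward a prior mean (e.g. a covariate-based
        # regression prediction), with more shrinkage when the pooled
        # estimate is noisy relative to the prior.
        k = prior_var / (prior_var + pooled_var)  # shrinkage factor in [0, 1]
        return prior_mean + k * (pooled - prior_mean)

    # Hypothetical county: BRFSS says 20% +/- var 0.01, NHIS says 30% +/- var 0.01.
    combined = pool_and_shrink(0.2, 0.01, 0.3, 0.01, prior_mean=0.25, prior_var=1e6)
    ```

    With a very diffuse prior (as above) the result is essentially the precision-weighted average of the two surveys; with a tight prior it collapses toward the covariate-based prediction, which is the behaviour a hierarchical model exhibits for counties with little data.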

    The Conflict of Generations


    Gastrointestinal bleeding after liver transplantation

    To investigate the causes of gastrointestinal bleeding (GIB) and its impact on patient and graft survival after orthotopic liver transplantation (OLTx), the first 1000 consecutive OLTx using tacrolimus were studied. Our patient population consisted of 834 adults. The bleeding episodes of patients with GIB (n=74) were analyzed, and patients without GIB (n=760) were used as controls. The mean age, gender, and United Network for Organ Sharing status were similar in both groups. Endoscopy was done in 73 patients with GIB and yielded a diagnosis in 60 patients (82.2%): 39 with a single, and 21 with multiple GIB episodes. In the remaining 13 patients (17.8%), the bleeding source was not identified. Of 92 GIB episodes with endoscopic diagnoses, ulcers (n=25) were the most common cause of bleeding, followed by enteritis (n=24), portal hypertensive lesions (n=15), and Roux-en-Y bleeds and other miscellaneous events (n=28). The majority (73%) of the GIB episodes occurred during the first postoperative trimester. The patient and graft survival rates were statistically lower in the GIB group compared with the control group. The adjusted relative risk of mortality and graft failure was increased by bleeding. In summary, the cumulative incidence of GIB was 8.9%. Endoscopy identified the source of GIB in most cases. Ulcers were the most common cause of GIB after OLTx. The onset of GIB after OLTx was an indicator of decreased patient and graft survival.

    Second trimester inflammatory and metabolic markers in women delivering preterm with and without preeclampsia.

    Objective: Inflammatory and metabolic pathways are implicated in preterm birth and preeclampsia. However, studies rarely compare second trimester inflammatory and metabolic markers between women who deliver preterm with and without preeclampsia. Study design: A sample of 129 women (43 with preeclampsia) with preterm delivery was obtained from an existing population-based birth cohort. Banked second trimester serum samples were assayed for 267 inflammatory and metabolic markers. Backwards-stepwise logistic regression models were used to calculate odds ratios. Results: Higher 5-α-pregnan-3β,20α-diol disulfate, and lower 1-linoleoylglycerophosphoethanolamine and octadecanedioate, predicted increased odds of preeclampsia. Conclusions: Among women with preterm births, those who developed preeclampsia differed with respect to metabolic markers. These findings point to potential etiologic underpinnings for preeclampsia as a precursor to preterm birth.
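    The odds ratios reported in abstracts like this one come from exponentiating fitted logistic-regression coefficients. The sketch below is not the authors' backwards-stepwise procedure over 267 markers; it only illustrates, with simulated data and hypothetical names, how a fitted coefficient translates into an odds ratio.

    ```python
    import numpy as np

    def logistic_odds_ratios(X, y, iters=25):
        # Fit a logistic regression by Newton-Raphson (IRLS) and return
        # the odds ratio exp(beta) for each predictor column of X.
        X1 = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
        y = np.asarray(y, dtype=float)
        beta = np.zeros(X1.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X1 @ beta))   # predicted probabilities
            W = p * (1.0 - p)                      # IRLS weights
            grad = X1.T @ (y - p)                  # score vector
            hess = X1.T @ (X1 * W[:, None])        # observed information
            beta += np.linalg.solve(hess, grad)    # Newton step
        return np.exp(beta[1:])                    # odds ratios per predictor

    # Simulated marker with a true log-odds coefficient of 1.0 (OR ~ 2.7).
    rng = np.random.default_rng(0)
    marker = rng.standard_normal((2000, 1))
    prob = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * marker[:, 0])))
    outcome = (rng.random(2000) < prob).astype(float)
    odds_ratio = logistic_odds_ratios(marker, outcome)[0]
    ```

    An odds ratio above 1 (as here) corresponds to a marker whose higher values predict increased odds of the outcome; below 1, decreased odds.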

    Comparing benefits from many possible computed tomography lung cancer screening programs: Extrapolating from the National Lung Screening Trial using comparative modeling

    Background: The National Lung Screening Trial (NLST) demonstrated that in current and former smokers aged 55 to 74 years, with at least 30 pack-years of cigarette smoking history and who had quit smoking no more than 15 years ago, 3 annual computed tomography (CT) screens reduced lung cancer-specific mortality by 20% relative to 3 annual chest X-ray screens. We compared the benefits achievable with 576 lung cancer screening programs that varied CT screen number and frequency, ages of screening, and eligibility based on smoking. Methods and Findings: We used five independent microsimulation models with lung cancer natural history parameters previously calibrated to the NLST to simulate life histories of the US cohort born in 1950 under all 576 programs. 'Efficient' (within model) programs prevented the greatest number of lung cancer deaths, compared to no screening, for a given number of CT screens. Among 120 'consensus efficient' (identified as efficient across models) programs, the average starting age was 55 years, the stopping age was 80 or 85 years, the average minimum pack-years was 27, and the maximum years since quitting was 20. Among consensus efficient programs, 11% to 40% of the cohort was screened, and 153 to 846 lung cancer deaths were averted per 100,000 people. In all models, annual screening based on age and smoking eligibility in NLST was not efficient; continuing screening to age 80 or 85 years was more efficient. Conclusions: Consensus results from five models identified a set of efficient screening programs that include annual CT lung cancer screening using criteria like NLST eligibility but extended to older ages. Guidelines for screening should also consider harms of screening and individual patient characteristics.
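    The 'efficient program' criterion above is a dominance comparison: a program is efficient if no other program averts at least as many deaths with fewer CT screens. A minimal sketch of that selection step, with hypothetical program names and numbers (the actual analysis compared 576 programs across five calibrated microsimulation models):

    ```python
    def efficient_programs(programs):
        # programs: list of (name, ct_screens, deaths_averted) tuples.
        # A program is on the efficiency frontier unless some other
        # program averts more deaths with no more screens, or at least
        # as many deaths with strictly fewer screens.
        frontier = []
        for name, screens, deaths in programs:
            dominated = any(
                (s2 <= screens and d2 > deaths) or (s2 < screens and d2 >= deaths)
                for _, s2, d2 in programs
            )
            if not dominated:
                frontier.append(name)
        return frontier

    # Hypothetical programs: (name, CT screens per 100k, deaths averted per 100k).
    candidates = [("A", 100, 10), ("B", 100, 8), ("C", 200, 15), ("D", 300, 12)]
    kept = efficient_programs(candidates)  # B and D are dominated
    ```

    Here B is dominated by A (same screens, fewer deaths averted) and D by C (more screens, fewer deaths averted), so only A and C lie on the frontier.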

    Rapid automatic segmentation of abnormal tissue in late gadolinium enhancement cardiovascular magnetic resonance images for improved management of long-standing persistent atrial fibrillation

    Background: Atrial fibrillation (AF) is the most common heart rhythm disorder. For late gadolinium enhancement cardiovascular magnetic resonance (LGE CMR) to improve AF management, accurate enhancement segmentation must be readily available. However, computer-aided segmentation of enhancement in LGE CMR of AF remains an open problem. Additionally, the number of centres that have reported successful application of LGE CMR to guide clinical AF strategies remains low, and the diagnostic ability of LGE CMR for AF is still debated. The aim of this study is to propose a method that reliably distinguishes enhanced (abnormal) from non-enhanced (healthy) tissue within the left atrial wall of LGE CMR data-sets (pre-ablation and 3 months post-ablation) from long-standing persistent AF patients studied at our centre. Methods: Enhancement segmentation was achieved by employing thresholds benchmarked against the statistics of the whole left atrial blood-pool (LABP). A test-set cross-validation mechanism was applied to determine the input feature representation and algorithm that best predict enhancement threshold levels. Results: Global normalized intensity threshold levels T_PRE = 1.25 and T_POST = 1.625 were found to segment enhancement in data-sets acquired pre-ablation and at 3 months post-ablation, respectively. The segmentation results were corroborated by visual inspection of LGE CMR brightness levels and one endocardial bipolar voltage map. The measured extent of pre-ablation fibrosis fell within the normal range for the specific arrhythmia phenotype. 3D volume renderings of segmented post-ablation enhancement emulated the expected ablation lesion patterns. By comparing our technique with related approaches that proposed different threshold levels (although they also relied on reference regions within the LABP) for segmenting enhancement in LGE CMR data-sets of AF patients, we show that the cut-off levels employed by other centres may not be usable for clinical studies performed in our centre. Conclusions: The proposed technique has great potential for successful employment in AF management within our centre, and provides a highly desirable validation of the LGE CMR technique for AF studies. Inter-centre differences in CMR acquisition protocol and image analysis strategy inevitably impede the selection of a universally optimal algorithm for segmentation of enhancement in AF studies.
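    The core thresholding step can be sketched as follows. This is an assumption-laden simplification: the paper benchmarks thresholds against blood-pool statistics, and normalizing wall intensities by the LABP mean is this sketch's choice of reference, not a confirmed detail of the method; all names and numbers are illustrative.

    ```python
    import numpy as np

    def segment_enhancement(wall_intensities, bloodpool_intensities, threshold):
        # Normalize atrial-wall voxel intensities by a reference statistic
        # of the left atrial blood pool (here: its mean), then flag voxels
        # above a global normalized threshold (e.g. ~1.25 pre-ablation,
        # ~1.625 post-ablation, per the study's reported levels).
        reference = np.mean(bloodpool_intensities)
        normalized = np.asarray(wall_intensities, dtype=float) / reference
        return normalized > threshold  # boolean mask of 'enhanced' voxels

    # Hypothetical wall voxels against a blood pool averaging 100.
    mask = segment_enhancement([100, 130, 170], [90, 110], threshold=1.25)
    ```

    A voxel at 1.3 times the blood-pool mean is flagged under the pre-ablation threshold but not under the stricter post-ablation one, which matches the abstract's point that a single universal cut-off does not transfer between acquisition settings.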