78 research outputs found

    Promoting the Everyday: Pro-Sharia Advocacy and Public Relations in Ontario, Canada’s “Sharia Debate”

    Why, in the midst of public debates related to religion, are unrepresentative orthodox perspectives often positioned as illustrative of a religious tradition? How can more representative voices be encouraged? Political theorist Anne Phillips (2007) suggests that facilitating multi-voiced individual engagements effectively dismantles the monopolies of the most conservative voices, which tend to privilege maleness. In this paper, with reference to the 2003–2005 faith-based arbitration debate in Ontario, Canada, I show how, in practice, Phillips’ approach is unwieldy and does not work well in a sound-bite-necessitating culture. Instead, I argue that the “Sharia Debate” served as a catalyst for mainstream conservative Muslim groups in Ontario to develop public relations apparatuses that better facilitate the perspectives of everyday religious conservatives in the public sphere.

    Rethinking Canadian Discourses of “Reasonable Accommodation”

    This article maps the repercussions of the use of reasonable accommodation, a recent framework referenced inside and outside Canadian courtrooms to respond to religiously framed differences. Drawing on three cases from Ontario and Quebec, we trace how the notion of reasonable accommodation—now invoked by the media and in public discourse—has moved beyond its initial legal moorings. After outlining the cases, we critique the framework for its tendency to create theological arbitrators who assess reasonableness, and for the way it rigidifies “our values” in hierarchical ways. We propose an alternative model that focuses on navigation and negotiation and that emphasizes belonging, inclusion, and lived religion.

    Bolide Airbursts as a Seismic Source for the 2018 Mars InSight Mission

    In 2018, NASA will launch InSight, a single-station suite of geophysical instruments designed to characterise the martian interior. We investigate the seismo-acoustic signal generated by a bolide entering the martian atmosphere and exploding in a terminal airburst, and assess this phenomenon as a potential observable for the SEIS seismic payload. Terrestrial analogue data from four recent events are used to identify diagnostic airburst characteristics in both the time and frequency domains. In order to estimate a potential number of detectable events for InSight, we first model the impactor source population from observations made on the Earth, scaled for planetary radius, entry velocity and source density. We go on to calculate a range of potential airbursts from the larger incident impactor population. We estimate there to be ∼1000 events of this nature per year on Mars. To then derive a detectable number of airbursts for InSight, we scale this number according to atmospheric attenuation, air-to-ground coupling inefficiencies and instrument capability for SEIS. We predict between 10 and 200 detectable events per year for InSight.
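
The detection estimate described in this abstract is a chain of multiplicative scalings: an Earth-based event rate scaled to Mars, then reduced by coupling and sensitivity losses. The sketch below illustrates that chain; every constant in it is an assumption chosen only so the outputs land in the ranges quoted above, not a value from the paper.

```python
# Hedged, order-of-magnitude sketch (not the authors' model): scale a
# terrestrial airburst rate to Mars by relative collecting area and impactor
# flux, then apply detection losses. All constants are illustrative.

def mars_airburst_rate(earth_rate_per_year, radius_ratio, flux_ratio):
    """Scale an Earth airburst rate by collecting area (radius squared)
    and by the relative impactor flux density at Mars' orbit."""
    return earth_rate_per_year * radius_ratio ** 2 * flux_ratio

def detectable_events(total_events, coupling_efficiency, detection_fraction):
    """Down-weight the total by air-to-ground coupling losses and by the
    fraction of coupled signals above the instrument detection threshold."""
    return total_events * coupling_efficiency * detection_fraction

total = mars_airburst_rate(
    earth_rate_per_year=2500.0,  # assumed terrestrial airburst rate
    radius_ratio=0.532,          # Mars/Earth radius ratio
    flux_ratio=1.4,              # assumed flux enhancement near Mars
)
print(round(total))                        # on the order of 1000 per year
print(detectable_events(total, 0.5, 0.2))  # falls in the quoted 10-200 range
```

The real calculation also accounts for entry velocity and atmospheric attenuation as a function of distance, which this sketch folds into the two loss factors.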

    Integrated multi-level quality control for proteomic profiling studies using mass spectrometry

    BACKGROUND: Proteomic profiling using mass spectrometry (MS) is one of the most promising methods for the analysis of complex biological samples such as urine, serum and tissue for biomarker discovery. Such experiments are often conducted using MALDI-TOF (matrix-assisted laser desorption/ionisation time-of-flight) and SELDI-TOF (surface-enhanced laser desorption/ionisation time-of-flight) MS. Using such profiling methods it is possible to identify changes in protein expression that differentiate disease states, and to identify individual proteins or patterns that may be useful as potential biomarkers. However, quality control (QC) processes that reliably identify low-quality spectra, so that such data can be removed before further analysis, are often overlooked. In this paper we describe rigorous methods for assessing the quality of spectral data. These procedures are presented in a user-friendly, web-based program. The data obtained post-QC are then examined using variance components analysis to quantify the amount of variance due to some of the factors in the experimental design. RESULTS: Using data from a SELDI profiling study of serum from patients with different levels of renal function, we show how the algorithms described in this paper may be used to detect systematic variability within and between sample replicates, pooled samples, and SELDI chips and spots. Manual inspection of the spectral data identified as being of poor quality confirmed the efficacy of the algorithms. Variance components analysis demonstrated the relatively small amount of technical variance attributable to day of profile generation and experimental array. CONCLUSION: Using the techniques described in this paper it is possible to reliably detect poor-quality data within proteomic profiling experiments undertaken by MS.
The removal of these spectra at the initial stage of analysis substantially improves confidence in putative biomarker identification and allows inter-experimental comparisons to be carried out more reliably.
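
The paper's QC procedures operate on the spectra themselves; as a minimal illustration of the general idea of flagging low-quality spectra before analysis, here is a common robust-outlier heuristic on total ion current (TIC). The function name, the MAD-based rule, and the toy data are assumptions for illustration, not the authors' algorithms.

```python
# Hedged sketch: flag spectra whose total ion current (TIC) is a robust
# outlier relative to the batch median, using the median absolute deviation
# (MAD). This is a generic QC heuristic, not the paper's method.
from statistics import median

def flag_low_quality(spectra, k=3.0):
    """Return indices of spectra whose TIC deviates by more than k MADs
    from the batch median TIC. `spectra` is a list of intensity lists."""
    tics = [sum(intensities) for intensities in spectra]
    med = median(tics)
    mad = median(abs(t - med) for t in tics) or 1e-12  # guard against zero MAD
    return [i for i, t in enumerate(tics) if abs(t - med) > k * mad]

# Toy batch: the last "spectrum" has almost no signal and should be flagged.
batch = [[10, 12, 11], [9, 11, 10], [10, 10, 12], [0.1, 0.2, 0.1]]
print(flag_low_quality(batch))  # -> [3]
```

In practice a QC pipeline would combine several such criteria (baseline level, noise estimates, peak counts) rather than TIC alone.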

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    COVID-19 trajectories among 57 million adults in England: a cohort study using electronic health records

    BACKGROUND: Updatable estimates of COVID-19 onset, progression, and trajectories underpin pandemic mitigation efforts. To identify and characterise disease trajectories, we aimed to define and validate ten COVID-19 phenotypes from nationwide linked electronic health records (EHR) using an extensible framework. METHODS: In this cohort study, we used eight linked National Health Service (NHS) datasets for people in England alive on Jan 23, 2020. Data on COVID-19 testing, vaccination, primary and secondary care records, and death registrations were collected until Nov 30, 2021. We defined ten COVID-19 phenotypes reflecting clinically relevant stages of disease severity and encompassing five categories: positive SARS-CoV-2 test, primary care diagnosis, hospital admission, ventilation modality (four phenotypes), and death (three phenotypes). We constructed patient trajectories illustrating transition frequency and duration between phenotypes. Analyses were stratified by pandemic waves and vaccination status. FINDINGS: Among 57 032 174 individuals included in the cohort, 13 990 423 COVID-19 events were identified in 7 244 925 individuals, equating to an infection rate of 12·7% during the study period. Of 7 244 925 individuals, 460 737 (6·4%) were admitted to hospital and 158 020 (2·2%) died. Of 460 737 individuals who were admitted to hospital, 48 847 (10·6%) were admitted to the intensive care unit (ICU), 69 090 (15·0%) received non-invasive ventilation, and 25 928 (5·6%) received invasive ventilation. Among 384 135 patients who were admitted to hospital but did not require ventilation, mortality was higher in wave 1 (23 485 [30·4%] of 77 202 patients) than wave 2 (44 220 [23·1%] of 191 528 patients), but remained unchanged for patients admitted to the ICU. Mortality was highest among patients who received ventilatory support outside of the ICU in wave 1 (2569 [50·7%] of 5063 patients). 
15 486 (9·8%) of 158 020 COVID-19-related deaths occurred within 28 days of the first COVID-19 event without a COVID-19 diagnosis on the death certificate. 10 884 (6·9%) of 158 020 deaths were identified exclusively from mortality data, with no previous COVID-19 phenotype recorded. We observed longer patient trajectories in wave 2 than in wave 1. INTERPRETATION: Our analyses illustrate the wide spectrum of disease trajectories, as shown by differences in incidence, survival, and clinical pathways. We have provided a modular analytical framework that can be used to monitor the impact of the pandemic and generate evidence of clinical and policy relevance using multiple EHR sources. FUNDING: British Heart Foundation Data Science Centre, led by Health Data Research UK.
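
Trajectory construction of the kind described here reduces, at its core, to tabulating transitions between ordered phenotype events per patient. A minimal sketch of that tabulation follows; the phenotype labels and the toy patient records are invented for illustration and are not the study's actual phenotype definitions.

```python
# Hedged sketch: count (from, to) phenotype transitions across patients,
# the tabulation underlying a trajectory diagram. Labels are illustrative.
from collections import Counter

def transition_counts(trajectories):
    """trajectories: list of per-patient phenotype sequences, time-ordered.
    Returns a Counter keyed by (from_phenotype, to_phenotype) pairs."""
    counts = Counter()
    for seq in trajectories:
        counts.update(zip(seq, seq[1:]))  # consecutive pairs in each record
    return counts

patients = [
    ["positive_test", "hospitalised", "ICU", "death"],
    ["positive_test", "hospitalised", "discharged"],
    ["positive_test", "primary_care_diagnosis"],
]
print(transition_counts(patients).most_common(1))
```

The study additionally records transition durations and stratifies by wave and vaccination status, which would extend this to (from, to, elapsed-time) records rather than bare pair counts.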

    The role of networks to overcome large-scale challenges in tomography: the non-clinical tomography users research network

    Our ability to visualize and quantify the internal structures of objects via computed tomography (CT) has fundamentally transformed science. As tomographic tools have become more broadly accessible, researchers across diverse disciplines have embraced the ability to investigate the 3D structure-function relationships of an enormous array of items. Whether studying organismal biology, animal models for human health, iterative manufacturing techniques, experimental medical devices, engineering structures, geological and planetary samples, prehistoric artifacts, or fossilized organisms, computed tomography has led to extensive methodological and basic sciences advances and is now a core element in science, technology, engineering, and mathematics (STEM) research and outreach toolkits. Tomorrow's scientific progress is built upon today's innovations. In our data-rich world, this requires access not only to publications but also to supporting data. Reliance on proprietary technologies, combined with the varied objectives of diverse research groups, has resulted in a fragmented tomography-imaging landscape, one that is functional at the individual lab level yet lacks the standardization needed to support efficient and equitable exchange and reuse of data. Developing standards and pipelines for the creation of new and future data, which can also be applied to existing datasets, is a challenge that becomes increasingly difficult as the amount and diversity of legacy data grows. Global networks of CT users have proved an effective approach to addressing this kind of multifaceted challenge across a range of fields. Here we describe ongoing efforts to address barriers to recently proposed FAIR (Findability, Accessibility, Interoperability, Reuse) and open science principles by assembling interested parties from research and education communities, industry, publishers, and data repositories to approach these issues jointly in a focused, efficient, and practical way.
By outlining the benefits of networks generally, and drawing on examples from efforts by the Non-Clinical Tomography Users Research Network (NoCTURN) specifically, we illustrate how standardization of data and metadata for reuse can foster interdisciplinary collaborations and create new opportunities for future-looking, large-scale data initiatives.

    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

    Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published May 2009–July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation, and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised; 2) which methods were used to synthesise test accuracy evidence, and how the results informed the economic model; 3) how, and whether, threshold effects were explored; 4) how the potential dependency between multiple tests in a pathway was accounted for; and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports, and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results. 7/22 reports evaluated tests potentially suitable for primary care, but the majority found limited evidence on test accuracy in primary care settings.
Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests, and the impact of multiple diagnostic tests.
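
The bivariate and HSROC models referenced above pool sensitivity and specificity jointly with random effects; fitting them properly requires a mixed-model or Bayesian framework. As a far simpler illustration of the logit-scale pooling those models build on, the sketch below pools each proportion independently with fixed-effect inverse-variance weights. The study counts are invented, and this simplification ignores the between-study correlation that motivates the bivariate model in the first place.

```python
# Hedged sketch: fixed-effect inverse-variance pooling of proportions on the
# logit scale, a deliberate simplification of the bivariate/HSROC approach.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_proportion(events, totals):
    """Pool per-study proportions (events/totals) on the logit scale using
    inverse-variance weights, with a 0.5 continuity correction."""
    ests, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)                    # continuity-corrected
        var = 1 / (e + 0.5) + 1 / (n - e + 0.5)      # approx. logit variance
        ests.append(logit(p))
        weights.append(1 / var)
    pooled = sum(w * x for w, x in zip(weights, ests)) / sum(weights)
    return inv_logit(pooled)

# Invented data: true positives / diseased, true negatives / non-diseased.
sens = pooled_proportion([45, 30, 80], [50, 40, 90])
spec = pooled_proportion([90, 70, 150], [100, 80, 160])
print(f"pooled sensitivity ~ {sens:.2f}, specificity ~ {spec:.2f}")
```

A bivariate random-effects fit would instead estimate the joint distribution of (logit sensitivity, logit specificity) across studies, capturing the trade-off between the two that threshold variation induces.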
