
    (Special Section, Hymns Beyond the Congregation II): Spiritual Concert-Fundraisers, Singing Conventions, and Cherokee Language Learning Academies: Vernacular Southern Hymnbooks in Noncongregational Settings

    Noncongregational settings were integral to hymnody in the postbellum settler colonial context of the southern United States during the late nineteenth and early twentieth centuries. The incorporation of hymn singing into a wide range of noncongregational settings served Black, white, and Native populations in navigating unsettled racial dynamics during this period across the US South and its diasporas. This essay features three case studies examining hymn collections intended or repurposed for a range of noncongregational uses: spiritual collections connected with the performing ensembles of Black institutions, a shape-note songbook that attempted to bridge singing convention and congregational contexts, and a Cherokee-language hymnal being repurposed today for community singing that facilitates language learning. Features of these music books’ bibliographic forms, and elements of their musical-stylistic contents, facilitated their use in communal settings. We argue that taking noncongregational contexts seriously helps to unpack hymns’ connections to race and place, reveal relationships between hymnbooks’ music genre affiliations and formats and their musical-religious functions, and illuminate latent pedagogical and research opportunities. Our case studies expand the temporality associated with noncongregational hymn singing and highlight the value of bibliography as a methodological approach to assessing hymn singing’s diverse contexts.

    Flexible Bayesian methods for archaeological dating.

    Statistical models for the calibration of both independent and related groups of radiocarbon determinations are now well established, and there exist a number of software packages, such as BCal, OxCal and CALIB, that can perform the necessary calculations to implement them. When devising new statistical models it is important to understand the motivations and needs of the archaeologists. When researchers select samples for radiocarbon dating, they are often not interested in when a specific plant or animal died. Instead, they want to use the radiocarbon evidence to help them learn about the dates of other events which cannot be dated directly but which are of greater historical or archaeological significance (e.g. the founding of a site). Our initial research focuses on formulating prior distributions that reliably represent a priori information relating to the rate of deposition of dateable material within an archaeological time period or phase. In archaeology, a phase is defined to be a collection of excavated material (contexts or layers) bounded early and late by events that are of archaeological importance. Current software for estimating boundary dates allows for only one type of a priori distribution, which assumes that material suitable for dating was deposited at a uniform rate between the start and end points of the phase. Although this model has been useful for many real problems, researchers have become increasingly aware of its limitations. We therefore propose a family of alternative prior models (with properties tailored to particular problems within archaeological research) which includes the uniform as a special case and allows for more realistic and robust modelling of the deposition process. We illustrate, via two case studies, the difference in archaeological conclusions drawn from the data when implementing both uniform and non-uniform prior deposition models. The second area of research, towards which we take the first steps, is spatio-temporal modelling of archaeological calibration problems. This area of research is of particular interest to those studying the response of plants and animals, including humans, to climate change. In archaeological problems our temporal information typically arises from radiocarbon dating, which leads to estimated rather than exactly known calendar dates. Many of these problems have some form of spatial structure, yet it is very rare that the spatial structure is formally accounted for. The combination of temporal uncertainty and spatial structure means that we cannot use standard models to tackle archaeological problems of this kind. Alongside this, our knowledge of past landscapes is generally very poor, as they were often very different from modern ones; this limits the amount of spatial detail that can be included in the modelling. In this thesis we aim to make reliable inferences in spatio-temporal problems by carefully devising a model that takes account of the temporal uncertainty as well as incorporating spatial structure, to provide probabilistic solutions to the questions posed. We illustrate the properties of both the conventional models and the spatio-temporal models using a case study relating to the radiocarbon evidence for the Lateglacial reoccupation of NW Europe.
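    The single-determination calibration that all of these models build on can be sketched directly. The Python fragment below is a minimal illustration, not the thesis's method: it grids a posterior over calendar dates against a made-up calibration curve, whereas real analyses would use a published curve such as IntCal20 through BCal, OxCal, or CALIB.

        import numpy as np

        # Synthetic calibration curve on a calendar-date grid (cal BP).
        # A real analysis would use a published curve such as IntCal20;
        # this smooth stand-in exists purely for illustration.
        theta = np.arange(10000, 11000)                       # calendar years BP
        mu = 0.95 * theta + 400 + 30 * np.sin(theta / 50.0)   # curve mean (14C BP)
        sigma_curve = np.full(theta.shape, 15.0)              # curve std dev

        x, sigma_x = 10000.0, 40.0   # lab determination: 10000 +/- 40 14C BP

        # Likelihood of the determination at each candidate calendar date,
        # combining lab error and curve uncertainty in quadrature; with a
        # uniform prior the posterior is just the normalised likelihood.
        var = sigma_x**2 + sigma_curve**2
        log_post = -0.5 * (x - mu) ** 2 / var - 0.5 * np.log(var)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Crude 95% highest-posterior-density region: keep the densest grid
        # points until they hold 95% of the probability mass.
        order = np.argsort(post)[::-1]
        keep = order[np.cumsum(post[order]) <= 0.95]
        print(f"posterior mode: {theta[post.argmax()]} cal BP")
        print(f"approx. 95% HPD: {theta[keep].min()}-{theta[keep].max()} cal BP")

    The priors developed in the thesis replace the uniform prior in this sketch with phase-level deposition models, which is where the real statistical work lies.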

    CT Evaluation by Artificial Intelligence for Atherosclerosis, Stenosis and Vascular Morphology (CLARIFY): A Multi-Center, International Study

    Background: Atherosclerosis evaluation by coronary computed tomography angiography (CCTA) is promising for coronary artery disease (CAD) risk stratification, but it is time consuming and requires high expertise. Artificial intelligence (AI) applied to CCTA for comprehensive CAD assessment may overcome these limitations. We hypothesized that AI-aided analysis allows for rapid, accurate evaluation of vessel morphology and stenosis. Methods: This was a multi-site study of 232 patients undergoing CCTA. Studies were analyzed by an FDA-cleared software service that performs AI-driven coronary artery segmentation and labeling, lumen and vessel wall determination, and plaque quantification and characterization, with comparison to a ground truth of consensus by three L3 readers. CCTAs were analyzed for % maximal diameter stenosis, plaque volume and composition, presence of high-risk plaque, and Coronary Artery Disease Reporting & Data System (CAD-RADS) category. Results: AI performance was excellent for accuracy, sensitivity, specificity, positive predictive value and negative predictive value, as follows: >70% stenosis: 99.7%, 90.9%, 99.8%, 93.3%, 99.9%, respectively; >50% stenosis: 94.8%, 80.0%, 97.0%, 80.0%, 97.0%, respectively. Bland-Altman plots depict agreement between expert-reader and AI-determined maximal diameter stenosis per-vessel (mean difference -0.8%; 95% CI -15.3% to 13.8%) and per-patient (mean difference -2.3%; 95% CI -20.4% to 15.8%). L3 readers and AI agreed within one CAD-RADS category in 228/232 (98.3%) exams on a per-patient basis and 923/924 (99.9%) vessels on a per-vessel basis. There was a wide range of atherosclerosis in the coronary artery territories assessed by AI when stratified by CAD-RADS distribution. Conclusions: An AI-aided approach to CCTA interpretation determines coronary stenosis and CAD-RADS category in close agreement with the consensus of L3 expert readers. A wide range of atherosclerosis was identified through AI.
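    The headline numbers here are standard confusion-matrix statistics plus a within-one-category agreement rate. As a reference for readers reproducing such figures, the Python sketch below computes them on invented toy calls; none of the study's actual data appear, and the function names are ours.

        import numpy as np

        def diagnostic_metrics(truth, pred):
            """Accuracy, sensitivity, specificity, PPV and NPV for binary calls
            (e.g. per-vessel >70% stenosis: AI vs. L3-consensus ground truth)."""
            truth, pred = np.asarray(truth, bool), np.asarray(pred, bool)
            tp = np.sum(truth & pred)
            tn = np.sum(~truth & ~pred)
            fp = np.sum(~truth & pred)
            fn = np.sum(truth & ~pred)
            return {
                "accuracy":    (tp + tn) / (tp + tn + fp + fn),
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv":         tp / (tp + fp),
                "npv":         tn / (tn + fn),
            }

        def within_one_category(reader_cats, ai_cats):
            """Fraction of exams where AI lands within one CAD-RADS category
            (0-5) of the expert reading."""
            r, a = np.asarray(reader_cats), np.asarray(ai_cats)
            return np.mean(np.abs(r - a) <= 1)

        # Toy data only; the study's 232 exams are not reproduced here.
        print(diagnostic_metrics(truth=[1, 0, 0, 1, 0, 0, 0, 1],
                                 pred=[1, 0, 0, 1, 0, 1, 0, 1]))
        print(within_one_category([2, 3, 1, 4], [2, 4, 1, 2]))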

    Reduction in downstream test utilization following introduction of coronary computed tomography in a cardiology practice

    To compare utilization of non-invasive ischemic testing, invasive coronary angiography (ICA), and percutaneous coronary intervention (PCI) procedures before and after the introduction of 64-slice multi-detector row coronary computed tomographic angiography (CCTA) in a large urban primary and consultative cardiology practice. We utilized a review of electronic medical records (NotesMD®) and the electronic practice management system (Megawest®) encompassing a 4-year period from 2004 to 2007 to determine the number of exercise treadmill (TME), supine bicycle exercise echocardiography (SBE), single photon emission computed tomography (SPECT) myocardial perfusion stress imaging (MPI), coronary calcium score (CCS), CCTA, ICA, and PCI procedures performed annually. Test utilization in the 2 years prior to and the 2 years following availability of CCTA was compared. Over the 4-year period reviewed, the annual utilization of ICA decreased 45% (2,083 procedures in 2004 vs. 1,150 procedures in 2007, P < 0.01) and the percentage of ICA cases requiring PCI increased (19% in 2004 vs. 28% in 2007, P < 0.001). SPECT MPI decreased 19% (3,223 in 2004 vs. 2,614 in 2007, P < 0.02) and exercise stress treadmill testing decreased 49% (471 in 2004 vs. 241 in 2007, P < 0.02). Over the same period, there were no significant changes in measures of practice volume (office and hospital) or in the annual incidence of PCI (405 cases in 2004 vs. 326 cases in 2007), but a higher percentage of patients with significant disease underwent PCI (19% in 2004 vs. 29% in 2007, P < 0.01). Implementation of CCTA resulted in a significant decrease in ICA and a corresponding significant increase in the percentage of ICA cases requiring PCI, indicating that CCTA led to more accurate referral for ICA. The reduction in unnecessary ICA is associated with avoidance of the potential morbidity and mortality of invasive diagnostic testing, reduction of downstream SPECT MPI and TME, and substantial savings in health care dollars.
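    The before/after comparisons reported here are differences in proportions; the abstract does not say which test produced its P values, but a two-proportion z-test is one conventional choice. The Python sketch below back-calculates approximate counts from the reported rates, so the output is illustrative rather than a reproduction of the paper's analysis.

        from math import sqrt
        from scipy.stats import norm

        def two_proportion_ztest(x1, n1, x2, n2):
            """Two-sided z-test for a difference between two proportions."""
            p1, p2 = x1 / n1, x2 / n2
            pooled = (x1 + x2) / (n1 + n2)
            se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            return z, 2 * norm.sf(abs(z))

        # Share of ICA cases proceeding to PCI: 19% of 2,083 in 2004 vs.
        # 28% of 1,150 in 2007. Counts are back-calculated and approximate.
        z, pval = two_proportion_ztest(round(0.19 * 2083), 2083,
                                       round(0.28 * 1150), 1150)
        print(f"z = {z:.2f}, p = {pval:.3g}")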

    Bayesian Analysis of Radiocarbon Dates

    If radiocarbon measurements are to be used at all for chronological purposes, we have to use statistical methods for calibration. The most widely used method of calibration can be seen as a simple application of Bayesian statistics, which uses both the information from the new measurement and information from the 14C calibration curve. In most dating applications, however, we have larger numbers of 14C measurements and we wish to relate those to events in the past. Bayesian statistics provides a coherent framework in which such analysis can be performed and is becoming a core element in many 14C dating projects. This article gives an overview of the main model components used in chronological analysis, their mathematical formulation, and examples of how such analyses can be performed using the latest version of the OxCal software (v4). Many such models can be put together, in a modular fashion, from simple elements, with defined constraints and groupings. In other cases, the commonly used "uniform phase" models might not be appropriate, and ramped, exponential, or normal distributions of events might be more useful. When considering analyses of these kinds, it is useful to be able to run simulations on synthetic data. Methods for performing such tests are discussed here, along with other methods of diagnosing possible problems with statistical models of this kind.
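    The value of simulating synthetic data is easy to see for the "uniform phase" model mentioned above: reading a phase's span naively off the earliest and latest dated events biases it inward, which is one reason boundary dates are treated as explicit model parameters. A minimal Python sketch of that effect, using exact synthetic calendar dates (no radiocarbon error), follows.

        import numpy as np

        rng = np.random.default_rng(42)

        # True phase boundaries (calendar years BP); under the uniform
        # deposition model, events fall uniformly between start and end.
        start, end, n_events = 4300, 4000, 15

        n_sims = 2000
        naive_spans = np.empty(n_sims)
        for i in range(n_sims):
            events = rng.uniform(end, start, n_events)
            # Naive boundary estimate: earliest and latest sampled events.
            naive_spans[i] = events.max() - events.min()

        true_span = start - end
        print(f"true span: {true_span} years")
        print(f"mean naive span: {naive_spans.mean():.1f} years "
              f"(bias {naive_spans.mean() - true_span:+.1f})")
        # Order statistics give the expected range of n uniform draws as
        # (n - 1) / (n + 1) times the true span.
        print(f"theory: {(n_events - 1) / (n_events + 1) * true_span:.1f} years")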

    a CLARIFY trial sub-study

    Background: The difference between expert-level (L3) reader and artificial intelligence (AI) performance for quantifying coronary plaque and plaque components is unknown. Objective: This study evaluates the interobserver variability among expert readers for quantifying the volume of coronary plaque and plaque components on coronary computed tomographic angiography (CCTA), using artificial-intelligence-enabled quantitative CCTA analysis software (AI-QCT) as a reference. Methods: This study uses CCTA imaging obtained from 232 patients enrolled in the CLARIFY (CT EvaLuation by ARtificial Intelligence For Atherosclerosis, Stenosis and Vascular MorphologY) study. Readers quantified overall plaque volume and the % breakdown of noncalcified plaque (NCP) and calcified plaque (CP) on a per-vessel basis. Readers categorized high-risk plaque (HRP) based on the presence of low-attenuation noncalcified plaque (LA-NCP) and positive remodeling (PR; ≥1.10). All CCTAs were analyzed by an FDA-cleared software service that performs AI-driven plaque characterization and quantification (AI-QCT) for comparison to L3 readers. Reader-generated analyses were compared among readers and to AI-QCT-generated analyses. Results: When evaluating plaque volume on a per-vessel basis, expert readers achieved moderate to high interobserver consistency, with an intra-class correlation coefficient of 0.78 for a single reader score and 0.91 for mean scores. There was a moderate trend between readers 1, 2, and 3 and AI, with Spearman coefficients of 0.70, 0.68, and 0.74, respectively. There was high discordance between reader and AI plaque component analyses. When quantifying %NCP vs. %CP, readers 1, 2, and 3 achieved weighted kappa coefficients of 0.23, 0.34, and 0.24, respectively, compared to AI, with Spearman coefficients of 0.38, 0.51, and 0.60, respectively. The intra-class correlation coefficient among readers for plaque composition assessment was 0.68. With respect to HRP, readers 1, 2, and 3 achieved weighted kappa coefficients of 0.22, 0.26, and 0.17, respectively, and Spearman coefficients of 0.36, 0.35, and 0.44, respectively. Conclusion: Expert readers performed moderately well quantifying total plaque volumes, with high consistency. However, there was both significant interobserver variability and high discordance with AI-QCT when quantifying plaque composition.
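    For readers who want to reproduce these kinds of agreement statistics on their own data, the Python sketch below computes a one-way ICC, a linear-weighted kappa, and a Spearman correlation on invented numbers. The paper does not state which ICC form it used, so the one-way form here is an assumption, and the data are toy values.

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.metrics import cohen_kappa_score

        def icc_oneway(x):
            """One-way random-effects ICC for an (n_subjects, k_raters) array:
            ICC(1,1) for a single rating, ICC(1,k) for the mean of k ratings."""
            x = np.asarray(x, float)
            n, k = x.shape
            row_means = x.mean(axis=1)
            msb = k * np.sum((row_means - x.mean()) ** 2) / (n - 1)      # between-subject
            msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))  # within-subject
            return (msb - msw) / (msb + (k - 1) * msw), (msb - msw) / msb

        # Toy per-vessel plaque volumes (mm^3) from three hypothetical readers.
        vols = np.array([[120, 130, 118],
                         [300, 290, 310],
                         [ 55,  60,  70],
                         [210, 190, 205],
                         [  0,  10,   5]])
        print("ICC(1,1), ICC(1,k):", icc_oneway(vols))

        # Agreement between one reader and AI-QCT on a binned composition
        # call (e.g. %NCP tertile): weighted kappa plus Spearman correlation.
        reader_bins = [0, 1, 2, 2, 1, 0, 1]
        ai_bins     = [0, 1, 1, 2, 2, 0, 1]
        print("linear-weighted kappa:",
              cohen_kappa_score(reader_bins, ai_bins, weights="linear"))
        print("Spearman rho:", spearmanr(reader_bins, ai_bins).correlation)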