43 research outputs found

    Simlandscape, a design and research support system for local planning, based on the scenario method and Parcel-Based GIS

    Many authors note gaps between planning and reality (Salet, 2000; Wheeler, 2002), between modelling and reality (Parker, 2003), and between modelling and planning (Clark, 2003). The first gap refers to inadequate planning models and instruments, the second to still-inadequate simulation models, and the third to cultural and ontological differences between the two fields. There seems to be a kind of love-hate relationship: a promise of synergy, but also a considerable communication problem. Within planning there is an ongoing debate on which qualities are important, how to deal with stakeholders and how to implement plans, a debate that stretches from functionalist modernism to identity-oriented comprehensive new regionalism. Planning concepts are in essence instruments for governance and are therefore developed for, and focused on, control of and intervention in specific aspects of spatial development. Planning is rooted in creating the future, not in researching it; this focus is one of the reasons why so many regional plans fail to be implemented. Modelling, by contrast, is focused on system behaviour, on scientific future research. Through its scientific approach and still-imperfect models, however, modelling generates results that many planners do not recognize as practical from their daily perspective; they mistrust the models and find their grid-based maps crude.

    Planning and modelling are complementary and therefore in principle synergetic. Modelling could provide planning, its context and its funder, with a powerful evaluation tool. For this to happen, however, planning has to be more open to landscape as an autonomous system and must develop consistent (scenario) approaches; most current planning models are not adequate for interactive scenario development and simulation. Modelling, in turn, has not only to improve performance but also to pay more attention to practical planning issues (spatial quality and practice data) and language (cartographic products and scales). This way they could make a beautiful couple, provided they work on themselves. What is required is a kind of intermediate or integrative scenario and typology approach.

    Simlandscape is a methodological toolbox for land use planning. It covers research and development, evaluation and monitoring of panoramic land use scenarios, and was developed specifically to do the job described above. Simlandscape was the object of a recently finished R&D project. It is designed to accommodate future research and interactive scenario development (explorative interactive planning) on a local and regional scale. The toolbox is based on an ontological transformation model of how landscape changes. Key elements are that Simlandscape is parcel-based and actor- and object-oriented. Its innovative aspects follow from these key elements, namely the integration of land property and land exploitation in a landscape layer model combined with a cadastral data model, which makes the tool comprehensive with respect to research activities, plan phases, qualities and stakeholders.
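    To make the parcel-based, actor- and object-oriented model concrete, here is a minimal, purely hypothetical Python sketch of such a data structure; none of the names below come from Simlandscape itself.

        # Hypothetical sketch of a parcel-based, actor- and object-oriented
        # landscape model: parcels keyed to a cadastral data model, each with
        # an owning actor, and scenarios expressed as parcel transformations.
        from dataclasses import dataclass, field

        @dataclass
        class Actor:
            name: str        # owner or exploiter of a parcel
            strategy: str    # e.g. "intensify", "sell", "conserve"

        @dataclass
        class Parcel:
            cadastral_id: str    # key into the cadastral data model
            land_use: str        # current land-use type of the parcel
            owner: Actor
            exploitation: str    # how the owner currently exploits the land

        @dataclass
        class Scenario:
            name: str
            parcels: list[Parcel] = field(default_factory=list)

            def transform(self, cadastral_id: str, new_use: str) -> None:
                """Apply one land-use transformation, the unit of scenario change."""
                for p in self.parcels:
                    if p.cadastral_id == cadastral_id:
                        p.land_use = new_use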

    Software Citation Checklist for Developers

    This document provides a minimal, generic checklist that developers of software (either open or closed source) used in research can use to ensure they are following good practice around software citation. This will help developers get credit for the software they create, and improve transparency, reproducibility, and reuse.
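    One common way to act on such a checklist is to ship a machine-readable CITATION.cff file with the software. Below is a minimal Python sketch that writes one; the project metadata shown is hypothetical, and the checklist itself does not prescribe this particular format.

        # Emit a minimal CITATION.cff (Citation File Format 1.2.0) file.
        # Requires PyYAML. All metadata values below are placeholders.
        import yaml

        citation = {
            "cff-version": "1.2.0",
            "message": "If you use this software, please cite it as below.",
            "title": "ExampleTool",  # hypothetical project name
            "version": "1.0.0",
            "authors": [{"family-names": "Doe", "given-names": "Jane"}],
            "date-released": "2024-01-01",
        }

        with open("CITATION.cff", "w") as f:
            yaml.safe_dump(citation, f, sort_keys=False)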

    The effect of scan and patient parameters on the diagnostic performance of AI for detecting coronary stenosis on coronary CT angiography

    Publisher Copyright: © 2022 The Authors

    Objectives: To determine whether coronary computed tomography angiography (CCTA) scanning, scan preparation, contrast, and patient-based parameters influence the diagnostic performance of an artificial intelligence (AI) based analysis software for identifying coronary lesions with ≥50% stenosis.

    Background: CCTA is a noninvasive imaging modality that provides diagnostic and prognostic benefit to patients with coronary artery disease (CAD). The use of AI-enabled quantitative CCTA (AI-QCT) analysis software enhances our diagnostic and prognostic ability; however, it is currently unclear whether software performance is influenced by CCTA scanning parameters.

    Methods: CCTA and quantitative coronary angiography (QCA) data from 303 stable patients (64 ± 10 years, 71% male) from the derivation arm of the CREDENCE Trial were retrospectively analyzed using an FDA-cleared cloud-based software that performs AI-enabled coronary segmentation, lumen and vessel wall determination, plaque quantification and characterization, and stenosis determination. The algorithm's diagnostic performance measures (sensitivity, specificity, and accuracy) for detecting coronary lesions of ≥50% stenosis were determined based on concordance with QCA measurements and subsequently compared across scanning parameters (including scanner vendor, model, single vs dual source, tube voltage, dose length product, gating technique, timing method), scan preparation technique (use of beta blocker, use and dose of nitroglycerin), contrast administration parameters (contrast type, infusion rate, iodine concentration, contrast volume) and patient parameters (heart rate and BMI).

    Results: Within the patient cohort, 13% demonstrated ≥50% stenosis in 3 vessel territories, 21% in 2 vessel territories, 35% in 1 vessel territory, while 32% had <50% stenosis in all vessel territories evaluated by QCA. Average AI analysis time was 10.3 ± 2.7 min. On a per-vessel basis, there were significant differences only in sensitivity for ≥50% stenosis based on contrast type (iso-osmolar 70.0% vs non-iso-osmolar 92.1%, p = 0.0345) and iodine concentration (<350 mg/ml 70.0%, 350-369 mg/ml 90.0%, 370-400 mg/ml 90.0%, >400 mg/ml 95.2%; p = 0.0287) in the context of low injection flow rates. On a per-patient basis there were no significant differences in AI diagnostic performance measures across all measured scanner, scan technique, patient preparation, contrast, and individual patient parameters.

    Conclusion: The diagnostic performance of AI-QCT analysis software for detecting moderate to high grade stenosis is unaffected by commonly used CCTA scanning parameters and across a range of common scanning, scanner, contrast and patient variables.

    Condensed abstract: An AI-enabled quantitative CCTA (AI-QCT) analysis software has been validated as an effective tool for the identification, quantification and characterization of coronary plaque and stenosis through comparison to blinded expert readers and quantitative coronary angiography. However, it is unclear whether CCTA screening parameters related to scanner parameters, scan technique, contrast volume and rate, radiation dose, or a patient's BMI or heart rate at time of scan affect the software's diagnostic measures for detection of moderate to high grade stenosis. AI performance measures were unaffected across a broad range of commonly encountered scanner, patient preparation, scan technique, intravenous contrast and patient parameters.
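    For reference, a minimal Python sketch of the standard diagnostic performance measures named above (sensitivity, specificity, accuracy), computed from counts of AI calls against the QCA reference; the counts in the example are hypothetical, not trial data.

        def diagnostic_performance(tp: int, fp: int, tn: int, fn: int) -> dict:
            """Standard measures for a binary (>=50% stenosis) classification."""
            return {
                "sensitivity": tp / (tp + fn),  # true positive rate
                "specificity": tn / (tn + fp),  # true negative rate
                "accuracy": (tp + tn) / (tp + fp + tn + fn),
            }

        # Hypothetical example: 92 true positives, 8 false negatives,
        # 180 true negatives, 23 false positives.
        print(diagnostic_performance(tp=92, fp=23, tn=180, fn=8))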

    EurOP2E – the European Open Platform for Prescribing Education, a consensus study among clinical pharmacology and therapeutics teachers

    Purpose: Sharing and developing digital educational resources and open educational resources has been proposed as a way to harmonize and improve clinical pharmacology and therapeutics (CPT) education in European medical schools. Previous research, however, has shown that there are barriers to the adoption and implementation of open educational resources. The aim of this study was to determine perceived opportunities and barriers to the use and creation of open educational resources among European CPT teachers, and possible solutions for these barriers.

    Methods: CPT teachers of British and EU medical schools completed an online survey. Opportunities and challenges were identified by thematic analyses and subsequently discussed in an international consensus meeting.

    Results: Data from 99 CPT teachers from 95 medical schools were analysed. Thirty teachers (30.3%) shared or collaboratively produced digital educational resources. All teachers foresaw opportunities in the more active use of open educational resources, including improving the quality of their teaching. The challenges reported were language barriers, local differences, lack of time, technological issues, difficulties with quality management, and copyright restrictions. Practical solutions for these challenges were discussed and include a peer review system, clear indexing, and use of copyright licenses that permit adaptation of resources.

    Conclusion: Key challenges to making greater use of CPT open educational resources are the limited applicability of such resources due to language and local differences, and quality concerns. These challenges may be resolved by relatively simple measures, such as allowing adaptation and translation of resources and establishing a peer review system.

    Relationship of age, atherosclerosis and angiographic stenosis using artificial intelligence

    Objective: The study evaluates the relationship of coronary stenosis, atherosclerotic plaque characteristics (APCs) and age using artificial intelligence enabled quantitative coronary computed tomographic angiography (AI-QCT).

    Methods: This is a post-hoc analysis of data from 303 subjects enrolled in the CREDENCE (Computed TomogRaphic Evaluation of Atherosclerotic Determinants of Myocardial IsChEmia) trial who were referred for invasive coronary angiography and subsequently underwent coronary computed tomographic angiography (CCTA). In this study, a blinded core laboratory analysing quantitative coronary angiography images classified lesions as obstructive (≥50%) or non-obstructive (<50%), while AI software quantified APCs, including plaque volume (PV), low-density non-calcified plaque (LD-NCP), non-calcified plaque (NCP), calcified plaque (CP) and lesion length, on a per-patient and per-lesion basis from CCTA imaging. Plaque measurements were normalised for vessel volume and reported as percent atheroma volume (%PAV) for all relevant plaque components. Data were subsequently stratified by age <65 and ≥65 years.

    Results: The cohort was 64.4±10.2 years old and 29% women. Overall, patients >65 had more PV and CP than patients <65. On a lesion level, patients >65 had more CP than younger patients in both obstructive (29.2 mm³ vs 48.2 mm³; p<0.04) and non-obstructive lesions (22.1 mm³ vs 49.4 mm³; p<0.004), while younger patients had more %PAV (LD-NCP) (1.5% vs 0.7%; p<0.038). Younger patients had more PV, LD-NCP, NCP and greater lesion lengths in obstructive compared with non-obstructive lesions. There were no differences observed between lesion types in older patients.

    Conclusion: AI-QCT identifies a unique APC signature that differs by age and degree of stenosis, and provides a foundation for AI-guided, age-based approaches to atherosclerosis identification, prevention and treatment.
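    The %PAV normalisation described in the Methods reduces to a simple ratio; a minimal Python sketch follows, with hypothetical example values rather than study data.

        def percent_atheroma_volume(plaque_volume_mm3: float,
                                    vessel_volume_mm3: float) -> float:
            """Plaque volume normalised to vessel volume, in percent (%PAV)."""
            return 100.0 * plaque_volume_mm3 / vessel_volume_mm3

        # Hypothetical example: 48.2 mm^3 of calcified plaque in a vessel
        # volume of 1500 mm^3 gives a calcified %PAV of about 3.21%.
        print(round(percent_atheroma_volume(48.2, 1500.0), 2))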

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, and also measures the instrument's effective linear range. This calibration can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
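    A minimal Python sketch, under assumed data, of the microsphere-based calibration idea: a serial dilution of microspheres of known concentration yields a conversion factor from blank-corrected OD to estimated particle (cell) count. The numbers below are hypothetical and do not come from the study.

        import numpy as np

        # Hypothetical calibration data: known microsphere counts per well
        # and the blank-corrected OD600 each dilution step produced.
        particles = np.array([3.0e8, 1.5e8, 7.5e7, 3.75e7, 1.875e7])
        od600 = np.array([0.80, 0.41, 0.20, 0.10, 0.05])

        # Least-squares slope through the origin: particles per OD unit.
        particles_per_od = float(particles @ od600 / (od600 @ od600))

        def estimate_cell_count(od: float) -> float:
            """Convert a blank-corrected OD600 reading to an estimated cell count."""
            return particles_per_od * od

        print(f"{estimate_cell_count(0.25):.3g} estimated cells")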
