72 research outputs found

    Does politics impact carbon emissions?

    "Do political variables influence long-term environmental transitions? The discussion on the determinants of the environmental performance of countries has been dominated by a focus on the Environmental Kuznets curve. This concept concentrated primarily on the role of economic factors, in particular per capita income levels. By contrast, we outline both conceptually and empirically how political factors can affect long-term carbon trajectories. Our findings from an error-correction model suggest that political factors are an important explanatory variable for carbon emissions in over 100 countries during the period 1970-2004. The results show that political capacity reduces carbon emissions in OECD countries, whereas political constraints, democracy, and the Kyoto Protocol reduce long-term carbon emissions in the group of all countries as well as in non-OECD countries." [author's abstract]
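    The two-step (Engle–Granger-style) error-correction setup this kind of study relies on can be sketched on simulated data. The series, coefficients, and sample size below are purely illustrative and are not the paper's actual data or specification:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    x = np.cumsum(rng.normal(size=n))   # nonstationary driver, e.g. log income
    y = 2.0 * x + rng.normal(size=n)    # cointegrated response, e.g. log emissions

    # Step 1: estimate the long-run relation y_t = b * x_t + u_t
    b = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
    u = y - b * x                       # equilibrium error

    # Step 2: short-run dynamics with an error-correction term
    dy, dx, u_lag = np.diff(y), np.diff(x), u[:-1]
    X = np.column_stack([dx, u_lag])
    gamma, lam = np.linalg.lstsq(X, dy, rcond=None)[0]
    # lam < 0 indicates reversion toward the long-run equilibrium
    ```

    In an applied version, political variables would enter both equations as additional regressors; a significantly negative `lam` is what ties short-run changes back to the long-run trajectory.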

    Medical Student Peer Support Initiatives for Step Exams


    Micro-CT imaging of Thiel-embalmed and iodine-stained human temporal bone for 3D modeling

    Introduction: This pilot study explores whether a human Thiel-embalmed temporal bone is suitable for generating an accurate and complete data set with micro-computed tomography (micro-CT) and whether solid iodine staining improves visualization and facilitates segmentation of middle ear structures. Methods: A temporal bone was used to verify the accuracy of the imaging by first digitally measuring the stapes on the tomography images and then physically under the microscope after removal from the temporal bone. All measurements were compared with literature values. The contralateral temporal bone was used to evaluate segmentation and three-dimensional (3D) modeling after iodine staining and micro-CT scanning. Results: The digital and physical stapes measurements differed by 0.01–0.17 mm (1–19%) but correlated well with the literature values. Soft tissue structures were visible in the unstained scan; however, iodine staining increased the contrast-to-noise ratio by a factor of 3.7 on average. The 3D model depicts all ossicles and soft tissue structures in detail, including the chorda tympani, which was not visible in the unstained scan. Conclusions: Micro-CT imaging of a Thiel-embalmed temporal bone accurately represented the entire anatomy. Iodine staining considerably increased the contrast of soft tissues, simplified segmentation, and enabled detailed 3D modeling of the middle ear.
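    The contrast-to-noise comparison reported above can be illustrated with one common CNR definition (mean signal difference divided by background noise); the region-of-interest values below are synthetic, not the study's measurements:

    ```python
    import numpy as np

    def contrast_to_noise(signal_roi, background_roi):
        """CNR = |mean(signal) - mean(background)| / std(background)."""
        return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

    rng = np.random.default_rng(1)
    # Simulated voxel intensities: staining raises the mean soft-tissue signal
    unstained = contrast_to_noise(rng.normal(120, 10, 1000), rng.normal(100, 10, 1000))
    stained   = contrast_to_noise(rng.normal(200, 10, 1000), rng.normal(100, 10, 1000))
    # A larger signal-background separation at similar noise yields a higher CNR
    ```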

    Human dendritic cells process and present Listeria antigens for in vitro priming of autologous CD4+ T lymphocytes

    The role of human dendritic cells (DC) in the immune response toward intracellularly growing Listeria was analyzed under in vitro conditions using several morphological and functional methods. DC incubated with Listeria innocua and L. monocytogenes, respectively, readily phagocytosed the bacteria. Listeria did not impair the viability or immunogenic potential of human DC. Listerial antigens were found to be processed within the lysosomal compartment of DC and colocalized with major histocompatibility complex (MHC) class II molecules, as shown by fluorescence and transmission electron microscopy. DC challenged with apathogenic L. innocua were highly effective in priming autologous naïve T cells (mainly CD4+) in vitro. The T cells strongly proliferated in the presence of DC incubated with L. innocua; this proliferation could be significantly inhibited by anti-MHC class II mAb. L. innocua-primed T cells were also successfully stimulated by DC harboring the pathogenic L. monocytogenes, either the wild-type strain EGD or the p60-reduced mutant strain RIII. From our results, we conclude that human DC infected with nonpathogenic intracellular bacteria are able to efficiently prime naïve T cells, which are then suitable for recognition of antigens derived from related virulent bacterial species. This in vitro human model provides an interesting tool for basic research in infectious immunology and possibly for a new immunotherapy.

    Development of the ChatGPT, Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use (CANGARU) Guidelines

    The swift progress and ubiquitous adoption of Generative AI (GAI), Generative Pre-trained Transformers (GPTs), and large language models (LLMs) like ChatGPT have spurred questions about their ethical application, use, and disclosure in scholarly research and scientific productions. A few publishers and journals have recently created their own sets of rules; however, the absence of a unified approach may lead to a 'Babel Tower Effect,' potentially resulting in confusion rather than the desired standardization. In response to this, we present the ChatGPT, Generative Artificial Intelligence, and Natural Large Language Models for Accountable Reporting and Use Guidelines (CANGARU) initiative, with the aim of fostering a cross-disciplinary, global, inclusive consensus on the ethical use, disclosure, and proper reporting of GAI/GPT/LLM technologies in academia. The present protocol consists of four distinct parts: a) an ongoing systematic review of GAI/GPT/LLM applications to understand the linked ideas, findings, and reporting standards in scholarly research, and to formulate guidelines for their use and disclosure; b) a bibliometric analysis of existing author guidelines in journals that mention GAI/GPT/LLM, with the goal of evaluating existing guidelines, analyzing the disparity in their recommendations, and identifying common rules that can be brought into the Delphi consensus process; c) a Delphi survey to establish agreement on the items for the guidelines, ensuring principled GAI/GPT/LLM use, disclosure, and reporting in academia; and d) the subsequent development and dissemination of the finalized guidelines and their supplementary explanation and elaboration documents. Comment: 20 pages, 1 figure, protocol

    NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail

    Biologically detailed single-neuron and network models are important for understanding how ion channels, synapses, and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an open-source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML a number of models of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop a highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.
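    The standalone-XML idea behind NeuroML can be illustrated with a toy fragment read by any standard XML parser; the element and attribute names below are abridged for illustration and are not guaranteed to match the official NeuroML schema:

    ```python
    import xml.etree.ElementTree as ET

    # A simplified, NeuroML-style model fragment: a conductance and a cell that
    # references it are declared as plain, simulator-independent XML.
    doc = """
    <neuroml id="demo">
      <ionChannel id="na" type="ionChannelHH" conductance="10pS"/>
      <cell id="pyramidal">
        <channelDensity ionChannel="na" condDensity="120 mS_per_cm2"/>
      </cell>
    </neuroml>
    """

    root = ET.fromstring(doc)
    channels = [c.get("id") for c in root.iter("ionChannel")]
    print(channels)  # ['na']
    ```

    Because the description is just structured text, each simulator can translate the same file into its own internal representation, which is what enables the cross-simulator validation described above.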

    Bibliometric Analysis of Academic Journal Recommendations and Requirements for Surgical and Anesthesiologic Adverse Events Reporting

    BACKGROUND: Standards for reporting surgical adverse events (AEs) vary widely within the scientific literature. Failure to adequately capture AEs hinders efforts to measure the safety of healthcare delivery and improve the quality of care. The aim of the present study is to assess the prevalence and typology of perioperative AE reporting guidelines among surgery and anesthesiology journals. MATERIALS AND METHODS: In November 2021, three independent reviewers queried journal lists from the SCImago Journal & Country Rank (SJR) portal (www.scimagojr.com), a bibliometric indicator database for surgery and anesthesiology academic journals. Journal characteristics were summarized using SCImago, a bibliometric indicator database extracted from Scopus journal data. Quartile 1 (Q1) was considered the top quartile and Q4 the bottom quartile, based on the journal impact factor. Journal author guidelines were collected to determine whether AE reporting recommendations were included and, if so, the preferred reporting procedures. RESULTS: Of 1409 journals queried, 655 (46.5%) recommended surgical AE reporting. Journals most likely to recommend AE reporting were: by category, those in surgery (59.1%), urology (53.3%), and anesthesia (52.3%); those in the top SJR quartiles (i.e., more influential); and, by region, those based in Western Europe (49.8%), North America (49.3%), and the Middle East (48.3%). CONCLUSIONS: Surgery and anesthesiology journals do not consistently require or provide recommendations on perioperative AE reporting. Journal guidelines regarding AE reporting should be standardized and are needed to improve the quality of surgical AE reporting, with the ultimate goal of reducing patient morbidity and mortality.