309 research outputs found

    Dynamic Magnetic Resonance Imaging of Endoscopic Third Ventriculostomy Patency With Differently Acquired Fast Imaging With Steady-State Precession Sequences

    The aim of this study was to determine the capabilities of two differently acquired two-dimensional fast imaging with steady-state precession (FISP 2D) magnetic resonance sequences for assessing the patency of the third ventricle floor fenestration after endoscopic third ventriculostomy (ETV) in subjects with aqueductal stenosis/obstruction. Fifty-eight subjects (37 males, 21 females, mean age 27 years) with previously successful ETV underwent brain MRI on a 1.5T MR imager 3-6 months after the procedure. Two different FISP 2D sequences (one included in the standard vendor-provided software package, the other developed experimentally by our team) were performed at two fixed slice positions, midsagittal and perpendicular to the ETV fenestration, and displayed in a closed-loop cinematographic format to assess patency. Ventricular volume reduction was also evaluated. Cerebrospinal fluid (CSF) flow through the ETV fenestration was observed in the midsagittal plane with both FISP 2D sequences in 93.11% of subjects, while in 6.89% of subjects the dynamic CSF flow MRI was inconclusive. In the perpendicular plane, CSF flow through the ETV fenestration was visible only with the experimentally developed FISP 2D (TR30/FA70) sequence. Postoperative volume reduction of the lateral and third ventricles was detected in 67.24% of subjects. Although both FISP 2D sequences acquired in the midsagittal plane may be used to estimate the effect of ETV, only the FISP 2D (TR30/FA70) sequence, owing to its higher sensitivity to pulsatile CSF flow, enables assessment of the treatment effect in the perpendicular plane in the absence of phase-contrast sequences.

    Antioxidant and cytotoxic potential of selected plant species of the Boraginaceae family

    Antioxidant activity is one of the most important properties of plant extracts. Antioxidants from natural sources have been studied intensively in recent decades. The antioxidant content of medicinal plants may contribute to protection against disease. Bioactive components of plants have a potential role in chemoprevention and in inhibiting different phases of the malignant transformation process. Plant extracts and essential oils are therefore a focus of research and have been tested on a large number of malignant cell lines. The aim of this study was to examine the antioxidant and cytotoxic potential of selected plant species from the Boraginaceae family. Antioxidant activity was determined by the ammonium thiocyanate method. Cytotoxic activity was tested by the MTT assay on the cancer cell lines HEP 2c (human larynx carcinoma), RD (human rhabdomyosarcoma) and L2OB (mouse tumor fibroblast line). The ethanol, acetone and chloroform extracts of Anchusa officinalis, Echium vulgare and Echium italicum showed the best antioxidant activity. The tested extracts had an inhibitory effect on cancer cells, with the chloroform and acetone extracts of all three plants being most effective against L2OB cells. Isolation of the individual active components from these plants and testing them against cancer cells would be of great importance for this field of research.

    Reproducibility as a Mechanism for Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence

    In this work, we explain the setup for a technical, graduate-level course on Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches FACT-AI concepts through the lens of reproducibility. The focal point of the course is a group project based on reproducing existing FACT-AI algorithms from top AI conferences and writing a corresponding report. In the first iteration of the course, we created an open-source repository with the code implementations from the group projects. In the second iteration, we encouraged students to submit their group projects to the Machine Learning Reproducibility Challenge, resulting in 9 reports from our course being accepted for publication in the ReScience journal. We reflect on our experience teaching the course over two years, one of which coincided with a global pandemic, and propose guidelines for teaching FACT-AI through reproducibility in graduate-level AI study programs. We hope this can be a useful resource for instructors who want to set up similar courses in the future.

    A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

    Given that there are a variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we want to investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study. Comment: Accepted to the CHI 2021 Workshop on Operationalizing Human-Centered Perspectives in Explainable AI.

    CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

    Given the increasing promise of graph neural networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. Existing methods for interpreting predictions from GNNs have primarily focused on generating subgraphs that are especially relevant for a particular prediction. However, such methods are not counterfactual (CF) in nature: given a prediction, we want to understand how the prediction can be changed in order to achieve an alternative outcome. In this work, we propose a method for generating CF explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method, CF-GNNExplainer, can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing fewer than 3 edges on average, with at least 94% accuracy. This indicates that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations. Comment: Accepted to AISTATS 202
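    The core idea of the abstract above, finding a minimal set of edge deletions that flips a model's prediction for a node, can be sketched on a toy example. Everything below is illustrative: the majority-vote "model" stands in for a trained GNN, and the brute-force search stands in for the paper's actual optimization over a differentiable perturbation mask, which scales to real graphs where exhaustive search does not.

    ```python
    from itertools import combinations

    # Toy stand-in for a trained GNN: classify `node` as 1 if the
    # majority of its neighbors have feature value 1, else 0.
    def predict(node, edges, feat):
        nbrs = [v for u, v in edges if u == node] + [u for u, v in edges if v == node]
        if not nbrs:
            return 0
        ones = sum(feat[n] for n in nbrs)
        return 1 if ones * 2 > len(nbrs) else 0

    # Minimal counterfactual by brute force: the smallest set of edge
    # deletions that changes the prediction for `node`. CF-GNNExplainer
    # instead learns a sparse mask over the adjacency matrix; exhaustive
    # search is only feasible on tiny graphs like this one.
    def counterfactual(node, edges, feat):
        orig = predict(node, edges, feat)
        for k in range(1, len(edges) + 1):
            for removed in combinations(edges, k):
                kept = [e for e in edges if e not in removed]
                if predict(node, kept, feat) != orig:
                    return removed
        return None

    edges = [(0, 1), (0, 2), (0, 3)]
    feat = {0: 0, 1: 1, 2: 1, 3: 0}
    print(predict(0, edges, feat))         # → 1 (two of three neighbors have feature 1)
    print(counterfactual(0, edges, feat))  # → ((0, 1),): deleting one edge flips the prediction
    ```

    Deleting the single edge (0, 1) leaves node 0 with one feature-1 neighbor out of two, so the majority vote flips, matching the abstract's observation that minimal explanations often involve only a few crucial edges.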