11 research outputs found

    TP53 mutations in ovarian carcinomas from sporadic cases and carriers of two distinct BRCA1 founder mutations; relation to age at diagnosis and survival

    Get PDF
    BACKGROUND: Ovarian carcinomas from 30 BRCA1 germ-line carriers of two distinct highly penetrant founder mutations (20 carrying 1675delA and 10 carrying 1135insA) and from 100 sporadic cases were characterized for somatic mutations in the TP53 gene. We analyzed differences in relation to BRCA1 germline status, TP53 status, survival, and age at diagnosis, as previous studies have not been conclusive. METHODS: DNA was extracted from formalin-fixed, paraffin-embedded tissues for the familial cases and from fresh frozen specimens for the sporadic cases. All cases were treated at our hospital according to protocol. Mutation analyses of exons 2–11 were performed using TTGE, followed by sequencing. RESULTS: Survival rates for BRCA1-familial cases with TP53 mutations were not significantly lower than for familial cases without TP53 mutations (p = 0.25, RR = 1.64, 95% CI [0.71–3.78]). Median age at diagnosis for sporadic (59 years) and familial (49 years) cases differed significantly (p < 0.001), with or without TP53 mutations. Age at diagnosis did not differ significantly between the two types of familial carriers, with a median age of 47 years for 1675delA and 52.5 years for 1135insA carriers (p = 0.245). For cases ≥50 years at diagnosis, a trend toward longer survival for sporadic over familial cases was observed (p = 0.08); the opposite trend was observed for cases <50 years at diagnosis. CONCLUSION: There does not seem to be a protective advantage for familial BRCA1 carriers without TP53 mutations over familial cases with TP53 mutations. However, there seems to be a trend toward an initial survival advantage for familial cases compared to sporadic cases diagnosed before the age of 50, both with and without TP53 mutations. This trend diminishes over time, and for cases diagnosed at ≥50 years the sporadic cases show a trend toward a survival advantage over familial cases.
Although this data set is small, these findings, if confirmed, may be a link in the evidence that the reported differences in ovarian cancer survival are not due to the type of BRCA1 mutation but may be secondary to shared genetic factors. This may have clinical implications for follow-up, such as prophylactic surgery, in carriers of the two most frequent Norwegian BRCA1 founder mutations.

    Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier

    Get PDF
    This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for the early-phase development of other, similar AI tools.

    On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

    Get PDF
    Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms "trustworthy" AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use Z-Inspection®, a process for assessing trustworthy AI, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.

    How to Assess Trustworthy AI in Practice

    No full text
    This report is a methodological reflection on Z-Inspection. Z-Inspection is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the European Union High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.
