16 research outputs found

    Borrowed beauty? Understanding identity in Asian facial cosmetic surgery

    This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's Index, Web of Science, Sociological Abstracts, and Communication Abstracts using key terms "cosmetic surgery," "ethnic*," "ethics," "Asia*," and "Western*." The study included all types of papers written in English that discuss the debate on rhinoplasty and blepharoplasty in East Asians. No limit was put on date of publication. Combining both narrative and systematic review methods, a total of 31 articles were critically appraised on their contribution to ethical reflection founded on the debates regarding the surgical alteration of Asian features. Sources of knowledge were drawn from four main disciplines, including the humanities, medicine or surgery, communications, and economics. Focusing on cosmetic surgery perceived as a westernising practice, the key debate themes included authenticity of identity, interpersonal relationships and socio-economic utility in the context of Asian culture. The study shows how cosmetic surgery of ethnic features plays an important role in understanding female identity in the Asian context. Based on the debate themes of authenticity of identity, interpersonal relationships, and socio-economic utility, this article argues that identity should be understood as less individualistic and more as relational and transformational in the Asian context. In addition, this article also proposes to consider cosmetic surgery of Asian features as an interplay of cultural imperialism and cultural nationalism, which can both be a source of social pressure to modify one's appearance.

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices were defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.

    Making decisions: Bias in artificial intelligence and data-driven diagnostic tools

    BACKGROUND: Although numerous studies have shown the potential of artificial intelligence (AI) systems to drastically improve clinical practice, there are concerns that these AI systems could replicate existing biases. OBJECTIVE: This paper provides a brief overview of 'algorithmic bias', which refers to the tendency of some AI systems to perform poorly for disadvantaged or marginalised groups. DISCUSSION: AI relies on data generated, collected, recorded and labelled by humans. If AI systems remain unchecked, whatever biases exist in the real world and are embedded in data will be incorporated into the AI algorithms. Algorithmic bias can be considered an extension, if not a new manifestation, of existing social biases, understood as negative attitudes towards or the discriminatory treatment of some groups. In medicine, algorithmic bias can compromise patient safety and risks perpetuating disparities in care and outcomes. Thus, clinicians should consider the risk of bias when deploying AI-enabled tools in their practice.

    Pathologizing Ugliness: A Conceptual Analysis of the Naturalist and Normativist Claims in Aesthetic Pathology

    Pathologizing ugliness refers to the use of disease language and medical processes to foster and support the claim that undesirable features are pathological conditions requiring medical or surgical intervention. Primarily situated in cosmetic surgery, the practice appeals to the concept of aesthetic pathology, which is a medical designation for features that deviate from some designated aesthetic norms. This article offers a two-pronged conceptual analysis of aesthetic pathology. First, I argue that three sets of claims, derived from normativist and naturalist accounts of disease, inform the framing of ugliness as a disease. These claims concern: (1) aesthetic harms, (2) aesthetic dysfunction, and (3) aesthetic deviation. Second, I introduce the notion of a hybridization loop in medicine, which merges the naturalist and normativist understandings of disease in a way that potentially enables pathologizing practices. In the context of cosmetic surgery, the loop simultaneously promotes the framing of beauty ideals as normal biological attributes and the framing of normal appearance as an aesthetic ideal to legitimize the need for cosmetic interventions. The article thus offers an original discussion of the conceptual problems arising from a specific practice in cosmetic surgery that depicts ugliness as the disease.

    Medicalisation of Asian features in cosmetic surgery

    Theoretical thesis. In East Asian countries, the ever-growing popularity of facial cosmetic surgery has generated various debates on the ethical implications of the practice. Ethical discussions are zooming in on the medicalisation of race-identifying facial features, such as Asian eyelids, in what has been referred to as Asian cosmetic surgery. In this study, I first posit that medicalisation in Asian cosmetic surgery can be interpreted in two forms: treatment versus enhancement. In the treatment form, cosmetic surgery is viewed as a remedy for "pathologised" Asian features. In the enhancement form, cosmetic surgery is seen as a means of improving normal, albeit unwanted, racial features. Next, I present the findings from an empirical study that investigates medicalisation and its two forms in cosmetic surgery websites hosted in South Korea and Australia, as both countries are experiencing a growing number of aesthetic surgery clinics for Asians. Finally, I offer an ethical analysis of the consequences of medicalising racial features, drawing mainly from the findings of the empirical study. In particular, I describe how the practice influences individual autonomy and how it impacts the traditional goals of medicine.

    Regulation of AI in Health Care: A Cautionary Tale Considering Horses and Zebras

    The introduction of Artificial Intelligence (AI) into health care has been accompanied by uncertainties and regulatory challenges. The establishment of a regulatory framework around AI in health is in its infancy, and the way forward is unclear. There are those who argue that this represents a concerning regulatory gap, while others assert that existing regulatory frameworks, policies and guidelines are sufficient. We argue that perhaps the reality is somewhere in between, but that there is a need for engagement with principles and guidelines to inform future regulation. However, this cannot be done effectively until there is more clarity around the reality of AI in health and common misconceptions are addressed. This paper explores some of these misconceptions and argues for a principled approach to the regulation of AI in health.

    Public views on ethical issues in healthcare artificial intelligence: protocol for a scoping review

    Background: In recent years, innovations in artificial intelligence (AI) have led to the development of new healthcare AI (HCAI) technologies. Whilst some of these technologies show promise for improving the patient experience, ethicists have warned that AI can introduce and exacerbate harms and wrongs in healthcare. It is important that HCAI reflects the values that are important to people. However, involving patients and publics in research about AI ethics remains challenging due to relatively limited awareness of HCAI technologies. This scoping review aims to map how the existing literature on publics’ views on HCAI addresses key issues in AI ethics and governance. Methods: We developed a search query to conduct a comprehensive search of PubMed, Scopus, Web of Science, CINAHL, and Academic Search Complete from January 2010 onwards. We will include primary research studies which document publics’ or patients’ views on machine learning HCAI technologies. A coding framework has been designed and will be used to capture qualitative and quantitative data from the articles. Two reviewers will code a proportion of the included articles, and any discrepancies will be discussed amongst the team, with changes made to the coding framework accordingly. Final results will be reported quantitatively and qualitatively, examining how each AI ethics issue has been addressed by the included studies. Discussion: Consulting publics and patients about the ethics of HCAI technologies and innovations can offer important insights to those seeking to implement HCAI ethically and legitimately. This review will explore how ethical issues are addressed in literature examining publics’ and patients’ views on HCAI, with the aim of determining the extent to which publics’ views on HCAI ethics have been addressed in existing research. This has the potential to support the development of implementation processes and regulation for HCAI that incorporate publics’ values and perspectives.

    Ethical Guidance for Hard Decisions: A Critical Review of Early International COVID-19 ICU Triage Guidelines

    This article provides a critical comparative analysis of the substantive and procedural values and ethical concepts articulated in guidelines for allocating scarce resources in the COVID-19 pandemic. We identified 21 local and national guidelines written in English, Spanish, German and French; applicable to specific and identifiable jurisdictions; and providing guidance to clinicians for decision making when allocating critical care resources during the COVID-19 pandemic. US guidelines were not included, as these had recently been reviewed elsewhere. Information was extracted from each guideline on: 1) the development process; 2) the presence and nature of ethical, medical and social criteria for allocating critical care resources; and 3) the membership and decision-making procedure of any triage committees. Results of our analysis show that the majority appealed primarily to consequentialist reasoning in making allocation decisions, tempered by a largely pluralistic approach to other substantive and procedural values and ethical concepts. Medical and social criteria included medical need, co-morbidities, prognosis, age, disability and other factors, with a focus on seemingly objective medical criteria. There was little or no guidance on how to reconcile competing criteria, and little attention to internal contradictions within individual guidelines. Our analysis reveals the challenges in developing sound ethical guidance for allocating scarce medical resources, highlighting problems in operationalising ethical concepts and principles, divergence between guidelines, unresolved contradictions within the same guideline, and use of naïve objectivism in employing widely used medical criteria for allocating ICU resources.