22 research outputs found
Borrowed beauty? Understanding identity in Asian facial cosmetic surgery
This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's Index, Web of Science, Sociological Abstracts, and Communication Abstracts using the key terms "cosmetic surgery," "ethnic*," "ethics," "Asia*," and "Western*." The study included all types of papers written in English that discuss the debate on rhinoplasty and blepharoplasty in East Asians. No limit was placed on date of publication. Combining both narrative and systematic review methods, a total of 31 articles were critically appraised on their contribution to ethical reflection founded on the debates regarding the surgical alteration of Asian features. Sources of knowledge were drawn from four main disciplines: the humanities, medicine or surgery, communications, and economics. Focusing on cosmetic surgery perceived as a westernising practice, the key debate themes included authenticity of identity, interpersonal relationships, and socio-economic utility in the context of Asian culture. The study shows how cosmetic surgery of ethnic features plays an important role in understanding female identity in the Asian context. Based on the debate themes of authenticity of identity, interpersonal relationships, and socio-economic utility, this article argues that identity should be understood as less individualistic and more as relational and transformational in the Asian context. In addition, this article proposes to consider cosmetic surgery of Asian features as an interplay of cultural imperialism and cultural nationalism, which can both be a source of social pressure to modify one's appearance.
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.
Making decisions: Bias in artificial intelligence and data‑driven diagnostic tools
BACKGROUND: Although numerous studies have shown the potential of artificial intelligence (AI) systems to drastically improve clinical practice, there are concerns that these AI systems could replicate existing biases. OBJECTIVE: This paper provides a brief overview of 'algorithmic bias', which refers to the tendency of some AI systems to perform poorly for disadvantaged or marginalised groups. DISCUSSION: AI relies on data generated, collected, recorded and labelled by humans. If AI systems remain unchecked, whatever biases exist in the real world and are embedded in data will be incorporated into the AI algorithms. Algorithmic bias can be considered an extension, if not a new manifestation, of existing social biases, understood as negative attitudes towards or discriminatory treatment of some groups. In medicine, algorithmic bias can compromise patient safety and risks perpetuating disparities in care and outcomes. Thus, clinicians should consider the risk of bias when deploying AI-enabled tools in their practice.
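The notion of algorithmic bias described in this abstract, an AI system performing worse for some groups than others, can be illustrated with a minimal subgroup-performance check. The sketch below uses entirely hypothetical labels, predictions, and group identifiers; it is not code from the paper, only one simple way to surface a performance gap across subgroups.

```python
def subgroup_accuracy(y_true, y_pred, groups):
    """Compute per-subgroup accuracy to surface performance disparities.

    y_true, y_pred: parallel lists of ground-truth and predicted labels.
    groups: parallel list of subgroup identifiers (hypothetical here).
    """
    results = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        results[g] = correct / len(idx)
    return results

# Hypothetical outputs of a diagnostic model on two demographic groups
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(subgroup_accuracy(y_true, y_pred, groups))
# Group A reaches 0.75 accuracy while group B reaches 0.5 on this toy
# data -- exactly the kind of gap the abstract urges clinicians to check.
```

Overall accuracy alone (here 0.625) would hide this disparity, which is why disaggregated, per-group evaluation is the usual first step in auditing a clinical model for algorithmic bias.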
Pathologizing Ugliness: A Conceptual Analysis of the Naturalist and Normativist Claims in Aesthetic Pathology
Pathologizing ugliness refers to the use of disease language and medical processes to foster and support the claim that undesirable features are pathological conditions requiring medical or surgical intervention. Primarily situated in cosmetic surgery, the practice appeals to the concept of aesthetic pathology, which is a medical designation for features that deviate from some designated aesthetic norms. This article offers a two-pronged conceptual analysis of aesthetic pathology. First, I argue that three sets of claims, derived from normativist and naturalist accounts of disease, inform the framing of ugliness as a disease. These claims concern: (1) aesthetic harms, (2) aesthetic dysfunction, and (3) aesthetic deviation. Second, I introduce the notion of a hybridization loop in medicine, which merges the naturalist and normativist understandings of disease and potentially enables pathologizing practices. In the context of cosmetic surgery, the loop simultaneously promotes the framing of beauty ideals as normal biological attributes and the framing of normal appearance as an aesthetic ideal to legitimize the need for cosmetic interventions. The article thus offers an original discussion of the conceptual problems arising from a specific practice in cosmetic surgery that depicts ugliness as disease.
Medicalisation of Asian features in cosmetic surgery
Theoretical thesis. Spine title: Medicalisation of Asian features in cosmetic surgery. Bibliography: pages 80-83. Introduction -- Chapter 1. Conceptual analysis of medicalisation -- Chapter 2. Empirical investigation of medicalisation -- Chapter 3. Ethical analysis of medicalised Asian features -- Conclusion. In East Asian countries, the ever-growing popularity of facial cosmetic surgery has generated various debates on the ethical implications of the practice. Ethical discussions are zooming in on the medicalisation of race-identifying facial features, such as Asian eyelids, in what has been referred to as Asian cosmetic surgery. In this study, I first posit that medicalisation in Asian cosmetic surgery can be interpreted in two forms: treatment versus enhancement. In the treatment form, cosmetic surgery is viewed as a remedy for "pathologised" Asian features. In the enhancement form, cosmetic surgery is seen as a means of improving normal, albeit unwanted, racial features. Next, I present the findings from an empirical study that investigates medicalisation and its two forms in cosmetic surgery websites hosted in South Korea and Australia, as both countries are experiencing a growing number of aesthetic surgery clinics for Asians. Finally, I offer an ethical analysis of the consequences of medicalising racial features, mainly drawing from the findings of the empirical study. In particular, I describe how the practice influences individual autonomy and how it impacts the traditional goals of medicine. Mode of access: World Wide Web. 1 online resource (ii, 83 pages), black & white illustrations.
Regulation of AI in Health Care: A Cautionary Tale Considering Horses and Zebras
The introduction of artificial intelligence (AI) into health care has been accompanied by uncertainties and regulatory challenges. The establishment of a regulatory framework around AI in health is in its infancy, and the way forward is unclear. Some argue that this represents a concerning regulatory gap, while others assert that existing regulatory frameworks, policies and guidelines are sufficient. We argue that the reality is perhaps somewhere in between, but that there is a need for engagement with principles and guidelines to inform future regulation. However, this cannot be done effectively until there is more clarity around the reality of AI in health and common misconceptions are addressed. This paper explores some of these misconceptions and argues for a principled approach to the regulation of AI in health.