16 research outputs found
Machine Learning and the End of Theory: Reflections on a Data-Driven Conception of Health
Taking the notion of health as a leitmotif, this paper discusses some conceptual boundaries for using machine learning - a data-driven, statistical, and computational technique in the field of artificial intelligence - for epistemic purposes and for generating knowledge about the world based solely on the statistical correlations found in data (i.e., the "End of Theory" view). The thrust of the argument is that prior theoretical conceptions, subjectivity, and values would, because of their normative power, inevitably blight any effort at knowledge-making that seeks to be driven exclusively by data and nothing else. The conclusion suggests that machine learning will neither resolve nor mitigate the serious internal contradictions found in the "biostatistical theory" of health, the most widely discussed data-driven theory of health. Defining notions such as these is an ongoing and fraught societal dialogue concerned not only with what is, but also with what should be. This dialogical engagement is a question of ethics and politics, not one of mathematics.
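To make the statistical core of the biostatistical theory concrete, the following sketch (ours, not the paper's) classifies a function value as "normal" if it lies at or above a chosen lower percentile of a chosen reference class. The reference classes, cutoff, and patient values are all hypothetical; the point is that both parameters are supplied by the analyst rather than by the data, which is where the paper locates the theory's normative residue.

```python
import numpy as np

def normal_function(values, reference_class, cutoff_percentile=5.0):
    """Classify each measured function value as 'normal' if it lies at or
    above a lower percentile of the reference-class distribution.

    Both `reference_class` (who counts as the comparison group) and
    `cutoff_percentile` (how much of the low tail counts as dysfunction)
    are choices made by the analyst, not findings in the data.
    """
    threshold = np.percentile(reference_class, cutoff_percentile)
    return values >= threshold

# Hypothetical lung-capacity readings judged against two reference classes.
rng = np.random.default_rng(0)
young_adults = rng.normal(loc=5.0, scale=0.6, size=10_000)
older_adults = rng.normal(loc=3.8, scale=0.6, size=10_000)

patients = np.array([4.2, 4.5, 3.5])
print(normal_function(patients, young_adults))  # with this seed, roughly [ True  True False]
print(normal_function(patients, older_adults))  # with this seed, roughly [ True  True  True]
```

Changing either the reference class or the percentile changes who counts as healthy, which is the sense in which the classification is never purely data-driven.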
"Big Data" i els nous mètodes de visualització de la informació
In the current context we must come up with new ways of organising and displaying information for users, whether they be scientists working with the enormous output of a particle accelerator or ordinary people looking up their local climate data obtained from numerous environmental sensors.
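A minimal sketch (not from the article) of one such organising step: aggregating a high-frequency stream of sensor readings down to a resolution a person or a chart can actually inspect. The sensor, frequency, and signal shape are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical: one year of minute-level temperature readings from a city
# sensor network -- far too many points to present to a user directly.
index = pd.date_range("2023-01-01", periods=365 * 24 * 60, freq="min")
raw = pd.Series(
    15 + 10 * np.sin(2 * np.pi * index.dayofyear / 365)
    + np.random.default_rng(1).normal(0, 2, len(index)),
    index=index,
    name="temperature_c",
)

# Downsample to daily summaries: a form the eye (or a line chart) can handle.
daily = raw.resample("D").agg(["min", "mean", "max"])
print(daily.head())
```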
The Chimera of Algorithmic Objectivity: Difficulties of Machine Learning in the Development of a Non-Normative Notion of Health
This essay explores whether machine learning, a sub-discipline of artificial intelligence, can contribute to a more objective approach to the development and formulation of concepts and descriptions, taking the definition of health as its example. It examines the naturalist theory of health proposed by Christopher Boorse and contrasts it with a series of possibilities and problems that may arise when machine learning is applied to the difficulties this theory encounters. Based on the analysis, the paper concludes that machine learning, whether supervised or unsupervised, carries elements of normativity and subjectivity that make it unfeasible to develop concepts and descriptions in the neutral and objective manner the theory requires. This does not invalidate machine learning for the evaluative analysis of health; rather, it highlights and makes explicit the subjective elements present in it.
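A minimal sketch (ours, not the paper's) of the supervised-learning half of that argument: a classifier can only reproduce whatever notion of health is encoded in its training labels, so shifting the hypothetical diagnostic cutoff that generated the labels shifts the "concept" the model learns. The biomarker and cutoffs below are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
biomarker = rng.normal(100, 15, size=(2000, 1))  # hypothetical measurement

def train_under_cutoff(cutoff):
    """Label 'healthy' relative to a value-laden diagnostic cutoff,
    then fit a classifier to those labels."""
    labels = (biomarker[:, 0] >= cutoff).astype(int)
    return LogisticRegression(max_iter=1000).fit(biomarker, labels)

patient = np.array([[92.0]])
lenient = train_under_cutoff(cutoff=85)  # normative choice no. 1
strict = train_under_cutoff(cutoff=95)   # normative choice no. 2
print(lenient.predict(patient))  # likely [1]: counted as healthy
print(strict.predict(patient))   # likely [0]: counted as unhealthy
```

Nothing in the optimisation objects to either cutoff; the normativity sits in the labels before any learning begins.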
Bias in algorithms of AI systems developed for COVID-19: A scoping review
To analyze which ethically relevant biases have been identified in the academic literature in artificial intelligence (AI) algorithms developed either for patient risk prediction and triage or for contact tracing during the COVID-19 pandemic, and specifically to investigate whether the role of social determinants of health (SDOH) has been considered in these AI developments. We conducted a scoping review of the literature covering publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and for medical triage or risk prediction regarding COVID-19 were included. From 1054 identified articles, 20 studies were finally included. We propose a typology of the biases identified in the literature, organised around bias, limitations, and other ethical issues in both areas of analysis. Results on health disparities and SDOH were classified into five categories: racial disparities, biased data, socio-economic disparities, unequal accessibility and workforce, and information communication. SDOH need to be considered in the clinical context, where they still seem underestimated. Epidemiological conditions depend on geographic location, so using local data in studies intended to produce international solutions may introduce bias. Gender bias was not specifically addressed in the included articles. The main biases relate to data collection and management. Ethical problems concerning privacy, consent, and lack of regulation were identified in contact tracing, and some bias-related health inequalities were highlighted. Further research focusing on SDOH and these specific AI applications is needed.
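As an illustration of the data-driven disparities the review flags (our sketch, with invented predictions, not a tool from any included study): a triage model trained on data where one group is under-represented can show sharply different error rates across groups, and a per-group audit makes that visible.

```python
import numpy as np

def recall_by_group(y_true, y_pred, groups):
    """Recall (sensitivity) computed separately per demographic group --
    a minimal audit for the kind of disparity the review describes."""
    out = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 1)
        out[g] = float(y_pred[mask].mean()) if mask.any() else float("nan")
    return out

# Invented predictions for a hypothetical COVID-19 risk-triage model.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1])
groups = np.array(["A"] * 6 + ["B"] * 6)

print(recall_by_group(y_true, y_pred, groups))
# {'A': 0.75, 'B': 0.25}: group B's high-risk patients are missed far more often.
```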
Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic
The main aim of this article is to reflect on the impact of biases in artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification of some terms related to biases in this particular context. We focus mainly on non-racial biases, which may receive less attention when biases in AI systems are addressed in the existing literature. We find that bias in AI systems used for COVID-19 can result in algorithmic injustice, and that the legal frameworks and strategies developed to prevent the emergence of bias have failed to adequately consider social determinants of health. Finally, we make recommendations on how to include more diverse professional profiles in order to develop AI systems with the epistemic diversity needed to tackle AI biases during the COVID-19 pandemic and beyond.
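One family of mitigation strategies of the kind the article reviews can be sketched as post-hoc, per-group score thresholding (our illustrative choice, not a tool endorsed by the article): instead of one global cutoff, each group's decision threshold is chosen so that recall is roughly equalised. All scores and groups below are invented.

```python
import numpy as np

def equalise_recall_thresholds(scores, y_true, groups, target_recall=0.8):
    """For each group, choose the score threshold that recovers at least
    `target_recall` of that group's true positives.

    A deliberately simple post-processing mitigation; a real assessment
    would also track precision, calibration, and base rates.
    """
    thresholds = {}
    for g in np.unique(groups):
        pos = scores[(groups == g) & (y_true == 1)]
        # Flagging everyone at or above the k-th highest positive score
        # recovers k / len(pos) of this group's positives.
        k = int(np.ceil(target_recall * len(pos)))
        thresholds[g] = float(np.sort(pos)[::-1][k - 1])
    return thresholds

# Invented risk scores for two groups with different score distributions.
rng = np.random.default_rng(7)
groups = np.array(["A"] * 100 + ["B"] * 100)
y_true = rng.integers(0, 2, 200)
scores = rng.uniform(0, 1, 200) + 0.2 * (groups == "A") * y_true

print(equalise_recall_thresholds(scores, y_true, groups))
```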