39,219 research outputs found

    How can humans leverage machine learning? From Medical Data Wrangling to Learning to Defer to Multiple Experts

    Mención Internacional en el título de doctor (International Doctorate mention). The irruption of the smartphone into everyone's life, and the ease with which we now digitise or record any data, has led to an explosion in the quantity of data. Smartphones, equipped with advanced cameras and sensors, have empowered individuals to capture moments and contribute to the growing pool of data. This data-rich landscape holds great promise for research, decision-making, and personalized applications. By carefully analyzing and interpreting this wealth of information, valuable insights, patterns, and trends can be uncovered. However, big data is worthless in a vacuum. Its potential value is unlocked only when it is leveraged to drive decision-making. In recent times we have witnessed the outburst of artificial intelligence: the development of computer systems and algorithms capable of perceiving, reasoning, learning, and problem-solving, emulating certain aspects of human cognitive abilities. Nevertheless, our focus tends to be limited, merely skimming the surface of the problem, while in reality the application of machine learning models to data is usually fraught with difficulties. More specifically, there are two crucial pitfalls frequently neglected in the field of machine learning: the quality of the data and the erroneous assumption that machine learning models operate autonomously. These two issues form the motivation driving this thesis, which strives to offer solutions to two major associated challenges: 1) dealing with irregular observations and 2) learning when, and whom, we should trust. The first challenge originates from our observation that the majority of machine learning research concentrates on handling regular observations, neglecting a crucial technological obstacle encountered in practical big-data scenarios: the aggregation and curation of heterogeneous streams of information. Before applying machine learning algorithms, it is crucial to establish robust techniques for handling big data, as this specific aspect presents a notable bottleneck in the creation of robust algorithms. Data wrangling, which encompasses the extraction, integration, and cleaning processes necessary for data analysis, plays a crucial role in this regard. Therefore, the first objective of this thesis is to tackle the frequently disregarded challenge of addressing irregularities within the context of medical data. We will focus on three specific aspects. Firstly, we will tackle the issue of missing data by developing a framework that facilitates the imputation of missing data points using relevant information derived from alternative data sources or past observations. Secondly, we will move beyond the assumption of homogeneous observations, where only one statistical data type (such as Gaussian) is considered, and instead work with heterogeneous observations. This means that different data sources can be represented by various statistical likelihoods, such as Gaussian, Bernoulli, categorical, etc. Lastly, considering the temporal enrichment of today's collected data and our focus on medical data, we will develop a novel algorithm capable of capturing and propagating correlations among different data streams over time. All three problems are addressed in our first contribution, which involves the development of a novel method based on Deep Generative Models (DGMs) using Variational Autoencoders (VAEs).
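To make the heterogeneous-likelihood idea concrete, the following is a minimal sketch (not the Shi-VAE itself) of a decoder that attaches one likelihood head per data stream (Gaussian, Bernoulli, categorical) and masks missing entries out of the log-likelihood; all class, variable, and dimension names are illustrative assumptions.

# A minimal sketch (not the Shi-VAE itself): a VAE-style decoder with one
# likelihood head per heterogeneous data stream and a mask that removes
# missing entries from the log-likelihood. All names are illustrative.
import torch
import torch.nn as nn
import torch.distributions as D

class HeterogeneousDecoder(nn.Module):
    def __init__(self, latent_dim, n_categories):
        super().__init__()
        self.gauss_head = nn.Linear(latent_dim, 2)            # mean and log-variance of a real-valued stream
        self.bern_head = nn.Linear(latent_dim, 1)             # logit of a binary stream
        self.cat_head = nn.Linear(latent_dim, n_categories)   # logits of a categorical stream

    def log_likelihood(self, z, x_real, x_bin, x_cat, mask):
        # mask[:, k] is 1 when stream k is observed for that sample and 0 when missing
        mu, log_var = self.gauss_head(z).chunk(2, dim=-1)
        ll_real = D.Normal(mu.squeeze(-1), log_var.squeeze(-1).exp().sqrt()).log_prob(x_real)
        ll_bin = D.Bernoulli(logits=self.bern_head(z).squeeze(-1)).log_prob(x_bin)
        ll_cat = D.Categorical(logits=self.cat_head(z)).log_prob(x_cat)
        ll = torch.stack([ll_real, ll_bin, ll_cat], dim=-1)
        return (ll * mask).sum()                               # missing entries contribute nothing

# Toy usage: 4 samples, a 3-dimensional latent code, and a 5-way categorical stream.
decoder = HeterogeneousDecoder(latent_dim=3, n_categories=5)
z = torch.randn(4, 3)
x_real = torch.randn(4)
x_bin = torch.randint(0, 2, (4,)).float()
x_cat = torch.randint(0, 5, (4,))
mask = torch.tensor([[1., 1., 1.], [1., 0., 1.], [0., 1., 1.], [1., 1., 0.]])
print(decoder.log_likelihood(z, x_real, x_bin, x_cat, mask))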
The proposed model, the Sequential Heterogeneous Incomplete VAE (Shi-VAE), enables the aggregation of multiple heterogeneous data streams in a modular manner, taking into consideration the presence of potential missing data. To demonstrate the feasibility of our approach, we present proof-of-concept results obtained from a real database generated through continuous passive monitoring of psychiatric patients. Our second challenge relates to the misbelief that machine learning algorithms can perform independently. However, the notion that AI systems alone can account for automated decision-making, especially in critical domains such as healthcare, is far from reality. Our focus now shifts towards a specific scenario where the algorithm has the ability to make predictions independently or, alternatively, defer the responsibility to a human expert. The purpose of including the human is not just to obtain better performance, but also to obtain more reliable and trustworthy predictions. In reality, however, important decisions are not made by one person but are usually made collectively by an ensemble of human experts. With this in mind, two important questions arise: 1) when should the human or the machine bear responsibility, and 2) among the experts, whom should we trust? To answer the first question, we employ a recent theory known as Learning to Defer (L2D). In L2D we are not only interested in abstaining from prediction but also in understanding the human's confidence in making such a prediction, thus deferring only when the human is more likely to be correct. The second question, about whom to defer to among a pool of experts, has not yet been answered in the L2D literature, and this is what our contributions aim to provide. First, we extend the two consistent surrogate losses proposed so far in the L2D literature to the multiple-expert setting. Second, we study each framework's ability to estimate the probability that a given expert predicts correctly and assess whether the two surrogate losses are confidence calibrated. Finally, we propose a conformal inference technique that chooses a subset of experts to query when the system defers. Ensembling experts based on confidence levels is vital to optimize human-machine collaboration. In conclusion, this doctoral thesis has investigated two cases where humans can leverage the power of machine learning: first, as a tool to assist in data wrangling and data understanding problems and, second, as a collaborative tool where decision-making can be automated by the machine or delegated to human experts, fostering more transparent and trustworthy solutions.
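As an illustration of the deferral step described above, the sketch below encodes a simple multi-expert decision rule: defer whenever some expert is estimated to be more likely correct than the model, and query only the experts whose estimated correctness is close to the best one. It is a toy stand-in, under assumed inputs, for the surrogate-loss training and conformal expert-selection procedure of the thesis; all names and the threshold rule are hypothetical.

# Illustrative stand-in for the multi-expert deferral step, not the thesis's
# surrogate-loss training or its conformal procedure. It assumes we are given
# the classifier's confidence and, for each expert, an estimate of the
# probability that the expert answers correctly.
import numpy as np

def defer_decision(model_confidence, expert_correctness, threshold=0.8):
    """Return (defer?, indices of the experts worth querying)."""
    expert_correctness = np.asarray(expert_correctness, dtype=float)
    best = expert_correctness.max()
    defer = bool(best > model_confidence)          # defer only if some expert is more likely correct
    if not defer:
        return False, np.array([], dtype=int)
    # Query the subset of experts whose estimated correctness is close to the best one.
    return True, np.flatnonzero(expert_correctness >= threshold * best)

# Toy example: the model is 70% confident; three experts are estimated to be
# correct with probability 0.65, 0.90, and 0.85 -> defer and query experts 1 and 2.
print(defer_decision(0.70, [0.65, 0.90, 0.85]))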
Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Thesis committee: President: Joaquín Míguez Arenas; Secretary: Juan José Murillo Fuentes; Member: Mélanie Natividad Fernández Pradie

    AMERICAN MUSLIM UNDERGRADUATES’ VIEWS ON EVOLUTION

    Thesis (Ph.D.) - Indiana University, Education, 2016. A qualitative investigation into American Muslim undergraduates' views on evolution revealed three main positions on evolution: theistic evolution, a belief in special creation of all species, and a belief in special creation of humans with evolution for all non-human species. The manner in which respondents chose their respective positions on evolution can be seen as a means of reconciling their religious beliefs with scientific evidence in support of current evolutionary theory. Of 19 theistic evolutionists, 18 affirmed that revelation is a source of knowledge. 74% were convinced by the scientific evidence that evolution happens and did not see evidence in the Quran that contradicts this. 37% stated that it is consistent with God's attributes that He would have created organisms to evolve. The importance of seeking knowledge in Islam was mentioned by 21%. All 19 participants with a belief in special creation of humans affirmed the idea that revelation is a source of knowledge and considered scientific evidence a source of knowledge as well. Their positions on evolution can be seen as a means of reconciling their religious beliefs with scientific evidence. They found scientific evidence convincing for all non-human species. They thought that humans could not have evolved because the creation of humans is treated with more detail in the Quran than is the creation of other species. Most accepted microevolution, but not macroevolution, for humans. Those with a belief in the special creation of all species found the evidence in the Quran and hadith more convincing than scientific evidence. They interpreted the Quran and hadith as indicating special creation of all species. They accommodated scientific evidence by accepting microevolution for all species. Because most respondents accepted microevolution for all species, teaching microevolution before macroevolution might be beneficial for Muslim students. Teachers helped some students navigate the relationship between science and religion to allow them to accept evolution without negating their religious beliefs. Providing role models who reconcile science and religion, Muslim evolutionary biologists, and examples of Muslim scientists from history can help accommodate acceptance of evolution by Muslims.

    Irrationality and human reasoning

    In his account of intentional interpretation, Donald Davidson assumes that people are mostly rational. Several psychological experiments, though, reveal that human beings deviate drastically from the normative standards of rationality. Therefore, some psychologists arrive at the conclusion that humans are mostly irrational. In this thesis, I raise objections to both points of view. On the one hand, ascribing rationality to humans in an a priori manner seems a suspect position to adopt, considering the empirical data that show otherwise. On the other hand, the validity of the experiments, and what exactly they test, can also be put in question, since the position that humans are in general irrational is also intuitively unacceptable. I suggest that the discrepancy is due to the notion of rationality we adopt, which I bring into question. I do not find convincing reasons that humans should be thought of a priori as rational, nor do I see why humans should be called irrational just because they fail certain tests. Many of the alleged irrationalities in the tests can be explained if we adopt different styles of reasoning than the traditional ones. Hence, humans can count as rational in another way. But is this what Davidson means by rational, or does he think of rationality in the traditional sense? I argue that the type of rationality Davidson endorses relies on Classical Logic conditions, which makes it inflexible. A type of rationality that relies on Fuzzy Logic conditions, as I claim, is more appropriate for describing human rationality.

    Skeptical Theism

    Skeptical theism is a family of responses to the evidential problem of evil. What unifies this family is two general claims. First, that even if God were to exist, we shouldn't expect to see God's reasons for permitting the suffering we observe. Second, that the previous claim entails the failure of a variety of arguments from evil against the existence of God. In this essay, we identify three particular articulations of skeptical theism (three different ways of "filling in" those two claims) and describe their role in responding to evidential arguments from evil due to William Rowe and Paul Draper. But skeptical theism has been subject to a variety of criticisms, several of which raise interesting issues and puzzles not just in the philosophy of religion but in other areas of philosophy as well. Consequently, we discuss some of these criticisms, partly with an eye to bringing out the connections between skeptical theism and current topics in mainstream philosophy. Finally, we conclude by situating skeptical theism within our own distinctive methodology for evaluating worldviews, what we call "worldview theory versioning."

    Some Small Discrepancy: Jean-Christophe Bailly's Creaturely Ontology

    From Journal of Animal Ethics. Copyright 2013 by the Board of Trustees of the University of Illinois. Used with permission of the University of Illinois Press. This material cannot be reprinted, photocopied, posted online or distributed in any way without the written permission of the copyright holder. This extended review essay on Bailly's first major work in English translation, The Animal Side, situates Bailly in the continuum of Continental philosophy on the topics of animality and animal ontology, from Rilke to Heidegger, Derrida and Deleuze. Exploring Bailly's linking of thought and vision and his insistence on the pivotal role of animals in the emergence of European art and image-making, the essay argues that the political dimension, implicit in Bailly's text, nevertheless remains underdeveloped. This points to a broader concern within Continental theory: the need to connect new human and animal ontologies with ethical and political normative models for the effective articulation of post-anthropocentric collectivities.

    After Humanity: Science Fiction after Extinction in Kurt Vonnegut and Clifford D. Simak

    This article takes up the question of whether and to what extent humanistic values can survive confrontation with the deep time of the Anthropocene, specifically with the inevitability of human extinction. In particular, I focus on representations of human extinction and the emergence of sapient successor species in H.G. Wells's The Time Machine (1895), Kurt Vonnegut's Galápagos (1985), and Clifford D. Simak's City (1952), identifying in the latter two submerged humanisms that belie the surface anti-humanism and cosmic pessimism of the novels.